\section{Introduction}\label{sec:intro}
A fundamental problem in representation theory is to compute (and to classify in some comprehensible way) the irreducible unitary representations of a real reductive group. There are three well-known procedures for building new representations from old: parabolic induction, cohomological induction, and complementary series. Empirical evidence suggests that every irreducible unitary representation can be formed through these procedures from a small set of building blocks. These building blocks are the unipotent representations.
A satisfying theory of unipotent representations would include a precise definition, a uniform construction, a good character theory, and a reasonable parameterization. No such theory exists. Vogan, Barbasch, and others have made important strides towards these goals in a number of special cases (\cite{BarbaschVogan1985} is a highlight), but the general theory remains more or less a mystery.
Let $G_{\mathbb{R}}$ be a real reductive group in the sense of \cite{Vogan1991}, Def 6.1. Choose a maximal compact subgroup $K_{\mathbb{R}} \subset G_{\mathbb{R}}$, the fixed points of a Lie group involution $\theta: G_{\mathbb{R}} \to G_{\mathbb{R}}$. Denote the Lie algebras of $G_{\mathbb{R}}$ and $K_{\mathbb{R}}$ by $\mathfrak{g}_{\mathbb{R}}$ and $\mathfrak{k}_{\mathbb{R}}$, respectively. Form the complexifications $K, \mathfrak{k}$, and $\mathfrak{g}$ of $K_{\mathbb{R}},\mathfrak{k}_{\mathbb{R}}$, and $\mathfrak{g}_{\mathbb{R}}$. The complexification of $d\theta$ is a Lie algebra involution of $\mathfrak{g}$, which defines a decomposition
$$\mathfrak{g}= \mathfrak{k} \oplus \mathfrak{p}$$
into $+1$ and $-1$ eigenspaces.
A unitary representation of $G_{\mathbb{R}}$ is a Hilbert space $X^{(2)}$ with a continuous action of $G_{\mathbb{R}}$ by unitary operators. It is, by definition, an analytic creature. Thankfully, $X^{(2)}$ has an algebraic model, its Harish-Chandra module $X$, which captures most of its essential features. The construction of $X$ and its precise relation to $X^{(2)}$ are interesting and important matters, but beyond the scope of this paper. The representation $X^{(2)}$ will play no role in our analysis apart from context and motivation.
$X$ has a rich algebraic structure. It is, first and foremost, a representation of $\mathfrak{g}$. It also comes equipped with an algebraic action of $K$. These operations are compatible in two different ways. If $X$ is the Harish-Chandra module of a unipotent representation, it should meet several basic requirements. In Section \ref{sec:unipotent}, we package these requirements into a working definition. These requirements narrow down the space of unipotent Harish-Chandra modules, but they do not constitute a good definition. Our goal is to find other, more natural properties related to these requirements, in the hope of arriving at a better definition.
In \cite{Vogan1991}, Vogan offers a blueprint. He defines a closed, $K$-invariant subset $\operatorname{AV}(X) \subset (\mathfrak{g}/\mathfrak{k})^*$ called the associated variety of $X$. This variety measures the `size' of the Harish-Chandra module. He also defines some equivariant vector bundles on the open $K$-orbits in $\operatorname{AV}(X)$. If $X$ has the characteristics of a unipotent Harish-Chandra module (in the sense of Section \ref{sec:unipotent}), Vogan proves that these vector bundles have a very special form. They are almost (but not quite) local systems. These vector bundles contain so much information that they almost determine $X$. Vogan ends his paper with a conjecture. Roughly speaking, he conjectures that they \emph{do} determine $X$ when $X$ is unipotent. A little more precisely,
\begin{conjecture}[Vogan]
Suppose $X$ is a unipotent Harish-Chandra module (in the sense of Section \ref{sec:unipotent}). Assume $\operatorname{AV}(X)$ contains a single open $K$-orbit $\mathcal{O} \subset \operatorname{AV}(X)$ and
$$\operatorname{codim}(\operatorname{AV}(X) \setminus \mathcal{O},\operatorname{AV}(X)) \geq 2$$
Let $\mathcal{E} \to \mathcal{O}$ be the $K$-equivariant vector bundle alluded to above. Then there is an isomorphism
$$X \cong_K \Gamma(\mathcal{O},\mathcal{E})$$
of representations of $K$.
\end{conjecture}
We will prove Vogan's conjecture under two additional assumptions: $G_{\mathbb{R}}$ is complex and $\operatorname{codim}(\operatorname{AV}(X) \setminus \mathcal{O},\operatorname{AV}(X)) \geq 4$. We will actually prove a slightly stronger assertion:
\begin{theorem}\label{thm:maintheorem1}
Suppose $X$ is a unipotent Harish-Chandra module and $G_{\mathbb{R}}$ is complex. Assume $\operatorname{AV}(X)$ contains a single open $K$-orbit $\mathcal{O} \subset \operatorname{AV}(X)$ and
$$\operatorname{codim}(\operatorname{AV}(X) \setminus \mathcal{O},\operatorname{AV}(X)) \geq 4$$
Let $\mathcal{E} \to \mathcal{O}$ be the $K$-equivariant vector bundle alluded to above. Then there is an equality
$$[\operatorname{gr} X] = [\Gamma(\mathcal{O},\mathcal{E})]$$
in the Grothendieck group $K_0\operatorname{Coh}^K(\mathfrak{g}/\mathfrak{k})^*$ of $K$-equivariant coherent sheaves on $(\mathfrak{g}/\mathfrak{k})^*$. In particular, there is an isomorphism
$$X \cong_K \Gamma(\mathcal{O},\mathcal{E})$$
of representations of $K$.
\end{theorem}
The main ingredient in our proof of Theorem \ref{thm:maintheorem1} is a microlocalization functor for Harish-Chandra modules. The inspiration for this functor comes from Losev, who considers a similar functor in a slightly different context in \cite{Losev2011}.
\section{Organization}
In Section \ref{sec:nilpotentcones} we describe the geometric environment where all of the action takes place: the cone $\mathcal{N}_{\theta}^*$ of nilpotent elements in $(\mathfrak{g}/\mathfrak{k})^*$. The $K$-orbits in $\mathcal{N}^*_{\theta}$ are related to the nilpotent $G_{\mathbb{R}}$-orbits in $\mathfrak{g}_{\mathbb{R}}^*$ by results of Kostant-Sekiguchi and Vergne. In Section \ref{sec:HCmodules}, we provide a precise definition of Harish-Chandra modules and their geometric invariants. In Section \ref{sec:unipotent}, we offer a working definition of unipotent representations and explain the constraints on their associated vector bundles. In Section \ref{sec:Rees}, we introduce a big abelian category $M(\mathfrak{g}_{\hbar},K)$ containing all of our objects of interest: filtered Harish-Chandra modules and $K$-equivariant coherent sheaves on $\mathcal{N}_{\theta}^*$. In Section \ref{sec:locab}, we recall some basic facts about the localization of categories. The material here is mostly taken from \cite{Popescu1973} and detailed proofs are omitted. This section is largely intended for context, although a few of the general facts about localization functors presented in Section \ref{sec:locab} prove useful in Section \ref{sec:quantloc}. In Section \ref{sec:quantloc}, we construct a left-exact endo-functor
$$\Phi_{\mathcal{O}}: M(\mathfrak{g}_{\hbar},K) \to M(\mathfrak{g}_{\hbar},K)$$
using ideas developed by Losev (\cite{Losev2011}). Under a codimension condition on $\partial \mathcal{O}$, $\Phi_{\mathcal{O}}$ descends to a functor
$$\overline{\Phi}_{\mathcal{O}}: \mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g},K) \to \mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g},K) $$
on ordinary Harish-Chandra modules. Heuristically, this functor `microlocalizes over $\mathcal{O}$'. This construction fits squarely into the general framework outlined in Section \ref{sec:locab}. From the results of Section \ref{sec:quantloc}, we obtain an alternative characterization of unipotent representations: a unipotent Harish-Chandra module attached to a nilpotent $K$-orbit $\mathcal{O}$ is canonically isomorphic to its image under $\overline{\Phi}_{\mathcal{O}}$. In this sense, a unipotent Harish-Chandra module is a microlocal object. In Section \ref{sec:cohomologyvanishing}, we prove a vanishing theorem for nilpotent orbits. The vanishing is needed to get good behavior out of $\overline{\Phi}_{\mathcal{O}}$ in the cases we consider. In Section \ref{sec:mainthm}, we prove the main theorem as a consequence.
\section{Three Nilpotent Cones}\label{sec:nilpotentcones}
An element $\lambda \in \mathfrak{g}^*$ is nilpotent if it is identified by an invariant, symmetric, non-degenerate form with a nilpotent element of $\mathfrak{g}$. Equivalently (and more invariantly), $\lambda$ is nilpotent if it annihilates its stabilizer in $\mathfrak{g}$. Let $\mathcal{N}^*$ be the set of nilpotent elements of $\mathfrak{g}^*$. $\mathcal{N}^*$ is closed (in the Zariski topology on $\mathfrak{g}^*$) and $\mathbb{C}^{\times}$-invariant.
The adjoint group $G = \mathrm{Ad}(\mathfrak{g})$ acts on $\mathcal{N}^*$ with finitely many orbits. Each orbit carries a distinguished ($G$-invariant, complex-algebraic) symplectic form. The $G$-orbits in $\mathcal{N}^*$ are partially ordered by the dominance relation $\mathcal{O} \leq \mathcal{O}' \iff \overline{\mathcal{O}} \subseteq \overline{\mathcal{O}'}$.
There are two additional cones living inside of $\mathcal{N}^*$ which are closely related to the representation theory of $G_{\mathbb{R}}$:
\begin{align*}
\mathcal{N}^*_{\theta} &= \mathcal{N}^* \cap (\mathfrak{g}/\mathfrak{k})^* = \{\lambda \in \mathcal{N}^*: \lambda(\mathfrak{k})=0\}\\
\mathcal{N}^*_{\mathbb{R}} &= \mathcal{N}^* \cap \mathfrak{g}_{\mathbb{R}}^* = \{\lambda \in \mathcal{N}^*: \lambda(i\mathfrak{g}_{\mathbb{R}})=0, \lambda(\mathfrak{g}_{\mathbb{R}}) \subseteq \mathbb{R}\}
\end{align*}
$\mathcal{N}_{\theta}^*$ is a $K$- and $\mathbb{C}^{\times}$-invariant Zariski-closed subvariety of $\mathcal{N}^*$ containing finitely-many $K$-orbits. $\mathcal{N}^*_{\mathbb{R}}$ is a $G_{\mathbb{R}}$- and $\mathbb{R}^{\times}$-invariant analytically-closed subset of $\mathcal{N}^*$ containing finitely-many $G_{\mathbb{R}}$-orbits. The $K$-orbits in $\mathcal{N}_{\theta}^*$ and the $G_{\mathbb{R}}$-orbits in $\mathcal{N}_{\mathbb{R}}^*$ are also partially ordered. In both cases, the definition is the same: $\mathcal{O} \leq \mathcal{O}' \iff \overline{\mathcal{O}} \subseteq \overline{\mathcal{O}'}$.
Orbits in $\mathcal{N}^*, \mathcal{N}_{\theta}^*$, and $\mathcal{N}_{\mathbb{R}}^*$ are related in the following way:
\begin{theorem}[Kostant-Sekiguchi-Vergne-Barbasch-Sepanski, \cite{KostantRallis1971}, \cite{Sekiguchi1987}, \cite{Vergne1995},\cite{BarbaschSepanski1998}]\label{thm:KostantSekiguchi}
There is an order-preserving bijection
$$\phi: \mathcal{N}^*_{\theta}/K \to \mathcal{N}^*_{\mathbb{R}}/G_{\mathbb{R}}
$$
characterized by the property that $G \cdot \mathcal{O} = G \cdot \phi(\mathcal{O})$ for every $\mathcal{O} \in \mathcal{N}_{\theta}^*/K$. $\mathcal{O}$ is a Lagrangian submanifold of the complex nilpotent orbit $G\cdot \mathcal{O}$ and $\phi(\mathcal{O})$ is a real form of the same complex orbit. As manifolds, $\mathcal{O}$ and $\phi(\mathcal{O})$ are diffeomorphic.
\end{theorem}
\begin{example}
Let
$$G_{\mathbb{R}} = SU(1,1) = \left\{\begin{pmatrix} a & b \\ \overline{b} & \overline{a}\end{pmatrix}: |a|^2-|b|^2=1\right\} \qquad \mathfrak{g}_{\mathbb{R}} = \left\{\begin{pmatrix} xi & y+zi \\ y-zi & -xi\end{pmatrix}: x,y,z \in \mathbb{R}\right\}$$
Choose
$$K_{\mathbb{R}} = \left\{\begin{pmatrix} a & 0 \\ 0 & \overline{a}\end{pmatrix}: |a|^2=1\right\} $$
Then
$$
G = PGL_2(\mathbb{C}) \qquad K = \left\{\begin{pmatrix} a & 0 \\ 0 & a^{-1} \end{pmatrix}: a \in \mathbb{C}^{\times}\right\} \qquad \mathfrak{p} = \left\{\begin{pmatrix} 0 & b \\ c & 0\end{pmatrix}: b,c \in \mathbb{C}\right\}$$
The Killing form identifies adjoint orbits for $G$ (resp. $G_{\mathbb{R}}$) with co-adjoint orbits for $G$ (resp. $G_{\mathbb{R}}$) and nilpotent $K$-orbits in $(\mathfrak{g}/\mathfrak{k})^*$ with nilpotent $K$-orbits in $\mathfrak{p}$.
Because of the trace condition, an element $X \in \mathfrak{sl}_2(\mathbb{C})$ is nilpotent if and only if $\det(X)=0$. Therefore
\begin{align*}
\mathcal{N}^* &= \left\{\begin{pmatrix} x & y \\ z & -x\end{pmatrix} \in \mathfrak{sl}_2(\mathbb{C}): x^2+yz=0\right\}\\
\mathcal{N}_{\theta}^* &= \left\{\begin{pmatrix} 0 & b \\ c & 0\end{pmatrix} \in \mathfrak{sl}_2(\mathbb{C}): bc=0\right\}\\
\mathcal{N}_{\mathbb{R}}^* &= \left\{\begin{pmatrix} xi & y+zi \\ y-zi & -xi\end{pmatrix} \in \mathfrak{sl}_2(\mathbb{C}): x^2=y^2+z^2\right\}
\end{align*}
In words: $\mathcal{N}^*$ is a complex quadric cone of complex dimension two, $\mathcal{N}_{\theta}^*$ is the union of two intersecting complex lines, and $\mathcal{N}_{\mathbb{R}}^*$ is a real quadric cone of real dimension two. $G$ has two orbits on $\mathcal{N}^*$: the origin, and everything else. $K$ has three orbits on $\mathcal{N}_{\theta}^*$: the origin, and the two punctured lines. $G_{\mathbb{R}}$ has three orbits on $\mathcal{N}_{\mathbb{R}}^*$: the origin, and the two punctured half-cones. The bijection of Theorem \ref{thm:KostantSekiguchi} matches the punctured complex lines with the punctured real half-cones and the origin with the origin. Consistent with Theorem \ref{thm:KostantSekiguchi}, there are diffeomorphisms between correlated orbits and the ordering is preserved.
\end{example}
If $G_{\mathbb{R}}$ is the real points of a complex algebraic group (henceforth, if $G_{\mathbb{R}}$ is `complex'), then $G \cong G_{\mathbb{R}} \times G_{\mathbb{R}}$ and we can choose $K_{\mathbb{R}} \subset G_{\mathbb{R}}$ so that $K$ embeds in $G \cong G_{\mathbb{R}} \times G_{\mathbb{R}}$ as a diagonal copy of $G_{\mathbb{R}}$. In this case,
$$\mathfrak{k} \cong \mathfrak{g}/\mathfrak{k} \cong \mathfrak{g}_{\mathbb{R}}$$
as representations of $K$. The isomorphism $\mathfrak{g}_{\mathbb{R}}^* \cong (\mathfrak{g}/\mathfrak{k})^*$ identifies nilpotent $K$-orbits in $(\mathfrak{g}/\mathfrak{k})^*$ with nilpotent $G_{\mathbb{R}}$-orbits in $\mathfrak{g}_{\mathbb{R}}^*$. In particular, every $\mathcal{O} \in \mathcal{N}_{\theta}^*/K$ is an algebraic variety of even complex dimension with a distinguished symplectic form.
\section{Harish-Chandra Modules and their Geometric Invariants}\label{sec:HCmodules}
A $(\mathfrak{g},K)$-module is a left $U(\mathfrak{g})$-module $X$ together with an algebraic $K$-action compatible with the $U(\mathfrak{g})$-action in two different ways
\begin{enumerate}
\item The action map $U(\mathfrak{g}) \otimes X \to X$ is $K$-equivariant,
\item The $\mathfrak{k}$-action, coming from the inclusion $\mathfrak{k} \subset \mathfrak{g} \subset U(\mathfrak{g})$, agrees with the differentiated action of $K$
\end{enumerate}
A morphism of $(\mathfrak{g},K)$-modules is a homomorphism of $U(\mathfrak{g})$-modules intertwining the actions of $K$. Write $M(\mathfrak{g},K)$ for the abelian category of $(\mathfrak{g},K)$-modules (and morphisms defined as above) and $\mathrm{HC}(\mathfrak{g},K)$ for the full subcategory of $(\mathfrak{g},K)$-modules finitely generated over $U(\mathfrak{g})$. The objects of $\mathrm{HC}(\mathfrak{g},K)$ are called Harish-Chandra modules.
Following \cite{Vogan1991}, we will associate to every Harish-Chandra module $X$ some geometric data in $\mathcal{N}_{\theta}^*$. We will need the concept of a \emph{good filtration} of $X$. A filtration of $X$
$$...\subseteq X_{-1} \subseteq X_0 \subseteq X_1 \subseteq ... , \qquad \bigcap_m X_m = 0, \qquad \bigcup_m X_m = X$$
by complex subspaces is \emph{compatible} if
\begin{enumerate}
\item $U_m(\mathfrak{g})X_n \subseteq X_{m+n}$
\item $KX_m \subseteq X_m$
\end{enumerate}
for every $m,n \in \mathbb{Z}$. The first condition allows us to define on $\operatorname{gr}(X) = \bigoplus_n X_n/X_{n-1}$ the structure of a graded $S(\mathfrak{g})$-module. The second condition allows us to define on $\operatorname{gr}(X)$ a graded algebraic $K$-action. These two structures satisfy compatibility conditions mirroring the compatibility conditions on $X$:
\begin{enumerate}
\item The action map $S(\mathfrak{g}) \otimes \operatorname{gr}(X) \to \operatorname{gr}(X)$ is $K$-equivariant,
\item The subspace $\mathfrak{k} \subset \mathfrak{g} \subset S(\mathfrak{g})$ acts by $0$ on $\operatorname{gr}(X)$
\end{enumerate}
In short, $\operatorname{gr}(X)$ has the structure of a graded, $K$-equivariant $S(\mathfrak{g}/\mathfrak{k})$-module. In geometric terms, $\operatorname{gr}(X)$ is a graded, $K$-equivariant quasi-coherent sheaf on $(\mathfrak{g}/\mathfrak{k})^*$. A compatible filtration is \emph{good} if additionally
\begin{enumerate}[resume]
\item\label{cond3} $\operatorname{gr}(X)$ is finitely-generated over $S(\mathfrak{g})$
\end{enumerate}
If we adopt the geometric point of view suggested above, condition \ref{cond3} implies that $\operatorname{gr}(X)$ is \emph{coherent}.
Note that every Harish-Chandra module $X$ admits a good filtration: since $X$ is finitely-generated, it contains a finite-dimensional $K$-invariant generating subspace $X_0 \subset X$, and the filtration
$$X_0 \subset U_1(\mathfrak{g})X_0 \subset U_2(\mathfrak{g})X_0 \subset ... $$
is necessarily good.
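To see why (a routine verification, spelled out here for convenience): compatibility follows from $U_m(\mathfrak{g})U_n(\mathfrak{g}) \subseteq U_{m+n}(\mathfrak{g})$ and the $K$-invariance of $X_0$, while each graded piece
$$\operatorname{gr}_m(X) = U_m(\mathfrak{g})X_0/U_{m-1}(\mathfrak{g})X_0$$
is the image of $S^m(\mathfrak{g}) \otimes X_0$ under the map induced by the action. Hence $\operatorname{gr}(X)$ is generated over $S(\mathfrak{g})$ by the finite-dimensional subspace $X_0$, which gives condition \ref{cond3}.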
Although $\operatorname{gr}(X)$ depends on the good filtration used to define it, its class in the Grothendieck group $K_0\operatorname{Coh}^K(\mathfrak{g}/\mathfrak{k})^*$ does not. More precisely,
\begin{proposition}\label{prop:grprop}
$\operatorname{gr}$ defines a group homomorphism
$$
K_0HC(\mathfrak{g},K) \to K_0\operatorname{Coh}^K(\mathfrak{g}/\mathfrak{k})^*, \qquad X \mapsto [\operatorname{gr}(X)]
$$
\end{proposition}
Proposition \ref{prop:grprop} provides us with a recipe for attaching geometric invariants to Harish-Chandra modules. A function $\varphi: \operatorname{Coh}^K(\mathfrak{g}/\mathfrak{k})^* \to S$ with values in a semigroup $S$ is \emph{additive} if $\varphi(B) = \varphi(A)+\varphi(C)$ whenever there is a short exact sequence $0 \to A \to B \to C \to 0$. Under this condition, $\varphi$ descends to classes in $K_0\operatorname{Coh}^K(\mathfrak{g}/\mathfrak{k})^*$ and therefore (by Proposition \ref{prop:grprop}) defines an invariant $X \mapsto \varphi[\operatorname{gr}(X)]$ of Harish-Chandra modules, additive on short exact sequences.
The simplest example of this construction is the \emph{associated variety} $\operatorname{AV}(X)$ of a Harish-Chandra module $X$. Let $S$ be the set of Zariski-closed subsets of $(\mathfrak{g}/\mathfrak{k})^*$ with addition defined by $\cup$. Let $\varphi$ be the function
$$\varphi: \operatorname{Coh}(\mathfrak{g}/\mathfrak{k})^* \to S, \qquad \varphi(M) = \mathrm{Supp}(M) = V(\mathrm{Ann}(M))$$
If $0 \to A \to B \to C \to 0$ is a short exact sequence in $\operatorname{Coh}(\mathfrak{g}/\mathfrak{k})^*$, then there are inclusions
$$\mathrm{Ann}(A)\mathrm{Ann}(C) \subseteq \mathrm{Ann}(B) \subseteq \mathrm{Ann}(A) \cap \mathrm{Ann}(C)$$
A prime ideal of $S(\mathfrak{g}/\mathfrak{k})$ containing $\mathrm{Ann}(B)$ must contain either $\mathrm{Ann}(A)$ or $\mathrm{Ann}(C)$ (by the first inclusion) and a prime ideal containing either $\mathrm{Ann}(A)$ or $\mathrm{Ann}(C)$ must contain $\mathrm{Ann}(B)$ (by the second). Hence, $\varphi$ is an additive function on $\operatorname{Coh}^K(\mathfrak{g}/\mathfrak{k})^*$. We define
$$\operatorname{AV}(X) = \mathrm{Supp}(\operatorname{gr} X)$$
From Proposition \ref{prop:grprop}, this is a well-defined (additive) function on Harish-Chandra modules.
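For instance (a standard special case, included only for orientation): if $X$ is a nonzero finite-dimensional Harish-Chandra module, any good filtration satisfies $X_N = X$ for $N \gg 0$, so $\operatorname{gr}(X)$ is a finite-dimensional graded $S(\mathfrak{g}/\mathfrak{k})$-module. Every homogeneous element of positive degree in $S(\mathfrak{g}/\mathfrak{k})$ then acts nilpotently on $\operatorname{gr}(X)$, so
$$\operatorname{AV}(X) = \{0\} \subset (\mathfrak{g}/\mathfrak{k})^*$$
consistent with the slogan that $\operatorname{AV}(X)$ measures the size of $X$.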
If $X$ has finite length (a slightly stronger condition than finite generation), then $\operatorname{AV}(X)$ has a very rigid structure.
\begin{proposition}[\cite{KostantRallis1971}]\label{AVprop}
Suppose $X$ is a Harish-Chandra module of finite length. Then $\operatorname{AV}(X)$ is a Zariski-closed, $K$-invariant subset of $\mathcal{N}_{\theta}^*$.
\end{proposition}
A $K$-invariant subset of $\mathcal{N}_{\theta}^*$ is a union of finitely-many $K$-orbits. Select the $K$-orbits $\mathcal{O}_1,...,\mathcal{O}_t$ which are maximal in $\operatorname{AV}(X)$ (with respect to the dominance ordering). Then
$$\operatorname{AV}(X) = \bigcup_{i=1}^t \overline{\mathcal{O}_i}$$
is a decomposition into irreducible components. Note that $\operatorname{AV}(X)$ (like any closed, $K$-invariant subset of $\mathcal{N}_{\theta}^*$) is completely determined by its maximal $K$-orbits. If $X$ is irreducible, there are rigid constraints on the maximal orbits which can appear.
\begin{theorem}[\cite{KostantRallis1971},\cite{Vogan1991}]\label{thm:irredassvar}
Suppose $X$ is an irreducible Harish-Chandra module. Let $\mathcal{O}_1,...,\mathcal{O}_t$ be the maximal $K$-orbits in its associated variety. Then $\mathcal{O}_1, ..., \mathcal{O}_t$ are Lagrangian submanifolds of the same co-adjoint $G$-orbit. In particular, they have equal dimension and are conjugate under $G$. At least one of the following is true:
\begin{enumerate}
\item $t=1$, i.e. $\operatorname{AV}(X)$ is irreducible
\item $\operatorname{codim}(\partial \mathcal{O}_i, \overline{\mathcal{O}_i}) =1$ for every $1 \leq i \leq t$ and the components $\overline{\mathcal{O}_1},...,\overline{\mathcal{O}_t}$ form a single class under the equivalence relation generated by
$$\overline{\mathcal{O}_i} \sim \overline{\mathcal{O}_j} \iff \operatorname{codim}(\overline{\mathcal{O}_i} \cap \overline{\mathcal{O}_j}, \operatorname{AV}(X)) = 1$$
\end{enumerate}
\end{theorem}
\begin{proof}
The first half of the theorem (asserting that the $\mathcal{O}_i$ are Lagrangians of the same co-adjoint orbit) is a result of Kostant-Rallis (\cite{KostantRallis1971}).
The second half of the theorem (imposing codimension constraints on the component intersections) is Proposition 3.11 in \cite{Vogan1991}. Actually, Vogan proves a slightly weaker assertion, but (with one easy modification), his argument can be upgraded to prove the stronger claim stated above.
The main idea in Vogan's proof is the localization of a good filtration. We will summarize the main results. Suppose $\mathcal{F}_nX$ is a good filtration of $X$. Vogan associates to every closed, $\mathbb{C}^{\times}$-invariant subset $Z \subset (\mathfrak{g}/\mathfrak{k})^*$ a special filtration $\mathcal{F}^ZX$ of $X$ called the `localization of $\mathcal{F}$.'
If $U = (\mathfrak{g}/\mathfrak{k})^* \setminus Z$ and $\operatorname{gr}(X,\mathcal{F})(U)$ is the localization (in the ordinary sense), there are natural maps $\operatorname{gr}(X,\mathcal{F}) \to \operatorname{gr}(X,\mathcal{F}^Z)$ and $\operatorname{gr}(X,\mathcal{F}^Z) \to \operatorname{gr}(X,\mathcal{F})(U)$. The second is an injection. These maps commute with the natural map $\operatorname{gr}(X,\mathcal{F}) \to \operatorname{gr}(X,\mathcal{F})(U)$.
\begin{center}
\begin{tikzcd}
\operatorname{gr}(X,\mathcal{F}) \arrow[r]\arrow[dr] & \operatorname{gr}(X,\mathcal{F}^Z) \arrow[d,hookrightarrow]\\
& \operatorname{gr}(X,\mathcal{F})(U)
\end{tikzcd}
\end{center}
In particular, if $\operatorname{gr}(X,\mathcal{F})(U)$ is a finitely-generated $S(\mathfrak{g}/\mathfrak{k})$-module, so is $\operatorname{gr}(X,\mathcal{F}^Z)$.
Number the maximal $K$-orbits in $\operatorname{AV}(X)$ so that $\overline{\mathcal{O}_1},...,\overline{\mathcal{O}_r}$ is an equivalence class under the relation `connected in codimension 1.' Suppose $r<t$ and define
$$Y = \bigcup_{i=1}^r \overline{\mathcal{O}_i} \qquad Z = \bigcup_{i=r+1}^t \overline{\mathcal{O}_i} $$
Form the localized filtration $\mathcal{F}^Z$. By definition, the irreducible components of $Y$ and $Z$ intersect in codimension $\geq 2$. Consequently, $\operatorname{gr}(X,\mathcal{F})(U)$ is a finitely-generated $S(\mathfrak{g}/\mathfrak{k})$-module. Hence, $\mathcal{F}^Z$ is a good filtration and $\operatorname{AV}(X) = \mathrm{Supp}(\operatorname{gr}(X,\mathcal{F}^Z)) \subseteq Y$. This contradicts the assumption $r<t$.
\end{proof}
In Section \ref{sec:quantloc}, we will prove a weaker version of Theorem \ref{thm:irredassvar} using the machinery of microlocalization. It is worth noting that if $G_{\mathbb{R}}$ is complex, then $\operatorname{codim}{(\partial \mathcal{O},\overline{\mathcal{O}})} \geq 2$ for \emph{every} nilpotent $K$-orbit (see the remarks at the end of Section \ref{sec:nilpotentcones}). In this case, Theorem \ref{thm:irredassvar} implies that $\operatorname{AV}(X)$ is irreducible for every irreducible $X$.
The associated variety has two close cousins, which we will only mention in passing. The \emph{wave front set} is a closed, $G_{\mathbb{R}}$-invariant subset of $\mathcal{N}_{\mathbb{R}}^*$ associated to a nice representation of $G_{\mathbb{R}}$. It is an analytic notion, defined in terms of distribution characters. The second related concept is the associated variety of a two-sided ideal $I \subset U(\mathfrak{g})$. It is defined by $\mathrm{AV}(I) = V(\operatorname{gr}(I))$ and, when $I$ is the annihilator of a finite-length Harish-Chandra module, is a closed, $G$-invariant subset of $\mathcal{N}^*$. The relationship between these three geometric invariants closely mirrors the relationship between $\mathcal{N}^*,\mathcal{N}^*_{\theta}$, and $\mathcal{N}_{\mathbb{R}}^*$ explained in Theorem \ref{thm:KostantSekiguchi}. Roughly: if $X$ is the Harish-Chandra module of a representation $V$, then
$$\mathrm{WF}(V) = \phi(\operatorname{AV}(X)), \quad G\cdot \mathrm{WF}(V) = G \cdot \operatorname{AV}(X) = \operatorname{AV}(\mathrm{Ann}(X))$$
An important consequence (which will prove useful later) is the following:
\begin{proposition}\label{prop:anndeterminesdimav}
If $X$ is a Harish-Chandra module, the annihilator of $X$ determines the dimension of $\operatorname{AV}(X)$.
\end{proposition}
The associated variety of a Harish-Chandra module is an important invariant, but carries almost no information about the $K$-action. The \emph{orbit datum} of a Harish-Chandra module is a refinement of the associated variety which captures much of this missing information.
\begin{definition}
Enumerate the $K$-orbits in $\mathcal{N}_{\theta}^*$: $\mathcal{O}_1,...,\mathcal{O}_n$. An \emph{orbit datum} for the pair $(\mathfrak{g},K)$ is a tuple
$$([\mathcal{V}_1],...,[\mathcal{V}_n]) \in K_0\operatorname{Coh}^K(\mathcal{O}_1) \times ... \times K_0\operatorname{Coh}^K(\mathcal{O}_n)$$
subject to the following two conditions
\begin{enumerate}
\item All of the classes $[\mathcal{V}_i]$ are genuine, i.e. they are represented by actual $K$-equivariant sheaves
\item The $K$-orbits corresponding to nonzero classes are mutually incomparable, i.e. none is bigger than another
\end{enumerate}
\end{definition}
Let $S$ be the set of all orbit data for the pair $(\mathfrak{g},K)$. Turn $S$ into a semigroup by introducing the operation $([\mathcal{V}_i]) + ([\mathcal{V}'_i]) = ([\mathcal{V}''_i])$ where $[\mathcal{V}''_i] = [\mathcal{V}_i] + [\mathcal{V}'_i]$ unless $\mathcal{O}_i$ is dominated by a second orbit $\mathcal{O}_j$ with either $[\mathcal{V}_j]$ or $[\mathcal{V}'_j]$ nonzero, in which case $[\mathcal{V}''_i] = 0$.
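To illustrate the operation (using the $SU(1,1)$ example from Section \ref{sec:nilpotentcones}, where the orbits are $\mathcal{O}_1 = \{0\}$ and the two punctured lines $\mathcal{O}_2, \mathcal{O}_3$, so that $\mathcal{O}_1 < \mathcal{O}_2$, $\mathcal{O}_1 < \mathcal{O}_3$, and $\mathcal{O}_2, \mathcal{O}_3$ are incomparable):
$$([\mathcal{V}_1],0,0) + (0,0,[\mathcal{V}_3]) = (0,0,[\mathcal{V}_3])$$
The class over $\mathcal{O}_1$ is killed because $\mathcal{O}_1$ is dominated by $\mathcal{O}_3$, which carries a nonzero class. This is exactly what is needed for the sum to satisfy the incomparability condition, and it mirrors the behavior of supports in short exact sequences.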
Suppose $M \in \operatorname{Coh}^K(\mathcal{N}_{\theta}^*)$. Then $\mathrm{Supp}(M)$ is a union of $K$-orbits in $\mathcal{N}_{\theta}^*$, and the maximal $K$-orbits in $\mathrm{Supp}(M)$ are precisely the ones which are open in $\mathrm{Supp}(M)$. Define the function
$$\varphi: \operatorname{Coh}^K(\mathcal{N}_{\theta}^*) \to S, \qquad \varphi(M) = ([M|_{\mathcal{O}_1}],...,[M|_{\mathcal{O}_n}])$$
where, by convention, $M|_{\mathcal{O}_i}=0$ if $\mathcal{O}_i$ is not open in $\mathrm{Supp}(M)$.
The function $\varphi: \operatorname{Coh}^K(\mathcal{N}_{\theta}^*) \to S$ can be extended to the larger category $\operatorname{Coh}^K_{\mathcal{N}^*_{\theta}}(\mathfrak{g}/\mathfrak{k})^*$ as follows: if $M \in \operatorname{Coh}^K(\mathfrak{g}/\mathfrak{k})^*$ has support in $\mathcal{N}^*_{\theta}$, it admits a finite filtration by $K$-invariant subsheaves
$$0=M_0 \subset M_1 \subset ... \subset M_t = M, \qquad M_i/M_{i-1} \in \operatorname{Coh}^K(\mathcal{N}^*_{\theta}) \ \text{for} \ 1 \leq i \leq t$$
For example, if $J = \mathrm{Ann}(M)$ and $I=I(\mathcal{N}^*_{\theta})$, then $I \subseteq \sqrt{J}$. Hence $I^N \subseteq J$ for $N \gg 0$ (since $S(\mathfrak{g}/\mathfrak{k})$ is Noetherian). Then one can define $M_i = I^{N-i}M$; the filtration $0 = M_0 \subset ... \subset M_N = M$ has the property mentioned above.
Fix a finite filtration of $M$ with this property. Define the function
$$\varphi: \operatorname{Coh}^K_{\mathcal{N}^*_{\theta}}(\mathfrak{g}/\mathfrak{k})^* \to S, \qquad \varphi(M) = \sum_{i=1}^t \varphi(M_i/M_{i-1})$$
In \cite{Vogan1991}, Vogan proves that $\varphi$ is well-defined and additive. Consequently (from Propositions \ref{prop:grprop} and \ref{AVprop}) there is an (additive) function on Harish-Chandra modules $\mathrm{OD}(X) = \varphi[\operatorname{gr}(X)]$, called the orbit datum of $X$.
\section{Unipotent Representations}\label{sec:unipotent}
Unipotent Harish-Chandra modules are a vaguely-defined class of unitary irreducible Harish-Chandra modules which form the building blocks of the unitary dual. Here is a working definition:
\begin{definition}\label{def:unipotence}
Let $\mathcal{O} \in \mathcal{N}^*/G$ be a nilpotent co-adjoint orbit. Suppose $\mathcal{A}$ is a \emph{unipotent Dixmier algebra} associated to $\mathcal{O}$ (see \cite{vogan1988} for a definition. Roughly, $\mathcal{A}$ is a filtered algebra with left and right $\mathfrak{g}$-actions and a canonical isomorphism $\operatorname{gr}(\mathcal{A}) \cong \mathbb{C}[\mathcal{O}]$). A unipotent Harish-Chandra module attached to $\mathcal{O}$ is an irreducible $(\mathfrak{g},K)$-module $X$ satisfying three properties:
\begin{enumerate}
\item $X$ is unitary
\item $\mathrm{Ann}(X) \subset U(\mathfrak{g})$ is a maximal ideal
\item The action map $U(\mathfrak{g}) \to \mathrm{End}(X)$ factors through $\mathcal{A}$
\end{enumerate}
\end{definition}
In \cite{Vogan1991}, Vogan proves that if $X$ satisfies the conditions of Definition \ref{def:unipotence}, then $\mathrm{OD}(X)$ has a very special form.
\begin{definition}\label{def:admissibility1}
Suppose $G$ is a complex algebraic group and $H \subset G$ is a subgroup. $H$ acts on $\mathfrak{g}$ and $\mathfrak{h}$ and therefore on $(\mathfrak{g}/\mathfrak{h})^*$. A representation $\rho: H \to GL(V)$ is \emph{admissible} if
\begin{equation}\label{eqn:admissibility}
2\,d\rho(h) = \mathrm{Tr}\big(h\big|_{(\mathfrak{g}/\mathfrak{h})^*}\big) \cdot \mathrm{Id}_V, \qquad h \in \mathfrak{h}
\end{equation}
\end{definition}
This formulation of admissibility obscures its geometric nature. Let $Z$ be a homogeneous space for $G$. Suppose $z \in Z$ and $G^z= H$. Form the universal $G$-equivariant cover $p: \tilde{Z} \to Z$. If we choose a lift $\tilde{z} \in \tilde{Z}$ of $z$, then $G^{\tilde{z}} = H^0$ and $p$ is induced from the inclusion $H^0 \subset H$. $G$-equivariant vector bundles on $Z$ (resp. $\tilde{Z}$) correspond to algebraic representations of $H$ (resp. $H^0$). Under this correspondence, the canonical bundle $\omega_{\tilde{Z}} \to \tilde{Z}$ (i.e. the line bundle of top degree differential forms) corresponds to the one-dimensional representation of $H^0$ defined by $\det|_{(\mathfrak{g}/\mathfrak{h})^*}$. Now the geometric meaning of Definition \ref{def:admissibility1} is transparent.
\begin{definition}\label{def:admissibility2}
Suppose $Z$ is a homogeneous space for $G$ and $p: \tilde{Z} \to Z$ is its universal equivariant cover. A $G$-equivariant vector bundle $\mathcal{V} \to Z$ is \emph{admissible} if
$$
(p^*\mathcal{V})^{\otimes 2} = \omega_{\tilde{Z}}^{\oplus R}
$$
where $R = \mathrm{rank}(\mathcal{V})^2$.
\end{definition}
Admissible vector bundles are closely related to equivariant local systems. Recall that a local system on $Z$ is a pair $(\mathcal{V},\nabla)$ consisting of a vector bundle $\mathcal{V}$ and a flat connection $\nabla$. If $Z$ is homogeneous and the pair $(\mathcal{V},\nabla)$ is equivariant, then $\nabla$ is uniquely determined by $\mathcal{V}$. In this case (the case of homogeneous $Z$), admitting a flat connection is a \emph{condition} on $\mathcal{V}$ (rather than additional data). This condition has a simple description via the correspondence $\operatorname{Coh}^G(Z) \cong H-\mathrm{rep}$ described above: an equivariant vector bundle $\mathcal{V}$ on $Z$ is an equivariant local system if the fiber $V = \mathcal{V}_z$ has trivial restriction to $H^0$. Said another way, $\mathcal{V}$ is an equivariant local system if $p^*\mathcal{V}$ is a multiple of the structure sheaf $\mathcal{O}_{\tilde{Z}}$. If $\omega_Z$ is trivial, then this condition coincides precisely with Definition \ref{def:admissibility2}. If $\omega_Z$ is nontrivial, the existence of admissible vector bundles is an extra condition on $Z$. We say that $Z$ is admissible if this condition is satisfied. In this case, if we fix an irreducible admissible vector bundle $\mathcal{E} \to Z$, tensoring with $\mathcal{E}$ defines a (non-canonical) bijection between equivariant local systems and admissible vector bundles on $Z$.
A local system on $Z$ is the same thing as a left $\mathcal{O}_Z$-coherent $\mathcal{D}_Z$-module. The right $\mathcal{O}_Z$-coherent $\mathcal{D}_Z$-modules are obtained by tensoring with $\omega_Z$. $\mathcal{E}$ is roughly a square root of $\omega_Z$ (it is exactly a square root of $\omega_Z$ if $H$ is connected). In light of these observations, it is perhaps helpful to regard admissible vector bundles as being halfway in between left and right $\mathcal{D}_Z$-modules.
We are mostly interested in admissible vector bundles because of their close connection to unipotent representations. In \cite{Vogan1991}, Vogan proves the following
\begin{theorem}[\cite{Vogan1991}]\label{thm:unipotentadmissible}
Suppose $X$ is a unipotent $(\mathfrak{g},K)$-module with orbit datum $\mathrm{OD}(X) = ([\mathcal{V}_1],...,[\mathcal{V}_n])$. Then every nonzero $\mathcal{V}_i$ is admissible.
\end{theorem}
Only certain nilpotent $K$-orbits are admissible, so Theorem \ref{thm:unipotentadmissible} imposes strong additional constraints on the associated varieties of unipotent representations.
\section{The Rees Construction}\label{sec:Rees}
We want to perform operations on filtered Harish-Chandra modules analogous to the restriction and extension of coherent sheaves on $\mathcal{N}^*_{\theta}$. The first problem we encounter is that the category $\mathrm{HC}^{\mathrm{filt}}(\mathfrak{g},K)$ of well-filtered Harish-Chandra modules is not abelian: cokernels are not well-defined. The solution is to pass to a larger abelian category containing $\mathrm{HC}^{\mathrm{filt}}(\mathfrak{g},K)$.
Let $A$ be an associative algebra equipped with an increasing filtration by subspaces
$$...\subseteq
A_{-1} \subseteq A_0 \subseteq A_1 \subseteq ..., \qquad A_mA_n \subseteq A_{m+n}, \qquad \bigcap_m A_m=0, \qquad \bigcup_m A_m = A$$
Form the Laurent polynomial algebra $A[\hbar,\hbar^{-1}]$ in the formal symbol $\hbar$. Define a $\mathbb{Z}$-grading by declaring $\mathrm{deg}(A) = 0$ and $\mathrm{deg}(\hbar) =1$. The \emph{Rees algebra} of $A$ is the graded subalgebra
$$R_{\hbar}A = \bigoplus_{m \in \mathbb{Z}} A_m\hbar^m \subset A[\hbar,\hbar^{-1}]$$
In a precise sense, $R_{\hbar}A$ interpolates between $A$ and $\operatorname{gr}(A)$.
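As a toy example (not needed later): take $A = \mathbb{C}[x]$, filtered by degree, so that $A_m$ consists of polynomials of degree $\leq m$. Then
$$R_{\hbar}A = \mathbb{C}[x\hbar,\hbar] \subset \mathbb{C}[x][\hbar,\hbar^{-1}]$$
a polynomial ring in the two variables $y = x\hbar$ and $\hbar$. Setting $\hbar=0$ recovers $\operatorname{gr}(A) \cong \mathbb{C}[y]$, while setting $\hbar=1$ recovers $A = \mathbb{C}[x]$.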
\begin{proposition}\label{prop:reesfacts}
The subspaces $\hbar R_{\hbar}A \subset R_{\hbar}A$ and $(\hbar-1)R_{\hbar}A \subset R_{\hbar}A$ are two-sided ideals and
\begin{enumerate}
\item There is a canonical isomorphism of graded algebras
$$R_{\hbar}A/\hbar R_{\hbar}A \cong \operatorname{gr}(A)$$
\item There is a canonical isomorphism of filtered algebras
$$R_{\hbar}A/(\hbar -1)R_{\hbar}A \cong A$$
\end{enumerate}
\end{proposition}
\begin{proof}
The ideals are two-sided since the elements $\hbar, \hbar-1$ are central.
\begin{enumerate}
\item The linear maps $A_n\hbar^n \cong A_n \to A_n/A_{n-1}$ assemble into a surjective homomorphism of graded algebras
$$R_{\hbar}A \to \operatorname{gr}(A)$$
The kernel of this map is the graded ideal
$$\bigoplus_nA_{n-1}\hbar^n = \bigoplus_nA_n\hbar^{n+1} = \hbar R_{\hbar}A$$
\item The inclusions $A_n\hbar^n \cong A_n \subseteq A$ assemble into a filtered homomorphism
$$
i: R_{\hbar}A \to A
$$
which is surjective since the filtration is exhaustive. Choose an element $a$ in the kernel of $i$
$$a = a_p\hbar^p + a_{p+1}\hbar^{p+1} + ... + a_q \hbar^q, \qquad \sum_{n=p}^q a_n = 0$$
If we define $b_n = -a_p - a_{p+1} - ... - a_n \in A_n$ for $p \leq n \leq q$, then one easily computes (the telescoping is spelled out after the proof)
$$
(\hbar -1)\sum_{n=p}^{q-1}b_n\hbar^n = a
$$
In particular, $a \in (\hbar - 1)R_{\hbar}A$. On the other hand, the ideal $(\hbar - 1)R_{\hbar}A$ is spanned by the elements $(\hbar - 1)a_n\hbar^n$ and by an easy computation $i((\hbar-1)a_n\hbar^n) =0$.
\end{enumerate}
\end{proof}
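Explicitly, the telescoping computation used in step (2) runs as follows (a routine expansion, included for convenience): since $b_{n-1}-b_n = a_n$ for $p < n < q$, $-b_p = a_p$, and $b_{q-1} = a_q$ (the last equality using $\sum_{n=p}^q a_n = 0$),
$$(\hbar-1)\sum_{n=p}^{q-1}b_n\hbar^n = \sum_{n=p+1}^{q}b_{n-1}\hbar^{n} - \sum_{n=p}^{q-1}b_n\hbar^n = -b_p\hbar^p + \sum_{n=p+1}^{q-1}(b_{n-1}-b_n)\hbar^n + b_{q-1}\hbar^q = a$$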
Now suppose $M$ is a module for $A$ equipped with an increasing filtration by subspaces compatible with the filtration on $A$
$$... \subseteq M_{-1} \subseteq M_0 \subseteq M_1 \subseteq ..., \qquad A_mM_n \subseteq M_{m+n}, \qquad \bigcap_m M_m =0, \qquad \bigcup_m M_m = M$$
Form the vector space $M[\hbar,\hbar^{-1}]$ and define a $\mathbb{Z}$-grading (as above) by $\deg(M) = 0$ and $\deg(\hbar)=1$. $M[\hbar,\hbar^{-1}]$ is a graded module for $R_{\hbar}A$ due to the compatibility of the filtration. The \emph{Rees module} of $M$ is the graded $R_{\hbar}A$-submodule
$$R_{\hbar}M = \bigoplus_{m \in \mathbb{Z}}M_m\hbar^m \subset M[\hbar,\hbar^{-1}]$$
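One feature worth noting (an immediate consequence of the definition): the Rees module remembers the filtration, not just $M$. Shifting the filtration, $M'_n = M_{n-1}$, replaces $R_{\hbar}M$ by
$$R_{\hbar}M' = \bigoplus_{n}M_{n-1}\hbar^{n} = \hbar\, R_{\hbar}M \subset M[\hbar,\hbar^{-1}]$$
which is isomorphic to $R_{\hbar}M$ with its grading shifted by one. Setting $\hbar=1$ forgets the shift, while setting $\hbar=0$ remembers only the subquotients $M_n/M_{n-1}$.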
Take $A = U(\mathfrak{g})$ with its standard filtration. $K$ acts on $U(\mathfrak{g})$ by filtered automorphisms and therefore on its Rees algebra $R_{\hbar}U(\mathfrak{g})$ by graded automorphisms. A $(\mathfrak{g}_{\hbar},K)$-module is a graded left $R_{\hbar}U(\mathfrak{g})$-module $X_{\hbar}$ equipped with a graded algebraic $K$-action satisfying the following two conditions:
\begin{enumerate}
\item The action map $R_{\hbar}U(\mathfrak{g}) \otimes X_{\hbar} \to X_{\hbar}$ is $K$-equivariant,
\item The $R_{\hbar}U(\mathfrak{g})$-action, restricted to the subspace $\mathfrak{k}\hbar \subset \mathfrak{g}\hbar \subset R_{\hbar}U(\mathfrak{g})$, coincides with $\hbar$ times the differentiated action of $K$.
\end{enumerate}
A morphism of $(\mathfrak{g}_{\hbar},K)$-modules is a graded homomorphism of $R_{\hbar}U(\mathfrak{g})$-modules intertwining the actions of $K$. Write $M(\mathfrak{g}_{\hbar},K)$ for the abelian category of $(\mathfrak{g}_{\hbar},K)$-modules (and morphisms defined as above) and $\mathrm{HC}(\mathfrak{g}_{\hbar},K)$ for the full subcategory of finitely-generated $(\mathfrak{g}_{\hbar},K)$-modules.
The assignment $X \mapsto R_{\hbar}X$ defines a functor from the category $\mathrm{HC}^{\mathrm{filt}}(\mathfrak{g},K)$ of well-filtered Harish-Chandra modules to $\mathrm{HC}(\mathfrak{g}_{\hbar},K)$.
\begin{proposition}
If $X \in \mathrm{HC}^{\mathrm{filt}}(\mathfrak{g},K)$ (with filtration $...\subseteq X_{-1} \subseteq X_0 \subseteq X_1 \subseteq ...$), $R_{\hbar}X$ has the structure of a $(\mathfrak{g}_{\hbar},K)$-module, finitely-generated over $R_{\hbar}U(\mathfrak{g})$. The assignment $X \mapsto R_{\hbar}X$ upgrades to a functor
$$R_{\hbar}: \mathrm{HC}^{\mathrm{filt}}(\mathfrak{g},K) \to \mathrm{HC}(\mathfrak{g}_{\hbar},K)$$
defined on morphisms $f: X \to Y$ by $(R_{\hbar}f)(x\hbar^m) = f(x)\hbar^m$.
$R_{\hbar}$ is a fully-faithful embedding. Its image is the subcategory $\mathrm{HC}^{\mathrm{tf}}(\mathfrak{g}_{\hbar},K)$ of Harish-Chandra $(\mathfrak{g}_{\hbar},K)$-modules which are $\hbar$-torsion-free.
\end{proposition}
\begin{proof}
There is a functor
$$\hbar=1: \mathrm{HC}(\mathfrak{g}_{\hbar},K) \to \mathrm{HC}^{\mathrm{filt}}(\mathfrak{g},K)$$
defined by $X_{\hbar} \mapsto X_{\hbar}/(\hbar-1)X_{\hbar}$. The argument provided in the proof of Proposition \ref{prop:reesfacts} (replacing rings with modules) shows that $(\hbar=1) \circ R_{\hbar}$ is the identity functor on $\mathrm{HC}^{\mathrm{filt}}(\mathfrak{g},K)$. It remains to exhibit a natural isomorphism $R_{\hbar}(X_{\hbar}/(\hbar-1)X_{\hbar}) \cong X_{\hbar}$ for every $X_{\hbar} \in \mathrm{HC}^{\mathrm{tf}}(\mathfrak{g}_{\hbar},K)$.
Fix $X_{\hbar} \in \mathrm{HC}^{\mathrm{tf}}(\mathfrak{g}_{\hbar},K)$ and write $X_{\hbar}^n$ for its $n$th graded component. For every integer $N$, define the graded subspace
$$X_{\hbar}^{\leq N} = \bigoplus_{n \leq N} X_{\hbar}^n$$
There is a linear map
$$
\varphi_N: X_{\hbar}^{\leq N} \to X_{\hbar}^N, \qquad \varphi_N(x) = \sum_{n \leq N} x^n\hbar^{N-n}
$$
This map is surjective, since (for example) it restricts to the identity map on $X_{\hbar}^N$. We will show that
$$\ker{\varphi_N}=(\hbar-1)X_{\hbar} \cap X_{\hbar}^{\leq N}$$
Suppose
$$(\hbar-1)x \in (\hbar-1)X_{\hbar} \cap X_{\hbar}^{\leq N}, \qquad x = x^p + x^{p+1} + ... + x^q$$
Then $\hbar x^n - x^{n+1} = 0$ for every $n \geq N$ and consequently $x \in X_{\hbar}^{\leq N-1}$ since $X_{\hbar}$ is $\hbar$-torsion free. Then by a simple computation $\varphi_N((\hbar-1)x)=0$.
Conversely, suppose
$$x = x^p + x^{p+1} + ... + x^N \in \ker{\varphi_N}$$
Then $\sum_{n \leq N}x^n \hbar^{N-n} = 0$. For $n \leq N$, define
$$
y^n = -x^n - \hbar x^{n-1} - \hbar^2 x^{n-2} - ... \in X_{\hbar}^n
$$
Then by a simple computation
$$x=(\hbar-1)(y^{N-1} + y^{N-2} + ...) \in (\hbar-1)X_{\hbar} \cap X_{\hbar}^{\leq N}$$
This proves $\ker{\varphi_N}=(\hbar-1)X_{\hbar} \cap X_{\hbar}^{\leq N}$. As a result, $\varphi_N$ induces a linear isomorphism
$$
\varphi_N: (X_{\hbar}/(\hbar-1)X_{\hbar})^{\leq N} = X_{\hbar}^{\leq N}/\left((\hbar-1)X_{\hbar}\cap X_{\hbar}^{\leq N}\right) \cong X_{\hbar}^N
$$
We can assemble these maps into a graded isomorphism
$$
\varphi = \bigoplus_N \varphi_N: R_{\hbar}(X_{\hbar}/(\hbar-1)X_{\hbar}) \cong X_{\hbar}
$$
It is clear from its construction that $\varphi$ is an $R_{\hbar}U(\mathfrak{g})$-module homomorphism and is compatible with the $K$-actions.
\end{proof}
Besides $R_{\hbar}$, there are several other functors relating the categories $\mathrm{HC}^{\mathrm{filt}}(\mathfrak{g},K)$, $\mathrm{HC}(\mathfrak{g}_{\hbar},K)$ and $\operatorname{Coh}^{K,\mathbb{C}^{\times}}(\mathfrak{g}/\mathfrak{k})^*$. Proposition \ref{prop:reesfacts} applied to $A=U(\mathfrak{g})$ gives us canonical isomorphisms $R_{\hbar}U(\mathfrak{g})/\hbar R_{\hbar}U(\mathfrak{g}) \cong S(\mathfrak{g})$ and $R_{\hbar}U(\mathfrak{g})/(\hbar-1) R_{\hbar}U(\mathfrak{g}) \cong U(\mathfrak{g})$. Every $M \in \operatorname{Coh}^{K,\mathbb{C}^{\times}}(\mathfrak{g}/\mathfrak{k})^*$ can be regarded as a finitely-generated $(\mathfrak{g}_{\hbar},K)$-module via the quotient map $R_{\hbar}U(\mathfrak{g}) \to S(\mathfrak{g})$. On the other hand, if $X_{\hbar} \in \mathrm{HC}(\mathfrak{g}_{\hbar},K)$, then $X_{\hbar}/\hbar X_{\hbar}$ has the structure of a graded, $K$-equivariant, coherent sheaf on $(\mathfrak{g}/\mathfrak{k})^*$ and $X_{\hbar}/(\hbar-1)X_{\hbar}$ has the structure of a well-filtered Harish-Chandra module. These operations define functors, which are related by the following proposition.
\begin{proposition}\label{prop:functors}
The functors
\begin{align*}
i: \operatorname{Coh}^{K,\mathbb{C}^{\times}}(\mathfrak{g}/\mathfrak{k})^* &\to \mathrm{HC}(\mathfrak{g}_{\hbar},K)\\
M &\mapsto M\\
\hbar=0: \mathrm{HC}(\mathfrak{g}_{\hbar},K) &\to \operatorname{Coh}^{K,\mathbb{C}^{\times}}(\mathfrak{g}/\mathfrak{k})^*\\
X_{\hbar} &\mapsto X_{\hbar}/\hbar X_{\hbar}\\
\hbar=1: \mathrm{HC}(\mathfrak{g}_{\hbar},K) &\to \mathrm{HC}^{\mathrm{filt}}(\mathfrak{g},K)\\
X_{\hbar} &\mapsto X_{\hbar}/(\hbar-1)X_{\hbar}\\
\operatorname{gr}: \mathrm{HC}^{\mathrm{filt}}(\mathfrak{g},K) &\to \operatorname{Coh}^{K,\mathbb{C}^{\times}}(\mathfrak{g}/\mathfrak{k})^*\\
X &\mapsto \operatorname{gr}(X)
\end{align*}
satisfy the relations
\begin{enumerate}
\item $(\hbar=0) \circ i = \mathrm{id}$
\item $(\hbar=1) \circ R_{\hbar}=\mathrm{id}$
\item $(\hbar=0) \circ R_{\hbar} = \operatorname{gr}$
\end{enumerate}
\end{proposition}
\begin{center}
\begin{tikzcd}[column sep = large, row sep = large]
& \mathrm{HC}(\mathfrak{g}_{\hbar},K) \arrow[dl, "\hbar=0", shift left=.5ex] \arrow[r, hookrightarrow] \arrow[d,"\hbar=1", shift left=.5ex]& M(\mathfrak{g}_{\hbar},K)\\
\operatorname{Coh}^{K,\mathbb{C}^{\times}}(\mathfrak{g}/\mathfrak{k})^* \arrow[ur, hookrightarrow, "i", shift left=.5ex] & \mathrm{HC}^{\mathrm{filt}}(\mathfrak{g},K) \arrow[u,hookrightarrow, "R_{\hbar}", shift left=.5ex] \arrow[l, "\operatorname{gr}"] \arrow[d, "\text{forget}"] & \\
& \mathrm{HC}(\mathfrak{g},K) \arrow[r,hookrightarrow] & M(\mathfrak{g},K)
\end{tikzcd}
\end{center}
\begin{proof}
The first relation is obvious. The second and third follow from Proposition \ref{prop:reesfacts} (replacing $A$ with $X$).
\end{proof}
Thus, $\mathrm{HC}(\mathfrak{g}_{\hbar},K)$ is an abelian category containing (as full embedded subcategories) both $\operatorname{Coh}^{K,\mathbb{C}^{\times}}(\mathfrak{g}/\mathfrak{k})^*$ and $\mathrm{HC}^{\mathrm{filt}}(\mathfrak{g},K)$. The $\hbar$-parameter interpolates between coherent sheaves and filtered Harish-Chandra modules.
\section{The Localization of Abelian Categories}\label{sec:locab}
In this section, we recall some basic facts about the localization of abelian categories. Our development roughly follows \cite{Popescu1973}, Chapter 4.
Let $\mathcal{C}$ be an abelian category. A full subcategory $\mathcal{B} \subset \mathcal{C}$ is \emph{Serre} if for every short exact sequence $0 \to X \to Y \to Z \to 0$ in $\mathcal{C}$,
$$Y \in \mathcal{B} \iff X \in \mathcal{B} \ \text{and} \ Z \in \mathcal{B}$$
In other words, $\mathcal{B}$ is a full subcategory which is closed under the formation of subobjects, quotients, and extensions.
\begin{example}\label{ex:restriction1}
Let $X$ be a variety containing a closed subset $Z$. Write $\operatorname{Coh}(X)$ for the category of coherent sheaves on $X$ and $\operatorname{Coh}_Z(X)$ for the full subcategory of sheaves supported in $Z$. Suppose $0 \to A \to B \to C \to 0$ is a short exact sequence in $\operatorname{Coh}(X)$. Support is additive on short exact sequences (see the remarks after Proposition \ref{prop:grprop}). Hence, $\mathrm{Supp}(B) = \mathrm{Supp}(A) \cup \mathrm{Supp}(C)$, which implies
$$\mathrm{Supp}(B) \subseteq Z \iff \mathrm{Supp}(A) \subseteq Z \ \text{and} \ \mathrm{Supp}(C) \subseteq Z$$
Therefore, $\operatorname{Coh}_Z(X)$ is a Serre subcategory of $\operatorname{Coh}(X)$.
\end{example}
\begin{proposition}\label{prop:quotcat}
Let $\mathcal{C}$ be an abelian category and $\mathcal{B} \subset \mathcal{C}$ a Serre subcategory. There is an abelian category $\mathcal{C}/\mathcal{B}$, unique up to equivalence, receiving an exact, essentially surjective functor $T: \mathcal{C} \to \mathcal{C}/\mathcal{B}$ with kernel $\ker{T} = \{C \in \mathcal{C}: TC=0\}$ equal to $\mathcal{B}$ satisfying the following universal property: if $F: \mathcal{C} \to \mathcal{D}$ is an exact functor with $\mathcal{B} \subseteq \ker{F}$, then there is a unique exact functor $G: \mathcal{C}/\mathcal{B} \to \mathcal{D}$ such that $F = G \circ T$.
\end{proposition}
\begin{proof}
Define $\mathcal{C}/\mathcal{B}$ to be the abelian category having the same objects as $\mathcal{C}$ but with morphisms defined by
$$\mathrm{Hom}_{\mathcal{C}/\mathcal{B}}(C,C') = \varinjlim_{S,S'}\mathrm{Hom}_{\mathcal{C}}(S,C'/S')$$
with $S$ running over all subobjects of $C$ with $C/S \in \mathcal{B}$ and $S'$ running over all subobjects of $C'$ in $\mathcal{B}$. By construction, every morphism in $\mathcal{C}$ maps to a morphism in $\mathcal{C}/\mathcal{B}$. Therefore, the identity map on objects defines a functor $T: \mathcal{C} \to \mathcal{C}/\mathcal{B}$ which is exact and essentially surjective. If $f: C \to C'$ is a morphism, $Tf=0$ if and only if there are subobjects $S \subset C$ and $S' \subset C'$ as above with $S \to C \overset{f}{\to} C' \to C'/S'$ equal to $0$. In particular, $TC=0$ if and only if $C \in \mathcal{B}$. It is not hard to see that $\mathcal{C}/\mathcal{B}$ satisfies the universal property described in the proposition. We leave the details to the reader. Since $\mathcal{C}/\mathcal{B}$ is characterized by a universal property, its uniqueness is automatic.
\end{proof}
\begin{example}\label{ex:restriction2}
In the setting of Example \ref{ex:restriction1}, let $j: U \subset X$ denote the open complement of $Z$. Restriction to $U$ defines an exact functor $j^*: \operatorname{QCoh}(X) \to \operatorname{QCoh}(U)$ with kernel equal to $\operatorname{QCoh}_Z(X)$. Suppose $F: \operatorname{QCoh}(X) \to \mathcal{D}$ is an exact functor to an abelian category $\mathcal{D}$ with $\operatorname{QCoh}_Z(X) \subseteq \ker{F}$. The direct image functor $j_*: \operatorname{QCoh}(U) \to \operatorname{QCoh}(X)$ is right inverse to $j^*$. Define $G=F\circ j_*$. $G$ is exact by Corollary 3.12 of \cite{Popescu1973} and $G\circ j^* = F$. Hence, $\operatorname{QCoh}(X)/\operatorname{QCoh}_Z(X) \cong \operatorname{QCoh}(U)$ by the uniqueness claim of Proposition \ref{prop:quotcat}.
There is also an equivalence $\operatorname{Coh}(X)/\operatorname{Coh}_Z(X) \cong \operatorname{Coh}(U)$, although the proof is more delicate. In general (without codimension conditions on $Z$), $j_*$ does not preserve coherence, so the argument of the previous paragraph does not apply.
\end{example}
A Serre subcategory $\mathcal{B}$ of an abelian category $\mathcal{C}$ is \emph{localizing} if the quotient functor $T: \mathcal{C} \to \mathcal{C}/\mathcal{B}$ admits a right-adjoint $L: \mathcal{C}/\mathcal{B}\to \mathcal{C}$. We call the composition $LT: \mathcal{C} \to \mathcal{C}$ the \emph{localization} of $\mathcal{C}$ with respect to the localizing subcategory $\mathcal{B}$. When it exists, it is unique up to natural isomorphism.
\begin{example}
In the setting of Examples \ref{ex:restriction1} and \ref{ex:restriction2}, assume $\operatorname{codim}(Z,X) \geq 2$. Then $j_*: \operatorname{Coh}(U) \to \operatorname{Coh}(X)$ is right adjoint (and right inverse) to restriction. Hence, $j_*j^*: \operatorname{Coh}(X) \to \operatorname{Coh}(X)$ is the localization of $\operatorname{Coh}(X)$ with respect to $\operatorname{Coh}_Z(X)$. On the other hand, if $\operatorname{codim}(Z,X)=1$, $\operatorname{Coh}_Z(X)$ is not localizing. No right adjoint exists.
\end{example}
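A concrete instance (standard, and included only for orientation): take $X = \mathbb{A}^2$ and $Z = \{0\}$, so $\operatorname{codim}(Z,X)=2$. Then Hartogs-type extension for the normal variety $\mathbb{A}^2$ gives
$$j_*j^*\mathcal{O}_{\mathbb{A}^2} = \mathcal{O}_{\mathbb{A}^2}$$
By contrast, for $X = \mathbb{A}^1$ and $Z = \{0\}$ we have $j_*\mathcal{O}_U = \mathbb{C}[x,x^{-1}]$, which is not finitely generated over $\mathbb{C}[x]$; this failure of coherence is the codimension-one obstruction.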
The following proposition catalogs the essential properties of $L$.
\begin{proposition}[\cite{Popescu1973}]\label{prop:generalpropsoflocalization}
Suppose $\mathcal{B}$ is a localizing subcategory of an abelian category $\mathcal{C}$ and that $L: \mathcal{C}/\mathcal{B}\to \mathcal{C}$ is right adjoint to the quotient functor $T: \mathcal{C} \to \mathcal{C}/\mathcal{B}$.
\begin{enumerate}
\item $L$ is left exact
\item $TL$ is naturally isomorphic to $\mathrm{id}_{\mathcal{C}/\mathcal{B}}$
\item An object $C \in \mathcal{C}$ is in the image of $L$ if and only if it has no nontrivial maps from, or extensions by, objects in $\mathcal{B}$. Symbolically,
$$C \in \mathrm{Im}(L) \iff \mathrm{Hom}(\mathcal{B},C)=\mathrm{Ext}^1(\mathcal{B},C)=0$$
\item For every object $C \in \mathcal{C}$, the canonical morphism $C \to LT(C)$ has kernel and cokernel in $\mathcal{B}$.
\end{enumerate}
\end{proposition}
We conclude this section with a useful criterion.
\begin{proposition}[\cite{Popescu1973},Theorem 4.9]\label{prop:criterionquotient}
Suppose $T:\mathcal{C} \to \mathcal{A}$ is an exact functor of abelian categories with a fully faithful right adjoint $L: \mathcal{A} \to \mathcal{C}$. Then $T$ is a quotient functor and $\mathcal{A} \cong \mathcal{C}/\ker{T}$.
\end{proposition}
\section{Microlocalization for Harish-Chandra Modules}\label{sec:quantloc}
Returning to the setting of Sections \ref{sec:intro} through \ref{sec:Rees}, choose $\chi \in \mathcal{N}^*_{\theta}$ and let $\mathcal{O} = K \cdot \chi \subset \mathcal{N}^*_{\theta}$. If $X_{\hbar} \in M(\mathfrak{g}_{\hbar},K)$, $X_{\hbar}/\hbar X_{\hbar}$ has the structure of a graded $K$-equivariant quasi-coherent sheaf on $(\mathfrak{g}/\mathfrak{k})^*$. Define the support of $X_{\hbar}$ to be the support of $X_{\hbar}/\hbar X_{\hbar}$, a $K$ and $\mathbb{C}^{\times}$-invariant subset of $(\mathfrak{g}/\mathfrak{k})^*$. If $Z$ is a subset of $(\mathfrak{g}/\mathfrak{k})^*$, we can consider the full subcategories $\operatorname{QCoh}_Z^{K,\mathbb{C}^{\times}}(\mathfrak{g}/\mathfrak{k})^*$, $\operatorname{Coh}_Z^{K,\mathbb{C}^{\times}}(\mathfrak{g}/\mathfrak{k})^*$, $M_Z(\mathfrak{g}_{\hbar},K)$, $\mathrm{HC}_Z(\mathfrak{g}_{\hbar},K)$, $\mathrm{HC}_Z^{\mathrm{filt}}(\mathfrak{g},K)$, and $\mathrm{HC}_Z(\mathfrak{g},K)$ of objects supported in $Z$. We are particularly interested in the special cases $Z= \overline{\mathcal{O}}$ and $Z=\partial \mathcal{O}$. We begin with a simple observation.
\begin{proposition}\label{prop:Serresubcat}
The subcategories
\begin{align*}
\operatorname{QCoh}^{K,\mathbb{C}^{\times}}_{\partial \mathcal{O}}(\mathfrak{g}/\mathfrak{k})^* &\subset \operatorname{QCoh}^{K,\mathbb{C}^{\times}}_{\overline{\mathcal{O}}}(\mathfrak{g}/\mathfrak{k})^*\\
\operatorname{Coh}^{K,\mathbb{C}^{\times}}_{\partial \mathcal{O}}(\mathfrak{g}/\mathfrak{k})^* &\subset \operatorname{Coh}^{K,\mathbb{C}^{\times}}_{\overline{\mathcal{O}}}(\mathfrak{g}/\mathfrak{k})^*\\
\mathrm{HC}_{\partial \mathcal{O}}(\mathfrak{g},K) &\subset \mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g},K)\\
\mathrm{HC}_{\partial \mathcal{O}}(\mathfrak{g}_{\hbar},K) &\subset \mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g}_{\hbar},K)
\end{align*}
are Serre.
\end{proposition}
\begin{proof}
The first three subcategories are Serre by the additivity of support (see the remarks following Proposition \ref{prop:grprop}). For the final subcategory, we will need some new ideas.
In Section \ref{sec:HCmodules}, we developed a theory of good filtrations for finitely-generated $(\mathfrak{g},K)$-modules. As the reader may have suspected, the ideas in Section \ref{sec:HCmodules} are a special case of a more general construction. Given a filtered algebra $A$ with a filtered, algebraic $K$-action, one can define the notion of an $(A,K)$-module along the lines of Section \ref{sec:HCmodules}. A good filtration of an $(A,K)$-module $X$ is an increasing filtration by subspaces subject to
\begin{enumerate}
\item $A_mX_n \subseteq X_{m+n}$
\item $KX_m \subseteq X_m$
\item $\operatorname{gr}(X)$ is finitely-generated over $\operatorname{gr}(A)$
\end{enumerate}
If $\operatorname{gr}(A)$ is finitely-generated and commutative, this is a reasonable definition. In particular, under these conditions on $\operatorname{gr}(A)$:
\begin{enumerate}
\item Every finitely-generated $(A,K)$-module admits a good filtration, and
\item taking $\operatorname{gr}$ defines a group homomorphism
$$K_0\mathrm{HC}(A,K) \to K_0\operatorname{Coh}^K(\mathrm{Spec}(\operatorname{gr}(A)))$$
\end{enumerate}
For a proof of the second fact (the first fact is easy), see Proposition $2.2$ in \cite{Vogan1991} (replacing $U(\mathfrak{g})$ with $A$ wherever it occurs).
The algebra $R_{\hbar}U(\mathfrak{g})$ has two natural $K$-invariant filtrations. One is the filtration defined by the grading. The second is inherited from the usual filtration on $U(\mathfrak{g})$. More precisely
$$(R_{\hbar}U(\mathfrak{g}))_m = U_m(\mathfrak{g})[\hbar] \cap R_{\hbar}U(\mathfrak{g}) = \sum_{n < m} U_n(\mathfrak{g})\hbar^n + \hbar^m U_m(\mathfrak{g})[\hbar]$$
Its associated graded identifies (in a natural way) with the algebra $S(\mathfrak{g})[\hbar]$. Suppose $X_{\hbar} \in \mathrm{HC}(\mathfrak{g}_{\hbar},K)$. We have defined $\mathrm{Supp}(X_{\hbar}) = \mathrm{Supp}(X_{\hbar}/\hbar X_{\hbar})$. But the commentary above suggests an alternative definition. Since $\operatorname{gr} R_{\hbar}U(\mathfrak{g})$ is finitely-generated and commutative, there is a reasonable theory of good filtrations for finitely-generated $(\mathfrak{g}_{\hbar},K)$-modules. In particular, every object $X_{\hbar} \in \mathrm{HC}(\mathfrak{g}_{\hbar},K)$ admits a good filtration. By definition, $X_{\hbar}$ comes equipped with a $\mathbb{Z}$-grading compatible with the $\mathbb{Z}$-grading on $R_{\hbar}U(\mathfrak{g})$, and since the filtration on $R_{\hbar}U(\mathfrak{g})$ is by graded subspaces, we can choose a filtration on $X_{\hbar}$ with the same property. By the definition of a good filtration, $\operatorname{gr}(X_{\hbar})$ is a coherent sheaf on $(\mathfrak{g}/\mathfrak{k})^* \times \mathbb{C}$, and one can define
$$\mathrm{Supp}(X_{\hbar}) = \mathrm{Supp}(\operatorname{gr} X_{\hbar}) \cap (\hbar =0) = \mathrm{Supp}(\operatorname{gr} X_{\hbar}/\hbar \operatorname{gr} X_{\hbar})$$
This is a well-defined subset of $(\mathfrak{g}/\mathfrak{k})^*$. One can show without too much difficulty that it agrees with our original definition of support, i.e. that
$$\mathrm{Supp}(\operatorname{gr} X_{\hbar}/\hbar \operatorname{gr} X_{\hbar}) = \mathrm{Supp}(X_{\hbar}/\hbar X_{\hbar})$$
The key point is that for $p,q \gg 0$ and $x \in X_{\hbar}$ homogeneous we have
$$\mathrm{deg}(x) >p \ \text{and} \ x \in X_{p-q} \implies x \in \hbar X_{\hbar}$$
This is deduced directly from the definition of a good filtration.
Now it follows from the properties of support (see the remarks after Proposition \ref{prop:grprop}) that $\mathrm{HC}_{\partial \mathcal{O}}(\mathfrak{g}_{\hbar},K) \subset \mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g}_{\hbar},K)$ is Serre.
\end{proof}
\begin{remark}
Note that the subcategory $M_{\partial \mathcal{O}}(\mathfrak{g}_{\hbar},K) \subset M_{\overline{\mathcal{O}}}(\mathfrak{g}_{\hbar},K)$ is missing from this list. It is closed under quotients and extensions, but not under subobjects. Take, for instance, $\mathfrak{g} = \mathbb{C}$. Then $R_{\hbar}U(\mathfrak{g}) = \mathbb{C}[x,\hbar]$. Let $M= \mathbb{C}(\hbar)$, the field of rational functions in $\hbar$. $M$ is an (infinitely-generated) $\mathbb{C}[x,\hbar]$-module with $x$ acting by $0$ and $M/\hbar M=0$ (since $\mathbb{C}(\hbar)$ is a field). Hence, $\mathrm{Supp}(M) = \emptyset$. Yet the submodule $L=\mathbb{C}[\hbar]$ has $L/\hbar L=\mathbb{C}$ and is therefore supported at a point.
\end{remark}
Our proximate goal is to prove that, under the codimension condition
$$\operatorname{codim}(\partial \mathcal{O}, \overline{\mathcal{O}}) \geq 2,$$
$\mathrm{HC}_{\partial \mathcal{O}}(\mathfrak{g}_{\hbar},K)$ is a localizing subcategory of $\mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g}_{\hbar},K)$, and to construct the corresponding localization functor
$$\Phi_{\mathcal{O}}: \mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g}_{\hbar},K) \to \mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g}_{\hbar},K)$$
$\Phi_{\mathcal{O}}$ will descend to a functor
$$\overline{\Phi}_{\mathcal{O}}: \mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g},K) \to \mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g},K)$$
which will inherit all of the interesting properties of $\Phi_{\mathcal{O}}$.
Our construction is adapted from Losev, who constructs analogous functors in \cite{Losev2011} for Harish-Chandra bimodules. Most of the proofs in this section are due essentially to Losev, although some arguments have been modified to accommodate our slightly more general setting.
Fix an element $\chi \in \mathcal{O}$. If we fix an invariant form on $\mathfrak{g}$, $\chi$ is identified with a nilpotent element $e \in \mathfrak{p}$, which belongs to an $\mathfrak{sl}_2$ triple $(e,f,h) \in \mathfrak{p} \times \mathfrak{p} \times \mathfrak{k}$. The centralizer $L = K^{e,f,h}$ is a Levi subgroup of $K^e = K^{\chi}$.
Define the maximal ideal $I_{\chi} \subset R_{\hbar}U(\mathfrak{g})$ as the preimage under the canonical surjection $R_{\hbar}U(\mathfrak{g}) \to S(\mathfrak{g})$ of the maximal ideal defining $\chi$. Then consider the completion of $R_{\hbar}U(\mathfrak{g})$ with respect to $I_{\chi}$:
$$\widehat{R_{\hbar}U(\mathfrak{g})} = \varprojlim R_{\hbar}U(\mathfrak{g})/I_{\chi}^n$$
There is a canonical surjection $\widehat{R_{\hbar}U(\mathfrak{g})} \to R_{\hbar}U(\mathfrak{g})/I_{\chi}$. Since $R_{\hbar}U(\mathfrak{g})/I_{\chi}$ is a field, the kernel $\hat{I}_{\chi} \subset \widehat{R_{\hbar}U(\mathfrak{g})}$ of this surjection is the unique maximal (left, right, and two-sided) ideal in $\widehat{R_{\hbar}U(\mathfrak{g})}$.
The basic properties of the algebra $\widehat{R_{\hbar}U(\mathfrak{g})}$ and its finitely-generated modules are summarized in \cite{Losev2011}. Here are a few:
\begin{proposition}[\cite{Losev2011}]\label{prop:completionfacts}
\begin{enumerate}
\item $\widehat{R_{\hbar}U(\mathfrak{g})}$ is Noetherian
\item $\widehat{R_{\hbar}U(\mathfrak{g})}$ is separated and complete in the $\hat{I}_{\chi}$-adic topology, i.e.
$$\bigcap_n \hat{I}_{\chi}^n\widehat{R_{\hbar}U(\mathfrak{g})}=0, \qquad \widehat{R_{\hbar}U(\mathfrak{g})} \cong \varprojlim \widehat{R_{\hbar}U(\mathfrak{g})}/\hat{I}_{\chi}^n$$
\item If $\hat{X}_{\hbar}$ is a finitely-generated $\widehat{R_{\hbar}U(\mathfrak{g})}$-module, $\hat{X}_{\hbar}$ is separated and complete in the $\hat{I}_{\chi}$-adic topology, i.e.
$$\bigcap_n \hat{I}_{\chi}^n\hat{X}_{\hbar}=0, \qquad \hat{X}_{\hbar} \cong \varprojlim \hat{X}_{\hbar}/\hat{I}_{\chi}^n\hat{X}_{\hbar}$$
\end{enumerate}
\end{proposition}
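To fix ideas, consider the toy case of the remark above: $\mathfrak{g} = \mathbb{C}$, so that $R_{\hbar}U(\mathfrak{g}) = \mathbb{C}[x,\hbar]$ and $\chi = 0$. Then $I_{\chi} = (x,\hbar)$ and
$$\widehat{R_{\hbar}U(\mathfrak{g})} = \varprojlim \mathbb{C}[x,\hbar]/(x,\hbar)^n = \mathbb{C}[[x,\hbar]]$$
with $\hat{I}_{\chi}$ the maximal ideal $(x,\hbar)\mathbb{C}[[x,\hbar]]$. In this toy case, all three assertions of Proposition \ref{prop:completionfacts} reduce to standard facts about formal power series rings; separatedness, for instance, is Krull's intersection theorem.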
If a group (or Lie algebra) acts on $R_{\hbar}U(\mathfrak{g})$ and preserves $I_{\chi}$, then it acts naturally on the completion $\widehat{R_{\hbar}U(\mathfrak{g})}$. There are two reasonable group actions with this property:
\begin{enumerate}
\item \textbf{The adjoint action of} $L$. Since $L$ preserves $\chi \in \mathfrak{g}^*$, it preserves the maximal ideal defining it and therefore its preimage $I_{\chi} \subset R_{\hbar}U(\mathfrak{g})$. Consequently, it lifts to an action on $\widehat{R_{\hbar}U(\mathfrak{g})}$. In fact, the entire centralizer $K^{\chi}$ acts in this fashion, but for reasons that will soon become apparent we will not consider the action of the unipotent radical.
\item \textbf{The Kazhdan action of} $\mathbb{C}^{\times}$. The element $h \in \mathfrak{k}$ determines a unique co-character $\gamma: \mathbb{C}^{\times} \to K$ with $d\gamma_1(1) = h$. We get an algebraic action of $\mathbb{C}^{\times}$ on $U(\mathfrak{g})$ by composing $\gamma$ with $\mathrm{Ad}$:
$$t \cdot X_1...X_m = \mathrm{Ad}(\gamma(t))(X_1)...\mathrm{Ad}(\gamma(t))(X_m)$$
Finally, we extend this action to the polynomial algebra $U(\mathfrak{g})[\hbar]$ by defining $t \cdot \hbar = t^2\hbar$. This action obviously preserves the subalgebra $R_{\hbar}U(\mathfrak{g}) \subset U(\mathfrak{g})[\hbar]$.
$\mathbb{C}^{\times}$ also acts on $\mathfrak{g}^*$ by $t \cdot \zeta = t^{-2} \mathrm{Ad}^*(\gamma(t))(\zeta)$. This induces a $\mathbb{C}^{\times}$-action on $S(\mathfrak{g}) = \mathbb{C}[\mathfrak{g}^*]$ characterized by $t \cdot X = t^2 \mathrm{Ad}(\gamma(t))(X)$. These actions (of $\mathbb{C}^{\times}$ on $R_{\hbar}U(\mathfrak{g}), S(\mathfrak{g})$, and $\mathfrak{g}^*$) are what Losev calls in \cite{Losev2011} the Kazhdan actions of $\mathbb{C}^{\times}$. The canonical map $R_{\hbar}U(\mathfrak{g}) \to S(\mathfrak{g})$ is equivariant with respect to the Kazhdan actions on $R_{\hbar}U(\mathfrak{g})$ and $S(\mathfrak{g})$. The definitions are rigged so that $\chi$ is fixed by $\mathbb{C}^{\times}$:
\begin{align*}
t \cdot \chi &= t^{-2} \gamma(t) \cdot \chi = t^{-2} \chi(\gamma(t)^{-1} \cdot) = t^{-2} (e, \gamma(t)^{-1}\cdot)\\
&= t^{-2}(\gamma(t)\cdot e, \cdot) = t^{-2}(t^2e,\cdot) = (e,\cdot) = \chi
\end{align*}
Hence, the Kazhdan action preserves the ideal defining $\chi$ and therefore its preimage $I_{\chi} \subset R_{\hbar}U(\mathfrak{g})$. Consequently, it lifts to an action on $\widehat{R_{\hbar}U(\mathfrak{g})}$.
\end{enumerate}
Two comments on these definitions are in order. First, since $L$ centralizes $\gamma(\mathbb{C}^{\times})$, these two actions commute. This is not the case if we consider the full action of $K^{\chi}$ and this is the principal reason why we restrict our attention to $L$. Second, neither action is algebraic (i.e. locally finite), except in the most trivial situations. However, both actions can be differentiated (to the Lie algebras $\mathfrak{l}$ and $\mathbb{C}$, respectively) since they are lifted from algebraic actions on $R_{\hbar}U(\mathfrak{g})$.
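As a sanity check, here is the weight computation underlying these definitions. If $X \in \mathfrak{g}$ is an $\mathrm{ad}(h)$-eigenvector of eigenvalue $k$, then $X$ has Kazhdan weight $k+2$ in $S(\mathfrak{g})$; in particular, for the triple itself,
$$t \cdot e = t^4e, \qquad t \cdot h = t^2h, \qquad t \cdot f = f$$
The maximal ideal of $S(\mathfrak{g})$ defining $\chi$ is generated by the elements $X - \chi(X)$ for $X \in \mathfrak{g}$. Since the invariant form pairs $\mathrm{ad}(h)$-eigenspaces of opposite eigenvalues, $\chi(X) = (e,X)$ can be nonzero only when $k = -2$, that is, only when $X - \chi(X)$ has Kazhdan weight $0$. So the ideal defining $\chi$ is generated by Kazhdan-homogeneous elements, in agreement with the computation above.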
Suppose $X_{\hbar} \in M(\mathfrak{g}_{\hbar},K)$. Form the completion of $X_{\hbar}$ with respect to $I_{\chi}$:
$$\hat{X}_{\hbar} = \varprojlim X_{\hbar}/I_{\chi}^nX_{\hbar}$$
$\hat{X}_{\hbar}$ has lots of interesting structure. For one, it is obviously a module for $\widehat{R_{\hbar}U(\mathfrak{g})}$. The $L$-action on $X_{\hbar}$ lifts to an action on $\hat{X}_{\hbar}$, since $L$ preserves $I_{\chi}$. The naive $\mathbb{C}^{\times}$-action on $X_{\hbar}$ (obtained from the grading) \emph{does not} lift to the completion (since $I_{\chi}X_{\hbar}$ is not usually graded). But as with $R_{\hbar}U(\mathfrak{g})$ we can define a slightly modified action (call it the Kazhdan action on $X_{\hbar}$) by
$$t \cdot x = t^{2n}\gamma(t)x, \qquad n = \mathrm{deg}(x)$$
and this action \emph{does} lift to the completion. Therefore, $\hat{X}_{\hbar}$ has the structure of a $\widehat{R_{\hbar}U(\mathfrak{g})}$-module with actions of $L$ and $\mathbb{C}^{\times}$. Once again, these actions are \emph{not} algebraic. But they \emph{do} differentiate to the Lie algebras. The axioms for $M(\mathfrak{g}_{\hbar},K)$ impose various compatibility conditions on these three algebraic structures.
\begin{proposition}\label{prop:structureofcompletion}
If $X_{\hbar} \in M(\mathfrak{g}_{\hbar},K)$, then $\hat{X}_{\hbar}$ has the structure of a $\widehat{R_{\hbar}U(\mathfrak{g})}$-module with actions of $L$ and $\mathbb{C}^{\times}$ satisfying the following properties
\begin{enumerate}
\item\label{structureofcompletion1} The $L$ and $\mathbb{C}^{\times}$-actions commute
\item\label{structureofcompletion2} The action map $\widehat{R_{\hbar}U(\mathfrak{g})} \otimes \hat{X}_{\hbar} \to \hat{X}_{\hbar}$ is both $L$ and $\mathbb{C}^{\times}$-equivariant
\item\label{structureofcompletion3} The $\widehat{R_{\hbar}U(\mathfrak{g})}$-action, restricted to the subspace $\mathfrak{l}\hbar \subset \mathfrak{g}\hbar \subset R_{\hbar}U(\mathfrak{g}) \subset \widehat{R_{\hbar}U(\mathfrak{g})}$ coincides with $\hbar$ times the differentiated action of $L$.
\end{enumerate}
\end{proposition}
A $(\hat{\mathfrak{g}}_{\hbar},L)$-module is a left $\widehat{R_{\hbar}U(\mathfrak{g})}$-module with $L$ and $\mathbb{C}^{\times}$-actions satisfying the conditions of Proposition \ref{prop:structureofcompletion}. A morphism of $(\hat{\mathfrak{g}}_{\hbar},L)$-modules is an $L$ and $\mathbb{C}^{\times}$-equivariant $\widehat{R_{\hbar}U(\mathfrak{g})}$-module homomorphism. Write $M(\hat{\mathfrak{g}}_{\hbar},L)$ for the abelian category of $(\hat{\mathfrak{g}}_{\hbar},L)$-modules (with morphisms defined as above) and $\mathrm{HC}(\hat{\mathfrak{g}}_{\hbar},L)$ for the full subcategory of $(\hat{\mathfrak{g}}_{\hbar},L)$-modules finitely-generated over $\widehat{R_{\hbar}U(\mathfrak{g})}$. Completion defines a functor $M(\mathfrak{g}_{\hbar},K) \to M(\hat{\mathfrak{g}}_{\hbar},L)$. Its restriction to the subcategory $\mathrm{HC}(\mathfrak{g}_{\hbar},K)$ is exact.
\begin{proposition}[\cite{Losev2011}]\label{prop:completionexact}
If $X_{\hbar} \in \mathrm{HC}(\mathfrak{g}_{\hbar},K)$, the natural map
$$
\widehat{R_{\hbar}U(\mathfrak{g})} \otimes_{R_{\hbar}U(\mathfrak{g})} X_{\hbar} \to \hat{X}_{\hbar}$$
is an isomorphism. In particular, $\hat{X}_{\hbar}$ is a finitely-generated $\widehat{R_{\hbar}U(\mathfrak{g})}$-module. In other words, the completion functor restricts to a functor
$$\hat{\cdot}: \mathrm{HC}(\mathfrak{g}_{\hbar},K) \to \mathrm{HC}(\hat{\mathfrak{g}}_{\hbar},L)$$
This functor is exact.
\end{proposition}
\begin{proof}
If the algebra $R_{\hbar}U(\mathfrak{g})$ were commutative, this would be a standard consequence of the Artin-Rees lemma. A proof in the commutative case can be found in \cite{Eisenbud1995}, Theorem 7.2. As Losev points out in \cite{Losev2011}, the standard proof for commutative algebras works in our setting more or less without change. The key point is that $R_{\hbar}U(\mathfrak{g})$ is Noetherian and has a commutative associated graded. For the details, see \cite{Losev2011}, Proposition 2.4.1.
\end{proof}
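For reference, the commutative statement is the following: if $A$ is a Noetherian commutative ring, $I \subseteq A$ an ideal, $M$ a finitely-generated $A$-module, and $N \subseteq M$ a submodule, then there is an integer $c \geq 0$ such that
$$I^nM \cap N = I^{n-c}(I^cM \cap N), \qquad n \geq c$$
It is (essentially) this statement, for $I = I_{\chi}$, that carries over to $R_{\hbar}U(\mathfrak{g})$ and makes the completion functor exact.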
\begin{corollary}\label{cor:completionfacts}
Suppose $X_{\hbar} \in \mathrm{HC}(\mathfrak{g}_{\hbar},K)$. Then
\begin{enumerate}
\item There is a natural isomorphism
$$\hat{X}_{\hbar}/\hbar \hat{X}_{\hbar} \cong \widehat{X_{\hbar}/\hbar X_{\hbar}}$$
\item $\hat{X}_{\hbar}=0$ if and only if $\chi \notin \mathrm{Supp}(X_{\hbar})$.
\end{enumerate}
\end{corollary}
\begin{proof}
\begin{enumerate}
\item
Let $X_{\hbar} \in \mathrm{HC}(\mathfrak{g}_{\hbar},K)$. Multiplication by $\hbar$ defines an exact sequence in $\mathrm{HC}(\mathfrak{g}_{\hbar},K)$
$$X_{\hbar} \overset{\hbar}{\to} X_{\hbar} \to X_{\hbar}/\hbar X_{\hbar} \to 0$$
Since completion is exact, we get an exact sequence in $\mathrm{HC}(\hat{\mathfrak{g}}_{\hbar},L)$
$$\hat{X}_{\hbar} \overset{\hbar}{\to} \hat{X}_{\hbar} \to \widehat{X_{\hbar}/\hbar X_{\hbar}} \to 0$$
which gives rise to the desired isomorphism.
\item Use the description of (commutative) completion provided in part (1) of Proposition \ref{prop:4completions} to observe that $\widehat{X_{\hbar}/\hbar X_{\hbar}} = 0$ if and only if $\chi \notin \mathrm{Supp}(X_{\hbar}/\hbar X_{\hbar}) = \mathrm{Supp}(X_{\hbar})$. From the previous part, $\widehat{X_{\hbar}/\hbar X_{\hbar}} = 0$ if and only if $\hat{X}_{\hbar} = \hbar \hat{X}_{\hbar}$. From Proposition \ref{prop:completionexact}, $\hat{X}_{\hbar}$ is a finitely-generated $\widehat{R_{\hbar}U(\mathfrak{g})}$-module and therefore, from Proposition \ref{prop:completionfacts}, separated in the $\hat{I}_{\chi}$-adic topology. In particular (since $\hbar \in \hat{I}_{\chi}$)
$$\bigcap_n \hbar^n \hat{X}_{\hbar} =0$$
Consequently, $\hat{X}_{\hbar} = \hbar \hat{X}_{\hbar}$ if and only if $\hat{X}_{\hbar} = 0$. Putting all of these implications together, we deduce the result.
\end{enumerate}
\end{proof}
For every $\hat{X}_{\hbar} \in M(\hat{\mathfrak{g}}_{\hbar},L)$, we will define a special subspace $\Gamma \hat{X}_{\hbar}$ of $\hat{X}_{\hbar}$ (strictly speaking, $\Gamma \hat{X}_{\hbar}$ is not a subspace when $K$ is disconnected; we will address this difficulty in a moment). $\Gamma$ is basically a Zuckerman functor. See \cite{KnappVogan1995} for a thorough treatment of Zuckerman functors and the related theory of cohomological induction.
As usual, denote the identity component of $K$ by $K^0$. Let $K^1 = LK^0$. $K^1$ is a subgroup of $K$ with Lie algebra $\mathfrak{k}$ and component group $L/(L \cap K^0)$. The construction of $\Gamma \hat{X}_{\hbar}$ proceeds in stages:
\begin{enumerate}
\item First, take the subspace $\Gamma^0\hat{X}_{\hbar}$ of $K^0$-finite vectors. More precisely, define
\begin{align*}
\Gamma^0\hat{X}_{\hbar} = \{&x \in \hat{X}_{\hbar}: x \ \text{belongs to a finite-dimensional } \mathfrak{k}\hbar-\text{invariant}\\
&\text{subspace which integrates to a representation of } K^0\}
\end{align*}
Since $K^0$ is connected, $\Gamma^0\hat{X}_{\hbar}$ has a well-defined algebraic $K^0$-action. It is also an $R_{\hbar}U(\mathfrak{g})$ submodule of $\hat{X}_{\hbar}$ and the $K^0$-action is compatible with the module structure in the two usual ways. Since $\mathfrak{k}\hbar$ is stable under the $L$ and Kazhdan $\mathbb{C}^{\times}$-actions on $R_{\hbar}U(\mathfrak{g})$, $\Gamma^0\hat{X}_{\hbar}$ is stable under the $L$ and Kazhdan $\mathbb{C}^{\times}$-actions on $\hat{X}_{\hbar}$. The $L$-action on $\Gamma^0\hat{X}_{\hbar}$ is locally-finite---its differential coincides with the locally finite action of $\mathfrak{k}\hbar$. This presents an interesting complication. $\Gamma^0\hat{X}_{\hbar}$ has two (in general, distinct) algebraic actions of $L \cap K^0$. One comes from the $K^0$-action built into the definition of $\Gamma^0 \hat{X}_{\hbar}$. The other comes from the $L$-action on $\hat{X}_{\hbar}$.
\item Next, form the subspace of $\Gamma^0\hat{X}_{\hbar}$ consisting of vectors on which the two $L \cap K^0$-actions coincide.
$$\Gamma^1\hat{X}_{\hbar} = \{x \in \Gamma^0\hat{X}_{\hbar}: (L \cap K^0) \cdot_1 x = (L \cap K^0) \cdot_2 x\} $$
Since both $L \cap K^0$-actions differentiate to the same action of $\mathfrak{l} = \mathrm{Lie}(L \cap K^0)$, they differ by a representation of the component group $C=(L \cap K^0)/L^0$. $\Gamma^1\hat{X}_{\hbar}$ is the space of $C$-invariants in $\Gamma^0\hat{X}_{\hbar}$. This subspace is an $R_{\hbar}U(\mathfrak{g})$-submodule of $\Gamma^0\hat{X}_{\hbar}$ and is stable under $\mathbb{C}^{\times}$. It has algebraic actions of $L$ and $K^0$ which agree on the intersection and therefore an algebraic action of $K^1=LK^0$.
\item Take the $\mathbb{C}^{\times}$-finite vectors in $\Gamma^1\hat{X}_{\hbar}$.
\begin{align*}
\Gamma^1_{\mathrm{lf}}\hat{X}_{\hbar} = \{&x \in \Gamma^1\hat{X}_{\hbar}: x \ \text{belongs to a finite-dimensional }\\
&\mathbb{C}^{\times}-\text{invariant subspace}\}
\end{align*}
This subspace has the structure of an $R_{\hbar}U(\mathfrak{g})$-module with algebraic actions of $K^1$ and $\mathbb{C}^{\times}$. The $K^1$-action is compatible with the module structure in the two usual ways. The $\mathbb{C}^{\times}$-action is compatible with the module structure in the sense that the action map $R_{\hbar}U(\mathfrak{g}) \otimes \Gamma^1_{\mathrm{lf}}\hat{X}_{\hbar} \to \Gamma^1_{\mathrm{lf}}\hat{X}_{\hbar}$ is $\mathbb{C}^{\times}$-equivariant. Note that the actions of $K^1$ and $\mathbb{C}^{\times}$ do not, in general, commute. The actions of $L$ and $\mathbb{C}^{\times}$ obviously do, but the actions of $K^0$ and $\mathbb{C}^{\times}$ do not. We can fix this by composing the existing $\mathbb{C}^{\times}$-action with $\gamma(t)^{-1}$ (in effect, undoing the `Kazhdanification' required to make the original $\mathbb{C}^{\times}$-action lift to the completion). The result is a grading on $\Gamma^1_{\mathrm{lf}}\hat{X}_{\hbar}$ which is manifestly even. Halve it to obtain a grading which is compatible (under the natural map $X_{\hbar} \to \hat{X}_{\hbar}$) with the original grading on $X_{\hbar}$. With this new grading, $\Gamma^1_{\mathrm{lf}}\hat{X}_{\hbar}$ has the structure of a graded, $K^1$-equivariant $R_{\hbar}U(\mathfrak{g})$-module (with the standard grading on $R_{\hbar}U(\mathfrak{g})$).
\item The final step is to perform a finite induction
$$\Gamma \hat{X}_{\hbar} = \mathrm{Ind}^K_{K^1} \Gamma^1_{\mathrm{lf}}\hat{X}_{\hbar}$$
If we identify $\Gamma \hat{X}_{\hbar}$ with functions
$$\{f: K \to \Gamma^1_{\mathrm{lf}} \hat{X}_{\hbar}: f(k'k) = k' \cdot f(k) \ \text{for } k' \in K^1,k \in K\}$$
there is an $R_{\hbar}U(\mathfrak{g})$-module structure on $\Gamma \hat{X}_{\hbar}$ defined by
$$(Yf)(k) = (k\cdot Y)f(k), \qquad Y \in R_{\hbar}U(\mathfrak{g}), k \in K, f \in \Gamma\hat{X}_{\hbar}$$
and an algebraic $\mathbb{C}^{\times}$-action defined by
$$(t\cdot f)(k) = t \cdot f(k), \qquad t \in \mathbb{C}^{\times}, k \in K, f \in \Gamma\hat{X}_{\hbar}$$
Summarizing, $\Gamma \hat{X}_{\hbar}$ has the structure of an $R_{\hbar}U(\mathfrak{g})$-module with algebraic $K$ and $\mathbb{C}^{\times}$-actions. It is easy to check that these three structures satisfy the defining properties of a $(\mathfrak{g}_{\hbar},K)$-module.
\end{enumerate}
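Schematically, the four steps assemble into
$$\Gamma = \mathrm{Ind}^K_{K^1} \circ \left(\,\cdot\,\right)_{\mathbb{C}^{\times}\text{-lf}} \circ \left(\,\cdot\,\right)^{C} \circ \left(\,\cdot\,\right)_{K^0\text{-lf}}$$
with the regrading of step (3) understood.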
Since all of the ingredients used to define $\Gamma \hat{X}_{\hbar}$ ($K^0$-finite vectors, $C$-invariants, $\mathbb{C}^{\times}$-finite vectors, induction) are functorial, $\Gamma$ defines a functor $M(\hat{\mathfrak{g}}_{\hbar},L) \to M(\mathfrak{g}_{\hbar},K)$. We will define
$$\Phi_{\chi}= \Gamma \circ \hat{\cdot}: M(\mathfrak{g}_{\hbar},K) \to M(\mathfrak{g}_{\hbar},K)$$
$\Phi_{\chi}$ is clearly left exact: it is the composite of a completion functor (exact, by Proposition \ref{prop:completionexact}), $K^0$-finite vectors (left exact), $C$-invariants (left exact), $\mathbb{C}^{\times}$-finite vectors (left exact), and finite induction (exact). In Section \ref{sec:mainthm}, we will study its right derived functors. We will need the following
\begin{proposition}\label{prop:enoughinjectives}
The category $M(\mathfrak{g}_{\hbar},K)$ has enough injectives.
\end{proposition}
\begin{proof}
This follows from an easy general fact: suppose $\mathcal{A}$ and $\mathcal{B}$ are abelian categories and $(L:\mathcal{A}\to \mathcal{B}, R: \mathcal{B} \to \mathcal{A})$ is an adjunction. Suppose that $L$ is exact and that the natural map $A \to RLA$ is an injection for every $A \in \mathcal{A}$. Then if $\mathcal{B}$ has enough injectives, so does $\mathcal{A}$.
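For completeness, here is the standard argument: given $A \in \mathcal{A}$, choose a monomorphism $LA \hookrightarrow I$ with $I$ injective in $\mathcal{B}$. Since $L$ is exact, its right adjoint $R$ preserves injectives, so $RI$ is injective in $\mathcal{A}$. And since $R$, being a right adjoint, preserves monomorphisms, the composition $A \hookrightarrow RLA \hookrightarrow RI$ embeds $A$ into an injective object.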
If $\mathcal{A} = M(\mathfrak{g}_{\hbar},K)$ and $\mathcal{B} = R_{\hbar}U(\mathfrak{g})-\mathrm{mod}$, the forgetful functor $L: \mathcal{A} \to \mathcal{B}$ has a right adjoint $R: \mathcal{B} \to \mathcal{A}$ defined in much the same way as $\Gamma$. If $B \in \mathcal{B}$, the subspace
\begin{align*}
R^0(B) = \{&b \in B: b \ \text{belongs to a finite-dimensional } \mathfrak{k}\hbar - \text{invariant subspace} \\
&\text{which integrates to a representation of } \ K^0\}
\end{align*}
has the structure of a $K^0$-equivariant $R_{\hbar}U(\mathfrak{g})$-module and
$$R^1(B) = \mathrm{Ind}^K_{K^0}R^0B$$
has the structure of a $K$-equivariant $R_{\hbar}U(\mathfrak{g})$-module. We need to force a grading on $R^1B$ (compatible with the $K$-action and the module structure in all of the usual ways). Define
$$R(B) = \bigoplus_{n \in \mathbb{Z}} R^1(B)$$
putting one copy of $R^1B$ in every integer degree. Give $R(B)$ the structure of a $(\mathfrak{g}_{\hbar},K)$-module by defining
\begin{align*}
&Y(b_n) = (Yb)_{m+n} &&Y \in R_{\hbar}U(\mathfrak{g})_m\\
&k(b_n) = (kb)_n &&k \in K\\
&t(b_n) = (t^nb)_n &&t \in \mathbb{C}^{\times}
\end{align*}
It is easy to check that $\mathcal{A},\mathcal{B},L$, and $R$ satisfy the conditions listed above. It is well known that $R-\mathrm{mod}$ has enough injectives for any ring $R$. Hence, $\mathcal{A}$ has enough injectives by the general fact above.
\end{proof}
For the remainder of this section, we will enforce the assumption
$$\operatorname{codim}(\partial \mathcal{O},\overline{\mathcal{O}}) \geq 2$$
Although Losev never states this assumption in \cite{Losev2011}, it is implicit in the setting he considers: for Losev, $G_{\mathbb{R}}$ is complex and hence $\operatorname{codim}(\partial \mathcal{O},\overline{\mathcal{O}})$ is even.
Let $j: \mathcal{O} \hookrightarrow \overline{\mathcal{O}}$ be the inclusion.
\begin{proposition}\label{prop:coherencepreserved}
Recall the containments
$$\operatorname{Coh}^{K,\mathbb{C}^{\times}}(\overline{\mathcal{O}}) \subset \operatorname{Coh}^{K,\mathbb{C}^{\times}}_{\overline{\mathcal{O}}}(\mathfrak{g}/\mathfrak{k})^* \subset \mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g}_{\hbar},K) \subset M(\mathfrak{g}_{\hbar},K)$$
from Proposition \ref{prop:functors}. $\Phi_{\chi}$ preserves all three subcategories of $M(\mathfrak{g}_{\hbar},K)$. Its restriction to $\operatorname{Coh}^{K,\mathbb{C}^{\times}}(\overline{\mathcal{O}})$ coincides with the functor
$$j_*j^*: \operatorname{Coh}^{K,\mathbb{C}^{\times}}(\overline{\mathcal{O}}) \to \operatorname{Coh}^{K,\mathbb{C}^{\times}}(\overline{\mathcal{O}})$$
\end{proposition}
\begin{proof}
Suppose $M \in \operatorname{Coh}^{K, \mathbb{C}^{\times}}(\overline{\mathcal{O}})$. By the definition of $\Phi_{\chi}$ and Proposition \ref{prop:geomsignificance}, it is clear that $\Phi_{\chi}M$ coincides with the $\mathbb{C}^{\times}$-finite part of $j_*j^*M$. But the $\mathbb{C}^{\times}$-action on $j_*j^*M$ is already locally finite, so $\Phi_{\chi}M = j_*j^*M$. By Proposition \ref{thm:finitegeneration} and the codimension condition on $\mathcal{O}$, this is an object in $\operatorname{Coh}^{K,\mathbb{C}^{\times}}(\overline{\mathcal{O}})$.
Now suppose $M \in \operatorname{Coh}^{K, \mathbb{C}^{\times}}_{\overline{\mathcal{O}}}(\mathfrak{g}/\mathfrak{k})^*$. $M$ admits a finite filtration by $K$ and $\mathbb{C}^{\times}$-equivariant subsheaves
$$0 = M_0 \subset M_1 \subset ... \subset M_t = M, \qquad N_i :=M_i/M_{i-1} \in \operatorname{Coh}^{K, \mathbb{C}^{\times}}(\overline{\mathcal{O}}) \ \text{for} \ 1 \leq i \leq t$$
We have $\Phi_{\chi}M_1 \in \operatorname{Coh}^{K,\mathbb{C}^{\times}}_{\overline{\mathcal{O}}}(\mathfrak{g}/\mathfrak{k})^*$ by the result of the previous paragraph. Suppose $\Phi_{\chi}M_i \in \operatorname{Coh}^{K,\mathbb{C}^{\times}}_{\overline{\mathcal{O}}}(\mathfrak{g}/\mathfrak{k})^*$ for some index $i <t$. There is a short exact sequence
$$0 \to M_{i} \to M_{i+1} \to N_{i+1} \to 0$$
in $\operatorname{Coh}^{K,\mathbb{C}^{\times}}_{\overline{\mathcal{O}}}(\mathfrak{g}/\mathfrak{k})^*$. By the left exactness of $\Phi_{\chi}$, there is a long exact sequence in $\operatorname{QCoh}^{K,\mathbb{C}^{\times}}_{\overline{\mathcal{O}}}(\mathfrak{g}/\mathfrak{k})^*$
$$0 \to \Phi_{\chi}M_i \to \Phi_{\chi}M_{i+1} \to \Phi_{\chi}N_{i+1} \to ... $$
$\Phi_{\chi}M_i$ is coherent by hypothesis and $\Phi_{\chi}N_{i+1}$ is coherent since $N_{i+1} \in \operatorname{Coh}^{K,\mathbb{C}^{\times}}(\overline{\mathcal{O}})$. Hence, $\Phi_{\chi}M_{i+1}$ is coherent, since it is sandwiched in an exact sequence between coherent sheaves. By induction on $i$, $\Phi_{\chi}M \in \operatorname{Coh}^{K,\mathbb{C}^{\times}}_{\overline{\mathcal{O}}}(\mathfrak{g}/\mathfrak{k})^*$.
Finally, suppose $X_{\hbar} \in \mathrm{HC}(\mathfrak{g}_{\hbar},K)$. Define $M = X_{\hbar}/\hbar X_{\hbar} \in \operatorname{Coh}^{K, \mathbb{C}^{\times}}_{\overline{\mathcal{O}}}(\mathfrak{g}/\mathfrak{k})^*$. There is a short exact sequence
$$0 \to \hbar X_{\hbar} \to X_{\hbar} \to M \to 0$$
in $\mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g}_{\hbar},K)$. By the left exactness of $\Phi_{\chi}$, there is a long exact sequence
$$0 \to \Phi_{\chi}\hbar X_{\hbar} \to \Phi_{\chi}X_{\hbar} \to \Phi_{\chi}M \to ... $$
in $M_{\overline{\mathcal{O}}}(\mathfrak{g}_{\hbar},K)$ and hence an inclusion
$$\Phi_{\chi}X_{\hbar}/\Phi_{\chi}\hbar X_{\hbar} \subseteq \Phi_{\chi}M $$
It is clear from the construction of $\Phi_{\chi}$ that $\hbar\Phi_{\chi}X_{\hbar} = \Phi_{\chi}\hbar X_{\hbar}$. So we obtain from above
$$\Phi_{\chi}X_{\hbar}/\hbar \Phi_{\chi}X_{\hbar} \subseteq \Phi_{\chi}M$$
The left hand side is coherent since the right hand side is coherent. Choose a finite set of generators $x_1,...,x_n$ for $\Phi_{\chi}X_{\hbar}/\hbar \Phi_{\chi}X_{\hbar}$ over $S(\mathfrak{g}/\mathfrak{k})$. Choose arbitrary lifts $\tilde{x}_1,...,\tilde{x}_n$ to $\Phi_{\chi}X_{\hbar}$ and form the $(\mathfrak{g}_{\hbar},K)$-submodule $R \subset \Phi_{\chi}X_{\hbar}$ generated by these elements. By definition, $\Phi_{\chi}X_{\hbar} = R + \hbar \Phi_{\chi}X_{\hbar}$. If we replace $X_{\hbar}$ with $\hbar X_{\hbar}$ and repeat the same argument, we obtain $\hbar \Phi_{\chi}X_{\hbar} = \hbar R + \hbar^2 \Phi_{\chi}X_{\hbar}$, and hence $\Phi_{\chi}X_{\hbar} = R + \hbar^2\Phi_{\chi}X_{\hbar}$. Then, $\Phi_{\chi}X_{\hbar} = R + \hbar^n\Phi_{\chi}X_{\hbar}$ by a simple induction on $n$.
Since $R$ is finitely-generated over a nonnegatively graded ring, its grading is bounded from below. Choose an integer $N$ such that $R_n = 0$ for every $n < N$. If $n<N$ and $x \in (\Phi_{\chi}X_{\hbar})_n$, then $x \in \bigcap_m \hbar^m\Phi_{\chi}X_{\hbar}$. Since $\hat{X}_{\hbar}$ is separated in the $\hat{I}_{\chi}$-adic topology (part (3) of Proposition \ref{prop:completionfacts}),
$$\bigcap_n \hbar^n \hat{X}_{\hbar} \subseteq \bigcap_n \hat{I}_{\chi}^n\hat{X}_{\hbar} = 0$$
Then it is clear from the construction of $\Gamma$ that
$$\bigcap_n \hbar^n \Phi_{\chi} X_{\hbar} = 0$$
So, $x=0$ and we see that the grading on $\Phi_{\chi}X_{\hbar}$ is (also) bounded from below. Now suppose $n$ is arbitrary and $y \in (\Phi_{\chi}X_{\hbar})_n$. Choose $m$ so large that $(\hbar^m\Phi_{\chi}X_{\hbar})_n=0$. Then $\Phi_{\chi}X_{\hbar} = R + \hbar^m \Phi_{\chi}X_{\hbar}$ implies $y \in R$. This proves that $\Phi_{\chi}X_{\hbar} = R$.
Now, $\Phi_{\chi}X_{\hbar}$ is a finitely-generated $(\mathfrak{g}_{\hbar},K)$-module and hence an object in $\mathrm{HC}(\mathfrak{g}_{\hbar},K)$. From the inclusion $\Phi_{\chi}X_{\hbar}/\hbar \Phi_{\chi}X_{\hbar}\subseteq \Phi_{\chi}M$ and the additivity of support, we have $\Phi_{\chi}X_{\hbar} \in \mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g}_{\hbar},K)$.
\end{proof}
Write $\mathcal{A}_{\overline{\mathcal{O}}}$ for the full image of the completion functor $\hat{\cdot}: \mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g}_{\hbar},K) \to \mathrm{HC}(\hat{\mathfrak{g}}_{\hbar},L)$.
\begin{proposition}\label{prop:adjunction}
The functors
\begin{align*}
\hat{\cdot}: &\mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g}_{\hbar},K) \to \mathcal{A}_{\overline{\mathcal{O}}}\\
\Gamma:&\mathcal{A}_{\overline{\mathcal{O}}} \to \mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g}_{\hbar},K)
\end{align*}
form an adjoint pair, with $\hat{\cdot}$ left adjoint to $\Gamma$.
\end{proposition}
\begin{proof}
Both functors factor through the intermediate category $\mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g}_{\hbar},K^1)$
\begin{center}
\begin{tikzcd}
\mathrm{HC}(\mathfrak{g}_{\hbar},K) \arrow[r, "\mathrm{res}",shift left=.5ex] & \mathrm{HC}(\mathfrak{g}_{\hbar},K^1) \arrow[r,"\hat{\cdot}"
,shift left=.5ex] \arrow[l,"\mathrm{Ind}",shift left=.5ex] & \mathcal{A}_{\overline{\mathcal{O}}} \arrow[l,"\Gamma^1_{\mathrm{lf}}",shift left=.5ex]
\end{tikzcd}
\end{center}
The functor $\mathrm{Ind}^K_{K^1}: \mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g}_{\hbar},K^1) \to \mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g}_{\hbar},K)$, as we have defined it (via functions and invariants), is right-adjoint to $\mathrm{res}$. There is an alternative definition of $\mathrm{Ind}^K_{K^1}$ via tensor products and co-invariants, given by
$$\mathrm{Ind}^K_{K^1} V = \mathbb{C}[K] \otimes_{\mathbb{C}[K^1]} V$$
and this second version of induction is left-adjoint to $\mathrm{res}$. Since $[K:K^1] < \infty$, these two versions coincide. Thus, it suffices to exhibit an adjunction between the two functors on the right.
Choose $X \in \mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g}_{\hbar},K^1)$ and $Y \in \mathcal{A}_{\overline{\mathcal{O}}}$. We want to define a natural bijection
$$\mathrm{Hom}_{\mathfrak{g}_{\hbar},K^1,\mathbb{C}^{\times}}(X,\Gamma^1_{\mathrm{lf}}Y) \cong \mathrm{Hom}_{\hat{\mathfrak{g}}_{\hbar},L,\mathbb{C}^{\times}}(\hat{X},Y)$$
Suppose $f \in \mathrm{Hom}_{\mathfrak{g}_{\hbar},K^1,\mathbb{C}^{\times}}(X,\Gamma^1_{\mathrm{lf}}Y)$. Compose $f$ with the inclusion $i:\Gamma^1_{\mathrm{lf}}Y \subset Y$ to obtain an $L$ and $\mathbb{C}^{\times}$-equivariant $R_{\hbar}U(\mathfrak{g})$-module homomorphism $i\circ f: X \to Y$. $Y$ is complete in the $\hat{I}_{\chi}$-adic topology (part (3) of Proposition \ref{prop:completionfacts}), so this homomorphism extends to a (unique) morphism in $\mathrm{HC}(\hat{\mathfrak{g}}_{\hbar},L)$
$$\widehat{i\circ f}: \hat{X} \to Y$$
On the other hand, if $g \in \mathrm{Hom}_{\hat{\mathfrak{g}}_{\hbar},L,\mathbb{C}^{\times}}(\hat{X},Y)$, the restriction $g|_X$ takes values in $\Gamma^1_{\mathrm{lf}}Y$. One easily checks that the assignments $f \mapsto \widehat{i \circ f}$ and $g \mapsto g|_X$ define mutually inverse bijections.
\end{proof}
In \cite{Losev2011}, Losev studies a functor closely related to $\Phi_{\chi}$ and describes some of its most important properties. The following proposition establishes some of the corresponding properties of $\Phi_{\chi}$.
\begin{proposition}\label{prop:allpropsoflocfunctor}
The functor
$$\Phi_{\chi}: \mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g}_{\hbar},K) \to \mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g}_{\hbar},K)$$
which is well-defined by Proposition \ref{prop:coherencepreserved}, has the following properties:
\begin{enumerate}
\item For every $X_{\hbar} \in \mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g}_{\hbar},K)$, there is a natural map
$$X_{\hbar} \to \Phi_{\chi}X_{\hbar}$$
and its completion
$$\hat{X}_{\hbar} \to \widehat{\Phi_{\chi}X_{\hbar}}$$
is an injection.
\item $\ker{\Phi_{\chi}} = \mathrm{HC}_{\partial \mathcal{O}}(\mathfrak{g}_{\hbar},K)$.
\item For every $X_{\hbar} \in \mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g}_{\hbar},K)$,
$$\mathrm{Ann}(X_{\hbar}) \subseteq \mathrm{Ann}(\Phi_{\chi}X_{\hbar})$$
\item Form the right derived functors $R^i\Phi_{\chi}: M(\mathfrak{g}_{\hbar},K) \to M(\mathfrak{g}_{\hbar},K)$ using Proposition \ref{prop:enoughinjectives}. Then if $X_{\hbar} \in \mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g}_{\hbar},K)$, the gradings on $R^i\Phi_{\chi}X_{\hbar}$ are bounded from below.
\end{enumerate}
\end{proposition}
\begin{proof}
\begin{enumerate}
\item This is a formal consequence of the adjunction $(\hat{\cdot},\Gamma)$ established in Proposition \ref{prop:adjunction}. The natural map $X_{\hbar} \to \Phi_{\chi}X_{\hbar}$ is the morphism in $\mathrm{Hom}_{\mathfrak{g}_{\hbar},K,\mathbb{C}^{\times}}(X_{\hbar},\Phi_{\chi}X_{\hbar})$ corresponding to the identity map $\mathrm{id} \in \mathrm{Hom}_{\hat{\mathfrak{g}}_{\hbar},L,\mathbb{C}^{\times}}(\hat{X}_{\hbar},\hat{X}_{\hbar})$. Its completion is a morphism
$$\hat{X}_{\hbar} \to \widehat{\Phi_{\chi}X_{\hbar}}$$
in $\mathrm{HC}(\hat{\mathfrak{g}}_{\hbar},L)$. On the other hand, the identity map $\mathrm{id} \in \mathrm{Hom}_{\mathfrak{g}_{\hbar},K,\mathbb{C}^{\times}}(\Phi_{\chi}X_{\hbar},\Phi_{\chi}X_{\hbar})$ corresponds to a natural map $\widehat{\Phi_{\chi}X_{\hbar}} \to \hat{X}_{\hbar}$ in $\mathrm{HC}(\hat{\mathfrak{g}}_{\hbar},L)$. Since all maps are natural, the composition
$$\hat{X}_{\hbar} \to \widehat{\Phi_{\chi}X_{\hbar}} \to \hat{X}_{\hbar}$$
is the identity. In particular, $\hat{X}_{\hbar} \to \widehat{\Phi_{\chi}X_{\hbar}}$ is an injection.
\item If $X_{\hbar} \in \mathrm{HC}_{\partial \mathcal{O}}(\mathfrak{g}_{\hbar},K)$, then $\Phi_{\chi}X_{\hbar} = 0$ by (one half of) part (2) of Corollary \ref{cor:completionfacts}. Conversely, if $X_{\hbar} \in \mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g}_{\hbar},K)$ and $\Phi_{\chi}X_{\hbar} =0$, then $\widehat{\Phi_{\chi}X_{\hbar}} = 0$ and hence $\hat{X}_{\hbar} = 0$ by the result of the previous part. Then $X_{\hbar} \in \mathrm{HC}_{\partial \mathcal{O}}(\mathfrak{g}_{\hbar},K)$ by (the other half of) part (2) of Corollary \ref{cor:completionfacts}.
\item Since $\hat{X}_{\hbar}$ is an inverse limit of quotients $X_{\hbar}/I_{\chi}^nX_{\hbar}$ (each annihilated by $\mathrm{Ann}(X_{\hbar})$), there is an obvious inclusion $\mathrm{Ann}(X_{\hbar}) \subseteq \mathrm{Ann}(\hat{X}_{\hbar})$. And since $\Gamma^1_{\mathrm{lf}}\hat{X}_{\hbar} \subseteq \hat{X}_{\hbar}$, we have $\mathrm{Ann}(\hat{X}_{\hbar}) \subseteq \mathrm{Ann}(\Gamma^1_{\mathrm{lf}}\hat{X}_{\hbar})$. Examining the formula for the $R_{\hbar}U(\mathfrak{g})$-action on $\mathrm{Ind}^K_{K^1}\Gamma^1_{\mathrm{lf}}\hat{X}_{\hbar}$, it is clear that $\mathrm{Ann}(\Gamma \hat{X}_{\hbar})$ is the largest $K$-invariant subspace of $\mathrm{Ann}(\Gamma^1_{\mathrm{lf}}\hat{X}_{\hbar})$. But $\mathrm{Ann}(X_{\hbar})$ is already $K$-invariant, so $\mathrm{Ann}(X_{\hbar}) \subseteq \mathrm{Ann}(\Phi_{\chi} X_{\hbar})$.
\item Consider the abelian category $M^{\geq 0}(\mathfrak{g}_{\hbar},K)$ of $(\mathfrak{g}_{\hbar},K)$-modules with nonnegative gradings. In the proof of Proposition \ref{prop:enoughinjectives}, we defined a functor $R: R_{\hbar}U(\mathfrak{g})-\mathrm{mod} \to M(\mathfrak{g}_{\hbar},K)$ right adjoint to the forgetful functor. $R$ was defined by
$$R(B) = \bigoplus_{n \in \mathbb{Z}}R^1(B), \quad B \in R_{\hbar}U(\mathfrak{g})-\mathrm{mod}$$
for $R^1B$ a certain $K$-equivariant $R_{\hbar}U(\mathfrak{g})$-module produced canonically from $B$. We could have defined
$$R(B) = \bigoplus_{n \geq 0}R^1(B), \quad B \in R_{\hbar}U(\mathfrak{g})-\mathrm{mod}$$
This is still a $(\mathfrak{g}_{\hbar},K)$-module since $R_{\hbar}U(\mathfrak{g})$ is nonnegatively graded, and the resulting functor $R: R_{\hbar}U(\mathfrak{g})-\mathrm{mod} \to M^{\geq 0}(\mathfrak{g}_{\hbar},K)$ is right adjoint to the corresponding forgetful functor. Then the general fact cited in the proof of Proposition \ref{prop:enoughinjectives} implies that $M^{\geq 0}(\mathfrak{g}_{\hbar},K)$ has enough injectives.
Now, consider the category $M^b(\mathfrak{g}_{\hbar},K)$ of $(\mathfrak{g}_{\hbar},K)$-modules with gradings bounded from below. If $X_{\hbar} \in M^b(\mathfrak{g}_{\hbar},K)$, we can shift the grading on $X_{\hbar}$ by an appropriate integer $N$ to obtain an object $X_{\hbar}^N \in M^{\geq 0}(\mathfrak{g}_{\hbar},K)$. By the result of the previous paragraph, $X_{\hbar}^N$ embeds into an injective object $I$ of $M^{\geq 0}(\mathfrak{g}_{\hbar},K)$. Since the shift $I^{-N} \in M^b(\mathfrak{g}_{\hbar},K)$ remains injective, $X_{\hbar} \hookrightarrow I^{-N}$ embeds $X_{\hbar}$ into an injective object. Hence, $M^b(\mathfrak{g}_{\hbar},K)$ has enough injectives as well.
Let $X_{\hbar} \in \mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g}_{\hbar},K)$. $X_{\hbar}$ is finitely-generated over a nonnegatively-graded ring and, therefore, an object of $M^b(\mathfrak{g}_{\hbar},K)$. Choose an injective resolution $0 \to X_{\hbar} \to I^{\bullet}$ in $M^b(\mathfrak{g}_{\hbar},K)$. The result follows from the standard construction of $R^i\Phi_{\chi}X_{\hbar}$.
\end{enumerate}
\end{proof}
From these properties, we deduce
\begin{proposition}\label{prop:phiislocalization}
$\Phi_{\chi}$ is a localization functor for the subcategory $\mathrm{HC}_{\partial \mathcal{O}}(\mathfrak{g}_{\hbar},K) \subset \mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g}_{\hbar},K)$.
\end{proposition}
\begin{proof}
We will apply the general criterion of Proposition \ref{prop:criterionquotient}. We proved in Proposition \ref{prop:adjunction} that the functors
\begin{center}
\begin{tikzcd}
\mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g}_{\hbar},K) \arrow[r, shift left=.5ex,"\hat{\cdot}"] & \mathcal{A}_{\overline{\mathcal{O}}} \arrow[l, shift left=.5ex,"\Gamma"]
\end{tikzcd}
\end{center}
form an adjoint pair and in part $(2)$ of Proposition \ref{prop:allpropsoflocfunctor} that $\ker{\Phi_{\chi}} = \mathrm{HC}_{\partial \mathcal{O}}(\mathfrak{g}_{\hbar},K)$. It remains to show that $\Gamma: \mathcal{A}_{\overline{\mathcal{O}}} \to \mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g}_{\hbar},K)$ is fully faithful.
Choose objects $\hat{X}_{\hbar},\hat{Y}_{\hbar} \in \mathcal{A}_{\overline{\mathcal{O}}}$. Suppose $f \in \mathrm{Hom}_{\mathfrak{g}_{\hbar},K,\mathbb{C}^{\times}}(\Gamma \hat{X}_{\hbar},\Gamma \hat{Y}_{\hbar})$. Compose $f$ with the natural map $\Gamma \hat{Y}_{\hbar} \to \hat{Y}_{\hbar}$ (evaluation at the identity, followed by the inclusion $\Gamma^1_{\mathrm{lf}}\hat{Y}_{\hbar} \subseteq \hat{Y}_{\hbar}$) to obtain an $L$ and $\mathbb{C}^{\times}$-equivariant $R_{\hbar}U(\mathfrak{g})$-module homomorphism $\Gamma \hat{X}_{\hbar} \to \hat{Y}_{\hbar}$. Since $\hat{Y}_{\hbar}$ is complete in the $\hat{I}_{\chi}$-adic topology (Proposition \ref{prop:completionfacts}), this homomorphism extends to a unique morphism $\tilde{f} \in \mathrm{Hom}_{\hat{\mathfrak{g}}_{\hbar}, L,\mathbb{C}^{\times}}(\widehat{\Phi_{\chi}X_{\hbar}},\hat{Y}_{\hbar})$ making the following diagram commute
\begin{center}
\begin{tikzcd}
\Gamma \hat{X}_{\hbar} \arrow[r,"f"] \arrow[d] & \Gamma \hat{Y}_{\hbar} \arrow[d] \\
\widehat{\Phi_{\chi}X_{\hbar}} \arrow[r,dashrightarrow,"\exists ! \tilde{f}"] & \hat{Y}_{\hbar}
\end{tikzcd}
\end{center}
The restriction $\tilde{f}|_{\hat{X}_{\hbar}}$ is a morphism in $\mathrm{Hom}_{\hat{\mathfrak{g}}_{\hbar},L,\mathbb{C}^{\times}}(\hat{X}_{\hbar},\hat{Y}_{\hbar})$, and
the correspondence $f \mapsto \tilde{f}|_{\hat{X}_{\hbar}}$ defines a map $\mathrm{Hom}(\Gamma \hat{X}_{\hbar}, \Gamma \hat{Y}_{\hbar}) \to \mathrm{Hom}(\hat{X}_{\hbar},\hat{Y}_{\hbar})$ which is manifestly inverse to the map on morphisms induced by $\Gamma$.
\end{proof}
From Proposition \ref{prop:phiislocalization} and the general properties of localization (Proposition \ref{prop:generalpropsoflocalization}), we get a number of additional properties more or less for free:
\begin{corollary}
$\Phi_{\chi}$ has the following additional properties
\begin{enumerate}
\item For every $X_{\hbar} \in \mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g}_{\hbar},K)$, the kernel and cokernel of the natural map
$$X_{\hbar} \to \Phi_{\chi} X_{\hbar}$$
are objects in $\mathrm{HC}_{\partial \mathcal{O}}(\mathfrak{g}_{\hbar},K)$
\item $X_{\hbar} \in \mathrm{Im}\Phi_{\chi}$ if and only if
$$\mathrm{Hom}(\mathrm{HC}_{\partial \mathcal{O}}(\mathfrak{g}_{\hbar},K),X_{\hbar})=\mathrm{Ext}^1(X_{\hbar},\mathrm{HC}_{\partial \mathcal{O}}(\mathfrak{g}_{\hbar},K))=0$$
\item $\hat{\cdot} \circ \Gamma: \mathcal{A}_{\overline{\mathcal{O}}} \to \mathcal{A}_{\overline{\mathcal{O}}}$ is (naturally isomorphic to) the identity functor. In particular, the injection $\hat{X}_{\hbar} \hookrightarrow \widehat{\Phi_{\chi}X_{\hbar}}$ from part $(1)$ of Proposition \ref{prop:allpropsoflocfunctor} is actually an isomorphism.
\end{enumerate}
\end{corollary}
If we choose a different representative $\chi' \in \mathcal{O}$, we get a different functor $\Phi_{\chi'}:\mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g}_{\hbar},K) \to \mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g}_{\hbar},K)$. This functor enjoys all of the properties enumerated above. In particular, it is a localization functor for the subcategory $\mathrm{HC}_{\partial \mathcal{O}}(\mathfrak{g}_{\hbar},K)$. But localization functors are defined by a universal property and are consequently unique up to natural isomorphism. This proves
\begin{proposition}\label{prop:welldefined}
If $\chi,\chi' \in \mathcal{O}$, there is a natural isomorphism
$$\Phi_{\chi} \cong \Phi_{\chi'}$$
We can therefore write $\Phi_{\mathcal{O}}$ without ambiguity.
\end{proposition}
Recall the embedding
$$R_{\hbar}:\mathrm{HC}^{\mathrm{filt}}_{\overline{\mathcal{O}}}(\mathfrak{g},K) \hookrightarrow \mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g}_{\hbar},K)$$
from Proposition \ref{prop:functors}. Setting $\hbar=1$ defines a right inverse to $R_{\hbar}$
$$\hbar=1: \mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g}_{\hbar},K) \to \mathrm{HC}^{\mathrm{filt}}_{\overline{\mathcal{O}}}(\mathfrak{g},K)$$
which restricts to an equivalence on the subcategory $\mathrm{HC}^{\mathrm{tf}}_{\overline{\mathcal{O}}}(\mathfrak{g}_{\hbar},K)$ of $\hbar$-torsion free $(\mathfrak{g}_{\hbar},K)$-modules. The condition of being $\hbar$-torsion free means that
$$0 \to X_{\hbar} \overset{\hbar^n}{\to} X_{\hbar} \quad \text{is exact} \ \forall n \in \mathbb{N}$$
Since $\Phi_{\mathcal{O}}$ is left-exact, this condition is preserved under application of $\Phi_{\mathcal{O}}$. So $\Phi_{\mathcal{O}}$ preserves the subcategory $\mathrm{HC}^{\mathrm{tf}}_{\overline{\mathcal{O}}}(\mathfrak{g}_{\hbar},K)$.
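To make the dictionary concrete (a sketch, assuming $R_{\hbar}$ is the usual Rees construction, as the notation suggests): a filtered module $(X,\mathcal{F})$ is sent to
$$R_{\hbar}X = \bigoplus_{n} \hbar^n\mathcal{F}_nX \subseteq X[\hbar]$$
which is visibly $\hbar$-torsion free, and the quotient $R_{\hbar}X/(\hbar-1)R_{\hbar}X$ is naturally isomorphic to $X$, with the filtration $\mathcal{F}$ recovered from the images of the graded pieces. This is the sense in which $\hbar=1$ inverts $R_{\hbar}$ on $\hbar$-torsion free modules.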
\begin{proposition}\label{prop:descenttoHC}
$\Phi_{\mathcal{O}}$ descends to a well-defined functor on $\mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g},K)$. More precisely, there is a unique functor
$$\overline{\Phi}_{\mathcal{O}}: \mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g},K) \to \mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g},K)$$
making the following diagram commute
\begin{center}
\begin{tikzcd}
\mathrm{HC}^{\mathrm{tf}}_{\overline{\mathcal{O}}}(\mathfrak{g}_{\hbar},K) \arrow[r,"\Phi_{\mathcal{O}}"] \arrow[d, "\sim","\hbar=1"] & \mathrm{HC}^{\mathrm{tf}}_{\overline{\mathcal{O}}}(\mathfrak{g}_{\hbar},K) \arrow[d,"\sim","\hbar=1"] \\
\mathrm{HC}^{\mathrm{filt}}_{\overline{\mathcal{O}}} (\mathfrak{g},K) \arrow[d,"\mathrm{forget}"] \arrow[u,"R_{\hbar}",shift left=1ex] & \mathrm{HC}^{\mathrm{filt}}_{\overline{\mathcal{O}}} (\mathfrak{g},K) \arrow[d,"\mathrm{forget}"] \arrow[u,"R_{\hbar}",shift left=1ex]\\
\mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g},K) \arrow[r,"\overline{\Phi}_{\mathcal{O}}"] &\mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g},K)
\end{tikzcd}
\end{center}
\end{proposition}
\begin{proof}
First, we will describe how we would \emph{like} to define $\overline{\Phi}_{\mathcal{O}}$. Then we will prove that this definition makes sense. Define the functor
$$P= \mathrm{forget} \circ (\hbar=1) \circ \Phi_{\mathcal{O}} \circ R_{\hbar}: \mathrm{HC}^{\mathrm{filt}}_{\overline{\mathcal{O}}}(\mathfrak{g},K) \to \mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g},K)$$
For an object $X \in \mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g},K)$, we would like to define
\begin{equation}\label{eqn:objects}
\overline{\Phi}_{\mathcal{O}}X = P(X,\mathcal{F})
\end{equation}
for any choice of good filtration $\mathcal{F}$. For a morphism $f: X \to Y$ in $\mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g},K)$ we would like to define
\begin{equation}\label{eqn:morphisms}
\overline{\Phi}_{\mathcal{O}}f = P(f: (X,\mathcal{F}) \to (Y,\mathcal{G}))
\end{equation}
for any choice of good filtrations $\mathcal{F}$ on $X$ and $\mathcal{G}$ on $Y$ compatible with $f$. There are several things to prove.
\textbf{Objects.} If $\mathcal{F}$ is a good filtration on $X$ and $s$ is an integer, write $\mathcal{F}^s$ for the filtration defined by $\mathcal{F}^s_iX=\mathcal{F}_{s+i}X$. $\mathcal{F}^s$ is good. If $s \geq t$ there is an identity map
$$\mathrm{id}_{\mathcal{F}^s,\mathcal{F}^t}:(X,\mathcal{F}^s) \to (X,\mathcal{F}^t)$$
and it is clear from the construction of $\Phi_{\mathcal{O}}$ that $P(\mathrm{id}_{\mathcal{F}^s,\mathcal{F}^t})$ is the identity.
Now let $\mathcal{F}$ and $\mathcal{G}$ be arbitrary good filtrations on $X$. There are integers $r \leq s \leq t \leq w$ such that for every integer $i$
\begin{equation}\label{eqn:goodfiltrations}
\mathcal{F}_{i+r}X \subseteq \mathcal{G}_{i+s}X \subseteq \mathcal{F}_{i+t}X \subseteq \mathcal{G}_{i+w}X
\end{equation}
For a proof of this simple fact, see \cite{Vogan1991}, Proposition 2.2. So the identity map defines morphisms
\begin{align*}
&\mathrm{id}_{\mathcal{F}^r, \mathcal{G}^s}: (X,\mathcal{F}^r) \to (X,\mathcal{G}^s)\\
&\mathrm{id}_{\mathcal{G}^s, \mathcal{F}^t}: (X,\mathcal{G}^s) \to (X,\mathcal{F}^t)\\
&\mathrm{id}_{\mathcal{F}^t, \mathcal{G}^w}: (X,\mathcal{F}^t) \to (X,\mathcal{G}^w)
\end{align*}
in $\mathrm{HC}^{\mathrm{filt}}_{\overline{\mathcal{O}}}(\mathfrak{g},K)$. From the previous paragraph and the functoriality of $P$ we have
\begin{align*}
&P(\mathrm{id}_{\mathcal{G}^s, \mathcal{F}^t}) \circ P(\mathrm{id}_{\mathcal{F}^r, \mathcal{G}^s}) = P(\mathrm{id}_{\mathcal{G}^s, \mathcal{F}^t} \circ \mathrm{id}_{\mathcal{F}^r, \mathcal{G}^s}) = P(\mathrm{id}_{\mathcal{F}^r,\mathcal{F}^t}) = \mathrm{id}\\
&P(\mathrm{id}_{\mathcal{F}^t, \mathcal{G}^w}) \circ P(\mathrm{id}_{\mathcal{G}^s, \mathcal{F}^t}) = P(\mathrm{id}_{\mathcal{F}^t, \mathcal{G}^w} \circ \mathrm{id}_{\mathcal{G}^s, \mathcal{F}^t}) = P(\mathrm{id}_{\mathcal{G}^s,\mathcal{G}^w}) = \mathrm{id}
\end{align*}
Hence, $P(\mathrm{id}_{\mathcal{G}^s,\mathcal{F}^t}): P(X,\mathcal{G}^s) \to P(X,\mathcal{F}^t)$ is an isomorphism. But $P(X,\mathcal{F}^t) = P(X,\mathcal{F})$ and $P(X,\mathcal{G}^s) = P(X,\mathcal{G})$. So in fact $P(X,\mathcal{F}) \cong P(X,\mathcal{G})$. Note that this isomorphism is independent of $r,s,t$, and $w$. Thus, the isomorphisms identifying $P(X,\mathcal{F})$ and $P(X,\mathcal{G})$ are well-defined.
\textbf{Morphisms.} Suppose $f: X \to Y$ is a morphism in $\mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g},K)$. Choose two different lifts $f_{\mathcal{F},\mathcal{G}}: (X,\mathcal{F}) \to (Y,\mathcal{G})$ and $f_{\mathcal{F}',\mathcal{G}'}: (X,\mathcal{F}') \to (Y,\mathcal{G}')$ to $\mathrm{HC}^{\mathrm{filt}}_{\overline{\mathcal{O}}}(\mathfrak{g},K)$. We hope to show that
$$P(f_{\mathcal{F},\mathcal{G}}) = P(f_{\mathcal{F}',\mathcal{G}'})$$
up to the isomorphisms $P(X,\mathcal{F}) \cong P(X,\mathcal{F}')$ and $P(Y,\mathcal{G}) \cong P(Y,\mathcal{G}')$ constructed above.
From (\ref{eqn:goodfiltrations}), there are integers $r$ and $s$ such that the identity maps on $X$ and $Y$ induce morphisms
\begin{align*}
&\mathrm{id}_{\mathcal{F}^r,\mathcal{F}}: (X,\mathcal{F}^r) \to (X,\mathcal{F})\\
&\mathrm{id}_{\mathcal{F}^r,\mathcal{F}'}: (X,\mathcal{F}^r) \to (X,\mathcal{F}')\\
&\mathrm{id}_{\mathcal{G},\mathcal{G}^s}: (Y,\mathcal{G}) \to (Y,\mathcal{G}^s)\\
&\mathrm{id}_{\mathcal{G}',\mathcal{G}^s}: (Y,\mathcal{G}') \to (Y,\mathcal{G}^s)
\end{align*}
$P(\mathrm{id}_{\mathcal{F}^r,\mathcal{F}})$ and $P(\mathrm{id}_{\mathcal{G},\mathcal{G}^s})$ are the identity maps (on $X$ and $Y$, respectively), and $P(\mathrm{id}_{\mathcal{F}^r,\mathcal{F}'})$ and $P(\mathrm{id}_{\mathcal{G}',\mathcal{G}^s})$ are isomorphisms. The isomorphisms $P(X,\mathcal{F}) \cong P(X,\mathcal{F}')$ and $P(Y,\mathcal{G}) \cong P(Y,\mathcal{G}')$ obtained from these maps coincide with the isomorphisms constructed above. By the functoriality of $P$, $P(f_{\mathcal{F},\mathcal{G}}) = P(f_{\mathcal{F}',\mathcal{G'}})$ up to these isomorphisms.
\end{proof}
As one might expect, $\overline{\Phi}_{\mathcal{O}}$ inherits all of the interesting properties of $\Phi_{\mathcal{O}}$. Combining Proposition \ref{prop:allpropsoflocfunctor} with Proposition \ref{prop:descenttoHC}, we easily deduce
\begin{proposition}\label{prop:allpropertiesofdescent}
The functor
$$\overline{\Phi}_{\mathcal{O}}: \mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g},K) \to \mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g},K)$$
which is well-defined by Proposition \ref{prop:descenttoHC}, has the following properties:
\begin{enumerate}
\item $\overline{\Phi}_{\mathcal{O}}$ is left exact.
\item $\ker{\overline{\Phi}_{\mathcal{O}}} = \mathrm{HC}_{\partial \mathcal{O}}(\mathfrak{g},K)$
\item $\overline{\Phi}_{\mathcal{O}}$ is a localization functor for the subcategory $\mathrm{HC}_{\partial \mathcal{O}}(\mathfrak{g},K) \subset \mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g},K)$
\item There is a natural transformation $\mathrm{id} \to \overline{\Phi}_{\mathcal{O}}$
\item For $X \in \mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g},K)$, $X \in \mathrm{Im}\,\overline{\Phi}_{\mathcal{O}}$ if and only if
$$\mathrm{Hom}(\mathrm{HC}_{\partial \mathcal{O}}(\mathfrak{g},K),X)=\mathrm{Ext}^1(X,\mathrm{HC}_{\partial \mathcal{O}}(\mathfrak{g},K))=0$$
\item For every $X \in \mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g},K)$,
$$\mathrm{Ann}(X) \subseteq \mathrm{Ann}(\overline{\Phi}_{\mathcal{O}}X)$$
In particular, $\overline{\Phi}_{\mathcal{O}}$ preserves central character.
\end{enumerate}
\end{proposition}
In short, $\overline{\Phi}_{\mathcal{O}}$ is a left exact endofunctor of $\mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g},K)$ which kills everything on the boundary. It is (in a precise sense) a quantum analogue of the classical localization functor $j_*j^*: \operatorname{Coh}^{K,\mathbb{C}^{\times}}(\overline{\mathcal{O}}) \to \operatorname{Coh}^{K,\mathbb{C}^{\times}}(\overline{\mathcal{O}})$.
While we have proved that the functors $\Phi_{\mathcal{O}}$ and $\overline{\Phi}_{\mathcal{O}}$ are localization functors (in the sense of Section \ref{sec:locab}), we have avoided any explicit description of the corresponding quotient categories. Proposition \ref{prop:phiislocalization} exhibits $\mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g}_{\hbar},K)/\mathrm{HC}_{\partial\mathcal{O}}(\mathfrak{g}_{\hbar},K)$ only as a full subcategory of $\mathrm{HC}(\hat{\mathfrak{g}}_{\hbar},L)$. In fact, an explicit description of this subcategory is possible through the theory of $\mathcal{W}$-algebras. In \cite{Losev2011}, Losev introduces the notion of a Harish-Chandra $\mathcal{W}$-bimodule. There is a related notion of a Harish-Chandra $(\mathcal{W},L)$-module. Losev's argument can be generalized to identify the quotient category $\mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g}_{\hbar},K)/\mathrm{HC}_{\partial\mathcal{O}}(\mathfrak{g}_{\hbar},K)$ with a certain category of Harish-Chandra $(\mathcal{W},L)$-modules. The argument is basically a recapitulation of \cite{Losev2011} in this slightly modified setting.
We pause to suggest an alternative characterization of $\overline{\Phi}_{\mathcal{O}}$. If $\lambda \in \mathfrak{h}^*/W$ is a regular infinitesimal character, there is an equivalence (due to Beilinson-Bernstein) between the category $\mathrm{HC}^{\lambda}(\mathfrak{g},K)$ of Harish-Chandra modules of infinitesimal character $\lambda$ and the category $\mathcal{D}_{\lambda}^K(\mathcal{B})$ of $K$-equivariant coherent $\mathcal{D}_{\lambda}$-modules on the flag variety $\mathcal{B}$. $K$ acts on $\mathcal{B}$ with finitely many orbits. Each $K$-orbit $Y \subset \mathcal{B}$ has a conormal bundle $T^*_Y\mathcal{B} \subset T^*\mathcal{B}$. The preimage of $\mathcal{N}^*_{\theta}$ under the moment map $T^*\mathcal{B}\to \mathcal{N}^*$ is the union of $T^*_Y\mathcal{B}$ for every $K$-orbit $Y$.
There is a notion of \emph{singular support} for any coherent $\mathcal{D}_{\lambda}$-module. It is defined in much the same way (via good filtrations and $\operatorname{gr}$) as the associated variety of a Harish-Chandra module. If $\mathcal{M} \in \mathcal{D}_{\lambda}(\mathcal{B})$, the singular support $\mathrm{SS}(\mathcal{M})$ is a closed, conical subset of $T^*\mathcal{B}$. If $\mathcal{M} \in \mathcal{D}^K_{\lambda}(\mathcal{B})$, $\mathrm{SS}(\mathcal{M})$ is a union of conormal bundles $T^*_Y\mathcal{B}$. For any $\mathcal{O} \in \mathcal{N}^*_{\theta}/K$, we can consider the full subcategory $\mathcal{D}^K_{\lambda,\mu^{-1}(\overline{\mathcal{O}})}(\mathcal{B}) \subset \mathcal{D}^K_{\lambda}(\mathcal{B})$ of $K$-equivariant $\mathcal{D}_{\lambda}$-modules with $\mathrm{SS}(\mathcal{M}) \subseteq \mu^{-1}(\overline{\mathcal{O}})$. There is an equivalence (restricted from the equivalence above)
\begin{equation}\label{eqn:BBequivalence} \mathrm{HC}^{\lambda}_{\overline{\mathcal{O}}}(\mathfrak{g},K) \cong \mathcal{D}^K_{\lambda,\mu^{-1}(\overline{\mathcal{O}})}(\mathcal{B})\end{equation}
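The two measures of size match up under (\ref{eqn:BBequivalence}): if $X$ corresponds to $\mathcal{M}$, then (at least for regular $\lambda$) one has
$$\operatorname{AV}(X) = \mu(\mathrm{SS}(\mathcal{M}))$$
so the condition $\mathrm{SS}(\mathcal{M}) \subseteq \mu^{-1}(\overline{\mathcal{O}})$ on the right corresponds precisely to the condition $\operatorname{AV}(X) \subseteq \overline{\mathcal{O}}$ on the left.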
On one level, objects of $\mathcal{D}_{\lambda}(\mathcal{B})$ are sheaves on the flag variety. They are also (in a slightly different sense) sheaves on $T^*\mathcal{B}$. Here is the idea. Fix $\mathcal{M} \in \mathcal{D}^K_{\lambda}(\mathcal{B})$. If $U \subset T^*\mathcal{B}$ is an open \emph{conical} subset of $T^*\mathcal{B}$, its complement $Z = T^*\mathcal{B} \setminus U$ is defined by an ideal sheaf $\mathcal{I}_Z \subset \mathcal{O}_{T^*\mathcal{B}}$ of homogeneous polynomials. The sheaf of (twisted) differential operators $\mathcal{D}_{\lambda}$ has a standard filtration (by degree) and $\operatorname{gr} \mathcal{D}_{\lambda} \cong \mathcal{O}_{T^*\mathcal{B}}$. On each open affine $A \subset \mathcal{B}$, choose homogeneous generators $f_1,...,f_n$ for $\mathcal{I}_Z$ and arbitrary lifts $\tilde{f}_1,...,\tilde{f}_n$ to $\mathcal{D}_{\lambda}$. Form the localization $\mathcal{M}_{\tilde{f}_1,...,\tilde{f}_n}$. Since $\mathcal{D}_{\lambda}$ is a sheaf of \emph{non-commutative} algebras, some care is required (in particular, there are Ore conditions to check. See ?? for details). One can verify that the localization $\mathcal{M}_{\tilde{f}_1,...,\tilde{f}_n}$ depends only on $\mathcal{I}_Z$ (not on the generators or lifts) and that these localizations patch together to form a $K$-equivariant $\mathcal{D}_{\lambda}$-module $\mathcal{M}|_U$ on $\mathcal{B}$. Conceptually, this sheaf is the restriction of $\mathcal{M}$ to the open subset $U$ (although not in the ordinary sense of quasi-coherent sheaves, since $U$ is not a subset of $\mathcal{B}$).
Consider the special case $U = T^*\mathcal{B} \setminus \mu^{-1}(\partial \mathcal{O})$. Under the codimension condition on $\partial \mathcal{O}$, $\mathcal{M}|_U$ is probably a coherent $\mathcal{D}_{\lambda}$-module and hence an object of $\mathcal{D}_{\lambda, \mu^{-1}(\overline{\mathcal{O}})}^K(\mathcal{B})$. The assignment $\mathcal{M} \mapsto \mathcal{M}|_U$ defines a left exact endo-functor of $\mathcal{D}^K_{\lambda,\mu^{-1}(\overline{\mathcal{O}})}(\mathcal{B})$ which determines, by means of the equivalence, a left-exact endo-functor of $\mathrm{HC}^{\lambda}_{\overline{\mathcal{O}}}(\mathfrak{g},K)$. We conjecture that this functor is naturally isomorphic to $\overline{\Phi}_{\mathcal{O}}$. If true, the proof should be formal. Since $\overline{\Phi}_{\mathcal{O}}$ is a localization functor, and localization functors are unique, one should try to demonstrate that the second functor is a localization functor for the same subcategory as the first.
Even if true, this alternative characterization offers almost no additional information in the setting we consider. As Vogan and Barbasch point out in \cite{BarbaschVogan1985}, the infinitesimal character of a unipotent Harish-Chandra module is almost always singular.
As an application of the ideas in this section, we conclude with an alternative proof of (a slightly weaker version of) Theorem \ref{thm:irredassvar}.
\begin{proposition}\label{prop:altirredassvar}
Let $X$ be an irreducible $(\mathfrak{g},K)$-module. Suppose $\mathcal{O} \in \mathcal{N}^*_{\theta}/K$ is open in $\mathrm{AV}(X)$ and $\operatorname{codim}{(\partial \mathcal{O}, \overline{\mathcal{O}})} \geq 2$. Then $\operatorname{AV}(X) = \overline{\mathcal{O}}$.
\end{proposition}
\begin{proof}
In Proposition \ref{prop:coherencepreserved}, we proved that $\Phi_{\mathcal{O}}$ restricts to an endofunctor of $\mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g}_{\hbar},K)$. The key input was Proposition \ref{prop:geomsignificance} combined with Proposition \ref{thm:finitegeneration}. In Proposition \ref{prop:geomsignificance}, we assumed that $j: U \subset X$ is the inclusion of an open and dense subset. If we assume only that $U$ is open and dense in a component, we can prove a similar result (by exactly the same methods). Namely, we can exhibit a natural isomorphism
$$\Gamma M \cong j_*j^*M$$
for every $ M \in \operatorname{Coh}^{K,\mathbb{C}^{\times}}(\overline{\mathcal{O}})$. The sheaf $j_*j^*M$ is coherent (by Proposition \ref{thm:finitegeneration}) and supported in $\overline{U}$. Repeating the proof of Proposition \ref{prop:coherencepreserved}, we see that $\Phi_{\mathcal{O}}$ restricts to a functor
$$\Phi_{\mathcal{O}}: \mathrm{HC}_{\operatorname{AV}(X)}(\mathfrak{g}_{\hbar},K) \to \mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g}_{\hbar},K) $$
which descends to a functor
$$\overline{\Phi}_{\mathcal{O}}: \mathrm{HC}_{\mathrm{AV(X)}}(\mathfrak{g},K) \to \mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g},K)$$
by a version of Proposition \ref{prop:descenttoHC}. From part (4) of Proposition \ref{prop:allpropertiesofdescent}, there is a natural morphism of Harish-Chandra modules
$$X \to \overline{\Phi}_{\mathcal{O}}X$$
which is injective since $X$ is irreducible. Then by the additivity of support, $\operatorname{AV}(X) \subseteq \operatorname{AV}(\overline{\Phi}_{\mathcal{O}}X) \subseteq \overline{\mathcal{O}}$.
\end{proof}
\section{A Vanishing Theorem for Nilpotent Orbits}\label{sec:cohomologyvanishing}
Retain the notation of the previous section. Let $\mathcal{E} \to \mathcal{O}$ be an admissible vector bundle in the sense of Definition \ref{def:admissibility1}. Our goal in this section is to provide sufficient conditions on $\mathcal{O}$ (and possibly $\mathcal{E}$) guaranteeing $H^1(\mathcal{O},\mathcal{E})=0$. The significance of this condition will become apparent in Section \ref{sec:mainthm}. Our proofs will rely centrally on some ideas from algebraic geometry---cohomology with support, Cohen-Macaulay sheaves, and rational singularities---which may be unfamiliar to the representation theorists among us. We refer these readers to the references for detailed definitions and commentary. One of the objects we will be working with is the structure sheaf of $\mathcal{O}$. To avoid the obvious notational conflict with the orbit $\mathcal{O}$, we will use the calligraphic $\mathcal{S}$ for structure sheaves.
Our main result is the following:
\begin{proposition}\label{prop:complexvanishing}
Suppose $G_{\mathbb{R}}$ is complex. Let $d = \operatorname{codim}(\partial \mathcal{O}, \overline{\mathcal{O}})$. Then
$$H^i(\mathcal{O},\mathcal{E}) = 0, \qquad 0 <i < d-1$$
\end{proposition}
In particular, if $G_{\mathbb{R}}$ is complex and $\operatorname{codim}(\partial \mathcal{O},\overline{\mathcal{O}}) \geq 4$, then $H^1(\mathcal{O},\mathcal{E})=0$ for every admissible $\mathcal{E}\to \mathcal{O}$. Proposition \ref{prop:complexvanishing} will follow as a corollary from the following general lemma.
\begin{lemma}\label{lemma:cm}
Let $X$ be an affine variety and $U \subset X$ an open subset with complement $Z = X \setminus U$. Let $d = \operatorname{codim}(Z,X)$. If $M \in \operatorname{QCoh}(X)$ is Cohen-Macaulay, then
$$H^i(U, M|_U) = 0, \qquad 0<i<d-1$$
\end{lemma}
\begin{proof}
Let $H^i_Z(X,M)$ denote the cohomology of $X$ with support in $Z$. The groups $H^i_Z(X,M), H^j(X,M)$, and $H^k(U,M|_U)$ are related by a long exact sequence
\begin{equation}
0 \to H_Z^0(X, M) \to H^0(X, M) \to H^0(U, M|_U) \to ...
\end{equation}
See, e.g., Theorem 9.4 in \cite{milne}. Since $X$ is affine, $H^i(X, M) = 0$ for $i >0$. This, together with the exact sequence above, produces a sequence of isomorphisms
\begin{equation}\label{isoms}
H^i(U, M|_U) \cong H_Z^{i+1}(X,M), \qquad i \geq 1
\end{equation}
The vanishing behavior of the cohomology groups $H_Z^i(X, M)$ is controlled by $\mathrm{depth}_Z(M)$. This is defined to be the length of the longest $M$-regular sequence of functions in the ideal defining $Z$. We have in general (without hypotheses on $X$ or on $M$)
\begin{equation}\label{vanishing}
H_Z^i(X,M) = 0, \ i<\mathrm{depth}_Z(M)
\end{equation}
See, e.g., Theorem 5.8 in \cite{Huneke2007}. And for $M$ Cohen-Macaulay
\begin{equation} \label{depthisd}
\mathrm{depth}_Z(M) = d
\end{equation}
See, e.g., Chapter 18 in \cite{Eisenbud1995}. Combining equations \ref{isoms}, \ref{vanishing}, and \ref{depthisd} proves the result.
\end{proof}
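As a sanity check, consider the simplest example (standard, and not needed in the sequel): $X = \mathbb{A}^d$, $Z = \{0\}$, and $M = \mathcal{O}_X$, which is Cohen-Macaulay. The lemma recovers the classical computation for punctured affine space
$$H^i(\mathbb{A}^d \setminus \{0\}, \mathcal{O}) = 0, \qquad 0 < i < d-1$$
while $H^{d-1}(\mathbb{A}^d \setminus \{0\}, \mathcal{O}) \neq 0$ (it is spanned by the monomials $x_1^{-a_1}\cdots x_d^{-a_d}$ with all $a_i > 0$), so the vanishing range in the lemma is sharp.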
Our application of Lemma \ref{lemma:cm} will be somewhat indirect. We will need to introduce some auxiliary varieties. Let $p: \tilde{\mathcal{O}} \to \mathcal{O}$ be the universal $K$-equivariant cover. As homogeneous spaces for $K$, $\mathcal{O} = K/K^{\chi}$ and $\tilde{\mathcal{O}} = K/(K^{\chi})^0$. The normalization of $\overline{\mathcal{O}}$ is an affine variety $N(\mathcal{O})$ with $K$-action, a $K$-equivariant inclusion $\mathcal{O} \subset N(\mathcal{O})$, and a finite, $K$-equivariant surjection $N(\mathcal{O}) \to \overline{\mathcal{O}}$. If $\operatorname{codim}(\partial \mathcal{O},\overline{\mathcal{O}}) \geq 2$, then $N(\mathcal{O})$ is the affinization of $\mathcal{O}$.
There is a naturally defined variety $N(\tilde{\mathcal{O}})$ that has the same relationship to $\tilde{\mathcal{O}}$ that $N(\mathcal{O})$ has to $\mathcal{O}$. It is defined as the normalization of $\overline{\mathcal{O}}$ in the function field of $\tilde{\mathcal{O}}$. In \cite{BrylinskiKostant1994}, Brylinski and Kostant call this variety the normal closure of $\tilde{\mathcal{O}}$. It is an affine variety with $K$-action in which $\tilde{\mathcal{O}}$ naturally sits as an open, $K$-invariant subset. Furthermore, the covering map $p: \tilde{\mathcal{O}} \to \mathcal{O}$ extends to a finite, $K$-equivariant surjection $p: N(\tilde{\mathcal{O}}) \to N(\mathcal{O})$:
\begin{center}
\begin{tikzcd}[column sep=large]
\tilde{\mathcal{O}} \arrow[r,hookrightarrow] \arrow{d}{p}
&N(\tilde{\mathcal{O}}) \arrow{d}{p}\\
\mathcal{O} \arrow[r,hookrightarrow] & N(\mathcal{O})
\end{tikzcd}
\end{center}
The varieties $N(\mathcal{O})$ and $N(\tilde{\mathcal{O}})$ are singular, but not terribly so.
\begin{theorem}\label{thm:normalclosuresarenice}
If $G_{\mathbb{R}}$ is complex, the varieties $N(\mathcal{O})$ and $N(\tilde{\mathcal{O}})$ are Gorenstein and Cohen-Macaulay with rational singularities.
\end{theorem}
\begin{proof}
In \cite{Hinich1991}, Hinich proves that $N(\mathcal{O})$ is Gorenstein with rational singularities.
In \cite{Broer1998}, Broer extends this result to $N(\tilde{\mathcal{O}})$. Rational singularities implies Cohen-Macaulay by a standard fact (see, e.g., \cite{KollarMori1998}).
\end{proof}
\begin{proposition}\label{prop:vanishingforregularrep}
In the setting of Proposition \ref{prop:complexvanishing},
$$H^i(\mathcal{O},p_*\mathcal{S}_{\tilde{\mathcal{O}}}) = 0, \qquad 0 < i < d-1$$
\end{proposition}
\begin{proof}
This is a straightforward application of Lemma \ref{lemma:cm} with $X = N(\mathcal{O})$, $U = \mathcal{O}$, and $M = p_*\mathcal{S}_{N(\tilde{\mathcal{O}})}$. $\mathcal{S}_{N(\tilde{\mathcal{O}})}$ is Cohen-Macaulay by
Theorem \ref{thm:normalclosuresarenice}. Then, $p_*\mathcal{S}_{N(\tilde{\mathcal{O}})}$ is Cohen-Macaulay by the finiteness of $p$ (Theorem 5.4, \cite{KollarMori1998}). Finally, since the normalization map $N(\mathcal{O}) \to \overline{\mathcal{O}}$ is finite, $\mathrm{codim}(N(\mathcal{O}) \setminus \mathcal{O}, N(\mathcal{O})) = \mathrm{codim}(\overline{\mathcal{O}} \setminus \mathcal{O}, \overline{\mathcal{O}}) = d$.
\end{proof}
Suppose $G_{\mathbb{R}}$ is complex. By the remarks at the end of Section \ref{sec:nilpotentcones}, $\mathcal{O}$ has a distinguished symplectic form $\tau \in \Gamma(\mathcal{O},\wedge^2T^*\mathcal{O})$. Its top exterior power $\wedge^{\dim(\mathcal{O})/2}\tau$ is a nonvanishing section of the canonical bundle $\omega_{\mathcal{O}}$. Consequently, the morphism
$$\mathcal{S}_{\mathcal{O}} \to \omega_{\mathcal{O}}, \qquad f \mapsto f\wedge^{\dim(\mathcal{O})/2}\tau$$
is a global trivialization of $\omega_{\mathcal{O}}$.
Now the geometric condition on $\mathcal{E}$ formulated in Definition \ref{def:admissibility2} reduces to
$$(p^*\mathcal{E})^{\otimes 2} = \mathcal{S}_{\tilde{\mathcal{O}}}^{\oplus N}$$
As observed in Section \ref{sec:unipotent}, such a vector bundle is an equivariant local system.
\begin{proposition}\label{prop:decompofpushforward}
Let $\mathcal{L}_1,...,\mathcal{L}_n$ be the irreducible equivariant local systems on $\mathcal{O}$. Then there is a canonical decomposition of $K$-equivariant vector bundles
$$p_*\mathcal{S}_{\tilde{\mathcal{O}}} \cong \bigoplus_{i=1}^n\mathcal{L}_i^{\oplus \mathrm{rank}\mathcal{L}_i} $$
\end{proposition}
\begin{proof}
As explained in the appendix, there is an equivalence of categories between $\operatorname{Coh}^K(\mathcal{O})$ and finite-dimensional representations of $K^{\chi}$. Under this equivalence, the left-hand side corresponds to the regular functions $\mathbb{C}[K^{\chi}/(K^{\chi})^0]$. The right-hand side corresponds to $\bigoplus_{i=1}^n L_i^{\oplus \dim L_i}$, where $L_1,...,L_n$ are the irreducible representations of $K^{\chi}/(K^{\chi})^0$. Now the decomposition
$$\mathbb{C}[K^{\chi}/(K^{\chi})^0] \cong \bigoplus_{i=1}^n L_i^{\oplus \dim L_i}$$
is a standard fact from the representation theory of finite groups.
\end{proof}
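For instance, if $K^{\chi}/(K^{\chi})^0 \cong \mathbb{Z}/2$ (as for the $\mathfrak{sl}_2(\mathbb{C})$ orbit above), then $n = 2$, both local systems have rank one, and
$$p_*\mathcal{S}_{\tilde{\mathcal{O}}} \cong \mathcal{L}_{triv} \oplus \mathcal{L}_{sgn}$$
the trivial local system plus the unique nontrivial equivariant local system on $\mathcal{O}$.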
Combining Propositions \ref{prop:vanishingforregularrep} and \ref{prop:decompofpushforward}, we obtain
$$H^i(\mathcal{O},\mathcal{L}_j)=0 , \qquad 0 < i < d-1, 1 \leq j \leq n$$
and therefore,
$$H^i(\mathcal{O}, \mathcal{E}) = 0, \qquad 0<i<d-1$$
since $\mathcal{E}$ is a direct sum of $\mathcal{L}_j$.
\section{The Main Theorem}\label{sec:mainthm}
For the remainder, assume $G_{\mathbb{R}}$ is complex. Let $X$ be a unipotent $(\mathfrak{g},K)$-module. From Theorem \ref{thm:irredassvar} (or Proposition \ref{prop:altirredassvar}) $\mathrm{AV}(X) = \overline{\mathcal{O}}$ for some $\mathcal{O} \in \mathcal{N}_{\theta}^*/K$, and from Theorem \ref{thm:unipotentadmissible}, $\mathrm{OD}(X)$ is (the class of an) admissible vector bundle $\mathcal{E}$.
From Proposition \ref{prop:allpropertiesofdescent}.4, there is a natural map of Harish-Chandra modules
\begin{equation}\label{isom:natisom}
\eta: X \to \overline{\Phi}_{\mathcal{O}}X
\end{equation}
which is injective, since $X$ is irreducible. Let $Y$ be the cokernel of $\eta$. Proposition \ref{prop:allpropertiesofdescent} tells us two important things about $Y$: $\mathrm{Ann}(X) \subseteq \mathrm{Ann}(Y)$ and $\mathrm{AV}(Y) \subseteq \partial \mathcal{O}$. The second inclusion implies that the first inclusion is strict; this follows easily from Proposition \ref{prop:anndeterminesdimav}. Since $\mathrm{Ann}(X)$ is a maximal ideal, we deduce $\mathrm{Ann}(Y) = U(\mathfrak{g})$, i.e. $Y = 0$. Hence, $\eta$ is an isomorphism.
Choose a good filtration on $X$ and let $X_{\hbar} = R_{\hbar}X \in \mathrm{HC}_{\overline{\mathcal{O}}}(\mathfrak{g}_{\hbar},K)$. Write $M = X_{\hbar}/\hbar X_{\hbar} = \operatorname{gr} (X) \in \operatorname{Coh}^{K,\mathbb{C}^{\times}}_{\overline{\mathcal{O}}}(\mathfrak{g}/\mathfrak{k})^*$.
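(We recall the Rees construction for convenience, in our conventions: for a good filtration $F_{\bullet}X$, the Rees module is
$$R_{\hbar}X = \bigoplus_n F_nX \cdot \hbar^n \subset X[\hbar]$$
so that $R_{\hbar}X / \hbar R_{\hbar}X \cong \operatorname{gr}X$ and $R_{\hbar}X / (\hbar - 1)R_{\hbar}X \cong X$. Both identifications are used repeatedly below.)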
\begin{lemma}\label{lemma:unipotentvanishing}
$$R^i\Phi_{\mathcal{O}}M = 0, \qquad 0 < i <d-1$$
\end{lemma}
\begin{proof}
$M$ admits a finite filtration with successive quotients in $\operatorname{Coh}^{K,\mathbb{C}^{\times}}(\overline{\mathcal{O}})$:
$$0 = M_0 \subset M_1 \subset ... \subset M_t = M, \qquad N_k :=M_k/M_{k-1} \in \operatorname{Coh}^{K,\mathbb{C}^{\times}}(\overline{\mathcal{O}}) \ \text{for} \ 1 \leq k \leq t$$
By definition
$$[\mathcal{E}] = \sum_{k=1}^t[N_k|_{\mathcal{O}}]$$
In particular, each $N_k|_{\mathcal{O}}$ is admissible. From Proposition \ref{prop:coherencepreserved}, the restriction of $\Phi_{\mathcal{O}}$ to $\operatorname{Coh}^{K,\mathbb{C}^{\times}}(\overline{\mathcal{O}})$ coincides with $j_*j^*$, which in turn coincides (since $\overline{\mathcal{O}}$ is affine) with $\Gamma(\mathcal{O},\cdot)$. This, together with Proposition \ref{prop:complexvanishing}, implies
\begin{equation}\label{eqn:admvanishing}
R^i\Phi_{\mathcal{O}}N_k = H^i(\mathcal{O},N_k|_{\mathcal{O}}) = 0, \qquad 0 < i < d-1
\end{equation}
The same is true for $M$, by a simple induction on $t$. If $t=1$, there is nothing to prove. Suppose
$$R^i\Phi_{\mathcal{O}}M_k = 0, \qquad 0 < i < d-1$$
for $k < t$. There is a short exact sequence in $\mathrm{HC}(\mathfrak{g}_{\hbar},K)$
$$0 \to M_k \to M_{k+1} \to N_{k+1} \to 0$$
Since $\Phi_{\mathcal{O}}$ is left exact, there is an associated long exact sequence in $M(\mathfrak{g}_{\hbar},K)$
$$0 \to \Phi_{\mathcal{O}}M_k \to \Phi_{\mathcal{O}}M_{k+1} \to \Phi_{\mathcal{O}}N_{k+1} \to R^1\Phi_{\mathcal{O}}M_k \to R^1\Phi_{\mathcal{O}}M_{k+1} \to R^1\Phi_{\mathcal{O}}N_{k+1} \to \cdots $$
For $0 < i <d-1$, we have $R^i\Phi_{\mathcal{O}}N_{k+1} = 0$ (from equation (\ref{eqn:admvanishing})) and $R^i\Phi_{\mathcal{O}}M_k = 0$ (by induction). Therefore, by exactness, $R^i\Phi_{\mathcal{O}}M_{k+1}=0$.
\end{proof}
We will also need
\begin{lemma}
Suppose $\mathcal{V}$ and $\mathcal{W}$ are admissible vector bundles on $\mathcal{O}$ representing the same class in $K_0\operatorname{Coh}^K(\mathcal{O})$. Then if $\operatorname{codim}(\partial \mathcal{O},\overline{\mathcal{O}}) \geq 4$,
$$[j_*\mathcal{V}] = [j_*\mathcal{W}] \in K_0\operatorname{Coh}^K(\overline{\mathcal{O}})$$
\end{lemma}
\begin{proof}
Let $\mathcal{V}'$ be a completely reducible representative of $[\mathcal{V}] = [\mathcal{W}] \in K_0\operatorname{Coh}^K(\mathcal{O})$. We will show that $[j_*\mathcal{V}] = [j_*\mathcal{V}']$. The same argument shows that $[j_*\mathcal{W}] = [j_*\mathcal{V}']$ and hence $[j_*\mathcal{V}] = [j_*\mathcal{W}]$.
$\mathcal{V}$ has a finite filtration
$$0 = \mathcal{V}_0 \subset \mathcal{V}_1 \subset ... \subset \mathcal{V}_s = \mathcal{V}$$
by $K$-invariant sub-bundles with irreducible quotients $\mathcal{T}_i := \mathcal{V}_i/\mathcal{V}_{i-1}$. As $K$-equivariant vector bundles, $\mathcal{V}' \cong \bigoplus_{i=1}^s \mathcal{T}_i$. If $s=1$, $\mathcal{V} = \mathcal{V}'$, and there is nothing to prove. Suppose
$$[j_*\mathcal{V}_k] = [j_*\bigoplus_{i=1}^k \mathcal{T}_i]$$
for some integer $k < s$. There is a short exact sequence of $K$-equivariant vector bundles
$$0 \to \mathcal{V}_k \to \mathcal{V}_{k+1} \to \mathcal{T}_{k+1} \to 0$$
Since $j_*$ is left exact, there is an associated long exact sequence in $\operatorname{Coh}^K(\overline{\mathcal{O}})$
$$0 \to j_*\mathcal{V}_k \to j_*\mathcal{V}_{k+1} \to j_*\mathcal{T}_{k+1} \to H^1(\mathcal{O},\mathcal{V}_k) \to ...$$
We have $H^1(\mathcal{O},\mathcal{V}_k) = 0$ from Proposition \ref{prop:complexvanishing} (combined with the codimension condition on $\mathcal{O}$) and therefore
$$[j_*\mathcal{V}_{k+1}] = [j_*\mathcal{V}_k] + [j_*\mathcal{T}_{k+1}] = [j_*\bigoplus_{i=1}^{k+1}\mathcal{T}_i]$$
Then by induction, $[j_*\mathcal{V}] = [j_*\mathcal{V}']$.
\end{proof}
Applying this fact to the admissible vector bundles $\mathcal{E}$ and $j^*M$, we deduce
\begin{corollary}\label{cor:classesmatch}
If $\operatorname{codim}(\partial \mathcal{O},\overline{\mathcal{O}}) \geq 4$,
$$[j_*\mathcal{E}] = [\Phi_{\mathcal{O}}M] \in K_0\operatorname{Coh}^K(\overline{\mathcal{O}})$$
\end{corollary}
$X_{\hbar}$ is $\hbar$-torsion free, since it is the Rees module of a filtered Harish-Chandra module. Multiplication by $\hbar$ defines a short exact sequence
$$0 \to X_{\hbar} \to X_{\hbar} \to M \to 0 $$
There is an associated long exact sequence in $M(\mathfrak{g}_{\hbar},K)$
\begin{equation}\label{eqn:mainseq}
0 \to \Phi_{\mathcal{O}} X_{\hbar} \to \Phi_{\mathcal{O}}X_{\hbar} \to \Phi_{\mathcal{O}}M \to R^1\Phi_{\mathcal{O}} X_{\hbar} \to R^1\Phi_{\mathcal{O}}X_{\hbar} \to R^1\Phi_{\mathcal{O}}M \to ...\end{equation}
Since $R^1\Phi_{\mathcal{O}}M = 0$ (from Lemma \ref{lemma:unipotentvanishing}), the multiplication by $\hbar$ map $R^1\Phi_{\mathcal{O}}X_{\hbar} \to R^1\Phi_{\mathcal{O}}X_{\hbar}$ is surjective. The grading on $R^1\Phi_{\mathcal{O}}X_{\hbar}$ is bounded from below (Part (4) of Proposition \ref{prop:allpropsoflocfunctor}) and $\hbar$ increases degree. Under these conditions, the requirement $R^1\Phi_{\mathcal{O}}X_{\hbar} = \hbar R^1\Phi_{\mathcal{O}}X_{\hbar}$ forces $R^1\Phi_{\mathcal{O}}X_{\hbar}=0$.
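To spell out this standard argument (included for completeness): write $N = R^1\Phi_{\mathcal{O}}X_{\hbar} = \bigoplus_{n \geq n_0} N_n$ with $\hbar N_n \subseteq N_{n+1}$. If $N = \hbar N$, then comparing graded pieces degree by degree gives
$$N_{n_0} = \hbar N_{n_0 - 1} = 0, \qquad N_{n_0 + 1} = \hbar N_{n_0} = 0, \qquad \ldots$$
so $N_n = 0$ for all $n$ by induction.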
Thus, the long exact sequence (\ref{eqn:mainseq}) induces an isomorphism in $\operatorname{Coh}^{K,\mathbb{C}^{\times}}(\mathfrak{g}/\mathfrak{k})^*$
\begin{equation}\label{eqn:mainresult}
\Phi_{\mathcal{O}}X_{\hbar}/\hbar \Phi_{\mathcal{O}}X_{\hbar} \cong \Phi_{\mathcal{O}}M
\end{equation}
The term on the left is the associated graded of the filtered Harish-Chandra module $\Phi_{\mathcal{O}}X_{\hbar}/(\hbar-1)\Phi_{\mathcal{O}}X_{\hbar}$. If we forget the filtration on $\Phi_{\mathcal{O}}X_{\hbar}/(\hbar-1)\Phi_{\mathcal{O}}X_{\hbar}$, there is an equality $\Phi_{\mathcal{O}}X_{\hbar}/(\hbar-1)\Phi_{\mathcal{O}}X_{\hbar} = \overline{\Phi}_{\mathcal{O}}X$ following from the definition of $\overline{\Phi}_{\mathcal{O}}$, and therefore an equality
$$[\Phi_{\mathcal{O}}X_{\hbar}/\hbar \Phi_{\mathcal{O}}X_{\hbar}] = [\operatorname{gr}(\overline{\Phi}_{\mathcal{O}}X)] \in K_0\operatorname{Coh}^K(\mathfrak{g}/\mathfrak{k})^*$$
But $\eta$ defines an isomorphism $X \cong \overline{\Phi}_{\mathcal{O}}X$, so in fact
$$[\Phi_{\mathcal{O}}X_{\hbar}/\hbar \Phi_{\mathcal{O}}X_{\hbar}] = [\operatorname{gr}(X)]$$
This, together with the isomorphism (\ref{eqn:mainresult}) and Corollary \ref{cor:classesmatch}, implies
$$[\operatorname{gr}(X)] = [j_*\mathcal{E}] \in K_0\operatorname{Coh}^K(\mathfrak{g}/\mathfrak{k})^*$$
In summary, we have proved
\begin{theorem}
Suppose $G_{\mathbb{R}}$ is complex and $X$ is a unipotent $(\mathfrak{g},K)$-module. Then $\operatorname{AV}(X) = \overline{\mathcal{O}}$ for some $\mathcal{O} \in \mathcal{N}^*_{\theta}/K$ and $\mathrm{OD}(X)$ is the class of an admissible vector bundle $\mathcal{E}$. Assume $\operatorname{codim}{(\partial \mathcal{O}, \overline{\mathcal{O}})} \geq 4$. Then, for any good filtration on $X$,
$$[\operatorname{gr}(X)] = [j_*\mathcal{E}] \in K_0\operatorname{Coh}^{K}(\mathfrak{g}/\mathfrak{k})^*$$
In particular, as $K$-representations
$$X \cong_K \Gamma(\mathcal{O},\mathcal{E})$$
\end{theorem}
\label{Sec:Instroduction}
\subsection{Physical motivation}
\label{Subsec:physical_motivation}
The accretion of solids and eventually gas \citep{pollack1996formation} on planetary cores is widely used as the standard scenario for planet formation. Most studies in the field of planet formation begin with an initial planetary core that grows by either planetesimal accretion (\cite{ida2004toward}, \cite{mordasini2012extrasolar}, \cite{EmsenhuberPrepA}, \cite{EmsenhuberPrepB}, \cite{voelkel2020popsynth}) or pebble accretion (\cite{bitsch2015growth}, \cite{Ndugu_2017}, \cite{lambrechts2012rapid}). Recent work included the consistent formation \citep{Lenz_2019} and accretion of planetesimals onto planetary embryos into a global model of planet formation \citep{voelkel2020popsynth}. Despite this improvement, the presence of planetary embryos is still treated as an initial assumption. A fully consistent global model for planet formation, however, would also have to form planetary embryos based on the previous evolution of the system. Studies that form planetary embryos from planetesimals usually neglect the formation of the planetesimals by assuming an initial distribution in the disk \citep{levison2015growing,walsh2015planetsimals,carter2015compositional,clement2020embryo}. The study presented in this paper is an expansion of our companion paper \citep{voelkel2020embI}, in which we investigated the formation of planetary embryos from a dynamically evolving planetesimal disk and derived a one dimensional, parameterized analytic model for planetary embryo formation. The effect of pebble accretion \citep{ormel2010effect, KlahrBodenheimer2006} on the formation of planetary embryos is now added to the same framework in this study.
\\
To motivate our work we discuss the following aspects (often either neglected or not accounted for in detail by previous works). One aspect generally neglected in the study of pebble accretion is that the pebble flux in a disk is not constant, but instead evolves due to radial drift and decays over time. Since pebble accretion relies on the active pebble flux, the time and location at which a planetary embryo is introduced into the simulation are therefore critical for the evolution of said embryo. The accretion of planetesimals onto planetary embryos, as well as planetesimal growth by collisions, is sensitive to the size of planetesimals, the local planetesimal surface density and the orbital distance to the star. The evolution and growth of a planet thus strongly depends on its environment, but the cores themselves are also assumed to form from the smaller material in the disk. Modeling the formation of planetary embryos therefore requires an understanding of the local solid evolution of a circumstellar disk. To fully understand the local evolution, however, one needs to understand the global evolution of the disk as well, since solids can drift through the circumstellar disk from far-out regions. Modeling the formation of planetary embryos in the terrestrial planet region in a self-consistent disk therefore requires understanding the global formation of planetesimals and the evolution of the pebble flux during the time of embryo formation.
\subsection{Previous work}
\label{Subsec:previous_work}
This study is an extension of our previous work \cite{voelkel2020embI}, in which we studied the impact of the planetesimal surface density and disk mass on the formation of embryos. Our previous study found that the formation of planetary embryos from 100$\,$km planetesimals occurs from the inside out and that the orbital separation of initial embryos converges to $\approx 15$R$_{Hill}$. Our finding confirmed the oligarchic growth nature of the embryo formation process (\cite{kokubo1998oligarchic}, \cite{Kobayashi_2011} and \cite{walsh2019planetesimals}, to mention just a few).
\\
One main result from our first study is that the total number of embryos does not simply increase when more mass is introduced into the system. The embryos that exist grow larger, thus increasing their mutual orbital separation. Additionally, the formation area within 1 Myr increases for higher disk masses, which leads to a similar number of embryos after 1 Myr for our systems.
The orbital separation leads to a cumulative number of embryos that increases logarithmically with distance. This behavior is not strongly influenced by the planetesimal surface density profile.
\\
In \cite{voelkel2020embI} we also introduced an analytic model that succeeded in reproducing the total number, spatial distribution and formation time of planetary embryos when given the same one dimensional planetesimal surface density evolution.
\\
In our companion paper, we find that the innermost embryos form while planetesimals are still forming as well. This implies that an active pebble flux exists after the formation of the innermost embryos. The outer embryos form after the formation of planetesimals has mostly ceased. While the accretion of pebbles is not considered in our first study, their presence is promising for planetary growth.
\\
In addition to our previous work we now introduce the accretion of pebbles onto planetesimals and planetary embryos. Studies regarding the evolution of a planetary system from planetesimals and pebbles in the LIPAD \citep{levison2012lagrangian} code have already been conducted by \cite{kretke2014challenges}. In contrast to what has been studied in \cite{kretke2014challenges}, we introduce the planetesimals over time, based on the one dimensional planetesimal formation model described in Sect. \ref{Sec:PPE}.
\subsection{The goal of this study}
\label{Subsec:study_goal}
Within this study we connect a global model for the evolution of a circumstellar disk, which involves the formation and drift of pebbles as well as the pebble flux regulated formation of planetesimals, with N-body simulations. The N-body code then tracks the subsequent growth and dynamical evolution of these solids. Using this framework we study a wide range of parameters to investigate their individual contribution to the formation of planetary embryos and the evolution of planetary systems in the terrestrial planet region. This paper is an addition to our previous study \citep{voelkel2020embI}, in which we studied the impact of the planetesimal surface density profile and disk mass on the formation of planetary embryos in the terrestrial planet region. In addition to the formation of planetesimals, we now introduce a radial pebble flux and the possibility of pebble accretion into our framework. The evolution of the pebble flux stems from the same disk evolution that also forms the planetesimals within the N-body simulation. Comparing our results from this study with our previous study, we present 18 different N-body simulations in which we vary the planetesimal surface density profile, the total mass in planetesimals and the total pebble flux.
\\
In Sect. \ref{Sec:PPE} we summarize the theory behind our approach and explain the numerical setup in Sect. \ref{Sec:Simulation Setup}. Sect. \ref{Sec:Numerical_Results} presents the results that are discussed in Sect. \ref{Sec:Discussion}. Sect. \ref{Sec:Summary} summarizes our findings and gives an outlook to future work.
\section{Pebbles, planetesimals and embryos}
\label{Sec:PPE}
The goal of this study is to comprehensively model the formation and early dynamical evolution of planetary embryos, following an initial population of dust as it is converted into pebbles and planetesimals. We specifically focus on investigating how the accretion of pebbles impacts this formation process. The framework that we have chosen to make this possible is split up into two parallel sub-processes. We first compute the viscous evolution of a circumstellar gas disk including its solid evolution and planetesimal formation. The qualitative evolution of the solids will serve as a proxy for planetesimal formation and the pebble flux to be included in the N-body simulations. This way the N-body simulation runs with the planetesimals and pebbles that have been formed using the one dimensional approach, while continuing to compute their growth via collision and accretion.
\\
A detailed description of the pebble flux regulated planetesimal formation model and the two population solid evolution model can be found in \cite{Lenz_2019} and \cite{birnstiel2012simple}. Our approach of coupling the one dimensional planetesimal formation model to the N-body simulation in LIPAD \citep{levison2012lagrangian}, as well as a detailed description of the physical models is described in our previous work \citep{voelkel2020embI}. In the following we give a brief summary of the underlying physical principles.
\subsection{Planetesimal formation and pebble evolution}
\label{Subsec:Pts_formation}
Our framework uses the two population solid evolution approach from \cite{birnstiel2012simple} to compute the dust and pebble evolution of a viscously evolving circumstellar disk \citep{shakura1973black} and the pebble flux regulated planetesimal formation model by \cite{Lenz_2019}. This framework has recently been used to study the impact of planetesimal formation on the formation of planets \citep{voelkel2020popsynth} and was applied in our companion paper \citep{voelkel2020embI}.
\\
The two population model uses a parameterized mass relation between a small and a large population of solids in the disk, defined by their Stokes number. The small particles ($St \ll 1 $) are coupled to the dynamic motion of the gas and can be seen as dust, while the larger particles ($St \sim 1 $) are detached from the gas motion and can be seen as pebbles. The parameter that separates the two populations has been derived by fitting the two population approach to larger coagulation based simulations of grain growth \citep{Birnstiel_2010}. Planetesimals then form in proportion to the radial pebble flux \citep{Lenz_2019}. The planetesimal formation model assumes that particle traps can appear at any location in the disk and last for a given lifetime. The model assumes that a fraction of the radial pebble flux that drifts through a particle trap can be transformed into planetesimals. Planetesimals form with an initial size of 100$\,$km in diameter \citep{Klahr2020, abod2019mass, Johansen2009}, which leads to a threshold mass that has to be reached in order to form planetesimals in this one dimensional approach. The approach itself does not specify what underlying mechanism/instability (e.g. streaming instability, Kelvin-Helmholtz instability) drives the formation of planetesimals; it is a model-independent framework that forms planetesimals based on the radial pebble flux.
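Schematically (our paraphrase; see \cite{Lenz_2019} for the precise prescription and notation), the local planetesimal formation rate is tied to the pebble flux $\dot{M}_{\rm peb}(r)$ through the disk,
\begin{align}
\dot{\Sigma}_P(r) \propto \frac{\varepsilon}{d(r)} \, \frac{\dot{M}_{\rm peb}(r)}{2 \pi r} ,
\end{align}
where $\varepsilon$ is a conversion efficiency and $d(r)$ an effective radial separation of pebble traps, with formation switched on only where the flux exceeds the threshold implied by the 100$\,$km initial size.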
\subsection{Embryo formation}
\label{Subsec:Embryo_formation}
We define a planetary embryo as an object with at least a lunar mass (M$_{e} = 0.0123$M$_{\oplus}$) in our study. Growing an embryo from 100 km-sized planetesimals (with a bulk density of $\rho = 2\,\mathrm{g/cm^3}$) requires roughly 5 orders of magnitude of growth in mass. This would require hundreds of thousands of planetesimals to form a single embryo via collisions, making this problem computationally unfeasible for classical numerical integrators. Thus, in order to tackle this problem we use the code known as LIPAD \citep{levison2012lagrangian}. LIPAD is a Lagrangian code that uses the concept of tracer particles to follow the dynamical/collisional/accretional evolution of a huge number of sub-km-sized planetesimals all the way to becoming planets. We direct the reader to \cite{voelkel2020embI} for a detailed description of how we convert the 1-D solid evolution outcomes into tracers, as well as \cite{levison2012lagrangian}; \cite{kretke2014challenges}; \cite{walsh2016terrestrial}; \cite{walsh2019planetesimals}; \cite{deienno2019energy}; \cite{deienno2020collisional} for a series of previous applications of LIPAD.
\\
Our study introduces planetesimal and pebble tracer particles and computes their growth by planetesimal collisions and pebble accretion. Tracer particles are represented by three quantities: mass, physical radius and bulk density. These three quantities relate to each other as $n_{pl} = m_{tr} / \left[ (4/3)\pi \rho r_{pl}^3 \right]$. Here, $n_{pl}$ is the number of planetesimals represented by a single tracer particle, $m_{tr}$ is the tracer's constant mass, $\rho$ its constant bulk density and $r_{pl}$ the planetesimal size that the tracer represents. This implies that the number of planetesimals represented by a single tracer is larger for smaller planetesimals. It also implies that as planetesimals grow due to their collisional evolution/accretion, fewer of them are represented by a single tracer. As a result, once a planetesimal grows to the point where a tracer represents only one object (a Moon-sized object in our case), this tracer is promoted to an embryo and is then treated as an individual N-body object in the simulation. The promotion of a planetesimal tracer particle to a planetary embryo in LIPAD is what we define as the initial formation of a planetary embryo.
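To make the tracer bookkeeping concrete, the following minimal sketch (ours, not LIPAD code; the tracer mass is a hypothetical choice consistent with the lunar-mass promotion threshold) evaluates the tracer-to-planetesimal relation:
\begin{verbatim}
import numpy as np

M_EARTH = 5.972e27                  # [g]
M_EMBRYO = 0.0123 * M_EARTH         # lunar-mass promotion threshold [g]

def n_per_tracer(m_tr, r_pl, rho=2.0):
    """Number of planetesimals per tracer of mass m_tr [g], for
    planetesimals of radius r_pl [cm] and bulk density rho [g/cm^3]."""
    m_pl = (4.0 / 3.0) * np.pi * rho * r_pl**3
    return m_tr / m_pl

m_tr = M_EMBRYO        # hypothetical: one tracer carries one embryo mass
print(n_per_tracer(m_tr, 5.0e6))    # 100 km diameter => r_pl = 5e6 cm
\end{verbatim}
With these numbers a single tracer initially represents $\approx 7 \times 10^4$ planetesimals of 100$\,$km diameter; the tracer is promoted to an embryo once $n_{pl} \leq 1$.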
\subsection{Pebble accretion}
\label{Subsec:Pebble_accretion}
The fundamental difference from part I of our study lies in the accretion of pebbles onto planetesimals and planetary embryos. In the following we briefly explain the concept of pebble accretion based on \cite{ormel2010effect} and \cite{lambrechts2012rapid}. A detailed description of how pebble accretion is implemented in LIPAD can be found in \cite{kretke2014challenges}. When we refer to pebble accretion, we mean the accretion of particles onto bodies that is strongly enhanced by gas drag. For this to occur, several conditions need to be met. The stopping timescale of the particle that is to be accreted must be long compared to the timescale of deflection by the target object's gravity. More specifically, the gravitational encounter timescale must be shorter than four times the stopping time
\begin{align}
v_{rel} \frac{b^2}{G M_p} < 4 t_s
\end{align}
with $G$ as the gravitational constant and $t_s$ the stopping time.
$v_{rel}$ is given as the relative velocity of the particle and the planetesimal/planetary embryo of mass $M_p$. The impact parameter $b$ can then be expressed as
\begin{align}
b < \Tilde{R}_C = \left( \frac{4 G M_p t_s} {v_{rel}} \right)^{1/2} .
\end{align}
The second criterion states that the stopping time of the particle must be short compared to the time it takes for the particle to drift past the target. The impact parameter for when a particle is deflected by $90^{\circ}$ then gives
\begin{align}
b = b_{90^{\circ}} = \frac{G M_p}{v_{rel}^2} .
\end{align}
In summary, the first criterion states that small dust cannot contribute to pebble accretion because it is too strongly coupled to the motion of the gas, while the second criterion illustrates why larger objects like planetesimals do not benefit from gas drag. The critical crossing time scale can then be defined as
\begin{align}
t_{s,*} = \frac{b_{90^{\circ}}}{v_{rel}} = \frac{G M_p}{v_{rel}^3} .
\end{align}
In the LIPAD simulation, pebbles radially drift inwards. A pebble is accreted by an object if the particle passes within the Hill radius of the object and under the condition that
\begin{align}
b < R_C = \Tilde{R}_C \exp{\left[ - \left( \frac{t_s}{4 t_{s,*}} \right)^{\gamma} \right]}
\end{align}
with $\gamma = 0.65$.
Pebbles enter the N-body simulation in the form of pebble tracers \citep{kretke2014challenges}.
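To illustrate how these ingredients combine, here is a minimal sketch (ours, not the LIPAD implementation; CGS units, and the example values are hypothetical) of the effective accretion radius $R_C$ defined above:
\begin{verbatim}
import numpy as np

G = 6.674e-8   # gravitational constant [cm^3 g^-1 s^-2]

def accretion_radius(M_p, t_s, v_rel, gamma=0.65):
    """Effective pebble-accretion radius R_C [cm] for a body of mass
    M_p [g], pebble stopping time t_s [s], relative velocity v_rel [cm/s]."""
    R_tilde = np.sqrt(4.0 * G * M_p * t_s / v_rel)   # settling radius
    t_star = G * M_p / v_rel**3                      # critical crossing time
    return R_tilde * np.exp(-(t_s / (4.0 * t_star))**gamma)

# example: a lunar-mass embryo, t_s = 1e7 s, v_rel = 3e3 cm/s
print(accretion_radius(7.3e25, 1.0e7, 3.0e3))
\end{verbatim}
The exponential cutoff suppresses the accretion of particles whose stopping time is long compared to the critical crossing time, recovering the second criterion above.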
\section{Simulation Setup}
\label{Sec:Simulation Setup}
The setup of our present study is an expansion of our previous work \citep{voelkel2020embI} and is described there in greater detail, but for the purposes of this work we briefly describe the model setup here. We compute the first 1 Myr of a viscously evolving disk including the two population solid evolution and pebble flux regulated planetesimal formation model from Sect. \ref{Sec:PPE}. The mass rate of planetesimal formation is then given as an input to the LIPAD N-body simulation in terms of a corresponding number of planetesimal tracers every 10$\,$kyr. With our setup we study the evolution of planetary embryo formation for 18 different systems in which we vary the total planetesimal disk mass after 1 Myr, the planetesimal surface density profile and the total pebble flux. The total planetesimal masses after 1 Myr are given by 6$\,$M$_{\oplus}$, 13$\,$M$_{\oplus}$ and 27$\,$M$_{\oplus}$. The planetesimal surface density profile varies as $\Sigma_P \propto r^{-1.0}$, $\Sigma_P \propto r^{-1.5}$ and $\Sigma_P \propto r^{-2.0}$. Our study individually compares systems in which pebble accretion is active to those in which it is ignored. In addition to our previously published work \citep{voelkel2020embI} we introduce a radial pebble flux into the LIPAD simulation. Pebbles are placed outside the outer edge of our computational domain at 5$\,$au. The total mass of the pebble flux over 1 Myr is 57.7$\,$M$_{\oplus}$ in the 6$\,$M$_{\oplus}$ case, 115.8$\,$M$_{\oplus}$ in the 13$\,$M$_{\oplus}$ case and 232.5$\,$M$_{\oplus}$ in the 27$\,$M$_{\oplus}$ case. The corresponding mass is introduced over 1 Myr into the simulation in the form of pebble tracers. These tracers do not contribute to planetesimal formation, but can be accreted by the planetesimal tracers and embryos. The qualitative evolution of the pebble flux at 5$\,$au is taken from our one dimensional solid evolution model as well, similar to the formation of the planetesimal disk.
The change of the disk mass, the disk formation rate and the radial pebble flux at 5$\,$au used in our setups are shown in Fig. \ref{Fig:formation_rate}.
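For reference, our tabulation of the parameters just listed: the grid of 18 runs is spanned by three disk masses (each with its associated pebble flux) and three surface density slopes,
\begin{center}
\begin{tabular}{c c c}
$M_P$ after 1 Myr & pebble flux over 1 Myr & $\Sigma_P$ slopes \\
\hline
6$\,$M$_{\oplus}$ & 57.7$\,$M$_{\oplus}$ & $r^{-1.0}$, $r^{-1.5}$, $r^{-2.0}$ \\
13$\,$M$_{\oplus}$ & 115.8$\,$M$_{\oplus}$ & $r^{-1.0}$, $r^{-1.5}$, $r^{-2.0}$ \\
27$\,$M$_{\oplus}$ & 232.5$\,$M$_{\oplus}$ & $r^{-1.0}$, $r^{-1.5}$, $r^{-2.0}$ \\
\end{tabular}
\end{center}
with each of the nine configurations run once with and once without pebble accretion.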
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{./Disk_mass/Planetesimal_disk_mass-6-slope-10.pdf}
\caption{Percentage change of the planetesimal disk mass $\dot{M}_{disk}$, the total disk mass $M_{disk}$ and the radial pebble flux at 5$\,$au. The disk mass (red dots) is normalized by the total disk mass after $10^6$ years. The green dots indicate the disk mass increase every $10^4$ years ($\dot{M}_{disk}$), normalized by the maximum mass change ($\dot{M}_{disk, max}$). The blue dots indicate the pebble flux every $10^4$ years ($\dot{M}_{peb}$), normalized by the maximum pebble flux ($\dot{M}_{peb, max}$).
We find that $\sim 90 \%$ of planetesimals have formed within 400$\,$kyr, with a peak in the pebble flux at $\sim 75\,$kyr and another in planetesimal formation at $\sim 115\,$kyr.
}
\label{Fig:formation_rate}
\end{figure}
\section{Numerical results}
\label{Sec:Numerical_Results}
In our analysis we focus on the time and semimajor axis evolution (Fig. \ref{Fig:Emb_form_LIPAD_6_ME} - Fig. \ref{Fig:Emb_form_LIPAD_27_ME}), the individual embryo masses (Fig. \ref{Fig:Embryo_masses}), the total number and mass in embryos over time (Fig. \ref{Fig:Embryo_number}), the mean orbital separation of embryos over time (Fig. \ref{Fig:Orbital_separation}) and the cumulative distribution of embryos in the disk (Fig. \ref{Fig:Cumulative_number}).
\subsection{Embryo formation}
\label{Subsec:Embryo_formation}
Fig. \ref{Fig:Emb_form_LIPAD_6_ME} - Fig. \ref{Fig:Emb_form_LIPAD_27_ME} show the time, mass and semimajor axis evolution of planetary embryo formation with the LIPAD code. The total mass after 1 Myr in planetesimals is given as $6 M_{\oplus}$ (Fig. \ref{Fig:Emb_form_LIPAD_6_ME}), $13 M_{\oplus}$ (Fig. \ref{Fig:Emb_form_LIPAD_13_ME}) and $27 M_{\oplus}$ (Fig. \ref{Fig:Emb_form_LIPAD_27_ME}). The simulations in which pebble accretion is not included (left panels) were taken from our previous work \citep{voelkel2020embI} and serve as a comparison in this study. The panels on the right always show the same system with pebble accretion included. The color map shows the mass of the objects that are considered embryos in the LIPAD simulations, while the black dots refer to the location and time at which a tracer particle has been promoted to a planetary embryo. The black dots can therefore be interpreted as the initial formation of embryos. In addition to this we define the term 'active' embryos. This term refers to all objects above embryo mass at a given time. Every active embryo used to be an initial embryo; however, not every initial embryo remains in the system, due to mergers. The individual embryos are connected by a grey line for clarity.
\begin{figure*}[]
\centering
\includegraphics[width=1.0\linewidth]{./Emb_Formation_LIPAD/mass_6_slope_10.pdf} \\
\includegraphics[width=1.0\linewidth]{./Emb_Formation_LIPAD/mass_6_slope_15.pdf} \\
\includegraphics[width=1.0\linewidth]{./Emb_Formation_LIPAD/mass_6_slope_20.pdf} \\
\caption{\small Time over semimajor axis evolution of the N-body simulation in LIPAD. The time and location at which an object has first reached lunar mass is indicated by the black dots in the plot. The subsequent growth of the embryo is tracked and connected with the grey lines, its mass is given by the colorbar. The mass in planetesimals after 1 Myr is given by 6 M$_{\oplus}$ in these runs, the planetesimal surface density slope is varied ($\Sigma_P \propto r^{-1.0}$, $\Sigma_P \propto r^{-1.5}$ , $\Sigma_P \propto r^{-2.0}$ ). The left panels show the system without pebble accretion. The right panels show the system in which pebble accretion is included. The red line indicates the time after which the analytic model presented in \cite{voelkel2020embI} states that embryo formation is possible.
}
\label{Fig:Emb_form_LIPAD_6_ME}
\end{figure*}
\begin{figure*}[]
\centering
\includegraphics[width=1.0\linewidth]{./Emb_Formation_LIPAD/mass_13_slope_10.pdf} \\
\includegraphics[width=1.0\linewidth]{./Emb_Formation_LIPAD/mass_13_slope_15.pdf} \\
\includegraphics[width=1.0\linewidth]{./Emb_Formation_LIPAD/mass_13_slope_20.pdf} \\
\caption{\small Time over semimajor axis evolution of the N-body simulation in LIPAD. The time and location at which an object has first reached lunar mass is indicated by the black dots in the plot. The subsequent growth of the embryo is tracked and connected with the grey lines, its mass is given by the colorbar. The mass in planetesimals after 1 Myr is given by 13 M$_{\oplus}$ in these runs, the planetesimal surface density slope is varied ($\Sigma_P \propto r^{-1.0}$, $\Sigma_P \propto r^{-1.5}$ , $\Sigma_P \propto r^{-2.0}$ ). The left panels show the system without pebble accretion. The right panels show the system in which pebble accretion is included. The red line indicates the time after which the analytic model presented in \cite{voelkel2020embI} states that embryo formation is possible.
}
\label{Fig:Emb_form_LIPAD_13_ME}
\end{figure*}
\begin{figure*}[]
\centering
\includegraphics[width=1.0\linewidth]{./Emb_Formation_LIPAD/mass_27_slope_10.pdf} \\
\includegraphics[width=1.0\linewidth]{./Emb_Formation_LIPAD/mass_27_slope_15.pdf} \\
\includegraphics[width=1.0\linewidth]{./Emb_Formation_LIPAD/mass_27_slope_20.pdf} \\
\caption{\small Time over semimajor axis evolution of the N-body simulation in LIPAD. The time and location at which an object has first reached lunar mass is indicated by the black dots in the plot. The subsequent growth of the embryo is tracked and connected with the grey lines, its mass is given by the colorbar. The mass in planetesimals after 1 Myr is given by 27 M$_{\oplus}$ in these runs, the planetesimal surface density slope is varied ($\Sigma_P \propto r^{-1.0}$, $\Sigma_P \propto r^{-1.5}$ , $\Sigma_P \propto r^{-2.0}$ ). The left panels show the system without pebble accretion. The right panels show the system in which pebble accretion is included. The red line indicates the time after which the analytic model presented in \cite{voelkel2020embI} states that embryo formation is possible.
}
\label{Fig:Emb_form_LIPAD_27_ME}
\end{figure*}
\subsection{Embryo masses}
\label{Subsec:Embryo_masses}
Fig. \ref{Fig:Embryo_masses} shows the number of different embryo masses after 1 Myr for the systems from Fig. \ref{Fig:Emb_form_LIPAD_6_ME} - Fig. \ref{Fig:Emb_form_LIPAD_27_ME}. The blue and orange histograms refer to simulations with and without pebble accretion, respectively. We see that without pebble accretion, there is no embryo with a mass higher than 1$\,$M$_{\oplus}$, whereas this is a very common outcome for the simulations in which pebble accretion is included. Generally, in every system the highest mass is achieved when pebble accretion is included.
\\
While the systems in which pebble accretion is neglected fail to build super earths with our input parameters, the formation of super earth planets becomes possible when pebble accretion is included. While the number of active embryos decreases if pebble accretion is included (see Fig. \ref{Fig:Embryo_number}), their masses increase drastically.
\begin{figure*}[]
\centering
\begin{minipage}{.33\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{./Histogramms/mass_6_slope_10.pdf}
\end{minipage}
\begin{minipage}{.33\textwidth}
\includegraphics[width=1.0\linewidth]{./Histogramms/mass_6_slope_15.pdf}
\end{minipage}%
\begin{minipage}{.33\textwidth}
\includegraphics[width=1.0\linewidth]{./Histogramms/mass_6_slope_20.pdf}
\end{minipage}%
\\
\begin{minipage}{.33\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{./Histogramms/mass_13_slope_10.pdf}
\end{minipage}
\begin{minipage}{.33\textwidth}
\includegraphics[width=1.0\linewidth]{./Histogramms/mass_13_slope_15.pdf}
\end{minipage}%
\begin{minipage}{.33\textwidth}
\includegraphics[width=1.0\linewidth]{./Histogramms/mass_13_slope_20.pdf}
\end{minipage}%
\\
\begin{minipage}{.33\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{./Histogramms/mass_27_slope_10.pdf}
\end{minipage}
\begin{minipage}{.33\textwidth}
\includegraphics[width=1.0\linewidth]{./Histogramms/mass_27_slope_15.pdf}
\end{minipage}%
\begin{minipage}{.33\textwidth}
\includegraphics[width=1.0\linewidth]{./Histogramms/mass_27_slope_20.pdf}
\end{minipage}%
\caption{\small Embryo masses after 1 Myr for the different parameters from Fig. \ref{Fig:Emb_form_LIPAD_6_ME} - Fig. \ref{Fig:Emb_form_LIPAD_27_ME}. The orange histograms show the systems in which pebble accretion is neglected, whereas the blue histograms show the systems in which pebble accretion is enabled.
}
\label{Fig:Embryo_masses}
\end{figure*}
\subsection{Active number and total mass}
\label{Subsec:Active_number}
Fig. \ref{Fig:Embryo_number} shows the total number of embryos and the total mass in embryos over time for the setups from Fig. \ref{Fig:Emb_form_LIPAD_6_ME} - Fig. \ref{Fig:Emb_form_LIPAD_27_ME}. We also give the fraction of the total embryo mass M$_{Emb}$ over the mass that was given to the planetesimal disk after 1 Myr (M$_{D}$) for each setup. In each setup, the first embryo forms in the run with pebble accretion enabled. However, the number of active embryos during the simulation is almost a factor of 2 below the number of embryos in the systems without pebble accretion. The mass in embryos differs even more strongly than the active number of embryos for the corresponding systems. The fraction M$_{Emb}/$M$_{D}$ consistently increases for higher total masses and steeper $\Sigma_P$-profiles respectively. In the systems in which pebble accretion is included, it can exceed unity. This means that, due to pebble accretion, the mass in planetary embryos can be higher than the mass that is transformed into planetesimals.
\begin{figure*}[]
\centering
\begin{minipage}{.33\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{./Active_number/mass_6_slope_10.pdf}
\end{minipage}
\begin{minipage}{.33\textwidth}
\includegraphics[width=1.0\linewidth]{./Active_number/mass_6_slope_15.pdf}
\end{minipage}%
\begin{minipage}{.33\textwidth}
\includegraphics[width=1.0\linewidth]{./Active_number/mass_6_slope_20.pdf}
\end{minipage}%
\\
\begin{minipage}{.33\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{./Active_number/mass_13_slope_10.pdf}
\end{minipage}
\begin{minipage}{.33\textwidth}
\includegraphics[width=1.0\linewidth]{./Active_number/mass_13_slope_15.pdf}
\end{minipage}%
\begin{minipage}{.33\textwidth}
\includegraphics[width=1.0\linewidth]{./Active_number/mass_13_slope_20.pdf}
\end{minipage}%
\\
\begin{minipage}{.33\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{./Active_number/mass_27_slope_10.pdf}
\end{minipage}
\begin{minipage}{.33\textwidth}
\includegraphics[width=1.0\linewidth]{./Active_number/mass_27_slope_15.pdf}
\end{minipage}%
\begin{minipage}{.33\textwidth}
\includegraphics[width=1.0\linewidth]{./Active_number/mass_27_slope_20.pdf}
\end{minipage}%
\caption{\small Number of active embryos (solid line) and total mass in embryos (dashed line) over time for the systems from Fig. \ref{Fig:Emb_form_LIPAD_6_ME} - Fig. \ref{Fig:Emb_form_LIPAD_27_ME}. The orange curves refer to the systems in which pebble accretion is disabled, whereas the blue lines refer to the systems in which pebble accretion is enabled. We also give the fraction of embryo mass over the total mass that entered the planetesimal disk after 1 Myr ($M_{Emb}/M_{D}$).
}
\label{Fig:Embryo_number}
\end{figure*}
\subsection{Orbital Separation}
\label{Subsec:Orbital_separation}
In Fig. \ref{Fig:Orbital_separation} we compare the mean orbital separation of embryos over time for the systems from Fig. \ref{Fig:Emb_form_LIPAD_6_ME} - Fig. \ref{Fig:Emb_form_LIPAD_27_ME}. The orbital separation is expressed in units of the embryos' Hill radii. We can see that the mean orbital separation after 1 Myr converges to $\approx 10$R$_{Hill}$ for each setup. The simulations in which pebble accretion is included show a smoother and more stable behavior over time than the systems in which pebble accretion is neglected. The explanation for these differences lies in the fact that the first embryos can start growing further apart from each other in the runs that only consider planetesimal accretion. Therefore, numerous embryos are needed in order to converge to a characteristic orbital Hill spacing.
\\
When considering pebble accretion, embryos tend to initially grow closer to each other. Connecting the orbital separation from Fig. \ref{Fig:Orbital_separation} with the embryo masses from Fig. \ref{Fig:Embryo_masses} and the time semimajor axis evolution from Fig. \ref{Fig:Emb_form_LIPAD_6_ME} - Fig. \ref{Fig:Emb_form_LIPAD_27_ME}, we can see that the absolute physical distance between embryos increases largely due to their mass increase and therefore their increasing Hill radius.
\\
The dynamical separation of embryos, when expressed in Hill radii, does not change; their physical separation, as a consequence, does. The possible area of embryo formation, on the other hand, does not expand if pebble accretion is included (see Fig. \ref{Fig:Emb_form_LIPAD_6_ME} - Fig. \ref{Fig:Emb_form_LIPAD_27_ME}). Since the orbital separation increases, the number of active embryos within the possible area of embryo formation decreases, as a consequence of their rapid growth by pebble accretion.
\begin{figure*}[]
\centering
\begin{minipage}{.33\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{./Orbital_Seperation/mass_6_slope_10.pdf}
\end{minipage}
\begin{minipage}{.33\textwidth}
\includegraphics[width=1.0\linewidth]{./Orbital_Seperation/mass_6_slope_15.pdf}
\end{minipage}%
\begin{minipage}{.33\textwidth}
\includegraphics[width=1.0\linewidth]{./Orbital_Seperation/mass_6_slope_20.pdf}
\end{minipage}%
\\
\begin{minipage}{.33\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{./Orbital_Seperation/mass_13_slope_10.pdf}
\end{minipage}
\begin{minipage}{.33\textwidth}
\includegraphics[width=1.0\linewidth]{./Orbital_Seperation/mass_13_slope_15.pdf}
\end{minipage}%
\begin{minipage}{.33\textwidth}
\includegraphics[width=1.0\linewidth]{./Orbital_Seperation/mass_13_slope_20.pdf}
\end{minipage}%
\\
\begin{minipage}{.33\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{./Orbital_Seperation/mass_27_slope_10.pdf}
\end{minipage}
\begin{minipage}{.33\textwidth}
\includegraphics[width=1.0\linewidth]{./Orbital_Seperation/mass_27_slope_15.pdf}
\end{minipage}%
\begin{minipage}{.33\textwidth}
\includegraphics[width=1.0\linewidth]{./Orbital_Seperation/mass_27_slope_20.pdf}
\end{minipage}%
\caption{\small Orbital separation of active embryos over time from the systems from Fig. \ref{Fig:Emb_form_LIPAD_6_ME} - Fig. \ref{Fig:Emb_form_LIPAD_27_ME}. The orange curves refer to the systems in which pebble accretion is disabled, whereas the blue curves refer to the systems in which pebble accretion is enabled. The distance is expressed in units of the embryos Hill radii.
}
\label{Fig:Orbital_separation}
\end{figure*}
\subsection{Cumulative distribution}
\label{Subsec:Cummulative_distribution}
As already seen in Fig. \ref{Fig:Embryo_number}, the total number of active embryos in the simulation decreases strongly if pebble accretion is included. In Fig. \ref{Fig:Cumulative_number} we show the cumulative number of initial embryos for the systems from Fig. \ref{Fig:Emb_form_LIPAD_6_ME} - Fig. \ref{Fig:Emb_form_LIPAD_27_ME}. The cumulative number without pebble accretion is shown by the orange dots; the blue dots refer to the simulations including pebble accretion. We also highlight where the innermost and outermost embryos form within 1 Myr for each setup via vertical dotted lines with corresponding colors. We find that, in terms of the initial formation of embryos, the outermost embryo forms further out in the system in which pebble accretion is neglected. For the formation of the innermost embryo, pebble accretion shows no dominant effect. Since the orbital separation is still the same in terms of the embryos' Hill radii, which scale linearly with the distance to the star, we find the same logarithmic distribution of cumulative embryos, but with a lower total number than in the simulations without pebble accretion.
\begin{figure*}[]
\centering
\begin{minipage}{.33\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{./Cumulative_number/mass_6_slope_10.pdf}
\end{minipage}
\begin{minipage}{.33\textwidth}
\includegraphics[width=1.0\linewidth]{./Cumulative_number/mass_6_slope_15.pdf}
\end{minipage}%
\begin{minipage}{.33\textwidth}
\includegraphics[width=1.0\linewidth]{./Cumulative_number/mass_6_slope_20.pdf}
\end{minipage}%
\\
\begin{minipage}{.33\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{./Cumulative_number/mass_13_slope_10.pdf}
\end{minipage}
\begin{minipage}{.33\textwidth}
\includegraphics[width=1.0\linewidth]{./Cumulative_number/mass_13_slope_15.pdf}
\end{minipage}%
\begin{minipage}{.33\textwidth}
\includegraphics[width=1.0\linewidth]{./Cumulative_number/mass_13_slope_20.pdf}
\end{minipage}%
\\
\begin{minipage}{.33\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{./Cumulative_number/mass_27_slope_10.pdf}
\end{minipage}
\begin{minipage}{.33\textwidth}
\includegraphics[width=1.0\linewidth]{./Cumulative_number/mass_27_slope_15.pdf}
\end{minipage}%
\begin{minipage}{.33\textwidth}
\includegraphics[width=1.0\linewidth]{./Cumulative_number/mass_27_slope_20.pdf}
\end{minipage}%
\caption{\small Cumulative number of initial embryos after 1 Myr for the systems from Fig. \ref{Fig:Emb_form_LIPAD_6_ME} - Fig. \ref{Fig:Emb_form_LIPAD_27_ME}. The orange dots refer to the systems in which pebble accretion is disabled, whereas the blue dots refer to the systems in which pebble accretion is enabled.
}
\label{Fig:Cumulative_number}
\end{figure*}
\section{Discussion}
\label{Sec:Discussion}
\subsection{The impact of pebble accretion}
\label{Subsec:pebble_accretion_impact}
We show that an active pebble flux has major consequences for the evolution of the planetary systems within the first 1 Myr. The accretion of pebbles leads to the formation of a lower number of substantially more massive embryos within a smaller semimajor axis interval of embryo formation. The physical spacing between embryos increases due to their higher masses in the pebble accretion runs. Their orbital separation, when expressed in Hill radii, remains unaffected and converges to $\approx$10$\,$R$_{Hill}$ in both cases. Embryos that form at larger distances (>1.5$\,$au) well after $T_{M_{disk}> 90\%}$ remain at low masses, as they fail to undergo significant pebble accretion. This behavior was already predicted in our first study, which neglected the accretion of pebbles but suggested that the disk formation rate is a valid constraint for pebble accretion due to its pebble flux dependency. We find that the outer edge of embryo formation moves slightly inwards when considering the accretion of pebbles. The formation of embryos at larger heliocentric distances within the lifetime of the pebble flux does not occur. The necessary size for significant pebble accretion is not reached at larger distances within the lifetime of our pebble flux.
\\
The formation of the first embryo occurs earlier in the inner region if pebble accretion is considered, and the embryos that form first end up having the highest masses after 1 Myr. The accretion of pebbles plays a major role once embryos have formed. Its impact on the local formation time, while noticeable, plays a subordinate role.
\\
Generally, we can say that the accretion of pebbles strongly favors the formation of super earths in the terrestrial planet region, but it does not enhance planetary embryo formation at larger distances.
\subsection{Consequences for the analytic embryo formation model}
\label{Subsec:toy_model}
In part I of our study we introduced an analytic model that succeeded in reproducing the results of the N-body simulations without pebble accretion. We refer to the local formation time, the spatial distribution and the total number of initial embryos. In brief summary, the formation of embryos in the analytic model is based on two criteria. Criterion I refers to the necessary local growth time. Criterion II determines the orbital separation to other embryos. The model uses a parameterized approach to compute the local growth time scales of planetesimals based on the local planetesimal surface density evolution. Embryos are placed if the analytic growth surpasses the mass of a planetary embryo and the orbital separation to the other already existing embryos is above an input parameter.
\\
As discussed in Sect. \ref{Subsec:pebble_accretion_impact}, the impact of pebble accretion is largely found in the mass of the embryos, not in their initial formation time. Criterion I of the embryo formation model will therefore still give the right results (even though we find slight deviations in the inner regions).
\\
The number of embryos and their spatial distribution are determined by Criterion II. Under the assumption that the already placed embryos grow by pebble accretion, their Hill radii increase. The physical spacing between the embryos thus increases. As a consequence, the total number of embryos decreases, since the semimajor axis interval of embryo formation does not increase (Criterion I). The analytic model for embryo formation from part I of our study is therefore still valid in a framework that includes pebble accretion.
\\
Implementing the analytic model into a global model for planet formation that includes planetesimal formation and pebble accretion will be subject to future studies.
\section{Summary and Outlook}
\label{Sec:Summary}
We study the impact of pebble accretion and planetesimal formation on the formation of planetary embryos in the terrestrial planet region. For this purpose we connected a one dimensional model for pebble flux regulated planetesimal formation and solid evolution with the N-body code LIPAD. We thus studied the growth and fragmentation of planetesimals with an initial size of 100$\,$km in diameter within the first million years of a viscously evolving circumstellar disk. In this paper we compare 18 different N-body simulations in which we vary the total mass in planetesimals, the radial pebble flux and the planetesimal surface density profile.
Building on the efforts of our previous study \citep{voelkel2020embI} we include a radial pebble flux and the accretion of pebbles during the formation of planetary embryos.
The main impacts on embryo formation by pebble accretion in the terrestrial planet region can be summarized as follows:
\begin{itemize}
\item Pebble accretion is highly beneficial for the formation of super-Earths.
\\
\item When compared with planetesimal accretion alone, the total number of embryos decreases strongly if pebble accretion is considered, while the individual embryos grow significantly more massive.
\\
\item Embryos that form early in the inner regions of the disk grow rapidly by pebble accretion, whereas the outer embryos that form later fail to do so.
\\
\item The outer edge of planetary embryo formation is not increased if pebble accretion is included. Our work indicates that it is not possible to form planetary embryos at larger distances (>2$\,$au) within the lifetime of a radial pebble flux for our assumptions.
\end{itemize}
Our findings from the first part of our study remain valid: the formation of planetary embryos occurs first in the innermost regions and then proceeds to larger distances. The number of embryos is given by the number of orbital distances within their possible formation zone. Since embryos grow more massive when pebble accretion is included, we find that the number of embryos decreases. The area in which they form, however, is not increased by pebble accretion, since pebble accretion only becomes an effective growth mechanism for sizes much larger than 100$\,$km. By the time the outer objects have grown to larger sizes by planetesimal collisions, the pebble flux has largely vanished.
Even though we can see that the first embryos form earlier in the inner parts of the disk for the simulations in which pebbles are accreted, this trend does not continue to larger distances. The conundrum of distant embryo formation within the lifetime of a radial pebble flux as found in part I of our study \citep{voelkel2020embI} remains. A possible solution to this issue could lie in locally enhanced substructures in the planetesimal surface density profile at larger distances, or in the formation of planetesimals that are initially large enough for pebble accretion. Future work will include the formation of planetary embryos in distant local substructures, such as in pressure bumps and around the water iceline \citep{Drazkowska2017}.
\section*{Acknowledgements}
\section{Introduction}\label{aba:sec1}
The formation of Primordial Black Holes (PBHs) has been postulated
in many theories of the early Universe (for a recent review see
Ref.~\refcite{C05}). Black holes of mass $M_{\rm bh}$ continually
emit Hawking radiation~\cite{H} with a temperature of $T_{\rm bh}=
1.06\ {\rm GeV}/\left( M_{\rm bh}/10^{13}\ {\rm g} \right)$ in the
form of all available fundamental particle species. The emitted
particles decay quickly on astrophysical timescales into $\gamma$,
$\nu_{e,\mu,\tau}$, $\bar{\nu}_{e,\mu,\tau}$, $p$, $\bar{p}$,
$e^+$ and $e^-$. PBHs with an initial mass\cite{MCP} of $M_*\sim
5\times 10^{14}$ g should be expiring today with a burst of high
energy particles including gamma-rays. The current upper limit on
the number expiring today per volume per unit time is\cite{MC}
\begin{equation}
R\lesssim 10^{-7}\eta_{\rm \, local}\, \rm{pc^{-3} \, yr^{-1}}
\label{aba:eq1}
\end{equation}
where $\eta_{\rm \, local}$ is the density enhancement of PBHs in
the local region. Typically $\eta_{\rm \, local}$ is $\sim 10^6$
(for clustering in the Galactic halo) or larger. Such PBH bursts
may be detectable by the Fermi Gamma-ray Space Telescope
observatory's Large Area Telescope (LAT). Conversely,
non-detection by the LAT may lead to tighter bounds on the PBH
distribution.
\section{PBH burst as seen by an Ideal Detector}
In the standard model\cite{MCP}, the total number of photons
emitted per second by a $T_{\rm bh}\sim 0.3 - 100$ GeV black hole
scales as\cite{MW}
\begin{equation}
\dot{N}_{\rm bh \, \gamma} \simeq 1.4\times
10^{29}\left(\frac{T_{\rm bh}}{\rm TeV}\right)^{1.6}\rm{\ s}^{-1}.
\label{aba:eq2}
\end{equation}
The number of photons per second per unit area reaching the Earth
from a PBH bursting at a distance $d$ from the Earth is
\begin{equation}
F_{\rm bh}=\frac{\dot{N}_{\rm bh \, \gamma}}{4\pi d^2}.
\label{aba:eq3}
\end{equation}
Let us assume an ideal detector of effective area $A_{\rm \, eff}$
which can detect every photon that falls on it. (If the detector
is non-ideal then the efficiency can be incorporated into the value
of $A_{\rm \, eff}$.) If the detector requires $X$ photons over
time $t$ to distinguish an incoming event as a burst, then to
detect a burst we require $F_{\rm bh}A_{\rm \, eff}t\geq X$. That
is the PBH must be closer than
\begin{equation}
d_{\rm \, D}\simeq\frac{2.6\times 10^{-2}}{\sqrt{X}}
\left(\frac{T_{\rm bh}}{\rm TeV}\right)^{0.8} \left(\frac{A_{\rm
\, eff}}{{\rm m}^2}\right)^{1/2}\left(\frac{t}{\rm
min}\right)^{1/2}\ {\rm pc}. \label{aba:eq4}
\end{equation}
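Equation (\ref{aba:eq4}) follows from solving the burst condition $F_{\rm bh}A_{\rm \, eff}t = X$ for $d$,
\begin{equation}
d_{\rm \, D}=\left(\frac{\dot{N}_{\rm bh \, \gamma}\,A_{\rm \, eff}\,t}{4\pi X}\right)^{1/2},
\end{equation}
with $\dot{N}_{\rm bh \, \gamma}$ from Eq. (\ref{aba:eq2}); evaluating the prefactor with $A_{\rm \, eff}$ in ${\rm m}^2$ and $t$ in minutes reproduces the coefficient $2.6\times 10^{-2}$ pc.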
What $T_{\rm bh}$ maximizes the chance of detection? The remaining
lifetime\cite{M2} of a PBH of temperature $T_{\rm bh}$ is $\tau_{\rm evap}\simeq 7.4\times 10^3
/\left(T_{\rm bh}/{\rm TeV}\right)^{3} f$ s where $f\left(T_{\rm bh}\right)$ weights the
number of emitted species. (The remaining lifetime of a 300 GeV, 1 TeV, or 5 TeV black hole is 1 hour, 1 minute, or 1 second, respectively.) Taking $t = \tau_{\rm evap}$, a PBH will be detected by the ideal detector if it is closer than
\begin{equation}
d_{\rm \, D}\simeq\frac{0.03}{\sqrt{X}} \left(\frac{T_{\rm
bh}}{\rm TeV}\right)^{-0.7} \left(\frac{A_{\rm \, eff}}{{\rm
m}^2}\right)^{1/2}\ {\rm pc}. \label{aba:eq5}
\end{equation}
Thus the detectability is maximized for the lowest $T_{\rm bh}$
black hole visible above the background and/or by using the
longest detector exposure time. For a detector of angular
resolution $\Omega$ to resolve the PBH above the gamma-ray
background $F_\gamma$, we also require that $F_{\rm bh}A_{\rm \,
eff}\geq F_\gamma \Omega A_{\rm \, eff}$. The PBH will be resolved
above the observed (EGRET) extragalactic background\cite{S}
\begin{equation}
\frac{dF_\gamma}{dE}\simeq 1.4\times 10^{-6} \left(\frac{E}{\rm
GeV}\right)^{-2.1}\ {\rm cm}^{-2}\ {\rm GeV}^{-1}\ {\rm s}^{-1}\
{\rm sr}^{-1} \label{aba:eq6}
\end{equation}
at energy $E$ by the ideal detector if the PBH is closer than
\begin{equation}
d_{\rm \, R}\simeq 0.03\left(\frac{\Omega}{\rm sr
}\right)^{-1/2}\left(\frac{E}{\rm
GeV}\right)^{0.55}\left(\frac{T_{\rm bh}}{\rm TeV}\right)^{0.8}\
{\rm pc} \label{aba:eq7}
\end{equation}
and $E$ is less than the average energy\cite{MW} of the PBH
photons $\overline{E}_{\gamma}\approx 10 \left(T_{\rm bh}/{\rm
TeV}\right)^{0.5}$ GeV. The isotropic diffuse gamma-ray
background, which is an upper limit on the extragalactic
background, recently measured\cite{A} by the LAT at mid-Galactic
latitudes is consistent with the earlier EGRET measurements Eq.
(\ref{aba:eq6}), although the extragalactic component
may\cite{SMR} be weaker by up to a factor of 2.
For a given detector, the scanned volume of space is then $V_{\rm
bh} = \left(\omega_{\rm A}/{\rm sr}\right)d_{\rm \, S}^{\,3}/3$
where $\omega_{\rm A}$ is the detector acceptance angle (field of
view) and $d_{\rm \, S} = \min (d_{\rm \, D}, d_{\rm \, R})$.
Extensive air shower arrays characteristically have $A_{\rm \,
eff}\gtrsim 10^4\ {\rm m}^2$, large $\omega_{\rm A}$ and small
$\Omega$ but very high threshold energy (typically $\gtrsim 10$
TeV) and hence are background-limited, while atmospheric Cerenkov
detectors\cite{AB} characteristically have $A_{\rm \, eff}\gtrsim
200\ {\rm m}^2$ and small $\Omega$ but high threshold energy
(typically $\gtrsim 100$ GeV although the Whipple SGARFACE
system\cite{LKS} has a threshold of 100 MeV) and very small
$\omega_{\rm A}$ ($\lesssim 10^{-2}$ sr). In contrast, the Fermi
LAT has\cite{AT} a smaller $A_{\rm \, eff}\sim 0.8\ {\rm m}^2$ but
large $\omega_{\rm A} \sim 2.4$ sr, finer source position angular
resolution ($0.3 - 2\, '$), low energy thresholds (down to 20 MeV),
good time resolution and is essentially background-free with
respect to burst sensitivity. Additionally, most of the photons
emitted by expiring $T_{\rm bh}\lesssim 1$ TeV PBHs are in the LAT
energy range (20 MeV - 300 GeV).
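As an illustration, the following sketch evaluates Eqs. (\ref{aba:eq4}) and (\ref{aba:eq5}) for the nominal LAT numbers quoted above ($A_{\rm \, eff}\sim 0.8\ {\rm m}^2$, $\omega_{\rm A} \sim 2.4$ sr); the photon threshold $X$ and the implicit choice $f=1$ are assumptions made for the example only:
\begin{verbatim}
# Sketch: PBH burst detection distance and scanned volume for
# nominal Fermi-LAT parameters (Eqs. 4 and 5; X and f assumed).
import math

def d_detect_pc(T_TeV, A_eff_m2=0.8, t_min=1.0, X=10.0):
    # Eq. (4): distance [pc] to collect X photons in time t
    return (2.6e-2 / math.sqrt(X)) * T_TeV**0.8 \
        * math.sqrt(A_eff_m2 * t_min)

def d_lifetime_pc(T_TeV, A_eff_m2=0.8, X=10.0):
    # Eq. (5): distance [pc] over the remaining PBH lifetime
    return (0.03 / math.sqrt(X)) * T_TeV**(-0.7) \
        * math.sqrt(A_eff_m2)

def scanned_volume_pc3(d_S_pc, omega_A_sr=2.4):
    # V_bh = (omega_A / sr) * d_S^3 / 3  (Sec. 2)
    return omega_A_sr * d_S_pc**3 / 3.0

d = d_lifetime_pc(1.0)          # a 1 TeV black hole
print(d, scanned_volume_pc3(d))
\end{verbatim}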
\section{Conclusions}
The Fermi LAT offers greater sensitivity to local PBH bursts
than ground-based detectors. We have proposed\cite{T} spectral lag
measurements (the temporal delay between high and low energy
pulses) of the incoming light curve in two different energy bands
as a method to identify PBH bursts. A PBH burst arriving at the
detector will exhibit positive to negative evolution with
increasing energy because the black hole temperature and
$\overline{E}_{\gamma}$ increase over time as the black hole loses
mass. Because spectral lag measurements require counts in only two
energy bands, and not the full spectrum, spectral lag can be
measured even for weak events that last for very short time
scales. Work is in progress to calculate quantitative values for
the PBH spectral lags for the characteristics of the Fermi LAT.
\section{Introduction}
An exceptional phase during early stellar evolution is the ejection of rapidly moving material into the interstellar medium (ISM) in the form of collimated protostellar jets.
More generally, jets are powerful signatures of astrophysical activity and are observed over a wide range
of luminosity and spatial scale.
Besides young stellar objects (YSOs), also micro-quasars and active galactic nuclei (AGN) are typical jet sources, while there are
indications of jet motion also for a few pulsars and for gamma-ray bursts
\citep{1974MNRAS.167P..31F, 1979Natur.279..701A, 1983ApJ...274L..83M,
1994Natur.371...46M, 1997ApJ...487L...1R}.
Astrophysical jets have been the subject of numerous studies investigating them from different points of view, such as the
process of jet launching, the jet propagation, and the interaction with the environment.
One of the earliest numerical simulations of radiatively cooling, supersonic jets was performed by \citet{1990ApJ...360..370B}.
Afterwards, the study of jet propagation using numerical simulations became feasible applying (M)HD codes developed by
many groups \citep{1993ApJ...410..686D, 1993ApJ...413..198S,1993ApJ...413..210S,1994ApJ...420..237S}.
Regarding the jet feedback into the ambient gas, one of the first numerical simulation studying the impact of stellar outflows
on driving the interstellar turbulence was performed by \cite{2000ESASP.445..457M}.
Further studies on scales beyond the jet launching area and considering the interaction of the jet and the ambient gas
were published subsequently (see e.g. \citealt{2007ApJ...668.1028B, 2009ApJ...692..816C, 2010MNRAS.402....7M, 2013MNRAS.429.2482P, 2014MNRAS.439.2903C, 2019ApJ...883..160S}).
On smaller scales, namely the jet formation and collimation scale, a break-through came by the simulations of \citet{1985PASJ...37...31S,1995ApJ...439L..39U,1997ApJ...482..712O}, numerically following the earlier, seminal
analytical approaches by \citet{1982MNRAS.199..883B,1983ApJ...274..677P,1985PASJ...37..515U,Uchida1985}.
Such simulations considered the jet formation from the {\em disk surface}, thus the acceleration
of jet material and its collimation by the magnetic field
(to cite a few, see \citealt{1993ApJ...410..218W, 1995ApJ...444..848L, 1997A&A...319..340F, 2002A&A...395.1045F,
2010ApJ...709.1100P, 2011ApJ...742...56V}).
However, in order to understand the very launching process of the jet -- that is the transition from accretion
to ejection -- it is essential to include the disk physics in the numerical treatment.
Today, numerical simulations of the accretion-ejection process play an essential role for the
understanding of jet launching.
A vast literature exists on magnetohydrodynamics (MHD) simulation on jet launching with ever improving physical complexity and also numerical resolution
\citep{Uchida1985, 1998ApJ...508..186K, 2002ApJ...581..988C, 2007A&A...469..811Z,2010A&A...512A..82M,
2012ApJ...757...65S, 2014ApJ...793...31S, 2018ApJ...861...11S}.
In general, these works study how the properties of the outflow that is formed from the disk are determined by
certain disk properties, namely the disk resistivity, the presence of a mean-field dynamo in the disk,
or a 3D circum-stellar disk in a Roche potential.
Furthermore, we know that stars may form as binaries (see section below).
In close binary pairs the axial symmetry of the jet source may be disturbed substantially.
Bipolar jets forming in a binary system may be affected substantially by tidal forces and torques,
that might be visible as 3D effects in the jet structure and jet propagation.
There are well-known observational signatures that strongly indicate non-axisymmetric features, like jet precession or a curved ballistic motion of the jet, suggesting that the jet source is part of a binary
system or even a multiple system
\citep{Fendt1998, 2000ApJ...535..833S, 2002MNRAS.335.1100C, 2004HEAD....8.2903M, 2007A&A...476L..17A, 2014xru..confE.147M,
2016A&A...593A.132P, Beltran2016, 2019ASSP...55...71M, 2019A&A...622L...3E, 2019IAUS..346...34M, 2021MNRAS.503..704M, 2021MNRAS.503.3145B, 2021MNRAS.tmp..799D}.
These papers study different binary systems hosting jets, either from observational data or by applying
simulation techniques.
Some of these jets are indeed found to show a non-axisymmetric structure, usually referred to as C-shape or S-shape,
which is thought to be a signature of jet precession or orbital motion of the jet source.
All these features indicate the presence of binary or multiple system.
The launching of jets in a binary system and their subsequent propagation naturally requires a three-dimensional (3D)
setup for the simulation.
The major difficulties here are
(i) the demand on CPU power,
(ii) the different kind of physics for outflow and disk (ideal MHD or diffusive MHD, respectively), and
(iii) the different time scales involved for the disk, the jet, and for the binary orbital motion.
Only recently, this could be achieved by \citet{2015ApJ...814..113S, 2018ApJ...861...11S} who tackled the problem of jet
launching -- thus the accretion-ejection connection -- in ``3D simulations''.
The emphasis of these papers was on global properties such as the accretion and ejection mass fluxes, the overall 3D structure and stability of disk and jet, and on global tidal effects on disk and jet.
Here we continue these investigations, now concentrating on a much deeper consideration of the local and global effects of the angular momentum budget in the disk and the outflow.
In particular, we will investigate the effect of the existence of disk spiral arms for the launching process and the substructures emerging in the jets.
We will further investigate how the global observable such as disk accretion rate and jet outflow rate are affected in comparison to a single-star accretion disk that launches an outflow.
Similar works have been published, studying hydrodynamic torques in circum-binary disks
\citep{1977MNRAS.181..441P, 1979MNRAS.186..799L, 2017MNRAS.468.1387L, 2017MNRAS.466.1170M,2019ApJ...875...66M, 2020A&A...635A.204A, 2020A&A...641A..64H},
or the torques exerted on accreting supermassive black hole binaries \citep{2013MNRAS.435.2633N} or the magnetic torque
in accretion disks of
millisecond pulsars \citep{2017MNRAS.469.4258T}.
Compared to previous studies of torques acting in a circum-binary disk of a binary system
(mostly performed in the hydrodynamic limit),
our simulations consider the full magnetic torque and the presence of the MHD disk wind in a circum-primary disk in a binary system.
In general, we apply an approach similar to \citet{2017MNRAS.466.1170M, 2019ApJ...875...66M},
meaning that we treat a similar set of equations, but apply them (i) to a circum-primary disk, and
(ii) extend them in order to investigate the magnetohydrodynamic torques.
With that, we can treat the launching and evolution of a disk jet from a circum-primary disk in a binary system.
Note that in this paper, we concentrate on the evolution of a disk magnetic field and ignore the magnetic field of the
central object that is subject to simulations of the disk-star interaction \citep{2013A&A...550A..99Z, 2018ApJ...857....4T,2019ApJ...878L..10T}.
Our paper is structured as follows.
In Section 2 we discuss the model setup for our simulations.
We do this in brief, mostly referring to our previous papers \citep{2015ApJ...814..113S,2018ApJ...861...11S} in which
the modeling and numerical details are extensively discussed.
We then describe the general disk and outflow dynamics in Section 3 in great detail, discussing the particularities of the 3D disk and outflow dynamics.
Section 4 presents an analysis of the local torques acting in the disk and the outflow,
while Section 5 discusses the global angular momentum budget.
In Section 6 we summarize the 3D effects concerning the mass and angular momentum fluxes.
Section 7 summarizes our paper.
For convenience we have compiled a table containing the various physical terms of the angular momentum budget that
are considered and put it in Appendix A.
Additional useful information and graphs are included in Appendix B and C.
\section{Model setup and equations}
This paper is the follow-up work of our recent paper \citep{2018ApJ...861...11S} in which we consider a binary system with a ``primary'' of mass $M_{\rm p}$ and a ``secondary'' of mass $M_{\rm s}$, separated by the distance $D$.
The primary is surrounded by a disk of initial size $R_{\rm out} < D/2$.
The location of the secondary is chosen to be outside the computational domain.
The orbital plane of the binary system can be chosen to be inclined with respect to the initial accretion disk
by an angle $\delta$; however, for simplicity we do not consider this option in the present paper.
The Lagrange points L1, L2 and L3 are outside the initial disk radius.
The Lagrange points L1 and L3 could be located in the computational domain.
\subsection{Governing equations}
In the current paper we analyze the results of our 3D MHD simulations, focusing on the physical process of jet launching
in the binary system.
In these simulations we applied the MHD code PLUTO \citep{2007ApJS..170..228M, 2012ApJS..198....7M}, version 4.3,
to solve the time-dependent, resistive, inviscid MHD equations,
accounting namely for the conservation of mass, momentum, and energy,
\begin{equation}
\frac{\partial\rho}{\partial t} + \nabla \cdot \left( \rho \vec u \right)=0,
\label{continuity}
\end{equation}
\begin{equation}
\frac{\partial \left( \rho \vec u \right) } {\partial t} +
\nabla \cdot \left( \rho \vec u \vec u \right) + \nabla P-\frac{ \left( \nabla \times \vec B \right) \times \vec B}{4 \pi}
+ \rho \nabla \Phi = 0.
\label{momentum_eq}
\end{equation}
\begin{multline}
\frac{\partial e}{\partial t} + \nabla \cdot \left[ \left( e + P + \frac{B^2}{8\pi} \right) \vec u - \left( \vec u \cdot \vec B \right) \frac{\vec B}{4\pi} + \left( {\eta} \vec j \right) \times \frac{\vec B}{4\pi} \right]\\
= - \Lambda_{\rm cool}.
\end{multline}
Here, $\rho$ is the mass density, $\vec u$ is the velocity, $P$ is the thermal gas pressure,
$\vec B$ stands for the magnetic field, and $\Phi$ denotes the gravitational potential.
The electric current density $\vec j$ is given by Amp\'ere's law
$\vec j = \left( \nabla \times \vec B \right) / 4\pi$.
The total energy density is
\begin{equation}
e = \frac{P}{\gamma - 1} + \frac{\rho u^2}{2} + \frac{B^2}{8\pi} + \rho \Phi.
\end{equation}
We consider an ideal gas with a polytropic equation of state $P = (\gamma - 1) u$ with
$\gamma = 5/3$ and the internal energy density $u$.
This is a further difference to \citet{2017MNRAS.466.1170M} and \citet{2019ApJ...875...66M}, who both consider a locally
isothermal gas.
The locally isothermal approach is a typical assumption for disk simulations, while for wind or jet launching simulations the literature usually assumes a polytropic gas.
The gas temperature is implicitly given by the polytrope, $T \propto \rho^{\gamma-1} \propto P/\rho$,
and is thus not considered as a separate variable.
Within our approach, heating (e.g. ohmic, compressional, numerical) will affect the dynamics via the gas pressure.
We consider a time dependent gravitational (Roche) potential $\Phi= \Phi_{\rm eff}$ in the equations.
Since the origin of our coordinate system is located at the primary, we have to consider the time variation of
the gravitational potential in that coordinate system.
We prescribe the position of the secondary initially $(t=0)$ along the $x$-axis.
Thus, its position vector varies over time as
\begin{equation}
\vec{D}= \hat{x} D \cos{\omega t} +
\hat{y} D \sin{\omega t}\cos{\delta} +
\hat{z} D \sin{\omega t}\sin{\delta},
\label{roche_potential}
\end{equation}
with the inclination angle $\delta$ of the binary orbit with respect to the circum-primary disk.
Here $\hat{x}$, $\hat{y}$ and $\hat{z}$ denote the unit vectors in Cartesian coordinates.
In this paper we discuss a co-planar geometry, $\delta = 0$.
The effective potential in a binary system at a point with position vector $\vec{r}( x, y, z )$ is
\begin{equation}
\Phi_{\rm eff} = - \frac{G M_{\rm p}}{|\vec r|} - \frac{G M_{\rm s}}{|\vec{r}-\vec{D}|}
+ \frac{G M_{\rm s}}{|\vec{D}|^3} \left(\vec{r} \cdot \vec{D}\right).
\label{eq:phi_eff}
\end{equation}
The first term in Equation~\ref{eq:phi_eff} is the gravitational potential of the primary,
while the remaining terms describe the tidal perturbations due to the orbiting secondary.
The last ``indirect'' term accounts for the acceleration of the origin of the coordinate
system (see also \citealt{1996MNRAS.282..597L, 2018ApJ...861...11S}).
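As an illustration, the following minimal sketch evaluates the potential along the $x$-axis at $t=0$ and locates the L1 point numerically (code units with $GM_{\rm p}=1$). Note that Equation~\ref{eq:phi_eff} is written in the non-rotating frame; for locating the Lagrange points one adds the centrifugal term of the co-rotating frame:
\begin{verbatim}
# Sketch: co-rotating-frame Roche potential along the x-axis
# and a numerical estimate of L1 (code units, G*M_p = 1).
import numpy as np

def phi_rot_x(x, D=150.0, q=1.0, GM_p=1.0):
    GM_s = q * GM_p
    omega2 = (GM_p + GM_s) / D**3      # Keplerian binary orbit
    x_com = D * q / (1.0 + q)          # center of mass
    return (-GM_p / np.abs(x) - GM_s / np.abs(D - x)
            - 0.5 * omega2 * (x - x_com)**2)

x = np.linspace(5.0, 145.0, 200001)
r_L1 = x[np.argmax(phi_rot_x(x))]     # L1 = maximum between stars
print(r_L1)                           # 75 for q = 1, D = 150
\end{verbatim}
For the parameters of run {\em a0} ($q=1$, $D=150$) this recovers $r_{\rm L1} = 75$, as listed in Table~\ref{tbl:0}.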
The evolution of the magnetic field is described by the induction equation,
\begin{equation}
\frac{\partial \vec B}{\partial t} - \nabla\times \left( \vec u \times \vec B - \eta \vec j \right) = 0.
\end{equation}
The magnetic diffusivity can be defined most generally as a tensor $\bar{\bar{\eta}}$
(see our discussion in \citealt{2012ApJ...757...65S}).
Here, for simplicity we assume a scalar, isotropic magnetic diffusivity as a function of space
$\eta_{ij} \equiv \eta(r,z)$.
The cooling term $\Lambda_{\rm cool}$ in the energy equation can be expressed in terms of ohmic heating,
$\Lambda_{\rm cool} = g \Gamma$, with $\Gamma = ({\eta} \vec j) \cdot \vec j$, and with $g$ measuring
the fraction of the magnetic energy that is radiated away instead of being dissipated locally.
For simplicity, we again adopt $g=1$; thus we neglect ohmic heating in the dynamical evolution of
the system.
\subsection{Numerical specifics}
For the numerical specifics such as boundary conditions, initial conditions, and the numerical grid we refer to our
previous paper \citep{2018ApJ...861...11S}.
Here we want to emphasize that all simulations were performed applying Cartesian coordinates.
This is essential in order to exclude any artificial effect of the rotational axis on the 3D structure
of the outflow.
However, in the present paper we will mainly discuss the evolution of properties involving radial motions (accretion, ejection)
and toroidal motions (orbital motions, angular momentum and torques with respect to the original rotational axis).
In particular, we also need to integrate in $\phi$-direction.
We therefore need to transform the required physical variables from the Cartesian to a cylindrical coordinate system.
As is well known, this transformation holds some pitfalls arising from the treatment of the trigonometric functions in the four quadrants.
We have therefore thoroughly tested our transformation routines in order to make sure that we deal with the proper physical quantities.
We have also applied the interpolation tool provided by PLUTO to interpolate the variables that were
evolved by the simulation on a Cartesian grid onto a cylindrical coordinate system.
This option was used in particular when we further needed to integrate global properties such as the disk angular
momentum at a certain radius, or when plotting variables along the azimuthal angle $\phi$.
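A minimal sketch of the scalar and vector transformation we apply in post-processing is the following; the quadrant ambiguity mentioned above is resolved by the two-argument arctangent:
\begin{verbatim}
# Sketch: Cartesian-to-cylindrical transformation of positions
# and velocity components, as used in post-processing.
import numpy as np

def cart_to_cyl(x, y, ux, uy):
    r = np.sqrt(x**2 + y**2)
    phi = np.mod(np.arctan2(y, x), 2.0 * np.pi)  # phi in [0, 2 pi)
    u_r = (x * ux + y * uy) / r                  # radial component
    u_phi = (x * uy - y * ux) / r                # toroidal component
    return r, phi, u_r, u_phi
\end{verbatim}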
\begin{figure*}
\centering
\includegraphics[width=18cm]{./f1.pdf}
\caption{Time evolution of the disk-jet structure in a binary system.
Shown are snapshots of the mass density in log scale for simulation run {\em a0} at $t = 600, 1500, 2500$ in the $xz$-plane,
in the $yz$-plane, and in the mid-plane $z=0$.
The position of the $L_1$ point is indicated with the {``}+{''} sign.}
\label{fig:nc3_xy_rho_com}
\end{figure*}
\subsection{Units and normalization}
The simulations are performed in code units.
To convert to astrophysical units for a protostellar system, we may adopt an inner disk radius
$r_{\rm i} = {0.1~\rm au}$ and the Keplerian velocity $u_{\rm K,i}$ at the inner disk radius,
resulting in a dynamical time scale of
\begin{equation}
t_{\rm i} = \frac{r_{\rm i}}{u_{\rm K,i}}
= 1.8 \left( \frac{r_{\rm i}}{0.1\rm au} \right)^{3/2}
\left( \frac{M_{\rm p}}{M_\odot} \right)^{-1/2} {\rm days }.
\label{eq:time-unit}
\end{equation}
Thus, a running time of the simulation of $5000\,t_{\rm i}$ corresponds to 25 years for a
typical protostellar system.
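A short sketch of this conversion (Eq.~\ref{eq:time-unit}, constants in SI units) reads:
\begin{verbatim}
# Sketch: code time unit in physical units for a protostellar
# system (Eq. for t_i); constants in SI.
import math

G, M_SUN, AU, DAY = 6.674e-11, 1.989e30, 1.496e11, 86400.0

def t_i_days(r_i_au=0.1, M_p_msun=1.0):
    r = r_i_au * AU
    u_K = math.sqrt(G * M_p_msun * M_SUN / r)  # Keplerian speed
    return r / u_K / DAY

print(t_i_days())                   # ~1.8 days
print(5000 * t_i_days() / 365.25)   # ~25 years
\end{verbatim}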
Other systems of interacting binaries are Cataclysmic Variables (CVs), consisting of a white dwarf (WD)
as a primary and a late type main sequence star as secondary in close separation.
The secondary may serve as source of material that is accreted onto the primary via an accretion disk.
The typical orbital period observed for CVs is about a few hours.
In order to scale our simulations to a CV system we may choose an inner disk radius to be several WD radii,
thus $r_{\rm i} \simeq 5\times 10^4 {\rm km}$ \citep{2016AstL...42..379S}.
The astrophysical time scale of our simulations applied to CVs is
\begin{equation}
t_{\rm i} = 0.85 \left( \frac{r_{\rm i}}{5\times10^4\rm km } \right)^{3/2}
\left( \frac{M_{\rm p}}{M_\odot} \right)^{-1/2} {\rm hours. }
\end{equation}
More details on the normalization of the variables are provided in Appendix C.
\begin{table}
\caption{Characteristic simulation parameters:
initial (maximum) plasma-beta at the inner disk radius, $\beta_{\rm i}$,
binary separation $D$,
inclination angle between binary orbit and the disk mid-plane, $\delta$,
mass ratio between secondary and primary, $q\equiv M_{\rm s}/M_{\rm p}$,
radial location of the Lagrange points (orbital plane), $r_{\rm L1}$, $r_{\rm L3}$,
and the orbital period $T_{\rm b}$ (in units of $t_{\rm i}$).
The initial aspect ratio of the disk is $\epsilon = 0.1$,
and the initial outer disk radius $r_{\rm out} = 65$.
All values are given in code units.}
\begin{center}
\begin{tabular}{lccccccccl}
\hline
\hline
\noalign{\smallskip}
Run & $\beta_{\rm i}$ & D & $\delta$ & $ q $ & $r_{\rm L1}$ & $r_{\rm L3}$ & $T_{\rm b}$ \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\noalign{\smallskip}
{\em a0} & 20 & 150 & 0 & 1 & 75 & 105 & { 8160} \\
{\em a1 } & 20 & single & - & - & - & - & - \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\end{tabular}
\end{center}
\label{tbl:0}
\end{table}
\section{A 3D jet launching disk simulation}
In this section we present and discuss the different physical variables of 3D MHD simulations of jet launching in binary systems.
For details we refer to our past publications \citep{2015ApJ...814..113S,2018ApJ...861...11S}.
We first briefly summarize the general evolution of the accretion-ejection structure by discussing our reference simulation
{\em a0} for which the binary orbit and the disk forming jet are co-planar
(see Table \ref{tbl:0}).
This simulation shows all the tidal effects caused by the secondary, but not the effect of a disk re-alignment and a subsequent
3D disk or jet precession, as this would require an inclination between the jet-launching disk and the orbital plane.
As a general feature of the disk evolution, we find that the disk size decreases and the disk finally becomes
confined to a size within the Roche lobe.
From an axisymmetric initial state the disk evolves into an asymmetric structure after $t=500$, developing
a spiral arm structure that grows in time.
\begin{figure*}
\centering
\includegraphics[width=18cm]{./f2.pdf}
\caption{Time evolution of the disk-jet structure in a binary system.
Shown are snapshots of the gas pressure in log scale for simulation run {\em a0} at $t = 600, 1000, 2500$ in the mid-plane $z=0$.
}
\label{fig:nc3_xy_prs_com}
\end{figure*}
\subsection{Evolution of the disk spiral arms}
We now consider in particular the evolution of the disk spiral arms for our reference run {\em a0}.
We first look at snapshots of the mass density (Fig.~\ref{fig:nc3_xy_rho_com}) and the gas pressure (Fig.~\ref{fig:nc3_xy_prs_com})
across the equatorial plane, $z=0$, for three exemplary time steps.
The spiral arm pattern appears at the same position in both maps, simply indicating that
the density wave and the pressure wave follow the same pattern speed.
We notice that the spiral arms first evolve smoothly as density waves but then develop
a shock structure with a jump in pressure and mass density.
The shock front allows for a clear definition of the spiral arm position.
The gas accumulates at the shock front, making the spiral arm structure more prominent over time.
It is clearly seen that with time the disk spiral arms become denser and more prominent and represent
the main structural feature of the disk.
The rotation of the spiral arms is synchronized with the orbital motion of the binary.
The magnetic field lines of the accretion-ejection system are shown in Figure~\ref{fig:nc3_xy_rho_com}
for $t=1500$ for the $xz$-plane.
It shows a smooth, almost axisymmetric pattern that is typical for time scales up to $t=1500$.
However, when the disk spiral arms become more prominent,
we will see the 3D effects of the dynamical evolution more clearly
(for reference, see \citealt{2018ApJ...861...11S}).
In order to analyze the motion and the pattern speed of the spiral arms and the gas material during the evolution of the accretion disk,
we first follow the evolution of the density peaks inside the spiral arms.
The (``northern'') spiral arm starts forming at $t \simeq 1000$, while at the opposite side, the signature of
a spiral arm appears somewhat later, at $t \simeq 1500$.
We believe that this time difference arises from the fact that the ``southern'' part of the disk is simply farther away from the companion star, such that the tidal forces that form the spiral wave are weaker and thus need more time to take effect.
Also, the ``northern'' arm is directed towards the secondary (see the moving position of the L1 point),
and thus feels stronger tidal forces.
\subsection{Disk and outflow dynamics}
In order to gain insight into the evolution of the spiral arm pattern, we now consider the dynamics of the accretion disk and the outflow in more detail.
We first consider the angular profile of mass density $\rho(\phi)$.
For our numerical estimates we consider Figure~\ref{fig:vpatt} (first row).
Similarly, Figure~\ref{fig:vpatt} (bottom row) shows also the angular profile of the rotational
velocity $u_{\phi}$ for $r=20$.
The spiral arms are clearly detected by the peaks and dips in the corresponding angular profiles.
By comparing the location of these features over time we can estimate the pattern speed of the spiral arm.
When comparing the profile of mass density and the rotational velocity along circles of different radius, $\rho(\phi; r)$, and $u_{\phi}(\phi; r)$ (not shown), we clearly observe that these profiles differ for different radii.
Furthermore, we find that the density peak(s) at a certain radius move in angular direction over time, indicating the rotation
of the spiral arm pattern.
For instance, at $t= 604$ the mass density peaks at larger radii, $r>5$,
indicating that the arm is gradually forming at larger radii (we do not show the $\phi$ profiles for all radii).
Evidently, this once again shows the spiral structure of the arm.
We now derive some numerical estimates, considering Figure~\ref{fig:vpatt}.
From the motion of the density peaks we can derive the pattern speed of the spiral arm(s),
\begin{equation}
u_{\rm patt} = r \frac{\Delta \phi}{\Delta t}.
\end{equation}
For $r= 20$ we estimate the pattern speed $u_{\rm patt}$ focusing on the mass density (mid-plane) at times
$t=604$ and $t=1034$.
We find the peak of the density profile moving in $\phi$ direction from $\phi_1=3.3$ to $\phi_2=4.2$,
resulting in a circular pattern speed $u_{\rm patt} = 0.041$.
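Explicitly,
\begin{equation}
u_{\rm patt} = r\,\frac{\Delta \phi}{\Delta t} = 20 \times \frac{4.2-3.3}{1034-604} \approx 0.04 ,
\end{equation}
consistent with the quoted value.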
The Keplerian velocity at the disk mid-plane at $r=20$ is $u_{K} = 0.22$, and thus a factor of about five larger than the pattern speed at this radius.
This is typical for any orbiting wave pattern, while the exact pattern speed of course depends on the forcing involved \citep{1964ApJ...140..646L}.
Here the wave pattern is triggered by the orbiting secondary with an orbital period of $T_{\rm b} = 8160$ time units (see Table~\ref{tbl:0}).
Overall, this all looks reasonable and shows again that the arm is indeed a pattern that is not co-rotating with the
material, but is synchronized with the companion motion.
\begin{figure}
\includegraphics[width=1.\columnwidth]{./f3.pdf}
\caption{Angular profiles of density and rotational velocity at $r=20$.
Shown are mid-plane values at times $t=604, 1030$.}
\label{fig:vpatt}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=18cm]{./f4.pdf}
\caption{Radial velocity field $u_r(r,\phi)$ in the disk mid-plane at $t=1500$.
For convenience, two representations are shown.
The standard image in Cartesian coordinates (left) emphasizes the spiral structure of the velocity field.
For the image that is shown in cylindrical coordinates (right) we have chosen a color bar that emphasizes the inflow-outflow
structure along the mid-plane.
The '+' symbol indicates the position of the L1 point.}
\label{fig:vr_binary2}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=18cm]{./f5.pdf}
\caption{Accretion-ejection velocity field. Shown are the snapshots of the radial velocity $u_r$.
Compare to Fig.~\ref{fig:nc3_xy_rho_com}, middle, for a face-on view of gas density.
The angle $\Phi =0$ is measured from the $x$-axis.
Different $r$-$z$ planes correspond to different angles $\phi$.
In Figure~\ref{fig:vr_single} we show for comparison the $u_r$-distribution for a single star simulation.
Colors are enhanced to demonstrate inflow (blue) and outflow (red).
}
\label{fig:vr_binary}
\end{figure*}
We can now quantify the growth rate of the spiral arms by simply comparing the height and width of the density peak over time, or, more accurately, by integrating the density profile over the width of the arm.
For this we look again at Fig.~\ref{fig:vpatt} and estimate how the peaks in the density profile grow over time.
As an example we consider the spiral arm located at $\phi = 3.3$ at $r=20$ at $t=604$, which is moving to $\phi =4.2$ at $t=1034$.
We integrate the mass under the density peaks within a control volume, here defined by the
two minima along $\phi$, by $\Delta r =1$, and integrating from $z=-1$ to $z=1$, thus
\begin{equation}
\Delta M = \int_{r=20}^{r=21} \int_{\phi_{\rm min1}=2.4}^{\phi_{\rm min2}=5.2} \int_{z=-1}^{z=1} \rho~r~d\phi~dz~dr.
\label{secondinteg}
\end{equation}
Since we look for an estimate only, we consider the high density area close to the mid-plane.
We refer to the growth rate as the rate of change of the {\em local excess} mass that is carried by the spiral arm, meaning the total mass enclosed by the arm
(in the control volume with $\Delta r=1$),
reduced by the average disk mass.
Here, we consider the azimuthally averaged density as a proxy for the underlying disk.
We thus measure a {\em local} growth rate of the excess mass of the arm of
$\dot{M}_{\rm{arm}} = {\Delta M}_{\rm{arm}} / \Delta t = 0.22 / 430 = 5 \times10^{-4}$ in normalized units.
Since the interval $\Delta t = 430$ corresponds to 0.77 Keplerian periods at $r=20$ (where $T_{\rm K} = 2\pi \, r^{3/2} \simeq 562$), this translates into
$\dot{M}_{\rm{arm}} = 0.22/0.77 = 0.285$ measured per Keplerian period at $r=20$.
These numbers make sense only when compared to the local disk mass in the control volume,
$\Delta M =2.95$, thus referring to a growth rate of about 10\% per Keplerian period at $r=20$.
We note that the timescale of the growth rate is consistent with the sound crossing time across the disk (of aspect ratio $H/R \simeq 0.1$).
Naturally, the same numbers hold for the growth rate of the disk density.
Note, however, that the disk mass (and thus the mean disk density) decreases in the long term due to
ongoing accretion and ejection (see our discussion in \citealt{2018ApJ...861...11S}).
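A minimal sketch of this integration, Eq.~\ref{secondinteg}, on a uniform cylindrical post-processing grid (hypothetical array layout rho[i\_r, i\_phi, i\_z]) is:
\begin{verbatim}
# Sketch: excess mass of the arm (Eq. for Delta M) on a uniform
# cylindrical grid; background = azimuthally averaged density.
import numpy as np

def arm_excess_mass(rho, r, phi, z):
    dr, dphi, dz = r[1] - r[0], phi[1] - phi[0], z[1] - z[0]
    mr = (r >= 20.0) & (r <= 21.0)
    mp = (phi >= 2.4) & (phi <= 5.2)
    mz = np.abs(z) <= 1.0
    sub = rho[np.ix_(mr, mp, mz)]
    bg = rho[mr][:, :, mz].mean(axis=1, keepdims=True)
    dV = r[mr][:, None, None] * dr * dphi * dz  # r dr dphi dz
    return ((sub - bg) * dV).sum()
\end{verbatim}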
Now we discuss the overall pattern of the radial velocity field.
Figure~\ref{fig:vr_binary2} shows the radial velocity distribution in the disk mid-plane, $u_r(r,\phi)$, in two representations:
in Cartesian coordinates, emphasizing the spiral structure of the velocity field (left), and in cylindrical coordinates with a
color bar that emphasizes the inflow-outflow structure along the mid-plane (right).
We see that there are separate streams of opposite radial direction in the disk mid-plane.
In particular, we recognize that close to the spiral arm the direction of the gas motion changes. Interestingly, we find a positive radial motion (red area, left panel) in the direction of the secondary. This area is outside the Roche lobe (compare to the position of the L1 point in Fig.~\ref{fig:nc3_xy_rho_com}).
Further, material is spiraling in (bluish colors) along the ``left'' spiral arm, and moving
out along the ``right'' spiral arm (yellowish colors).
In Figure~\ref{fig:vr_binary} we display the radial velocity field of the disk-jet structure in the meridional plane.
Although the radial velocity pattern looks quite unusual, this is not an artifact of our data handling. We may compare this to the 3D simulation results for a single star (see Appendix, Figure~\ref{fig:vr_single}), which show a regular accretion pattern (a negative $u_r$) in almost perfect axisymmetry.
In contrast, the accretion velocity for the binary star simulation looks drastically different (see Figure~\ref{fig:vr_binary}).
Here, positive radial velocities exist in the disk, indicating ``excretion'' channels along certain angular directions.
These channels are most clearly indicated in Figure~\ref{fig:vr_binary2} (right panel) that clearly shows radial layers of inverse radial velocity.
Accretion happens (at this time $t=1500$) along $\phi = 90\degr, 270\degr$, while excretion dominates in channels along $\phi = 0\degr, 180\degr$.
Most probably owing to the spiral geometry of the disk structure, these channels are not completely aligned with the radial direction (thus not located at constant angle).
Indeed these channels also follow a spiral structure, as in Figure~\ref{fig:vr_binary2} they are not oriented parallel to the horizontal axis.
We now discuss the rotational velocity distribution.
Comparing the angular profiles of the rotational velocity $u_{\phi}(\phi)$, we recognize that
these profiles change for different radii.
In Figure~\ref{fig:vpatt} we show the angular profile of rotational velocity at $r=20$.
While the disk material follows a more or less constant rotation profile along $\phi$ for small radii,
$r=5, 15$, for larger radii this profile is substantially different (the $\phi$ profiles for all radii are not shown).
The profile of the rotational velocity follows very closely the profile of the density (see Fig.~\ref{fig:vpatt}).
Peaks in the rotational velocity profile indicate the location of the spiral arm, while these peaks also indicate a very strong shear.
The enhanced {\em orbital} velocity present in the disk itself triggers further angular momentum exchange
(and also heating in the case of a viscous approach, which we do not follow here).
We also observe a combination of super and sub-Keplerian velocities
that reflects different behaviors of mass flow in the disk.
Concerning the angular momentum balance, the material in super-Keplerian regions has gained angular momentum.
Quoting \citet{2001LNP...573...69B}, we stress that spiral waves in disks carry a negative angular momentum and their dissipation
leads to accretion of the fluid supporting the waves onto the central object.
This issue has been addressed also before, see e.g.~\citet{2016ApJ...823...81J}.
\subsection{Spiral arms injected into the outflow}
In the last section, we have analyzed the structure and evolution of the disk spiral arms.
We now consider how this structure that is generated in the disk, is further transferred into the outflow.
\begin{figure*}
\centering
\includegraphics[width=18cm]{./f6.pdf}
\caption{Velocity field in the disk and the jet at $t=1500$.
Shown is the rotational velocity $u_{\phi}$ of the jet material (at $z=25$) and in the disk mid-plane.
For comparison we display also the Keplerian velocity $u_{\rm K}$ at the disk mid-plane at t=1500.}
\label{fig:vtor1500}
\includegraphics[width=18cm]{./f7.pdf}
\caption{Velocity field in the disk and the jet at $t=1500$.
Shown are the radial velocity $u_r$ of the jet material (at $z=25$) and the vertical velocity $u_z$ of
the jet material and at the disk mid-plane.}
\label{fig:vr1500}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=18cm]{./f8.pdf}
\caption{Evolution of the spiral structure injected into the jet.
Shown are the 2D slices of a 3D snapshot at time $t=1500$ of the density (in log scale)
at different height within the disk-jet system, $z=3$, $z=33$, $z=58$ and $z=83$.
At this time the spiral wall is fully developed with an angular shift of the spiral geometry
between the different layers, corresponding to a time lag caused by the jet propagation.
}
\label{fig:3dview_box_diffz}
\end{figure*}
\begin{figure}
\includegraphics[width=1.0\columnwidth]{./f9.pdf}
\caption{Angular profiles of density and rotational velocity at $r=20$ in the jet (at $z=25$)
for two different evolutionary time steps, $t=600, 1030$. }
\label{fig:vpatt-jetz30}
\end{figure}
\begin{figure*}
\includegraphics[width=18cm]{./f10.pdf}
\caption{Magnetic field components in the disk-jet structure at $t=1500$.
Shown are the radial, the vertical and the toroidal component of the magnetic field
in the disk mid-plane $z=0$ (top), and the jet material at $z=25$ (bottom). }
\label{fig:brbphibzjetdisk}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=18cm]{./f11.pdf}
\caption{Specific angular momentum distribution in the disk mid-plane at time $t=1500$ in log scale.
Shown are all terms contributing to equation~\ref{llintime} in consecutive order (from upper left to lower right).
For comparison these terms are displayed in
Table~\ref{tbl:terms} (top).
}
\label{fig:eq16_disk}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=18cm]{./f12.pdf}
\caption{Specific angular momentum distribution in the jet at time $t=1500$.
The presentation is similar to Figure~\ref{fig:eq16_disk}, however the variables displayed are now shown averaged
from $z=25$ to $z=35$ in the jet flow.
The reason is their pixelized distribution that is resulting from the application of the gradients involved which are calculated in post-processing.}
\label{fig:eq16_jet}
\end{figure*}
Naturally, we would expect that the local launching conditions for the disk wind are reflected in the disk structure and the
dynamics of the disk wind -- in this case implying that the wind or jet structure also carries a density wave, i.e.\ a spiral arm.
This is exactly what is demonstrated by our simulations.
While the effect might not come as a surprise, its impact on the disk and jet dynamics has never been discussed before.
Obviously, a spiral wave structure of the outflow will have severe implications concerning its stability and
also the overall angular momentum budget of the disk-jet system (see next section).
We first discuss the overall structure of the outflow launched from the disk.
Figure~\ref{fig:vtor1500} and \ref{fig:vr1500} display snapshots of the radial velocity, the vertical velocity and the
rotational velocity at $z\simeq 25$.
Again, we see a spiral structure as the prominent features in the velocity pattern which is now inside the jet.
The location of the spiral arms is consistent with the position of the density peaks discussed above.
While we usually discuss the structure of a spiral {\em arm} in the disk, the corresponding structure along the jet is that
of a wall (of increased density), to which we will refer in the following as a {\em spiral wall}.
When the pattern speed of the jet spiral structure varies {\em along} the jet, that spiral wall will follow
a {\em helical} structure (see discussion below).
As the disk spiral arms become denser in time, the same happens with the jet spiral walls, which also rotate
synchronized with the orbital motion of the binary.
A 3D visualization of the evolution of the spiral features along the disk-jet structure is shown in Fig.~\ref{fig:3dview_box_diffz},
where we display consecutive 2D slices along the 3D disk-jet system.
In particular, the plot visualizes the lag between the spiral features from jet layer to jet layer, which is
directly connected to the propagation of the jet material.
This strongly supports our claim that the spiral structure is {\it injected} into the outflow from the disk.
We now quantify the spiral wall structure in the outflow and compare it to the disk spiral arms.
We thus focus on the angular profiles of density and rotational velocity at $r=20$ inside the jet, here at $z=25$
(see Fig.~\ref{fig:vpatt-jetz30} and compare it to the same plots for the disk area i.e., Fig.~\ref{fig:vpatt}).
We find that the position of the minimum in the mass density of the disk spiral arm area and that of the jet spiral wall area are very close.
The same holds when comparing the angular profiles of the rotation velocity $u_\phi(\phi)$.
Essentially, the spiral features in the disk and in the jet follow the same kind of time evolution, meaning that the
jet spiral arms do not lag the disk spiral arms, and establishing an almost stationary
\footnote{Our simulations do not reach a steady state because of two reasons.
Firstly, because of the limited simulation time.
Secondly, and more physically, due to the fact that the disk mass decreases over time due to accretion and ejection.}
structure, such that the spiral features in the jet are almost co-rotating with the disk spiral features
(as discussed above).
Roughly speaking, the small change in the position angle of the spiral arm along the wall arises from the fact that the jet
dynamical time scale is much faster than the disk dynamical time scale.
As in any accretion-ejection scenario \citep{2004ApJ...601...90C, 2007A&A...469..811Z, 2012ApJ...757...65S} the dynamical
time scale of the jet is basically defined by the Alfv\'enic
time scale, and is much faster than the dynamical time scale of the disk which follows the dissipative time scales.
The jet Alfv\'enic time scale is $\tau_{\rm A} = \Delta L / v_{\rm A}$,
where we consider $\Delta L \simeq 10$, that is, either the jet length close to the disk or the jet width.
At this point the jet is trans-sonic, $v_{\rm A} \simeq 0.125$, thus $\tau_{\rm A} \simeq 10/0.125 = 80$.
This is the time scale on which the internal jet structure can be causally changed on these scales.
This time scale is similar (given the trans-sonic jet nature) to the kinematic time scale
$\tau_{\rm kin} \equiv \Delta L / v_{\rm jet} \simeq 25/0.2 = 125$ (here with the jet length $\Delta L \simeq 25$), which is the typical time scale for jet propagation.
Launching of the jet out of the disk, i.e.\ the mass transfer from accretion into ejection, happens on a
resistive time scale $\tau_{\eta} \simeq (\Delta L)^2 / \eta$.
Assuming $\Delta L \simeq 1$ for the launching area and an average magnetic diffusivity $\eta \simeq 0.03$, the time scale for
the launching process is about $\tau_{\eta} \simeq 33$, thus shorter than the time scale we observe for spiral arm formation.
The jet mass flux is certainly fed by the accretion disk.
The feeding is, essentially, a local process, such that each surface area of the disk feeds the outflow
that is launched from there.
Once injected, the material is rapidly accelerated along the outflow, and any imprint of the injection
process is propagated to larger altitudes.
However, the propagation time of the outflow, together with the spiral arm orbital time scale will
lead to a lag between the spiral arm structure in the disk and in the outflow.
In summary, any structure that develops in the disk is ``immediately'' propagated along the wind.
Nevertheless, on very large spatial scales we would expect the jet spiral arms to lag behind the disk spiral
arms, assuming that the spiral structure and the jet survives that long.
We find that the vertical jet speed (measured at $z=25$, see Fig.~\ref{fig:vr1500}) is more
smoothly distributed than the poloidal velocity field and the density in the disk mid plane.
In contrast, the toroidal velocity in the disk is smoother than that for the jet. This is also seen in the angular profiles in
Fig.~\ref{fig:vpatt}.
We explain this by the observation that at the position of the density spiral arms, also the poloidal magnetic field is enhanced
(accumulated).
This can be seen in Fig.~\ref{fig:brbphibzjetdisk}, that shows a distinct spiral pattern in the disk for all three field components.
Inside the jet, the spiral features are also present, but are much broader.
Thus, for launching and accelerating the higher mass flux out of the spiral arm to similar
speed, a stronger magnetic flux is available.
Comparing the mass density distribution in the jet and the counter jet, we find a similar
profile for the jet spiral arms, which just reflects the bipolar symmetry of the setup,
in particular our model setup with the binary orbit being co-planar with the mid-plane of the disk forming the jet.
If the secondary were instead placed offset from the mid-plane (and the initial disk), establishing
a bipolar asymmetry in the gravitational potential, we would expect a different spiral structure
for jet and counter jet.
In this section, we have proposed the scenario that the jet spiral structure we observe is injected from the
disk into the outflow and then propagates along the jet.
A valid objection may be that the tidal forces of the binary system that modify the disk structure
also act directly on the jet.
In particular at low altitudes, i.e.\ for the disk wind close to the disk, the tidal forces on disk and jet are expected to be
similar\footnote{Note that the extension of the Roche lobe in the vertical direction is about $z\simeq 70$.}.
Therefore, the jet flow through the Roche lobe can be tidally deformed, and the resulting structure is
{\em not} the result of the injection process alone.
However, we have further tested our hypothesis of a disk spiral structure injected into the outflow by a simple
numerical experiment.
We have first run a jet launching simulation for a single star
(see Figure~\ref{sin_to_bin_spiralwall} in the appendix).
An almost axisymmetric 3D jet structure evolves that reaches a quasi-steady state at time $t\simeq 4000$.
After that time, we then switch on the gravity of the Roche potential and observe how the disk and jet structure
further evolves.
What we see is that the spiral structures in the disk and in the jet do not arise at the same time.
Instead, the disk spiral structure evolves first, after about 500 further time units.
Then, subsequently, the different jet layers also exhibit spiral arms.
Furthermore, the jet spiral structure is first seen at lower altitudes (we have compared the layers $z=25, 35, 45$),
then at the higher layers.
From layer to layer, separated by $\Delta z =20$, it takes the spiral structure another time period of $\Delta t \simeq 100$
to appear.
This corresponds to a pattern speed propagating along the jet of $v_{\rm pat} \simeq \Delta z / \Delta t = 0.2$, which is indeed
comparable to the wind or jet speed at this location.
We see this as very strong support for our hypothesis.
If the spiral structure in the disk and the jet were produced instantaneously by the tidal forces, such
a time lag would not be visible (see our discussion above).
It would be interesting to follow these features to much higher altitudes, even beyond the Roche lobe in
the vertical direction.
If the jet at these distances were to carry a spiral structure, we would expect it to be injected from the lower
altitudes, in fact by the disk, as the tidal forces beyond the Roche lobe are considerably smaller.
The gravitational potential far beyond the Roche lobe approaches that of a point source, although we
expect some jet bending towards the secondary not far from the Roche lobe.
\section{Local torque analysis in the disk and the outflow}
In this section, we analyze the forces and torques acting on the disk and the jet in order to investigate
the physics of 3D effects in MHD jet launching, that is the interaction between the disk and the jet.
Similar works have recently been published, studying hydrodynamic torques in circum-binary disks
\citep{2017MNRAS.466.1170M,2019ApJ...875...66M}, or the torques exerted on accreting supermassive
black hole binaries \citep{2013MNRAS.435.2633N}.
Only a few studies exist that have considered {\em magnetic torques} in a binary system.
One example is \citet{2017MNRAS.469.4258T} who have studied the magnetic torque in accretion disks of
millisecond pulsars.
A classic problem of MHD winds and jets is the loss of the disk angular momentum by the magnetic torque of
a disk outflow (see e.g. \citealt{1992ApJ...394..117P}).
It has become clear that MHD disk winds do remove angular momentum from the disk very efficiently, due to their extended lever arm.
For the kind of studies just cited above, certain approximations can be made when calculating the angular
momentum balance.
For example, the initial studies of MHD jet formation were considering steady-state MHD and axisymmetry.
Thus, certain terms could be neglected in the angular momentum balance.
The same is true for studies concentrating on the disk physics only.
Here, the vertical motion (in the disk) can be neglected, thus also the vertical angular momentum loss.
This is obviously also the case for purely hydrodynamic studies that do not consider any magnetic torque
and its accompanied extended lever arm.
As the present paper considers MHD launching in 3D, we cannot simplify the angular momentum equation accordingly.
Instead, we have to deal with all terms -- the magnetic terms, the angular momentum loss induced by vertical transport ($u_{\rm p} \simeq u_{\mathrm K}$), and, essentially, also the derivatives in the $\phi$-direction.
In order to study the different torques acting on the accretion disk, we concentrate on the toroidal component of the momentum Equation~\ref{momentum_eq}, whose $\phi$-component is given as
\begin{multline}
\frac{\partial u_{\phi}}{\partial t}+\frac{u_{r}}{r} \frac{\partial (r u_{\phi})}{\partial r}
+\frac{u_{z}}{r} \frac{\partial (r u_{\phi})}{\partial z}
+\frac{u_{\phi}}{r} \frac{\partial u_{\phi}}{\partial \phi}=\\
-\frac{1}{r}\frac{\partial \Phi}{\partial \phi}
-\frac{1}{\rho r}\frac{\partial P}{\partial \phi}\\
+\frac{1}{4\pi \rho r}
\left[ B_{r} \frac{\partial (r B_{\phi})}{\partial r}
+ B_{z} \frac{\partial (r B_{\phi})}{\partial z}
- \frac{\partial \left(B_r ^2 + B_z ^2\right) }{2 \partial \phi}
\right].
\label{phi_momentum_eq0}
\end{multline}
%
We can re-write this equation as
\begin{multline}
\frac{\partial u_{\phi}}{\partial t}
+ \frac{1}{r} \vec{u}_P\cdot\nabla(r u_{\phi})
+ \frac{u_{\phi}}{r} \frac{\partial u_{\phi}}{\partial \phi}=\\
-\frac{1}{r}\frac{\partial \Phi}{\partial \phi}
- \frac{1}{\rho r}\frac{\partial P}{\partial \phi}\\
+ \frac{1}{4\pi\rho r} \left[\vec{B}_P\cdot\nabla(r B_{\phi})
- \frac{\partial \left(B_r ^2 + B_z ^2\right) }{2\partial \phi} \right].
\label{phi_momcf1}
\end{multline}
It is immediately clear that there are terms in this fully 3D equation which are related to derivatives
in $\phi$-direction.
When applying stationarity, ${\partial }/{\partial t}=0 $, and axisymmetry, ${\partial }/{\partial \phi}=0 $,
and using $\vec \nabla \cdot \rho \vec u_{\rm p} =0$ and $\vec \nabla \cdot \vec B_{\rm p} =0$,
we may obtain another, simplified form of equation \ref{phi_momcf1},
\begin{equation}
\vec \nabla \cdot \left( \rho\, \vec{u}_{\rm p}\, r u_{\phi} - \frac{1}{4\pi} \vec{B}_{\rm p}\, r B_{\phi} \right)=\vec \nabla\cdot \vec{\tau}_{0},
\end{equation}
where $\vec{\tau}_{0}$ represents the other torques acting in the system, such as e.g. the viscous torque.
By applying Gauss' theorem, this equation can be converted to
\begin{equation}
\tau_{0} = \int_S r \left(\rho u_{\phi} \vec{u}_p - \frac{1}{4\pi} B_\phi \vec{B}_p \right) \cdot \vec ds,
\label{eq-tau-PP92}
\end{equation}
which is the well-known equation for the angular momentum flux (thus, the torques)
derived for steady-state MHD wind theory (see e.g. \citealt{1992ApJ...394..117P, 2013A&A...550A..99Z}).
As another example -- one that also connects to the topical literature -- we may consider non-stationarity,
but focus on the angular momentum budget only inside the disk.
This approach is typical when investigating solely the hydrodynamic disk structure
(see e.g. \citealt{2017MNRAS.466.1170M, 2019ApJ...875...66M}).
Neglecting the vertical motion in the disk, $u_z \simeq 0$, in Equation~\ref{phi_momentum_eq0},
and integrating vertically over the disk, we arrive at
\begin{multline}
\frac{\partial u_{\phi}}{\partial t}
+ u_{r} \frac{\partial u_{\phi} }{\partial r}
+ \frac{u_\phi u_r}{r}
+ \frac{u_{\phi}}{r} \frac{\partial u_{\phi}}{\partial \phi} = \\
- \frac{1}{r} \frac{\partial \Phi}{\partial \phi}
- \frac{1}{\Sigma r} \frac{\partial P}{\partial \phi}
+ \int \frac{1}{\rho} F_{\rm B} dz
\label{phi_momentum_equ1}
\end{multline}
which is similar to Equation~A2 in \citet{2017MNRAS.466.1170M}.
Here $\Sigma$ denotes the surface density of the disk, while $F_{\rm B}$, the $\phi$-component of the Lorentz force entering the equation of motion,
is defined as
\begin{multline}
F_{\rm B} = \frac{1}{4\pi r} \left[
B_{r} \frac{\partial \left(r B_{\phi} \right) }{\partial r} +
B_{z} \frac{\partial \left(r B_{\phi} \right) }{\partial z} -
\frac{\partial \left(B_r^2 + B_z^2 \right) }{2\partial \phi}
\right].
\label{eq:tau-bx}
\end{multline}
In our approach, which considers disk outflows and jets, we cannot ignore the vertical motion.
We thus need to consider the full Equation~\ref{phi_momentum_eq0},
which allows us to study the angular momentum budget inside the disk and along the jet.
Multiplying Equation~\ref{phi_momentum_eq0} by $r$ we find for the evolution of the specific angular momentum $l = r u_{\phi}$,
\begin{multline}
\frac{\partial l}{\partial t}=
- u_r \frac{\partial l}{\partial r}
- u_z \frac{\partial l}{\partial z}
- \frac{u_\phi}{r} \frac{\partial l}{\partial \phi} \\
- \frac{\partial \Phi}{\partial \phi}
- \frac{1}{\rho} \frac{\partial P}{\partial \phi}
+ \frac{1}{\rho} r F_{\rm B}.
\label{llintime}
\end{multline}
On the r.h.s.~of Equation~\ref{llintime} the different torques
appear that affect the evolution of the disk specific angular momentum --
namely the pressure gradient torque,
the gravity torque, and the magnetic torque $\tau_B = r F_{\rm B}$, respectively.
Note that here the $\phi$-derivative of the magnetic pressure is included, a term that was neglected for Equation~\ref{eq-tau-PP92}.
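Schematically, the r.h.s.~terms of Equation~\ref{llintime} can be evaluated on the cylindrical post-processing grid with finite differences, as in the following Python sketch (the dictionary layout and array names are illustrative assumptions, not our actual analysis code):
\begin{verbatim}
import numpy as np

def angmom_source_terms(r, phi, z, rho, u, B, Phi, P):
    # r.h.s. of Equation (llintime) on a grid of shape (nr, nphi, nz);
    # u and B are dicts with components 'r', 'phi', 'z'.
    rr = r[:, None, None]
    l = rr * u['phi']                       # specific angular momentum
    rBphi = rr * B['phi']
    # magnetic torque tau_B = r F_B, cf. Equation (eq:tau-bx)
    tau_B = (B['r'] * np.gradient(rBphi, r, axis=0)
             + B['z'] * np.gradient(rBphi, z, axis=2)
             - 0.5 * np.gradient(B['r']**2 + B['z']**2, phi, axis=1)
            ) / (4.0 * np.pi)
    return {
        'l_Uz': -u['z'] * np.gradient(l, z, axis=2),  # vertical transport
        'l_G':  -np.gradient(Phi, phi, axis=1),       # gravity torque
        'l_P':  -np.gradient(P, phi, axis=1) / rho,   # pressure torque
        'l_B':  tau_B / rho,                          # magnetic torque
    }
\end{verbatim}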
\begin{figure*}
\centering
\includegraphics[width=18cm]{./f13.pdf}
\caption{Specific magnetic angular momentum contributions in equation \ref{llintime} at time $t=1500$.
These terms are shown along the disk mid-plane (top) and in the jet (bottom) at $z=25$.
For comparison these terms are listed in
Table~\ref{tbl:terms} (top).}
\label{tb_term_magnetic}
\end{figure*}
We now compare the different contributions in Equation~\ref{llintime}. For convenience, we display them in Table~\ref{tbl:terms} (top) in consecutive order.
In Figures~\ref{fig:eq16_disk} and \ref{fig:eq16_jet} we show these terms at time t=1500.
The main conclusions from our comparison are the following.
(i) Among all terms, the gravity torque $l_{\rm G}$ (see Table~\ref{tbl:terms}, top) is the most smoothly
distributed\footnote{We stress again that the pixelized appearance of some of the terms arises
from the numerical calculation of the gradients involved.
That is, however, done in post-processing (in cylindrical coordinates, interpolated from the numerical Cartesian grid), so the simulation procedure itself is not affected.}.
This is easy to understand, as this term does not depend on MHD variables such as the density, but is defined solely by the time evolution of the Roche potential, i.e. the location of the companion star.
Consequently, we observe that the signature of the spiral arms is clearly visible in all panels except for the gravity torque.\\
(ii) The term $l_{\rm Uz}$, corresponding to the exchange of specific angular momentum due to vertical transport, is larger
inside the jet than in the disk.
This may simply be explained by the fact that the vertical advection speed is much smaller in the disk than in the jet.\\
(iii) There are more spiral windings in the disk than in the jet.
We understand this as a consequence of the opening-up of the jet flow.
Due to the opening cone of the outflow the opening angle of a spiral wave injected into the outflow decreases
with altitude and eventually the wave pattern dies out.\\
(iv) The spiral arms in the disk and inside the jet are not synchronized.
The spiral arm in the jet lags the spiral arm in the disk (best visible for $l_{\rm P}$).
We understand this as due to the jet inertia.
The jet is set in rotation by the magnetic field that is anchored in the disk
(like a whirlpool).
The inertia of the jet material counteracts the toroidal Lorentz force and leads to a lag between the foot point
of the jet and the jet upper layers.
We also note that the jet rotation pattern we observe at a certain time results from an injection from the disk
into the jet at earlier times (when the disk spiral structure was located at an earlier position).
This pattern is then propagated to higher altitudes.
A simple estimate of the time lag follows from comparing the (estimated) outflow speed and the altitude in the jet,
$\Delta t = z / u_{\rm jet} \simeq 25 / 0.5 = 50$,
which roughly fits what we observe in our simulation pattern when comparing similar time differences.\\
(v) For a comparison of the magnetic terms defined by Equation~\ref{eq:tau-bx}, we refer to Figure~\ref{tb_term_magnetic}.
The terms are shown for the disk (mid-plane) and for the jet.
It is obvious that all three terms are larger in the jet compared to the disk.
This is due to the large magnetic lever arm in the jet and is known from traditional (axisymmetric) jet theory.
Essentially, the figure also demonstrates that the non-axisymmetric effects are crucial even
far from the disk mid-plane.\\
(vi) The magnetic term also shows a spiral structure.
This can be explained by the fact that the magnetic field is frozen into the disk material.
Therefore, the magnetic flux follows the disk density structure, thus the spiral shape.
On the other hand, the launching of the spiral structure from the disk into the outflow is affected by the disk resistivity.
So the jet spiral magnetic field may lag the disk spiral.
Once loaded into the disk wind, the further evolution follows ideal MHD.
However, inertial forces (caused by the indirect term in the time-dependent Roche potential, see Equation~\ref{roche_potential})
will continue to affect the jet spiral structure, with the result that the spiral
in the more distant parts of the jet lags the spiral in the parts of the outflow close to the disk.\\
In summary, we find that the 3D effects, such as the $\phi$-dependency of the various variables as well as the vertical transport of material, are essential for our study of the angular momentum budget in the disk-jet system.
The non-axisymmetric structures that are triggered in the disk by the Roche gravitational
potential are launched into the disk wind and propagated into the jet.
In the next section we will discuss and show how these effects contribute to the
overall angular momentum budget of the disk and jet.
\section{Global angular momentum balance}
We finally briefly investigate the global angular momentum budget and compare the respective impact
of the particular physical terms.
We integrate the differential equation discussed above and compare the radial and vertical profiles of the angular momentum distribution over time.
Multiplying Equation~\ref{llintime} by $r\rho$, and making use of both the identity
\begin{equation}
\rho \frac{\partial l}{\partial t}
= \frac{\partial \left( \rho l \right) }{\partial t} - l\frac{\partial \rho}{\partial t}
\end{equation}
and the continuity equation, we arrive at
\begin{multline}
\frac{\partial}{\partial t} \left(r \rho l\right)=
-\frac{\partial}{\partial r} \left(\rho r u_r l\right)
-\frac{\partial}{\partial z} \left(\rho r u_z l\right)
-\frac{\partial}{\partial \phi} \left(\rho u_\phi l\right)\\
- r\rho \frac{\partial \Phi}{\partial \phi}
-r\frac{\partial P}{\partial \phi}
+r^2 F_{\rm B}.
\label{phi-torque1}
\end{multline}
Equation~\ref{phi-torque1} is identical to Equation~(A6) in \citet{2017MNRAS.466.1170M},
except for the two additional terms that appear in our approach and are due to the existence of
the (i) magnetic field and the (ii) disk wind.
In order to calculate the poloidal angular momentum fluxes (i.e. accretion and ejection),
and the profiles of the torques at work, respectively,
we need to integrate Equation~\ref{phi-torque1} in the corresponding poloidal directions.
We decided to integrate (i) in the vertical direction ($z$-direction) and (ii) in the radial direction.
This will deliver the angular momentum flux that is advected (i) along the disk and (ii) into the
outflow.
\subsection{Radial angular momentum balance}
By integrating Equation~\ref{phi-torque1} in $\phi$ and $z$-direction, we arrive at
\begin{multline}
\frac{\partial}{\partial t}\int \int \left(r \rho l\right) \,d\phi\,dz
= -\int \int \frac{\partial}{\partial r} \left(\rho r u_r l\right) \,d\phi\,dz \\
-\int \int \frac{\partial}{\partial z} \left(\rho r u_z l\right) \,d\phi\,dz
-\int \int \frac{\partial}{\partial \phi} \left(\rho u_\phi l\right) \,d\phi\,dz\\
- \int \int r\rho \frac{\partial \Phi}{\partial \phi} \,d\phi\,dz
-\int \int r\frac{\partial P}{\partial \phi} \,d\phi\,dz
+\int \int r^2 F_{\rm B} \,d\phi\,dz.
\label{phi-torque2}
\end{multline}
The integrated values may also be understood as averages over the $z$ and $\phi$-direction.
The integration area is chosen as one (initial) disk scale height in vertical direction,
$\Delta z (r) = [-0.1~r, 0.1~r]$, and $\Delta \phi=[0, 2\pi]$,
thus confined to the disk region.
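The confinement of the integration to one initial scale height can be sketched as follows (Python; the radius-dependent window $\Delta z(r)$ is implemented as a mask, and all names are illustrative):
\begin{verbatim}
import numpy as np

def disk_average_term(term, r, phi, z):
    # Integrate a term of shape (nr, nphi, nz) over phi and over
    # the vertical window |z| <= 0.1 r (cf. Equation phi-torque2).
    out = np.empty(r.size)
    for i, ri in enumerate(r):
        mask = np.abs(z) <= 0.1 * ri
        t_phi = np.trapz(term[i][:, mask], z[mask], axis=1)
        out[i] = np.trapz(t_phi, phi)
    return out
\end{verbatim}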
\begin{figure*}
\centering
\includegraphics[width=18cm]{./f14.pdf}
\caption{Angular momentum flux evolution for a binary disk-jet system.
Shown are radial profiles of the different contributions to the total angular momentum
flux $\dot{J}(r,t)$ for different times.
These terms are
$\dot{J}_1(r,t)$ considering the radial advection of angular momentum,
$\dot{J}_2(r,t)$ considering the vertical transport,
$\tau_{\rm G}(r,t)$ considering the gravity torque,
and $\tau_{\rm B}(r,t)$ the magnetic torque.
The terms $\dot{J}_{\rm br,1}(r,t)$, $\dot{J}_{\rm br,2}(r,t)$ and $\dot{J}_{\rm br,3}(r,t)$ represent the different contributions
to the magnetic torque, respectively (see Table~\ref{tbl:terms}).
For each radius $r$ these terms are integrated from $r=0$ to $r$, while we have vertically averaged between
$z=-0.1~r$ and $z=0.1~r$.
The term $\dot{J}_3(r,t)$ for the orbital transport is quite small, and $\tau_{\rm P}(r,t)$ for the pressure torque
approximately vanishes; both are therefore not shown here.}
\label{jdotr_averged_binary}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=18cm]{./f15.pdf}
\caption{Angular momentum flux evolution for a single star disk-jet system.
For comparison with Figure~\ref{jdotr_averged_binary}, we have reproduced the same terms for a single star jet launching simulation.
Shown are radial profiles of the different contributions to the total angular momentum
flux $\dot{J}(r,t)$ at different times.
These terms are
$\dot{J}_1(r,t)$ which considers the radial advection of angular momentum,
$\dot{J}_2(r,t)$ considering the vertical transport,
$\tau_{\rm G}(r,t)$ considering the gravity torque
and $\tau_{\rm B}(r,t)$ the magnetic torque.
The terms $\dot{J}_{\rm br,1}(r,t)$, $\dot{J}_{\rm br,2}(r,t)$ and $\dot{J}_{\rm br,3}(r,t)$ represent the different contributions
to the magnetic torque, respectively (see Table~\ref{tbl:terms}).
For each radius $r$ these terms are integrated from $r=0$ to $r$ while we have vertically averaged between
$z=-0.1~r$ and $z=0.1~r$.
The term $\dot{J}_3(r,t)$ for the orbital transport, the pressure torque $\tau_{\rm P}(r,t)$, and
$\dot{J}_{\rm br,3}(r,t)$ vanish and are not shown here.}
\label{jdotr_averged_single}
\end{figure*}
The different terms in Equation~\ref{phi-torque2} contribute differently to the overall angular momentum
evolution of the binary star-disk-jet system.
Altogether, Equation~\ref{phi-torque2} describes the rate of change of angular momentum across a ring with
radius $r$ and width $dr$.
When integrating $\rho l$ over the volume element $dz\,r d\phi$, we obtain the total angular momentum in a cylinder of
width $dr$ and radius $r$ (see left hand side of Equation~\ref{phi-torque2}).
Equation~\ref{phi-torque2} can be re-written as
\begin{multline}
\frac{\partial }{\partial t}\left( \frac{dJ_{\rm tot}(r,t)}{dr} \right)=\\
\frac{\partial}{\partial r} \left( \dot J_1(r,t)\right)+\frac{\partial}{\partial r}\left( \dot J_2(r,t)\right)
+\frac{\partial }{\partial r} \left( \dot J_3(r,t)\right)\\
+\frac{\partial\tau_{\rm G}(r,t)}{\partial r} +\frac{\partial\tau_{\rm P}(r,t)}{\partial r}
+\frac{\partial \tau_{\rm B}(r,t)}{\partial r},
\label{allJr}
\end{multline}
where we define the following terms contributing to the total angular momentum evolution
\begin{equation}
\frac{d \dot{J}_{\rm tot}(r,t)}{dr} = \frac{\partial}{\partial t} \oint \int r \rho l \, dz \,d\phi.
\label{djdr}
\end{equation}
These contributions are the inward flux of angular momentum due to radial advection,
\begin{equation}
\frac{d \dot{J}_{1}(r,t)}{dr}
= -\oint \int \frac{\partial}{\partial r} \left(\rho r u_r l \right)\, dz \, d\phi,
\label{j1_adv}
\end{equation}
the loss of angular momentum due to vertical transport,
\begin{equation}
\frac{d \dot{J}_{2}(r,t)}{dr} =
-\int \int \frac{\partial}{\partial z} \left( \rho r u_z l \right)\, d\phi\, dz,
\label{j2_up}
\end{equation}
the radial flux of angular momentum due to toroidal motion,
\begin{equation}
\frac{d \dot{J}_{3}(r,t)}{dr} =
-\int \int \frac{\partial}{\partial \phi} \left( \rho u_\phi l \right) \,d\phi \, dz,
\label{j3_shear}
\end{equation}
the gravitational torque per unit radius,
\begin{equation}
\frac{d\tau_{\rm G}(r,t)}{dr} = -\oint \int r\rho \frac{\partial \Phi}{\partial \phi} \, dz \, d\phi,
\label{grtor}
\end{equation}
the pressure torque per unit radius,
\begin{equation}
\frac{d\tau_{\rm P}(r,t)}{dr} =-\int \int r\frac{\partial P}{\partial \phi}\, d\phi\, dz,
\label{gpre}
\end{equation}
and the magnetic torque per unit radius,
\begin{equation}
\frac{\partial \tau_{\rm B}(r,t)}{\partial r}=\oint \int r^2 F_{\rm B}\,dz\, d\phi.
\label{Jmag}
\end{equation}
Since we conduct a fully 3D study, a few extra terms appear compared to \citet{2017MNRAS.466.1170M}.
These are the torques due to vertical and azimuthal motion, the thermal pressure torque, and the magnetic torque.
Among the different terms, the disk wind plays a major role in the angular momentum transport.
More specifically, Equations~\ref{j2_up} and \ref{Jmag} provide the contribution of the disk outflow
to the angular momentum budget of the system.
We notice that Equations~\ref{djdr}-\ref{Jmag} describe $\partial_r \dot J(r,t)$ or $\partial_r \tau(r,t)$.
Thus, to derive the radial profile of the angular momentum fluxes at work, respectively the torques,
we integrate each term
(see Table~\ref{tbl:terms}, middle)
in $r$-direction from $0$ to $r$.
For example, to obtain
$\dot{J}_1(r,t)$ at each radius point we integrate
$\dot{J}_1(r,t) = \int_0^r \, Te_{\rm r,1}(r',t)\, dr'$, with $Te_{\rm r,1}$ defined in Table~\ref{tbl:terms}.
We notice that $\dot{J}_1(r,t) $ represents the angular momentum flux of radial transport, integrated in $\phi$ and $z$-direction, and also along the radial direction
at each radius.
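This cumulative radial integration reduces to a cumulative trapezoidal rule, e.g. (a minimal sketch; the function name is ours):
\begin{verbatim}
import numpy as np
from scipy.integrate import cumulative_trapezoid

def cumulative_flux(Te_r, r):
    # J(r) = int_0^r Te(r') dr' from the per-unit-radius
    # integrand Te(r') (cf. Table tbl:terms, middle).
    return cumulative_trapezoid(Te_r, r, initial=0.0)
\end{verbatim}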
As another example, we consider the different contributions to the radial magnetic torque ${\tau}_{\rm B}$, which are obtained as
\begin{equation}
\tau_{\rm B}(r,t)= \dot{J}_{br,1}(r,t) + \dot{J}_{br,2}(r,t)+ \dot{J}_{br,3}(r,t),
\label{jdot6_terms}
\end{equation}
with
\begin{equation}
\dot{J}_{br,1}(r,t)=\int_0^r Te_{\rm B,r,1}(r',t) dr'
\end{equation}
and the terms $Te_{\rm B,r,1}, Te_{\rm B,r,2},Te_{\rm B,r,3} $ are defined in Table~\ref{tbl:terms}.
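The assembly of the magnetic torque from its three contributions could then look as follows (with synthetic stand-in integrands, since the actual $Te_{\rm B,r,i}$ come from the simulation data):
\begin{verbatim}
import numpy as np
from scipy.integrate import cumulative_trapezoid

r = np.linspace(0.0, 80.0, 400)
# placeholder integrands standing in for Te_B_r_1..3 from the data
Te_B = [np.exp(-r / 20.0), r * np.exp(-r / 10.0), 0.1 * np.sin(r / 8.0)]
J_br = [cumulative_trapezoid(Te, r, initial=0.0) for Te in Te_B]
tau_B = J_br[0] + J_br[1] + J_br[2]   # tau_B(r) at a fixed time
\end{verbatim}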
In Figure~\ref{jdotr_averged_binary} we display the radial profile of the angular momentum fluxes and the
corresponding torques on the disk-jet at different times
(see again Table~\ref{tbl:terms}, middle).
In order to disentangle the contribution of the 3D terms to the local and global angular momentum budget,
it is essential to compare the results for the binary star simulation to that of a single star.
We thus show the respective terms also for a single star simulation in Figure~\ref{jdotr_averged_single}.
This simulation is also fully 3D \citep{2015ApJ...814..113S}, but uses a single star gravitational potential (thus no time-dependent Roche potential).
We compare in detail all terms that contribute to the radial profile of the angular momentum transport
in the disk by the different physical processes (see Table~\ref{tbl:terms}, middle).
As we find some of the terms to be negligibly small, we do not show them in Figure~\ref{jdotr_averged_single}.
By comparing Figure~\ref{jdotr_averged_binary} and Figure~\ref{jdotr_averged_single} we derive the
following conclusions.\\
(i) Here, we study the angular momentum evolution of a MHD accretion-ejection structure
orbiting in a binary system, thus our approach is magnetized and non-axisymmetric.
Compared to previous works
\citep{1977MNRAS.181..441P, 1979MNRAS.186..799L,2017MNRAS.468.1387L,2020A&A...635A.204A,2020A&A...641A..64H}
studying the torques acting in a binary system (mostly performed in the hydrodynamic limit),
our simulations consider the full magnetic torque,
thus, all terms for the magnetic tension and the magnetic pressure are taken into account.
In addition, due to the presence of the outflow, the vertical distribution of angular momentum,
thus the vertical transport of angular momentum, is considered in our approach.\\
(ii) Comparing Figure~\ref{jdotr_averged_binary} and Figure~\ref{jdotr_averged_single} we see that the $\phi$-dependent
terms
$\tau_{\rm G}(r,t)$ and $ \dot J_{\rm br3}$ (part of the magnetic torque)
contribute substantially
in the binary setup.
These terms become important due to the orbiting companion star that breaks the axial symmetry, and thus represent the {\em 3D tidal effects} in our approach.\\
(iii) Figures~\ref{jdotr_averged_binary} and \ref{jdotr_averged_single}
present the time evolution of the angular momentum or torques.
Thus, the sign of the term considered determines if this particular region of the disk is losing or gaining
angular momentum.
Regarding this, we observe that the gravity torques ${\tau}_{\rm G}(r,t)$ and the magnetic torque $\tau_{\rm B}(r,t)$ are
reducing (removing) the angular momentum through the whole disk area,
especially at the outer part of the disk.
In the other panels, corresponding to the terms for advection or vertical transport, the sign varies
(in time and radius).
Thus, these terms contribute to increasing or decreasing the angular momentum in a particular region, respectively.\\
(iv) We recognize,
for both the single star and the binary approach, that the torque
carried by vertical motion $\dot{J}_2(r,t)$ is comparable to the other terms, such as e.g. the advection torque.
At time $t=2500$ we find that the profile of $\dot{J}_2(r,t)$ is more scattered, and also larger, in the binary system.
The reasons are a larger gradient of the vertical velocity and also
stronger velocity fluctuations
along the disk mid-plane in the case of the binary simulation (figure not shown here).
We conclude that the vertical transport considered in the disk has a significant effect.
This seems to be caused mainly by the existence of an outflow.\\
(v) An essential torque that needs to be considered is that of gravity $\tau_{\rm G}(r,t)$.
This is a 3D effect, caused by the tidal forces produced by the time-dependent Roche potential.
These features are (obviously) not seen in the gravity torque of the single star
(see Figure~\ref{jdotr_averged_single}).
In the end, it is this tidal torque that is the fundamental cause for the other 3D effects appearing
in our disk-jet system, including the disk spiral arms (seen in density but also the magnetic field) and in the outflow.
Considering the radial profile of the gravity torque, we observe that it is dominant at the outer part of the disk.
In the inner part ($r<25$), the gravity torque is somewhat smoother but essentially smaller.
The obvious explanation lies in the structure of the gravitational potential:
at the inner disk region the point gravity of the primary is dominant, while further out the Roche potential plays the dominant role.\\
(vi) The other substantial term is the magnetic torque $\tau_{\rm B}(r,t)$.
Three different terms are involved in the magnetic torque (see Equation \ref{jdot6_terms} and Table~\ref{tbl:terms}, middle).
We find that among these terms, the term $\dot{J}_{br,2}$ is dominant in both the binary and the single star setup.
The other terms do not have a serious contribution.
We see that the magnetic torque $\tau_{\rm B}(r,t)$ is larger in the outer disk regions ($r> 20$).
This may reflect again the importance of the magnetic lever arm which is larger at the larger radii (less collimated field).
We also see that the first term $\dot{J}_{br,1}$ does not contribute much to the radial angular momentum flux.
This is understandable, as the $r$-component of the magnetic field is small, and the
$z$-derivative of the toroidal field is larger than its $r$-derivative.
Note that along the disk mid-plane $B_{\phi}$ almost vanishes (in axisymmetric steady-state MHD it vanishes
by definition), and similarly for the component $B_r$.
Thus no contribution due to $B_{\phi}B_r$ stresses is expected here.
Accordingly, we find that the magnetic torques and the gravity torques remove the angular momentum from the disk and support the inward motion of
the disk material.
However, for the other torques, we do not find a unique behavior throughout the disk.
These torques change their sign at various radial positions.\\
(vii) Among the different torques we have explored, the torque induced by the pressure gradient almost
vanishes and thus does not contribute to the total angular momentum budget.
Also, the torque induced by orbital motion, $\dot{J}_3(r,t)$, has a quite small contribution to the total angular momentum budget.
Thus, in agreement with previous works which do not consider the torque by the pressure gradient
\citep{2013MNRAS.435.2633N, 2017MNRAS.469.4258T, 2017MNRAS.466.1170M, 2019ApJ...875...66M},
we can ignore the pressure torque, i.e. the angular momentum transport due to the $\phi$-derivative of the gas pressure.\\
We summarize this subsection by stating again that in our simulations and in our analysis we have considered
the full magnetic torque and also the presence of an outflow, and thus the angular momentum transport by vertical motion.
After all, among the extra terms considered, this latter term plays a significant role in the total angular momentum
budget also in a binary system.
The same holds for the magnetic torque; however, the contributions of the $\phi$-derivative of the magnetic pressure
and the $B_{\phi}B_r$ stresses are small in the mid-plane.
\begin{figure*}
\centering
\includegraphics[width=18cm]{./f16.pdf}
\caption{Angular momentum flux along the jet.
Shown are the vertical profiles of the angular momentum fluxes and torques in the upper hemisphere, $\dot J(z,t)$.
Here, the different angular momentum fluxes are defined by $\dot{J}_2(z,t)$, the vertical transport, $\tau_{\rm G}(z,t)$,
the gravity torque, and $\tau_{\rm B}(z,t)$, the magnetic torque, respectively.}
\label{jdotz_andt}
\includegraphics[width=18cm]{./f17.pdf}
\caption{Angular momentum flux along the jet, comparison for a single star simulation.
Shown are the vertical profiles of the angular momentum fluxes and torques in the upper hemisphere, $\dot J(z,t)$.
Again, the different angular momentum fluxes are defined by $\dot{J}_2(z,t)$, the vertical transport, $\tau_{\rm G}(z,t)$,
the gravity torque, and $\tau_{\rm B}(z,t)$, the magnetic torque, respectively.
}
\label{jdotz_andtSingle}
\end{figure*}
%
\subsection{Vertical angular momentum balance}
In the next step, we evaluate the vertical profile of angular momentum transport.
We thus integrate Equation~\ref{phi-torque1} in radial and $\phi$ direction,
\begin{multline}
\frac{\partial}{\partial t}\int \int \left(r \rho l\right) \,d\phi \, dr
=-\int \int \frac{\partial}{\partial r} \left(\rho r u_r l\right) \,d\phi \, dr \\
-\int \int \frac{\partial}{\partial z} \left(\rho r u_z l\right)\, d\phi\, dr
-\int \int \frac{\partial}{\partial \phi} \left(\rho u_\phi l\right) \,d\phi \, dr\\
-\int \int r\frac{\partial P}{\partial \phi}\, d\phi\, dr
- \int \int r\rho \frac{\partial \Phi}{\partial \phi}\, d\phi \, dr
+\int \int r^2 F_{\rm B}\, d\phi \, dr.
\label{phi-torque_3vert}
\end{multline}
For the integration area we have chosen $\Delta r=[0, 80]$ and $\Delta \phi=[0, 2\pi]$.
Equation~\ref{phi-torque_3vert} describes the rate of change of angular momentum across the surface of a
cylindrical shell of height $dz$.
This provides vertical profiles along the whole disk-jet area.
Equation~\ref{phi-torque_3vert} can be re-written as
\begin{multline}
\frac{\partial }{\partial t}J_{\rm tot}(z,t) =
\dot{J}_1(z,t)+ \dot{J}_2(z,t)
+ \dot{J}_3(z,t)\\
+\tau_{\rm G}(z,t) +\tau_{\rm P}(z,t) +\tau_{\rm B}(z,t).
\label{allJz}
\end{multline}
By this integration we obtain the vertical angular momentum flux along the whole disk surface.
Similar to the radial profile of the angular momentum (last subsection),
we define the following terms also for the vertical profile of the angular momentum evolution,
\begin{equation}
\dot J_{tot}(z,t)= \frac{\partial}{\partial t}\oint \int r \rho l \, dr\,d\phi,
\label{djdr_z}
\end{equation}
as the vertical flux of angular momentum due to advection,
\begin{equation}
\dot J_1(z,t) =-\oint \int \frac{\partial}{\partial r}\left(\rho r u_r l\right)\, dr \, d\phi,
\label{j1_advz}
\end{equation}
as the vertical flux of angular momentum due to vertical transport,
\begin{equation}
\dot J_2(z,t) =-\int \int \frac{\partial}{\partial z} \left(\rho r u_z l\right)\, d\phi\, dr,
\label{j2_upz}
\end{equation}
as the vertical flux of angular momentum due to toroidal motion,
\begin{equation}
\dot J_3(z,t) =-\int \int \frac{\partial}{\partial \phi} \left(\rho u_\phi l\right) \,d\phi \, dr,
\label{j3_shearz}
\end{equation}
as the gravitational torque per unit height,
\begin{equation}
\tau_{\rm G}(z,t) = -\oint \int r\rho \frac{\partial \Phi}{\partial \phi} \, dr \, d\phi,
\label{grtorz}
\end{equation}
as the pressure torque per unit height,
\begin{equation}
\tau_{\rm P}(z,t) =-\int \int r\frac{\partial P}{\partial \phi}\, d\phi\, dr,
\label{gprez}
\end{equation}
and as the magnetic torque per unit height,
\begin{equation}
\tau_{\rm B}(z,t)=\oint \int r^2 F_{\rm B}\,dr\, d\phi.
\label{Jmagz}
\end{equation}
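As an illustration, the vertical-transport term of Equation~\ref{j2_upz} could be computed from gridded data as follows (Python sketch; array names and shapes are assumptions):
\begin{verbatim}
import numpy as np

def J2_vertical(rho, u_z, l, r, phi, z):
    # J_2(z,t) = - int int d/dz (rho r u_z l) dphi dr,
    # for arrays of shape (nr, nphi, nz).
    f = rho * r[:, None, None] * u_z * l
    dfdz = np.gradient(f, z, axis=2)
    return -np.trapz(np.trapz(dfdz, phi, axis=1), r, axis=0)
\end{verbatim}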
In Figure~\ref{jdotz_andt} we display these vertical profiles for the different terms of
Equation~\ref{phi-torque_3vert} for different times.
These terms correspond to the vertical profile of the angular momentum flux along the jet that is generated by the different
physical agents in the disk
(see Table~\ref{tbl:terms}, bottom).
As we find that some of the terms are negligibly small, we have not shown them in the figure.
When comparing the different terms which are presented in Figure~\ref{jdotz_andt} we come to the following conclusions:\\
(i) We see that the most efficient driver for distributing angular momentum in the vertical direction
is the vertical motion of the magnetized outflow material, i.e., $\dot{J}_2(z,t)$.
We also clearly see how the angular momentum flux of $\dot{J}_2(z,t)$ is correlated to the vertical
mass flux $\dot{M}_z$ (see Figure~\ref{mdot_rz_sin_bin}).
(ii) Similarly, we observe that the magnetic torque and the gravity torque are largest
close to the disk and decrease for larger $z$.
This seems to result from the fact that both the gravity and the magnetic field strength
are largest close to the disk.
In comparison to the disk area, the magnetic torque becomes smaller.
(iii) Considering the maps of the magnetic terms in Figure~\ref{tb_term_magnetic}, and also the radial and vertical profiles of
the magnetic torque (Fig~\ref{jdotz_andt}),
we find the term $l_{\rm B2}= (B_z / 4\pi\rho r) \partial_z(r B_{\phi})$ dominating.
The same is reflected in the integrated values for the radial ($\dot{J}_{br,2}$) and the vertical direction ($\dot{J}_{tbzi,2}$).
This is in nice agreement with the classical studies applying axisymmetry and considering only the term $l_{\rm B2}$ \citep{2007prpl.conf..277P, 2019MNRAS.490.3112J}.
With our study, we confirm that the term $l_{\rm B2}$ is dominant also in a non-axisymmetric treatment.
The contribution of the $\phi$-derivative of magnetic pressure term is minor.\\
We summarize this subsection by stating that, among the additional terms considered in our model setup -- compared to previous studies -- the vertical motion contributes most to the vertical transport of angular momentum.
The 3D terms considering a $\phi$-variation of the physical variables contribute relatively little to the global budget, probably since they average out when integrated over $\phi$. Nevertheless, these terms vary by about 10-20\% along $\phi$.
The largest impact results from the time-varying Roche potential, in particular for the areas inside the disk and close to the disk surface.
We emphasize that, eventually, it is that variation that triggers all the non-axisymmetric effects we observe in the other physical terms.
\begin{figure*}
\includegraphics[width=18cm]{./f18.pdf}
\caption{Radial mass fluxes. Shown are the total angular momentum flux (first panel), the radial profile of $\dot M_r$ (second and third panel) for the binary star (top) and the single star (bottom) runs at different times.
}
\label{mdot_rz_sin_bin}
\includegraphics[width=18cm]{./f19.pdf}
\caption{Vertical mass fluxes. Shown are the vertical profiles of $\dot M_z$ for the upper hemisphere (left panel), the lower hemisphere (middle panel) and both hemispheres (right panel), for the binary star (top) and the single star (bottom) runs at different times.
}
\label{fig:mdotz_both_hem}
\end{figure*}
\section{Overall fluxes of mass and angular momentum }
We finally investigate, how the efficiency of the global disk angular momentum transport
is affected by the presence of the secondary star, thus by the action of its tidal torque.
We therefore integrate all local fluxes that have been discussed above in order to obtain
the global fluxes.
We compare the evolution in a binary system with the evolution of a single star system.
A convenient parameter to evaluate the efficiency of the disk angular momentum transport is the mass flux transported inside the disk or the outflow.
We integrate the mass flux in radial direction as follows,
\begin{equation}
\dot M(r,t) = \int_{-0.1 r}^{0.1r} \int_0^{2\pi} \rho\, r\, u_r \, d\phi \, dz.
\label{radila mass flux}
\end{equation}
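In discretized form, this integral over the disk cross-section can be sketched as (Python; illustrative names, with the same radius-dependent vertical window as above):
\begin{verbatim}
import numpy as np

def mdot_r(rho, u_r, r, phi, z):
    # Radial mass flux: integrate rho r u_r over phi and
    # over |z| <= 0.1 r; arrays have shape (nr, nphi, nz).
    out = np.empty(r.size)
    for i, ri in enumerate(r):
        mask = np.abs(z) <= 0.1 * ri
        f = rho[i][:, mask] * ri * u_r[i][:, mask]
        out[i] = np.trapz(np.trapz(f, z[mask], axis=1), phi)
    return out
\end{verbatim}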
In Figure~\ref{mdot_rz_sin_bin} we display the mass fluxes in radial direction, $\dot{M}_r$.
The radial profile of the mass accretion rate $\dot{M}_r(r)$ is shown along the whole disk,
and also for the inner part of the disk (in higher resolution).
We see that for the setup of single star-disk-jet system the accretion process is smoothly
established over time.
Accretion remains in action over the whole disk, also till the later evolutionary stages.
In particular, we find a negative accretion rate along the whole disk.
The accretion rate is converging to a value $\dot{M} \simeq 0.02-0.03$ (in code units), here
measured at a disk radius of $r \simeq 30$.
In contrast, we find that for the setup of a binary star-disk-jet system, disk accretion is severely
affected by tidal effects - essentially visible as spiral arms in density and velocity as we have seen above.
Here, at late evolutionary stages we observe a change from accretion to an outward motion ({"}excretion{"}) for certain radii.
This is consistent with the distribution of radial velocities discussed earlier in Figure~\ref{fig:vr_binary} and ~\ref{fig:vr_single}.
The variation in the direction of mass flux does not happen at a fixed region and is
also evolving in time.
For instance, at $t= 1500$ the transition from accretion to excretion is found at $r\simeq 30$.
This is close to the area of the predominant spiral structure, and also close to the
radius of the L1 Lagrange point (see Figure~\ref{fig:nc3_xy_rho_com}).
We conclude that different agents affect the variation in the disk accretion behavior.
The most prominent ones arise from the evolution of the spiral arms, and from the orbital motion of the secondary (see position of the Lagrange point), altogether affecting the total angular momentum flux distribution in the disk.
It is certainly an intriguing question whether the global disk accretion rates are affected by the 3D effects and the subsequent torques.
We may quantify this by considering the radial mass flux profiles of two different systems.
For late times, $t\simeq 3500$, and for radii of $r\simeq25$ (which is inside the accreting area of the disk in the binary setup),
we measure an average accretion rate of $-0.03$ (in code units) for the binary system, and of $-0.025$ (in code units) for the
disk around the single star.
This consequently implies that the accretion rates may indeed change due to the 3D effects discussed above; for the system parameters that we have investigated, the radial mass flux increases by about $20\%$.
\begin{figure}
\centering
\includegraphics[width=1.0\columnwidth]{./f20.pdf}
\caption{Test of the angular momentum budget. Shown is the radial profile of the term on the left side of Equation~\ref{phi-torque2}, denoted by ``$T_{\rm left}$''.}
\label{a.m budget_bin}
\end{figure}
We integrate the mass fluxes in the vertical direction as follows,
\begin{equation}
\dot M(z,t) = \int_0^{80r_i} \int_0^{2\pi} \rho\, r\, u_z \, d\phi \, dr,
\label{vertical mass flux}
\end{equation}
with the inner disk radius $r_{\rm i}$.
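The corresponding discrete form is even simpler (Python sketch, illustrative names):
\begin{verbatim}
import numpy as np

def mdot_z(rho, u_z, r, phi, z):
    # Vertical mass flux: integrate rho r u_z over phi and
    # over r in [0, 80 r_i]; arrays have shape (nr, nphi, nz).
    f = rho * r[:, None, None] * u_z
    return np.trapz(np.trapz(f, phi, axis=1), r, axis=0)
\end{verbatim}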
Figure~\ref{fig:mdotz_both_hem} shows the vertical profile of $\dot M_z$ from the upper hemisphere (left panel),
the lower hemisphere (middle panel) and both hemispheres together (right panel),
for both the binary star simulation (top) and the single star simulation (bottom) at different times.
In contrast to the radial mass flux in the disk, we find a much smoother profile for the vertical
mass fluxes $\dot{M}(z,t)$.
The profiles of the vertical mass fluxes nicely
demonstrate the transition from the accretion disk into the disk outflow.
In particular, the profiles explicitly demonstrate the change of sign that takes place at the
altitude of about one disk scale height, where radial mass accretion is diverted into a mass outflow
as a disk wind.
The profiles also show how the disk outflow approaches a kind of steady state, saturating at a constant mass flux
in the vertical direction, a result that is expected from steady-state MHD theory of disk
winds\footnote{Note that steady-state MHD wind theory predicts a conserved mass flux along magnetic flux
surfaces, while in the figures we plot the mass flux integrated over all flux surfaces.}.
Comparing the vertical mass fluxes for the binary and the single star simulation, we find very symmetric profiles for the
run of the single star setup, which again highlights the quality of our simulation setup.
In contrast, the clear asymmetries seen in the profiles for the binary star simulation evidently demonstrate the influence of the tidal effects on the launching process caused by the secondary.
We may quantify this by considering the vertical mass flux profiles of the two setups at late times ($t\simeq 3500$).
Here, we measure average mass fluxes of 0.03 (in code units) for the binary star,
and 0.02 (in code units) for the single star.
We conclude that disk winds in binary stars may carry 50\% more mass flux in comparison to a single star disk.
We also show the total angular momentum flux, $\dot{J}_{\rm tot}$ (see Figure~\ref{mdot_rz_sin_bin}, first panel).
It is helpful to stress again that in order to derive the radial profile of the total angular momentum fluxes at work,
respectively the torques, we need to integrate each term (see Table~\ref{tbl:terms}, middle)
in $r$-direction from $0$ to $r$.
For example, in order to obtain $\dot{J}_{tot}(r,t)$ at each radial position, we integrate
\begin{multline}
\dot{J}_{tot}(r,t) = \int_0^r \, [ Te_{\rm r,1}(r',t) + Te_{\rm r,2}(r',t) + Te_{\rm r,3}(r',t) \\
+ Te_{\rm r,4}(r',t) + Te_{\rm r,5}(r',t) + Te_{\rm r,6}(r',t) ] \,dr'.
\end{multline}
Here, the control volume for the integration covers the radii from $r'=0$ to $r'=r$ and spans in the vertical direction from
$z=-0.1~r$ to $z=0.1~r$.
Thus, the term $\dot{J}_{\rm tot}(r,t)$ represents the total angular momentum flux integrated in $\phi$, in $z$, and in the radial direction at each radius.
In other words, $\dot{J}_{tot}(r,t)$ gives the total angular momentum flux up to that specific radius.
We find that the total angular momentum flux (respectively the net torques) at time $2500$ (in code units) is positive
for $r<50$, and changes sign for larger radii.
The radial profile of $\dot {J}_{tot}(r,t)$ allows us to interpret the evolution of the disk angular momentum locally.
The positive sign indicates that, up to this specific radius ($r=50$), angular momentum of the disk material is removed from the inner regions
to larger radii.
Obviously, this supports accretion of matter.
However, for radii $r>50$ we observe that removal of angular momentum does not take place.
These areas in fact gain angular momentum, and, consequently, accretion turns into excretion.
More specifically, a parcel of mass that loses angular momentum (in any direction) will move inwards.
When it gains angular momentum, it will move outwards.
Now, if angular momentum continues to go outwards (for a certain range of radii), this part will constitute an {\em accretion} disk.
If angular momentum is transported inwards, this part of the disk may move in, however, the part of the disk within
this radius, has received angular momentum, and is supposed to move outwards.
Overall, this will not lead to a steady state situation.
We note that this is why magnetic winds are so efficient for accretion: the vertical transport always removes angular momentum, thus leading to accretion at all radii.
In fact that is what we also observe on the long time scales.
The outer disk disappears and a smaller-size disk remains with a disk radius well within the radius of L1.
As a sanity check for the total angular momentum budget of the binary star-disk-jet system,
we compare both sides of Equation~\ref{phi-torque2}.
We compute the term on the left side of the equation, now denoted by $T_{\rm left}$, and the total angular momentum flux
$\dot {J}_{tot}(r,t)$ computed from the right side.
Both terms are shown in Figure~\ref{mdot_rz_sin_bin} and Figure~\ref{a.m budget_bin}, respectively, and
allow us to compare both sides of the angular momentum flux equation.
From these plots, we may consider, as an example, the values for $t_1 \simeq 1500$ (yellow)
and $t_2 \simeq 2100$ (green) at $r=80$.
We first consider the left side of Equation~\ref{phi-torque2},
$\left( T_{\rm left}(t_2) - T_{\rm left}(t_1) \right) / \Delta t \simeq \dot J _{\rm tot}(t_1)$.
With $\Delta t = t_2 - t_1 = 640$ we calculate
$\left( T_{\rm left}(t_2) - T_{\rm left}(t_1) \right) \simeq 500$ and
$\left( T_{\rm left}(t_2) - T_{\rm left}(t_1) \right) / \Delta t \simeq 0.7$.
On the other hand, from the figures we also find $\dot J _{\rm tot}(t_1=1500) \simeq 0.7$ (yellow line at $r=80$).
This nicely confirms our angular momentum budget, as it shows the equivalence of the left and right sides
of Equation~\ref{phi-torque2}.
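The arithmetic of this check condenses to a few lines (the numbers are those quoted above, read off the figures):
\begin{verbatim}
dT_left = 500.0      # T_left(t2) - T_left(t1), from the budget figure
dt = 640.0           # t2 - t1 in code units
lhs = dT_left / dt   # ~ 0.78, left side of the budget equation
rhs = 0.7            # J_tot(t1 = 1500) at r = 80, from the flux figure
print(f"lhs = {lhs:.2f}, rhs = {rhs:.2f}")   # agree to ~10 per cent
\end{verbatim}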
\section{Conclusions}
We have presented a detailed analysis of the angular momentum balance of the accretion-ejection system in a binary star.
For that we have re-visited our novel 3D MHD simulations that were published recently \citep{2018ApJ...861...11S}.
In particular, we have investigated how the existence of disk spiral arms influences the jet launching process and what kind of substructures emerge in the jets that evolve in our full 3D simulations.
We have further investigated to what extent the global properties, thus observables such as the disk accretion rate and the jet mass flux, are affected by the 3D effects, compared to a single-star accretion disk that launches an outflow.
We have obtained the following results.
(i) As a general result for the evolution of the binary star-disk-jet system,
we find that the initial disk size is decreasing and finally becomes confined to a size within the Roche lobe. As our model setup considers the full 3D evolution, we observe the growth of non-axisymmetric structures and spiral arms developing in the accretion disk.
(ii)
Considering the evolution of the disk spiral arms, we recognize that the density wave and the pressure wave follow the same pattern speed.
It is clearly seen that with time the disk spiral arms become denser and more prominent, finally representing the main structural feature of the disk.
We find that the rotation of the spiral arms is synchronized with the orbital motion of the binary.
Furthermore, we see that the different sides of the disk experience different tidal forces -- resulting in a
stronger and faster formation of the spiral arm in that part of the disk that is closer to L1.
(iii) The spiral arm pattern is also seen in the magnetic field structure.
While the local differences of the corresponding magnetic torques cancel out when averaged over the full angle,
they have an essential impact on the local launching conditions for the outflow.
In fact, they determine, together with the density profile, the particular 3D jet structure we observe (see below).
(iv)
Also for the velocity field of the system we find the same pattern in the disk that is involved in the formation of
the density spiral arm.
In particular we find that the arm is not co-rotating with the material, but is synchronized with the orbital motion of the companion.
(v)
The velocity pattern observed in the binary simulation shows considerable differences compared to the
one we observe in the simulation of 3D jet launching from a single star.
The radial velocity pattern of the disk around a single star is similar to the typical accretion pattern
(thus a negative $u_r$) in almost perfect axisymmetry.
In contrast, the accretion velocity in the binary star simulation looks quite unusual, exhibiting {"}excretion{"}
channels along certain angular directions.
These channels follow a spiral structure and are not aligned parallel to the mid-plane.
Overall, we conclude that the radial velocity pattern seen in the binary disk is affected drastically by the tidal forces acting in the system.
(vi)
The azimuthal profile of the rotational velocity follows very closely the azimuthal profile of the disk density.
The peaks in the rotational velocity profile indicate the location of the spiral arm, while these peaks also
indicate a very strong shear.
Thus, the enhanced {\em orbital} velocity that is present in the disk itself triggers further angular momentum exchange.
(vii)
As our central result, we find that the spiral structure of the disk is {\em launched into the jet outflow}.
Most prominently, these features are visible in the velocity pattern.
We may call these newly discovered jet structures {\em jet spiral walls}.
Essentially, the spiral features in the disk and in the jet follow the same kind of time evolution,
meaning that the jet spiral {"}walls{"} establish an almost stationary structure co-rotating with the disk.
We notice, however, that a small change in the position angle of the spiral structure along the outflow appears,
resulting from the fact that the jet dynamical time scale is much shorter than the disk dynamical time scale.
Thus, any structure that develops in the disk is {"}immediately{"} propagated along the wind.
Nevertheless, on very large spatial scales we would expect the jet spiral arms to lag behind the disk
spiral arms, assuming that the spiral structure and the jet survives that long.
(viii)
We investigated the global angular momentum budget in the binary star-disk-jet system and compared the respective
impact of the particular physical terms.
In comparison to previous work, in our approach we have essentially considered the full magnetic torque
and also the presence of an outflow, thus the angular momentum transport by vertical motion.
We find that, among the extra terms we have considered, the vertical transport of angular momentum plays a
significant role in the total angular momentum budget also in a binary system.
The same holds for the magnetic torque; however, the contributions that arise from the $\phi$-derivative
of the magnetic pressure (which is a truly 3D term) and the $B_{\phi}B_r$ stresses are small in the disk mid-plane.
The gravity torque arising from the time evolution of the 3D Roche potential plays an essential role, as it
constitutes the fundamental cause of all the 3D effects appearing in our disk-jet system,
including the spiral structure in the disk (seen in the density but also in the magnetic field) and in the outflow.\\
(ix)
From the radial profiles of the angular momentum fluxes we have concluded that the torques by gravity and the magnetic
field remove the angular momentum from the disk and thus support the inward motion of the disk material (accretion).
However, the other contributions to the torques and the respective directions of transport
(radial or vertical angular momentum transport) do not show a unique behavior.
Depending on the radial position in the disk, they may either remove angular momentum from the disk or add it.
The latter may lead to excretion of material at certain radii. \\
(x)
When comparing the radial and vertical mass fluxes and also the total angular momentum fluxes in the binary disk and in the disk around a
single star, we find that in the binary case, accretion is not supported throughout the whole disk,
and the profiles for the angular momentum fluxes and radial mass fluxes are altered due to the tidal effects.
Subsequently, we also detect the hemispherically asymmetric profiles of the vertical mass flux for the binary disk, also caused by tidal effects.
In comparison, for the single-star disk the evolution of the total angular momentum distribution supports the accretion process over the whole disk area and we find profiles for vertical mass fluxes that are perfectly symmetric for both hemisphere.
\\
In summary, by investigating in detail fully 3D MHD simulations of the launching process of jets from accretion disks that
are hosted by a binary star component, we have disentangled a number of new dynamical features.
In particular, we see the disk spiral structure being ejected into the disk outflow and the jet, featuring {"}spiral walls{"}
along the jet.
The different physical torques acting on the disk and the jet are all affected by the existence of a binary component,
thus changing in space and time along with the time variation of the Roche potential.
The global observable parameters, such as the disk accretion rate and the jet mass flux, are substantially altered in comparison to a 3D single star
launching situation.
\acknowledgements
We thank Andrea Mignone and the PLUTO team for the possibility to use their code.
We acknowledge very helpful comments by an anonymous referee that have led to a clearer presentation of our results.
Our simulations were performed on the ISAAC cluster of the Max Planck Institute for Astronomy
and the COBRA and DRACO clusters of the Max Planck Society.
\section{Results}
\paragraph{Accessing Tan's contact for a planar geometry.}
Our ultra-cold Bose gas is well described by the Hamiltonian $\hat H$, sum of the kinetic energy operator,
the confining potential,
and the interaction potential $\hat H_{\rm int}=a \hat K$ with
\begin{equation}
\hat K=\frac{2\pi \hbar^2}{m} \int \hskip -3mm\int \hat \psi^\dagger(\boldsymbol r)\, \hat \psi^\dagger(\boldsymbol r')\;\hat \delta(\boldsymbol r-\boldsymbol r')\;\hat \psi(\boldsymbol r')\,\hat \psi(\boldsymbol r)\ {\rm d}^3 r\;{\rm d}^3 r'.
\label{eq:interaction_potential_K}
\end{equation}
Here $\hat \delta (\boldsymbol r)$ is the regularized Dirac function entering in the definition of the pseudo-potential \cite{huan87} and the field operator $\hat \psi(\boldsymbol r)$ annihilates a particle in $\boldsymbol r$. Using Hellmann--Feynman theorem, one can rewrite the contact defined in Eq.\,(\ref{eq:contact_definition}) as
$C=8\pi m a^2 \langle \hat K\rangle/\hbar^2$.
In our experiment, the gas is uniform in the horizontal $xy$ plane, and it is confined with a harmonic potential of frequency $\omega_z$ along the vertical direction. We choose $\hbar \omega_z$ larger than both the interaction energy and the temperature, so that the gas is thermodynamically two-dimensional (2D). On the other hand, the extension of the gas $a_z=(\hbar /m\omega_z)^{1/2}$ along the direction $z$ is still large compared to the scattering length $a$, so that the collisions keep their 3D character and Eq.\,(\ref{eq:interaction_potential_K}) remains relevant \cite{Petrov:2001}. Suppose first that the zero-range potential $\hat \delta(\boldsymbol r-\boldsymbol r')$ appearing in (\ref{eq:interaction_potential_K}) does not need to be regularized. Then, after integration over $z$, $C$ can be related to the in-plane two-body correlation function $g_2$:
\begin{equation}
\frac{C}{C_0}\stackbin{?}{=} g_2(0) , \qquad C_0\equiv 4(2\pi)^{3/2} \frac{a^2 \bar n N}{a_z},
\label{eq:relation_C_g2}
\end{equation}
where we introduced the normally ordered average:
\begin{equation}
g_2(\boldsymbol r)=\frac{1}{\bar n^2}\langle :\hat n(\boldsymbol r) \hat n(0):\rangle,
\end{equation}
with $\hat n(\boldsymbol r)$ the operator associated with the 2D density, $\bar n$ its average value and $N$ the atom number. For an ideal Bose gas, the value of $g_2(0)$ varies from $2$ to 1 when one goes from the non-condensed regime to the fully condensed one \cite{Naraschewski:1999_PhysRevA.59.4595}, so that $C_0$ sets the scale of Tan's contact.
However, it is well known that $g_2(0)$ is generally an ill-defined quantity for an interacting fluid. For example in a Bose gas with zero-range interactions, one expects $g_2(r)$ to diverge as $1/r^2$ in 3D and $(\log r)^2$ in 2D when $r\to 0$
\cite{Braaten:2011_PhysRevLett.106.153005,werner_general_2012Bosons}. On the other hand, when one properly regularizes the zero-range potential $\hat \delta$ in Eq.\,(\ref{eq:interaction_potential_K}), Tan's contact is well-behaved and measurable. Here, we approach it by measuring the change in energy per atom $h \Delta \nu=\Delta E/N$ when the scattering length is changed by the small amount $\Delta a$. Replacing $\partial E/\partial a$ by $\Delta E/\Delta a$ in the definition (\ref{eq:contact_definition}), we obtain
\begin{equation}
\frac{C}{C_0}\approx \sqrt{2\pi} \;\frac{m a_z}{\hbar \bar n}\;\frac{\Delta \nu}{\Delta a}.
\label{eq:C_C0_Delta_nu}
\end{equation}
To measure the energy change $h\Delta \nu$ resulting for a small modification of the scattering length, we take advantage of a particular feature of the $^{87}$Rb atom: All scattering lengths $a_{ij}$, with $(i,j)$ any pair of states belonging to the ground-level manifold, take very similar values \cite{vanKempen:2002_PhysRevLett.88.093201}. For example, Ref.\,\cite{altin2011optically} predicts $a_{11}=100.9\,a_0$, $a_{22}=94.9\,a_0$ and $a_{12}=98.9\,a_0$, where the indices 1 and 2 refer to the two states $|1\rangle\equiv |F=1,m_z=0\rangle$ and $|2\rangle\equiv |F=2,m_z=0\rangle$ used in this work and $a_0$ is the Bohr radius. For an isolated atom, this pair of states forms the so-called clock transition at frequency $\nu_0\simeq 6.8\,$GHz, which is insensitive (at first order) to the ambiant magnetic field. Starting from a gas at equilibrium in $|1\rangle$, we use a Ramsey interferometric scheme to measure the microwave frequency required to transfer all atoms to the state $|2\rangle$. The displacement of this frequency with respect to $\nu_0$ provides the shift $\Delta \nu$ due to the small modification of scattering length $\Delta a=a_{22}- a_{11}$.
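For orientation, the conversion of Eq.\,(\ref{eq:C_C0_Delta_nu}) from a measured shift to a normalized contact takes only a few lines (Python sketch; the function name is ours, the constants are standard values for $^{87}$Rb):
\begin{verbatim}
import numpy as np

hbar = 1.054571817e-34   # J s
m_Rb = 1.44316060e-25    # kg, mass of 87Rb

def contact_over_C0(delta_nu, n2d, a_z, delta_a):
    # C/C0 = sqrt(2 pi) (m a_z / hbar n2d) (delta_nu / delta_a);
    # delta_nu in Hz, n2d in m^-2, a_z and delta_a in m.
    return (np.sqrt(2.0 * np.pi) * m_Rb * a_z * delta_nu
            / (hbar * n2d * delta_a))
\end{verbatim}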
\begin{figure}[t]
\begin{center}
\includegraphics{fig_1.pdf}
\vskip -67.8mm
\hskip 56mm
\includegraphics[width=30mm]{fig_1_inset.pdf}
\vskip 26mm
\end{center}
\caption{Example of an interferometric Ramsey signal showing the optical density of the fraction of the gas in state $|2\rangle$ after the second Ramsey pulse, as a function of the microwave frequency $\nu$. These data were recorded for $\bar n\approx 40$ atoms/\si{\micro}m$^2$ and $T\sim 22\,$nK, $\tau_1=10\,$ms. Here, $\tau_2$ has been increased to 1\,ms to limit the number of fringes for a better visibility. Inset. Filled black disks (resp. open red circles): central fringe for atoms in $|2\rangle$ (resp. $|1\rangle$) in the ``standard" configuration $\tau_2=0.1\,$ms. The density in $|1\rangle$ is obtained by applying a microwave $\pi$-pulse just before the absorption imaging phase. Blue squares: single-atom response measured during the ballistic expansion of the cloud by imaging atoms in $|2\rangle$. The lines in the inset are sinusoidal fits to the data. The vertical error bars of the inset correspond to the standard deviation of the 3 repetitions made for this measurement.}
\label{fig:Ramsey_signal}
\end{figure}
\paragraph{Ramsey spectroscopy on the clock transition.}
The Ramsey scheme consists in two identical microwave pulses, separated by a duration $\tau_1 = 10\,$ms. Their duration $\tau_2\sim 100\,$\si{\micro}s is adjusted to have $\pi/2$ pulses, \emph{i.e.}\;each pulse brings an atom initially in $|1\rangle$ or $|2\rangle$ into a coherent superposition of these two states with equal weights. Just after the second Ramsey pulse, we measure the 2D spatial density $\bar n$ in state $|2\rangle$ in a disk-shaped region of radius 9\,\si{\micro}m and using the absorption of a probe beam nearly resonant with the optical transition connecting $|2\rangle$ to the excited state $5P_{3/2},\, F'=3$. We infer from this measurement the fraction of atoms transferred into $|2\rangle$ by the Ramsey sequence, and we look for the microwave frequency $\nu_m$ that maximises this fraction.
An example of spectroscopic signal is shown in Fig. \ref{fig:Ramsey_signal}. In order to determine the ``bare" transition frequency $\nu_0$, we also perform a similar measurement on a cloud in ballistic expansion, for which the 3D spatial density has been divided by more than 100 and interactions play a negligible role. The uncertainty on the measured interaction-induced shift $\Delta \nu=\nu_m-\nu_0$ is on the order of 1 Hz. In principle, the precision of our measurements could be increased further by using a larger $\tau_1$. In practice however, we have to restrict $\tau_1$ to a value such that the spatial dynamics of the cloud, originating from the non-miscibility of the $1-2$ mixture ($a_{12}^2>a_{11} a_{22}$), plays a negligible role \footnote{We also check that no detectable spin-changing collisions appear on this time scale: more than 99\,\% of the atoms stay in the clock state basis.}. Another limitation to $\tau_1$ comes from atom losses, mostly due to 2-body inelastic processes involving atoms in $|2\rangle$. For $\tau_1=10\,$ms, these losses affect less than $5\%$ of the total population and can be safely neglected.
We see in Fig.\,\ref{fig:Ramsey_signal} that there indeed exists a frequency $\nu_m$ for which nearly all atoms are transferred from $|1\rangle$ to $|2\rangle$, so that $E(N,a_{22})-E(N,a_{11})=N\,h(\nu_m-\nu_0)$ (see \cite{SM} for details). We note that for an interacting system, the existence of such a frequency is by no means to be taken for granted. Here, it is made possible by the fact that the inter-species scattering length $a_{12}$ is close to $a_{11}$ and $a_{22}$. We are thus close to the SU(2) symmetry point where all three scattering lengths coincide. The modeling of the Ramsey process detailed in \cite{SM} shows that this quasi-coincidence allows one to perform a Taylor expansion of the energy $E(N_1,N_2)$ (with $N_1+N_2=N$) of the mixed system between the two Ramsey pulses, and to expect a quasi-complete rephasing of the contributions of all possible couples $(N_1,N_2)$ for the second Ramsey pulse. The present situation is thus quite different from the one exploited in
\cite{Fletcher:2017} for example, where $a_{11}$ and $a_{12}$ were vanishingly small. It also differs from the generic situation prevailing in the spectroscopic measurements of Tan's contact in two-component Fermi gases, where a microwave pulse transfers the atoms to a third, non-interacting state \cite{stewart2010verification}.
\begin{figure}[t]
\begin{center}
\includegraphics{fig_2.pdf}
\vskip -68.5mm
\hskip 44mm
\includegraphics{fig_2_inset.pdf}
\vskip 36mm
\end{center}
\caption{Variations of the shift $\Delta \nu$ with temperature for various 2D spatial densities. Violet disks: $\bar n=10.4\,(2)$\,\si{\micro}m$^{-2}$, blue squares: $\bar n=21.0\,(3)$\,\si{\micro}m$^{-2}$, green diamonds: $\bar n=31.5\,(3)$\,\si{\micro}m$^{-2}$, orange pentagons: $\bar n=42.0\,(1)$\,\si{\micro}m$^{-2}$. The horizontal error bars represent the statistical uncertainty on the temperature calibration, except for the points at very low temperature (10-22\,nK). These ultracold points are deeply in the Thomas-Fermi regime, where thermometry based on the known equation of state of the gas is not sensitive enough. The temperature is thus inferred from an extrapolation with evaporation barrier height of the higher temperature points. The error on the frequency measurement is below 1\,Hz and is not shown in this graph. Inset: Variations of the shift $\Delta \nu$ with density at low temperature $T \sim 22$\,nK, \emph{i.e.}\;a strongly degenerate gas. The straight line is the mean-field prediction corresponding to $\Delta a=-5.7\,a_0$.}
\label{fig:Delta_nu_vs_T}
\end{figure}
\paragraph{Resonance shift $\Delta \nu$ and contact $C$.}
We show in Fig.\,\ref{fig:Delta_nu_vs_T} our measurements of the shift $\Delta \nu$ for densities ranging from 10 to 40 atoms/\si{\micro}m$^2$, and temperatures from 10 to 170\,nK. Since $\hbar \omega_z/k_{\rm B}=210\,$nK, all data shown here are in the thermodynamic 2D regime $k_{\rm B}T<\hbar \omega_z$. More precisely, the population of the ground state of the motion along $z$, estimated from the ideal Bose gas model \cite{Chomaz:2015}, is always $\gtrsim$ 90\,\%. All shifts are negative as a consequence of $a_{22}<a_{11}$: the interaction energy of the gas in state $|2\rangle$ is slightly lower than in state $|1\rangle$. For a given density, the measured shift increases in absolute value with temperature. This is in line with the naive prediction of Eq.\,(\ref{eq:relation_C_g2}), since density fluctuations are expected to be an increasing function of $T$. Conversely for a given temperature, the shift is (in absolute value) an increasing function of density.
For the lowest temperatures investigated here, we reach the fully condensed regime in spite of the 2D character of the sample, as a result of finite size effects. In this case, the mean-field prediction for the shift reads $\Delta \nu=\bar n \, \hbar\, \Delta a/(\sqrt{2\pi}\, m a_z)$ [\emph{i.e.}\;$C=C_0$ in Eq.\,(\ref{eq:C_C0_Delta_nu})]. Our measurements confirm the linear variation of $\Delta \nu$ with $\bar n$, as shown in the inset of Fig.\,\ref{fig:Delta_nu_vs_T} summarizing the data for $T=22\,$nK. A linear fit to these data gives $\Delta a/a_0=-5.7\,(1.0)$ where the error mostly originates from the uncertainty on the density calibration. In the following, we use this value of $\Delta a$ for inferring the value of $C/C_0$ from the measured shift at any temperature, using Eq.\,(\ref{eq:C_C0_Delta_nu}). We note that this estimate for $\Delta a$ is in good agreement with the prediction $\Delta a/a_0=-6$ quoted in \cite{altin2011optically}, as well as with our recent measurement \cite{Zou2020} which is independent of the density calibration. The first corrections to the linear mean-field prediction were derived (in the 3D case) by Lee, Huang and Yang in \cite{Lee:1957}. For our densities, they have a relative contribution on the order of 5\,\% of the main signal ($\Delta \nu \lesssim 1\,$Hz) \cite{SM}, and their detection is borderline for our current precision.
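The order of magnitude of this mean-field shift can be checked with a few lines of code. The sketch below is purely illustrative and not part of our analysis pipeline: it assumes $^{87}$Rb values for the atomic mass, the usual harmonic-oscillator length $a_z=\sqrt{\hbar/m\omega_z}$ with $\omega_z/(2\pi)=4.41\,$kHz quoted in the Methods, and the fitted $\Delta a=-5.7\,a_0$.
\begin{verbatim}
import numpy as np

hbar = 1.054571817e-34           # J s
a0   = 5.29177210903e-11         # Bohr radius (m)
m    = 86.909 * 1.66053907e-27   # atomic mass (kg), assuming 87Rb

omega_z = 2 * np.pi * 4.41e3              # trap frequency (Methods)
a_z = np.sqrt(hbar / (m * omega_z))       # oscillator length along z

n_bar   = 42.0e12                # 2D density (m^-2), i.e. 42 atoms / um^2
delta_a = -5.7 * a0              # fitted scattering-length difference

# Mean-field prediction: Delta_nu = n hbar Delta_a / (sqrt(2 pi) m a_z)
delta_nu = n_bar * hbar * delta_a / (np.sqrt(2 * np.pi) * m * a_z)
print(f"a_z = {a_z * 1e9:.0f} nm, Delta_nu = {delta_nu:.1f} Hz")
# -> a shift of order -20 Hz at the largest density, cf. the inset
\end{verbatim}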
\begin{figure}[t]
\begin{center}
\includegraphics{fig_3.pdf}
\vskip -65mm
\hskip 30mm
\includegraphics{fig_3_inset.pdf}
\vskip 23mm
\end{center}
\caption{Variations of the normalized Tan's contact $C/C_0$ with the phase-space density ${\cal D}$. The encoding of the experimental points is the same as in Fig. \ref{fig:Delta_nu_vs_T}. The colored zone indicates the non-superfluid region, corresponding to ${\cal D}<{\cal D}_{\rm c}\approx 7.7$. The continuous black line shows the prediction derived within Bogoliubov approximation. Inset: Zoom on the critical region. The dashed blue line is the prediction from \cite{ren2004virial}, resulting from a virial expansion for the 2D Bose gas. The dotted red line shows the results of the classical field simulation of \cite{Prokofev:2002}.}
\label{fig:Contact_vs_TTc}
\end{figure}
We summarize all our data in Fig.\,\ref{fig:Contact_vs_TTc}, where we show the normalized contact $C/C_0$ defined in Eq.\,(\ref{eq:C_C0_Delta_nu}) as a function of the phase-space density ${\cal D}$. All data points collapse on a single curve within the experimental error, which is a manifestation of the approximate scale invariance of the Bose gas, valid for a relatively weak interaction strength $\tilde g\lesssim 1$ \cite{Hung:2011,Yefsah:2011}.
\section{Discussion}
We now compare our results in Fig.\,\ref{fig:Contact_vs_TTc} to three theoretical predictions. The first one is derived from the Bogoliubov approximation applied to a 2D quasi-condensate \cite{Mora:2003}. This prediction is expected to be valid only for ${\cal D}$ notably larger than the phase-space density at the critical point ${\cal D}_c$ (see methods), but it gives a fair account of our data over the whole superfluid region. Within this approximation, one can also calculate the two-body correlation function and write it as $g_2(r)= g_2^{T=0}(r)+g_2^{\rm thermal}(r)$. One can then show the result \cite{SM}
\begin{equation}
\frac{C}{C_0}=1+g_2^{\rm thermal}(0),
\end{equation}
which provides a quantitative relation between the contact and the pair correlation function, in spite of the already mentioned singularity of $g_2^{T=0}(r)$ at $r=0$.
For low phase-space densities, one can perform a systematic expansion of various thermodynamic functions in powers of the (properly renormalized) interaction strength \cite{ren2004virial}, and obtain a prediction for $C$ (dashed blue line in the inset of Fig.\,\ref{fig:Contact_vs_TTc}). By comparing the 0th, 1st and 2nd orders of this virial-type expansion, one can estimate that it is valid for ${\cal D}\lesssim 3$ for our parameters. When ${\cal D}\to 0$, the result of \cite{ren2004virial} gives $C/C_0\to 2$, which is the expected result for an ideal, non-degenerate Bose gas. The prediction of \cite{ren2004virial} for ${\cal D}\sim 3$ compares favourably with our results in the weakly-degenerate case.
Finally we also show in Fig.\,\ref{fig:Contact_vs_TTc} the results of the classical field simulation of \cite{Prokofev:2002} (red dotted line), which are in principle valid both below and above the critical point. Contrary to the quantum case, this classical analysis does not lead to any singularity for $\langle n^2(0)\rangle$, so that we can directly plot this quantity as it is provided in \cite{Prokofev:2002} in terms of the quasi-condensate density. For our interaction strength, we obtain a non-monotonic variation of $C$. This unexpected behavior, which does not match the experimental observations, probably signals that the present interaction strength $\tilde g=0.16$ (see Methods) is too large for using these classical field predictions, as already suggested in \cite{Prokofev:2002}.
Using the Ramsey interferometric scheme on a many-body system, we have measured the two-body contact of a 2D Bose gas over a wide range of phase-space densities. We could implement this scheme on our fluid thanks to the similar values of the three scattering lengths at play, $a_{11},a_{22},a_{12}$, corresponding to an approximate SU(2) symmetry for interactions. Our method can be generalized to the strongly interacting case $a_{ij}\gtrsim a_z$, as long as a Fano-Feshbach resonance allows one to stay close to the SU(2) point. One could then address simultaneously the LHY-type corrections at zero temperature \cite{Mora:2009_PhysRevLett.102.180404,Fournais:2019}, the contribution of the three-body contact \cite{werner_general_2012Bosons,Smith:2014_PhysRevLett.112.110402},
and the breaking of scale invariance expected at non-zero temperature. Finally, we note that even for our moderate interaction strength, classical field simulations seem to fail to reproduce our results, although they could properly account for the measurement of the equation of state itself \cite{Hung:2011,Yefsah:2011}. The semi-classical treatment of Ref.\,\cite{Giorgetii:2007_PhysRevA.76.013613} and the quantum Monte Carlo approaches of Refs.\,\cite{Holzmann08,Rancon12} should provide a reliable path to the modelling of this system. This would be particularly interesting in the vicinity of the BKT transition point, where the usual approach based on the $XY$ model \cite{Nelson:1977}, which neglects any density fluctuation, does not provide relevant information on the behavior of Tan's contact.
\section{Methods}
\paragraph{Preparation of the two-dimensional gas.}
The preparation and the characterization of our sample have been detailed in \cite{Ville:2017,Ville:2018} and we briefly outline the main properties of the clouds explored in this work. In the $xy$ plane, the atoms are confined in a disk of radius $12\,$\si{\micro}m by a box-like potential, created by a laser beam properly shaped with a digital micromirror device. We use the intensity of this beam, which determines the height of the potential barrier around the disk, as a control parameter for the temperature. The confinement along the $z$ direction is provided by a large-period optical lattice, with a single node occupied and $\omega_z/(2\pi)= 4.41\,(1)\,$kHz. We set a magnetic field $B=0.701\,(1)$\,G along the vertical direction $z$, which defines the quantization axis.
We use the expression ${\cal D}_{\rm c}=\ln(380/ \tilde g)$ for the phase-space density at the critical point of the superfluid transition \cite{Prokofev:2001}. Here, $\tilde g=\sqrt{8\pi}\,a_{11}/a_z=0.16$ is the dimensionless interaction strength in 2D, leading to ${\cal D}_{\rm c}=7.7$. We study Bose gases from the normal regime (${\cal D}=0.3 {\cal D}_{\rm c}$) to the strongly degenerate, superfluid regime (${\cal D}>3 {\cal D}_{\rm c}$).
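The quoted values of $\tilde g$ and ${\cal D}_{\rm c}$ follow directly from these expressions; a minimal numerical check (assuming $a_{11}\simeq 101\,a_0$, a typical $^{87}$Rb value, and the harmonic length $a_z=\sqrt{\hbar/m\omega_z}$) reads:
\begin{verbatim}
import numpy as np

hbar = 1.054571817e-34
a0   = 5.29177210903e-11
m    = 86.909 * 1.66053907e-27                 # assuming 87Rb
a_z  = np.sqrt(hbar / (m * 2 * np.pi * 4.41e3))

a11 = 100.9 * a0                               # assumed value of a_11
g   = np.sqrt(8 * np.pi) * a11 / a_z           # 2D interaction strength
D_c = np.log(380.0 / g)                        # critical phase-space density
print(f"g = {g:.2f}, D_c = {D_c:.1f}")         # -> g ~ 0.16, D_c ~ 7.7
\end{verbatim}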
\paragraph{Acknowledgments.}
We thank Paul Julienne, Raphael Lopes, and F\'elix Werner for useful discussions. We acknowledge the contribution of Rapha\"el Saint-Jalm at the early stage of the project. This work was supported by ERC (Synergy Grant UQUAM), Quantera ERA-NET (NAQUAS project) and the ANR-18-CE30-0010 grant. LKB is a member of the SIRTEQ network of R\'egion Ile-de-France.
\paragraph{Author contributions.}
Y.-Q.Z., B.B.-H. and C.M. performed the experiment and carried out the preliminary data analysis. Y.-Q.Z. performed the detailed data analysis. E.L.C. participated in the preparation of the experimental setup. S.N., J.D. and J.B. contributed to the development of the theoretical model. J.D. and J.B. wrote the manuscript with contributions from all authors.
\section{Introduction}
In this article, we address the problem of the computation of eigenvalues of self-adjoint Schr\"odinger operators
(quantum Hamiltonians) $\mathcal H=-\Delta +V$. Our main result is a reduction of this infinite-dimensional problem to a finite-dimensional one in a fully controlled way. To this end we use the Feshbach--Schur (FS) method, which originated in the works of I. Schur on the Dirichlet problem in planar domains and of H. Feshbach on resonances in nuclear physics, and was then developed independently in numerical analysis, computational quantum chemistry and mathematical physics, see \cite{Griesemer2008-uw,Gustafson2011-wd}.
Unlike the standard applications of this method (see e.g. a series of papers~\cite{Lowdin1965-rd,Lowdin1965-tr} by L\"owdin on bounds on eigenvalues of a given Hamiltonian), we use it not as a fixed scheme but rather, following~\cite{Bach1998-df}, as a map, called the Feshbach--Schur map (FSM), from one problem to another, simpler
one, involving fewer degrees of freedom. We base our analysis on the isospectrality property of this map discovered in~\cite{Bach1998-df} recalled in Theorem~\ref{thm:isospF} below.
We call this approach the {\it FSM method}.
We combine this approach with planewave discretizations which are widely used in numerical methods in electronic structure calculation,
especially for condensed matter simulations and in materials science.
Electronic structure calculation is indeed one of the problems we have in our sight.
And one particular very useful aspect of planewaves is that they are eigenfunctions of the Laplace operator, which is the main part of the Hamiltonian $\mathcal H=-\Delta +V$ that needs to be diagonalized in order to determine the electronic structure of the system.
Limiting the computational cost of finding the eigenstates of the Hamiltonian operator has been a key issue in electronic structure calculation, and is currently of interest, due to the ever-growing size of the considered systems relative to the available computational resources.
For example, different perturbation methods have been proposed, such as~\cite{Brust1964-hw, Moller1934-iz}, traditionally in order to introduce more physical detail, e.g. many-particle interactions, into a given approximation.
More recently, a post-processing strategy has been proposed by some authors for planewave discretizations for non-linear eigenvalue problems~\cite{Cances2014-lb,Cances2016-vy,Cances_undated-fd,Dusson2017-jg}, which considers the exact solution as a perturbation of the discrete (using the planewave basis) approximation.
This is in spirit not so far from so-called two-grid methods, where a first problem is solved on a coarse basis, i.e. in a small discretization space, and a small problem is solved on a fine basis. In the case of eigenvalue problems, two-grid methods have been proposed e.g. in \cite{Xu1999-vo} in the case of a linear eigenvalue problem.
A two-grid method has also been proposed for nonlinear eigenvalue problems of a Gross--Pitaevskii type in ~\cite{Cances2018-ow}.
In this article, we extend the FSM-method to establish finite-dimensional approximations to solve the Hamiltonian eigenvalue problem with controlled errors on the eigenvalues and eigenvectors.
To be a little more concrete, we define a new problem in a coarse space ${\mathsf X}_M\subset {\mathsf X}$ yielding the exact eigenvalue one would obtain when computing it in the infinite-dimensional space ${\mathsf X}$.
Indeed, our contribution follows a new Ansatz based on the question: Can we find a discrete Hamiltonian acting on the finite-dimensional space ${\mathsf X}_M$ that has the exact eigenvalue $\lambda_\star$ of the original Hamiltonian (acting on ${\mathsf X}$) as an eigenvalue? It turns out that the answer is yes, but that the discrete Hamiltonian depends itself on $\lambda_\star$, through the FS-map, leading to an eigenvalue problem in ${\mathsf X}_M$ that is nonlinear in the spectral parameter.
Not surprisingly, the map cannot be computed exactly but only be approximated through a fast decaying series, that is truncated based on a parameter $K$, and which requires computations in a larger space ${\mathsf X}_N$ with ${\mathsf X}_M\subset {\mathsf X}_N \subset {\mathsf X}$.
In this work we quantify the error introduced due to the discretization parameters $K,N,M$ and show that the eigenvalue and eigenfunction errors are bounded by two terms: i) a term with algebraic decay with respect to the truncation parameter $K$, and ii) a term with a regularity-dependent convergence rate in $N$.
We also quantify the explicit dependency of the error in terms of the parameter $M$ defining the discrete space ${\mathsf X}_M$ and the potential, including its regularity.
Our analysis reveals that the algebraic decay rate with respect to $K$ increases with increasing~$M$.
Our method uses an adapted version of perturbation theory based on a slightly more regular notion of relative form-boundedness, as stated in Assumption~\ref{as:pot}, developed as an abstract theory in Section~\ref{sec:pert-est}, which thus only requires little regularity of the potential including cases which are not covered by the standard analysis of planewave discretizations.
We also illustrate our approach by computing eigenvalues of several 1D Schr\"odinger operators.
This article is organized as follows. In Section~\ref{sec:sec2} we present the problem and numerical method that is used to find approximations thereof, as well as the main approximation result of the article and the error bounds on the eigenvalues.
Section~\ref{sec:pert-est} provides the above-mentioned abstract framework of Feshbach--Schur perturbation theory based on the regularized version of form-boundedness whereas Section~\ref{sec:prelim-res} contains some technical results needed to prove the main result which follows in Section~\ref{sec:main-proof}.
Finally, we present in Section~\ref{sec:NumRes} some numerical results to illustrate the convergence as well as the error bounds, and we conclude with some perspectives in Section~\ref{sec:sec6}.
\section{Set-up and results}
\label{sec:sec2}
\subsection{Problem statement}\label{sec:probl}
In order to simplify the notation, we consider a cubic lattice ${\mathcal R}=L\mathbb Z^d$ ($L > 0$, $d=1,2,3$),
but all our arguments straightforwardly apply to the general case of any Bravais lattice.
In this paper we are interested in the spectral theory of the self-adjoint Schr\"odinger operators
(quantum Hamiltonians)
\[
\mathcal H := -\Delta + V,
\]
with reasonably regular, ${\mathcal R}$-periodic potentials $V$, acting on
the Hilbert space
\begin{align*}\label{L2per} {\sf L^2_\per}
&:=\left\{ u \in L^2_{\rm loc}(\mathbb R^d) \; | \; u \mbox{ is ${\mathcal R}$-periodic} \right\},
\end{align*}
endowed with the scalar product
$
\langle u,v\rangle = \int_\Omega \overline{u({\bm r})} \,
v({\bm r}) \, d{\bm r}$ and the induced norm $\|\cdot\|$, where $\Omega=[0,L)^d$ is the chosen fundamental cell of the lattice ${\mathcal R}=L\mathbb Z^d$.
Specifically, we would like to solve the eigenvalue problem
\begin{equation}
\label{eq:EVP}
\mathcal H\varphi = \lam\varphi,
\end{equation}
in a space ${\mathsf X}\subset \Hs{1}$. Here, $\Hs{1}$ is the Sobolev space of index 1 of periodic functions on $\Omega$, which is defined in precise terms later on and equation~\eqref{eq:EVP} is considered in the weak sense.
To this end we use the Feshbach--Schur method to reduce the problem to a finite dimensional one.
To simplify the exposition, we will assume that the eigenvalue of interest ${\lambda_\star}$ is isolated, which is true for the smallest eigenvalue under fairly general assumptions on $V$.
We denote by $\|\cdot\|$ the operator norm on ${\mathcal L}({\sf L^2_\per})$, the space of bounded linear operators on ${\sf L^2_\per}$.
To formulate our condition on the potential $V$, we introduce the following norm measuring its regularity
\[
\Enorm{V} := \|(-\Delta+1)^{-1/2+r/2} V (-\Delta+1)^{-1/2+r/2}\|,
\]
where the operator $(-\Delta+1)^{s}$ is defined by the Fourier transform (cf. Appendix \ref{sec:tech-res}). In what follows, we thus assume that the potential $V$ satisfies the following condition.
\begin{assumption}
\label{as:pot}
The potential $V$ is real, ${\mathcal R}$-periodic and satisfies
\[
\Enorm{V} < \infty \quad \text{for some} \quad r > 0.
\]
\end{assumption}
Assumption \ref{as:pot} implies that $V$ is $\Delta$-form bounded~\cite{CFKS,RSII}, which corresponds to $r=0$. The latter, weaker property implies that $\mathcal H$ (a) is self-adjoint; (b) is bounded below and (c) has purely discrete spectrum (see e.g. \cite{CFKS,RSII,RSIV, HS}).
Moreover, potentials $V$ belonging to the Sobolev spaces, $\Hs{s}:=(-\Delta+1)^{s/2}{\sf L^2_\per}$ satisfy this assumption as shown in Appendix~\ref{sec:tech-res}, Lemma~\ref{lem:Hs-bnd} for $r \le s+1$ and $r<1+\frac{s}{2}-\frac{d}{4}$.
In terms of Sobolev spaces, Assumption~\ref{as:pot} states that $V$, as an operator, maps $\Hs{1-r}$ into $\Hs{-1+r}$.
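To make this norm concrete: in a planewave (Fourier) basis, $\Enorm{V}$ is the spectral norm of an explicitly weighted matrix of Fourier coefficients. The following sketch is an illustration of ours (the truncation parameter \texttt{kmax} is an uncontrolled assumption) for a 1D periodic potential on $[0,2\pi)$:
\begin{verbatim}
import numpy as np

def V_r_norm(V_hat, r, kmax=256):
    # ||V||_r = ||(1 - Delta)^{(r-1)/2} V (1 - Delta)^{(r-1)/2}||,
    # approximated by truncating the Fourier-space matrix to |n| <= kmax.
    n = np.arange(-kmax, kmax + 1)
    w = (n**2 + 1.0) ** ((r - 1) / 2)        # weights (k^2 + 1)^{(r-1)/2}
    Vmat = V_hat(n[:, None] - n[None, :])    # matrix elements V_{n,n'}
    return np.linalg.norm(w[:, None] * Vmat * w[None, :], 2)

# Example: V(x) = 2 cos(x), whose only nonzero Fourier coefficients
# are those of e^{+ix} and e^{-ix}, both equal to 1.
V_hat = lambda k: (np.abs(k) == 1).astype(float)
print(V_r_norm(V_hat, r=0.5))
\end{verbatim}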
\subsection{Approach}
In our approach, we reduce the exact infinite dimensional eigenvalue problem to a finite dimensional one in a controlled way for fairly irregular potentials. Of course, we have to pay a price for this, which is that at one point we solve a one-dimensional fixed point problem
that can be equivalently seen as a non-linear eigenvalue problem.
A key ingredient of our method is the finite dimensional space and the corresponding orthogonal projection onto which we map the original problem to obtain a reduced, finite-dimensional one.
Let ${\mathsf X}_M$ denote the subspace of ${\sf L^2_\per}$ spanned by the eigenfunctions of $-\Delta$ on ${\mathcal R}$, thus planewaves, with eigenvalues smaller than $\rho_M$ where
\[
\rho_M:=\left(\frac{2\pi M}{L}\right)^2.
\]
Let $\Pr_{\!M}$ be the ${\sf L^2_\per}$-orthogonal projection onto ${\mathsf X}_M$ and $\Pr_{\!M}^\perp=1-\Pr_{\!M}$.
We consider the Galerkin approximation
of the linear Hamiltonian
$ \mathcal H := -\Delta + V,$
\[
\mathcal{H}_{\! M} := \Pr_{\!M}(-\Delta + V) \Pr_{\!M}.
\]
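In 1D, this Galerkin matrix is assembled directly in Fourier space. The sketch below is illustrative (the helper \texttt{V\_hat}, returning the Fourier coefficients of $V$, is hypothetical); it produces the discrete operator on which all subsequent constructions act:
\begin{verbatim}
import numpy as np

def galerkin_hamiltonian(V_hat, M, L=2 * np.pi):
    # H_M = P_M (-Delta + V) P_M in the basis e^{2 pi i n x / L},
    # with |n| < M, so the Laplacian eigenvalues stay below rho_M.
    n = np.arange(-M + 1, M)
    kin = (2 * np.pi * n / L) ** 2            # eigenvalues of -Delta
    return np.diag(kin) + V_hat(n[:, None] - n[None, :])

# Example: V(x) = 2 cos(2 pi x / L); the lowest Ritz values
# approximate the bottom of the spectrum of H = -Delta + V.
V_hat = lambda k: (np.abs(k) == 1).astype(float)
print(np.linalg.eigvalsh(galerkin_hamiltonian(V_hat, M=32))[:3])
\end{verbatim}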
We now introduce the projections $\varphi_M = \Pr_{\!M} \varphi$ and $\varphi_M^\perp =\Pr_{\!M}^\perp\varphi$ and project
the exact eigenvalue problem \eqref{eq:EVP}
onto the subspace ${\mathsf X}_M$ and its complement ${\mathsf X}_M^\perp$ to obtain
\begin{align}
\label{eq:Seq1}
\Pr_{\!M}( \mathcal{H}_{\! M} - \lam) \varphi_M &= -\Pr_{\!M} V \varphi_M^\perp, \\
\label{eq:Seq2}
\Pr_{\!M}^\perp(\mathcal{H}_{\! M}^\perp- \lam)\varphi_M^\perp &= -\Pr_{\!M}^\perp V\varphi_M,
\end{align}
where $\mathcal{H}_{\! M}^\perp := \Pr_{\!M}^\perp \mathcal H\Pr_{\!M}^\perp$. Next, in Appendix~\ref{sec:tech-res}, we prove the following
\begin{lem}
\label{lem:Hperp-low-bnd}
Let Assumption~\ref{as:pot} hold and define $\ka_M:=\rho_M - (\rho_M+1) \, \rho_M^{-{r}}\Enorm{V}$.
Then
\begin{align}
\label{HNlowbnd}
\mathcal{H}_{\! M}^\perp
&\ge \ka_M \ \text{ on }\ {\rm{Ran\, }} \Pr_{\!M}^\perp.
\end{align}
\end{lem}
Thus for ${\lambda} < \ka_M$,
the operator $\mathcal{H}_{\! M}^\perp - {\lambda}$ is invertible and
we can solve~\eqref{eq:Seq2} for $\varphi_M^\perp$ and thus $\varphi_M^\perp=- (\mathcal{H}_{\! M}^\perp - \lam)^{-1}\Pr_{\!M}^\perp V\varphi_M$. Substituting the result into~\eqref{eq:Seq1}, we obtain the
non-linear eigenvalue problem
\begin{equation}
\label{EVPN}
\big( \mathcal{H}_{\! M} + U_{\! M}(\lam)\big) \varphi_M = \lam \varphi_M,
\end{equation}
where we introduced the \emph{effective interaction} $U_{\! M}({\lambda}):{\mathsf X}_M\rightarrow {\mathsf X}_M$, or a Schur complement,
\begin{align}
\label{UNlam-def}
U_{\! M}(\lam)
:=
-\Pr_{\!M} V\Pr_{\!M}^\perp (\mathcal{H}_{\! M}^\perp- {\lambda})^{-1} \Pr_{\!M}^\perp V\Pr_{\!M}.
\end{align}
We then have the following proposition, which is proved in Appendix~\ref{sec:tech-res}.
\begin{prop}
\label{prop:UN-well-defined}
For each ${\lambda}$ such that ${\lambda} < \ka_M$, $U_{\! M}({\lambda})$ is a well-defined
operator as a product of three maps: $\Pr_{\!M}, V$ and $\Pr_{\!M}^\perp ( \mathcal{H}_{\! M}^\perp - {\lambda})^{-1} \Pr_{\!M}^\perp$ between various but matching Sobolev spaces.
\end{prop}
Now, we construct a completely computable approximation of the eigenvalue problem~\eqref{EVPN}, with the operators involved being sums of products of finite matrices.
Namely, we expand the resolvent $(\mathcal{H}_{\! M}^\perp- {\lambda})^{-1}_{|\text{Ran}\Pr_{\!M}^\perp} = ( -\Delta + V_M^\perp- {\lambda})^{-1}_{|\text{Ran}\Pr_{\!M}^\perp}$, where $V_M^\perp:=\Pr_{\!M}^\perp V \Pr_{\!M}^\perp$, in \eqref{UNlam-def} in a formal Neumann series in $V_M^\perp$, then truncate this series at $K\in\mathbb{N}$ and replace the projections $\Pr_{\!M}^\perp:=\mathbf{1}-\Pr_{\!M}$ by $\Pr_{\!M}^{\!N}:=\Pr_{\!N}-\Pr_{\!M}$, with $N>M$. Introducing the notation
\begin{equation}
\label{eq:GNM22}
{\sf G}_{\!M}^N({\lambda}) := ( -\Delta - {\lambda})|_{{\rm{Ran\, }} \Pr_{\!M}^{\!N}}^{-1}
\end{equation}
and $V_{\! M}^N:= \Pr_{\!M}^{\!N} V \Pr_{\!M}^{\!N}$, we obtain the following
truncated effective interaction
\begin{align}\label{UNMKlam}
U_{\! \sigma}({\lambda})
:=
-\Pr_{\!M} V \Pr_{\!M}^{\!N}
R_{\sigma}({\lambda}) \Pr_{\!M}^{\!N} V \Pr_{\!M},
\end{align}
where $\sigma:=(N, M, K)$ and $R_{\sigma}({\lambda}) :=\sum_{k=0}^K (-1)^{k}\Big[ {\sf G}_{\!M}^N({\lambda})V_{\! M}^N \Big]^k {\sf G}_{\!M}^N({\lambda}) $.
Since all the operators involved in \eqref{UNMKlam} are finite matrices, this family is well-defined and computable.
Now, we define $\mathcal{H}_{\! \sigma}({\lambda}):=\mathcal{H}_{\! M} + U_{\! \sigma}({\lambda})$ on ${\mathsf X}_M$ and consider the eigenvalue problem:
find an eigenvalue $\lambda_{\sigma i}$ and the corresponding eigenfunctions $\varphi_{\sigma i}\in {\mathsf X}_M$ such that
\begin{align}
\label{EVPNMK}
\mathcal{H}_{\! \sigma}(\lambda_{\sigma i}) \varphi_{\sigma i} = \lambda_{\sigma i} \varphi_{\sigma i}.
\end{align}
Next, we define the approximate `lifting' operator, whose origin will become clear in Section~\ref{sec:theor}:
\begin{align} \label{Qsig}
Q_{\sigma} ({\lambda})
&:=
\mathbf{1} - R_{\sigma}({\lambda}) \Pr_{\!M}^{\!N} V \Pr_{\!M}.
\end{align}
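All objects entering \eqref{UNMKlam}--\eqref{Qsig} are dense matrices, so the reduced problem can be assembled with elementary linear algebra. The sketch below (our 1D illustration in the setting of the previous snippet, not production code) accumulates the truncated Neumann series term by term; the only ``inversion'' required is the diagonal resolvent ${\sf G}_{\!M}^N({\lambda})$:
\begin{verbatim}
import numpy as np

def H_sigma(V_hat, M, N, K, lam, L=2 * np.pi):
    # Assembles H_sigma(lam) = H_M + U_sigma(lam) on X_M for a 1D
    # periodic potential. X_M = span{|n| < M}, Ran P_M^N = {M <= |n| < N}.
    # (Meaningful for lam below kappa_M, where the series makes sense.)
    n = np.arange(-N + 1, N)
    kin = (2 * np.pi * n / L) ** 2
    low, mid = np.abs(n) < M, np.abs(n) >= M
    V = V_hat(n[:, None] - n[None, :])        # potential matrix on X_N
    H_M  = np.diag(kin[low]) + V[np.ix_(low, low)]
    V_ml = V[np.ix_(mid, low)]                # P_M^N V P_M
    V_mm = V[np.ix_(mid, mid)]                # V_M^N
    G = 1.0 / (kin[mid] - lam)                # G_M^N(lam), diagonal
    term = G[:, None] * V_ml                  # k = 0 term of R_sigma V P_M
    acc = term.copy()
    for _ in range(K):                        # adds (-1)^k [G V_M^N]^k G V P_M
        term = -G[:, None] * (V_mm @ term)
        acc += term
    U = -V_ml.conj().T @ acc                  # U_sigma(lam)
    return H_M + U, acc                       # acc = R_sigma(lam) P_M^N V P_M
\end{verbatim}
The second output is precisely the matrix $R_{\sigma}({\lambda})\Pr_{\!M}^{\!N} V \Pr_{\!M}$ entering the lifting operator \eqref{Qsig}.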
\subsection{Main results}
Within this manuscript, we denote by $\lesssim$ upper bounds involving constants that do not depend on the parameters $\sigma=(N,M,K), \alpha, r, \| V \|_r$.
Then, we have the following result.
\begin{thm}
\label{thm:main}
Let Assumption \ref{as:pot} hold, let ${\lambda_\star}$ be an isolated eigenvalue of $\mathcal H$ of finite multiplicity $m$, let $\g_0$ denote the gap of ${\lambda_\star}$ to the rest of the spectrum of $\mathcal H$.
Then, there exists $\alpha>0$ and $M_0\in\mathbb N$ such that for $M \ge M_0$, problem~\eqref{EVPNMK} has $m$ solutions $( \varphi_{\sigma i}, \lambda_{\sigma i})\in {\mathsf X}_M\times [\lambda_\star-\frac{\g_0}{2},\lambda_\star+\frac{\g_0}{2}]$.
We denote $\lambda_{\circ} = {\lambda_\star}+\g_0+\alpha$.
For each $\varphi_{\sigma i}$, there exists $\varphi_i\in {\mathsf X}$, an eigenfunction of $\mathcal H$ associated to $\lambda_\star$, such that $( \varphi_{\sigma i}, \lambda_{\sigma i})$ approximates $( \varphi_{ i}, \lambda_\star)$ in the following sense:
\begin{align}
|{\lambda_\star}-\lambda_{\sigma i}|
&\lesssim
(\lambda_\star + \alpha)
\frac{\Enorm{V}^2 }{\alpha^r}
\varepsilon(\sigma,r,V),
\\
\| \varphi_{i} - Q_{\sigma} (\lambda_{\sigma i})\varphi_{\sigma i} \|
& \lesssim
\Enorm{V} \left[
1+ \frac{\lambda_{\circ}}{\g_0} \frac{\Enorm{V}}{ \alpha^r}
\right]
\, \varepsilon(\sigma,r,V),
\label{eq:est-eigenvectors2-0}
\end{align}
where
\[
\varepsilon(\sigma,r,V)
:=
\rho_N^{-r}
+ \rho_M^{-r}
\left[ 4 \rho_M^{-r}\Enorm{V} \right]^{K+1}.
\]
\end{thm}
This theorem calls for several remarks.
\begin{remark}
Note that $\varepsilon$ is equivalent to
\[
\varepsilon(\sigma,r,V)
\approx
N^{-2r} + M^{-2r}
\left[ 4 \left(\tfrac{2\pi}{L}\right)^{2r} M^{-2r}\Enorm{V} \right]^{K+1},
\]
where the equivalence constants do not depend on the parameters $\sigma=(N,M,K),r,\alpha,V$.
\end{remark}
\begin{remark}
In some cases, for instance in multi-scale problems, one might only be interested in the coarse-scale solution, i.e. the best approximation in the coarse space ${\mathsf X}_M$ given by $\Pr_{\!M}\varphi_{i}$.
In such cases, a useful byproduct of the proof of Theorem~\ref{thm:main} is the following estimate
\begin{equation}
\| (-\Delta+1)^s (\Pr_{\!M}\varphi_{i} - \varphi_{\sigma i}) \|
\lesssim
\frac{\lambda_{\circ}}{\g_0} \frac{\Enorm{V}^2}{ \alpha^r}
\rho_M^s\,
\varepsilon(\sigma,r,V),
\label{eq:est-eigenvectors1-0}
\end{equation}
for any $s\ge 0$, which thus compares the eigenfunctions in the space ${\mathsf X}_M$.
\end{remark}
\begin{remark}
Note that convergence of the eigenvalues and the eigenfunctions can be achieved by taking the limit $K,N\rightarrow \infty$ for fixed $M\ge M_0$.
For practical purposes, the idea is to set $N$ large enough so that the total error is dominated by the error introduced by the finite truncation parameter $K$.
Further, note that the eigenvalue and eigenvector errors have the same rate of convergence with respect to $K$. However, the error in the eigenvector depends on the gap $\g_0$ while the error in the eigenvalue does not.
\end{remark}
The estimate with respect to $N$ in Theorem~\ref{thm:main} is not sharp in all cases, in particular for sufficiently regular potentials $V$. Nonetheless, our analysis has the merit of presenting the convergence result in one combined analysis based on perturbative techniques, which also holds for low regularities of the potential, where standard {\it a priori} approximation results for the variational approximation do not hold
(note that an estimate for the variational problem can be obtained by setting $K\rightarrow \infty$ or $M=N$).
Moreover, for potentials with very low regularities, the upcoming numerical results indicate that the convergence with respect to $N$ of our analysis is sharp, at least in the presented numerical study.
Note that we can adapt the result whenever {\it a priori} approximation results are available by employing the triangle inequality.
Indeed, if the potential $V$ belongs to the Sobolev space $\Hs{s}$ with $s>d/2$, we resort to {\it a priori} results in the first place to obtain a sharp bound with respect to $N$, see e.g., \cite{Babuska1991-cg,cances2010numerical,Boffi2010-wh}, and also~\cite{norton2010convergence} for a certain class of discontinuous potentials in $H^{1/2-\varepsilon}$ for all $\varepsilon>0$, in two dimensions.
More precisely, we consider $\mathcal H$ acting on ${\mathsf X}_N$ directly, i.e. substituting $\mathcal H$ by $\mathcal H_N:=\Pr_{\!N}\mathcal H\Pr_{\!N}$ and using ${\mathsf X}={\mathsf X}_N$ with variational solution $(\varphi_{N},{\lambda}_{N})$, assuming a simple eigenvalue for simplicity.
It is important to note that problem~\eqref{EVPNMK} remains unchanged and thus, the result of Theorem~\ref{thm:main} holds with
\[
\widetilde\varepsilon(\sigma,r,V)
:=
\rho_M^{-r}
\left[ 4 \rho_M^{-r}\Enorm{V} \right]^{K+1},
\]
but where the exact solution is substituted by $(\varphi_{N},{\lambda}_{N})$.
Proceeding then by the triangle inequality yields
\begin{align*}
|\lambda_\star - {\lambda}_{\sigma i} |
&\le
|\lambda_\star- {\lambda}_{N}|
+
|{\lambda}_{N} - {\lambda}_{\sigma i}|,
\\
\|\varphi_i - Q_{\sigma}({\lambda}_{\sigma i})\varphi_{\sigma i} \|
&\le
\|\varphi_i - \varphi_{N}\|
+
\|\varphi_{N} - Q_{\sigma}({\lambda}_{\sigma i})\varphi_{\sigma i}\|.
\end{align*}
Combining then the aforementioned {\it a priori} estimates from~\cite{Babuska1991-cg,cances2010numerical,Boffi2010-wh}
for the first terms of the right hand sides with Theorem~\ref{thm:main} for the latter parts yields the following corollary.
\begin{cor}
\label{rem:TriangleIneq}
Under the conditions of Theorem~\ref{thm:main} and if $V\in\Hs{s}$, $s>d/2$, then
\begin{align*}
|\lambda_\star - {\lambda}_{\sigma i} |
&\le
C \, \left( N^{-(2s+2)} + \widetilde\varepsilon(\sigma,r,V)
\right),
\\
\|\varphi_i - Q_{\sigma}({\lambda}_{\sigma i})\varphi_{\sigma i}\|
&\le
C \, \left( N^{-(s+2)} + \widetilde\varepsilon(\sigma,r,V)
\right),
\end{align*}
for some constant $C>0$ independent on $\sigma=(N,M,K)$.
\end{cor}
\subsection{Theoretical background}\label{sec:theor}
Let us now turn our attention to the theoretical foundation of the eigenvalue formulation~\eqref{EVPN}, used in place of~\eqref{eq:EVP}, which rests on the following result, originally presented in~\cite[Theorem 11.1]{Gustafson2011-wd}.
\begin{thm}
\label{thm:isospF}
Let $H$ be an operator on a Hilbert space and $P$ and $P^\perp$, a pair of
projections such that $P+P^\perp = 1$. Assume $H^\perp:=P^\perp H P^\perp$ is invertible
on ${\rm{Ran\, }} P^\perp$ and
the expression
\begin{equation} \label{Fesh}
F_{P} (H ) \ := \ P (H -
H R^\perp H) P ,
\end{equation}
where $R^\perp:=P^\perp (H^\perp)^{-1} P^\perp$, defines a bounded operator. Then $F_{P}$, considered as a map on the space of operators, is \textit{isospectral} in the following sense:
\begin{itemize}
\item[(a)] $ \lambda \in \sigma(H ) \qquad \Longleftrightarrow \qquad 0 \in \sigma(F_{P}
(H - \lambda))$;
\item[(b)]
$H\psi = \lambda \psi \qquad \Longleftrightarrow\qquad
F_{P} (H - \lambda) \, \varphi = 0;$
\item[(c)]
$\dim Null (H - \lambda) = \dim Null F_{P} (H -
\lambda)$.
\end{itemize}
Moreover, $\psi$ and $\varphi$ in (b) are related as $\varphi= P\psi$ and $\psi=Q_{P}({\lambda})\varphi$, where
\[
Q_{P}({\lambda}) := P - P^\perp \, (H^\perp- \lambda)^{-1}P^\perp H P.
\]
Finally, if $H$ is self-adjoint, then so is $F_{P} (H )$.
\end{thm}
The map $F_{P}$, acting on the space of operators, is called the {\it Feshbach--Schur map}. The relation $\psi=Q_{P}({\lambda})\varphi$ allows us to reconstruct the full eigenfunction from the projected one.
By statement~(a), we have\begin{cor}\label{cor:nuFP} Let $\nu_i({\lambda})$ denote the
$i$-th eigenvalue
of the operator $F_{P} (H - \lambda)+ \lambda \mathbf{1}$
for each ${\lambda}$ in an interval $I\subset \mathbb R$.
Then the eigenvalues of $H$ in $I$ are in one-to-one correspondence with the solutions of the equations \[\nu_i({\lambda})={\lambda}.\]
\end{cor}
In the current setting of planewave approximations,
$P=\Pr_{\!M}$, $P^\perp = \Pr_{\!M}^\perp$, Proposition \ref{prop:UN-well-defined} implies
that the results of Theorem~\ref{thm:isospF} apply for each choice of $M\in\mathbb N$ and yield\begin{align}
\label{FS-HNlam}
F_{\Pr_{\!M}} (\mathcal H - \lambda)= \mathcal{H}_{\! M}({\lambda})- \lambda\Pr_{\!M},
\end{align}
where we introduced the notation
\begin{align}
\label{HNlam-def}
\mathcal{H}_{\! M}({\lambda}):= \mathcal{H}_{\! M} + U_{\! M}(\lam).
\end{align}
Note that $\mathcal{H}_{\! M}({\lambda})$ is exactly the operator entering \eqref{EVPN}. Thus, we have the following.
\begin{cor}
\label{cor:isospFN}
Let ${\lambda} \in \mathbb C$ with
$\operatorname{Re}{\lambda} < \ka_M$. Then
\begin{itemize}
\item[(a)]
$\mathcal H \psi = \lambda \psi \quad \Longleftrightarrow\quad
(\mathcal{H}_{\! M}({\lambda}) - \lambda) \, \varphi_M = 0;$
\item[(b)]
$\dim Null (\mathcal H - \lambda) = \dim Null (\mathcal{H}_{\! M}({\lambda}) - \lambda)$.
\item[(c)] $\psi$ and $\varphi$ in (a) are related as $\varphi_M= \Pr_{\!M}\psi$ and $\psi={\sf Q}_{\!M}({\lambda})\varphi_M$, where
\begin{align}
\label{eq:QM}
{\sf Q}_{\!M}({\lambda})
&=
\mathbf{1} - (\mathcal{H}_{\! M}^\perp -\lambda)^{-1} \Pr_{\!M}^\perp V \Pr_{\!M}.
\end{align}
i.e. the corresponding eigenfunction can be reconstructed from $\varphi_M$ by an explicit linear map.
\end{itemize}
\end{cor}
This result shows that the original infinite-dimensional spectral problem~\eqref{eq:EVP} is equivalent to the finite dimensional spectral problem~\eqref{EVPN} which is nonlinear in the spectral parameter ${\lambda}$.
We now state a few properties of the effective interaction $U_{\! M}({\lambda})$, in order to characterize the solutions of the fixed-point problems $\nu_i({\lambda}) = {\lambda}$.
\begin{prop}
\label{prop:UN-prop}
For ${\lambda}\in \mathbb R$ such that
${\lambda} < \ka_M$, %
$U_{\! M}({\lambda})$ is
(i) non-positive,
(ii) monotonically decreasing with $\lam$, (iii) vanishing as ${\lambda}\rightarrow-\infty$. For ${\lambda} \in \mathbb C$ such that $\operatorname{Re}{\lambda} < \ka_M$, $U_{\! M}({\lambda})$ is (iv) complex analytic in ${\lambda}$ and (v)
symmetric.
\end{prop}
\begin{proof}
Properties (i)-(iv) follow directly from definition \eqref{UNlam-def}
and Lemma \ref{lem:Hperp-low-bnd} above.
For the last one, we use that $\mathcal H$ is self-adjoint.
\end{proof}
\begin{prop}
\label{prop:nu-fp-sol}
Denote by $\nu_{M i}({\lambda})$ the $i$-th eigenvalue of $\mathcal{H}_{\! M}({\lambda})$. Then the equation $\nu_{M i}({\lambda})={\lambda}$ has a unique solution on the interval $(-\infty,\ka_M)$.
\end{prop}
\begin{proof} Since, for ${\lambda} < \ka_M$, $U_{\! M}({\lambda})$ is symmetric, the operator $\mathcal{H}_{\! M}({\lambda})$ defined by \eqref{HNlam-def} is (a) self-adjoint, (b) monotonically decreasing with $\lam$, (c) converging to $\mathcal{H}_{\! M} $ as ${\lambda}\rightarrow-\infty$, (d) is complex analytic in ${\lambda}$ for $\operatorname{Re}{\lambda} < \ka_M$.
We deduce from (b) that the functions $\nu_{M i}$ are decreasing on $(-\infty,\ka_M)$ and thus, if the $i$-th eigenvalue of $\mathcal H$ is less than $\ka_M$, the equation $\nu_{M i}({\lambda})={\lambda}$ has a unique solution.
\end{proof}
Note also that $\lim_{{\lambda} \rightarrow -\infty} \nu_{M i}({\lambda})$ is the $i$-th eigenvalue of $\mathcal{H}_{\! M}$ which is larger than the $i$-th eigenvalue of $\mathcal H$ due to the variational principle.
These considerations motivate the numerical strategies to compute solutions to~\eqref{EVPNMK} in the following section.
\subsection{Numerical strategy}
In order to find solutions to the non-linear eigenvalue problem~\eqref{EVPNMK}, we propose two strategies:
\textit{Strategy 1:}
For a fixed index $i=1,\ldots,M$, consider the sequence of iterates ${\lambda}_{\sigma }^{(k)}$ obtained by
\begin{align}
\label{EVPNMKk}
{\lambda}_{\sigma }^{(k)}: \text{ is the $i$-th eigenvalue of }\mathcal{H}_{\! \sigma}(\lambda_{\sigma}^{(k-1)}).
\end{align}
We thus introduce the notation $\nu_{\sigma i}({\lambda})$ denoting the $i$-th eigenvalue (counting multiplicities) of the Hamiltonian $\mathcal{H}_{\! \sigma}({\lambda})$ and thus have ${\lambda}_{\sigma }^{(k)}= \nu_{\sigma i}(\lambda_{\sigma}^{(k-1)})$.
The limit value ${\lambda}_{\sigma }:=\lim_{k\rightarrow \infty}{\lambda}_{\sigma }^{(k)}$ then satisfies ${\lambda}_{\sigma } = \nu_{\sigma i}(\lambda_{\sigma})$ and thus~\eqref{EVPNMK}.
\textit{Strategy 2:}
For a given target value $\lambda_{\sf t} \in \mathbb R$, consider the sequence of iterates ${\lambda}_{\sigma }^{(k)}$ obtained by
\begin{align}
\label{EVPNMKk-2}
{\lambda}_{\sigma }^{(k)}: \text{ is the eigenvalue of }\mathcal{H}_{\! \sigma}(\lambda_{\sigma}^{(k-1)})\text{ closest to }\lambda_{\sf t}.
\end{align}
We thus introduce the notation $\nu_{\sigma {\sf t}}({\lambda})$ denoting the eigenvalue of the Hamiltonian $\mathcal{H}_{\! \sigma}({\lambda})$ closest to $\lambda_{\sf t}$ and thus have ${\lambda}_{\sigma }^{(k)}= \nu_{\sigma {\sf t}}(\lambda_{\sigma}^{(k-1)})$.
The limit value ${\lambda}_{\sigma }:=\lim_{k\rightarrow \infty}{\lambda}_{\sigma }^{(k)}$ then satisfies ${\lambda}_{\sigma } = \nu_{\sigma {\sf t}}(\lambda_{\sigma})$ and thus~\eqref{EVPNMK}.
\medskip
In both cases, as outlined in the upcoming Remark~\ref{rem:ConvFP}, convergence of the fixed-point maps~\eqref{EVPNMKk} and \eqref{EVPNMKk-2} can be guaranteed under some conditions and for $M$ large enough.
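For illustration, Strategy 1 reduces to a scalar self-consistent loop around the assembly of $\mathcal{H}_{\! \sigma}({\lambda})$, cf.~\eqref{EVPNMKk}; the helper \texttt{H\_sigma} below refers to the hypothetical 1D routine sketched after Eq.~\eqref{Qsig}, not to library code:
\begin{verbatim}
import numpy as np

def strategy_1(V_hat, M, N, K, i=0, lam=0.0, tol=1e-10, maxit=100):
    # Fixed-point iteration lam^(k) = nu_{sigma i}(lam^(k-1)) of Strategy 1.
    for _ in range(maxit):
        Hmat, _ = H_sigma(V_hat, M, N, K, lam)
        lam_new = np.linalg.eigvalsh(Hmat)[i]  # i-th eigenvalue of H_sigma(lam)
        if abs(lam_new - lam) < tol:
            return lam_new
        lam = lam_new
    raise RuntimeError("fixed-point iteration did not converge")

# Strategy 2 only changes the selection rule:
#   lam_new = evals[np.argmin(np.abs(evals - lam_target))]
\end{verbatim}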
\section{Perturbation estimates}\label{sec:pert-est}
In this article, we often deal with the following eigenvalue perturbation problem:
Given an operator $H$ on a Hilbert space $X$ of the form
\begin{equation}
\label{eq:pertform}
H = H_0 + W,
\end{equation}
where $H_0$ is an operator with some isolated eigenvalues and $W$ is small in an appropriate norm, show that $H$ has eigenvalues near those of $H_0$ and estimate these eigenvalues and the corresponding eigenvectors.
We therefore start by presenting an abstract theory which will be applied to our concrete problem in the following sections.
Specifically, we assume that $H$ and $H_0$ are self-adjoint and bounded from below and that $W$ is
{\it $\alpha$-form-bounded} w.r.t. $H_0$, in the sense that for $\alpha\in\mathbb R$ such that $H_0+\alpha$ is a positive operator ($H_0 +\alpha> 0$), we have
\begin{align} \label{H0a-norm}
\| W \|_{\! H_0,\alpha} := \|(H_0+\alpha)^{-1/2} W (H_0+\alpha)^{-1/2}\|<\infty,
\end{align}
where
$(H_0+\alpha)^{-s}$, $s>0$, is defined either by the spectral theorem or, for $0<s<1$, by the explicit formula \[(H_0+\alpha)^{-s}:=c_s \int_0^\infty (H_0+\alpha+\om)^{-1} d\om/\om^s,\] where $c_s:=[\int_0^\infty (1+\om)^{-1} d\om/\om^s]^{-1}=\sin(\pi s)/\pi$.
This notion is equivalent to relative form-boundedness, but it provides an important quantification of the latter.
We also note here that, by a known result about relatively form-bounded operators (see e.g.~\cite{RSII, HS}), if $H_0$ is a self-adjoint, bounded below operator on $X$ and $W$ is symmetric and $\alpha$-form-bounded w.r.t. $H_0$, then $H = H_0 + W$ is self-adjoint.
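For $0<s<1$, the integral representation above can be verified on scalars, which is a convenient sanity check; the snippet below is purely illustrative (it relies on \texttt{scipy} quadrature):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def frac_power(a, s):
    # a^{-s} = (sin(pi s) / pi) * int_0^inf w^{-s} / (a + w) dw,  0 < s < 1
    val, _ = quad(lambda w: w**(-s) / (a + w), 0.0, np.inf, limit=200)
    return np.sin(np.pi * s) / np.pi * val

a, s = 3.7, 0.5
print(frac_power(a, s), a**(-s))   # both ~ 0.5199
\end{verbatim}
Applied spectrally to $H_0+\alpha$, the same identity yields the operator $(H_0+\alpha)^{-s}$.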
We start with a general result on the eigenvalue difference.
\begin{prop}
\label{prop:eigenvalue-estimate}
Let $H_0$ be a self-adjoint bounded below operator on $X$ and $W$ symmetric and
$\alpha$-form-bounded w.r.t. $H_0$, and let $H = H_0 + W$.
Let $\alpha\in\mathbb R$ be such that $H_0+\alpha>0$.
Then the eigenvalues of $H$ and $H_0$ satisfy the estimate
\begin{align}
\label{lami-lam0-est}
|\nu_{i}(H) - \nu_i(H_0)| \, &\le (\nu_i(H_0) + \alpha) \| W \|_{\! H_0,\alpha},
\end{align}
where $\nu_{ i}(A)$ denotes the $i$-th eigenvalue of the operator $A$.
\end{prop}
\begin{proof}
Let $u\in X$ be arbitrary and define $v=(H_0+\alpha)^{1/2}u$, noting that $H_0+\alpha >0$.
Then,
\begin{align*}
\langle u, H u \rangle
&=
\langle u, H_{0} u\rangle
+ \langle v, (H_0+\alpha)^{-1/2} W (H_0+\alpha)^{-1/2} v\rangle.
\end{align*}
Note that
\[
\langle v, (H_0+\alpha)^{-1/2} W (H_0+\alpha)^{-1/2} v\rangle
\le \| W \|_{\! H_0,\alpha} \langle v, v \rangle
= \| W \|_{\! H_0,\alpha} \langle u, (H_0+\alpha) u \rangle,
\]
and therefore
\begin{align*}
\langle u, H_0 u\rangle \left(1 - \| W \|_{\!H_0,\alpha}\right)& - \alpha \| u\|^2\| W \|_{\!H_0,\alpha}
\le
\langle u, H u\rangle\\
&\le
\langle u, H_0 u\rangle \left(1 + \| W \|_{\!H_0,\alpha} \right) + \alpha \| u\|^2\| W \|_{\!H_0,\alpha}.
\end{align*}
Using the min-max principle (Courant--Fischer), there holds
\[
\nu_i(H_0) \left(1 - \| W \|_{\!H_0,\alpha}\right) - \alpha\| W \|_{\!H_0,\alpha}
\le
\nu_i(H)
\le
\nu_i(H_0) \left(1 + \| W \|_{\!H_0,\alpha} \right) +\alpha \| W \|_{\!H_0,\alpha},
\]
which leads to the result.
\end{proof}
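This bound is easily probed numerically; the following sketch (our illustration, on a random finite-dimensional sample) verifies \eqref{lami-lam0-est} for all eigenvalues at once:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

n, alpha = 40, 1.0
H0 = np.diag(np.arange(n, dtype=float))      # H0 >= 0, hence H0 + alpha > 0
W = rng.standard_normal((n, n))
W = 0.05 * (W + W.T)                         # small symmetric perturbation

S = np.diag((np.arange(n) + alpha) ** -0.5)  # (H0 + alpha)^{-1/2}
norm_W = np.linalg.norm(S @ W @ S, 2)        # ||W||_{H0, alpha}

nu0 = np.linalg.eigvalsh(H0)
nu  = np.linalg.eigvalsh(H0 + W)
assert np.all(np.abs(nu - nu0) <= (nu0 + alpha) * norm_W + 1e-12)
\end{verbatim}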
Let us now assume that $\lam_{0 }$ is an isolated eigenvalue of $H_0$ of finite multiplicity $m$ and
let $P_0$ be the orthogonal projection onto the span of the eigenfunctions
of $H_0$ corresponding to the eigenvalue $\lam_{0 }$, and let $\Pz^\perp := \mathbf{1} - P_0$.
We further introduce $H_{0,-{\lambda}}:=H_0^\perp-{\lambda} \Pz^\perp$ and thus $H_{0,\alpha}=H_0^\perp+\alpha\Pz^\perp$.
Let $\g_0$ denote the gap of ${\lambda}_0$ to its closest eigenvalue in the remaining spectrum of $H_0$, and introduce the spectral interval $I_0= [{\lambda}_{0 }- \frac12 \g_0, {\lambda}_{0 }+ \frac12 \g_0 ] $.
Our next result gives estimates on the difference of eigenvectors of $H$ and $H_0$, as well as on the difference of their corresponding eigenvalues.
For standard approaches to the spectral perturbation theory, see \cite{Re, Kato1976-hm, RSIV, HS}.
\begin{thm} \label{thm:FS-pert-var}
Let $H_0$ be a self-adjoint bounded below operator on $X$, with the eigenvalue $\lam_{0 }$ as above, and $W$ symmetric and
$\alpha$-form-bounded w.r.t. $H_0$, and let $H = H_0 + W$.
Let $\alpha\in\mathbb R$ be such that $H_0+\alpha>0$.
If $ \| W \|_{\!H_0,\alpha} \le \frac{1}{2} \frac{\g_0}{{\lambda}_0+\alpha}$, then
the self-adjoint operator $H$ has exactly $m$ eigenvalues (counting the multiplicities), denoted by $\mu_i$, in the interval $I_0= [{\lambda}_{0 }- \frac12 \g_0, {\lambda}_{0 }+ \frac12 \g_0 ]$, which satisfy
\begin{equation}
\label{eq:eigenvalue_est}
|\mu_{ i}-{\lambda}_{0 } |
\le
({\lambda}_{0 }+ \alpha) \|W \|_{H_0, \alpha}
\le
\frac12 \g_0.
\end{equation}
Further, if $ \| W \|_{\!H_0,\alpha} \le \frac{1}{4} \frac{\g_0}{\lambda_{\circ}}$, then any normalized eigenfunction, $\psi_i$, of $H$ for the eigenvalue $\mu_i$ satisfies the estimates
\begin{align}
\| H_{0,\alpha}^{1/2} (\psi_{0i} - \psi_i) \|
& \le 4 \frac{\lambda_{\circ}}{\g_0} ({\lambda}_0+\alpha)^{1/2} \| W \|_{\!H_0,\alpha}, \label{eq:est-eigenvectors1}
\\
\| \psi_{0i} - \psi_i \|
& \le 4 \frac{\lambda_{\circ}}{\g_0} \| W \|_{\!H_0,\alpha}, \label{eq:est-eigenvectors2}
\end{align}
where $\lambda_{\circ}={\lambda}_0 + \alpha+\g_0$ and $\psi_{0i}$ is an appropriate eigenfunction of $H_0$ corresponding to the eigenvalue $\lam_0$, namely $\psi_{0i} := P_0 \psi_i$.
\end{thm}
\begin{remark}
We note that similar estimates can be obtained for normalized eigenfunctions $\widetilde\psi_{0i} := P_0 \psi_i/\|P_0 \psi_i\|$ with an additional factor 2 using the estimate
\begin{align*}
\|\widetilde\psi_{0i} - \psi_i \|
&\le \big|1-\|\psi_{0i}\|\big| + \|\psi_{0i} - \psi_i \|
\le \big|\|\psi_i\|-\|\psi_{0i}\|\big| + \|\psi_{0i} - \psi_i \|
\le 2\, \|\psi_{0i} - \psi_i \|.
\end{align*}
\end{remark}
We first develop the following preliminary results.
\begin{lem}
\label{lem:H-lam-alph2}
Let $\alpha\in\mathbb R$ be such that $H_0+\alpha>0$ and $\lambda_{\circ}:={\lambda}_{0 }+\g_{0 }+ \alpha$.
Then, for all ${\lambda}\in I_0^c:=\{z\in \mathbb C: \operatorname{Re} z\in I_0\}$,
there holds $|\mathbf{1} \Pz^\perp - ({\lambda}+\alpha) H_{0,\alpha}^{-1} \Pz^\perp| \ge \frac{\gam_0}{2\lambda_{\circ}}$.
\end{lem}
\begin{proof}
The eigenvalues of $|\mathbf{1} \Pz^\perp - ({\lambda}+\alpha) H_{0,\alpha}^{-1} \Pz^\perp|$ on ${\rm{Ran\, }} \Pz^\perp$ are
\[
\left| 1 - \frac{{\lambda}+\alpha}{\lam_{0i}+\alpha} \right|
=
\left|\frac{\lam_{0i} - {\lambda}}{\lam_{0i}+ \alpha}\right|,
\]
where $\lam_{0i}$ denotes the eigenvalues of $H_0$ and the index $i$ runs over all eigenvalues except those for which $\lam_{0i} = \lam_0$.
For ${\lambda}\in I_0^c$, we write ${\lambda}={\lambda}_r + \mbox{i} {\lambda}_i$, with ${\lambda}_r\in I_0$, ${\lambda}_i\in\mathbb R$.
Since $|x-{\lambda}| \ge |x-{\lambda}_r|$ for any $x\in\mathbb R$, it suffices to study the function
\[
f(x) = \left|\frac{x-{\lambda}}{x+\alpha} \right|, \qquad x\in K_\alpha := [-\alpha,+\infty)
\setminus ({\lambda}_0- \gamma_0,{\lambda}_0+ \gamma_0),
\]
for ${\lambda} \in I_0$ in order to lower bound the eigenvalues.
\noindent
Since
\[
f'(x) = \frac{x-{\lambda}}{|x-{\lambda}|}\cdot\frac{\alpha+{\lambda}}{(x+\alpha)^2},
\]
if $\alpha + \lambda \le 0$,
there holds
$
f'(x) < 0 \ \mbox{for } x>-\alpha
$
so that
\[
\min_{x\in K_\alpha} f(x) \ge 1.
\]
If $\alpha+{\lambda}>0,$
there holds
\[
f'(x) < 0 \ \mbox{for } x<{\lambda},\qquad\qquad
f'(x) > 0 \ \mbox{for } x>{\lambda},
\]
and thus, for ${\lambda}\in I_0$,
\[
\min_{x\in K_\alpha} f(x)
= \min\left( f({\lambda}_0-{\g_0}), f({\lambda}_0+{\g_0}) \right)
= \min \left( \frac{|{\lambda}_0-\g_0-{\lambda}|}{{\lambda}_0-\g_0+\alpha},
\frac{|{\lambda}_0+\g_0-{\lambda}|}{{\lambda}_0+\g_0+\alpha} \right)
\ge \frac12 \,\frac{\g_0}{\lambda_{\circ}},
\]
yielding the result.
\end{proof}
Denote $H^\bot := \Pz^\perp H \Pz^\perp \restriction_{{\rm{Ran\, }} \Pz^\perp}$ and $R^\perp(\lam):= \Pz^\perp (H^\bot-\lambda)^{-1} \Pz^\perp$. We have
\begin{lem}\label{lem:FSM-conds}
Let $\alpha\in\mathbb R$ be such that $H_0+\alpha>0$ and $\lambda_{\circ}:={\lambda}_{0 }+\g_{0 }+ \alpha$.
Let $ I_0^c:=\{z\in \mathbb C: \operatorname{Re} z\in I_0\}$ and assume $\|W\|_{H_{0, \alpha}}\le \frac{1}{4}\frac{\g_0}{\lambda_{\circ}}$.
Then, for $\lam\in I_0^c$, the following statements hold
\begin{itemize}
\item[(a)]
The operator $H^\bot-\lambda$ is invertible on ${\rm{Ran\, }} \Pz^\perp$;
\item[(b)]
The inverse $R^\perp(\lam):=\Pz^\perp(H^\perp- {\lambda})^{-1}\Pz^\perp$ defines a bounded, analytic operator-family;
\item[(c)]
The expression \begin{equation}\label{Ulam}
U(\lambda):=- P_0 H R^\perp(\lam) H P_0
\end{equation}
defines a finite-rank, analytic operator-family and bounded as
\begin{align}\label{U-bnd}
&\|U({\lambda})\|_{H_{0, \alpha}}
\le
4\,[\lambda_{\circ}/\g_0]\|P_0 W \Pz^\perp\|_{H_{0, \alpha}}^2
\le
4\,[\lambda_{\circ}/\g_0]\|W\|_{H_{0, \alpha}}^2.
\end{align}
Further, $U({\lambda})$ is symmetric for any $\lam\in I_0$.
\end{itemize} \end{lem}
\begin{proof} (a) Since $H^\perp$ is self-adjoint, the operator $H^\perp - {\lambda}$ is invertible for any $ {\lambda}\in\mathbb C\backslash\mathbb R$. For ${\lambda}\in I_0$, we argue as follows. With the notation $A^\bot := \Pz^\perp A\Pz^\perp \restriction_{{\rm{Ran\, }} \Pz^\perp}$, we write
\[
H^\perp = H_{0}^\perp + W^\perp.
\]
Now, we write
\begin{equation}
\label{eq:35000}
H^\perp- {\lambda} \Pz^\perp =H_{0,\alpha}^{1/2} [\mathbf{1} \Pz^\perp - ({\lambda}+\alpha) H_{0,\alpha}^{-1} + K_{\lam}] H_{0,\alpha}^{1/2},
\end{equation}
with $K_{\lam} = H_{0,\alpha}^{-1/2} W^\perp H_{0,\alpha}^{-1/2}$.
Lemma~\ref{lem:H-lam-alph2} yields that $|\mathbf{1} \Pz^\perp - ({\lambda}+\alpha) H_{0,\alpha}^{-1} P_0^\perp | \ge \frac{\gam_0}{2\lambda_{\circ}}$ and thus, the operator $\mathbf{1} - ({\lambda}+\alpha) H_{0,\alpha}^{-1} + K_{\lam}$ is invertible as soon as $\| K_{\lambda} \| < \frac{\gam_0}{2\lambda_{\circ}},$ which is in particular the case if $\|W\|_{H_0, \alpha}\le \frac{1}{4}\frac{\g_0}{\lambda_{\circ}}$.
Then, we also have $\left| \mathbf{1} - ({\lambda}+\alpha) H_{0,\alpha}^{-1} + K_{\lam} \right| \ge \frac{1}{4}\frac{\g_0}{\lambda_{\circ}}.$ Hence the operator $H^\perp- {\lambda}$ is a product of three invertible operators and therefore is invertible itself on ${\rm{Ran\, }} \Pz^\perp$.
For (b), since $H^\perp- {\lambda}$ is invertible on ${\rm{Ran\, }} \Pz^\perp$,
the interval $I_0$ is contained in the resolvent set, $\rho(H^\perp|_{{\rm{Ran\, }}\Pz^\perp})$, of $H^\perp|_{{\rm{Ran\, }}\Pz^\perp}$ and therefore, since $H^\perp|_{{\rm{Ran\, }}\Pz^\perp}$ is self-adjoint,
$I_0^c\subset\rho(H^\perp|_{{\rm{Ran\, }}\Pz^\perp})$.
Since
\begin{equation}
\label{resolv-id2}
R^\perp(\lam):=\Pz^\perp(H^\perp- {\lambda})^{-1}\Pz^\perp
\end{equation}
is the resolvent of the operator $H^\perp- {\lambda}$ restricted to ${\rm{Ran\, }}\Pz^\perp$, it is analytic on its resolvent set and in particular on $I_0^c$.
To prove statement (c), we note that the operators $R^\perp(\lam), H P_0$ and $P_0 H= (HP_0)^*$ are bounded and $R^\perp(\lam)$ is symmetric for $\lam\in I_0$. Hence so is $U(\lam)$. The analyticity of $U(\lam)$ follows from the analyticity of $R^\perp(\lam)$.
It is clear that $U({\lambda})$ is of finite rank due to its definition.
Finally, to prove estimate \eqref{U-bnd}, we first show that
the operator $R^\perp(\lam):=\Pz^\perp(H^\perp- {\lambda})^{-1}\Pz^\perp$, ${\lambda}\in I_0^c$, satisfies
\begin{align}
\label{ResPerp-bnd2}
\|H_{0,\alpha}^{1/2} R^\perp(\lam)H_{0,\alpha}^{1/2}\|
\le 4\, \frac{\lambda_{\circ}}{\g_0}.
\end{align}
To this end, we invert \eqref{eq:35000} on ${\rm{Ran\, }}\Pz^\perp$ and use that
$\left| \mathbf{1} - ({\lambda}+\alpha) H_{0,\alpha}^{-1} + K_{\lam} \right| \ge \frac{1}{4}\frac{\g_0}{\lambda_{\circ}}$ to obtain~\eqref{ResPerp-bnd2} for ${\lambda}\in I_0^c$.
Finally, we prove inequality~\eqref{U-bnd}. Since $P_0 H_0 = H_0 P_0$
and $P_0 \Pz^\perp = 0$, we have
\[
P_0 H \Pz^\perp = P_0 W \Pz^\perp, \quad
\Pz^\perp H P_0 = \Pz^\perp W P_0.
\]
These relations and definition \eqref{Ulam} yield
\begin{align}
\label{U-expr} &U(\lam) = - P_0 W R^\perp(\lam) W P_0.
\end{align}
Combining~\eqref{ResPerp-bnd2} and~\eqref{U-expr}, we obtain~\eqref{U-bnd}.
\end{proof}
\begin{remark}
In addition, we have the estimate
\begin{align}\label{U-bnd2} &\|U({\lambda})\|\le 4 \, \frac{\lambda_{\circ}^2}{\g_0}\|P_0 W \Pz^\perp\|_{H_0, \alpha}^2 .
\end{align}
Indeed, since
$H_0 P_0={\lambda}_{0 } P_0,$
we have
$
\label{P-est}\|(H_0+\alpha)^{1/2} P_0\|^2 = {\lambda}_0+\alpha,
$
which implies the estimate
\begin{align}
\label{PAP-est}
&\|P_0 A P_0\| = ({\lambda}_0+\alpha)\|P_0 A P_0\|_{H_{0, \alpha}} ,
\end{align}
which, together with estimate \eqref{U-bnd}, yields \eqref{U-bnd2}.
\end{remark}
Hence, under the conditions of Lemma~\ref{lem:FSM-conds} and for ${\lambda}\in I_0$, the following Hamiltonian is well defined
\begin{equation}
\label{Hlam-1}
H({\lambda}) := P_0 H P_0 + U({\lambda}).
\end{equation}
Note that $P_0 H_0 P_0={\lambda}_0P_0$. Lemma \ref{lem:FSM-conds} above implies
\begin{cor}\label{cor:U-prop}
The operator family
$H({\lambda})$ is
(i) self-adjoint for ${\lambda}\in I_0$ and
(ii) complex analytic in ${\lambda} \in I_0^c$. \end{cor}
In what follows, we label the eigenvalue families $\nu_i({\lambda})$, $i=1,\ldots,m$, of $H({\lambda})$ in the order of their increase and so that
\begin{align}
\label{nui-order}
\nu_1({\lambda}) \le \ldots \le \nu_{m}({\lambda}).
\end{align}
Note that the eigenvalue branches $\nu_i({\lambda})$ can also be of higher multiplicity.
On a subinterval $I_i\subset I_0$, we say that the \emph{branch $\nu_i({\lambda})$ is isolated on $I_i$} if every other branch $\nu_j({\lambda})$, with ${\lambda} \in I_i$, either i) coincides with $\nu_i({\lambda})$ or ii) satisfies
\begin{equation}
\label{eq:BranchCond}
\min_{{\lambda}\in I_i} |\nu_i({\lambda}) - \nu_j({\lambda})|\ge \g_i >0.
\end{equation}
\begin{figure}[t!]
\centering
\includegraphics[trim = 0mm 140mm 50mm 0mm, clip,width=0.45\textwidth]{Zoom.pdf}
\includegraphics[trim = 0mm 140mm 50mm 0mm, clip,width=0.45\textwidth]{Zoom_multi.pdf}
\caption{(Left) Schematic illustration of the eigenvalues $\nu_i({\lambda})$ of $H({\lambda})$ in the neighborhood of ${\lambda}_0$ for the case of $m=3$.
(Right) Illustration of the spectrum of $H_0$ consisting of five eigenvalues ${\lambda}_{0,1}\ldots,{\lambda}_{0,5}$ of multiplicity $m_{0,1}=1$, $m_{0,2}=2$, $m_{0,3}=4$, $m_{0,4}=2$, $m_{0,5}=1$ and the corresponding situation when zooming in close to ${\lambda}_0 = {\lambda}_{0,i}$.
}
\label{fig:zoom}
\end{figure}
Further, we have the following result.
\begin{prop}\label{prop:H-evs}
Let $\alpha\in\mathbb R$ be such that $H_0+\alpha>0$ and let
$I_i\subset I_0$ be such that the branch $\nu_i({\lambda})$ is isolated on $I_i$.
For ${\lambda}\in I_i$,
(i) the eigenvalues $\nu_i({\lambda})$ of $H({\lambda})$ are continuously differentiable;
(ii) the derivative $\nu_i'({\lambda})$ is non-positive;
(iii) the solutions to the equations $\nu_i({\lambda}) = {\lambda}$ are unique if ${\lambda}\in I_i$;
(iv) if $\|W\|_{H_{0, \alpha}}\le \frac{1}{4}\frac{\g_0}{\lambda_{\circ}}$, the derivatives $\nu_i'({\lambda})$, ${\lambda}\in I_0':= [{\lambda}_{0 }- \frac14 \g_0, {\lambda}_{0 }+ \frac14 \g_0 ]\cap I_i $,
are bounded as
\[
|\nu_i'({\lambda})| \le 16\, \frac{({\lambda}_0 +\alpha)\lambda_{\circ}}{\g_0^2} \|P_0 W \Pz^\perp\|_{H_0, \alpha}^2,
\]
where $\lambda_{\circ}:={\lambda}_{0 }+\g_{0 }+ \alpha$.
\end{prop}
\begin{proof}
We first prove (i) for a simple eigenvalue ${\lambda}_0$, i.e., $m=1$.
In such a case, $P_0$ is a rank-one projector on the space spanned by the eigenvector $\varphi_0$ of $H_0$ corresponding to the eigenvalue ${\lambda}_0$ and therefore Eq. \eqref{Hlam-1} implies that $H(\lam) =\nu_1 ({\lambda}) P_0$, with
\begin{align}\label{FS-sim}
&\nu_1 ({\lambda}): = \langle \varphi_{0}, H(\lam) \varphi_{0}\rangle.
\end{align}
This and Corollary \ref{cor:U-prop} show that the eigenvalue $\nu_1 ({\lambda})$ is analytic.
We now prove (i) in the general case.
First, we claim the following well-known formula
\begin{align}
\label{nu-deriv}{\nu}_i'({\lambda})=\langle \chi_i({\lambda}), U'({\lambda})\chi_i({\lambda})\rangle,
\end{align}
for ${\lambda}\in I_i$, where $\chi_i({\lambda})$ are well-chosen normalized eigenfunctions of $H({\lambda})$ corresponding to the eigenvalue $\nu_i({\lambda})$, chosen in particular to be differentiable in ${\lambda}$.
To this end, we observe that for each $\mu\in I_i$, we can find a local neighborhood $I_\mu\subset I_i$ of $\mu$ such that
\begin{equation}
\label{eq:no-intersection}
\bigcup_{j\neq i}\{\nu_j({\lambda}) \,|\, {\lambda}\in I_\mu \}\cap
\{ \nu_i({\lambda}) \,|\, {\lambda}\in I_\mu \} = \emptyset,
\end{equation}
due to the isolated branch property, i.e., $\g_i>0$ in~\eqref{eq:BranchCond}.
Second, since $H({\lambda})$ is self-adjoint for ${\lambda}\in I_0$ and
analytic (say, in the resolvent sense) in ${\lambda}\in I_0^c$, the Riesz projection, corresponding to the eigenvalue~$\nu_i({\lambda})$:
\begin{equation}
\label{RieszProj}
P_i({\lambda})
:=
\frac{1}{2 \pi i} \oint_{\Gamma_{i}(\mu)} (H({\lambda})-z)^{-1} dz,
\end{equation}
where $\Gamma_{i}(\mu)$ is a
closed curve in the resolvent set of $H({\lambda})$ surrounding the eigenvalue branch $\{\nu_i({\lambda}): {\lambda}\in I_\mu\}$, is also self-adjoint and analytic in ${\lambda}\in I_\mu^c$ and therefore in ${\lambda}\in I_0^c$ (see \cite{RSIV,HS}).
Condition~\eqref{eq:no-intersection} guarantees that such a closed curve can be chosen so that it encloses no other points of $\sigma(H(\mu))$ for $\mu\in I_\mu$;
combining the neighborhoods of all $\mu \in I_i$, it follows that $P_i({\lambda})$ is analytic in $I_i$. From~\cite[Theorem XII.12]{RSIV}, there exists an analytic family of unitary operators $V_i({\lambda})$ such that $P_i({\lambda}) = V_i({\lambda}) P_i({\lambda}_0) [V_i({\lambda})]^{-1}$, ${\lambda}_0$ being possibly replaced by some arbitrary $\mu \in I_i$ if ${\lambda}_0$ does not belong to $I_i$.
We then define
\[
\chi_i({\lambda})
=
V_i({\lambda}) \psi_{0 i},
\]
where $\psi_{0 i}$ is an eigenvector of $H_0$ corresponding to the eigenvalue ${\lambda}_0$.
Since $V_i({\lambda})$ is analytic, $\chi_i({\lambda})$ is also analytic in $I_i$, so in particular differentiable, and one can easily check that $\chi_i({\lambda})$ is of norm 1 and that $P_i({\lambda}) \chi_i({\lambda}) = \chi_i({\lambda})$, which guarantees that $ \chi_i({\lambda})$ is a normalized eigenfunction of $H({\lambda})$.
Now, we use that
\begin{align*}
\langle \chi_i'({\lambda}), H({\lambda})\chi_i({\lambda})\rangle+\langle \chi_i({\lambda}), H({\lambda})\chi_i'({\lambda})\rangle
&=
{\nu}_i({\lambda})(\langle \chi_i'({\lambda}), \chi_i({\lambda})\rangle+\langle \chi_i({\lambda}), \chi_i'({\lambda})\rangle)\\
&=
\langle \chi_i({\lambda}), \chi_i({\lambda})\rangle'=0
\end{align*}
to obtain ${\nu}_i'({\lambda})=\langle \chi_i({\lambda}), H'({\lambda})\chi_i({\lambda})\rangle$, which gives \eqref{nu-deriv}.
The differentiability of $\chi_i({\lambda})$ and the analyticity of $H({\lambda})$ then imply the differentiability of $\nu_i$ in each neighborhood of ${\lambda}$.
In order to prove (ii), note that $U'({\lambda})\le 0$, as follows by the explicit formula
\begin{align}
\label{Ulam-deriv'}
U'(\lam)=-P_0 W\Pz^\perp (H^\perp- {\lambda})^{-2} \Pz^\perp WP_0\le 0.
\end{align}
Hence, ${\nu}_i'({\lambda})\le 0$ by~\eqref{nu-deriv}.
The monotonicity of $\nu_i({\lambda})$ also implies the well-posedness of the equations $\nu_i({\lambda}) ={\lambda}$ under the condition that ${\lambda}\in I_i$, thus statement (iii).
We now aim to prove (iv).
Starting from~\eqref{nu-deriv}, we estimate ${\nu}_i'({\lambda})$ with
\begin{align}
\label{nu'-est1-1_v1}
|\nu_i'({\lambda})| \le \|(H_0+ \alpha)^{1/2} P_0\|^2\|U'({\lambda})\|_{H_0, \alpha}.
\end{align}
The first factor on the right-hand side is known exactly:
\begin{equation}
\label{estim-H0a_v1}
\|(H_0+ \alpha)^{1/2} P_0\|^2 = ({\lambda}_0 +\alpha).
\end{equation}
To investigate the second factor on the r.h.s.\ of \eqref{nu'-est1-1_v1}, we use the analyticity of $U({\lambda})$ and the estimate \eqref{U-bnd}. Indeed, by the Cauchy integral formula, we have \[\|U'({\lambda})\|_{H_0, \alpha}\le \frac{1}{2\pi R}\sup_{\substack{\mu \in \mathbb C, \\ |\mu-{\lambda}|=R}} \|U(\mu)\|_{H_0, \alpha},\] where $R$ is such that $\{\mu\in \mathbb C:|\mu-{\lambda}|\le R\}\subset I_0^c$.
Taking $R=\frac14 \g_0$ gives, under the conditions of Lemma \ref{lem:FSM-conds}, the estimate
\begin{align}
\label{U'-bnd}
&\|U'({\lambda})\|_{H_0, \alpha}\le
\frac{8}{\pi} \frac{\lambda_{\circ}}{\g_0^2} \|P_0 W \Pz^\perp\|_{H_0, \alpha}^2.
\end{align}
Combining equations \eqref{nu'-est1-1_v1}, \eqref{estim-H0a_v1} and \eqref{U'-bnd} shows (iv).
\end{proof}
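As a quick numerical illustration of the Hellmann--Feynman-type formula \eqref{nu-deriv} (in the form ${\nu}_i'({\lambda})=\langle \chi_i({\lambda}), H'({\lambda})\chi_i({\lambda})\rangle$), the following Python sketch compares this expression with a central finite difference of an eigenvalue branch; the smooth symmetric matrix family below is a generic stand-in for $H({\lambda})$, and all names are purely illustrative.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
def sym(n):
    M = rng.standard_normal((n, n))
    return (M + M.T) / 2

n = 6
A, B, C = sym(n), sym(n), sym(n)
H  = lambda lam: A + lam * B + lam**2 * C   # smooth symmetric family
dH = lambda lam: B + 2 * lam * C            # H'(lam)

lam, h = 0.3, 1e-6
i = 0                                       # lowest branch, assumed simple
chi = np.linalg.eigh(H(lam))[1][:, i]       # normalized eigenvector
hf = chi @ dH(lam) @ chi                    # Hellmann-Feynman value
fd = (np.linalg.eigh(H(lam + h))[0][i]
      - np.linalg.eigh(H(lam - h))[0][i]) / (2 * h)
print(abs(hf - fd))                         # agreement up to O(h^2)
\end{verbatim}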
\begin{cor}
\label{cor:ContractionH}
Let $\alpha\in\mathbb R$ be such that $H_0+\alpha>0$ and let $I_i\subset I_0$ be such that the branch $\nu_i({\lambda})$ is isolated on $I_i$.
Under the condition that
\[
\frac{8}{\pi} \frac{({\lambda}_0 +\alpha)\lambda_{\circ}}{\g_0^2} \|P_0 W \Pz^\perp\|_{H_0, \alpha}^2 < 1,
\]
and that the unique solution ${\lambda}$ of $\nu_i({\lambda})={\lambda}$ satisfies ${\lambda}\in I_0'$, the fixed-point iteration ${\lambda}^{(k+1)} = \nu_i({\lambda}^{(k)})$ converges to ${\lambda}$ for initial values in $I_0'$.
\end{cor}
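In practice, Corollary~\ref{cor:ContractionH} translates into an elementary scalar iteration. The following minimal Python sketch illustrates it; here \texttt{nu\_i} is a placeholder for any routine evaluating ${\lambda}\mapsto\nu_i({\lambda})$, and the tolerance is an arbitrary choice.
\begin{verbatim}
def solve_branch(nu_i, lam0, tol=1e-12, maxit=100):
    """Fixed-point iteration lam_{k+1} = nu_i(lam_k); converges to
    the unique solution of nu_i(lam) = lam whenever nu_i is a
    contraction near lam0 (the condition of the corollary)."""
    lam = lam0
    for _ in range(maxit):
        lam_new = nu_i(lam)
        if abs(lam_new - lam) < tol:
            return lam_new
        lam = lam_new
    raise RuntimeError("fixed-point iteration did not converge")
\end{verbatim}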
Now, we proceed directly to the proof of Theorem \ref{thm:FS-pert-var}.
\begin{proof}[Proof of Theorem \ref{thm:FS-pert-var}]
For the estimate on the eigenvalues, we first remark that applying Proposition~\ref{prop:eigenvalue-estimate} to the $m$ eigenvalues associated with the eigenvalue ${\lambda}_0$ of $H_0$ provides the first inequality in~\eqref{eq:eigenvalue_est}.
The second inequality follows immediately from the condition $ \| W \|_{\!H_0,\alpha} \le \frac{1}{2} \frac{\g_0}{{\lambda}_0+\alpha}$, and thus $\mu_i\in I_0$.
The fact that the operator $H$ has exactly $m$ eigenvalues (counting multiplicities) in $I_0$ follows from Corollary \ref{cor:nuFP}, Proposition \ref{prop:H-evs}(iii) and the fact that $H({\lambda})$ is an $m\times m$ symmetric matrix.
For the estimates on the eigenfunctions, recall from Theorem~\ref{thm:isospF} that $Q_0 (\mu_i)\psi_{0i}=\psi_{i}$, where $\mu_i$ denotes the corresponding eigenvalue of $H$ and the operator $Q_0 ({\lambda})$ is given by
\begin{align} \label{Q}
Q_0 ({\lambda})
&:=
\mathbf{1} - R^\perp(\lam) \Pz^\perp W P_0,
\end{align}
with $R^\perp(\lam)$ defined in~\eqref{resolv-id2}.
This yields
\begin{align}
\psi_{ 0i } - \psi_{i}=\psi_{ 0i } - Q_0(\mu_{i}) \psi_{ 0i}=R^\perp(\mu_{i}) \Pz^\perp W P_0 \psi_{ 0i }.
\end{align}
Then, for $\gamma\in \{0,1/2\}$,
\begin{align*}
\| H_{0,\alpha}^\gamma(\psi_{ 0i } - \psi_{i}) \|
\le \; & \| H_{0,\alpha}^\gamma R^\perp(\mu_{i}) \Pz^\perp W P_0\| \\
\le \; & \| H_{0,\alpha}^\gamma H_{0,\alpha}^{-1/2} \Pz^\perp \| \| H_{0,\alpha}^{1/2} R^\perp(\mu_{i}) H_{0,\alpha}^{1/2} \|
\| \Pz^\perp W P_0\|_{H_0,\alpha}
\| P_0 H_{0,\alpha}^{1/2}\|.
\end{align*}
In the previous expression, we can use~\eqref{ResPerp-bnd2} to estimate $\| H_{0,\alpha}^{1/2} R^\perp(\mu_{i}) H_{0,\alpha}^{1/2} \|$. Then, we note that
\[
\| \Pz^\perp W P_0\|_{H_0,\alpha}
\le \|W \|_{H_0,\alpha},
\]
as well as
\[
\| P_0 H_{0,\alpha}^{1/2}\| = ({\lambda}_0 + \alpha)^{1/2}.
\]
Finally, in the case $\gamma = 0$,
\[
\| H_{0,\alpha}^{-1/2} \Pz^\perp \| \le ({\lambda}_0 + \alpha)^{-1/2},
\]
and for $\gamma = 1/2$,
\[
\| H_{0,\alpha}^\gamma H_{0,\alpha}^{-1/2} \Pz^\perp \| \le 1.
\]
Combining the four bounds leads to~\eqref{eq:est-eigenvectors1} and~\eqref{eq:est-eigenvectors2}.
\end{proof}
\begin{remark}
Note that by Theorem~\ref{thm:isospF}, any solution $\mu_i$ to the equation $\nu_i(\mu_i)=\mu_i$, for $i=1,\ldots,m$ and where the $\mu_i$ are in ascending order, is an eigenvalue of $H$.
Under the condition
\begin{equation}
\label{eq:WcondI}
\|W\|_{H_0, \alpha} \le \frac12 \frac{\g_0}{{\lambda}_0 + \alpha},
\end{equation}
Theorem~\ref{thm:FS-pert-var} guarantees that these solutions satisfy $|\mu_i-{\lambda}_0|\le \frac{\g_0}{2}$ and thus $\mu_i \in I_0$.
Conversely, the $\mu_i$ are the only $m$ eigenvalues of $H$ belonging to $I_0$ provided a condition similar to~\eqref{eq:WcondI} holds for the eigenvalue of $H_0$ immediately above ${\lambda}_0$; for the eigenvalue of $H_0$ immediately below ${\lambda}_0$, such a condition is automatically satisfied by \eqref{eq:WcondI}.
\end{remark}
\section{Preliminary results} \label{sec:prelim-res}
We now derive a few preliminary results that will be useful for proving Theorem~\ref{thm:main}.
For the proofs below, we define the following quantities:
$h_{\lambda}:=-\Delta-{\lambda}$, $V_M^\perp = \Pr_{\!M}^\perp V \Pr_{\!M}^\perp$, $V_{\! M}^N = \Pr_{\!M}^{\!N} V \Pr_{\!M}^{\!N}$.
\begin{lem}
\label{lem:Vperp-bnd}
For ${\lambda}\in \mathbb C$ with $\operatorname{Re}{\lambda} < \frac12 \rho_M$ and $\rho_M \ge 1$, the following bounds hold
\begin{align}\label{q-bnd}
\|h_{\lambda}^{-1/2}V_{\! M}^N h_{\lambda}^{-1/2}\| \le \|h_{\lambda}^{-1/2}V_M^\perp h_{\lambda}^{-1/2}\| \le 4 \rho_M^{-r}\Enorm{V}.
\end{align}
\end{lem}
\begin{proof}
First, we note that
\[
\|h_{\lambda}^{-1/2}V_{\! M}^N h_{\lambda}^{-1/2}\| = \|P_N h_{\lambda}^{-1/2} V_M^\perp h_{\lambda}^{-1/2} P_N\| \le \| h_{\lambda}^{-1/2}
V_M^\perp
h_{\lambda}^{-1/2} \|.
\]
Then, for any $s\ge 0$, using the assumption $\operatorname{Re}{\lambda}\le \frac12\rho_M$, we estimate
\begin{align}\label{hal/hlam-est}
&\|h_{-1}^s h_{\lam}^{-s} \Pr_{\!M}^\perp\|
\le \frac{|\rho_M +1|^s }{|\rho_M - {\lambda}|^s}
\le 2^s \; ( 1 + \rho_M^{-1})^s \le 4^s.
\end{align}
Applied with $s=1/2$ on both sides of $V_M^\perp$, this implies in particular that
$\|h_{\lam}^{-1/2}V_M^\perp h_{\lam}^{-1/2}\|
\le 4 \, \EnormZ{V_M^\perp}$. The result follows noting that $\EnormZ{V_M^\perp} \le \rho_M^{-r}\Enorm{V}.$
\end{proof}
\begin{lem}
\label{lem:UNK-bnd}
For ${\lambda}\in\mathbb C$ with $\operatorname{Re}{\lambda}\le \min(\frac12\rho_M, \ka_M)$ and $\rho_M \ge 1$,
the following bound holds
\begin{align}
\label{UNMK-bnd}
\|U_{\! \sigma}({\lambda})\|_{r} \le
& \;
4 \rho_M^{-r} \Enorm{V}^2 \sum_{k=0}^K \left[4 \rho_M^{-r}\Enorm{V} \right]^k.
\end{align}
Moreover, if $4 \rho_M^{-r}\Enorm{V} < 1$,
\begin{align}
\label{UNMK-bnd2}
\|U_{\! \sigma}({\lambda})\|_{r} \le
& \;
\rho_M^{-r} \frac{4\,\Enorm{V}^2}{1 - 4 \rho_M^{-r}\Enorm{V}},
\end{align}
and in particular
\begin{align}
\label{UM-bnd}
\|U_{\! M}({\lambda})\|_{r} & \le \rho_M^{-r} \frac{4\,\Enorm{V}^2}{1 - 4 \rho_M^{-r}\Enorm{V}}.
\end{align}
\end{lem}
\begin{proof}
By definition
\eqref{UNMKlam}, we can write $U_\sigma({\lambda})$, which is well-defined for ${\lambda} < \ka_M$, as
\begin{align}
U_{\! \sigma}({\lambda})
= - \sum_{k=0}^K
\Pr_{\!M} V \Pr_{\!M}^{\!N} h_{{\lambda}}^{-1/2} \Big[ - h_{{\lambda}}^{-1/2} V_{\! M}^N h_{{\lambda}}^{-1/2} \Big]^k h_{{\lambda}}^{-1/2} \Pr_{\!M}^{\!N} V \Pr_{\!M}.
\end{align}
Using estimate \eqref{q-bnd}, there holds
\[
\| U_{\! \sigma}({\lambda}) \|_r
\le \sum_{k=0}^K \left[4 \rho_M^{-r}\Enorm{V} \right]^k
\| h_{-1}^{-1/2 +r/2} \Pr_{\!M} V \Pr_{\!M}^{\!N} h_{{\lambda}}^{-1/2} \| \| h_{{\lambda}}^{-1/2} \Pr_{\!M}^{\!N} V \Pr_{\!M} h_{-1}^{-1/2 +r/2}\|,
\]
and by \eqref{hal/hlam-est}, $\|h_{\lam}^{-1/2}\Pr_{\!M}^N V \Pr_{\!M} h_{-1}^{-1/2+r/2}\| \le 2 \rho_M^{-r/2}\|V\|_{r}$, so that we obtain~\eqref{UNMK-bnd}. The bound~\eqref{UNMK-bnd2} is easily obtained from~\eqref{UNMK-bnd} by summing the geometric series; taking $K,N=\infty$ in \eqref{UNMK-bnd2}, we arrive at \eqref{UM-bnd}.
\end{proof}
\begin{lem}
\label{lem:UN-bnd}
For ${\lambda} < \frac12 \ka_M$, $\rho_M \ge 1$ and if $4 \rho_M^{-r}\Enorm{V} < 1$, the following bound holds
\begin{align}
\label{UM'-est}
\|U_{\! M}'({\lambda})\|_{r } & \le \frac{1}{\pi (\ka_M - 2{\lambda})} \rho_M^{-r} \frac{4\,\Enorm{V}^2}{1 - 4 \rho_M^{-r}\Enorm{V}} .
\end{align}
\end{lem}
\begin{proof}
Since from Proposition \ref{prop:UN-prop}(iv), $U_{\! M}({\lambda})$ is complex analytic in ${\lambda}$ for $\operatorname{Re}{\lambda} < \ka_M$, by the Cauchy integral formula, we have
\[
\|U_M'({\lambda})\|_{r}\le \frac{1}{2\pi R_M({\lambda})}\sup_{\substack{\mu \in \mathbb C, \\ |\mu-{\lambda}|=R_M({\lambda})}} \|U_M(\mu)\|_{r},
\]
with $R_M({\lambda}) = \frac12\ka_M - {\lambda} >0 $. Using~\eqref{UM-bnd} and noting that $\frac12 \ka_M\le \frac12\rho_M$, we obtain~\eqref{UM'-est}.
\end{proof}
\begin{lem}
\label{lem:UN-UNK-bnd}
For ${\lambda} < \min(\ka_M,\frac12\rho_M)$, $\rho_N \ge \rho_M >1$ and if $4 \rho_M^{-r}\Enorm{V} < 1$ and $4 \rho_N^{-r}\Enorm{V} + \frac{16 \rho_M^{-2r}\Enorm{V}^2}{1 - 4 \rho_M^{-r} \Enorm{V}}<1$,
the following bound holds
\begin{align}
\label{UM-Us-bnd}
& \|U_{\! M}({\lambda})-U_{\! \sigma}({\lambda})\|_{r}\notag\\ & \le \;
\frac{4 \rho_N^{-r} \Enorm{V}^2}{1 - 4 \rho_N^{-r}\Enorm{V} - \frac{16 \rho_M^{-2r}\Enorm{V}^2}{1 - 4 \rho_M^{-r} \Enorm{V}}} \left[ 1 + \frac{4 \rho_M^{-r} \Enorm{V}}{1 - 4 \rho_M^{-r} \Enorm{V}} \right]^2
+
\frac{4 \rho_M^{-r} \,\Enorm{V}^2}{1 - 4 \rho_M^{-r} \Enorm{V}} \left[4 \rho_M^{-r}\Enorm{V} \right]^{K+1}.
\end{align}
\end{lem}
\begin{proof}
We first write $U_{\! M}({\lambda})-U_{\! \sigma}({\lambda})$ as
\begin{align}
\label{UN-Us-expr}
&U_{\! M}({\lambda})-U_{\! \sigma}({\lambda})
=
(U_{\! M}({\lambda})-U_{\! MN}({\lambda})) + (U_{\! MN}({\lambda})-U_{\! \sigma}({\lambda})),
\end{align}
where, using the notation $\mathcal{H}_{\! M}^N = \Pr_{\!M}^{\!N}\mathcal H\Pr_{\!M}^{\!N}$
\begin{align*}
U_{\! MN}({\lambda})
:=&
-\Pr_{\!M} V\Pr_{\!M}^{\!N} (\mathcal{H}_{\! M}^N - {\lambda})^{-1} \Pr_{\!M}^{\!N} V\Pr_{\!M}
\\
=&
-\Pr_{\!M} V\Pr_{\!M}^{\!N} (-\Delta + V_{\! M}^N - {\lambda})^{-1} \Pr_{\!M}^{\!N} V\Pr_{\!M}.
\end{align*}
Since $U_{\! M}({\lambda}) = -\Pr_{\!M} V\Pr_{\!M}^\perp (\mathcal{H}_{\! M}^\perp- {\lambda})^{-1} \Pr_{\!M}^\perp V\Pr_{\!M}$, with $\mathcal{H}_{\! M}^\perp = \Pr_{\!M}^\perp \mathcal H \Pr_{\!M}^\perp$,
the first term $U_{\! M}({\lambda})-U_{\! MN}({\lambda})$ is estimated as follows.
Denoting ${\mathcal{H}}_N^\perp = \Pr_{\!N}^\perp {\mathcal{H}} \Pr_{\!N}^\perp$ and introducing the Schur complement
\[
A = \Pr_{\!N}^\perp \left( ({\mathcal{H}}_N^\perp - {\lambda} \Pr_{\!N}^\perp ) - \Pr_{\!N}^\perp V \Pr_{\!M}^{\!N} ({\mathcal{H}}_M^\perp - {\lambda})^{-1} \Pr_{\!M}^{\!N} V \Pr_{\!N}^\perp \right)^{-1} \Pr_{\!N}^\perp,
\]
there holds, by block matrix inversion,
\begin{align}
\Pr_{\!M}^\perp (\mathcal{H}_{\! M}^\perp- {\lambda})^{-1} \Pr_{\!M}^\perp =&
\Pr_{\!M}^{\!N} (\mathcal{H}_{\! M}^N- {\lambda})^{-1} \Pr_{\!M}^{\!N} + \Pr_{\!M}^{\!N} (\mathcal{H}_{\! M}^N- {\lambda})^{-1} \Pr_{\!M}^{\!N} V A V \Pr_{\!M}^{\!N} (\mathcal{H}_{\! M}^N- {\lambda})^{-1} \Pr_{\!M}^{\!N} \nonumber \\
& - \Pr_{\!M}^{\!N} (\mathcal{H}_{\! M}^N- {\lambda})^{-1} \Pr_{\!M}^{\!N} V A - A V \Pr_{\!M}^{\!N} (\mathcal{H}_{\! M}^N- {\lambda})^{-1} \Pr_{\!M}^{\!N} + A.
\label{eq:matrix_inversion}
\end{align}
Therefore, $U_{\! M}({\lambda})-U_{\! MN}({\lambda})$ can be decomposed into four terms as
\begin{align*}
U_{\! M}({\lambda})-U_{\! MN}({\lambda}) = & -\Pr_{\!M} V \Pr_{\!M}^{\!N} (\mathcal{H}_{\! M}^N- {\lambda})^{-1} \Pr_{\!M}^{\!N} V A V \Pr_{\!M}^{\!N} (\mathcal{H}_{\! M}^N- {\lambda})^{-1} \Pr_{\!M}^{\!N} V\Pr_{\!M}, \\
& + \Pr_{\!M} V \Pr_{\!M}^{\!N} (\mathcal{H}_{\! M}^N- {\lambda})^{-1} \Pr_{\!M}^{\!N} V A V\Pr_{\!M}, \\
& + \Pr_{\!M} V A V \Pr_{\!M}^{\!N} (\mathcal{H}_{\! M}^N- {\lambda})^{-1} \Pr_{\!M}^{\!N} V\Pr_{\!M}, \\
& -\Pr_{\!M} V A V\Pr_{\!M}.
\end{align*}
Then, the $r$-norm can be estimated as
\begin{align*}
\|U_{\! M}({\lambda})-U_{\! MN}({\lambda})\|_r \le \; &
\Enorm{V}^2 \Big[ \| h_{-1}^{1/2-r/2} \Pr_{\!M}^{\!N} (\mathcal{H}_{\! M}^N- {\lambda})^{-1} \Pr_{\!M}^{\!N} V A V \Pr_{\!M}^{\!N} (\mathcal{H}_{\! M}^N- {\lambda})^{-1} \Pr_{\!M}^{\!N}
h_{-1}^{1/2-r/2} \| \\
&
+ \| h_{-1}^{1/2-r/2} \Pr_{\!M}^{\!N} (\mathcal{H}_{\! M}^N- {\lambda})^{-1} \Pr_{\!M}^{\!N} V A
h_{-1}^{1/2-r/2} \| \\
& + \| h_{-1}^{1/2-r/2} A V \Pr_{\!M}^{\!N} (\mathcal{H}_{\! M}^N- {\lambda})^{-1} \Pr_{\!M}^{\!N}
h_{-1}^{1/2-r/2} \|
\\
&
+
\| h_{-1}^{1/2-r/2} A
h_{-1}^{1/2-r/2} \|
\Big].
\end{align*}
Introducing appropriate $ h_{-1}^{1/2-r/2}$ and $h_{-1}^{-1/2+r/2}$ terms, we obtain
\begin{align*}
\|U_{\! M}({\lambda})-U_{\! MN}({\lambda})\|_r \le \; &
\Enorm{V}^2 \Big[ \| h_{-1}^{1/2-r/2} \Pr_{\!M}^{\!N} (\mathcal{H}_{\! M}^N- {\lambda})^{-1} \Pr_{\!M}^{\!N} h_{-1}^{1/2-r/2} \|^2 \Enorm{V}^2 \| h_{-1}^{1/2-r/2} A h_{-1}^{1/2-r/2} \| \\
&
+ 2 \, \| h_{-1}^{1/2-r/2} \Pr_{\!M}^{\!N} (\mathcal{H}_{\! M}^N- {\lambda})^{-1} \Pr_{\!M}^{\!N} h_{-1}^{1/2-r/2} \| \Enorm{V} \| h_{-1}^{1/2-r/2} A
h_{-1}^{1/2-r/2} \| \\
&
+
\| h_{-1}^{1/2-r/2} A
h_{-1}^{1/2-r/2} \|
\Big] \\
\le \; &
\| h_{-1}^{1/2-r/2} A
h_{-1}^{1/2-r/2} \| \Enorm{V}^2 \\
& \times \Big[ 1 + \| h_{-1}^{1/2-r/2} \Pr_{\!M}^{\!N} (\mathcal{H}_{\! M}^N- {\lambda})^{-1} \Pr_{\!M}^{\!N} h_{-1}^{1/2-r/2} \| \Enorm{V} \Big]^2.
\end{align*}
We are therefore left with the estimation of $ \| h_{-1}^{1/2-r/2} \Pr_{\!M}^{\!N} (\mathcal{H}_{\! M}^N- {\lambda})^{-1} \Pr_{\!M}^{\!N} h_{-1}^{1/2-r/2} \|$ and $\| h_{-1}^{1/2-r/2} A
h_{-1}^{1/2-r/2} \|$.
First, noting from~\eqref{q-bnd} that
\begin{align}
\label{eq:HMperpbound_new}
\| h_{\lambda}^{1/2} \Pr_{\!M}^{\!N} (\mathcal{H}_{\! M}^\perp- {\lambda})^{-1} \Pr_{\!M}^{\!N}
h_{\lambda}^{1/2} \|
& \le \| (I + h_{{\lambda}}^{-1/2} V_M^\perp h_{{\lambda}}^{-1/2} )^{-1} \|
\le \frac{1}{1 - 4 \rho_M^{-r} \Enorm{V}},
\end{align}
and using that
\begin{align}
\label{eq:AuxLem7}
\| h_{-1}^{1/2-r/2} h_\lambda^{{-1/2}} P_M^N \| \le 2 \rho_M^{-r/2},
\end{align}
we obtain
\begin{equation}
\| h_{-1}^{1/2-r/2} \Pr_{\!M}^{\!N} (\mathcal{H}_{\! M}^N- {\lambda})^{-1} \Pr_{\!M}^{\!N} h_{-1}^{1/2-r/2} \| \le
\frac{4 \rho_M^{-r}}{1 - 4 \rho_M^{-r} \Enorm{V}}.
\end{equation}
Second,
\begin{align*}
\| h_{-1}^{1/2-r/2} A
h_{-1}^{1/2-r/2} \| = \| h_{-1}^{1/2-r/2} \Pr_{\!N}^\perp \left[ ({\mathcal{H}}_N^\perp - {\lambda} \Pr_{\!N}^\perp ) - \Pr_{\!N}^\perp V \Pr_{\!M}^{\!N} ({\mathcal{H}}_M^\perp - {\lambda})^{-1} \Pr_{\!M}^{\!N} V \Pr_{\!N}^\perp \right]^{-1} \Pr_{\!N}^\perp
h_{-1}^{1/2-r/2} \|.
\end{align*}
Noting that
\begin{align*}
\| h_{-1}^{1/2-r/2} h_\lambda^{{-1/2}} P_N^\perp \| \le 2 \rho_N^{-r/2},
\end{align*}
there holds
\[
\| h_{-1}^{1/2-r/2} A
h_{-1}^{1/2-r/2} \| \le 4 \rho_N^{-r} \| h_\lambda^{{1/2}} \Pr_{\!N}^\perp \left[ ({\mathcal{H}}_N^\perp - {\lambda} \Pr_{\!N}^\perp ) - \Pr_{\!N}^\perp V \Pr_{\!M}^{\!N} ({\mathcal{H}}_M^\perp - {\lambda})^{-1} \Pr_{\!M}^{\!N} V \Pr_{\!N}^\perp \right]^{-1} \Pr_{\!N}^\perp
h_\lambda^{{1/2}} \|.
\]
Factorizing $h_\lambda$, we deduce
\[
\| h_{-1}^{1/2-r/2} A
h_{-1}^{1/2-r/2} \| \le 4 \rho_N^{-r} \big\| \big[ I + h_\lambda^{-1/2} V_N^\perp h_\lambda^{-1/2} - h_\lambda^{-1/2} \Pr_{\!N}^\perp V \Pr_{\!M}^{\!N} ({\mathcal{H}}_M^\perp - {\lambda})^{-1} \Pr_{\!M}^{\!N} V \Pr_{\!N}^\perp h_\lambda^{-1/2} \big]^{-1} \big\|.
\]
From~\eqref{q-bnd} with $N$ in place of $M$, we obtain
\begin{equation}
\label{eq:temp_3003}
\| h_\lambda^{-1/2} V_N^\perp h_\lambda^{-1/2}\| \le 4 \rho_N^{-r}\Enorm{V}.
\end{equation}
Moreover,
\[
\| h_\lambda^{-1/2} \Pr_{\!N}^\perp V \Pr_{\!M}^{\!N} ({\mathcal{H}}_M^\perp - {\lambda})^{-1} \Pr_{\!M}^{\!N} V \Pr_{\!N}^\perp h_\lambda^{-1/2} \|
\le
\| h_\lambda^{-1/2} \Pr_{\!N}^\perp V \Pr_{\!M}^{\!N} h_\lambda^{-1/2} \|^2
\| h_\lambda^{1/2} \Pr_{\!M}^{\!N} ({\mathcal{H}}_M^\perp - {\lambda})^{-1} \Pr_{\!M}^{\!N} h_\lambda^{1/2} \|,
\]
which, from~\eqref{q-bnd} and~\eqref{eq:HMperpbound_new}, leads to
\[
\| h_\lambda^{-1/2} \Pr_{\!N}^\perp V \Pr_{\!M}^{\!N} ({\mathcal{H}}_M^\perp - {\lambda})^{-1} \Pr_{\!M}^{\!N} V \Pr_{\!N}^\perp h_\lambda^{-1/2} \| \le \frac{16 \rho_M^{-2r}\Enorm{V}^2}{1 - 4 \rho_M^{-r} \Enorm{V}}.
\]
Combining this last line with~\eqref{eq:temp_3003}, we obtain the bound
\[
\| h_{-1}^{1/2-r/2} A
h_{-1}^{1/2-r/2} \| \le 4 \rho_N^{-r} \frac{1}{1 - 4 \rho_N^{-r}\Enorm{V} - \frac{16 \rho_M^{-2r}\Enorm{V}^2}{1 - 4 \rho_M^{-r} \Enorm{V}}}.
\]
This leads to the following bound for the difference $U_{\! M}({\lambda})-U_{\! MN}({\lambda})$
\begin{equation}
\label{eq:UM-UMN}
\|U_{\! M}({\lambda})-U_{\! MN}({\lambda})\|_r \le
\rho_N^{-r} \frac{4 \Enorm{V}^2}{1 - 4 \rho_N^{-r}\Enorm{V} - \frac{16 \rho_M^{-2r}\Enorm{V}^2}{1 - 4 \rho_M^{-r} \Enorm{V}}} \left[ 1 + \frac{4 \rho_M^{-r} \Enorm{V}}{1 - 4 \rho_M^{-r} \Enorm{V}} \right]^2.
\end{equation}
For the second term on the right-hand side of~\eqref{UN-Us-expr},
we write $U_{\! MN}({\lambda})-U_{\! \sigma}({\lambda})$ as
\begin{align}
\label{eq:AuxLem7-2}
U_{\! MN}({\lambda})-U_{\! \sigma}({\lambda})
= - \sum_{k=K+1}^\infty
\Pr_{\!M} V \Pr_{\!M}^{\!N} h_{{\lambda}}^{-1/2} \Big[ - h_{{\lambda}}^{-1/2} V_{\! M}^N h_{{\lambda}}^{-1/2} \Big]^k h_{{\lambda}}^{-1/2} \Pr_{\!M}^{\!N} V \Pr_{\!M}.
\end{align}
Using~\eqref{q-bnd} and~\eqref{hal/hlam-est}, we obtain
\begin{align*}
\| U_{\! MN}({\lambda})-U_{\! \sigma}({\lambda}) \|_r
\le & \Enorm{V}^2 \; \|h_{-1}^{1/2-r/2} h_{\lambda}^{-1/2} P_M^\perp\|^2 \sum_{k=K+1}^\infty
\| h_{{\lambda}}^{-1/2} V_{\! M}^N h_{{\lambda}}^{-1/2} \|^k \\
\le & 4 \, \Enorm{V}^2 \; \rho_M^{-r} \sum_{k=K+1}^\infty
\left[4 \rho_M^{-r}\Enorm{V} \right]^k,
\end{align*}
from which we deduce that
\begin{equation}
\label{eq:UMN-Us}
\| U_{\! MN}({\lambda})-U_{\! \sigma}({\lambda}) \|_r
\le \frac{4\, \Enorm{V}^2}{1 - 4 \rho_M^{-r}\Enorm{V}} \; \rho_M^{-r}
\left[4 \rho_M^{-r}\Enorm{V} \right]^{K+1}.
\end{equation}
Combining~\eqref{eq:UM-UMN} and~\eqref{eq:UMN-Us}, we obtain~\eqref{UM-Us-bnd}.
\end{proof}
\begin{lem}
\label{lem:UNdist}
For ${\lambda} < \mu < \frac12\min(\ka_M, \rho_M)$, $\rho_M \ge 1$ and if $4 \rho_M^{-r}\Enorm{V} < 1$,
there holds
\begin{align}
\label{eq:UNdist}
\|U_{\! M}(\mu) - U_{\! M}({\lambda})\|_r
&
\le |\mu-{\lambda}| \frac{\rho_M^{-r}}{\pi (\ka_M - 2\mu)} \frac{4\,\Enorm{V}^2}{1 - 4 \rho_M^{-r}\Enorm{V}}.
\end{align}
\end{lem}
\begin{proof}[Proof of Lemma~\ref{lem:UNdist}]
Writing
\[
U_{\! M}(\mu) - U_{\! M}({\lambda})
=
\int_{{\lambda}}^{\mu} U_{\! M}'(s)\, ds,
\]
yields
\[
\|U_{\! M}(\mu) - U_{\! M}({\lambda})\|_r
\le |\mu-{\lambda}| \max_{s\in [{\lambda},\mu]} \| U_{\! M}'(s) \|_r.
\]
We conclude by applying~\eqref{UM'-est} of Lemma~\ref{lem:UN-bnd} and noting that its right-hand side, as a function of $s$, is maximal at $s=\mu$.
\end{proof}
\section{Proof of the main results}
\label{sec:main-proof}
The goal of this section is to provide the proof for Theorem~\ref{thm:main}.
\subsection{Proof of Theorem~\ref{thm:main}}
We first prove the following technical lemmas which will be useful later.
\begin{lem}
\label{lem:HW}
For $H,W$ such that $H=-\Delta+W$ and for $\alpha\ge (2\,\|W\|_r)^{1/r}$ such that $H+\alpha>0$, there holds
\[
\| A \|_{H,\alpha}
\le
2 \, \| A \|_{-\Delta,\alpha}
\le
2 \alpha^{-r} \| A \|_{r}.
\]
\end{lem}
\begin{proof}
First, denoting again $h_{-\alpha}:=-\Delta+\alpha$,
note that
\[
\| A \|_{H,\alpha}
\le
\| A \|_{-\Delta,\alpha} \|h_{-\alpha}^{1/2}(H+\alpha)^{-1/2}\|^2.
\]
Let $v$ be arbitrary and define $u=(H+\alpha)^{-1/2}v$. Note that
\begin{align*}
\|h_{-\alpha}^{1/2}u\|^2
&= \langle u, h_{-\alpha} u \rangle
= \langle u, (H+\alpha-W)u \rangle
= \|v\|^2 - \langle u,W u \rangle.
\end{align*}
Further, there holds
\[
|\langle u,W u \rangle| \le \|W\|_{-\Delta,\alpha} \|h_{-\alpha}^{1/2}u\|^2,
\]
and using the inequality $\|W\|_{-\Delta,\alpha} \le \alpha^{-r} \|W\|_{r}$, we obtain $\|W\|_{-\Delta,\alpha}\le \frac12$ for all $\alpha \ge (2\,\|W\|_r)^{1/r}$. Hence $\|h_{-\alpha}^{1/2}u\|^2 \le \|v\|^2 + \frac12\|h_{-\alpha}^{1/2}u\|^2$, that is, $\|h_{-\alpha}^{1/2}(H+\alpha)^{-1/2}\|^2\le 2$, yielding the first inequality.
The second inequality results from using, once again, the inequality $\|A\|_{-\Delta,\alpha} \le \alpha^{-r} \|A\|_{r}$.
\end{proof}
Before starting the proof of Theorem~\ref{thm:main}, we analyze the relation of the spectra of ${\mathcal{H}}$ and $\mathcal{H}_{\! M}({\lambda})$.
\begin{lem}
\label{lem:HN-gap-bs}
Let ${\lambda}_i<{\lambda}_j$ be the $i$-th resp.\ $j$-th eigenvalue of ${\mathcal{H}}$ and let $\nu_{Mi}({\lambda})$ denote the $i$-th eigenvalue of the operator $\mathcal{H}_{\! M}({\lambda})$.
Then, we have the following $M$-independent lower bound on the spectral gaps of $\mathcal{H}_{\! M}$:
\begin{align}
\label{gap-bnd-bs}
{\lambda}_j-{\lambda}_i\le \min(\nu_{Mj}({\lambda}_i)-\nu_{Mi}({\lambda}_i),\nu_{Mj}({\lambda}_j)-\nu_{Mi}({\lambda}_j)).
\end{align}
\end{lem}
\begin{proof}
Since ${\lambda}_i=\nu_{Mi}({\lambda}_i)$ and ${\lambda}_j=\nu_{Mj}({\lambda}_j)$, we have ${\lambda}_j-{\lambda}_i=\nu_{Mj}({\lambda}_j)-\nu_{Mi}({\lambda}_i)$.
By definition \eqref{HNlam-def} and Proposition \ref{prop:UN-prop}, the family $\mathcal{H}_{\! M}({\lambda})$ is monotonically decreasing in $\lam$. Hence $\nu_{Mi}({\lambda}_j)\le \nu_{Mi}({\lambda}_i)$ and $\nu_{Mj}({\lambda}_j)\le \nu_{Mj}({\lambda}_i)$, so that ${\lambda}_j-{\lambda}_i=\nu_{Mj}({\lambda}_j)-\nu_{Mi}({\lambda}_i)$ implies \eqref{gap-bnd-bs}.
\end{proof}
\begin{cor}
\label{cor:HN-gap}
Let ${\lambda}_i$ be an isolated eigenvalue of $H$. Then, the gap $\g_{Mi}$ of ${\lambda}_i$ to the rest of the spectrum of $\mathcal{H}_{\! M}({\lambda}_i)$ is bounded below by the gap $\g_0$ of ${\lambda}_i$ to the rest of the spectrum of $H$.
\end{cor}
\begin{lem}
\label{lem:PertEstim}
Let $\alpha>0$ be such that
\begin{equation}
\label{eq:Acond}
\mathcal{H}_{\! M}(\lambda_\star)+\alpha>0,
\qquad\mbox{and}\qquad
\alpha>(2\,\|U_{\! M}(\lambda_\star)-U_{\! \sigma}(\lambda_{\sigma i})\|_r)^{1/r},
\end{equation}
$M$ large enough such that $\lambda_\star, \lambda_{\sigma i} < \frac12\min(\ka_M, \rho_M)$, $\rho_M \ge 1$ and $4\rho_M^{-r}\Enorm{V}\le \frac13$, and $N \ge M$.
Then, there holds
\begin{equation}
\label{eq:PertEstim}
\|U_{\! M}(\lambda_\star)-U_{\! \sigma}(\lambda_{\sigma i}) \|_{\mathcal{H}_{\! M}(\lambda_\star),\alpha}
\le
|\lambda_{\sigma i}-\lambda_\star|
\frac{16\, \alpha^{-r}\rho_M^{-r}\Enorm{V}^2}{\pi (\ka_M - 2\max(\lambda_\star,\lambda_{\sigma i}))}
+
36 \, \frac{\|V\|_r^2}{\alpha^{r}}\,\varepsilon(\sigma,r,V)
\end{equation}
with, using the notation $\sigma=(N,M,K)$,
\[
\varepsilon(\sigma,r,V)
:=
\rho_N^{-r} + \rho_M^{-r}
\left[ 4 \rho_M^{-r}\Enorm{V} \right]^{K+1}.
\]
\end{lem}
\begin{proof}
Applying Lemma~\ref{lem:HW} to the present case, we obtain
\begin{align*}
\|U_{\! M}(\lambda_\star)-U_{\! \sigma}(\lambda_{\sigma i})\|_{\mathcal{H}_{\! M}(\lambda_\star),\alpha}
&\le
2 \alpha^{-r} \|U_{\! M}(\lambda_\star)-U_{\! \sigma}(\lambda_{\sigma i})\|_{r}.
\end{align*}
Employing the triangle inequality, there holds
\begin{equation}
\label{eq:ProofThm1TriangleIneq}
\|U_{\! M}(\lambda_\star)-U_{\! \sigma}(\lambda_{\sigma i})\|_{r}
\le
\|U_{\! M}(\lambda_\star)-U_{\! M}(\lambda_{\sigma i})\|_{r}
+
\|U_{\! M}(\lambda_{\sigma i})-U_{\! \sigma}(\lambda_{\sigma i})\|_{r}.
\end{equation}
For the first term of the right-hand side, Lemma~\ref{lem:UNdist} yields
\begin{align*}
\|U_{\! M}(\lambda_{\sigma i}) - U_{\! M}(\lambda_\star)\|_r
&\le
|\lambda_{\sigma i}-\lambda_\star|
\frac{8\rho_M^{-r}\Enorm{V}^2}{\pi (\ka_M - 2\max(\lambda_\star,\lambda_{\sigma i}))} .
\end{align*}
Finally, using Lemma~\ref{lem:UN-UNK-bnd} to bound the second term of~\eqref{eq:ProofThm1TriangleIneq}, we conclude that
\[
\|U_{\! M}(\lambda_{\sigma i})-U_{\! \sigma}(\lambda_{\sigma i})\|_{r}
\le
\,\Enorm{V}^2 \left[
{18} \rho_N^{-r} + 6 \rho_M^{-r}
\left[4 \rho_M^{-r}\Enorm{V} \right]^{K+1}
\right].
\]
Combining all estimates concludes the proof.
\end{proof}
Finally, we are now ready to prove Theorem~\ref{thm:main}.
\begin{proof}[Proof of Theorem~\ref{thm:main}]
Note that $\lambda_\star=\nu_{M i}(\lambda_\star)$ and $\lambda_{\sigma i} = \nu_{\sigma i}(\lambda_{\sigma i})$, where $\nu_{M i}({\lambda})$ and $\nu_{\sigma i}({\lambda})$ represent the $i$-th eigenvalue of $\mathcal{H}_{\! M}({\lambda})$ and of $\mathcal{H}_{\! \sigma}(\lam)$, respectively.
We choose $\alpha>0$ in order to satisfy the condition~\eqref{eq:Acond} of Lemma~\ref{lem:PertEstim}.
By the expression~\eqref{eq:PertEstim} of Lemma~\ref{lem:PertEstim} we can choose $\widetilde M_{0}\in \mathbb N $ such that
\begin{equation*}
\|U_{\! M}(\lambda_\star)-U_{\! \sigma}(\lambda_{\sigma i}) \|_{\mathcal{H}_{\! M}(\lambda_\star),\alpha}
\le
\frac14 \frac{\g_0}{\lambda_{\circ}},
\end{equation*}
for any $M\ge \widetilde M_{0}$ and where $\g_0$ denotes the gap of $\lambda_\star$ to the rest of the spectrum of $\mathcal H$, which is a lower bound of $\g_{Mi}$ by Corollary~\ref{cor:HN-gap}, and $\lambda_{\circ} = \lambda_\star+\g_0+\alpha$.
In consequence, we can apply Theorem~\ref{thm:FS-pert-var} with
\begin{align}
\label{eq:ApplThm3}
H_0&=\mathcal{H}_{\! M}(\lambda_\star), \qquad
W=U_{\! \sigma}(\lambda_{\sigma i})-U_{\! M}(\lambda_\star), \qquad
{\lambda}_0 = \lambda_\star,
\end{align}
and thus $H=\mathcal{H}_{\! \sigma}(\lambda_{\sigma i})$, yielding
\begin{align}
\label{eq:EvalEstimPr1-1}
|\lambda_\star - \lambda_{\sigma i} |
&=
|\nu_{M i}(\lambda_\star) - \nu_{\sigma i}(\lambda_{\sigma i})|
\le
(\lambda_\star + \alpha) \|U_{\! M}(\lambda_\star)-U_{\! \sigma}(\lambda_{\sigma i}) \|_{\mathcal{H}_{\! M}(\lambda_\star),\alpha}
\\&\le
\label{eq:EvalEstimPr1-2}
(\lambda_\star + \alpha)
\left[
|\lambda_{\sigma i}-\lambda_\star|
\frac{16 \,\alpha^{-r}\rho_M^{-r}\Enorm{V}^2}{\pi (\ka_M - 2\max(\lambda_\star,\lambda_{\sigma i}))}
+
{ 54}\,\frac{\|V\|_r^2}{\alpha^{r}}\,\varepsilon(\sigma,r,V)
\right].
\end{align}
There exists $M_0\in\mathbb N$, with $M_0\ge \widetilde M_{0}$, such that
\begin{align}
\label{as:boundby12}
4\rho_M^{-r}\Enorm{V}\le \frac13
\qquad\mbox{and}\qquad
(\lambda_\star + \alpha)\frac{16\, \alpha^{-r}\rho_M^{-r}\Enorm{V}^2}{\pi (\ka_M - 2\max(\lambda_\star,\lambda_{\sigma i}))}
\le \frac12,
\end{align}
for all $M\ge M_0$
and thus
\begin{equation}
\label{eq:EVestimPr}
|\lambda_\star - \lambda_{\sigma i} |
\lesssim (\lambda_\star + \alpha)
\frac{\|V\|_r^2}{\alpha^{r}} \, \varepsilon(\sigma,r,V).
\end{equation}
A similar development for the eigenfunctions based on the estimates~\eqref{eq:est-eigenvectors1}--\eqref{eq:est-eigenvectors2} of Theorem~\ref{thm:FS-pert-var} can be applied.
Indeed, we denote by $\varphi_{\sigma i}$ the $i$-th normalized eigenfunction of $\mathcal{H}_{\! \sigma}(\lambda_{\sigma i})$, thus, in terms of the notation used in Theorem~\ref{thm:FS-pert-var}, we have $\psi_i=\varphi_{\sigma i}$.
Then, the corresponding eigenfunction of the coarse problem is given by $\varphi_{Mi} = {\sf P}_0 \varphi_{\sigma i}$, where ${\sf P}_0$ denotes the projector onto the span of all eigenfunctions of $\mathcal{H}_{\! M}(\lambda_\star)$ corresponding to $\lambda_\star$.
Thus, $\varphi_{Mi}$ is an eigenfunction of $\mathcal{H}_{\! M}(\lambda_\star)$ associated to the eigenvalue $\lambda_\star$.
Taking, again, Corollary~\ref{cor:HN-gap} into account yields
\begin{align}
\| \varphi_{Mi} - \varphi_{\sigma i} \|
& \le 4 \frac{\lambda_{\circ}}{\g_0} \| U_{\! M}(\lambda_\star)-U_{\! \sigma}(\lambda_{\sigma i}) \|_{\!\mathcal{H}_{\! M}(\lambda_\star),\alpha}. \label{eq:est-eigenvectors2-2}
\end{align}
Using the bound on $\|U_{\! M}(\lambda_\star)-U_{\! \sigma}(\lambda_{\sigma i}) \|_{\!\mathcal{H}_{\! M}(\lambda_\star),\alpha}$ from expression~\eqref{eq:PertEstim} of Lemma~\ref{lem:PertEstim} and combining with~\eqref{eq:EVestimPr} yields the auxiliary result (see~\eqref{eq:est-eigenvectors1-0}): for $s\ge 0$ and for all $M\ge M_0$,
\begin{align}
\| (-\Delta+1)^s (\varphi_{Mi} - \varphi_{\sigma i}) \|
\lesssim
\rho_M^s \| \varphi_{Mi} - \varphi_{\sigma i} \|
& \lesssim \frac{\lambda_{\circ}}{\g_0}\frac{\|V\|_r^2}{\alpha^{r}}\rho_M^s\varepsilon(\sigma,r,V) .
\label{eq:est-eigenvectors2-3}
\end{align}
\medskip
Now we define the eigenfunction $\varphi_i := {\sf Q}_{\!M}(\lambda_\star) \varphi_{Mi}$ where ${\sf Q}_{\!M}$ is defined by~\eqref{eq:QM}.
Following Corollary~\ref{cor:isospFN}, $\varphi_i$ is an eigenfunction of $\mathcal H$ associated to the eigenvalue $\lambda_\star$.
Note that
\begin{align*}
\varphi_i - Q_\sigma(\lambda_{\sigma i}) \varphi_{\sigma i}
=
\varphi_{Mi} - \varphi_{\sigma i}
- \big[ S_M(\lambda_\star) \varphi_{Mi} - S_\sigma(\lambda_{\sigma i})\varphi_{\sigma i} \big],
\end{align*}
with $S_M({\lambda}):=\Pr_{\!M}^\perp(\mathcal{H}_{\! M}^\perp-{\lambda})^{-1}\Pr_{\!M}^\perp V \Pr_{\!M}$, $S_\sigma({\lambda}):=R_\sigma({\lambda}) \Pr_{\!M}^{\!N} V \Pr_{\!M}$.
Applying the triangle inequality several times yields
\begin{equation}
\label{eq:Qansatz}
\|\varphi_i - Q_\sigma(\lambda_{\sigma i}) \varphi_{\sigma i}\|
\le
(1+I_1)\|\varphi_{Mi} - \varphi_{\sigma i}\|
+ I_2 + I_3,
\end{equation}
with
\begin{align*}
I_1 &=
\| S_M(\lambda_\star) \| ,
\qquad
I_2 =
\| S_M(\lambda_\star) - S_M(\lambda_{\sigma i}) \|,
\qquad
I_3 =
\| S_M(\lambda_{\sigma i}) - S_\sigma(\lambda_{\sigma i}) \|.
\end{align*}
For $I_1$, we proceed as in the proof of Lemma~\ref{lem:UNK-bnd},
writing
\[
S_M({\lambda})
= \sum_{k=0}^\infty
\Pr_{\!M}^\perp h_{{\lambda}}^{-1/2} \Big[ - h_{{\lambda}}^{-1/2} V_M^\perp h_{{\lambda}}^{-1/2} \Big]^k h_{{\lambda}}^{-1/2} \Pr_{\!M}^\perp V \Pr_{\!M}.
\]
Under the assumptions ${\lambda}\le \min(\frac12\rho_M, \ka_M)$, $\rho_M\ge 2$ and $4\rho_M^{-r}\Enorm{V}\le \frac12$, using the estimates of Lemma~\ref{lem:Vperp-bnd} together with $\| \Pr_{\!M}^\perp h_{{\lambda}}^{-1/2} \|\le \sqrt{2} \rho_M^{-1/2}$ and $\| \Pr_{\!M} h_{-1}^s \|\le (\rho_M+1)^s \le (3\rho_M/2)^s$ for $s\ge 0$, one obtains
\begin{align}
\label{eq:SMestim1}
\| S_M({\lambda}) \|
& \le
2 \, \| \Pr_{\!M}^\perp h_{{\lambda}}^{-1/2} \| \,
\| \Pr_{\!M}^\perp h_{{\lambda}}^{-1/2} \Pr_{\!M}^\perp V \Pr_{\!M} h_{-1}^{-1/2+r/2} \|
\, \| \Pr_{\!M} h_{-1}^{1/2-r/2} \|
\\
\label{eq:SMestim2}
&\le
2 \sqrt{2} (3/2)^{1/2-r/2} \rho_M^{-1/2} \rho_M^{-r/2} \|V\|_r \rho_M^{1/2-r/2}
\le 12 \rho_M^{-r} \|V\|_r.
\end{align}
For $I_{2}$, we proceed as in Lemma~\ref{lem:UNdist}, based on the results of Lemma~\ref{lem:UN-bnd} but with $U_M({\lambda})$ replaced by $S_M({\lambda})$, in order to obtain
\begin{equation}
\label{eq:I2}
I_{2}
\le
\frac{|\lambda_\star - \lambda_{\sigma i}|}{\pi (\kappa_M - 2\max(\lambda_\star,\lambda_{\sigma i}))} 12 \rho_M^{-r} \|V\|_r,
\end{equation}
based upon the estimate $\|S_M({\lambda})\| \le 12 \rho_M^{-r} \|V\|_r$ from~\eqref{eq:SMestim1}--\eqref{eq:SMestim2}.
Finally for $I_3$, we proceed as in the proof of Lemma~\ref{lem:UN-UNK-bnd}.
Indeed, we apply, once again, the triangle inequality to obtain $I_3\le I_{3,1} + I_{3,2}$ with
\begin{align*}
I_{3,1} = \| S_M(\lambda_{\sigma i}) - S_{MN}(\lambda_{\sigma i}) \|,
\qquad
I_{3,2} = \| S_{MN}(\lambda_{\sigma i}) - S_{\sigma}(\lambda_{\sigma i}) \|,
\end{align*}
where $S_{MN}({\lambda}):=\Pr_{\!M}^{\!N} ({\mathcal{H}_{\! M}^N}-{\lambda})^{-1} \Pr_{\!M}^{\!N} V \Pr_{\!M}$.
To estimate $I_{3,1}$, we first note that
\begin{align*}
\nonumber
\|S_M({\lambda}) - S_{MN}({\lambda})\|
\le
&
\| h_{-1}^{-1/2+r/2} \Pr_{\!M}^{\!N} \|
\| h_{-1}^{1/2-r/2} \left[ \Pr_{\!M}^\perp (\mathcal{H}_{\! M}^\perp- {\lambda})^{-1} \Pr_{\!M}^\perp -
\Pr_{\!M}^{\!N} (\mathcal{H}_{\! M}^N- {\lambda})^{-1} \Pr_{\!M}^{\!N} \right] h_{-1}^{1/2-r/2} \| \\
& \| h_{-1}^{-1/2+r/2} V \Pr_{\!M} \|.
\end{align*}
Using that $\| h_{-1}^{-1/2+r/2} \Pr_{\!M}^{\!N} \| \le \rho_M^{-1/2+r/2}$ and $\| h_{-1}^{-1/2+r/2} V \Pr_{\!M} \| \le \Enorm{V} \rho_M^{1/2-r/2}$, we obtain
\[
\|S_M({\lambda}) - S_{MN}({\lambda})\|
\le
\Enorm{V}
\| h_{-1}^{1/2-r/2} \left[ \Pr_{\!M}^\perp (\mathcal{H}_{\! M}^\perp- {\lambda})^{-1} \Pr_{\!M}^\perp -
\Pr_{\!M}^{\!N} (\mathcal{H}_{\! M}^N- {\lambda})^{-1} \Pr_{\!M}^{\!N} \right] h_{-1}^{1/2-r/2} \|.
\]
Adapting the proof of~\eqref{eq:UM-UMN}, and noting that $4 \rho_N^{-r}\Enorm{V} + \frac{16 \rho_M^{-2r}\Enorm{V}^2}{1 - 4 \rho_M^{-r} \Enorm{V}} <1$ and $4 \rho_M^{-r} \Enorm{V}<1$, we obtain
\begin{equation}
\label{eq:I31}
I_{3,1} \lesssim
\rho_N^{-r} \Enorm{V}.
\end{equation}
Finally, for $I_{3,2}$, we proceed as in the derivation of~\eqref{eq:UMN-Us}, starting from an ansatz similar to~\eqref{eq:AuxLem7-2}
\[
S_{MN}({\lambda})-S_\sigma({\lambda})
= \sum_{k=K+1}^\infty
\Pr_{\!M}^{\!N} h_{{\lambda}}^{-1/2} \Big[ - h_{{\lambda}}^{-1/2} V_{\! M}^N h_{{\lambda}}^{-1/2} \Big]^k h_{{\lambda}}^{-1/2} \Pr_{\!M}^{\!N} V \Pr_{\!M},
\]
yielding, together with $\|\Pr_{\!M}^{\!N} h_{{\lambda}}^{-1/2}\| \lesssim \rho_M^{-1/2}$ and
$\|h_{{\lambda}}^{-1/2} \Pr_{\!M}^{\!N} V \Pr_{\!M}\| \lesssim \rho_M^{1/2-r} \Enorm{V}$,
\begin{align}
\label{eq:I32}
I_{3,2} = \| S_{MN}(\lambda_{\sigma i}) - S_{\sigma}(\lambda_{\sigma i}) \|
\lesssim
\rho_M^{-r} \| V \|_r \left[4 \rho_M^{-r}\Enorm{V} \right]^{K+1}.
\end{align}
Starting from~\eqref{eq:Qansatz} and combining \eqref{eq:SMestim1}--\eqref{eq:SMestim2}, \eqref{eq:I2}, \eqref{eq:I31}, \eqref{eq:I32} with the estimates~\eqref{eq:est-eigenvectors2-3} (with $s=0$), \eqref{eq:EVestimPr} and the bound~\eqref{as:boundby12} concludes the proof.
\end{proof}
\begin{remark}
\label{rem:ConvFP}
Having now understood how to apply Theorem~\ref{thm:FS-pert-var} in this context, i.e.\ using~\eqref{eq:ApplThm3}, and having seen the abstract theory of Section~\ref{sec:pert-est}, we can now make a statement about the convergence of the fixed-point iteration schemes~\eqref{EVPNMKk} and \eqref{EVPNMKk-2}:
Following Corollary~\ref{cor:ContractionH}, we note that if $\nu_{\sigma i}({\lambda})$ parametrizes an isolated branch on some interval $I_i$ containing $\lambda_{\sigma i}$, if $M\ge M_0$, and if, using~\eqref{eq:EvalEstimPr1-1}--\eqref{eq:EvalEstimPr1-2} and~\eqref{eq:EVestimPr}, $\varepsilon(\sigma,r,V)$ is small enough,
then the fixed-point iterations converge for any starting point in $I_i$.
Unfortunately, it is difficult to assess whether the isolated branch property holds in practical applications, so this remains an abstract result.
\end{remark}
\section{Numerical results}
\label{sec:NumRes}
In this section, we test the theoretical estimates developed in this article.
We consider a one-dimensional test case with $\Omega=(0,1)$ and a potential $V_t$ given by its Fourier coefficients:
\[
\widehat V_0 = -10,\qquad
\widehat V_n = -\frac{5}{|n|^t},
\]
so that $V_t\in \Hs{t-\tfrac{1}{2}-\varepsilon}$ for any $\varepsilon>0$.
We then consider the two values $t=1$ and $t=0$ (see Figure~\ref{fig:potential} for a graphical illustration of the case $t=1$), so that the embedding expressed under the constraints~\eqref{eq:SobEmb} yields $\Enorm{V}<\infty$ for $r =1$ if $t=1$, and for all $r < 1/2$ if $t=0$.
Note that, to the best of our knowledge, the classical convergence analysis does not cover such low regularity of the potentials. While the standard analysis can probably be extended to potentials in $L^2$, thus covering the case $t=1$ (although we are not aware of any published analysis in this case), it certainly does not hold without further developments for $t=0$.
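For concreteness, a possible construction of the test potentials on a uniform grid is sketched below in Python; the truncation order and the Fourier normalization on $(0,1)$ are assumptions of this snippet, not a specification of the actual implementation.
\begin{verbatim}
import numpy as np

def potential(t, nmax=500, npts=2048):
    # V_t(x) = sum_n Vhat_n exp(2*pi*i*n*x), with Vhat_0 = -10 and
    # Vhat_n = Vhat_{-n} = -5/|n|**t, so V_t is real-valued.
    x = np.linspace(0.0, 1.0, npts, endpoint=False)
    V = -10.0 * np.ones_like(x)
    for n in range(1, nmax + 1):
        V += 2.0 * (-5.0 / n**t) * np.cos(2 * np.pi * n * x)
    return x, V

x, V1 = potential(t=1)   # V_1 in H^{1/2 - eps}
x, V0 = potential(t=0)   # V_0 in H^{-1/2 - eps}: truncated series only
\end{verbatim}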
In Figure~\ref{fig:convK}, we illustrate the convergence of the discrete solutions with respect to $K$ for different values of $M$ and $t=1$ and $t=0$, and for fixed $N=500$.
The error in the eigenvalue and eigenvector are defined as
\[
{\sf err}_{\sf val} = |\lambda_\star-\lambda_{\sigma i}|
\qquad
\mbox{and}
\qquad
{\sf err}_{\sf vec} = \|\varphi_{i} - Q_\sigma(\lambda_{\sigma i})\varphi_{\sigma i}\|,
\]
where $(\lambda_\star,\varphi_{M i})$, $(\lambda_{\sigma i},\varphi_{\sigma i})$ are the $i$-th solutions to \eqref{EVPN} and \eqref{EVPNMK} respectively, using the computational Strategy 1 defined by~\eqref{EVPNMKk} targeting the smallest eigenvalue.
The ``exact'' solution is obtained by computing the variational approximation for $N_{\sf e}=1000$.
We observe that, in agreement with the theory, the convergence rate with respect to $K$ is, for small enough $K$ and $M$, the same for the eigenvalue and the eigenvector error, and that this rate improves with increased values of $M$.
In this example, we observe that the condition $M\ge M_0$ is not restrictive and convergence can be observed for all values of $M$.
It is also noted, in particular for the larger (but still very moderate) values of $M$, that the truncation order $K$ can be kept very low to achieve a good accuracy.
This, in turn, means that the number of computations on the fine grid (essentially matrix-vector products involving the fine grid for each matrix-vector product involving the coarse Hamiltonian $\mathcal{H}_{\! \sigma}({\lambda})$) can be kept to a minimum.
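To make this cost structure concrete, the following hedged Python sketch evaluates the truncated map $U_{\! \sigma}({\lambda})$ through the series representation used in the proof of Lemma~\ref{lem:UNK-bnd}, in a planewave basis. It is a toy dense-matrix version (a practical fine-grid implementation would rely on matrix-vector products only), and the basis ordering and normalization are illustrative assumptions.
\begin{verbatim}
import numpy as np

def U_sigma(lam, Vmat, freqs, M, K):
    # Vmat : matrix of V in the planewave basis indexed by freqs,
    # freqs: integer frequencies n with |n| <= N; requires lam to
    # lie below the spectrum of -Delta on the annulus M < |n| <= N.
    h = (2 * np.pi * freqs) ** 2 - lam        # diagonal of -Delta - lam
    low = np.abs(freqs) <= M                  # coarse space
    mid = ~low                                # annulus M < |n| <= N
    d = 1.0 / np.sqrt(h[mid])                 # h_lam^{-1/2} on the annulus
    B = d[:, None] * Vmat[np.ix_(mid, mid)] * d[None, :]
    C = d[:, None] * Vmat[np.ix_(mid, low)]   # h^{-1/2} P_M^N V P_M
    S = np.zeros_like(B)
    term = np.eye(B.shape[0])
    for _ in range(K + 1):                    # sum_{k=0}^{K} (-B)^k
        S = S + term
        term = -term @ B
    return -C.conj().T @ S @ C                # U_sigma(lam) on X_M
\end{verbatim}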
A similar behaviour is reported in Figure~\ref{fig:convK_n3} for the third eigenvalue using the potential $V_{t=1}$ and again $N=500$.
The number of SCF iterations~\eqref{EVPNMKk} required to reach an increment in the eigenvalue smaller than $10^{-12}$ (thus a very tight tolerance) is stable and very moderate over all test cases, as reported in Tables~\ref{tab:results} and~\ref{tab:results2} (the case $t=0$ behaves similarly and is not reported here).
Finally, Figure~\ref{fig:convNres} illustrates the error in the eigenvalue and eigenvector with respect to~$N$ for different values of $K$ and $M$ for the first eigenvalue with $V_{t=1}$ and $V_{t=0}$.
We observe two regimes. First, for small values of $N$, the error is limited by the size of the fine grid ${\mathsf X}_N$ and decreases with increasing $N\in [25,500]$.
Second, when $N$ is large enough, the error due to the moderate values of $M$ and $K$ dominates and the error stagnates.
As $M$ or $K$ increases, the transition between the two regimes moves to smaller errors. This agrees well with the theoretical result presented in Theorem~\ref{thm:main}.
For the potential $V_{t=1}$, we observe that the convergence rate in $N$ (for $M,K$ large enough) is roughly 3 for the eigenvalue error and 2.5 for the eigenvector error, which are the rates predicted by the standard analysis as outlined in Corollary~\ref{rem:TriangleIneq}.
For the less regular case $V_{t=0}$, for which $\Enorm{V}<\infty$ for any $r<1/2$, we observe a rate of roughly 1 for both the eigenvalue and the eigenvector $L^2$-error, which is exactly as predicted by Theorem~\ref{thm:main}.
\begin{figure}[t!]
\centering
\includegraphics[trim = 0mm 0mm 0mm 0mm, clip,width=0.6\textwidth]{potential2.png}
\caption{Illustration of the potential $V_{t=1}$.}
\label{fig:potential}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[trim = 0mm 0mm 0mm 0mm, clip,width=0.45\textwidth]{ConvK_s1_eval.png}
\includegraphics[trim = 0mm 0mm 0mm 0mm, clip,width=0.45\textwidth]{ConvK_s1_evec_q.png}
\\
\includegraphics[trim = 0mm 0mm 0mm 0mm, clip,width=0.45\textwidth]{ConvK_s0_eval.png}
\includegraphics[trim = 0mm 0mm 0mm 0mm, clip,width=0.45\textwidth]{ConvK_s0_evec_q.png}
\caption{
The convergence of the eigenvalue error ${\sf err}_{\sf val}$ (left) and the eigenvector error ${\sf err}_{\sf vec}$ (right) corresponding to the first eigenvalue with respect to $K$ for different values of $M$ and for fixed $N=500$.
The top row corresponds to a potential $V_{t=1}$ whereas the bottom row corresponds to~$V_{t=0}$.}
\label{fig:convK}
\end{figure}
\subfile{tablecase1}
\subfile{tablecase2}
\begin{figure}[t!]
\centering
\includegraphics[trim = 0mm 0mm 0mm 0mm, clip,width=0.45\textwidth]{ConvK_s1_Nres_K.png}
\includegraphics[trim = 0mm 0mm 0mm 0mm, clip,width=0.45\textwidth]{ConvK_s1_Nres_N.png}
\includegraphics[trim = 0mm 0mm 0mm 0mm, clip,width=0.45\textwidth]{ConvK_s0_Nres_K.png}
\includegraphics[trim = 0mm 0mm 0mm 0mm, clip,width=0.45\textwidth]{ConvK_s0_Nres_N.png}
\caption{
The convergence of the eigenvalue error ${\sf err}_{\sf val}$ (straight lines) and the eigenvector error ${\sf err}_{\sf vec}$ (dotted lines) corresponding to the first eigenvalue with respect to $N$ for different values of $K$ (left) with $M=2$ (top) and $M=1$ (bottom) and different values of $M$ (right) with $K=2$ (top) and $K=1$ (bottom) for the potential $V_{t=1}$ (top) and $V_{t=0}$ (bottom).
}
\label{fig:convNres}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[trim = 0mm 0mm 0mm 0mm, clip,width=0.45\textwidth]{ConvK_s1_eval_n3.png}
\includegraphics[trim = 0mm 0mm 0mm 0mm, clip,width=0.45\textwidth]{ConvK_s1_evec_n3_q.png}
\caption{
The convergence of the eigenvalue error ${\sf err}_{\sf val}$ (left) and the eigenvector error ${\sf err}_{\sf vec}$ (right) corresponding to the third eigenvalue with respect to $K$ for different values of $M$ and for fixed $N=500$ and with $V_{t=1}$.
}
\label{fig:convK_n3}
\end{figure}
\section{Conclusion and perspectives}
\label{sec:sec6}
In this paper, we have proposed a new numerical method based on the Feshbach-Schur map in combination with planewave discretizations for linear Schr\"odinger eigenvalue problems.
The method does not rely on the variational principle but reformulates the infinite-dimensional problem as an equivalent problem, non-linear in the spectral parameter, on a finite dimensional grid whose unknowns are the exact eigenvalue and the best-approximation of the exact eigenfunctions on the given grid.
Such a problem can then be approximated by evaluating the Feshbach-Schur map on a second finer grid.
The substantial contribution of this paper is an analysis providing error estimates for the proposed method in all discretization parameters.
For this, we developed in Section~\ref{sec:pert-est} a version of perturbation theory that relies on the notion of form-boundedness with increased regularity, as stated by Assumption~\ref{as:pot}.
Having established the method and its analysis, its full benefits shall be further analyzed in the future.
At the present stage, it is worth mentioning that, for the considered one-dimensional problem, the contraction in the perturbation is rather small and the non-linear iteration converges rapidly.
Also, in view of more sophisticated non-linear eigenvalue problems, the artificial extra non-linearity does not seem to be much of a burden.
Future developments include the extension of Section~\ref{sec:pert-est} to a more general family of operators, including non-symmetric perturbations of self-adjoint operators, as well as the extension of the numerical method and its analysis to clusters of eigenvalues using a density-matrix based formulation.
\bibliographystyle{abbrv}
\bigskip Given a rational function $r\left( z\right) =\frac{p\left(
z\right) }{q\left( z\right) },$ where $p\left( z\right) $ and $q\left(
z\right) $ are two co-prime complex polynomials, we consider the quadratic
differential on the Riemann sphere $\widehat{%
\mathbb{C}
}$ :%
\begin{equation}
\varpi_{r}\left( z\right) =-\left( \frac{r^{\prime}\left( z\right)
}{r\left( z\right) }\right) ^{2}dz^{2}=-\left( \frac{p^{\prime}\left(
z\right) q\left( z\right) -p\left( z\right) q^{\prime}\left( z\right)
}{p\left( z\right) q\left( z\right) }\right) ^{2}dz^{2}. \label{qdiff}%
\end{equation}
\emph{Finite critical points} and \emph{infinite critical points} of
$\varpi_{r}$ are respectively its zeros and poles; all other points of
$\widehat{%
\mathbb{C}
}$ are called \emph{regular points} of $\varpi_{r}.$
It is obvious that the partial fraction decomposition of $\frac{r^{\prime
}\left( z\right) }{r\left( z\right) }$ is as follows:
\begin{equation}
\frac{r^{\prime}\left( z\right) }{r\left( z\right) }=\sum_{p\left(
a\right) q\left( a\right) =0}\frac{m_{a}}{z-a},\label{lucas}%
\end{equation}
where $m_{a}\in%
\mathbb{Z}
^{\ast}$ is the multiplicity of $a$ as a zero of $p\left( z\right) ,$ counted
negatively if $a$ is a zero of $q\left( z\right) .$ We deduce that
\[
\ \varpi_{r}\left( z\right) =-\frac{m_{a}^{2}}{\left( z-a\right) ^{2}%
}\left( 1+\mathcal{O}(z-a)\right) dz^{2},\quad z\rightarrow a.
\]
In other words, the zeros of $p$ and $q$ are poles of order $2$ of
$\varpi_{r}$ with negative residue.
If
\[
\deg\left( p^{\prime}q-pq^{\prime}\right) =\deg\left( pq\right) -1,
\]
(in particular, if $\deg\left( p\right) \neq\deg\left( q\right) $), then,
with the parametrization $u=1/z$, we get
\[
\ \varpi_{r}\left( u\right) =-\frac{\left( \deg\left( p\right)
-\deg\left( q\right) \right) ^{2}}{u^{2}}\left( 1+\mathcal{O}(u)\right)
du^{2},\quad u\rightarrow0;
\]
thus, $\infty$ is another double pole of $\varpi_{r}$ with negative
residue. If
\[
\deg\left( p^{\prime}q-pq^{\prime}\right) <\deg\left( pq\right) -2,
\]
then $\infty$ is a zero of $\varpi_{r}$ with multiplicity greater than $1.$ In
the case
\[
\deg\left( p^{\prime}q-pq^{\prime}\right) =\deg\left( pq\right) -2,
\]
$\infty$ is a regular point.
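As a simple illustration of this trichotomy: for $r\left( z\right) =\frac{z-1}{z+1}$ one computes $p^{\prime}q-pq^{\prime}=\left( z+1\right) -\left( z-1\right) =2,$ so that $\deg\left( p^{\prime}q-pq^{\prime}\right) =0=\deg\left( pq\right) -2$ and $\infty$ is a regular point of $\varpi_{r};$ for $r\left( z\right) =z^{2}-1$ (so $q\equiv1$), $\deg\left( p^{\prime}q-pq^{\prime}\right) =1=\deg\left( pq\right) -1$ and $\infty$ is a double pole with residue $-\left( \deg p\right) ^{2}=-4.$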
\emph{Horizontal trajectories} (or just trajectories) of the quadratic
differential $\varpi_{r}$ are the loci defined by the condition%
\[
\varpi_{r}\left( z\right) >0,
\]
or equivalently%
\begin{equation}
\Re\int^{z}\frac{r^{\prime}\left( t\right) }{r\left( t\right) }%
dt=\log\left\vert r\left( z\right) \right\vert =\text{\emph{const}}.
\label{eq traj}%
\end{equation}
If $z\left( t\right) ,t\in%
\mathbb{R}
$ is a horizontal trajectory, then the function
\[
t\longmapsto\Im\int_{0}^{t}\frac{r^{\prime}\left( z\left( u\right) \right)
}{r\left( z\left( u\right) \right) }z^{\prime}\left( u\right)
du=\arg\left( r\left( z\left( t\right) \right) \right) -\arg\left(
r\left( z\left( 0\right) \right) \right)
\]
is monotone.
The \emph{vertical} (or, \emph{orthogonal}) trajectories are obtained by
replacing $\Im$ by $\Re$ in equation (\ref{eq traj}). The horizontal and
vertical trajectories of the quadratic differential $\varpi_{r}$ produce two
pairwise orthogonal foliations of the Riemann sphere $\widehat{%
\mathbb{C}
}$.
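The simplest illustration is $r\left( z\right) =z,$ for which $\varpi_{r}\left( z\right) =-\frac{dz^{2}}{z^{2}}$: the horizontal trajectories are the circles $\left\vert z\right\vert =$ \emph{const} (the level curves of $\log\left\vert r\right\vert $), the vertical trajectories are the rays $\arg z=$ \emph{const}, and the two double poles $z=0$ and $z=\infty$ both carry negative residue.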
A trajectory passing through a critical point of $\varpi_{r}$ is called a
\emph{critical trajectory}. In particular, if it starts and ends at finite
critical points, it is called a \emph{finite critical trajectory}; otherwise, we
call it an \emph{infinite critical trajectory}. If two different
trajectories are not disjoint, then their intersection must be a zero of the
quadratic differential.
The closure of the set of finite and infinite critical trajectories is called
the \emph{critical graph} of $\varpi_{r}$; we denote it by $\Gamma_{r}.$
The local and global structure of the trajectories is well known (more
details about the theory of quadratic differentials can be found in
\cite{Strebel}, \cite{jenkins}, or \cite{F.Thabet}); in particular:
\begin{itemize}
\item At any regular point, horizontal (resp.\ vertical) trajectories look
locally like simple analytic arcs passing through this point, and through every
regular point of $\varpi_{r}$ passes a uniquely determined horizontal (resp.\
vertical) trajectory of $\varpi_{r};$ these horizontal and vertical
trajectories are locally orthogonal at this point.
\item From each zero of multiplicity $m$ of $\varpi_{r},$ there emanate
$m+2$ critical trajectories spaced at equal angles of $2\pi/(m+2)$.
\item Any double pole has a neighborhood such that all trajectories inside it
either take a loop shape encircling the pole or a radial form diverging to the pole,
according to whether the residue is negative or positive.
\item A trajectory in the large can be either a closed curve not passing
through any critical point (\emph{closed trajectory}), or an arc connecting
two critical points, or an arc that has no limit along at least one of its
directions (\emph{recurrent trajectory}).
\end{itemize}
The set $\widehat{%
\mathbb{C}
}\setminus\Gamma_{r}$ consists of a finite number of domains called the
\emph{domain configurations} of $\varpi_{r}.$ For a general quadratic
differential on $\widehat{%
\mathbb{C}
}$, there are five kinds of domain configurations, see \cite[Theorem 3.5]%
{jenkins}. Since all the infinite critical points of $\varpi_{r}$ are poles of
order $2$ with negative residues, there are only three possible domain configurations:
\begin{itemize}
\item the \emph{Circle domain}: It is swept by closed trajectories and
contains exactly one double pole. Its boundary is a closed critical
trajectory. For a suitably chosen real constant $c$ and some real number
$\rho>0,$ the function $z\longmapsto \rho\exp\left( c\int^{z}\frac{r^{\prime
}\left( t\right) }{r\left( t\right) }dt\right) $ is a conformal map from
the circle domain $D$ onto the unit disk; it extends continuously to the
boundary $\partial D,$ and sends the double pole to the origin.
\item the \emph{Ring domain}: It is swept by closed trajectories. Its boundary
consists of two connected components. For a suitably chosen real constant $c$
and some real numbers $0<r_{1}<r_{2},$ the function $z\longmapsto\exp\left(
c\int^{z}\frac{r^{\prime}\left( t\right) }{r\left( t\right) }dt\right) $
is a conformal map from the ring domain $D$ onto the annulus $\left\{
z:r_{1}<\left\vert z\right\vert <r_{2}\right\} $ and it extends continuously
to the boundary $\partial D.$
\item the \emph{Dense domain}: It is swept by a recurrent trajectory,
i.e., a trajectory such that the interior of its closure is non-empty. Jenkins' Three-pole Theorem
(see \cite[Theorem 15.2]{Strebel}) asserts that a quadratic differential on
the Riemann sphere with at most three poles cannot have recurrent
trajectories. In general, the non-existence of such trajectories is not
guaranteed, but here, following the idea of \emph{level function} of
Baryshnikov and Shapiro (see \cite{shapiro barish}), the quadratic
differential $\varpi_{r}$ excludes the dense domain, as we will see in
Proposition \ref{no recurrent}.
\end{itemize}
A very helpful tool that will be used in our investigation is the
Teichm\"{u}ller lemma (see \cite[Theorem 14.1]{Strebel}).
\begin{definition}
A domain in $\widehat{%
\mathbb{C}
}$ bounded only by segments of horizontal and/or vertical trajectories of
$\varpi_{r}$ (and their endpoints) is called a $\varpi_{r}$-polygon.
\end{definition}
\begin{lemma}
[Teichm\"{u}ller]\label{teich lemma} Let $\Omega$ be a $\varpi_{r}$-polygon,
let $z_{j}$ be the critical points on the boundary $\partial\Omega$ of
$\Omega,$ and let $t_{j}$ be the corresponding interior angles with vertices
at $z_{j},$ respectively. Then%
\begin{equation}
\sum\left( 1-\dfrac{\left( m_{j}+2\right) t_{j}}{2\pi}\right) =2+\sum
n_{i}, \label{Teich equality}%
\end{equation}
where $m_{j}$ are the multiplicities of $z_{j},$ and $n_{i}$ are the
multiplicities of critical points of $\varpi_{r}$ inside $\Omega.$
\end{lemma}
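As a quick consistency check of \eqref{Teich equality}, let $\Omega$ be the disk bounded by a closed trajectory encircling a single double pole of $\varpi_{r}$: the boundary carries no critical points, so the left-hand side is an empty sum equal to $0$, while, with the convention that a pole of order $2$ counts with multiplicity $-2$, the right-hand side is $2+\left( -2\right) =0.$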
\section{Lemniscates}
We use the notations of \cite{khavinson}. Let us denote $n=\deg r=\max\left(
\deg p,\deg q\right) >0$. For $c>0,$ the set
\begin{equation}
\Gamma_{r,c}=\{z\in%
\mathbb{C}
:|r(z)|=c\} \label{lemniscate}%
\end{equation}
is called a rational lemniscate of degree $n.$ For more details, see
\cite{Sheil-Small}. From the point of view of the theory of quadratic
differentials, each connected component of the lemniscate $\Gamma_{r,c}$
coincides with a horizontal trajectory of $\varpi_{r}=-\left( \frac
{r^{\prime}\left( z\right) }{r\left( z\right) }\right) ^{2}dz^{2},$ as we
have seen in equation (\ref{eq traj}). The lemniscate $\Gamma_{r,c}$ is
entirely determined by the knowledge of the critical graph $\Gamma_{r}$ (which
is the union of the lemniscates $\Gamma_{r,\left\vert r\left( a\right)
\right\vert },$ for all zero's $a$ of $\varpi_{r}$) of the quadratic
differential of $\varpi_{r}.$ In particular, if we denote by $n_{z}$ and
$n_{p}$ respectively the number of zero's and poles $r\left( z\right) $ in
$\widehat{%
\mathbb{C}
},$ then, from the local behavior of the trajectories, we see that, for
$c\rightarrow0^{+}$, the lemniscate $\Gamma_{r,c}$ is formed by exactly
$n_{z}$ disjoint closed curves each of them encircles a zero of $r\left(
z\right) $, while for $c\rightarrow+\infty,$ $\Gamma_{r,c}$ is formed by
exactly $n_{p}$ disjoint closed curves each of them encircles a pole of
$r\left( z\right) $. If $\deg\left( p^{\prime}q-pq^{\prime}\right)
<\deg\left( pq\right) -2,$ then, $\infty$ is a zero of $\varpi_{r}$ of
multiplicity $m\geq2,$ and there are $m+2$ critical trajectories emerging from
$\infty$ dividing, in a symmetric way, the complement of some ball centered at the origin
into $m+2$ connected components. See Figure \ref{FIG1}. In the rest of this
note, we assume that $\infty$ is a double pole, i.e., $\deg\left( p^{\prime
}q-pq^{\prime}\right) =\deg\left( pq\right) -1.$ \begin{figure}[tbh]
\begin{minipage}[b]{0.4\linewidth}
\centering\includegraphics[scale=0.28]{7.pdf}
\end{minipage}\hfill
\begin{minipage}[b]{0.4\linewidth} \includegraphics[scale=0.28]{8.pdf}
\end{minipage}
\hfill\caption{Critical graphs of $\varpi_{r},$ $r=$ $\frac{z^{2}-1}{z^{2}+1}$
(left), and $r=$ $\frac{z^{2}-4}{z^{2}+1}$ (right).}%
\label{FIG1}%
\end{figure}
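The critical graphs of Figure \ref{FIG1} can be reproduced numerically: each connected component of a lemniscate is a level curve of $\left\vert r\right\vert ,$ so it suffices to plot the level sets of $\left\vert r\right\vert $ through the finite zeros of $\varpi_{r}$ (together with the level attached to $\infty$ when $\infty$ is a zero). A minimal Python sketch for $r\left( z\right) =\frac{z^{2}-1}{z^{2}+1}$, where both critical levels coincide and equal $1$ (grid size and plotting window are arbitrary choices; a fine grid is needed since contours at the exact critical level are numerically delicate):
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

r = lambda z: (z**2 - 1) / (z**2 + 1)
# p'q - pq' = 4z: the only finite zero of varpi_r is z = 0, with
# critical level |r(0)| = 1 = lim_{z -> infinity} |r(z)|.
x, y = np.meshgrid(np.linspace(-3, 3, 800), np.linspace(-3, 3, 800))
Z = x + 1j * y
plt.contour(x, y, np.abs(r(Z)), levels=[1.0], colors="k")
plt.gca().set_aspect("equal")
plt.show()
\end{verbatim}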
\begin{definition}
A quadratic differential on $\widehat{%
\mathbb{C}
}$ is called \emph{Strebel} if the complement to the union of its closed
trajectories has vanishing area.
\end{definition}
\begin{proposition}
\label{no recurrent}The quadratic differential $\varpi_{r}$ is Strebel.
\end{proposition}
\begin{proof}
Since the critical points of $\varpi_{r}$ are only zeros and double poles
with negative residues, it is sufficient to prove that $\varpi_{r}$ has no
recurrent trajectory. The function%
\[
f:%
\mathbb{C}
\setminus\left\{ \text{poles of }r\left( z\right) \right\} \longrightarrow%
\mathbb{R}
,\qquad z\longmapsto\left\vert r\left( z\right) \right\vert ,
\]
is continuous, and constant on each horizontal trajectory of $\varpi_{r}.$ If
$\varpi_{r}$ has a recurrent trajectory, then its domain configuration
contains a dense domain $D.$ Since $f$ is constant on the recurrent trajectory, whose closure has non-empty interior, $f$ must be constant on an open subset of $D,$
which is clearly impossible by the analyticity (and non-constancy) of the rational function
$z\longmapsto r\left( z\right) .$
\end{proof}
A necessary condition for the existence of a finite critical trajectory
connecting two finite critical points of $\varpi_{r}$ is the existence of a
Jordan arc $\gamma$ connecting them, such that
\begin{equation}
\Re\int_{\gamma}\frac{r^{\prime}\left( t\right) }{r\left( t\right) }dt=0.
\label{cond necess}%
\end{equation}
Unfortunately, this condition is not sufficient in general, as can be shown
easily for the case of $r\left( z\right) =\left( z^{2}-1\right) \left(
z^{2}-4\right) ;$ see Figure \ref{FIG2}. \begin{figure}[th]
\centering\includegraphics[height=1.8in,width=2.8in]{9.pdf}\caption{Critical
graph of $\varpi_{p}$, $p=\left( z^{2}-1\right) \left( z^{2}-4\right) .$}%
\label{FIG2}%
\end{figure}However, a sufficient condition is given by the
following proposition.
\begin{proposition}
Let us denote $z_{1},...,z_{m}$ the finite critical points of $\varpi_{r}.$
If
\[
\left\vert w_{i}\right\vert =\left\vert w_{j}\right\vert =\max\left\{
\left\vert w_{k}\right\vert : w_{k}:=r\left( z_{k}\right) ,\;k=1,...,m\right\}
\]
for some $1$ $\leq i<j\leq m,$ then, there exists a finite critical trajectory
joining $z_{i}$ and $z_{j}.$ In particular, the critical graph $\Gamma_{r}$ is
connected, if and only if $\left\vert w_{1}\right\vert =\cdot\cdot
\cdot=\left\vert w_{m}\right\vert .$
\end{proposition}
\begin{proof}
If no finite critical trajectory joins $z_{i}$ and $z_{j},$ then a lemniscate
$\Gamma_{r,c},$ for some $c>\left\vert w_{i}\right\vert ,$ is not connected:
$\Gamma_{r,c}$ is a disjoint union of $s\geq2$ loops $L_{1},...,L_{s},$ each
of them encircles a part of the critical graph $\Gamma_{r}.$ Looking at each
of these loops as a $\varpi_{r}$-polygon and applying Lemma \ref{teich lemma}
to the $k$-th loop, we get:
\begin{equation}
0=2+\sum_{j\,:\,z_{j}\in \operatorname{int} L_{k}} n_{j},\qquad k=1,...,s. \label{somme}%
\end{equation}
Summing all equalities in (\ref{somme}) over $k$, and taking into account our
assumption that $\deg\left( p^{\prime}q-pq^{\prime}\right)
=\deg\left( pq\right) -1,$ we get%
\[
0=2s+2\left( \deg\left( p^{\prime}q-pq^{\prime}\right) -\deg\left(
pq\right) \right) =2s-2;
\]
a contradiction. The second point is a mere consequence.
\end{proof}
The numbers $w_{1}=r\left( z_{1}\right) ,...,w_{m}=r\left( z_{m}\right) $
are called the \emph{non-vanishing critical values} of $r\left( z\right) .$
\section{\bigskip Fingerprints of polynomial lemniscates}
Here follows a brief discussion of the case of polynomial lemniscates
$\Gamma_{p,1}$. Let us denote by
\begin{align*}
\Omega_{-} & :=\{z\in%
\mathbb{C}
:|p(z)|<1\},\\
\Omega_{+} & :=\{z\in\widehat{%
\mathbb{C}
}:|p(z)|>1\}.
\end{align*}
The maximum modulus principle implies that $\Omega_{+}$ is a connected open
subset containing a neighborhood of $\infty$ in $\widehat{%
\mathbb{C}
}$.
\begin{definition}
A lemniscate $\Gamma_{p,1}$ of degree $n$ is \emph{proper }if it is smooth
($p^{\prime}\left( z\right) \neq0$ on $\Gamma_{p,1}$) and connected.
\end{definition}
\bigskip Let $z_{1},...,z_{s},$ $s\leq n-1$ be the zeros (repeated according
to their multiplicity) of $\varpi_{p}.$ The non-vanishing critical values for
$p\left( z\right) $ are the values $w_{1}=p\left( z_{1}\right)
,...,w_{s}=p\left( z_{s}\right) .$ For a smooth lemniscate $\Gamma_{p,1}$ of
degree $n$, the following characterizes the property of being proper through
the critical values :
\begin{proposition}
Assume that the lemniscate $\Gamma_{p,1}$ is smooth. Then, $\Gamma_{p,1}$ is
proper if and only if all the critical values $w_{1},...,w_{s}$ satisfy
$|w_{k}|<1.$
\end{proposition}
\begin{proof}
A proof of this proposition can be found in \cite{khavinson}. We provide here
a more direct proof relying on the theory of quadratic differentials. The
smoothness of $\Gamma_{p,1}$ implies that it is not a critical trajectory.
Suppose that $|w_{k}|>1$ for some $k\in\left\{ 1,...,s\right\} ,$ and
consider two critical trajectories emerging from $z_{k}$ that form a loop
$\gamma.$ This loop cannot intersect $\Gamma_{p,1},$ while its interior must
meet $\Omega_{-},$ since $\gamma$ encircles a double pole of $\varpi_{p},$
that is, a zero of $p$; the same holds for the loop formed by the remaining
critical trajectories emerging from $z_{k},$ so $\Gamma_{p,1}$ meets two
disjoint faces of the level set $\left\{ \left\vert p\left( z\right)
\right\vert =|w_{k}|\right\} $ and cannot be connected; a contradiction. The
converse implication is clear.
\end{proof}
Note that the interior $\Omega_{-}$ of a proper lemniscate of degree $n$ (or,
for a general smooth lemniscate, each component of $\Omega_{-}$) is simply
connected, since its complement is connected.
Let $\gamma$ be a $\mathcal{C}^{\infty}$ Jordan curve in $\mathbb{C};$ by the
Jordan curve theorem, $\gamma$ splits $\widehat{\mathbb{C}}$ into a bounded
and an unbounded simply connected component, $D_{-}$ and $D_{+}.$ The Riemann
mapping theorem asserts that there exist two conformal maps $\phi_{-}%
:\Delta\longrightarrow D_{-}$ and $\phi_{+}:\widehat{\mathbb{C}}%
\setminus\overline{\Delta}\longrightarrow D_{+},$ where $\Delta$ is the unit
disk. The map $\phi_{+}$ is uniquely determined by the normalization
$\phi_{+}\left( \infty\right) =\infty$ and $\phi_{+}^{\prime}\left(
\infty\right) >0.$ It is well known that $\phi_{-}$ and $\phi_{+}$ extend to
$\mathcal{C}^{\infty}$-diffeomorphisms on the closures of their respective
domains. The \textit{fingerprint of }$\gamma$ is the map $k:=\phi_{+}%
^{-1}\circ\phi_{-}:S^{1}\longrightarrow S^{1}$ from the unit circle $S^{1}$ to
itself. Note that $k$ is uniquely determined up to pre-composition with an
automorphism of $\Delta.$ Moreover, the fingerprint $k$ is invariant under
translations and scalings of the curve $\gamma.$
\subsection{Lemniscates in a Circle Domain}
Let $a$ be a double pole of $\varpi_{p}$ ($a=\infty$ or $p\left( a\right)
=0$). Jenkins' theorem on the domain configurations of the quadratic
differential $\varpi_{p}$ asserts that there exists a connected neighborhood
$\mathcal{U}_{a}$ of $a$ (a Circle Domain of $\varpi_{p}$) bounded by finite
critical trajectories of $\varpi_{p},$ such that all trajectories of
$\varpi_{p}$ (lemniscates of $p$) inside $\mathcal{U}_{a}$ are closed smooth
curves encircling $a.$ Moreover, for a suitably chosen non-vanishing real
constant $c,$ the function
\[
\psi:z\longmapsto\exp\left( c\int^{z}\frac{p^{\prime}\left( t\right)
}{p\left( t\right) }dt\right)
\]
is a conformal map from $\mathcal{U}_{a}$ onto a certain disk centered at
$z=0.$ More explicitly,
\[
\psi\left( z\right) =\beta p\left( z\right) ^{c}%
\]
for some complex number $\beta.$ Bearing in mind that $\psi$ is univalent near
$a$, we get
\[
c=\left\{
\begin{array}
[c]{c}%
\frac{1}{n},\text{ if }a=\infty\\
\frac{1}{\alpha},\text{ if }p\left( a\right) =0,\text{ }%
\end{array}
\right.
\]
where $\alpha$ is the multiplicity of $a$ if $p\left( a\right) =0.$ It
follows that the function
\[
z\longmapsto\left\{
\begin{array}
[c]{c}%
p\left( z\right) ^{\frac{1}{n}},\text{ if }a=\infty,\\
p\left( z\right) ^{\frac{1}{\alpha}},\text{ if }p\left( a\right) =0.
\end{array}
\right.
\]
is a conformal map from $\mathcal{U}_{a}$ onto a certain disk $\Delta_{a}$
centered at $z=0.$ For the sake of simplicity, we may assume that $\Delta_{a}$
has radius $R>1.$ For the given lemniscate $\Gamma_{p,1}$ in $\mathcal{U}%
_{a}$ (see Figure \ref{FIG3}), it is straightforward that the function
$z\longmapsto p\left( z\right) ^{\frac{1}{\alpha}}$ maps $\Omega_{-}$
conformally onto the unit disk $\Delta.$ Thus,
\[
\left\{
\begin{array}
[c]{c}%
\phi_{+}^{-1}\left( z\right) =p\left( z\right) ^{\frac{1}{n}},\text{ if
}a=\infty,\\
\phi_{-}^{-1}\left( z\right) =p\left( z\right) ^{\frac{1}{\alpha}},\text{
if }p\left( a\right) =0.
\end{array}
\right.
\]
\begin{figure}[tbh]
\begin{minipage}[b]{0.5\linewidth}
\centering\includegraphics[scale=0.4]{10.pdf}
\end{minipage}\hfill
\begin{minipage}[b]{0.5\linewidth} \includegraphics[scale=0.5]{11.pdf}
\end{minipage}
\hfill\caption{Critical graph of $\varpi_{\left( z^{2}-1\right) \left(
z^{2}-4\right) }$ and lemniscates in Circle Domains: $a=\infty$ (left), $a=1$
(right).}%
\label{FIG3}%
\end{figure}
In the first case, we notice that $\Gamma_{p,1}$ is proper if and
only if $a=\infty;$ the next theorem gives its fingerprint.
\begin{theorem}
[Ebenfelt, Khavinson and Shapiro]The fingerprint $k:S^{1}\longrightarrow
S^{1}$ of a proper lemniscate $\Gamma_{p,1}$ of the polynomial $p\left(
z\right) =\prod_{k=1}^{n}\left( z-\varsigma_{k}\right) $ is given by
\[
k(z)=B(z)^{1/n},
\]
where $B$ is the Blaschke product of degree $n$%
\[
B(z)=e^{i\theta}\prod_{k=1}^{n}\frac{z-a_{k}}{1-\overline{a_{k}}z}%
\]
for some real number $\theta$, and $a_{k}=\phi_{-}^{-1}\left( \varsigma
_{k}\right) ,$ $k=1,...,n.$
\end{theorem}
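To make the formula concrete, here is a minimal numerical sketch (our
addition, not part of the original text) that evaluates $k(z)=B(z)^{1/n}$ on
the unit circle for given points $a_{k}\in\Delta$; the branch of the $n$-th
root is fixed by unwrapping the argument of $B$ along the circle, and the
particular values of $a_{k}$ below are illustrative placeholders.
\begin{verbatim}
import numpy as np

def fingerprint_eks(a, theta=0.0, num=2048):
    """Evaluate k(z) = B(z)**(1/n) on the unit circle, where B is the
    Blaschke product with zeros a_1, ..., a_n in the unit disk. The
    n-th root is made continuous by unwrapping arg B along S^1."""
    a = np.asarray(a, dtype=complex)
    n = len(a)
    z = np.exp(1j * np.linspace(0.0, 2.0 * np.pi, num, endpoint=False))
    B = np.exp(1j * theta) * np.ones_like(z)
    for ak in a:
        B *= (z - ak) / (1.0 - np.conj(ak) * z)
    arg_B = np.unwrap(np.angle(B))    # |B| = 1 on S^1
    return z, np.exp(1j * arg_B / n)

# Illustrative zeros a_k (placeholders, not computed from a concrete p):
z, k = fingerprint_eks([0.3, -0.2 + 0.4j, 0.1 - 0.5j])
print(np.allclose(np.abs(k), 1.0))    # k indeed maps S^1 to S^1
\end{verbatim}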
In the case $p\left( a\right) =0$, let
\[
p\left( z\right) =\left( z-a\right) ^{\alpha}p_{1}\left( z\right)
,\quad\alpha\in\mathbb{N}^{\ast};\qquad p_{1}\left( z\right) =\prod
_{i=1}^{n-\alpha}\left( z-a_{i}\right) ,\quad p_{1}\left( a\right) \neq0.
\]
With the normalization $\phi_{+}\left( z\right) \rightarrow\infty$ as
$z\rightarrow\infty,$ the function
\[
z\longmapsto\ \frac{p\circ\phi_{+}\left( z\right) }{\prod_{i=1}^{n-\alpha
}\frac{z-\phi_{+}^{-1}\left( a_{i}\right) }{1-\overline{\phi_{+}^{-1}\left(
a_{i}\right) }z}};\left\vert z\right\vert \geq1
\]
is holomorphic in $%
\mathbb{C}
\setminus\overline{\Delta}$, does not vanish there, is continuous in $%
\mathbb{C}
\setminus\Delta,$ and has modulus one on $\partial\Delta=S^{1}$. We deduce the
existence of $\theta\in%
\mathbb{R}
$ such that
\[
p\circ\phi_{+}\left( z\right) =e^{i\theta}z^{n}\prod_{i=1}^{n-\alpha}%
\frac{z-\phi_{+}^{-1}\left( a_{i}\right) }{1-\overline{\phi_{+}^{-1}\left(
a_{i}\right) }z};\left\vert z\right\vert \geq1,
\]
which proves the following theorem.
\begin{theorem}
Let $\Gamma_{p,1}$ be a smooth connected lemniscate such that $z=a$ is the
only zero of $p$ in $\Omega_{-}.$ The fingerprint $k:S^{1}%
\longrightarrow S^{1}$ of $\Gamma_{p,1}$ is given by
\[
k^{-1}(z)=z^{\frac{n}{\alpha}}B_{1}\left( z\right) ^{\frac{1}{\alpha}},
\]
where $B_{1}\left( z\right) $ is the Blaschke product
\[
B_{1}\left( z\right) =e^{i\theta}\prod_{i=1}^{n-\alpha}\frac{z-\phi_{+}%
^{-1}\left( a_{i}\right) }{1-\overline{\phi_{+}^{-1}\left( a_{i}\right)
}z}.
\]
\end{theorem}
\subsection{Lemniscates in a Ring Domain}
In the following, let $\mathcal{U}$ be a Ring Domain of the quadratic
differential $\varpi_{p}.$ It is bounded by two lemniscates $\Gamma_{p,r}$ and
$\Gamma_{p,R}.$ We may assume that
\[
0<r<1<R.
\]
For the sake of simplicity, we may assume that $p$ has exactly two different
zeros $a$ and $b$ in the bounded domain of $\mathbb{C}$ with boundary
$\Gamma_{p,r},$ and write
\begin{align*}
p\left( z\right) & =\left( z-a\right) ^{\alpha}\left( z-b\right)
^{\beta}p_{2}\left( z\right) ,\quad\alpha,\beta\in\mathbb{N}^{\ast};\\
p_{2}\left( z\right) & =\prod_{i=1}^{n-\left( \alpha+\beta\right)
}\left( z-a_{i}\right) ,\quad p_{2}\left( a\right) p_{2}\left( b\right)
\neq0.
\end{align*}
We consider the lemniscate $\Gamma_{p,1}$ of $p$ in $\mathcal{U}$ (see Figure
\ref{FIG4} ). \begin{figure}[th]
\centering\includegraphics[height=1.8in,width=2.8in]{12.pdf}\caption{Critical
graph of $\varpi_{\left( z^{2}-1\right) \left( z^{2}-4\right) }$ with a
lemniscate in a Ring Domain ( $a=1,b=2$ ).}%
\label{FIG4}%
\end{figure}The function
\[
z\longmapsto p\circ\phi_{-}\left( z\right) =\left( \phi_{-}\left(
z\right) -a\right) ^{\alpha}\left( \phi_{-}\left( z\right) -b\right)
^{\beta}p_{2}\left( \phi_{-}\left( z\right) \right)
\]
is holomorphic in $\Delta$, is continuous in $\overline{\Delta},$ has
$\phi_{-}^{-1}\left( a\right) $ and $\phi_{-}^{-1}\left( b\right) $ as its
only zeros (with multiplicities $\alpha$ and $\beta$) in $\Delta,$ and has
modulus one on $\partial\Delta$. We deduce that there exists $\theta_{1}%
\in\mathbb{R}$ such that%
\[
p\circ\phi_{-}\left( z\right) =e^{i\theta_{1}}\left( \frac
{z-\phi_{-}^{-1}\left( a\right) }{1-\overline{\phi_{-}^{-1}\left( a\right)
}z}\right) ^{\alpha}\left( \frac{z-\phi_{-}^{-1}\left( b\right)
}{1-\overline{\phi_{-}^{-1}\left( b\right) }z}\right) ^{\beta};\left\vert
z\right\vert \leq1.
\]
Reasoning as in the previous subsection on $\phi_{+}\left( z\right) ,$ we
get, for some $\theta_{2}\in\mathbb{R},$
\[
p\circ\phi_{+}\left( z\right) =e^{i\theta_{2}}z^{n}\prod
_{i=1}^{n-\left( \alpha+\beta\right) }\frac{z-\phi_{+}^{-1}\left(
a_{i}\right) }{1-\overline{\phi_{+}^{-1}\left( a_{i}\right) }z};\left\vert
z\right\vert \geq1.
\]
Combining the last two equalities for $\left\vert z\right\vert =1$ (note that
$\phi_{-}=\phi_{+}\circ k$ on $S^{1}$), we obtain the following
\begin{theorem}
Let $\Gamma_{p,1}$ be a smooth connected lemniscate such that $\Omega_{-}$
contains exactly two different zeros $a$ and $b$ of $p$ with respective
multiplicities $\alpha$ and $\beta.$ The fingerprint $k:S^{1}\longrightarrow
S^{1}$ of $\Gamma_{p,1}$ satisfies the functional equation
\[
\left( B\circ k^{-1}\right) \left( z\right) =A\left( z\right) ,\quad
\left\vert z\right\vert =1,
\]
where $A$ and $B$ are the Blaschke products given by
\[
B\left( z\right) =e^{i\theta}\left( \frac{z-\phi_{-}^{-1}\left(
a\right) }{1-\overline{\phi_{-}^{-1}\left( a\right) }z}\right) ^{\alpha
}\left( \frac{z-\phi_{-}^{-1}\left( b\right) }{1-\overline{\phi_{-}%
^{-1}\left( b\right) }z}\right) ^{\beta},\quad\theta\in\mathbb{R},
\]%
\[
A\left( z\right) =z^{n}B_{2}\left( z\right) =z^{n}\prod_{i=1}^{n-\left(
\alpha+\beta\right) }\frac{z-\phi_{+}^{-1}\left( a_{i}\right) }%
{1-\overline{\phi_{+}^{-1}\left( a_{i}\right) }z}.
\]
\end{theorem}
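As an illustration (our addition, not from the original text), the functional
equation can be solved for $k^{-1}$ numerically: on $S^{1}$ the argument of a
Blaschke product with zeros in $\Delta$ is strictly increasing, so a
continuous lift of $\arg B$ can be inverted by interpolation. In the sketch
below the data are manufactured (the zeros of $B$ and the test homeomorphism
are placeholders), so that the equation holds by construction and the
recovery can be checked.
\begin{verbatim}
import numpy as np

def blaschke(z, zeros, theta=0.0):
    """Blaschke product e^{i theta} prod (z - c)/(1 - conj(c) z)."""
    w = np.exp(1j * theta) * np.ones_like(z, dtype=complex)
    for c in zeros:
        w *= (z - c) / (1.0 - np.conj(c) * z)
    return w

t = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
z = np.exp(1j * t)

# B: Blaschke product of degree alpha + beta = 2 (illustrative zeros).
zeros_B = [0.35, -0.4 + 0.2j]
B = blaschke(z, zeros_B)

# Manufacture a test case: pick a circle homeomorphism k^{-1} and set
# A := B o k^{-1}, so the functional equation holds by construction.
psi_true = t + 0.3 * np.sin(t)           # increasing lift of k^{-1}
A = blaschke(np.exp(1j * psi_true), zeros_B)

# Recover k^{-1} from (A, B): invert the increasing lift of arg B.
arg_B = np.unwrap(np.angle(B))
arg_A = np.unwrap(np.angle(A))
psi = np.interp(arg_A, arg_B, t)
print(np.max(np.abs(psi - psi_true)))    # small (interpolation error)
\end{verbatim}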
\section*{Methods}
\small
{\bf Lattice configuration.}
All experiments are performed in a mutually orthogonal retro-reflected 3D optical lattice consisting of superlattices along $x$ and $y$ and a simple lattice in the $z$-direction. Each superlattice is created by superimposing two standing waves, a short lattice with wavelength $\lambda\sub{s} = 767$\,nm and a long lattice with $\lambda\sub{l} =2\lambda\sub{s}$. The vertical lattice along $z$ is formed by a standing wave with $\lambda_z = 844$\,nm.
{\bf Tight-binding Hamiltonian of the 2D superlattice.}
\looseness -1 In the tight-binding limit, a 2D superlattice with $d\sub{l} = 2 d\sub{s}$ is described by the 2D Rice-Mele Hamiltonian
\begin{align}
& \hat{H}_{2D}(\varphi_x, \varphi_y) = \nonumber \\
&- \sum_{m_x,m_y} \left[J_x(\varphi_x) + (-1)^{m_x} \delta J_x(\varphi_x)/2 \right] \hat{a}^{\dagger}_{m_x+1,m_y} \hat{a}_{m_x,m_y} + \mathrm{h.c.} \nonumber \\
&- \sum_{m_x,m_y} \left[J_y(\varphi_y) + (-1)^{m_y} \delta J_y(\varphi_y)/2 \right] \hat{a}^{\dagger}_{m_x,m_y+1} \hat{a}_{m_x,m_y} + \mathrm{h.c.} \nonumber \\
&+ \sum_{m_x,m_y} \frac{1}{2} \left[(-1)^{m_x} \Delta_x (\varphi_x) + (-1)^{m_y} \Delta_y(\varphi_y)\right] \hat{a}^{\dagger}_{m_x,m_y} \hat{a}_{m_x,m_y} \nonumber
\end{align}
with $\hat{a}_{m_x,m_y}^{\dagger}$ $(\hat{a}_{m_x,m_y})$ being the creation (annihilation) operator acting on the ($m_x$,$m_y$)-th site in the $xy$-plane. The first (second) term describes the hopping between neighbouring sites along the $x$-axis ($y$-axis) and the last term contains the on-site potential of each lattice site. The long lattices lead to a modulation of the on-site energies, $(-1)^{m_\mu} \Delta_\mu / 2$, and the tunnelling matrix elements, $J_\mu + (-1)^{m_\mu} \delta J_\mu/2$, with $\mu \in \{x,y\}$, which depend on the respective superlattice phase $\varphi_\mu$.
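As an illustration (our sketch, not part of the original methods), this Hamiltonian can be built numerically for a small lattice: since it is separable, one Rice-Mele chain per axis combined via a Kronecker sum suffices. The parameter values below are placeholders; in the experiment $J_\mu$, $\delta J_\mu$ and $\Delta_\mu$ are set by the lattice depths and the superlattice phases $\varphi_\mu$.
\begin{verbatim}
import numpy as np

def rice_mele_1d(L, J, dJ, Delta):
    """Single-particle Rice-Mele chain of L sites with hopping
    J + (-1)^m dJ/2 and staggered on-site energy (-1)^m Delta/2."""
    H = np.zeros((L, L))
    for m in range(L - 1):
        H[m + 1, m] = H[m, m + 1] = -(J + (-1) ** m * dJ / 2.0)
    for m in range(L):
        H[m, m] = (-1) ** m * Delta / 2.0
    return H

# 2D superlattice Hamiltonian as a Kronecker sum of two 1D chains
# (illustrative parameter values in units of a reference energy).
Lx = Ly = 8
Hx = rice_mele_1d(Lx, J=1.0, dJ=0.6, Delta=2.0)
Hy = rice_mele_1d(Ly, J=0.8, dJ=0.4, Delta=1.5)
H2d = np.kron(Hx, np.eye(Ly)) + np.kron(np.eye(Lx), Hy)

E = np.linalg.eigvalsh(H2d)
print(E[:4])   # lowest single-particle energies of the 8 x 8 lattice
\end{verbatim}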
{\bf Initial state preparation for band-mapping measurements.}
For all sequences, a Mott insulator with quarter filling consisting of about 5000 $^{87}$Rb atoms is prepared in the lowest subband of the 2D superlattice. To this end, a Bose-Einstein condensate is loaded from a crossed dipole trap into the lattice by first ramping up the blue-detuned short lattices along $x$ and $y$ to $3.0(1) E\sub{r,s}$ during 50\,ms to lower the initial density of the atom cloud. Then these lattices are switched off again within 50\,ms while at the same time the vertical lattice as well as both long lattices are increased to $30(1) E\sub{r,z}$ and $30(1) E\sub{r,l}$, respectively, with $\varphi_x = 0.000(5) \pi$ and $\varphi_y = \varphi_y^{(0)}$. Subsequently, doubly-occupied lattice sites are converted to singly-occupied ones (see Supplementary Information), creating a Mott insulator with unit filling and a negligible fraction of doublons. Afterwards, each lattice site is split to a four-site plaquette by ramping up the short lattices along $x$ and $y$ to their final depth of $7.0(2) E\sub{r,s}$ and decreasing the long lattices to $20.0(6) E\sub{r,l}$ within 5\,ms.
{\bf Sequence for pumping.}
The superlattice phase can be controlled by slightly changing the frequency of the lasers used for generating the long lattices and thereby moving the relative position between the short and long lattice at the position of the atoms. The pumping along $x$ is performed by slowly changing $\varphi_x$, starting from the staggered configuration at $\varphi_x = 0.000(5) \pi$, where the energy difference between neighbouring sites $|\Delta_x|$ is largest and the tunnel couplings are equal, $\delta J_x = 0$. To minimize non-adiabatic transitions to higher bands, each pump cycle consists of three s-shaped ramps $\varphi_x \in \left[0,0.5\pi\right], \left[0.5\pi, 1.5 \pi \right]$ and $\left[1.5\pi,2\pi \right]$. This reduces the ramp speed in the vicinity of the symmetric double well configuration ($\Delta_x = 0$) at $\varphi_x = (l + 1/2)\pi$, $l \in \mathbb{Z}$, where the gap to the first excited band is smallest. The duration of the $\pi/2$ ramps is 7\,ms and 14\,ms for the ramp by $\pi$. Due to the limited tuning range of a single laser, a second laser is required for implementing multiple pump cycles, which is set to a constant phase of $\varphi_x = 0.000(5) \pi$. At the end of each cycle, an instantaneous switch from the primary laser to the second one is made and within 5\,ms the frequency of the former is ramped back to its initial value --- corresponding to an identical lattice configuration. After switching back to the first laser, the next cycle continues as described above. We checked experimentally that this handover between the two lasers does not create any measurable band excitations.
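A minimal sketch of such a pump-cycle schedule (our illustration; the precise ramp shape used in the experiment is only specified as s-shaped, so a smoothstep profile is assumed here):
\begin{verbatim}
import numpy as np

def s_ramp(u):
    """Smoothstep on [0, 1] with zero slope at both endpoints."""
    return 3.0 * u**2 - 2.0 * u**3

def pump_cycle_phase(t, T1=7.0, T2=14.0):
    """phi_x(t) for one cycle from three s-shaped ramps (times in ms):
    [0, pi/2] in T1, [pi/2, 3 pi/2] in T2, [3 pi/2, 2 pi] in T1.
    The zero-slope endpoints slow the ramp near phi_x = pi/2, 3 pi/2,
    where the gap to the first excited band is smallest."""
    t1, t2 = T1, T1 + T2
    if t < t1:
        return 0.5 * np.pi * s_ramp(t / T1)
    if t < t2:
        return 0.5 * np.pi + np.pi * s_ramp((t - t1) / T2)
    return 1.5 * np.pi + 0.5 * np.pi * s_ramp((t - t2) / T1)

ts = np.linspace(0.0, 28.0, 281)
phis = [pump_cycle_phase(t) for t in ts]
print(phis[0], phis[-1] / np.pi)   # 0.0 ... 2.0 (one cycle in 28 ms)
\end{verbatim}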
{\bf Measurement of the in-situ position.}
To determine the non-linear COM displacement along $y$, a double-differential measurement is conducted to minimize the effect of shot-to-shot fluctuations of the atom position. In order to do this, the COM position is measured before ($y\sub{i}$) and after the pumping ($y\sub{f}$) and compared to a reference sequence ($y\sub{i}^{(0)}$, $y\sub{f}^{(0)}$). In the latter, the pumping is performed with only the short lattice along $y$ (at $V\sub{s,y} = 40(1) E\sub{r,s}$) and therefore the non-linear response is zero. The initial position is obtained during the doublon removal sequence, where the atoms are initially prepared in the $(F=1, m_F = 0)$ hyperfine state and one atom from each doubly-occupied site is transferred to $(F=2, m_F = -2)$ using microwave-dressed spin-changing collisions (see Supplementary Information). Here, $F$ denotes the total angular momentum of the atoms. In addition, we transfer 50\% of the atoms on singly-occupied sites to the $F = 2$ manifold as well by applying a microwave $\pi$-pulse resonant on the $(F=1, m_F = 0) \rightarrow (F=2, m_F = 0)$ transition. The $F = 2$ atoms thus have the same density distribution as the remaining $F = 1$ atoms and are imaged prior to the push-out pulse, which removes them from the lattice. The motion of the atoms due to the non-linear response is then given by $\Delta y = (y\sub{f} - y\sub{i}) - (y\sub{f}^{(0)} - y\sub{i}^{(0)})$. The difference of the COM displacement along $y$ between $\theta_1$ and $\theta_2$ is defined as $\Delta r_y = \Delta y (\theta_1) - \Delta y (\theta_2)$. For the $x$-direction, it is obtained directly from $\Delta x = (x\sub{f} - x\sub{i}) - \delta \overline{x}$ without comparing it to the reference sequence. Here, $\delta \overline{x}$ is the average displacement of all data points for a given angle, accounting for a small constant offset between the measured initial and final positions.
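The double-differential evaluation itself is simple arithmetic; a sketch with made-up numbers purely for illustration:
\begin{verbatim}
def nonlinear_displacement(y_i, y_f, y_i_ref, y_f_ref):
    """Double-differential COM displacement along y: the pump sequence
    is referenced to a sequence with no non-linear response."""
    return (y_f - y_i) - (y_f_ref - y_i_ref)

# Illustrative positions in units of lattice sites (not measured data):
dy_theta1 = nonlinear_displacement(0.12, 0.58, 0.10, 0.15)
dy_theta2 = nonlinear_displacement(0.08, -0.31, 0.11, 0.13)
dr_y = dy_theta1 - dy_theta2   # differential response between two angles
print(dy_theta1, dy_theta2, dr_y)
\end{verbatim}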
{\bf Relation between centre-of-mass position and double well imbalance.}
If there are no inter-double-well transitions along $y$, the change in the double well imbalance $\delta \mathcal{I}_y = \mathcal{I}_y (\varphi_x) - \mathcal{I}_y (\varphi_x = 0)$ can be directly related to the COM motion along $y$. The COM position in the $y$-direction is given by $y\sub{COM} = d\sub{l}/N \, \sum_{ij} \left[ (j-1/4) N_{e,ij} + (j+1/4) N_{o,ij}\right]$, where the sum is over all unit cells, $N_{e,ij}$ ($N_{o,ij}$) is the occupation of the even (odd) sites along $y$ in the $(i,j)$-th unit cell and $N$ is the total atom number. Expressing this in terms of the total number of atoms on even and odd sites, $N_e = \sum_{ij} N_{e,ij}$ and $N_o = \sum_{ij} N_{o,ij}$, and assuming that there are no transitions between neighbouring unit cells along $y$, i.e.~$\sum_i N_{e,ij} + N_{o,ij}$ remains constant, the change in the COM position can be written as $\delta y\sub{COM} = y\sub{COM} (\varphi_x) - y\sub{COM} (\varphi_x = 0) = d\sub{l} \, \delta \mathcal{I}_y/4$. Note that this derivation implicitly assumes that the COM of the maximally-localized Wannier functions on the lattice sites along $y$ is independent of $\varphi_y$, which is a valid approximation deep in the tight-binding regime. Otherwise, the proportionality factor $d\sub{l}/2$ has to be replaced by the distance between the COM of the Wannier functions on the even and odd site of a double well.
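A quick numerical consistency check of this relation (our sketch; the atom numbers are random placeholders, and the imbalance convention $\mathcal{I}_y = (N_o - N_e)/N$ is the one implied by the formula above):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d_l = 1.0                            # long-lattice constant (arb. units)
n_cells = 50
N_e = rng.integers(0, 10, n_cells)   # even-site occupations per unit cell
N_o = rng.integers(0, 10, n_cells)   # odd-site occupations per unit cell
N = N_e.sum() + N_o.sum()
j = np.arange(n_cells)

def y_com(Ne, No):
    return d_l / N * ((j - 0.25) * Ne + (j + 0.25) * No).sum()

def imbalance(Ne, No):
    return (No.sum() - Ne.sum()) / N

# Move atoms from even to odd sites *within* unit cells only:
transfer = rng.integers(0, np.minimum(N_e, 5) + 1)
N_e2, N_o2 = N_e - transfer, N_o + transfer

lhs = y_com(N_e2, N_o2) - y_com(N_e, N_o)
rhs = d_l * (imbalance(N_e2, N_o2) - imbalance(N_e, N_o)) / 4.0
print(np.isclose(lhs, rhs))   # True: delta y_COM = d_l * delta I_y / 4
\end{verbatim}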
{\bf Model for double well imbalance including experimental imperfections.}
To determine the slope of the non-linear response from the band mapping data, we use a simple model that takes into account band excitations and double occupation of plaquettes and the experimental pumping efficiency of the linear response. The average double well imbalance $\mathcal{I}_y (\varphi_x)$ can be written as
\begin{equation*}
\label{eq:M1}
\mathcal{I}_y (\varphi_x) = n\sub{gs} \mathcal{I}_y^\mathrm{gs} (\varphi_y) + n\sub{exc} \mathcal{I}_y^\mathrm{exc} (\varphi_y) + n\sub{2} \mathcal{I}_y^\mathrm{2,gs} (\varphi_y)
\end{equation*}
where $n\sub{gs}$ ($n\sub{exc}$) is the fraction of atoms on singly-occupied plaquettes in the ground (first excited) state along $y$ and $n\sub{2}$ is the fraction of atoms on doubly-occupied plaquettes, which we assume to be in the ground state. These quantities can be determined experimentally at each point in the pumping sequence. $\mathcal{I}_y^\mathrm{gs}$, $\mathcal{I}_y^\mathrm{exc}$ and $\mathcal{I}_y^\mathrm{2,gs}$ denote the imbalance of the corresponding state, which is determined by the local phase of the $y$-superlattice at the position of the cloud along $x$, $\varphi_y (x\sub{COM})$, and can be calculated using the respective double well Hamiltonian (see Supplementary Information). The COM position in turn depends on the pump parameter $\varphi_x$ and includes corrections for the finite pumping efficiency, $x\sub{COM} (\varphi_x) = \mathrm{sgn}(\varphi_x) \sum_{i=1}^{|\varphi_x|/\pi} \left( 2 \beta_0 \beta^i - \beta \right)$ for $\varphi_x/\pi \in \mathbb{Z}$. Here, $\beta_0 = 0.980(4)$ is the initial ground state occupation along $x$ and $\beta = 0.986(2)$ is the pumping efficiency, given by the fraction of atoms that are transferred by one lattice site along $x$ during each half of a pump cycle. The main contributions limiting the pumping efficiency are band excitations in the pumping direction as well as non-adiabatic transitions between neighbouring double wells induced by the external harmonic confinement.
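A direct transcription of the COM expression above (our sketch), useful for tabulating the expected cloud position after an integer number of half cycles; $\beta_0$ and $\beta$ are the quoted experimental values:
\begin{verbatim}
import numpy as np

def x_com(phi_x_over_pi, beta0=0.980, beta=0.986):
    """COM position along x (in units of the short-lattice constant)
    after phi_x_over_pi half pump cycles, following
    x_COM = sgn(phi_x) sum_{i=1..|phi_x|/pi} (2 beta0 beta^i - beta)."""
    n_half = abs(int(phi_x_over_pi))
    i = np.arange(1, n_half + 1)
    return np.sign(phi_x_over_pi) * np.sum(2.0 * beta0 * beta**i - beta)

for n in range(1, 6):   # slightly less than n sites per n half cycles
    print(n, round(x_com(n), 3))
\end{verbatim}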
{\bf Fit function for non-linear response.}
Based on the above model, the experimental data is fitted with the function $\mathcal{I}_y (\varphi_x) + \mathcal{I}_0$ with $\varphi_y \rightarrow \varphi_y\supscr{exp} = \varphi_y^{(0)} + \alpha \, (\varphi_y - \varphi_y^{(0)})$. The two fit parameters are the prefactor $\alpha$, which describes the change of the superlattice phase along $y$ with $\varphi_x$ compared to the ideal case $\varphi_y\supscr{exp} = \varphi_y$, and an overall offset $\mathcal{I}_0$. The transport properties of the lowest band are encoded in the slope of the ground state imbalance at $\varphi_x = 0$. Knowing $\alpha$, it can be related to the ideal slope via
$$ \frac{\partial \mathcal{I}_y^\mathrm{gs}(\varphi_y\supscr{exp}) }{ \partial \varphi_x} = \frac{\partial \mathcal{I}_y^\mathrm{gs}(\varphi_y\supscr{exp})}{\partial \varphi_y\supscr{exp}} \, \frac{\partial \varphi_y\supscr{exp}}{\partial \varphi_x} = \alpha \, \frac{\partial \mathcal{I}_y^\mathrm{gs}(\varphi_y) }{ \partial \varphi_x}$$
Per cycle, this gives a change of the population imbalance for ground state atoms of
$$\delta \mathcal{I}_y^{\mathrm{gs}} = \alpha \left[ \mathcal{I}_y^\mathrm{gs}(\varphi_y)\Big|_{\varphi_x = 2\pi} - \mathcal{I}_y^\mathrm{gs}(\varphi_y)\Big|_{\varphi_x = 0} \right]$$
{\bf Determination of the second Chern number from scaling of the non-linear response with $\theta$.}
The COM displacement per cycle along $y$ for an infinite system, $\delta y\sub{COM} = \nu_2 \theta d\sub{l,x}$, scales linearly with the perturbing angle $\theta$. The second Chern number can thus be extracted from the slope of $\delta y\sub{COM} (\theta)$. Having confirmed that the measured shape of $\delta \mathcal{I}_y^\mathrm{gs} (\varphi_y^{(0)})$ is the same as expected theoretically, the response of an infinite system at a given angle $\theta$ can be inferred from a single measurement of $\delta \mathcal{I}_y^\mathrm{gs}$ at a fixed $\varphi_y^{(0)}$. This holds for all angles since the shape of $\overline{\Omega} (\varphi_y^{(0)})$ is independent of $\theta$. To obtain $\nu_2$, it is therefore sufficient to determine the slope of $\delta \mathcal{I}_y^\mathrm{gs}(\theta)$ at a constant $\varphi_y^{(0)}$.
\clearpage
\onecolumngrid
\clearpage
\begin{center}
\noindent\textbf{Supplementary Information for:}
\\\bigskip
\noindent\textbf{\large{Exploring 4D Quantum Hall Physics with a 2D Topological Charge Pump}}
\\\bigskip
Michael Lohse$^{1,2}$, Christian Schweizer$^{1,2}$, Hannah M. Price$^{3}$, Oded Zilberberg$^{4}$ \& Immanuel Bloch$^{1,2}$
\\\vspace{0.1cm}
\small{$^{1}$\emph{Fakult\"at f\"ur Physik, Ludwig-Maximilians-Universit\"at, Schellingstra\ss e 4, 80799 M\"unchen, Germany}}\\
\small{$^{2}$\emph{Max-Planck-Institut f\"ur Quantenoptik, Hans-Kopfermann-Stra\ss e 1, 85748 Garching, Germany}}\\
\small{$^{3}$\emph{INO-CNR BEC Center \& Dipartimento di Fisica, Universit\`a di Trento, Via Sommarive 14, 38123 Povo, Italy}}\\
\small{$^{4}$\emph{Institut f\"ur Theoretische Physik, ETH Z\"{u}rich, Wolfgang-Pauli-Stra\ss e 27, 8093 Z\"urich, Switzerland}}
\end{center}
\bigskip
\bigskip
\twocolumngrid
\normalsize
\renewcommand{\thefigure}{S\the\numexpr\arabic{figure}-10\relax}
\setcounter{figure}{10}
\renewcommand{\theequation}{S.\the\numexpr\arabic{equation}-10\relax}
\setcounter{equation}{10}
\renewcommand{\thesection}{S.\Roman{section}}
\setcounter{section}{10}
\renewcommand{\bibnumfmt}[1]{[S#1]}
\renewcommand{\citenumfont}[1]{S#1}
\section{Hall response of the 4D quantum Hall system}
Assuming perfect adiabaticity, the Hall response of the 4D system shown in Fig.~1a,~b can be evaluated from the semiclassical equations of motion for a wave packet centred at position $\mathbf{r}$ and quasimomentum $\mathbf{k}$ \cite{Xiao:2010_SI}:
\begin{gather}
\label{eq:S1}
\dot{r}^{\mu} = \frac{1}{\hbar} \frac{\partial \mathcal{E}(\mathbf{k})}{\partial k_{\mu}} + \dot{k}_{\nu} \, \Omega^{\nu \mu} (\mathbf{k})\\
\label{eq:S2}
\hbar \dot{k}_{\mu} = q E_\mu + q \dot{r}^\nu B_{\mu \nu}
\end{gather}
Here, $\mathcal{E}(\mathbf{k})$ is the energy of the respective eigenstate at $\mathbf{k}$, $q$ the charge of the particle and the Einstein notation is used for the spatial indices ${\mu, \nu} \in \{w,x,y,z\}$\footnote[1]{Note that the orientation of the axes in Fig. 1a,~b is chosen such that the 4D Levi-Civita symbol is $\varepsilon_{wxyz} = +1$}. The velocity of the wave packet $\mathbf{v} = \dot{\mathbf{r}}$ has two contributions: the group velocity arising from the dispersion of the band and the anomalous velocity due to the non-zero Berry curvature $\Omega^{\nu \mu } (\mathbf{k}) = i \left( \braket{\partial_{k_\nu} u }{\partial_{k_\mu} u} - \braket{\partial_{k_\mu} u }{\partial_{k_\nu} u} \right)$. For a filled or homogeneously populated band, the group velocity term averages to zero and, with $\mathbf{E} = E_z \mathbf{e}_z$ and $\mathbf{B} = 0$, the linear Hall response is given by the COM velocity
\begin{equation}
\label{eq:S3}
\mathbf{v}\sub{COM}^{(0)} = \frac{q}{h} A\sub{M}^{xz} E_z \nu_1^{zx} \, \mathbf{e}_x
\end{equation}
\looseness=-1 where $A\sub{M}^{xz}$ denotes the size of the magnetic unit cell and $\nu_1^{zx} = 1/(2\pi) \oint\sub{BZ} \Omega^{z x} \, \mathrm{d}^2 k$ the first Chern number of the 2D QH system in the $xz$-plane. The integration is performed over the 2D Brillouin zone spanned by $k_x$ and $k_z$.
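For completeness, the short computation behind \eq{eq:S3} (our addition): with $\mathbf{B}=0$ and $\mathbf{E} = E_z \mathbf{e}_z$, \eq{eq:S2} gives $\hbar \dot{k}_z = q E_z$ and $\dot{k}_\mu = 0$ otherwise, so \eq{eq:S1} reduces to $\dot{r}^x = \dot{k}_z \, \Omega^{zx} = (q E_z/\hbar) \, \Omega^{zx}$. Averaging over the homogeneously populated band replaces $\Omega^{zx}$ by its Brillouin-zone average $\left( A\sub{M}^{xz}/4\pi^2 \right) \oint\sub{BZ} \Omega^{zx} \, \mathrm{d}^2 k = A\sub{M}^{xz} \nu_1^{zx}/(2\pi)$, which yields $v\sub{COM}^{(0),x} = (q/h) \, A\sub{M}^{xz} E_z \nu_1^{zx}$, i.e.~\eq{eq:S3}.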
Adding the perturbing magnetic field $B_{xw}$ generates a Lorentz force acting on the moving cloud, $\hbar \dot{\mathbf{k}} = q E_z \, \mathbf{e}_z - q v_x^{(0)} B_{xw} \, \mathbf{e}_w$ \cite{Price:2015_SI}. Note that this additional force can alternatively be interpreted as arising from a Hall voltage in the $w$-direction that is created by the current along $x$ in the presence of $B_{xw}$. This force in turn induces an additional anomalous velocity along $y$, giving rise to the non-linear Hall response. The resulting average velocity is then
\begin{equation}
\label{eq:S4}
\mathbf{v}\sub{COM} = \frac{q}{h} A\sub{M}^{xz} E_z \nu_1^{zx} \, \mathbf{e}_x - \left(\frac{q}{h}\right)^2 A_M \, E_z B_{xw} \nu_2 \, \mathbf{e}_y
\end{equation}
with $A_M$ being the size of the 4D magnetic unit cell. The second Chern number is given by $\nu_2 = 1/(4 \pi^2) \oint\sub{BZ} \Omega^{xw} \Omega^{zy} + \Omega^{xy} \Omega^{wz} + \Omega^{zx} \Omega^{wy} \mathrm{d}^4 k$, where $\mathrm{BZ}$ denotes the 4D Brillouin zone.
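When the perturbation $B_{xw}$ vanishes, the 4D model of the following section separates into two independent 2D subsystems, the lowest band factorizes, and $\nu_2$ reduces to the product of the two first Chern numbers. As a numerical aside (our addition), such a first Chern number can be computed with the lattice algorithm of Fukui, Hatsugai and Suzuki; for illustration we use a plain Harper-Hofstadter model at flux $1/4$, where the lowest band is isolated (at the experimental flux $1/2$ the additional Hatsugai hoppings would be needed to open the gap):
\begin{verbatim}
import numpy as np

def hofstadter_H(kx, ky, p=1, q=4, t=1.0):
    """q x q Bloch Hamiltonian of the Hofstadter model at flux p/q on
    the magnetic Brillouin zone kx in [0, 2 pi/q), ky in [0, 2 pi)."""
    H = np.zeros((q, q), dtype=complex)
    for m in range(q):
        H[m, m] = -2.0 * t * np.cos(ky - 2.0 * np.pi * p * m / q)
    for m in range(q - 1):
        H[m, m + 1] = H[m + 1, m] = -t
    H[q - 1, 0] += -t * np.exp(-1j * q * kx)   # magnetic-cell wrap
    H[0, q - 1] += -t * np.exp(1j * q * kx)
    return H

def chern_fhs(H, band=0, n1=40, n2=40, q=4):
    """First Chern number of one band via Fukui-Hatsugai-Suzuki:
    sum of Berry phases of all plaquettes of a discretized BZ."""
    k1 = np.linspace(0.0, 2.0 * np.pi / q, n1, endpoint=False)
    k2 = np.linspace(0.0, 2.0 * np.pi, n2, endpoint=False)
    u = [[np.linalg.eigh(H(kx, ky))[1][:, band] for ky in k2]
         for kx in k1]
    c = 0.0
    for i in range(n1):
        for j in range(n2):
            u00, u10 = u[i][j], u[(i + 1) % n1][j]
            u11, u01 = u[(i + 1) % n1][(j + 1) % n2], u[i][(j + 1) % n2]
            c += np.angle(np.vdot(u00, u10) * np.vdot(u10, u11)
                          * np.vdot(u11, u01) * np.vdot(u01, u00))
    return c / (2.0 * np.pi)

nu1 = round(chern_fhs(hofstadter_H))      # +/- 1 for the lowest band
print("nu_1 =", nu1, " ->  nu_2 = nu_1^xz * nu_1^yw =", nu1 * nu1)
\end{verbatim}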
\section{Mapping of a 2D Topological Charge Pump to a 4D Quantum Hall System}
The Hamiltonian of a 2D topological charge pump for a given set of parameters $\{\varphi_x, \varphi_y\}$ can be interpreted as a Fourier component of a higher-dimensional quantum Hall system. Using the approach of dimensional extension \cite{Kraus:2012b_SI}, a 2D charge pump can be mapped onto a 4D QH system, whose Fourier components are sequentially sampled during a pump cycle. This is demonstrated in the following for the deep tight-binding limit $V\sub{s,\mu} \gg V\sub{l,\mu}^2/(4 E\sub{r,s})$, $\mu \in \{x,y\}$, where the corresponding 4D system consists of two 2D Harper-Hofstadter-Hatsugai models \cite{Harper:1955_SI, Azbel:1964_SI, Hofstadter:1976_SI, Hatsugai:1990_SI} in the $xz$- and $yw$-plane. A similar analogy can be made in the opposite limit of a vanishing short lattice, $V\sub{s,x} \rightarrow 0$ and $V\sub{s,y} \rightarrow 0$. In this case, each axis of the 2D lattice maps onto the Landau levels of a free particle in an external magnetic field in 2D \cite{Lohse:2016_SI}. For the lowest band, these two limiting cases are topologically equivalent, i.e. they are connected by a smooth crossover without closing the gap to the first excited band. The topological invariants governing the linear and non-linear response are thus independent of the depth of the short lattices.
For non-interacting atoms in the tight-binding limit, the motion in a 2D superlattice is captured by the following Hamiltonian:
\begin{align}
\label{eq:S5}
&\hat{H}_{2D}(\varphi_x, \varphi_y) = \\&- \sum_{m_x,m_y} \left[J_x(\varphi_x) + \delta J_x^{m_x}(\varphi_x) \right] \hat{a}^{\dagger}_{m_x+1,m_y} \hat{a}_{m_x,m_y} + \mathrm{h.c.} \nonumber \\
&- \sum_{m_x,m_y} \left[J_y(\varphi_y) + \delta J_y^{m_y}(\varphi_y) \right] \hat{a}^{\dagger}_{m_x,m_y+1} \hat{a}_{m_x,m_y} + \mathrm{h.c.} \nonumber \\
&+ \sum_{m_x,m_y} \left[\Delta_x^{m_x} (\varphi_x) + \Delta_y^{m_y} (\varphi_y)\right] \hat{a}^{\dagger}_{m_x,m_y} \hat{a}_{m_x,m_y} \nonumber
\end{align}
Here, $\hat{a}_{m_x,m_y}^{\dagger}$ $(\hat{a}_{m_x,m_y})$ are the creation (annihilation) operators acting on the ($m_x$,$m_y$)-th site in the $xy$-plane, $J_\mu + \delta J_\mu^{m_\mu}$ with $\mu \in \{x,y\}$ are the tunnelling matrix elements between neighbouring sites along $\mu$ and $\Delta_x^{m_x} + \Delta_y^{m_y}$ is the on-site potential of a given site. In the presence of the long lattices, the tunnel couplings as well as the on-site energies are modulated periodically by $\delta J^{m_{\mu}}_{\mu}$ and $\Delta_x^{m_x} + \Delta_y^{m_y}$, respectively. For the lattice configuration used in the experiment, where $d\sub{l,\mu} = 2 d\sub{s,\mu}$, these modifications can be expressed as $(-1)^{m_\mu} \delta J_{\mu} (\varphi_{\mu})/2$ and $(-1)^{m_\mu} \Delta_{\mu} (\varphi_{\mu})/2$.
In the deep tight-binding regime, $J_x$ and $J_y$ become independent of the superlattice phases and the modulations can be approximated as
\begin{gather}
\delta J_x^{m_x} (\varphi_x) = - \frac{\delta J_x^{(0)}}{2} \sin \left( \tilde{\Phi}_{xz} m_x - \varphi_x\right)\\
\delta J_y^{m_y} (\varphi_y) = - \frac{\delta J_y^{(0)}}{2} \sin \left( \tilde{\Phi}_{yw} m_y - \varphi_y\right)\\
\Delta_x^{m_x} (\varphi_x) = \frac{\Delta_x^{(0)}}{2} \sin \left( \tilde{\Phi}_{xz} (m_x-1/2) - \varphi_x\right)\\
\Delta_y^{m_y} (\varphi_y) = \frac{\Delta_y^{(0)}}{2} \sin \left( \tilde{\Phi}_{yw} (m_y-1/2) - \varphi_y\right)
\end{gather}
with $\tilde{\Phi}_{xz} = 2 \pi d\sub{s,x}/d\sub{l,x}$ and $\tilde{\Phi}_{yw} = 2 \pi d\sub{s,y}/d\sub{l,y}$. In this case, $\hat{H}_{2D}$ is equivalent to the generalized 2D Harper model \cite{Harper:1955_SI, Roux:2008_SI} which describes the Fourier components of a 4D lattice model with two uniform magnetic fields. The 4D parent Hamiltonian can be obtained by performing an inverse Fourier transform \cite{Kraus:2013_SI}
\begin{equation}
\label{eq:S10}
\hat{H}_{4D} = \frac{1}{4\pi^2}\int_0^{2\pi} \hat{H}_{2D} (\varphi_x, \varphi_y) \mathrm{d} \varphi_x \mathrm{d} \varphi_y^{(0)}
\end{equation}
with
\begin{gather}
\hat{a}^{\dagger}_{m_x,m_y} = \sum_{m_z,m_w} e^{i (\varphi_x m_z + \varphi_y^{(0)} m_w)} \hat{a}^{\dagger}_{\mathbf{m}} \\
\hat{a}_{m_x,m_y} = \sum_{m_z,m_w} e^{-i (\varphi_x m_z + \varphi_y^{(0)} m_w)} \hat{a}_{\mathbf{m}}
\end{gather}
where $\mathbf{m} = \{m_x, m_y, m_z, m_w\}$ indicates the position in the 4D lattice. This yields
\begin{equation}
\label{eq:S13}
\hat{H}_{4D} = \hat{H}_{xz} + \hat{H}_{yw} + \hat{H}_{\delta J}
\end{equation}
The first term $\hat{H}_{xz}$ describes a 2D Harper-Hofstadter model \cite{Harper:1955_SI, Azbel:1964_SI, Hofstadter:1976_SI} in the $xz$-plane with a uniform magnetic flux per unit cell $\Phi_{xz} = \Phi_0 \tilde{\Phi}_{xz}/(2\pi) = d\sub{s,x}/d\sub{l,x} \, \Phi_0$, where $\Phi_0$ denotes the magnetic flux quantum:
\begin{align}
\label{eq:S14}
\hat{H}_{xz} = &- \sum_{\mathbf{m}} J_x \hat{a}^{\dagger}_{\mathbf{m} + \mathbf{e}_x} \hat{a}_{\mathbf{m}} + \mathrm{h.c.} \\
&- \sum_{\mathbf{m}} \frac{\Delta_x^{(0)}}{4} e^{i \left[\tilde{\Phi}_{xz} (m_x - 1/2) + \pi/2 \right]} \hat{a}^{\dagger}_{\mathbf{m} + \mathbf{e}_z} \hat{a}_{\mathbf{m}} + \mathrm{h.c.} \nonumber
\end{align}
Correspondingly, the second term $\hat{H}_{yw}$ is an independent 2D Harper-Hofstadter model in the $yw$-plane with $\Phi_{yw}= d\sub{s,y}/d\sub{l,y} \, \Phi_0$. Due to the position dependence of the transverse superlattice phase $\varphi_y$, it also contains the magnetic perturbation, i.e.~a weak homogeneous magnetic field in the $xw$-plane:
\begin{align}
\label{eq:S15}
\hat{H}_{yw} &= \\ - &\sum_{\mathbf{m}} J_y \hat{a}^{\dagger}_{\mathbf{m} + \mathbf{e}_y} \hat{a}_{\mathbf{m}} + \mathrm{h.c.} \nonumber \\
- &\sum_{\mathbf{m}} \frac{\Delta_y^{(0)}}{4} e^{i \left[\tilde{\Phi}_{yw} (m_y - 1/2) + \tilde{\Phi}_{xw} m_x + \pi/2\right]} \hat{a}^{\dagger}_{\mathbf{m} + \mathbf{e}_w} \hat{a}_{\mathbf{m}} + \mathrm{h.c.} \nonumber
\end{align}
with $\tilde{\Phi}_{xw} = -2\pi \theta d\sub{s,x}/d\sub{l,y}$. The strength of the perturbing magnetic field is thus given by
\begin{equation}
\label{eq:S16}
B_{xw} = - \frac{\Phi_0}{d\sub{s,w} d\sub{l,y}} \, \theta
\end{equation}
where $d\sub{s,w}$ is the lattice spacing along $w$. For $\delta J_{\mu}^{(0)} \neq 0$, the third contribution $\hat{H}_{\delta J}$ leads to the appearance of additional diagonal tunnel coupling elements in the $xz$- and $yw$-plane with an amplitude of $\delta J_x^{(0)}/4$ and $\delta J_y^{(0)}/4$, respectively. The individual 2D models without the magnetic perturbation $B_{xw}$ then correspond to the Harper-Hofstadter-Hatsugai model \cite{Hatsugai:1990_SI} with a uniform magnetic flux $\Phi_{xz}$ and $\Phi_{yw}$, respectively, i.e.~the same flux as for $\delta J_{\mu}^{(0)} = 0$.
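For concreteness, the structure of $\hat{H}_{2D}$ can be illustrated with a short numerical sketch that assembles the modulated tight-binding matrix on a small open lattice. This is an illustrative sketch rather than the code used for the calculations in this work; the parameter values are placeholder assumptions, the tilt is set to zero so that $\varphi_y$ is homogeneous, and $\tilde{\Phi}_{xz} = \tilde{\Phi}_{yw} = \pi$ as realized for $d\sub{l,\mu} = 2 d\sub{s,\mu}$.
\begin{verbatim}
import numpy as np

Nx, Ny = 8, 8                  # small open lattice
Jx = Jy = 1.0                  # bare tunnelling (placeholder units)
dJx0, dJy0, Dx0, Dy0 = 0.5, 0.5, 2.0, 2.0
phix, phiy = 0.3, 0.7          # superlattice phases (assumed values)
Pxz = Pyw = np.pi              # 2*pi*d_s/d_l for d_l = 2*d_s

def idx(mx, my):               # flatten the 2D site index
    return mx * Ny + my

H = np.zeros((Nx * Ny, Nx * Ny))
for mx in range(Nx):
    for my in range(Ny):
        # on-site potential Delta_x^{m_x}(phi_x) + Delta_y^{m_y}(phi_y)
        H[idx(mx, my), idx(mx, my)] = (
            0.5 * Dx0 * np.sin(Pxz * (mx - 0.5) - phix)
            + 0.5 * Dy0 * np.sin(Pyw * (my - 0.5) - phiy))
        # modulated tunnelling J + dJ^m along x and y (open boundaries)
        if mx + 1 < Nx:
            t = Jx - 0.5 * dJx0 * np.sin(Pxz * mx - phix)
            H[idx(mx, my), idx(mx + 1, my)] = -t
            H[idx(mx + 1, my), idx(mx, my)] = -t
        if my + 1 < Ny:
            t = Jy - 0.5 * dJy0 * np.sin(Pyw * my - phiy)
            H[idx(mx, my), idx(mx, my + 1)] = -t
            H[idx(mx, my + 1), idx(mx, my)] = -t

E = np.linalg.eigvalsh(H)      # subband structure of the finite sample
\end{verbatim}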
\section{Transport properties of a 2D topological charge pump}
When the pump parameter $\varphi_x$ is changed slowly, a particle that is initially in an eigenstate $\ket{u (k_x, \varphi_x (t=0), k_y, \varphi_y)}$ of the 2D superlattice Hamiltonian $\hat{H}_{2D}$ [\eq{eq:S5}] will adiabatically follow the corresponding instantaneous eigenstate $\ket{u (k_x, \varphi_x (t), k_y, \varphi_y)}$. In the absence of a tilt, $\theta = 0$, the particle acquires an anomalous velocity $\Omega^x \partial_t \varphi_x \mathbf{e}_x$ during this evolution, analogous to the linear Hall response in a QH system. In this case, the Berry curvature $\Omega^x$ is defined in a 4D generalized Brillouin zone $(k_x, \varphi_x, k_y, \varphi_y)$:
\begin{equation}
\label{eq:S17}
\Omega^x (k_x, \varphi_x, k_y, \varphi_y) = i \left( \braket{\partial_{\varphi_x} u}{\partial_{k_x} u} - \braket{\partial_{k_x} u}{\partial_{\varphi_x} u}\right)
\end{equation}
For a homogeneously populated band, the COM displacement along $x$ during one cycle, obtained by integrating the average anomalous velocity over one period, can be expressed as an integral of the Berry curvature over the 2D generalized Brillouin zone spanned by $k_x$ and $\varphi_x$. It is thus determined by the pump's first Chern number
\begin{equation}
\label{eq:S18}
\nu_1^x = \frac{1}{2\pi} \oint \Omega^x \, \mathrm{d} k_x \mathrm{d} \varphi_x
\end{equation}
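To illustrate how $\nu_1^x$ can be evaluated in practice, the following sketch computes the Chern number of the lowest band of a two-band Rice-Mele model on a discrete $(k_x, \varphi_x)$ grid with the gauge-invariant plaquette method of Fukui, Hatsugai and Suzuki. The Bloch Hamiltonian written here is an assumed minimal form of the superlattice along $x$; the integer result is insensitive to these details as long as the pump path encircles the band degeneracy.
\begin{verbatim}
import numpy as np

J, dJ0, D0 = 1.0, 0.5, 2.0       # placeholder parameters

def h(k, phi):
    # assumed Rice-Mele Bloch Hamiltonian (two sites per unit cell)
    dJ = 0.5 * dJ0 * np.sin(phi)      # hopping dimerization
    D = 0.5 * D0 * np.cos(phi)        # staggered on-site offset
    off = -(J + dJ) - (J - dJ) * np.exp(-1j * k)
    return np.array([[D / 2, off], [np.conj(off), -D / 2]])

def u_gs(k, phi):                     # lowest-band Bloch vector
    return np.linalg.eigh(h(k, phi))[1][:, 0]

def link(a, b):                       # U(1) link variable
    z = np.vdot(a, b)
    return z / abs(z)

N = 40
ks = np.linspace(0, 2 * np.pi, N, endpoint=False)
ps = np.linspace(0, 2 * np.pi, N, endpoint=False)
u = np.array([[u_gs(k, p) for p in ps] for k in ks])

F = 0.0                               # lattice field strength, summed
for i in range(N):
    for j in range(N):
        F += np.angle(
            link(u[i, j], u[(i + 1) % N, j])
            * link(u[(i + 1) % N, j], u[(i + 1) % N, (j + 1) % N])
            / link(u[i, (j + 1) % N], u[(i + 1) % N, (j + 1) % N])
            / link(u[i, j], u[i, (j + 1) % N]))

print(round(F / (2 * np.pi)))         # +/- 1 (sign set by conventions)
\end{verbatim}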
When a tilt is present, $\theta \neq 0$, this motion along $x$ leads to a change in $\varphi_y$. This induces an additional anomalous velocity in the $y$-direction, giving rise to the non-linear response. Neglecting the contribution from the group velocity (which averages to zero for a homogeneously populated band), we obtain for a given eigenstate:
\begin{equation}
\label{eq:S19}
v_y (k_x, \varphi_x, k_y, \varphi_y) = \Omega^y \partial_t \varphi_y = \frac{2\pi}{d\sub{l,y}} \theta \, \Omega^x \Omega^y \partial_t \varphi_x
\end{equation}
The distribution of $\Omega^x \Omega^y$ in the 4D generalized Brillouin zone is shown in Fig.~1e for the lattice parameters used for the measurements in Fig.~3 and 4 of the main text. It exhibits a pronounced peak around $\varphi_x \in \{\pi/2, 3\pi/2\}$ and $\varphi_y \in \{\pi/2, 3\pi/2\}$.
For a small cloud that homogeneously populates a single band as in the experiment, the variation of $\Omega^x \Omega^y$ over the size of the cloud due to the position dependence of $\varphi_y$ is negligible for $\theta \ll 1$. The average velocity for the non-linear response can then be calculated by averaging \eq{eq:S19} over both quasimomenta $k_x$ and $k_y$. The COM displacement after a complete cycle can be determined by integrating the velocity over one period. We can thus express the change in the COM position per cycle as
\begin{equation}
\label{eq:S20}
\delta y\sub{COM} = \underbrace{\frac{1}{2\pi} \oint \Omega^x \Omega^y \, \mathrm{d} k_x \mathrm{d} k_y \mathrm{d} \varphi_x}_{\overline{\Omega} (\varphi_y)} \,\theta \, d\sub{l,x}
\end{equation}
If the number of pump cycles is small, the change of $\varphi_y$ as a result of the linear pumping response can be neglected and the non-linear displacement per cycle is very well approximated by $\delta y\sub{COM} \approx \overline{\Omega} (\varphi_y^{(0)}) \, \theta \, d\sub{l,x}$.
\looseness -1 The response of a large system with size $L_x \gg d\sub{l,y}/\theta$ can be obtained by averaging \eq{eq:S20} over $\varphi_y(x) \in [0,2\pi[$, yielding
\begin{equation}
\label{eq:S21}
\delta y\sub{COM} = \frac{1}{2\pi} \oint \overline{\Omega} (\varphi_y) \, \theta \, d\sub{l,x} \, \mathrm{d}\varphi_y = \nu_2 \, \theta \, d\sub{l,x}
\end{equation}
where the second Chern number $\nu_2$ is calculated by integrating $\Omega^x \Omega^y$ over the entire 4D generalized Brillouin zone:
\begin{equation}
\label{eq:S22}
\nu_2 = \frac{1}{4 \pi^2} \oint\sub{BZ} \Omega^{x} \Omega^{y} \, \mathrm{d}k_x \mathrm{d}k_y \mathrm{d}\varphi_x \mathrm{d}\varphi_y
\end{equation}
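As a consistency check, note that for $\theta = 0$ the Hamiltonian separates into independent superlattices along $x$ and $y$, so the Berry curvatures of the lowest band factorize as $\Omega^x = \Omega^x(k_x, \varphi_x)$ and $\Omega^y = \Omega^y(k_y, \varphi_y)$. The integral in \eq{eq:S22} then reduces to a product of the two first Chern numbers,
\[
\nu_2 = \left( \frac{1}{2\pi} \oint \Omega^x \, \mathrm{d}k_x \mathrm{d}\varphi_x \right) \left( \frac{1}{2\pi} \oint \Omega^y \, \mathrm{d}k_y \mathrm{d}\varphi_y \right) = \nu_1^x \, \nu_1^y,
\]
so for pump paths with $\nu_1^x = \nu_1^y = 1$ this gives $\nu_2 = 1$.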
\section{Pump Path}
Varying the pump parameter $\varphi_x$ periodically modulates the tight-binding parameters $\delta J_x (\varphi_x)$ and $\Delta_x (\varphi_x)$ describing the superlattice along $x$ [\eq{eq:S5}]. For $d\sub{l} = 2 d\sub{s}$, the modulation of $\delta J_x$ and $\Delta_x$ is out of phase and the system therefore evolves along a closed trajectory in the $\delta J_x$--$\Delta_x$ parameter space (\extfig{fig:S2}a). This pump path encircles the degeneracy point ($\delta J_x = 0$, $\Delta_x = 0$), where the two lowest subbands of the Rice-Mele model touch. This singularity can be interpreted as the source of the non-zero Berry curvature $\Omega^x$ in the generalized Brillouin zone, which gives rise to the linear pumping response. All pump paths that encircle the degeneracy can be continuously transformed into one another without closing the gap to the first excited subband and are thus topologically equivalent with respect to the linear response, i.e.~the value of $\nu_1^x$ does not change.
\begin{figure*}[t!]
\includegraphics[width=0.945\linewidth]{FigS2_final.pdf}
\caption{ \looseness -1
Pump cycle of the 2D topological charge pump. The 4D tight-binding parameter space ($\delta J_x$, $\Delta_x$, $\delta J_y$, $\Delta_y$) is visualized using the transformation of \eq{eq:S23}. \textbf{(a)} Changing the pump parameter $\varphi_x$ leads to a periodic modulation of $\delta J_x$ and $\Delta_x$ along a closed trajectory as shown in the inset for a full pump cycle $\varphi_x = 0 \rightarrow 2\pi$. This pump path (green) encircles the degeneracy point at the origin (grey), where the gap between the two lowest subbands of the Rice-Mele model closes. The surface in the main plot shows the same trace transformed according to \eq{eq:S23} and with $\varphi_y \in [0.46 \pi, 0.54\pi]$. The spacing of the mesh grid illustrating $\varphi_x$ is $\pi/10$. \textbf{(b)} For a given $\varphi_x$, a large system simultaneously samples all values of $\varphi_y$. This corresponds to a closed path in the $\delta J_y$-$\Delta_y$ parameter space where a singularity occurs at the origin as well (inset). The main plot shows the transformed path for $\varphi_x \in [0.46 \pi, 0.54\pi]$. \textbf{(c)} In a full pump cycle, such a system thus covers a closed surface in the 4D parameter space by translating the path shown in (b) along the trajectory from (a). \textbf{(d)} In the transformed parameter space, the singularities at ($\delta J_x = 0$, $\Delta_x = 0$) and ($\delta J_y = 0$, $\Delta_y = 0$) correspond to two planes that touch at the origin. \textbf{(e)} Cut around $r_3 = 0$ showing both the pump path from (c) (red/blue) as well as the singularities from (d) (grey). While they intersect in the 3D space $(r_1, r_2, r_3)$, the value of $r_4$ is different on both surfaces and the 4D pump path thus fully encloses the degeneracy planes.
\label{fig:S2}}
\end{figure*}
Similarly, the tight-binding parameters $\delta J_y$ and $\Delta_y$ depend on the phase of the transverse superlattice $\varphi_y$. For a large cloud, all possible values of $\varphi_y$ and thus $\delta J_y$ and $\Delta_y$ are sampled simultaneously (\extfig{fig:S2}b). During a pump cycle, the system therefore traces out a closed surface in the 4D parameter space of $\delta J_x$, $\Delta_x$, $\delta J_y$ and $\Delta_y$ (\extfig{fig:S2}c). In this parameter space, the two lowest subbands touch in the two planes ($\delta J_x = 0$, $\Delta_x = 0$) and ($\delta J_y = 0$, $\Delta_y = 0$), which intersect in a single point at the origin (\extfig{fig:S2}d). Analogous to the linear response, this degeneracy generates the non-zero Berry curvatures $\Omega^x$ and $\Omega^y$, which cause the non-linear motion in the $y$-direction. Due to the 4D character of the parameter space, the 4D pump path can enclose the degeneracy (\extfig{fig:S2}e). All pump cycles that enclose the degeneracy in this way are topologically equivalent, and the value of $\nu_2$ remains the same.
To visualize the pump path in the 4D parameter space in \extfig{fig:S2}, we apply the following transformation:
\begin{equation}
\label{eq:S23}
\left( \begin{array}{c} r_1 \\ r_2 \\ r_3 \\ r_4 \end{array} \right)
=
\frac{1}{4} \left( \begin{array}{cccc} 1 & 1 & -1 & -1 \\ 1 & 1 & 1 & 1 \\ 1 & -1 & -1 & 1 \\ 1 & -1 & 1 & -1 \end{array} \right)
\cdot
\left( \begin{array}{c} \delta J_x / \delta J_x^{(0)} \\ \Delta_x / \Delta_x^{(0)} \\ \delta J_y / \delta J_y^{(0)} \\ \Delta_y / \Delta_y^{(0)} \end{array} \right)
\end{equation}
where the tight-binding parameters are normalized by their respective maximum values. The degeneracy planes are then given by $r_1 = -r_2$, $r_3 = -r_4$ and $r_1 = r_2$, $r_3 = r_4$, respectively, i.e.~they become perpendicular planes in the $(r_1, r_2, r_3)$-space.
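The images of the degeneracy planes under this transformation can be checked directly; the following minimal sketch (illustrative only) applies \eq{eq:S23} to arbitrary points on the two planes:
\begin{verbatim}
import numpy as np

# transformation matrix of Eq. (S23); inputs are already normalized
M = 0.25 * np.array([[1,  1, -1, -1],
                     [1,  1,  1,  1],
                     [1, -1, -1,  1],
                     [1, -1,  1, -1]])

# point on the (dJ_x = 0, Delta_x = 0) plane, arbitrary (dJ_y, Delta_y)
r = M @ np.array([0.0, 0.0, 0.3, -0.8])
assert np.isclose(r[0], -r[1]) and np.isclose(r[2], -r[3])

# point on the (dJ_y = 0, Delta_y = 0) plane, arbitrary (dJ_x, Delta_x)
r = M @ np.array([0.5, 0.1, 0.0, 0.0])
assert np.isclose(r[0], r[1]) and np.isclose(r[2], r[3])
\end{verbatim}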
\section{Calculation of the double well imbalance along $y$}
The measurement of the population imbalance in the $y$-direction as a function of $\varphi_x$ for Fig. 3 and 4 of the main text is performed after an integer or half-integer number of pump cycles, i.e.~$\varphi_x = l \pi$, $l \in \mathbb{Z}$. At these points, the superlattice along $x$ is in the staggered configuration with the maximum energy offset $|\Delta_x| \gg J_x$ and $\delta J_x = 0$. The atoms are thus fully localized on either even or odd sites along $x$ for $\varphi_x = 2 l \pi$ or $\varphi_x = (2l + 1) \pi$, respectively. The four-site unit cell of the 2D superlattice therefore effectively reduces to a double well along $y$.
For singly-occupied double wells, the expected imbalance in the $y$-direction for atoms in the ground ($\mathcal{I}_y^{\mathrm{gs}}$) and first excited state ($\mathcal{I}_y^{\mathrm{exc}}$) can then be calculated from the single-particle double well Hamiltonian:
\begin{equation}
\label{eq:S23b}
\hat{H}\sub{DW}^{(1)} (\varphi_y)
=
\left( \begin{array}{cc} \Delta_y (\varphi_y)/2 & - J_y^0 (\varphi_y) \\ - J_y^0 (\varphi_y) & -\Delta_y (\varphi_y)/2 \end{array} \right)
\end{equation}
with $J_y^0 (\varphi_y) = J_y(\varphi_y) + \delta J_y (\varphi_y)/2$ and using the Fock basis for the atom on the even and odd site, $\ket{1,0}$ and $\ket{0,1}$, respectively.
Correspondingly, the imbalance for the ground state of a doubly-occupied double well ($\mathcal{I}_y^{\mathrm{2,gs}}$) can be determined using the two-particle double well Hamiltonian:
\begin{equation}
\label{eq:S23c}
\hat{H}\sub{DW}^{(2)} (\varphi_y)
=
\left( \begin{array}{ccc} U + \Delta_y & - \sqrt{2} J_y^0 & 0 \\ -\sqrt{2} J_y^0 & 0 & -\sqrt{2} J_y^0 \\ 0 & -\sqrt{2} J_y^0 & U - \Delta_y \end{array} \right)
\end{equation}
in the Fock basis $\{\ket{2,0},\ket{1,1},\ket{0,2} \}$. Here, $U$ denotes the on-site interaction energy for two atoms localized on the same lattice site.
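A minimal numerical sketch of these calculations is given below; it diagonalizes both Hamiltonians and evaluates the even-odd imbalance of the relevant eigenstates. The parameter values are placeholders, whereas in the experiment $\Delta_y$, $J_y^0$ and $U$ follow from the calibrated lattice depths and the phase $\varphi_y$.
\begin{verbatim}
import numpy as np

Dy, Jy0, U = 1.0, 0.4, 5.0        # illustrative values, same energy units

# single particle, basis {|1,0>, |0,1>}
H1 = np.array([[ Dy / 2, -Jy0],
               [-Jy0,    -Dy / 2]])
w1, v1 = np.linalg.eigh(H1)       # columns: ground, excited state
I_gs  = v1[0, 0]**2 - v1[1, 0]**2 # imbalance (n_even - n_odd)
I_exc = v1[0, 1]**2 - v1[1, 1]**2

# two particles, basis {|2,0>, |1,1>, |0,2>}
s2 = np.sqrt(2) * Jy0
H2 = np.array([[U + Dy, -s2,  0.0],
               [-s2,     0.0, -s2],
               [ 0.0,   -s2,  U - Dy]])
w2, v2 = np.linalg.eigh(H2)
g = v2[:, 0]                      # two-particle ground state
I2_gs = g[0]**2 - g[2]**2         # |2,0> -> +1, |1,1> -> 0, |0,2> -> -1
\end{verbatim}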
\section{Removal of doubly-occupied sites}
After preparing the Mott insulator with unit filling in the long lattices, sites containing two atoms are converted to singly-occupied ones using microwave-dressed spin-changing collisions \cite{Widera:2005_SI} and a resonant optical push-out pulse. For this, the lattice depths are increased to $V\sub{s,x} = 70(2) E\sub{r,s}$, $V\sub{l,x} = 30(1) E\sub{r,l}$, $V\sub{l,y} = 70(2) E\sub{r,l}$ and $V_{z} = 100(3) E\sub{r,z}$ in 5\,ms to maximize the on-site interaction energy. The atoms, which are initially in the $(F=1, m_F = -1)$ hyperfine state, are converted to $(F=1, m_F = 0)$ with an adiabatic radio-frequency transfer. By ramping a magnetic offset field in the presence of a microwave field, a Landau-Zener sweep is performed that adiabatically converts pairs of $m_F = 0$ atoms on the same lattice site to an $m_F = +1$ and an $m_F = -1$ atom via coherent spin-changing collisions. Subsequently, the $m_F = -1$ atoms are removed by an adiabatic microwave transfer to $(F=2, m_F = -2)$ followed by a resonant optical pulse after lowering the lattices to $V\sub{s,x} = 0 E\sub{r,s}$, $V\sub{l,x} = 30(1) E\sub{r,l}$, $V\sub{l,y} = 40(1) E\sub{r,l}$ and $V_{z} = 40(1) E\sub{r,z}$.
\section{Measurement of band excitations}
Band excitations in the $y$-direction are measured by adiabatically ramping the superlattice phase $\varphi_y^{(0)}$ from its initial value to $\pi/2 \pm 0.156(5)\pi$ and subsequently increasing the short lattice depth to $V\sub{s,y} = 40(1) E\sub{r,s}$. In this lattice configuration, ground state atoms on both singly- and doubly-occupied plaquettes are fully localized on the lower-lying site along $y$ due to the large double well tilt $\Delta_y$ and the suppression of tunnelling $J_y^0 \rightarrow 0$. Atoms in the excited band along $y$, on the other hand, localize on the higher-lying site and can be detected directly by measuring the resulting double well imbalance.
\section{Detection of doubly-occupied plaquettes}
The doublon fraction can be determined by taking advantage of the fact that two atoms in the same double well localize on the lower-lying site only at much larger double well tilts than a single atom due to the repulsive on-site interaction. For this, the double wells along $y$ are first merged to a single site by removing the short lattice and increasing the long lattice to $V\sub{l,y} = 30(1) E\sub{r,l}$ within 5\,ms. At the same time, the orthogonal lattice depths are ramped up to $V\sub{s,x} = 70(2) E\sub{r,s}$ and $V_{z} = 100(3) E\sub{r,z}$ to increase the interaction energy. After that, $\varphi_y^{(0)}$ is shifted adiabatically to either $0.474(5)\pi$ or $0.431(5)\pi$ and the sites are split into double wells again by ramping up the short lattice to $V\sub{s,y} = 40(1) E\sub{r,s}$. At $\varphi_y^{(0)} = 0.431\pi$, both single atoms and doublons are fully localized on the lower-lying site. At $\varphi_y^{(0)} = 0.474\pi$, on the other hand, single atoms are still very well localized on the lower site, but two atoms in the same double well localize on different sites due to the large interaction energy of $U/h = 5.4$\,kHz. By determining the site occupations for both phases, one can thus infer the doublon fraction from the difference in the even-odd imbalance between the two measurements.
\section{Alignment of the tilted superlattice}
Each optical lattice is created by retroreflecting a laser beam, which is focussed onto the atoms by a lens on either side of the cloud. For the superlattices, the incoming beams of the short and long lattice are overlapped with a dichroic mirror in front of the first lens. In order to control the tilt angle $\theta$ of the long lattice along $y$, a glass block is placed in the beam path prior to the overlapping. By rotating this glass block, a parallel displacement of the incoming beam can be induced, which is then converted into an angle $\theta$ relative to the short lattice beam at the first lens. The two beams intersect at the focus point of the lens, which corresponds to the position of the atom cloud. After passing through the second lens behind the cloud, both beams are retroreflected by the same mirror. The counterpropagating beams travel along the paths of the incoming beams, thereby creating the lattice potentials with the same relative angle $\theta$.
\section{Determination of the angle $\theta$}
When the long lattice in the $y$-direction is tilted by an angle $\theta$ with respect to the short lattice, the phase of the superlattice along $y$ depends on the position along $x$. This leads to a modification of the on-site potential, which for small angles can be approximated as a linear gradient along the $x$-axis, pointing in opposite directions on even and odd sites in $y$: $\Delta_y^{m_y} (\varphi_y) \approx \Delta_y^{m_y}(\varphi_y^{(0)}) + (-1)^{m_y} \delta \, m_x$. The strength of the gradient is given by $\delta = \pi d\sub{s}/d\sub{l} \, \partial \Delta_y / \partial \varphi_y \big|_{x=0} \, \theta$ for a given superlattice phase $\varphi_y^{(0)}$ and can thus be used to determine $\theta$. In order to do this in the experiment, a superfluid is prepared at $\mathbf{k} = 0$ in a 2D lattice with $V\sub{s,x} = 13.0(4) E\sub{r,s}$, $V\sub{s,y} = 20.0(6) E\sub{r,s}$ and $V\sub{l,y} = 70(2) E\sub{r,l}$. The superlattice phase $\varphi_y^{(0)}$ is set to either $0.344(5)\pi$ or $0.656(5)\pi$ such that the atoms are fully localized on even or odd sites along $y$, respectively. The Bloch oscillations induced by the gradient are probed by measuring the momentum distribution of the atoms after a variable hold time. The angle $\theta$ is then calculated from the average Bloch oscillation period of both phases to minimize the influence of additional residual gradients.
\section{Non-linear response versus lattice depth}
The technique for detecting the non-linear response with site-resolved band mapping introduced in the main text allows the slope to be determined accurately over a wide range of lattice parameters. To demonstrate this, we measure the slope of the non-linear response at $\varphi_y^{(0)} = 0.500(5)\pi$ and $\theta = 0.54(3)\, \mathrm{mrad}$ for various values of the transverse short lattice depth $V\sub{s,y}$ (\extfig{fig:S3}). As expected, the slope increases with larger depths as the band gap decreases and the Berry curvature $\Omega^y$ becomes more and more localized around $\varphi_y^{(0)} = (l + 1/2) \pi$ with $l \in \mathbb{Z}$.
At $V\sub{s,y} = 6.25 E\sub{r,s}$, the first and second excited subband along $y$ touch for $\varphi_y^{(0)} = l \pi$, leading to a topological transition where the signs of the first and second Chern number of the first excited subband change from $+1$ for $V\sub{s,y} < 6.25 E\sub{r,s}$ to $-1$ for $V\sub{s,y} > 6.25 E\sub{r,s}$. This corresponds to a transition between the Landau and Hofstadter regimes \cite{Lohse:2016_SI}. For the lowest band, the two regimes are topologically equivalent and the atoms thus move in the same direction. In both limits, the experimentally determined slope matches very well with the one expected in an ideal system. This nicely illustrates that the transport properties of the lowest band can be extracted correctly in both regimes, even in the presence of atoms in the first excited band.
\begin{figure}[t!]
\includegraphics{FigS3_final.pdf}
\caption{Slope of the non-linear response at $\varphi_y^{(0)} = 0.500(5)\pi$ and $\theta = 0.54(3)\, \mathrm{mrad}$ versus short lattice depth along $y$ with all other lattice parameters as in Fig.~3 and 4 of the main text. $J_y^{(0)} = J_y(\varphi_y^{(0)}) + \delta J_y (\varphi_y^{(0)})/2$ with $\varphi_y^{(0)} = \pi/2$ is the maximum intra-double-well tunnelling rate along $y$, which is calculated from the corresponding lattice depth. The solid line indicates the theoretically expected slope and the error bars show the fit error for the slope. The dashed line at $V\sub{s,y} = 6.25 E\sub{r,s}$ marks the point at which a topological transition occurs in the first excited subband along $y$, indicating the transition between the Landau regime for $V\sub{s,y} < 6.25 E\sub{r,s}$ and the Hofstadter regime for $V\sub{s,y} > 6.25 E\sub{r,s}$.
\label{fig:S3}}
\end{figure}
\section{Direct determination of the second Chern number}
The method for determining the second Chern number from the measurement of the non-linear response versus $\varphi_y^{(0)}$ presented in the main text relies on prior knowledge of the response of an ideal system, both for the ground state and the excited states. While this significantly improves the accuracy of $\nu_2^{\mathrm{exp}}$, the second Chern number can also be determined directly from the measured double well imbalance $\mathcal{I}_y (\varphi_x)$ without any additional information about the system. To this end, the average change of the imbalance per cycle for the entire cloud, $\delta \mathcal{I}_y (\varphi_y^{(0)})$, is obtained from a linear fit of the differential imbalance $\mathcal{I}_y (\varphi_x) - \mathcal{I}_y (-\varphi_x)$ for each value of $\varphi_y^{(0)}$. The influence of the excitations can be reduced by restricting the fitting region to a small number of pump cycles. The response of an infinite system is reconstructed by averaging $\delta \mathcal{I}_y (\varphi_y^{(0)})$ over $\varphi_y^{(0)}$ using linear interpolation between the data points. When taking into account all points with $\varphi_x/(2\pi) \leq 3$ as well as the finite pumping efficiency along $x$, this gives $\nu_2^{\mathrm{exp}} = 0.94(19)$ for the data from Fig.~3. Note that the linear interpolation for the discrete sampling used in Fig.~3c leads to a systematic shift of $\nu_2^{\mathrm{exp}}$ by +0.05.
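A minimal sketch of this analysis chain is shown below, assuming arrays \verb|phix| (sampled values of $\varphi_x$) and, for each $\varphi_y^{(0)}$, the measured imbalances \verb|Iy| $= \mathcal{I}_y(\varphi_x)$ and \verb|Iym| $= \mathcal{I}_y(-\varphi_x)$; all names are placeholders, and the final conversion of the averaged response into $\nu_2^{\mathrm{exp}}$ (geometry factor and pumping-efficiency correction) is omitted.
\begin{verbatim}
import numpy as np

def imbalance_slope(phix, Iy, Iym, max_cycles=3):
    # linear fit of the differential imbalance I_y(phix) - I_y(-phix),
    # restricted to a small number of pump cycles
    sel = phix / (2 * np.pi) <= max_cycles
    slope, _ = np.polyfit(phix[sel], (Iy - Iym)[sel], 1)
    return 2 * np.pi * slope        # average change per pump cycle

def averaged_response(phiy0, dIy):
    # reconstruct the infinite-system response: average over phiy0 in
    # [0, 2pi) with linear interpolation between the data points
    phi = np.append(phiy0, phiy0[0] + 2 * np.pi)   # periodic closure
    dI = np.append(dIy, dIy[0])
    return np.sum(0.5 * (dI[1:] + dI[:-1]) * np.diff(phi)) / (2 * np.pi)
\end{verbatim}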
\bibliographystyle{bosons}
\input{2D_Pumping_SI_Bibliography.bbl}
\end{document}
\section{Physical Description and Principles of Operation}
Figure~\ref{fig:diagram} illustrates our implementation of a FLUXCAP magnet. The
yoke of the FLUXCAP is a Neodymium Iron Boron (NIB) magnetic rod (diameter 0.5
inches, length 2 inches) \cite{NIB} as indicated by the blue rectangle in the figure. It is
magnetized uniformly along the rod diameter. The NIB
magnet is attached to a motor (brown hashed) which permits continuous rotation of the rod about
its long axis, and consequently, continuous rotation of the magnetization in the
plane perpendicular to the rotation axis. The magnet yoke is flanked on both sides by
soft pole pieces -- two low-carbon steel bars as indicated by the black dotted
regions (0.25~$\mathrm{in}^2$ square cross-section, length 6 inches). Two
threaded holes near the termination of the bars accommodate one-quarter inch
threaded steel rods, completing the pole pieces. The pole gap is adjusted by
threading the removable pole pieces (grey with black stripes) into and out of the threaded holes in the steel bars. Test devices, as indicated by
the green solid rectangle, are inserted in the gap between the pole pieces and a
commercial Gaussmeter is placed at the sample location
to monitor the field produced between the poles. This entire apparatus is lightweight, weighing less
than 10~kg.
The soft pole pieces capture the flux incident from the yoke and focus the
field lines across the relatively short air gap between the pole pieces.
As the NIB rod is rotated on its axis, the net flux captured into the pole pieces
from the yoke varies periodically. This rotation translates into a
nearly sinusoidally varying magnetic field between the two poles.
The operation of the FLUXCAP magnet depends upon the capture of magnetic flux
from a permanent magnet into two parallel steel bars placed on each side of the
magnet. Maximal flux transfer occurs when the magnetization of the diametrically
magnetized permanent magnet is directed toward the faces of the steel bars and
minimal flux is transferred when the magnetization is oriented perpendicular to
the faces of the bars. Thus, the permanent magnet is rotated by a motor
in order to vary the flux captured by the steel bars by varying the angle between
the magnetization direction and the steel bar faces.
We present a model for understanding the basic dependence of the flux in the
steel bars as a function of the magnetization direction of the permanent magnet.
Equation~\eqref{radialField} presents the field from an infinite uniformly
magnetized rod with diametric magnetization in cylindrical coordinates:
\begin{equation}
\textbf{B} = \left\{\begin{array}{rl}
B_{\mathrm max}( \widehat{\rho} \cos \phi -
\widehat{\phi} \sin \phi), &\ \rho < R \\ B_{\mathrm max}\left(\frac{\displaystyle
R}{\displaystyle \rho}\right)^2 (\widehat{\rho} \cos \phi + \widehat{\phi} \sin
\phi), &\ \rho > R. \end{array}\right.
\label{radialField}
\end{equation}
Here $B_{\mathrm max} = \mu_0M_s/2$,
where $M_s$ is the saturation magnetization of the permanent magnet. $\rho$ and
$\phi$ are the radial and angular cylindrical coordinates. $R$ is
the radius of the permanent magnet rod. The geometry of this arrangement is
further depicted in Fig.~\ref{fig:netFLUX}(a). While the real magnet is
finite in extent, we believe that this approximately captures the relevant
behavior of this system because the length of the magnetic rod is much larger
than the distance between the rod and the steel bars (approximately one-eighth of
an inch). We are ultimately interested in the field lines extending radially
outward and into the steel piece, whose magnitude is set by
$B_{\mathrm max}$, the maximum magnetic field at the surface of the magnet. This value can be
directly measured with a magnetic field sensor placed on the surface of the
magnet, which we have measured as 0.993~T. Having established an expression for the field, we proceed to a
description of the flux in the steel bars.
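As a side note, within this infinite-rod model the measured surface field fixes
the saturation magnetization of the NIB rod,
\[
M_s = \frac{2 B_{\mathrm max}}{\mu_0}
= \frac{2 \times 0.993~\mathrm{T}}{4\pi \times 10^{-7}~\mathrm{T\,m/A}}
\approx 1.6 \times 10^{6}~\mathrm{A/m},
\]
which should be regarded as an effective value since the finite rod length and
the sensor standoff are neglected.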
\begin{figure}[b]
\begin{center}
\includegraphics[width=3.375in,
keepaspectratio=True]
{Figure_2.png}
\end{center}
\caption{\label{fig:netFLUX} Distortion of the magnetic field lines from a
diametrically magnetized cylinder due to the proximity of a steel bar. The
red arrows in both (a) and (b) represent the radial magnetization direction.
The red dashed curves in (a) represent the magnetic field lines. The angle
$\theta_0$ in (b) is the angle between magnetization and the steel surface
normal. The red and green curves represent the magnetic field lines coming
out from and into the magnetic rod.}
\end{figure}
The proximity of the two parallel steel bars has a non-negligible effect on the
fields from the permanent magnet. We assume the magnetic field on the surface of
the permanent magnet is left unchanged, but that the magnetic field lines are
distorted in such a way that field lines on the right semicircular face of the
magnet terminate on the right steel bar and lines on the left semicircular face
terminate on the left steel bar, as depicted graphically in
Fig.~\ref{fig:netFLUX}(b). Therefore, we can estimate the total magnetic flux
into the right steel bar as the net magnetic flux exiting the right semicircular
face of the magnet. Equation~\eqref{fluxTOT} gives the total flux $\Phi$ as a function
of $\theta_0$, the angle between the magnetization and the steel surface normal:
\begin{equation}
\Phi = B_{max} \cdot 2R \ell \cdot \cos \theta_0,
\label{fluxTOT}
\end{equation}
where $\ell$ is the length of the magnet.
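For the rod used here ($R = 0.25$~in $= 6.35$~mm, $\ell = 2$~in $= 50.8$~mm)
and the measured $B_{\mathrm max} = 0.993$~T, the maximum captured flux
($\theta_0 = 0$) evaluates to
\[
\Phi_{\mathrm max} = B_{\mathrm max} \cdot 2R\ell
= 0.993~\mathrm{T} \times 0.0127~\mathrm{m} \times 0.0508~\mathrm{m}
\approx 6.4 \times 10^{-4}~\mathrm{Wb}.
\]
This estimate is used below to gauge the efficiency of the flux capture.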
Finally, the magnetic flux is directed toward two pole pieces extending into the
gap between the parallel steel bars. For sufficient distances between the
permanent magnet and the pole pieces, all of the field between the pole pieces is indirectly coupled
through the flux in the steel bars. The magnitude of this field is inversely
proportional to the surface area of the pole tips. It is also sensitive to the
pole gap and distance of the pole pieces from the magnet, both of which can
contribute to flux losses through leakage along the gap between the parallel bars and fringing at the poles. We present
an implementation of this method of flux capture and direction that exploits a
pole displacement and tip surface area that gives a fraction of a Tesla magnetic field
in a $\sim 1$ cm$^3$ volume. We also adjust the pole gap in order to control the peak field applied between
the poles, as discussed below.
\section{FLUXCAP Operation}
\subsection{Variable Amplitude and Precision Control}
The gap between two steel pole pieces can be adjusted to change the maximum applied field.
This varies the peak field amplitude of the alternating magnetic field when the magnet is rotated continuously.
Reducing the peak field amplitude may be useful in studying devices that have multiple magnetic layers,
some of which are not intended to be remagnetized.
Reducing the peak field amplitude may also be useful in applications where field
precision is of most importance. For example, the FLUXCAP
can be used to generate dc magnetic fields with the magnet positioned using a
stepper motor. The field precision is related to the minimum rotation that the
motor can produce and the maximum field amplitude. Each finite step from a stepper
motor (typically 0.9 degrees) corresponds to a change in field of just over one
percent of the peak amplitude. Therefore, larger pole separations may be
desirable to obtain finer control over magnetic fields.
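As a rough consistency check, assume the field between the poles varies
sinusoidally with the rotation angle. A 0.9 degree step (400 steps per
revolution) then gives
\[
\overline{|\Delta B|} = \frac{4 B_{pk}}{400} = 0.010\, B_{pk}, \qquad
|\Delta B|_{\mathrm max} \approx B_{pk} \sin(0.9^\circ) \approx 0.016\, B_{pk},
\]
i.e.~a mean step of one percent of the peak amplitude, with the largest steps
occurring near the zero crossings of the field.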
\begin{figure}[t]
\begin{center}
\includegraphics[width=3.375in,
keepaspectratio=True]
{Figure_3.png}
\end{center}
\caption{\label{fig:PeakAmplitude} Magnetic field between the pole pieces
as a function of time as the permanent magnet is rotated continuously at a rate of
400~rpm. The maximum applied field is seen to be a function of the gap spacing.}
\end{figure}
Figure~\ref{fig:PeakAmplitude} demonstrates the ability to adjust the maximum
field between the poles. We adjusted the 0.25 inch diameter threaded rods to create pole gaps in
0.125 inch increments between 0.125 and 0.5 inches and operated the FLUXCAP using
a 12~V battery-powered dc motor. As the permanent magnet was rotated continuously at a rate of 400~revolutions per minute,
field measurements were made using a commercial
Gaussmeter and then digitized at 48~kHz. We demonstrate control over the peak
field amplitude from 400~mT down to 180~mT.
The peak amplitude $B_{pk}$ at the pole gap of 0.125 inches allows us to estimate the efficiency by which
we capture the flux from our NIB yoke into the two pole pieces. Assuming an effective fringing area $A_f$ about 50\% larger
than the pole face, we compute the efficiency, $e = B_{pk}A_f / \Phi$, to be approximately 15\%. We estimate that 85\% of the
flux is being lost to leakage across the space between the steel bars. The FLUXCAP could be made more efficient by employing higher permeability materials and through optimizing the pole piece geometry.
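Written out explicitly (taking the $0.25~\mathrm{in}^2$ bar cross-section as
the pole face, so that $A_f \approx 1.5 \times 1.6 \times 10^{-4}~\mathrm{m}^2$,
and using the flux estimate from above), this is
\[
e = \frac{B_{pk} A_f}{\Phi}
\approx \frac{0.4~\mathrm{T} \times 2.4 \times 10^{-4}~\mathrm{m}^2}
{6.4 \times 10^{-4}~\mathrm{Wb}} \approx 0.15.
\]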
\subsection{Variable frequency and Field Ramping}
\begin{figure}[b]
\begin{center}
\includegraphics[width=3.375in,
keepaspectratio=True]
{Figure_4.png}
\end{center}
\caption{\label{fig:SPECTRA} FFT spectra of the alternating field between the
poles for various magnet rotation rates.}
\end{figure}
\begin{figure*}
\includegraphics[width=6.69in,
keepaspectratio=True]
{Figure_5.png}
\caption{\label{fig:Magnet} FLUXCAP apparatus in a testing configuration. The FLUXCAP motor and steel bars are bolted to aluminum tracks. The NIB yoke is encased in aluminum shells for coupling to the motor with a brass set screw and to two plastic ball bearings at the midpoint and the endpoint of the magnet. Adjustable pole pieces emerge from the steel bars where the test sample has been clamped to the aluminum track and a gaussmeter probe is attached behind the sample.
}
\end{figure*}
The FLUXCAP magnet can operate either as a stationary or
an alternating field magnetizing device. In the alternating field operation mode,
a dc motor generates continuous rotation of the NIB yoke which drives an
alternating magnetic field between the poles. By changing the frequency of
rotation, a variation of the field sweeping rate (frequency) can be achieved.
Figure~\ref{fig:SPECTRA} shows the Fast Fourier Transform spectra of the
alternating magnetic field between the poles of the FLUXCAP under different
rotation speeds of the motor. Measurements were taken under a 0.25 inch pole gap
using the same field acquisition methods described above. We demonstrate
alternating magnetic fields with frequencies ranging between 3~Hz and 7~Hz for
voltages ranging from 7~V to 12~V placed across the motor.
For the lower frequencies, the FFT spectra show wide sidebands due to a varying rotation rate of the permanent magnet. We used a 12~V/84~oz-in 37~mm dc motor~\cite{Pololu.DCMotor}, which uses 12~W of power (1~A or 20\% of its stall current) at 12~V. At lower voltages, the maximal output torque of the motor decreases and rotating the magnet away from the steel bars requires larger torque. For inputs below 7~V, this motor stalls. We use a higher torque stepper motor when slower field ramp rates are needed~\cite{Lin.Stepper}.
The frequency of the alternating magnetic field corresponds to an effective
linear ramping rate over $\pm$ 85 percent of the maximum amplitude field. For the
frequencies given here and the 0.3~T peak amplitude for a 0.25~inch pole
separation, the ramping rates vary from 10~T/s
down to 4~T/s for the lowest rotation frequency. These high field ramping rates
make the FLUXCAP an efficient rapid magnetizing device when used with a
continuously rotating motor. Slower variation of the field has been achieved with the use of
a high torque stepper motor, permitting field ramping rates to decrease by
several orders of magnitude. This could be relevant to studies of
thermally-activated magnetization reversal in which a magnet's coercivity is (typically
logarithmically)
sensitive to the sweeping rate
\cite{RefWorks:19,RefWorks:14,RefWorks:15,RefWorks:17,RefWorks:16,RefWorks:13}.
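The quoted ramp rates follow directly from the sinusoidal form of the field.
The short sketch below (an illustrative estimate, not a measurement) computes
the average slope across the $\pm 85$ percent window:
\begin{verbatim}
import numpy as np

def ramp_rate(B_pk, f):
    # chord slope of B(t) = B_pk*sin(2*pi*f*t) across +/- 0.85*B_pk
    t_window = 2 * np.arcsin(0.85) / (2 * np.pi * f)
    return 2 * 0.85 * B_pk / t_window          # in T/s

for f in (3.0, 7.0):
    print(f, ramp_rate(0.3, f))                # ~4.7 T/s and ~11 T/s
\end{verbatim}
which reproduces the quoted 4--10~T/s range to within the precision of this
estimate.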
\section{Application: Fast magnetizing of spin-valve nanopillars}
The FLUXCAP magnet can facilitate fast characterization of many magnetic devices
such as spin-valve nanopillars -- two-terminal magnetic devices composed of two
ferromagnetic layers separated by a thin non-magnetic layer \cite{RefWorks:20,RefWorks:14}. Typically a
spin-valve device exhibits two stable resistance states depending on the relative
magnetization orientation of the two magnetic layers from the Giant
Magnetoresistance (GMR) effect \cite{RefWorks:22,RefWorks:23,RefWorks:24,RefWorks:25}. The spin-valve state can
be toggled between high (antiparallel) and low (parallel) resistance by applied
magnetic fields. Characteristic of spin-valve nanopillars is the use of
ferromagnetic layers with different coercivities, such that one ferromagnet is typically
fixed (a reference layer) while the other ferromagnet (the free layer) can be switched
relative to the reference magnet. We can determine the relative orientation of
the layers by measuring the device resistance as a function of the applied
magnetic field. More critically, using the 7~Hz rotation rate of the FLUXCAP, we
can rapidly acquire MR hysteresis loops to measure the coercivity of the free
layer and the giant magnetoresistance of the spin-valve. The FLUXCAP could also be
incorporated into a probe setup for characterizing the properties of spin-valve and
magnetic tunnel junction (MTJ) devices.
\begin{figure}[b!]
\begin{center}
\includegraphics[width=3.375in,
keepaspectratio=True]
{Figure_6.png}
\end{center}
\caption{\label{fig:130loops} GMR signal versus field for 130 hysteresis loops
obtained in 20 seconds.}
\end{figure}
Figure~\ref{fig:Magnet} demonstrates a testing configuration for this apparatus.
A spin-valve nanopillar device is wire bonded to a coplanar waveguide board,
which in turn is soldered to end-launch coaxial jacks. The waveguide is
mounted rigidly to the outer aluminum rail of the apparatus such that the device
is centered between the two pole pieces. A commercial Gaussmeter probe is also
mounted on the aluminum rail and is attached to one of the pole pieces. A small
ac excitation current probes the differential resistance across the 300 $\times$
50~$\mathrm{nm}^2$ spin-valve nanopillar, whose physical properties have
been described in detail elsewhere~\cite{RefWorks:20}. The FLUXCAP is
configured with a one-quarter inch pole gap and 12~V power to run the motor at 7~Hz.
Figure~\ref{fig:130loops} shows over 100 hysteresis loops recorded from 20
seconds of operating the FLUXCAP. Sharp changes in the differential resistance
$R_{AC}$ indicate toggling of the magnetization of the free layer from ``up''
(anti-parallel) to ``down'' (parallel) relative to the reference layer. As
mentioned previously, the reference layer has a coercivity over 1~T, and is kept
fixed during these measurements. Due to the thermally activated nature of
magnetization reversal, a characteristic distribution of switching fields is
apparent in this ensemble of hysteresis loops. It is therefore effective to
consider an averaged hysteresis loop, such as the one depicted in
Fig~\ref{fig:HYST.avg}. From this figure, we estimate a coercivity of 150~mT and
a GMR ratio ($\Delta R/ R$) of 0.2\%, which is consistent
with similar devices \cite{RefWorks:14,RefWorks:26,RefWorks:27}.
\begin{figure}[b!]
\begin{center}
\includegraphics[width=3.375in,
keepaspectratio=True]
{Figure_7.png}
\end{center}
\caption{\label{fig:HYST.avg} Averaged magnetic hysteresis loop of a spin-valve
device.}
\end{figure}
\section{Conclusions}
We have demonstrated the operation of the FLUXCAP, a compact magnetizing device
based upon the capture and focusing of flux from a rotating permanent
magnet. This device can perform most of the same tasks as conventional
electromagnet based magnetizing devices in that it can synthesize static
and dynamic magnetic fields over a broad range of field values. Yet the FLUXCAP
immediately presents itself as an elegant alternative: it operates as an
ac magnetizing device requiring only a 12~V battery and a dc motor; its power consumption is marginal (12~W),
and it does not require water cooling. The pole pieces are modular -- it is straightforward to change the maximum
field applied by varying the pole gap or even substituting a threaded rod with a different bevel or chamfer. This permits easy modifications to the magnitude and homogeneity of the applied field with minor changes in the FLUXCAP design.
Furthermore, large field ramp rates are possible with FLUXCAP and
have been demonstrated for studying spin-valve nanopillar devices. This enables
statistical studies of thermally activated magnetization reversal, quick
resetting of magnetic devices and testing the dynamic response of magnetic field
sensors.
FLUXCAP magnets are versatile enough to function well in a variety of other
applications. The setup could be made UHV compatible
-- in fact, the permanent magnet and
motor could be placed outside of the UHV chamber and the flux coupled into the
chamber with the soft steel core. FLUXCAP magnets can clearly be used
for electronic transport measurements and could be integrated into probe stations. For example,
they could be used to add magnetic capabilities to a semiconductor tester.
By designing the shape of the pole pieces and choosing where to place the
sample, one can achieve different field directions using the
FLUXCAP. Finally, it is easy to imagine combining two or three such magnets to
generate a two-dimensional or even three-dimensional vector field for
sophisticated measurements.
\section*{Acknowledgments}
We appreciate Dr. St\'{e}phane Mangin of Nancy Universit\'{e} and Dr. Eric E. Fullerton of the University of California, San Diego for providing the spin-valve samples used in the characterization of magnetic properties used in this study. We would also like to acknowledge Dr. James Rantschler of Xavier University of Louisiana for fruitful discussions leading to the design of FLUXCAP. This research was supported by NSF Grant No. DMR-1006575.
\section{INTRODUCTION}\label{sec:intro}
Liger is a new integral field spectrograph (IFS) and imager in development for the W.M. Keck Observatory which will take advantage of the ongoing Keck All-Sky Precision Adaptive Optics (KAPA) upgrade. Liger\cite{Wright2019, Wright2022, Cosens2020, Wiley2020} will provide a number of improvements over existing AO-fed instruments, including larger fields of view, finer spectral resolution (up to $\rm R\sim8,000 - 10,000$), and wavelength coverage extending further to the blue ($\rm 0.84-2.45\,\mu m$). The Liger design draws from the heritage of two key sources: the imager component is custom designed for Liger but makes use of similar mechanisms to the Keck OSIRIS imager\cite{Larkin2006}, and the spectrograph is a clone of the design developed for the InfraRed Imaging Spectrograph (IRIS)\cite{Larkin2016, Larkin2020, Zhang2018} --- the planned first light instrument for the Thirty Meter Telescope. Like IRIS, Liger will have two spectrograph channels, a slicer and lenslet mode, which will share a common grating turret, three mirror anastigmat cameras, and detector. The Liger imager filter and pupil wheel mechanisms make use of similar gear, detent, and limit switch designs as OSIRIS, but with improvements to the number of filter slots and the presence of a dedicated pupil wheel at the pupil location\cite{Cosens2020}. For a full overview of Liger see Wright et al. (2019)\cite{Wright2019} and Wright et al. (\textit{this conference})\cite{Wright2022}.
Here we present the design of the assembly which will be used to mount and align the Liger imager detector as well as the pick-off mirrors which feed the two spectrograph modes. These two components require a common assembly to place the pick-off mirrors as close to the imager focal plane as possible for the best optical performance. Key requirements for the assembly are listed in Table \ref{tab:requirements}.
\begin{table}[h]
\centering
\caption{Key Requirements: Detector and Pick-off Mirrors}
\label{tab:requirements}
\begin{tabular}{|l|l|}
\hline
\textbf{Parameter} & \textbf{Value} \\
\hline \hline
Operating Temperature & $\rm 77\,K$ \\
\hline
Operating Pressure & $\rm 10^{-5}\,Torr$ \\
\hline
Shock Load & $\rm 4g$ ($+$ gravity) \\
\hline
Resonant Frequency to Avoid & $\rm8-80\,Hz$ \\
\hline
Focus Travel & $\rm3\,mm$ \\
\hline
Focus Offset Range & \multirow{2}{4em}{$\rm1\,mm$} \\
(pick-offs) & \\
\hline
Focus Accuracy & $\rm100\,\mu m$ \\
\hline
Tip-tilt Range & \multirow{2}{4em}{$\rm2^\circ$} \\
(detector) & \\
\hline
Tip-tilt Accuracy & \multirow{2}{4em}{$\rm0.25^\circ$} \\
(detector) & \\
\hline
Tip-tilt Accuracy & \multirow{2}{4em}{$\rm0.2^\circ$} \\
(pick-offs) & \\
\hline
\end{tabular}
\end{table}
The design of the detector-mirror assembly is outlined in Section \ref{sec:design}, including the adjustability (Sections \ref{sec:focus} \& \ref{sec:adjustment}) and baffling (Section \ref{sec:baffling}) necessary to achieve the instrument requirements. In Section \ref{sec:structural_analysis}, structural analysis is performed to verify performance of the assembly under the loads and frequencies specified in Table \ref{tab:requirements}.
\section{MECHANISM DESIGN}\label{sec:design}
The imager detector and the pick-off mirrors for the slicer and lenslet IFS are coupled to the same mounting assembly on the imager optical plate (see Figure \ref{fig:rendering}). The pick-off mirrors for the two spectrograph channels will be made from a single piece of Zerodur. The orientation of the detector and pick-offs will be semi-fixed to each other; the relative z-offset (into the beam) and rotation will be independently adjustable within a small range. The alignment of the assembly in the x-direction (parallel to the optical plate and perpendicular to the beam) and y-direction (height), as well as tip-tilt for both the detector and pick-off mirrors, is adjusted manually where the assembly mounts to the optical plate. An additional tip-tilt adjustment for the detector is built into the assembly, as well as a rotation adjustment for the pick-off mirrors. The optimal z-position will be determined by moving the assembly through a range of focus positions during alignment via a piezoelectric linear actuator. This will yield the optimal offset between the detector and pick-off mirrors, which can then be adjusted via set screws. Once optimal focus is determined between the detector and pick-off mirrors the assembly is locked.
\begin{figure}[h]
\centering
\gridline{\fig{Liger_imager_detector_5_30_2022_edit.png}{0.48\textwidth}{(a): rendering}
\fig{maren_3227.jpg}{0.45\textwidth}{(b): 3D print}}
\caption{(a): Rendering of the Liger imager detector and pick-off mirror mounting and focus stage with all exterior baffling removed. The directional axes used throughout this paper are shown on the left hand side. The y-direction (green) is used to denote the height above the optical plate; the z-direction (blue) represents the direction of the beam path; and the x-direction (red) is parallel to the optical plate and perpendicular to the beam. The pick-off mirrors (gold) for both the lenslet and slicer IFS are made from one piece of Zerodur and are attached to the detector mounting so the two mirrors are coupled in position. A baffle snout is included around three sides of the detector which extends $\rm12\,mm$ in front of the detector face to block scattered light. The pick-off baffle lowers onto the top edge of this baffle snout and also extends to the same distance. The detector ASIC is held below the detector mount connected via a flexible cable. Adjustability is built into the mounting assembly in the focus position as well as the tip-tilt of the detector and pick-off mirrors. The height (y-direction) and x-direction adjustment will be made at the base of the mount. (b): A full scale 3D printed model of the same assembly with the baffling included around the ASIC. This model was built to test the planned assembly procedure.}
\label{fig:rendering}
\end{figure}
\subsection{Focus Alignment}\label{sec:focus}
The mounting assembly is designed to allow the detector and pick-off mirrors to move through a $\rm3\,mm$ range of possible focus positions during the alignment process. This is accomplished with the inclusion of a flexure between the mounting assembly base and the mounting points for the detector and pick-off mirrors. The flexure is made of AISI 304 stainless steel sheet metal cut into a ``U'' shape that is bolted to the fixed base at the bottom and the mobile mount at the top. As the piezoelectric linear actuator pushes on the mount, the arms of the flexure bend and extend forward as shown in Figure \ref{fig:focus_positions}. To maintain planarity of the detector face throughout the travel range, two extension springs are located above the arms of the flexure connecting the fixed base to the mobile mount. The force from these springs pulls back on the top of the mount, preventing the detector face from tilting forward as the flexure bends. Two support brackets are included between the fixed base and the mobile mount. These supports have clearance slots that mate to threaded holes on the fixed base. The screws at this location will be kept loose during focus adjustment, after which the support brackets are secured to lock the assembly into place.
\begin{figure}[h]
\centering
\gridline{\fig{focus_nominal_position_edit2.png}{0.4\textwidth}{(a): neutral position}
\fig{focus_extended_position_edit2.png}{0.4\textwidth}{(b): extended position}}
\caption{Side view of the detector and pick-off mirror assembly showing the AISI 304 flexure (outlined in blue) component connecting the base with the rest of the mount as well as the extension springs which maintain planarity of the system throughout the range of possible focus positions. The image on the left shows the neutral position while the right image shows the flexure and mount at the end of the $\rm3\,mm$ travel range. During operation support brackets will be included on each side (loosely fastened) to further maintain planarity and lock the system at the optimal focus position. The brackets are excluded from these models to show the operation of the flexure.}
\label{fig:focus_positions}
\end{figure}
Both the detector and pick-off mirrors are coupled to each other as they are moved through the range of potential focus positions. There may be an offset between these best focus positions, in which case the mounting arm attaching the pick-off mirrors to the main assembly may be adjusted by $\rm \pm0.5\,mm$ in the z-direction with an accuracy of $\rm 100\,\mu m$.
It is important that there is no significant change in the tilt of the detector and pick-off mirrors while determining the optimal focus position. To check the planarity of the mounting block (and by extension the detector and pick-off mirrors) a simplified model is used in a static load simulation within the SolidWorks Simulation suite. The weight of the mounting assembly as well as remote masses for the detector and pick-off mirrors are included in the simulation as well as all bolts (with pre-tension) and spring parameters used in the design. The bottom of the mounting plate is designated as a fixture within the simulation. First, a baseline is determined by conducting a simulation with no force from the actuator to yield the displacement across the detector face under static load. Next a load is added at the location of the linear actuator to cause forward motion of the detector mounting. The two cases must result in a difference in the tip-tilt angles $\rm <0.2^\circ$ in order to meet the required tolerance. The displacement across the detector face mounting plate in both simulations are shown in Figure \ref{fig:planarity_sims}. As can be seen, there is a small change in the tilt of the detector mounting plane after the full $\rm3\,mm$ of linear motion. However, this change amounts to only $\rm 0.02^\circ$, significantly less than the $\rm 0.2^\circ$ tolerance.
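For reference, the quoted angles follow from the differential displacement
across the mounting-plate face; assuming the $\rm\sim50\,mm$ span between the
mounting points implied by Figure \ref{fig:planarity_sims}, a $\rm0.40\,mm$
difference corresponds to
\[
\alpha = \arctan \left( \frac{0.40~\mathrm{mm}}{50~\mathrm{mm}} \right)
\approx 0.46^\circ .
\]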
\begin{figure}[h]
\centering
\gridline{\fig{pre_load_displacement.png}{0.4\textwidth}{(a): neutral position}
\fig{20N_displacement.png}{0.4\textwidth}{(b): extended position}}
\caption{Displacement from SolidWorks Simulation of static loading on detector stage at the two extremes of the focus range from $\rm0\,mm$ (a) to $\rm3\,mm$ (b). The color scales are adjusted to highlight the small deviation from planarity across the face of the front plate where the detector will mount at the three clearance holes. In (a), there is a $\rm\sim0.4\,mm$ difference across the face of the plate, resulting in an angle of $\rm0.46^\circ$. At the end of the focus range shown in (b), there is a $\rm\sim0.42\,mm$ difference across the face of the plate, resulting in an angle of $\rm0.48^\circ$. The change in the angle between these two extremes of the focus range is only $\rm 0.02^\circ$, much less than the tolerance in this dimension.}
\label{fig:planarity_sims}
\end{figure}
\subsection{Positioning Adjustment}\label{sec:adjustment}
There are multiple adjustment points included in the detector assembly design which will allow fine-tuning during alignment. There are six degrees of freedom to the position of the pick-off mirrors and detector, although some have a limited range and/or require the use of shims at the base of the mount. Some of these adjustments cause position changes to both the detector and pick-off mirrors while others will only impact one. The separate adjustment of the detector and pick-off mirrors is particularly useful in cases where an offset is required (e.g. focus position) or when a tighter tolerance is required for one component than the other (e.g., tip-tilt).
First is the adjustment of the detector and pick-off distances in the z-direction (focus position). As discussed in the previous section, there is a flexure and actuator which allows both the detector and pick-off mirrors to be moved through the $\rm3\,mm$ focus range to find the optimal position. We have designed flexibility into the alignment procedure in case there is an offset in the optimal position for the detector and the pick-off mirrors. If this occurs there is $\rm\pm0.5\,mm$ over which the pick-off mirror focus position can be adjusted independently of the detector with an accuracy of $\rm100\,\mu m$ (see Figure \ref{fig:pickoff_offset}a).
\begin{figure}[h]
\centering
\gridline{\fig{pickoff_focus_adjustment.png}{0.4\textwidth}{(a): focus position}
\fig{pickoff_x_adjustment.png}{0.4\textwidth}{(b): x-position}}
\caption{Model views of where the pick-off mirror mounting attaches to the main assembly with baffling removed. Left: Side view; the blue arrow shows how the focus position of the pick-off mirrors can be adjusted independently of the detector using the highlighted set screws. This adjustment can be made over a range of $\rm\pm0.5\,mm$ with an accuracy of $\rm100\,\mu m$. Right: Front view; the blue arrow shows how the x-position of the pick-off mirrors can be adjusted independently of the detector over a range of $\rm\pm0.25\,mm$ with an accuracy of $\rm75\,\mu m$ using the highlighted set screws. The rotation of the pick-off mirrors can be independently adjusted at the mounting location of the pick-off frame.}
\label{fig:pickoff_offset}
\end{figure}
The height of the detector assembly may be adjusted by including shims between the optical plate and the mount. The pick-off mirrors may be raised further above the detector in increments of as little as $\rm100\,\mu m$ using a pair of set screws. The rotation of the pick-off mirrors about the z-axis may be adjusted independently of the detector with an accuracy of $\rm0.136^\circ$ by raising either side of the frame using these same set screws. The rotation of both components can be changed by inclusion of shims on one side of the mounting plate. The position of both the detector and pick-off mirrors in the x-direction can be adjusted by pushing the assembly along clearance slots at the base of the mount shown in Figure \ref{fig:rendering}. The pick-off mirror position can be independently adjusted in the x-direction by $\rm\pm0.25\,mm$ with an accuracy of $\rm75\,\mu m$ using the set screws on the side of the pick-off mounting frame (Figure \ref{fig:pickoff_offset}b).
The tip-tilt angle may be adjusted independently for the detector and pick-off mirrors. Set screws can push the feet of the A frames which hold the detector to the main mount, allowing for a range of $\rm\pm1.13^\circ$ of adjustability in tip and $\rm\pm0.57^\circ$ in tilt (Figure \ref{fig:TT_adjustability}). The $\rm0.25^\circ$ tolerance is met in both dimensions, with a quarter turn of the set screw giving a $\rm0.17^\circ$ change in tilt and a $\rm0.09^\circ$ change in tip. The tilt of the pick-off mirrors can be adjusted using the same set screws shown in Figure \ref{fig:pickoff_offset}a that are used to set the focus offset. By moving one side of the pick-off frame to a closer or further offset position, slight adjustments to the tilt of the mirrors can be made with an accuracy of $\rm0.13^\circ$. To adjust the tip angle of just the pick-off mirrors, the tip of both the mirrors and detector must be adjusted via shims between the mounting plates, while the detector can then be separately adjusted at the A frames to compensate. The accuracy for the adjustment of the tip-tilt angle is less than the tolerance ($\rm 0.2^\circ$ for the pick-off mirrors and $\rm 0.25^\circ$ for the detector) even when including the potential change in the tip angle at different focus positions.
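These resolutions can be related to the screw geometry with a simple estimate; the thread pitch and lever arm used here are assumed values for illustration, not the as-designed dimensions. A set screw of pitch $p$ advanced by a quarter turn displaces a foot by $p/4$, changing the angle by roughly $\arctan[(p/4)/L]$ for a lever arm $L$. For example, $p = \rm0.5\,mm$ and $L \approx \rm42\,mm$ give
\[
\Delta\alpha \approx \arctan \left( \frac{0.125~\mathrm{mm}}{42~\mathrm{mm}} \right) \approx 0.17^\circ ,
\]
consistent with the quoted tilt resolution.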
\begin{figure}[h]
\centering
\gridline{\fig{detector_tip_adjustment2.png}{0.38\textwidth}{(a): tip}
\fig{detector_tilt_adjustment2.png}{0.4\textwidth}{(b): tilt}}
\caption{Model view of the detector mounting assembly illustrating the adjustability in tip (left) and tilt (right) via set screws at the A frame feet. Left: pushing the top foot of the A frames on the sides will result in changing the tip in the direction of the orange arrow. Likewise, pushing the bottom foot will cause a change in direction following the blue arrow. The A frame on the other side of the detector should be pushed in the same way for adjustments to the tip. Right: pushing both the top and bottom feet of the A frame shown forward by the same amount will cause an adjustment to the tilt of the detector following the blue arrow. Conversely, pushing the top and bottom feet of the A frame on the opposite side of the detector will cause a tilt in the opposite direction.}
\label{fig:TT_adjustability}
\end{figure}
\subsection{Baffling}\label{sec:baffling}
Baffling is included directly around the detector and pick-off mirrors as well as a baffle around the entire mounting assembly for further reduction of stray light. A three-sided baffle slides over the detector, extending $\rm12\,mm$ in front of it. The top of this baffle is open due to the small clearance between the detector and the pick-off mirrors. Here, a sheet metal baffle made from shim stock is included that is coupled to the pick-off mirror wedge and lowered with it onto the assembly. This baffle rests on the top corners of the detector baffle and extends $\rm12\,mm$ past the front of the detector to prevent light from scattering off the spectrograph re-imaging optics. A small lip is folded over at each end of this baffle sheet to prevent reflections from the top and bottom corners of the pick-off mirror wedge. Two additional baffles made of shim stock will slide in from the sides with a fold to prevent reflections from the pick-off mirror edges. This internal baffling setup is shown in Figure \ref{fig:baffling}a, and the external baffling demonstrated in Figure \ref{fig:baffling}b.
\begin{figure}[h]
\centering
\gridline{\fig{internal_baffling.png}{0.4\textwidth}{(a): internal baffling}
\fig{external_baffling.png}{0.35\textwidth}{(b): external baffling}}
\caption{Model view showing the baffling around the detector and pick-off mirrors. Left: The three-sided baffle around the detector (light blue) extends $\rm\sim0.7\,mm$ above its edge ($\rm<0.1\,mm$ above the Teledyne detector package) in order to protect this critical component while the pick-off mirrors are lowered into position. The sheet metal baffle (green) attached to the pick-off frame along with the mirror rests on this top edge and provides a roof over the detector to prevent ghosting from the re-imaging optics located after the pick-off mirrors. The silver baffles on the sides fold over the edges of the pick-off mirrors to prevent reflections off the corners. All of these baffles will be painted black; the color shown in the model is for illustrative purposes only. Right: Baffling (shown as transparent black) is also included around the full assembly including a separate box around the ASIC which extends below the optical plate.}
\label{fig:baffling}
\end{figure}
\section{STRUCTURAL ANALYSIS}\label{sec:structural_analysis}
In order to determine how the assembly will respond to loading, a simplified model with only the mounting components is analyzed within the SOLIDWORKS Simulation suite. The baffling, detector, pick-off mirrors, and all fasteners are removed in this simplified model. The weights of the detector and pick-off mirrors are accounted for in the form of remote masses, and bolted joints are specified in the simulation with the fastener dimensions and pre-load. As in the simulations at each end of the focus range, the bottom of the mounting plate is designated as a fixture.
With only the static load due to gravity, the maximum stress is $\rm <2/3$ the yield strength of aluminum ($\rm S_y = 276\,MPa$). The areas of highest stress are located under bolted connections due to pre-loading, while elsewhere the stress is $\rm <1/2\,S_y$. The assembly is required to withstand an additional 4g of loading during shipping and/or earthquakes. With this load applied in the vertical direction (adding to the static load due to gravity), the maximum stress in the model is still $\rm <2/3\,S_y$, located underneath the bolted connections at the A frames (Figure \ref{fig:4g_load}). This stress is also present with only the static load due to gravity and is therefore likely due to compression from the bolt pre-load. Since the 4g load may be experienced during shipping, it may occur along any direction. With the shock load applied in the x- and z-directions, the maximum stress is again located under an A frame bolted connection, but is still $\rm <2/3\,S_y$. In all loading cases the stress outside of the bolted connections is $\rm <1/2\,S_y$.
\begin{figure}[h]
\centering
\gridline{\fig{shock_load_stress_v2.png}{0.4\textwidth}{}
\fig{shock_load_displacement_v2.png}{0.43\textwidth}{}}
\caption{Stress (left) and deflection (right) for simulation of the detector mounting under both static and an additional 4g shock load in the vertical direction (5g total). The maximum stress determined is $\rm <2/3\,S_y$ underneath bolted connections and $\rm <1/2\,S_y$ elsewhere with negligible deflections.}
\label{fig:4g_load}
\end{figure}
If the best focus position involves full extension of the flexure, there will be a higher stress in the steel flexure itself. As shown in Figure \ref{fig:4g_load}, the mount as a whole will not fail since the support brackets will hold it up, but we also want to ensure that the flexure is not damaged so that focus adjustments can be made later in the life of the instrument. To investigate the worst case scenario with the highest stress, the 4g shock load is simulated with a deformed flexure in the most extended configuration (with the optimal focus furthest from the starting estimate). This results in the stress throughout the flexure being well below the ultimate strength of AISI 304 steel ($\rm505\,MPa$) except at a single edge of the flexure. This is at the location of a modification to the deformed flexure model needed in order to create parallel surfaces for mating parts, which likely creates an artificial stress concentration that will not be present in the fabricated part. Outside of this modified edge of the flexure, the stress is below the ultimate strength throughout, and outside of the compressive stress under fasteners, the stress is also below yield.
A frequency analysis was run in SOLIDWORKS Simulation using the simplified model in order to estimate the resonant frequencies. The masses of the detector and pick-off mirrors are again added as remote loads offset from the mounting points, with the force of gravity included. The simulation was run searching for the first five frequency modes for the simplified model used in the load simulations as well as with the external baffling included (Tables \ref{tab:modes_nobaffle} and \ref{tab:modes_baffle}, respectively). As can be seen from the lower mass participation factors in Table \ref{tab:modes_baffle}, the external baffling largely does not participate in the modal response.
\begin{table}
\centering
\caption{Frequencies and mass participation for the first five modes of the imager detector and pick-off mounting without external baffling \label{tab:modes_nobaffle}}
\begin{tabular}{|c|c|c|c|c|}
\hline
Mode & Frequency (Hz) & \multicolumn{3}{c|}{Mass Participation (\%)} \\
& & X-direction & Y-direction & Z-direction \\
\hline \hline
1 & 345 & 0.2 & 28 & 14 \\
2 & 368 & 40 & 0.2 & 0 \\
3 & 480 & 0 & 0 & 2 \\
4 & 787 & 0 & 9 & 32 \\
5 & 804 & 0 & 0.3 & 0.8 \\
\hline
\end{tabular}
\end{table}
\begin{table}
\centering
\caption{Frequencies and mass participation for the first five modes of the imager detector and pick-off mounting with external baffling \label{tab:modes_baffle}}
\begin{tabular}{|c|c|c|c|c|}
\hline
Mode & Frequency (Hz) & \multicolumn{3}{c|}{Mass Participation (\%)} \\
& & X-direction & Y-direction & Z-direction \\
\hline \hline
1 & 331 & 0.1 & 20 & 8 \\
2 & 355 & 27 & 0.1 & 0 \\
3 & 483 & 0 & 0 & 0.8 \\
4 & 744 & 0 & 5 & 23 \\
5 & 850 & 0.8 & 0 & 0 \\
\hline
\end{tabular}
\end{table}
The instrument requirements list frequencies that can be expected in the range of $\rm10 - 40\,Hz$ during shipping and $\rm8-80\,Hz$ during operation, so the lowest resonant frequency of the mechanism should be $\rm >80\,Hz$. The frequencies determined in the SOLIDWORKS simulations lie well above this range.
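A quick single-degree-of-freedom estimate (a back-of-the-envelope check, not part of the SOLIDWORKS analysis) shows how benign the margin is: with the lowest simulated mode at $\rm331\,Hz$ and worst-case excitation at $\rm80\,Hz$, the undamped dynamic amplification is only a few percent.
\begin{verbatim}
f_exc = 80.0   # highest expected excitation frequency (Hz)
f_n = 331.0    # lowest simulated mode, with baffling (Hz)
r = f_exc / f_n
# Undamped single-DOF dynamic amplification factor 1/(1 - r^2)
amplification = 1.0 / (1.0 - r**2)
print(r, amplification)  # r ~ 0.24, amplification ~ 1.06
\end{verbatim}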
\section{SUMMARY}\label{sec:summary}
A single mounting assembly and housing will be used for the Liger imager detector and the IFS pick-off mirrors. This assembly will hold these optics $\rm<1\,mm$ away from each other while allowing adjustability in multiple axes.
The detector and pick-off mirrors can be adjusted both together and individually, although with a smaller range of individual adjustment. Baffling is included around the individual optics as well as the full assembly to protect from scattered light and ghosting off the surfaces of other optics.
Analysis was carried out on the imager detector and IFS pick-off mirror mounting assembly to verify that the design meets the requirements. The planarity of the mounting points was evaluated throughout the range of focus positions by simulating the mechanism's reaction to a force at the location of the linear actuator. The deviation from planarity across the $\rm3\,mm$ range of travel was $\rm0.02^\circ$, well within the $\rm0.25^\circ$ tolerance. Next, the strength of the mount was evaluated under both the static load and an additional 4g shock load. In both cases the maximum stress determined in the simulations is under bolt heads and is not expected to lead to failure. Modal frequencies were also determined using the SOLIDWORKS simulation tools, with all frequencies falling well above the $\rm8-80\,Hz$ range the system is expected to be subjected to.
The detector and pick-off mirror mount will be assembled and alignment performed in a custom cryogenic vacuum chamber designed for use with the Liger imager, which will operate at a temperature below $\rm77 \, K$ and a vacuum pressure of $\rm 10^{-5} \, Torr$\cite{Wiley2020}. The unique design of Liger allows simultaneous imaging and spectroscopy, which both improves the performance of the instrument and provides useful science benefits. For example, simultaneous imaging in crowded field observations (e.g., globular clusters or the galactic center) can provide accurate astrometry and real time measurements of the telescope and instrument point-spread function. Having the light for the spectrograph modes first pass through the imager provides benefits such as improved background masking at the larger pupil located in the imager\cite{Cosens2020}. It also allows for improved AO correction in the imager without sacrificing the IFS performance. The design of the detector and IFS pick-off mirrors is critical to maintaining this performance. The closer the pick-off mirrors are to the detector, the lower the wavefront error is for both, since the wavefront error is lowest at the center of the field and degrades with increasing radius.
\acknowledgments
The Liger instrumentation program is supported by the Heising-Simons Foundation, the Gordon and Betty Moore Foundation, University of California Observatories, and W. M. Keck Observatory.
\section*{Introduction}
Machine learning (ML) correction methods aim to elevate the level of accuracy of ML properties, for example potential energy surfaces (PESs). There are two approaches currently being investigated to accomplish this goal. One is transfer learning, which has been developed extensively in the context of artificial neural networks,\cite{TL_ieee} and much of the work in that field has been brought into chemistry, especially in the development of PESs.\cite{roit19, meuwly20, TL_2020} For example, Meuwly and co-workers applied transfer learning using thousands of local CCSD(T) energies to improve their MP2-based neural network PESs for malonaldehyde, acetoacetaldehyde and acetylacetone.\cite{meuwly20}
The basic idea of transfer learning is that a fit obtained from one source of data (perhaps a large one) can be fine-tuned for a related problem by using limited data. Therefore, in the present context of PES fitting, an ML-PES trained with low-level electronic energies/gradients can be reused as the starting point of the model for an ML-PES with the accuracy of a high-level electronic structure theory. As noted, this is typically done with artificial neural networks, where weights and biases trained on lower-level data hopefully require minor changes in response to additional training using high-level data.
The other approach is $\Delta$-machine learning. In this approach a correction is made to a property obtained using an efficient, low-level \textit{ab initio} theory.\cite{Lilienfeld15, Stohr2020, Tuckerman_ML_2020, Csanyi_DeltaML}
We recently proposed and tested a $\Delta$-ML approach that uses a small number of CCSD(T) energies to correct a low-level PES based on DFT electronic energies and gradients.\cite{NandiDeltaML2021, QuDelteMLAcAc2021} The equation for this approach is simply
\begin{equation}
V_{LL{\rightarrow}CC}=V_{LL}+\Delta{V_{CC-LL}},
\end{equation}
where $V_{LL{\rightarrow}CC}$ is the corrected PES, $V_{LL}$ is a PES fit to low-level electronic data (such as DFT energies and gradients), and $\Delta{V_{CC-LL}}$ is the correction based on high-level coupled cluster energies. It is noted that the difference between CCSD(T) and DFT energies, $\Delta{V_{CC-LL}}$, is usually not as strongly varying as $V_{LL}$ with respect to the nuclear configurations and therefore just a small number of high-level electronic energies are adequate to fit the correction PES. The method was validated for PESs of small molecules, \ce{CH4} and \ce{H3O+}, 12-atom $N$-methyl acetamide, and 15-atom acetylacetone. In all cases, the coupled cluster energies were obtained over the same large span of configurations used to get the lower-level PES.
Here we propose to extend this $\Delta$-ML approach from molecular PESs to general, non-reactive force fields that are explicitly or implicitly many-body. There are many examples of such force fields that determine the total energy of $N$ monomers. For example, consider force fields for water. (For a recent review see ref \citenum{mbreview}.) For simplicity, we denote these by ``MB-FF''. Suppose our goal is to bring this force field to the ``gold-standard'' CCSD(T) level of theory. Clearly this cannot be done by simply applying the above equation for an arbitrary number of monomers, owing to the prohibitive computational cost of CCSD(T) calculations, which scales steeply (canonically as the seventh power of system size) with the number of monomers. Instead we propose a $\Delta$-ML force-field for $N$ monomers given by the sum of many-body corrections, namely
\begin{equation}
V_\text{$\Delta$-ML} = V_\text{MB-FF} + \sum_{i>j}^N\Delta{V_{2-b}(i,j)}+\sum_{i>j>k}^N\Delta{V_{3-b}}(i,j,k)+\sum_{i>j>k>l}^N\Delta{V_{4-b}}(i,j,k,l) + \cdots,
\end{equation}
where $\Delta V_{n-b}$ are the many-body corrections to the MB-FF many-body terms, given by the difference between CCSD(T) and MB-FF $n$-body ($n$-b) interaction energies. To be clear, recall that the $n$-b interaction energy is obtained from a cluster of $n$ monomers. For example, the 4-b interaction is obtained by calculating the total energy of the tetramer (four monomers) and subtracting all the 1-, 2-, and 3-b interactions from the total energy. Note that, for simplicity, we assume that an accurate 1-b term, e.g., for the single water molecule, is given in the MB-FF.
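To make the structure of the equation above concrete, the following minimal Python sketch assembles the corrected energy from a base force field and fitted $n$-body corrections. The callables \texttt{v\_mbff}, \texttt{dv2}, \texttt{dv3}, and \texttt{dv4} are hypothetical placeholders standing in for the MB-FF evaluation and the fitted correction PESs.
\begin{verbatim}
from itertools import combinations

def delta_ml_energy(monomers, v_mbff, dv2, dv3, dv4):
    # Base force-field energy for the full cluster of monomers,
    # plus fitted 2-, 3-, and 4-body corrections over all subsets.
    e = v_mbff(monomers)
    for pair in combinations(monomers, 2):
        e += dv2(*pair)
    for triple in combinations(monomers, 3):
        e += dv3(*triple)
    for quad in combinations(monomers, 4):
        e += dv4(*quad)
    return e
\end{verbatim}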
We have truncated the explicit correction terms at the 4-b level with force fields for water in mind. This is because it has been established by high-level calculations that 4-b interactions, while small, are needed to obtain nearly 100 percent of the electronic dissociation energies of water clusters up to the 21-mer.\cite{MBE20} This has also been shown previously for some isomers of the water hexamer\cite{KShexamer, mbpoltests} and we show this again explicitly here. And crucially, we have the CCSD(T) electronic energies needed for the correction up to 4-b; these datasets were recently used to develop the pure many-body water potential q-AQUA,\cite{4bjpcl21, q_AQUA} which reads
\begin{equation}
V_\text{q-AQUA} = \sum_{i=1}^NV_{1-b}(i) + \sum_{i>j}^NV_{2-b}(i,j) + \sum_{i>j>k}^NV_{3-b}(i,j,k) + \sum_{i>j>k>l}^NV_{4-b}(i,j,k,l),
\end{equation}
where the meaning of each term is clear. In q-AQUA the 2-, 3-, and 4-b interactions are permutationally invariant polynomial (PIP) \cite{braams09, Xie10} fits to thousands of CCSD(T) energies (details are in ref. \citenum{q_AQUA}). These CCSD(T) electronic energies for the 2-b, 3-b and 4-b are ready to be used to obtain a $\Delta V_{n-b}$ correction to any water MB-FF, and we return to this below.
In this work, the focus is on the 4-b correction to the MB-pol force field.\cite{mbpol2b, mbpol3b} In MB-pol the 2-b and 3-b are already at CCSD(T) level, but the 4-b interaction is essentially given by the TTM4-F potential,\cite{TTM4} which is a sophisticated MB-FF for water. Errors between 0.1 and 0.84 kcal/mol in these 4-b interactions for the hexamer isomers, relative to direct CCSD(T) calculations, were reported in 2015.\cite{medders15} These are fairly large fractions of the 4-b energy itself. Stimulated by recent assessments of the importance of the 4-b interaction and the inaccuracy of the MB-pol 4-b, we report a correction 4-b PES, denoted $\Delta V_{4-b}$, that is aimed directly at extending the MB-pol potential to the CCSD(T) 4-b level. The correction is a PIP fit to the energy difference between the CCSD(T) 4-b interactions and the TTM4-F 4-b interactions.
In the next section we present the details of the 4-b correction PES, followed by several tests that demonstrate its effectiveness.
\section*{$\Delta V_{4-b}$ Fitting Details}
The data set for the $\Delta V_{4-b}$ fit is simply the difference between the 4-b CCSD(T)-F12/haTZ (aug-cc-pVTZ basis for O atoms and cc-pVTZ for H atoms) and MB-pol/TTM4-F energies. The total number of configurations used for this fit is 3695. The fit uses the PIP approach, in which the PIPs are generated using MSA software.\cite{Xie10, msachen} The PIPs are usually polynomials in the Morse variables of the internuclear distances, $\exp(-r_{ij}/\lambda)$, where $r_{ij}$ is the distance between atoms $i$ and $j$ and $\lambda$ is a range parameter, taken for this calculation to be 1.5 bohr. We used 22221111 permutational symmetry at a maximum polynomial order of 3. Two additional issues concerning the basis set are important to note.
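Before turning to these two issues, we note that the Morse variables themselves are trivial to evaluate; a minimal sketch follows (for a 12-atom tetramer there are 66 internuclear distances).
\begin{verbatim}
import numpy as np
from itertools import combinations

def morse_variables(coords, lam=1.5):
    # coords: (natoms, 3) Cartesian coordinates in bohr
    # lam: range parameter in bohr (1.5 here, as in the text)
    pairs = combinations(range(len(coords)), 2)
    r = np.array([np.linalg.norm(coords[i] - coords[j])
                  for i, j in pairs])
    return np.exp(-r / lam)
\end{verbatim}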
First, it is desirable not to include polynomials that do not have the correct limiting behavior as one or more monomers are removed from the others to a large distance. In the 4-body case, we need to consider the removal of each monomer from the other three and the removal of each possible dimer from the remaining one. In all of these cases, the 4-body interaction energy must vanish. The process of identifying PIPs that do not have the correct limiting behavior is what we call purification.\cite{purified13, purified14} To identify the PIPs with incorrect limiting behavior, the relevant distances are augmented by 100 \AA, and we accept a polynomial as having the correct behavior if its value, evaluated in the Morse variables, is below $10^{-6}$. We cannot, however, immediately eliminate these polynomials, because there may be other polynomials that, for example, are composed of products between one with a correct limit and one with an incorrect limit. At first, we simply rename the ones with an incorrect limit. After all the polynomials have been evaluated, we examine the definitions of all those with the correct limits and determine which of the monomials and which of the renamed polynomials with incorrect limits are required to calculate them. Finally, we remove those polynomials that are not required and renumber those that remain, keeping the order of calculation to ensure that no partial calculation that contributes to any polynomial needs to be performed twice. We then have a set of polynomials that all have the correct limiting behavior and that can be calculated efficiently.\cite{conte20}
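The detection step at the heart of this purification can be sketched schematically as follows; the dependency bookkeeping (renaming, pruning, and renumbering described above) is omitted, and \texttt{basis\_eval} is a hypothetical callable returning the vector of polynomial values for a geometry.
\begin{verbatim}
import numpy as np

def flag_vanishing(basis_eval, limit_geoms, tol=1e-6):
    # limit_geoms: geometries with each monomer (or dimer) displaced
    # far from the rest, e.g. relevant distances augmented by 100 A.
    # Returns True for polynomials that vanish in every limit.
    ok = None
    for g in limit_geoms:
        small = np.abs(basis_eval(g)) < tol
        ok = small if ok is None else ok & small
    return ok
\end{verbatim}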
The second issue that we need to consider is how to maintain permutational symmetry, not only in each monomer, but when monomers as a whole are interchanged with one another. This latter exchange is not taken into account by the MSA software, so the polynomials that we create by the previously described purification will not, in general, have permutational symmetry with respect to exchange of identical monomers. A common method for dealing with this issue is to augment the dataset by adding all relevant permutations of the Cartesian coordinates and assigning them the same energy, thus requiring a set of $n!$ geometries for each energy, where $n$ is the number of monomers (4, in this case). A better method is to identify groups of polynomials that have permutational symmetry with respect to monomer exchange and then form ``superpolynomials'' that are the sum of the polynomial members of each group. We identify the permutationally invariant groups of polynomials by taking a single set of $n!$ permutationally related geometries and calculating the value of each polynomial. While the values of individual polynomials vary from permutation to permutation, the groups of polynomials, taken together for each permutation, will have the same group of values. For each permutation, one can form pairs of the polynomial identities and their values, and then sort the pairs by their values. Looking at all pairs that have the same value component in all permutations gives the identities of the polynomials, some of which may be repeated, that make up a permutationally invariant group. In general, there will be as many groups as there were original polynomials. These groups, each with $n!$ (not necessarily unique) polynomial contributions, are then summed to form ``superpolynomials'' having permutational symmetry with respect to exchange of identical molecules. Having formed these superpolynomials, there is no need to augment the dataset with permutationally related geometries.
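The grouping step can be sketched as follows, ignoring the multiplicity bookkeeping and assuming a generic reference geometry so that no two polynomials accidentally take equal values.
\begin{verbatim}
import numpy as np

def invariant_groups(values):
    # values: (nperm, npoly) array; row p holds every polynomial's
    # value on the p-th monomer permutation of one reference geometry.
    order = np.argsort(values, axis=1)  # rank polynomials in each row
    groups = set()
    for rank in range(values.shape[1]):
        # Polynomials occupying the same rank in every permutation
        # together form one permutation-invariant group.
        groups.add(frozenset(order[:, rank]))
    return [sorted(g) for g in groups]
\end{verbatim}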
We used basis sets of different sizes, with 200, 500, and 1000 ``superpolynomials''. More details of the bases are given in Table \ref{tab:basis}. As seen there, although fitting with more polynomials reduces the fitting error, the computational cost is roughly proportional to the sum of the number of monomials and polynomials. The results presented in this paper are based on the basis with 200 ``superpolynomials'', as this achieves reasonably good accuracy at smaller cost.
The final energy is written as
\begin{equation}
E = E_\text{MB-pol} + \sum_{i>j>k>l} S_{ijkl} \Delta V_{4-b}(i,j,k,l),
\end{equation}
where $S_{ijkl}$ is a switching function whose value is 1 at short range and 0 at the long range. Specifically,
\begin{align}
S = 10 \left(\frac{r_f - r_\text{max}}{r_f - r_i}\right)^3 - 15 \left(\frac{r_f - r_\text{max}}{r_f - r_i}\right)^4 + 6 \left(\frac{r_f - r_\text{max}}{r_f - r_i}\right)^5~~~~~(r_i < r_\text{max} < r_f),
\end{align}
where $r_\text{max}$ is the maximum OO distance in a water tetramer, and $S$ is 1 when $r_\text{max}$ is smaller than $r_i$ and is 0 when $r_\text{max}$ is greater than $r_f$. In this work we used $r_i = 5.5$ \AA~ and $r_f = 7.0$ \AA~ unless otherwise specified.
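For concreteness, a direct implementation of this switch (distances in \AA, consistent with the boundary behavior just described) is:
\begin{verbatim}
def switch(r_max, r_i=5.5, r_f=7.0):
    # 1 for r_max <= r_i, 0 for r_max >= r_f, smooth in between.
    if r_max <= r_i:
        return 1.0
    if r_max >= r_f:
        return 0.0
    t = (r_f - r_max) / (r_f - r_i)
    return 10.0 * t**3 - 15.0 * t**4 + 6.0 * t**5
\end{verbatim}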
\begin{table}[htbp!]
\centering
\begin{tabular*}{0.5\columnwidth}{@{\extracolsep{\fill}}lccc}
\hline
\hline\noalign{\smallskip}
& PIP$_{200}$ & PIP$_{500}$ & PIP$_{1000}$ \\
\noalign{\smallskip}\hline\noalign{\smallskip}
Number of m & 1438 & 3442 & 8610 \\
Number of q & 5490 & 12898 & 25084 \\
Number of p & 200 & 500 & 1000 \\
Fitting RMSE & 6.7 & 4.0 & 2.5 \\
Timing & 1.1 & 2.7 & 6.0 \\
\noalign{\smallskip}\hline
\hline
\end{tabular*}
\caption{Number of monomials (m), polynomials (q), and ``superpolynomials'' (p) in the three fitting bases used for $\Delta V_{4-b}$, and corresponding fitting root-mean-square error (RMSE, in cm$^{-1}$) and computational time (in seconds) for 100,000 energy evaluations. The computational time for all gradients is about 3 times that for the energy.\cite{Houston2022}}
\label{tab:basis}
\end{table}
\section*{Results and Discussion}
First we examine two 1-d cuts where we compare the 4-b CCSD(T)-F12/haTZ energies to those from MB-pol/TTM4-F and to those from MB-pol+$\Delta V_{4-b}$. Figures \ref{fig:22cut} and \ref{fig:13cut} show cuts of the potential for separating two dimers from one another and for separating a monomer from the remaining trimer, respectively. The major improvements of the 4-b correction are in the short range. Whereas MB-pol/TTM4-F is quite accurate in the long range, it is not designed with the proper Pauli exchange and repulsion in the short range. Despite the fact that TTM4-F fails badly in the short range, the $\Delta V_{4-b}$ potential does provide a reasonable correction. It should be noted that for both cuts shown, the equilibrium R$_{\text{OO}}$ distance is 2.7 \AA, so that large corrections are in the steeply repulsive part of the potential. However, there are also large corrections for highly distorted tetramer geometries not visited by these cuts, discussed below.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{cut_22.pdf}
\caption{4-b energies from indicated sources as a function of the oxygen-oxygen distance between pairs of water dimers in the tetramer. The arrows indicate the dimer pair that separates from the rigid tetramer. The equilibrium value of this distance is 2.7 {\AA}.}
\label{fig:22cut}
\end{figure}
\begin{figure}[htbp!]
\centering
\includegraphics[width=\columnwidth]{cut_13.pdf}
\caption{4-b energies from indicated sources with a single monomer separating from the tetramer. R$_\text{OO}$ is the distance between the O atoms of the two monomers along the axis indicated by the arrow.}
\label{fig:13cut}
\end{figure}
Figure \ref{fig:error} shows in the top panels the correlation plots between the TTM4-F 4-b and CCSD(T)-F12 4-b energies (panel a), and between $E_\text{4-b}^\text{TTM4-F}+\Delta V_\text{4-b}$ and CCSD(T)-F12 4-b energies (panel b). The bottom panels plot, as a function of the maximum R$_\text{OO}$ distance of the tetramer, the difference between TTM4-F and CCSD(T) energies (panel c) and the difference between the corrected TTM4-F and CCSD(T) energies (panel d). It is visually clear that the correction provides both a better correlation and a smaller error with respect to the CCSD(T)-F12 4-b energies. Note that in addition to large errors in the short range, the TTM4-F 4-b energies also have significant errors ($>50$ cm$^{-1}$) even when R$_\text{OO}$ reaches 6.5 \AA, as panel c shows.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{fitting_error.pdf}
\caption{(a) Correlation plot between TTM4-F 4-b and CCSD(T)-F12 4-b energies; (b) correlation plot between TTM4-F+$\Delta V_{4-b}$ and CCSD(T)-F12 4-b energies; (c) error of TTM4-F 4-b as a function of max R$_\text{OO}$ in the tetramer; (d) error of TTM4-F+$\Delta V_{4-b}$ as a function of max R$_\text{OO}$ in the tetramer.}
\label{fig:error}
\end{figure}
Next, consider the binding energies of the eight isomers of the water hexamer, shown in Fig. \ref{fig:hexamer}. The benchmark CCSD(T)/CBS values are taken from ref. \citenum{bates09}. As seen in the figure, with the 4-b correction the binding energies are in better agreement with the benchmark values, especially for the bag and cyclic isomers.
\begin{figure}[htbp!]
\centering
\includegraphics[width=\columnwidth]{hexamers.pdf}
\caption{The binding energies of the eight isomers of the water hexamer from indicated sources.}
\label{fig:hexamer}
\end{figure}
Table \ref{tab:freq} shows, for four of the hexamer isomers, the mean absolute error (MAE) in the harmonic frequencies for both the uncorrected and corrected MB-pol PESs. For the prism and cage, the 4-b corrected version is essentially as accurate as the original MB-pol, while for the book and cyclic ring, the 4-b correction clearly improves the frequencies.
\begin{table}[htbp!]
\centering
\small
\begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}}cccccccccccc}
\hline
\hline\noalign{\smallskip}
& \multicolumn{2}{c}{Prism} && \multicolumn{2}{c}{Cage} &&
\multicolumn{2}{c}{Book-1} && \multicolumn{2}{c}{Cyclic Ring}\\
\noalign{\smallskip} \cline{2-3} \cline{5-6} \cline{8-9} \cline{11-12} \noalign{\smallskip}
& MB-pol & +$\Delta V_{4-b}$ && MB-pol & +$\Delta V_{4-b}$ && MB-pol & +$\Delta V_{4-b}$ && MB-pol & +$\Delta V_{4-b}$ \\
\noalign{\smallskip}\hline\noalign{\smallskip}
MAE & 7.8 & 8.4 && 8.9 & 9.4 && 12.6 & 10.6 && 16.5 & 11.7 \\
\noalign{\smallskip}\hline
\hline
\end{tabular*}
\caption{Mean absolute errors (MAE) in harmonic frequencies (in cm$^{-1}$) for indicated PESs. The benchmarks are from ref. \citenum{Howard2015}.}
\label{tab:freq}
\end{table}
Next, we comment briefly on the timing requirements for the TTM4-F+$\Delta V_{4-b}$ potential. As shown in the last line of Table \ref{tab:basis}, the additional time for calculating the correction for the results presented in this paper (PIP$_\text{200}$) is only about 1 second for 100,000 energy evaluations and approximately 3 seconds if the energy and associated gradients are evaluated at the same time. Table \ref{tab:timing} shows the overall timing to evaluate the energy of 64, 128, and 256 monomers, with different cutoffs of the 4-b correction. In this table, $t_\text{MBX}$ is the time to evaluate all the terms in the MB-pol, including the TTM4-F and MB-pol 2-b and 3-b, with the latest MBX software,\cite{MBX} while $t_{\Delta V_{4-b}}$ is the time to evaluate our 4-b correction term, which is the extra cost when the $\Delta V_{4-b}$ is added to MB-pol. All timings are evaluated using a single Intel i7-8750H core.
It can be seen that the extra cost of the 4-b correction is in general of the same order of magnitude as the cost of MB-pol, and the cutoff distance can be tuned to achieve a balance between the cost and accuracy.
\begin{table}[htbp!]
\centering
\begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}}cccccccccc}
\hline
\hline\noalign{\smallskip}
& & \multicolumn{2}{c}{Cutoff = 6.0 \AA} && \multicolumn{2}{c}{Cutoff = 6.5 \AA} && \multicolumn{2}{c}{Cutoff = 7.0 \AA} \\
\noalign{\smallskip} \cline{3-4} \cline{6-7} \cline{9-10} \noalign{\smallskip}
$N_\text{mono}$ & $t_\text{MBX}$ & $N_\text{tetra}$ & $t_{\Delta V_{4-b}}$ && $N_\text{tetra}$ & $t_{\Delta V_{4-b}}$ && $N_\text{tetra}$ & $t_{\Delta V_{4-b}}$ \\
\noalign{\smallskip}\hline\noalign{\smallskip}
64 & 0.057 & 1379 & 0.011 && 2728 & 0.019 && 5639 & 0.035 \\
128 & 0.21 & 5823 & 0.085 && 12224 & 0.13 && 24460 & 0.19 \\
256 & 0.68 & 13023 & 1.03 && 28786 & 1.15 && 58804 & 1.32 \\
\noalign{\smallskip}\hline
\hline
\end{tabular*}
\caption{Time (in seconds) needed to evaluate the energy of $N_\text{mono}$ monomers. Here $t_\text{MBX}$ is the time to obtain all terms in MB-pol using the latest version (MBX), and $t_{\Delta V_{4-b}}$ is the time to evaluate our 4-b correction. $N_\text{tetra}$ is the number of tetramers needed to be evaluated.}
\label{tab:timing}
\end{table}
We have made comparisons between the TTM4-F 4-b potential, the TTM4-F 4-b with $\Delta V_{4-b}$ correction, and the CCSD(T) results. A remaining question that should be addressed is how the correction potential performs in comparison with the previously reported full 4-b potential\cite{4bjpcl21} as recently improved.\cite{q_AQUA} Of course, many larger studies may already be based on the very successful MB-pol potential. For these, improvement by $\Delta V_{4-b}$ might be the easiest upgrade, with some extra computational cost depending on the choice of the 4-b cutoff. But for those who are interested only in the 4-b potential, we suggest the previously reported 4-b PES, since it is much faster than TTM4-F 4-b $+\Delta V_{4-b}$ and slightly more accurate as well.
Finally, we note that many potentials or components of potentials can be corrected by this method, which has already been shown to be accurate and efficient for \ce{CH4}, \ce{H3O+}, NMA,\cite{NandiDeltaML2021} and AcAc,\cite{QuDelteMLAcAc2021} with more applications in progress. In the current study, we have shown substantial improvement of the 4-b potential for TTM4-F, but one might have reasonable hope that this $\Delta$-ML method is a general approach that could provide substantial improvements to other potentials at relatively small cost. In this case, the corrections would begin with the 2-b ones and could go up to 4-b. Our recent CCSD(T) 2-b, 3-b, and 4-b datasets are available,\cite{datasets} and so the corrections, $\Delta V_{n-b}$, simply require a water force field. Some examples are the well-known potentials AMOEBA\cite{amoeba13} and TTM2.1\cite{TTM2F} or TTM3.\cite{TTM3F} These are polarizable potentials, however, with significant differences. Another force field that might be interesting to ``correct'' is MB-UCB.\cite{MBUCB} Since this potential relies heavily on DFT calculations, using the $\omega$B97X-V/def2-QZVPPD functional, the correction to MB-UCB would be analogous to the correction of DFT to CCSD(T) PESs that we originally considered in our $\Delta$-ML method.
\section*{Summary and Conclusions}
The 4-b interaction in the MB-pol potential has been corrected using the proposed $\Delta$-ML approach. The 4-b interaction itself and the correction are ``small'' compared to, say, the 2-b interaction. However, as noted above, the 4-b interaction has been shown by Xantheas and co-workers to be the ``ultimate'' interaction needed for large water clusters. Further, the correction extends the successful MB-pol potential to this level of interaction. It is worth noting that the PIP fit to the difference 4-b energies is challenging because it is a 12-atom PES. While this number of atoms is not at the frontier of ML-PESs currently, it was not feasible years ago when PIP 2-b and 3-b potentials were reported for water in the WHBB\cite{WHBB} and MB-pol\cite{mbpol2b, mbpol3b} potentials.
That the correction potential is a significant improvement over the TTM4-F potential can be seen (a) by comparing cuts of the potentials for TTM4-F 4-b and TTM4-F 4-b + $\Delta V_{4-b}$ along with the CCSD(T)-F12 values; (b) by comparing the correlation between these potentials and the CCSD(T) values and the errors as a function of R$_{\text{OO}}$; (c) by comparing results for the binding energies of the water hexamer isomers; and (d) by comparing the MAEs in harmonic frequencies for four of the isomers.
The methods described above offer three important ideas. First, we have described a method for ensuring that the potentials go to zero when appropriate distances get large. Second, we have also described a method for maintaining permutational symmetry not only in each monomer but when monomers as a whole are interchanged with one another. Finally, the $\Delta V$ method itself allows large improvements for a small amount of effort, and this approach appears to be general: it could be applied to other water force fields and to similar types of force fields for other liquids and materials.
\section*{Acknowledgment}
JMB thanks the ARO, DURIP grant (W911NF-14-1-0471), for funding a computer cluster where most of the calculations were performed and current financial support from NASA (80NSSC20K0360). We are thankful for correspondence with Markus Meuwly and Silvan Käser.
\section{Introduction}\label{sec:intro}
Let $\K$ denote a field. The set of all $m\times n$ matrices over $\K$ forms a $\K$-vector space, which we denote by $\K^{m\times n}$. For $A,B\in \K^{m\times n}$, we define
$$d(A,B)=\mathrm{rank}(A-B),$$
which is often called the \emph{rank metric} or the \emph{rank distance} on $\K^{m\times n}$.
A subset $\cC\subseteq \K^{m\times n}$, equipped with the rank metric, is called a \emph{rank-metric code} or a \emph{rank-distance code}. If $\cC$ contains at least two elements, the \emph{minimum distance} of $\cC$ is given by
\[d(\cC)=\min_{A,B\in \cC, A\neq B} \{d(A,B)\}.\]
When $\cC$ is a $\K$-linear subspace of $\K^{m\times n}$, we say that $\cC$ is a $\K$-linear code and its dimension $\dim_{\K}(\cC)$ is defined to be the dimension of $\cC$ as a subspace over $\K$.
Let $\F_q$ denote the finite field of $q$ elements. For any $\cC\subseteq \F_q^{m\times n}$ with $d(\cC)=d$, it is well-known that
$$\#\cC\le q^{\max\{m,n\}(\min\{m,n\}-d+1)},$$
which is the Singleton-like bound for the rank metric; see \cite{delsarte_bilinear_1978}. When equality holds, we call $\cC$ a \emph{maximum rank-distance} (MRD for short) code. More properties of MRD codes can be found in \cite{delsarte_bilinear_1978},~\cite{gabidulin_MRD_1985},~\cite{gadouleau_properties_2006},~\cite{morrison_equivalence_2014} and~\cite{ravagnani_rank-metric_2016}.
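To illustrate the bound, take $\cC\subseteq \F_q^{n\times n}$ with $d(\cC)=n$: then $\#\cC\le q^{n}$, so an MRD code here is a set of $q^n$ matrices any two of which differ by an invertible matrix. As recalled below, such codes correspond precisely to finite quasifields, and the additive ones to finite semifields.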
Rank-metric codes, in particular, MRD codes have been studied since the 1970s and have seen much interest in recent years due to a wide range of applications including storage systems~\cite{roth_1991_Maximum}, cryptosystems~\cite{gabidulin_public-key_1995}, spacetime codes~\cite{lusina_maximum_2003} and random linear network coding~\cite{koetter_coding_2008}.
In finite geometry, several interesting structures, including quasifields, semifields, and splitting dimensional dual hyperovals, can be equivalently described as special types of rank-metric codes; see~\cite{dempwolff_dimensional_2014},~\cite{dempwolff_orthogonal_2015},~\cite{johnson_handbook_2007}, \cite{taniguchi_unified_2014} and the references therein. In particular, a finite quasifield corresponds to an MRD code in $\F_q^{n\times n}$ of minimum distance~$n$ and a finite semifield corresponds to such an MRD code that is a subgroup of $\F_q^{n\times n}$ (see~\cite{de_la_cruz_algebraic_2016} for the precise relationship). Many essentially different families of finite quasifields and semifields are known \cite{lavrauw_semifields_2011}, which yield many inequivalent MRD codes in $\F_q^{n\times n}$ of minimum distance~$n$. In contrast, it appears to be much more difficult to obtain inequivalent MRD codes in $\F_q^{n\times n}$ of minimum distance strictly less than $n$. For the relationship between MRD codes and other geometric objects such as linear sets and Segre varieties, we refer to \cite{lunardon_mrd-codes_2017}.
Besides quasifields, there are only a few known constructions of MRD codes in $\F_q^{n\times n}$. The first construction of MRD codes was given by Delsarte~\cite{delsarte_bilinear_1978}. This construction was later rediscovered by Gabidulin~\cite{gabidulin_MRD_1985} and generalized by Kshevetskiy and Gabidulin~\cite{kshevetskiy_new_2005}. Today this family is usually called the \emph{generalized Gabidulin codes}; sometimes it is also simply called the \emph{Gabidulin codes} (see Section~\ref{sec:pre} for a precise definition). It is easy to show that a Gabidulin code is always $\F_{q^n}$-linear. Recently, another $\F_q$-linear family was found by Sheekey~\cite{sheekey_new_2016} and we often call them (generalized) twisted Gabidulin codes. This family has been further generalized into additive MRD codes by Otal and \"Ozbudak~\cite{otal_additive_2016}, who also constructed a family of non-additive MRD codes \cite{otal_non-additive_2018}. Given any $2\leq d \leq n$, all these constructions can provide us with MRD codes of minimum distance $d$.
For MRD codes in $\F_q^{n\times n}$ of minimum distance $d=n-1$, there are a few more constructions. First, there is a nonlinear family constructed by Cossidente, Marino and Pavese~\cite{cossidente_non-linear_2016} and later generalized by Durante and Siciliano~\cite{durante_nonlinear_MRD_2017}. Besides this family, there are other constructions associated with maximum scattered linear sets over $\PG(1,q^6)$ and $\PG(1,q^8)$ presented recently in \cite{csajbok_newMRD_2017} and~\cite{csajbok_maximum_arxiv}. For more results concerning maximum scattered linear sets and associated MRD codes, see~\cite{bartoli_scattered_arxiv}, \cite{csajbok_classes_2018}, \cite{csajbok_equivalence_2016} and \cite{csajbok_maximum_4_2018}.
For MRD codes in $\F_q^{m\times n}$ with $m<n$, there are many different approaches to construct them. A canonical way to get them is puncturing (or projecting) MRD codes in $\F_{q}^{m'\times n}$ with $n\geq m'>m$. In \cite{horlemann-trautmann_new-criteria_2017}, a new criterion for the punctured Gabidulin codes is presented, and for small $m$ and $n$, several constructions of inequivalent MRD codes are obtained. In \cite{neri_genericity_2018}, a generic construction of MRD codes using algebraic geometry approaches is presented, under the condition that $n$ is large enough compared with $d$ and $m$. In \cite{csajbok_maximum_2017}, an approach to derive MRD codes in $\F_q^{m\times n}$ from linear sets is investigated. In \cite{donati_generalization_2017}, a nonlinear construction is presented. Recently, Schmidt and the second author~\cite{schmidt_number_MRD_2017} showed that even in Gabidulin codes there is a huge subset of inequivalent MRD codes.
In this paper, we present a new family of MRD codes in $\F_{q}^{2n\times 2n}$ of any minimum distance $d$ between $2$ and $2n$. In particular, when $d=2n$, we can show that the corresponding semifield is exactly the Hughes-Kleinfeld semifield~\cite{hughes_seminuclear_1960} found in 1960. Through the investigation of their middle and right nuclei, we can prove that the MRD codes in this new family are inequivalent to all known constructions.
The rest of this paper is organized as follows. In Section \ref{sec:pre}, we introduce semifields, describe rank-metric codes in $\F_{q}^{n\times n}$ via linearized polynomials and introduce the equivalence between rank-metric codes as well as their dual codes and adjoint codes. In Section \ref{sec:construction}, we present our new family of MRD codes and determine their middle and right nuclei. Based on these results, we show that they are inequivalent to all the known MRD codes except for one special case which is later excluded in Section \ref{sec:equivalence}. Another result in Section \ref{sec:equivalence} is the complete answer to the equivalence problem between different members of this new family.
\section{Preliminaries}\label{sec:pre}
Roughly speaking, a \emph{semifield} $\bbS$ is an algebraic structure satisfying all the axioms of a skewfield except (possibly) the associativity of its multiplication. A finite field is a trivial example of a semifield. Furthermore, if $\bbS$ does not necessarily have a multiplicative identity, then it is called a \emph{presemifield}. For a presemifield $\bbS$, $(\bbS,+)$ is necessarily abelian \cite{knuth_finite_1965}.
The first family of non-trivial semifields was constructed by Dickson \cite{dickson_commutative_1906} more than a century ago. In \cite{knuth_finite_1965}, Knuth showed that the additive group of a finite semifield $\bbS$ is an elementary abelian group, and the additive order of the nonzero elements in $\bbS$ is called the \emph{characteristic} of $\bbS$. Hence, any finite semifield can be represented by $(\mathbb{F}_q, +, *)$, where $q$ is a power of a prime $p$. Here $(\mathbb{F}_q, +)$ is the additive group of the finite field $\mathbb{F}_q$ and $x*y$ can be written as $x*y=\sum_{i,j}a_{ij} x^{p^i}y^{p^j}$, which forms a map from $\mathbb{F}_q\times \mathbb{F}_q$ to $\mathbb{F}_q$. We refer to \cite{lavrauw_semifields_2011} for a recent and comprehensive survey on finite semifields.
Geometrically speaking, there is a well-known correspondence, via coordinatisation, between (pre)semifields and projective planes of Lenz-Barlotti type V.1, see \cite{dembowski_finite_1997,hughes_projective_1973}. The most important equivalence relation defined on (pre)semifields is the \emph{isotopism}. Let $\bbS_1=(\mathbb{F}_p^n, +, *)$ and $\bbS_2=(\mathbb{F}_p^n, +, \star)$ be two (pre)semifields. If there exist three bijective linear mappings $L, M, N:\mathbb{F}_{p}^n\rightarrow \mathbb{F}_p^n$ such that
$$M(x)\star N(y)=L(x*y)$$
for any $x,y\in\mathbb{F}_p^n$, then $\bbS_1$ and $\bbS_2$ are called \emph{isotopic}, and the triple $(M,N,L)$ is called an \emph{isotopism} between $\bbS_1$ and $\bbS_2$. In \cite{albert_finite_1960}, Albert showed that two (pre)semifields coordinatize isomorphic planes if and only if they are isotopic. Every presemifield can be normalized into a semifield under an appropriate isotopism; see~\cite{bierbrauer_projective_2016} and~\cite{lavrauw_semifields_2011}.
Given a semifield $\bbS$ with multiplication $*$, we define its left, middle and right nucleus by
\begin{align*}
N_l(\bbS) &:= \{a\in \bbS: a*(x* y) = (a* x) * y \text{ for all }x,y\in \bbS \},\\
N_m(\bbS) &:= \{a\in \bbS: x*(a* y) = (x* a) * y \text{ for all }x,y\in \bbS \},\\
N_r(\bbS) &:= \{a\in \bbS: x*(y* a) = (x* y) * a \text{ for all }x,y\in \bbS \}.
\end{align*}
It is not difficult to prove that the semifield $\bbS$ can be viewed as a left vector space over its left nucleus. In particular, when $\bbS$ is finite, we can further show $N_l(\bbS)$ is actually a finite field $\F_q$. Let us assume that $\bbS$ is of size $q^n$. For every $b\in \bbS$, the map $x\mapsto x*b$ defines an $n\times n$ matrix $M_b$ over
its left nucleus $N_l(\bbS)$. Furthermore, all such matrices together form a rank metric code $\{M_b: b\in \bbS \}$ which is actually an MRD code, because the difference between any two distinct members $M_b$ and $M_d$ in it equals $M_b-M_d=M_{b-d}$ which is always nonsingular. This MRD code is usually called the semifield spread set associated with $\bbS$; see \cite{lavrauw_semifields_2011}.
Next, let us turn to rank-metric codes.
As we are working with rank-metric codes in $\F_q^{n\times n}$ rather than $\F_{q}^{m\times n}$ with $m<n$ in this paper, it is more convenient to describe such a rank-metric code using the language of $q$-polynomials or linearized polynomials over $\F_{q^n}$ which are the polynomials in the set
\[\lp{n}{q}[X]=\left\{\sum c_i X^{q^i}: c_i\in \F_{q^n} \right\}.\]
In fact, there is a bijection between $\F_q^{n\times n}$ and $\lp{n}{q}[X]/(X^{q^n}-X)$; for more results about linearized polynomials, we refer to \cite{lidl_finite_1997}.
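To make this correspondence concrete, take $n=2$ with $q$ odd and write $\F_{q^2}=\F_q(\alpha)$, where $\alpha^2=d$ is a non-square in $\F_q$. By Euler's criterion, $\alpha^q=\alpha(\alpha^2)^{(q-1)/2}=d^{(q-1)/2}\alpha=-\alpha$, so the $q$-polynomial $X^q$ acts on the $\F_q$-basis $\{1,\alpha\}$ as the matrix $\left(\begin{smallmatrix}1&0\\0&-1\end{smallmatrix}\right)$.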
As we mentioned in the introduction, the most well-known family of MRD codes is the family of (generalized) Gabidulin codes. They can be described by the following set $\cG_{k,s}$ of linearized polynomials
\[\small{\{a_0 x + a_1 x^{q^{s}} + \dots + a_{k-1} x^{q^{s(k-1)}}: a_0,a_1,\dots, a_{k-1}\in \F_{q^n} \},}\]
where $s$ is relatively prime to $n$. There are clearly $q^{kn}$ polynomials in $\cG_{k,s}$, and each nonzero polynomial in it has at most $q^{k-1}$ roots, i.e.\ its kernel has dimension at most $k-1$ over $\F_q$; hence every nonzero element has rank at least $n-k+1$ and the minimum distance is $d=n-k+1$. Thus the size of $\cG_{k,s}$ meets the Singleton-like bound.
For $x\in \F_{q^{m}}$, let $N_{q^{m}/q}(x)$ denote the norm from $\F_{q^m}$ to $\F_q$, i.e.\ $N_{q^{m}/q}(x)=x^{1+q+\cdots q^{m-1}}$. The following result follows from \cite[Theorem 10]{gow_galois_linear_2009}.
\begin{lemma}\label{lm:gow}
Let $s$ and $m$ be two relatively prime positive integers. Suppose that $f=f_0X+f_1X^{q^s} +\cdots+ f_kX^{q^{sk}}\in \F_{q^m}[X]$ is a linearized polynomial with $f_k\neq 0$. If $f$ has $q^k$ roots, then $N_{q^{sm}/q^s}(f_0)=(-1)^{km}N_{q^{sm}/q^s}(f_k)$.
\end{lemma}
In \cite{sheekey_new_2016}, Sheekey applied Lemma \ref{lm:gow} and found a new family of MRD codes $\cH_{k,s}(\eta, h)$ which equals
\[\small{\{a_0 x + a_1 x^{q^s} + \dots +a_{k-1} x^{q^{s(k-1)}} + \eta a_0^{q^h} x^{q^{sk}}: a_i\in \F_{q^m} \},}\]
where $\eta$ satisfies $N_{q^{sm}/q^s}(\eta)\neq (-1)^{km}$, i.e.\ $N_{q^{m}/q}(\eta)\neq (-1)^{km}$.
Such an MRD code is usually called a \emph{(generalized) twisted Gabidulin code}.
It is clear that if we allow $\eta$ to be $0$, then $\cG_{k,s}$ can be viewed as a subfamily of the twisted Gabidulin codes. Replacing the field automorphism $a_0\mapsto a_0^{q^h}$ in the coefficient of the last term of the elements in $\cH_{k,s}(\eta, h)$ by an automorphism in $\Aut(\F_{q^n})\setminus \Aut(\F_{q^n}/\F_q)$, Otal and \"Ozbudak~\cite{otal_additive_2016} generalized this family into an additive one.
There are several slightly different definitions of equivalence of rank-metric codes. In this paper, we use the following notion of equivalence.
\begin{definition}
\label{def:equivalence}
Two rank-metric codes $\cC_1$ and $\cC_2$ in $\K^{m\times n}$ are \emph{equivalent} if there exist $A\in\GL_m(\K)$, $B\in \GL_n(\K)$, $C\in\K^{m\times n}$ and $\rho\in\Aut(\K)$ such that
\begin{equation}\label{eq:def_equiv}
\cC_2=\{AM^{\rho}B+C:M \in\cC_1\}.
\end{equation}
For $m=n$, if $\cC_2$ is equivalent to $\cC_1$ or $\cC_1^T := \{M^T: M\in \cC_1\}$
where $(\,.\,)^T$ means transposition, then we say $\cC_1$ and $\cC_2$ are \emph{isometrically equivalent}. An equivalence map from a rank-metric code $\cC$ to itself is also called an \emph{automorphism} of $\cC$.
\end{definition}
When $\cC_1$ and $\cC_2$ are both additive and equivalent, it is not difficult to show that we can choose $C=0$ in \eqref{eq:def_equiv}. In particular, when $\cC_1$ and $\cC_2$ are semifield spread sets, they are equivalent if and only if the associated semifields are isotopic \cite[Theorem 7]{lavrauw_semifields_2011}.
Back to the descriptions in linearized polynomials, given two rank-metric codes $\cC_1$ and $\cC_2$ which consist of linearized polynomials, they are equivalent if there exist $\varphi_1$, $\varphi_2\in \lp{n}{q}[X]$ permuting $\F_{q^n}$, $\psi\in \lp{n}{q}[X]$ and $\rho\in \Aut(\F_q)$ such that
\[ \varphi_1\circ f^\rho \circ \varphi_2 + \psi\in \cC_2 \text{ for all }f\in \cC_1,\]
where $\circ$ stands for the composition of maps and $f^\rho= \sum a_i^\rho X^{q^i}$ for $f=\sum a_i X^{q^i}$.
In general, it is a difficult problem to tell whether two given rank-metric codes are equivalent or not. There are several invariants which may help us to distinguish them.
Given a $\K$-linear rank-metric code $\cC\subseteq \K^{m\times n}$, its middle nucleus is defined as
\[N_m(\cC) =\{M\in\K^{m\times m} : MC\in \cC \text{ for all }C\in \cC \},\]
and its right nucleus is defined as
\[N_r(\cC) =\{M\in\K^{n\times n} : CM\in \cC \text{ for all }C\in \cC \}.\]
These two concepts were introduced in \cite{lunardon_kernels_2017} and they can be viewed as a natural generalization of the middle and right nucleus of semifields.
In \cite{liebhold_automorphism_2016}, they are called the left idealizer and the right idealizer of $\cC$, respectively. In general, we can also define the left nucleus of $\cC$. However, for MRD codes over $\K$ containing singular matrices, it is always $\K$ which means it is not a useful invariant; see \cite{lunardon_kernels_2017}.
For a rank-metric code $\cC$ given by a set of linearized polynomials, its middle nucleus and right nucleus can also be written as sets of linearized polynomials. Precisely the middle nucleus of $\cC$ is
\[\mathcal N_m(\cC)= \{ \varphi \in \lp{n}{q}: f\circ \varphi\in \cC \text{ for all }f\in \cC \}.\]
It is defined by $f\circ \varphi$ rather than $\varphi\circ f$ because we always consider a row vector $\bu$ multiplying a matrix $C$ which is a member of a rank-metric code. This means that $M\in N_m(\cC)$ only if $\bu MC=\bu C'$ for some $C'\in \cC$.
Similarly, the right nucleus of $\cC$ is
\[\mathcal N_r(\cC)= \{ \varphi \in \lp{n}{q}: \varphi \circ f\in \cC \text{ for all }f\in \cC \}.\]
They played an important role in \cite{schmidt_number_MRD_2017} in proving a lower bound on the number of inequivalent Gabidulin codes in $\F_q^{m\times n}$. The middle and right nuclei of generalized twisted Gabidulin codes, together with a complete answer to the equivalence between members in this family, can be found in \cite{lunardon_generalized_2018}.
We define a symmetric bilinear form on the set $\F_q^{m\times n}$ by
\[\langle M,N\rangle:= \Tr(MN^T),\]
where $N^T$ is the transpose of $N$. The \emph{Delsarte dual code} of an $\F_q$-linear code $\cC$ is
\[\cC^\perp :=\{M\in \F_q^{m\times n}:\langle M,N \rangle=0 \text{ for all } N\in \cC \}.\]
One important result proved by Delsarte \cite{delsarte_bilinear_1978} is that the Delsarte dual code of a linear MRD code is still MRD. As we are considering MRD codes using linearized polynomials, the Delsarte dual can also be interpreted in the following way~\cite{sheekey_new_2016}.
We define the bilinear form $b$ on $q$-polynomials by
\[b\left( f,g \right)=\Tr_{q^n/q}\left(\sum_{i=0}^{n-1}a_ib_i\right),\]
where $f(x)=\sum_{i=0}^{n-1}a_ix^{q^i}$ and $g(x)=\sum_{i=0}^{n-1}b_ix^{q^i}\in \F_{q^n}[x]$. The \emph{Delsarte dual code} $\cC^\perp$ of a set of $q$-polynomials $\cC$ is
\[\cC^\perp=\{f: b(f,g)=0 \text{ for all } g\in \cC \} .\]
It is well-known and also not difficult to show directly that two linear rank-metric codes are equivalent if and only if their duals are equivalent.
Let $\cC$ be an MRD code in $\K^{m\times n}$. It is obvious that $\{M^T: M\in \cC\}$ is also an MRD code, because the ranks of $M^T$ and $M$ are the same. When $\K=\F_q$ and $m=n$, we can also interpret the transpose of matrices as an operation on $q$-polynomials.
The \emph{adjoint} of a $q$-polynomial $f=\sum_{i=0}^{n-1}a_i x^{q^i}$ is given by
$$\hat{f}:=\sum_{i=0}^{n-1}a_{i}^{q^{n-i}} x^{q^{n-i}}.$$
If $\cC$ is a rank-metric code consisting of $q$-polynomials, then the \emph{adjoint code} of $\cC$ is $\widehat{\cC}:=\{\hat{f}: f\in\cC\}$. In fact, the adjoint of $f$ corresponds to the transpose of the matrix derived from $f$. This result can be found in \cite{kantor_commutative_2003}.
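This can be seen from the trace bilinear form: for all $x,y\in\F_{q^n}$ one checks, term by term, that
\[\Tr_{q^n/q}\left(f(x)\,y\right)=\Tr_{q^n/q}\left(x\,\hat{f}(y)\right),\]
since $\Tr_{q^n/q}\left(a x^{q^i} y\right) = \Tr_{q^n/q}\left((a x^{q^i} y)^{q^{n-i}}\right) = \Tr_{q^n/q}\left(x\, a^{q^{n-i}} y^{q^{n-i}}\right)$; the adjoint $\hat{f}$ is thus the adjoint map of $f$ with respect to this nondegenerate bilinear form.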
Regarding the adjoint and Delsarte dual operation, we have
\begin{equation}\label{eq:operation_1}
\mathcal{N}_m(\widehat{\cC}) = \widehat{\mathcal{N}_r(\cC)}=\mathcal{N}_r(\cC^\perp),
\end{equation}
and
\begin{equation}\label{eq:operation_2}
\mathcal{N}_m(\cC^\perp) = \widehat{\mathcal{N}_m(\cC)}=\mathcal{N}_r(\widehat{\cC}),
\end{equation}
which are proved in \cite[Proposition 4.2]{lunardon_kernels_2017}.
\section{A class of MRD codes}\label{sec:construction}
In the rest of this paper, we write $N(x)$ instead of $N_{q^{2n}/q}(x)$ for short. By applying Lemma \ref{lm:gow}, we can get another family of MRD codes.
\begin{theorem}\label{th:construction1}
Let $s$ and $n$ be two integers satisfying $\gcd(s,2n)=1$. For $\gamma \in \F_{q^{2n}}$ such that $N(\gamma)$ is a non-square in $\F_q$, we define $\cD_{k,s}(\gamma)$ as the set
\begin{equation}\label{eq:D_gamma}
\small{
\left\{a X + \sum_{i=1}^{k-1}c_i X^{q^{is}} + \gamma bX^{q^{ks}} : c_1, \cdots, c_{k-1}\in \F_{q^{2n}}, a,b\in \F_{q^n} \right\}.}
\end{equation}
Then $\cD_{k,s}(\gamma)$ is an MRD code.
\end{theorem}
\begin{proof}
It is clear that $\#\cD_{k,s}(\gamma)=q^{2nk}$. We need to show that for each polynomial $f\in \cD_{k,s}(\gamma)$, it has at most $q^{k-1}$ roots which means its minimum distance is $d=2n-k+1$. Hence $\cD_{k,s}(\gamma)$ is an MRD code.
By way of contradiction, let us assume that $f=a X + \sum_{i=1}^{k-1}c_i X^{q^{is}} + \gamma bX^{q^{ks}}$ has $q^k$ roots, which implies that $a$ and $b$ are both nonzero. By Lemma \ref{lm:gow}, $N_{q^{2sn}/q^{s}}(a)=(-1)^{2nk}N_{q^{2sn}/q^{s}}(\gamma b)$. Hence $N_{q^{2sn}/q^{s}}(a/b)=N_{q^{2sn}/q^{s}}(\gamma) = N(\gamma)$. As $a,b\in \F_{q^n}$,
\[N_{q^{2sn}/q^{s}}\left(\frac{a}{b}\right)=\left(\frac{a}{b}\right)^{2(1+q^s+\cdots +q^{(n-1)s})}=N_{q^n/q}\left(\frac{a}{b}\right)^2\]
which is a square in $\F_q$. However, this contradicts the assumption on $N(\gamma)$.
\end{proof}
Let us first look at the Delsarte dual code of $\cD_{k,s}(\gamma)$. It is straightforward to compute that $\cD_{k,s}(\gamma)^\perp$ equals
\[\left\{-\gamma bX + aX^{q^{ks}} + \sum_{i=k+1}^{2n-1} c_i X^{q^{is}}: a,b\in \F_{q^n}, c_i\in \F_{q^{2n}} \right\}.\]
Replacing $X$ by $X^{q^{(2n-k)s}}$ in every term and reducing modulo $X^{q^{2n}}-X$, we get $\cD_{2n-k,s}(-\gamma)$.
\begin{proposition}\label{prop:dual}
The Delsarte dual code of $\cD_{k,s}(\gamma)$ is equivalent to $\cD_{2n-k,s}(-\gamma)$.
\end{proposition}
The following result on the adjoint code of $\cD_{k,s}(\gamma)$ can also be readily verified.
\begin{proposition}\label{prop:adjoint}
The adjoint code of $\cD_{k,s}(\gamma)$ is equivalent to $\cD_{k,s}(1/\gamma)$.
\end{proposition}
By Theorem \ref{th:construction1}, it is clear that $aX+\gamma bX^{q^{s}}$ defines a semifield multiplication. As $\gamma\notin \F_{q^n}$ (otherwise $N(\gamma)$ would be a square in $\F_q$), every $x\in \F_{q^{2n}}$ can be written as $x=c+d\gamma$ for some $c,d\in \F_{q^{n}}$.
Assume that $\gamma^{q^s+1}=u+v\gamma$ for certain $u,v\in \F_{q^n}$.
By expanding $ax+\gamma bx^{q^{s}}$, we have
\[ax+\gamma bx^{q^{s}}= (ac+bd^{q^s}u) + (ad+bc^{q^s} + bd^{q^s}v)\gamma. \]
We view them as vectors in $\F_{q^n}^2$ and define a semifield multiplication
\begin{equation}\label{eq:semi_multi}
(c,d)*(a,b)=(ac+bd^{q^s}u , ad+bc^{q^s} + bd^{q^s}v),
\end{equation}
for $a,b,c,d\in \F_{q^n}$. By comparing with \cite[Theorem 9.7]{hughes_projective_1973}, we see that \eqref{eq:semi_multi} is exactly the multiplication of a Hughes-Kleinfeld semifield \cite{hughes_seminuclear_1960}, which is also the multiplication of a Knuth semifield of type \RN{2} \cite{knuth_finite_1965}.
By \cite[Lemma 9.8]{hughes_projective_1973}, $\F_{q^n}$ is the right and middle nucleus of $\mathbb{H}$. In \cite{hughes_collineation_1960}, a necessary and sufficient condition for $x+y\gamma \in N_l(\mathbb{H})$ is derived.
\begin{proposition}\label{prop:3nuclei}
Let $*$ be the multiplication defined by \eqref{eq:semi_multi} and $\mathbb{H}$ denote the associated semifield $(\F_{q^n}^2, +, *)$.
\begin{enumerate}[label=(\alph*)]
\item $N_r(\mathbb{H}) = \F_{q^n}$.
\item $N_m(\mathbb{H}) = \F_{q^n}$.
\item For $x,y\in \F_{q^n}$, $x+y\gamma \in N_l(\mathbb{H})$ if and only if
\[ \left\{
\begin{array}{l}
x^{q^{2s}}+y^{q^{2s}}v^{q^s}=x+y^{q^s}v,\\
yu + x^{q^s}v +y^{q^s}v^2 = y^{q^{2s}} u^{q^s} + x^{q^{2s}} v + y^{q^{2s}} v^{q^s+1}.
\end{array}
\right. \]
\end{enumerate}
\end{proposition}
Next, let us investigate the middle and right nuclei of the MRD codes $\cD_{k,s}(\gamma)$ defined in Theorem \ref{th:construction1}. They are very important invariants with respect to the equivalence of rank-metric codes. We will use them later to show that the family $\cD_{k,s}(\gamma)$ contains MRD codes which are not equivalent to any known one.
\begin{theorem}\label{th:N_rN_m_construction1}
Let $k$ be an integer satisfying $1\le k\le 2n-1$. Then the right nucleus of $\cD_{k,s}(\gamma)$ is
\begin{equation}\label{eq:N_r(D)}
\mathcal N_r(\cD_{k,s}(\gamma))= \{aX : a\in \F_{q^n}\},
\end{equation}
and its middle nucleus is
\begin{equation}\label{eq:N_m(D)}
\mathcal N_m(\cD_{k,s}(\gamma))= \{aX : a\in \F_{q^n}\}.
\end{equation}
\end{theorem}
\begin{proof}
When $k=1$, $\cD_{k,s}(\gamma)$ is isotopic to a Hughes-Kleinfeld semifield $\mathbb H$, and the result can be then derived from Proposition \ref{prop:3nuclei}. By duality, we get the result for $k=2n-1$.
In the rest of the proof, we assume that $2\leq k \leq 2n-2$. Assume that $\varphi=\sum_{i=0}^{2n-1}d_iX^{q^{is}}$ is an element in $\mathcal N_r(\cD_{k,s}(\gamma))$. As $\varphi(c_1 X^{q^s})\in \cD_{k,s}(\gamma)$, we see that $d_i=0$ for $k<i<2n-1$. In fact, only $d_{2n-1}$, $d_0$ and $d_1$ can be nonzero: when $k=2$, this is obvious; when $k>2$, it can be verified directly by checking $\varphi(c_jX^{q^{js}})\in \cD_{k,s}(\gamma)$ for $j=2,\cdots, k-1$.
Next we consider $\varphi(aX+ \gamma bX^{q^{ks}})$ which should also be in $\cD_{k,s}(\gamma)$. As $\varphi = d_{2n-1}X^{q^{(2n-1)s}}+d_0X +d_1X^{q^s}$, in the expansion of $\varphi(aX+ \gamma bX^{q^{ks}})$ the coefficient of $X^{q^{(2n-1)s}}$ is $d_{2n-1}a^{q^{(2n-1)s}}$ and the coefficient of $X^{q^{(k+1)s}}$ is $ d_1(\gamma b)^{q^s}$. Since $a$ and $b$ can take any value in $\F_{q^n}$, by checking the elements in $\cD_{k,s}(\gamma)$ we see that $d_{2n-1}$ and $d_1$ must be $0$. Thus $\varphi=d_0X$. From
\[\varphi(aX+ \gamma bX^{q^{ks}})=d_0aX+ \gamma d_0bX^{q^{ks}}\in \cD_{k,s}(\gamma) \]
we derive $d_0\in \F_{q^n}$.
By $\widehat{\cD_{k,s}}(\gamma)=\cD_{k,s}(1/\gamma)$ in Proposition \ref{prop:adjoint} and \eqref{eq:operation_1}, we get
\begin{align*}
\mathcal N_m(\cD_{k,s}(\gamma)) &=\mathcal{N}_r\left(\cD_{k,s}\left(\frac{1}{\gamma}\right)^\perp\right)\\
&=\mathcal{N}_r\left(\cD_{2n-k,s}\left(-\frac{1}{\gamma^{q^{2n-ks}}}\right)\right)
\end{align*}
which equals $\{aX : a\in \F_{q^n}\}$.
\end{proof}
\begin{corollary}\label{coro:inequivalence}
In $\F_{q}^{2n\times 2n}$, the MRD code $\cD_{k,s}(\gamma)$ is not equivalent to any generalized Gabidulin code. When $k\neq n$ or $h\neq n$, $\cD_{k,s}(\gamma)$ is also not equivalent to any generalized twisted Gabidulin code $\cH_{k,t}(\eta, h)$ with $\eta\neq 0$.
\end{corollary}
\begin{proof}
When $k=1$, a (generalized) Gabidulin code $\{aX : a\in\F_{q^{2n}} \}$ is derived from the multiplication of the finite field $\F_{q^{2n}}$, which is never isotopic to a Hughes-Kleinfeld semifield. Hence the corresponding MRD codes are not equivalent.
Moreover, $\cH_{1,s}(\eta,h)$ is associated with a generalized twisted field, which is a presemifield. If $s\not \equiv h \pmod{2n}$, it is isotopic to a semifield whose middle nucleus is of size $q^{\gcd(2n,s-h)}$ and whose right nucleus is of size $q^{\gcd(2n, h)}$; otherwise $s\equiv h \pmod{2n}$ and this presemifield is isotopic to the finite field $\F_{q^{2n}}$; see~\cite{albert_generalized_1961} and~\cite{biliotti_collineation_1999}. Thus $\cH_{1,s}(\eta,h)$ cannot be equivalent to $\cD_{1,s}(\gamma)$.
When $k=2n-1$, we can simply consider the equivalences between their Delsarte dual codes which have been already determined above.
For the rest, it remains to investigate the equivalence problem for $1<k<2n-1$.
According to \cite[Corollary 5.9 (a)]{lunardon_kernels_2017}, the middle (or right) nucleus of a generalized Gabidulin code over $\F_{q^{2n}}$ is always $\F_{q^{2n}}$. Hence it differs from the middle nucleus of $\cD_{k,s}(\gamma)$ given in Theorem \ref{th:N_rN_m_construction1}, which means that they are not equivalent.
According to \cite[Corollary 5.9 (b)]{lunardon_kernels_2017},
$\mathcal N_m(\cH_{k,t}(\eta, h))$ in $\F_q^{2n\times 2n}$ is of size $q^{\gcd(2n, sk-h)}$ and $\mathcal N_r(\cH_{k,t}(\eta, h))$ is of size $q^{\gcd(2n, h)}$. Hence, $\cD_{k,s}(\gamma)$ and $\cH_{k,t}(\eta, h)$ are equivalent only if $\gcd(2n,h)=h$ and $\gcd(2n, sk-h)=n$ which means $h=n=k$.
\end{proof}
For $k=2$, as the middle nucleus of $\cD_{2,s}(\gamma)$ is not of size $q^{2n}$, it is also not equivalent to those MRD codes associated with maximum scattered linear sets constructed in \cite{csajbok_newMRD_2017} and \cite{csajbok_maximum_arxiv}.
\section{Equivalence}\label{sec:equivalence}
In Section \ref{sec:construction}, we have shown that most members of the family of MRD codes $\cD_{k,s}(\gamma)$ are new with respect to the equivalence of rank-metric codes. In the last part of this section, we will completely settle the remaining open case, namely whether $\cD_{n,s}(\gamma)$ is equivalent to $\cH_{n,t}(\eta, n)$ or not.
First we investigate the equivalence between different members of this family.
Moreover, if one wants to determine the isometric equivalence between $\cD_{k,s}(\gamma)$ and $\cD_{k,t}(\theta)$, the answer follows directly from our results on the equivalence maps from $\cD_{k,s}(\gamma)$ to $\cD_{k,t}(\theta)$ and to its adjoint $\cD_{k,t}(1/\theta)$.
By using our knowledge of the middle and right nuclei of $\cD_{k,s}(\gamma)$, we can prove the following results.
\begin{lemma}\label{lm:equivalence_map}
Let $n,s,t\in \Z^+$ satisfy $\gcd(2n,s)=\gcd(2n,t)=1$. Let $\gamma$ and $\theta$ be in $\F_{q^{2n}}$ such that $N(\gamma)$ and $N(\theta)$ are both non-squares in $\F_q$. Let $(\varphi_1,\varphi_2, \rho)$ be an equivalence map between $\cD_{k,s}(\gamma)$ and $\cD_{k,t}(\theta)$ for $1<k<2n-1$. If $k\neq n$ or $n\geq3$, then $\varphi_1$ and $\varphi_2$ are both monomials.
\end{lemma}
\begin{proof}
By Proposition \ref{prop:dual}, $\cD_{2n-k,s}(-\gamma)$ is equivalent to the Delsarte dual code of $\cD_{k,s}(\gamma)$. As two MRD codes are equivalent if and only if their Delsarte duals are equivalent, we only have to prove the statement for $k\le n$.
According to the definition of equivalence, $\varphi_1 \circ f^\rho \circ \varphi_2\in \cD_{k,t}(\theta)$ for every $f\in \cD_{k,s}(\gamma)$. As $f^\rho\in \cD_{k,s}(\gamma^\rho)$, $\varphi_1$ must be in the normalizer of $\mathcal N_r(\cD_{k,s}(\gamma^\rho))$ in $\GL(2n,q)$. By Theorem \ref{th:N_rN_m_construction1}, the right nucleus $\mathcal N_r(\cD_{k,s}(\gamma^\rho))=\{aX : a\in \F_{q^n}\}$. It follows that
$$\varphi_1=cX^{q^l} + dX^{q^{l+n}}$$
for a certain $l\in \{0,\cdots, 2n-1\}$ and $c,d\in \F_{q^{2n}}$. This result is well-known and can be verified directly as follows. Assume that $\varphi_1= \sum_{i=0}^{2n-1}a_iX^{q^i}$. Then for each $b\in \F_{q^n}$, there always exists some $b'\in \F_{q^n}$ such that
\[ \varphi_1\circ bX= \sum_{i=0}^{2n-1}a_ib^{q^i}X^{q^i}=b'X\circ \varphi_1=\sum_{i=0}^{2n-1}b'a_iX^{q^i}.\]
This implies that if $a_i\neq 0$ then $b^{q^i}=b'$, which means that at most two coefficients $a_l$ and $a_{l+n}$ are nonzero for a certain $l$.
By the same argument, we can also show that $$\varphi_2=gX^{q^j} + hX^{q^{j+n}}$$
for some $j\in \{0,\cdots, 2n-1\}$ and $g,h\in \F_{q^{2n}}$.
Now let us look at the image of $c_iX^{q^{is}}$ under the equivalence map $(\varphi_1, \varphi_2, \mathrm{id})$. By calculation,
\begin{align}
\nonumber &\varphi_1 \circ c_iX^{q^{is}} \circ \varphi_2\\
\nonumber =&cc_i^{q^l}(gX^{q^j} + hX^{q^{j+n}})^{q^{is+l}} + dc_i^{q^{l+n}}(gX^{q^j} + hX^{q^{j+n}})^{q^{is+l+n}}\\
\label{eq:binomials}=& (cg^{q^{is+l}} c_i^{q^l} + dh^{q^{is+l+n}}c_i^{q^{l+n}})X^{q^{j+is+l}} \\
&\nonumber + (ch^{q^{is+l}} c_i^{q^l} + dg^{q^{is+l+n}}c_i^{q^{l+n}}) X^{q^{j+is+l+n}}.
\end{align}
When $k<n$, as $\varphi_1 \circ c_iX^{q^{is}} \circ \varphi_2\in \cD_{k,t}(\theta)$, one of the coefficients of $X^{q^{j+is+l}}$ and $X^{q^{j+is+l+n}}=X^{q^{j+is+l+tn}}$ must be $0$ for all $c_i\in \F_{q^{2n}}$. Together with the condition that $\varphi_1$ and $\varphi_2$ are permutation polynomials, we deduce that $c=h=0$ or $d=g=0$ or $c=g=0$ or $d=h=0$, which means that $\varphi_1$ and $\varphi_2$ are both monomials.
When $k=n$, as $\varphi_1 \circ c_iX^{q^{is}} \circ \varphi_2\in \cD_{k,t}(\theta)$, the coefficients of $X^{q^{j+is+l}}$ and $X^{q^{j+is+l+n}}=X^{q^{j+is+l+tn}}$ can both be nonzero only if exactly one of $j+is+l$ and $j+is+l+tn$ equals $0$. If $n\geq3$, then $i$ can take at least two different values from $\{1,2,\cdots,k-1\}$. Hence we can choose the value of $i$ such that $j+is+l\neq 0,n$. As in the case $k<n$, we see that $\varphi_1$ and $\varphi_2$ must both be monomials.
\end{proof}
The following lemma can be proved by using exactly the same argument and we omit its proof.
\begin{lemma}\label{lm:equivalence_map_D_H}
Let $n,s,t\in \Z^+$ satisfy $\gcd(2n,s)=\gcd(2n,t)=1$. Let $\gamma$ and $\eta$ be in $\F^*_{q^{2n}}$ such that $N(\gamma)$ is a non-square in $\F_q$ and $N(\eta)\neq 1$. Let $(\varphi_1,\varphi_2, \rho)$ be an equivalence map between $\cD_{n,s}(\gamma)$ and $\cH_{n,t}(\eta, n)$. If $n\geq3$, then $\varphi_1$ and $\varphi_2$ are both monomials.
\end{lemma}
Now we can determine the equivalence between $\cD_{k,s}(\gamma)$ and $\cD_{k,t}(\theta)$.
\begin{theorem}\label{th:equivalence}
Let $n,s,t\in \Z^+$ satisfy $\gcd(2n,s)=\gcd(2n,t)=1$. Let $\gamma$ and $\theta$ be in $\F_{q^{2n}}$ such that $N(\gamma)$ and $N(\theta)$ are both non-squares in $\F_q$. Let $k$ be an integer satisfying $1<k<2n-1$.
When $k\neq n$ or $n\geq 3$, the MRD code $\cD_{k,s}(\gamma)$ is equivalent to $\cD_{k,t}(\theta)$ if and only if one of the following collections of conditions is satisfied.
\begin{enumerate}[label=(\alph*)]
\item $s\equiv t \pmod{2n}$, there exist $\sigma \in \Aut(\F_{q^{2n}})$ and $h\in \F_{q^{2n}}$ such that $\gamma^{\sigma} h^{q^{ks}-1}=\theta$.
\item $s\equiv -t \pmod{2n}$, there exist $\sigma \in \Aut(\F_{q^{2n}})$ and $h\in \F_{q^{2n}}$ such that $\gamma^{\sigma} h^{q^{ks}-1}=1/\theta$.
\end{enumerate}
\end{theorem}
\begin{proof}
As in the proof of Lemma \ref{lm:equivalence_map}, we only have to handle the cases $k\leq n$.
Assume that $(\varphi_1, \varphi_2, \rho)$ is an equivalence map between $\cD_{k,s}(\gamma)$ and $\cD_{k,t}(\theta)$.
When $k\neq n$ or $n\geq 3$, by Lemma \ref{lm:equivalence_map}, we can assume that $\varphi_1=dX^{q^l}$ and $\varphi_2 = g X^{q^j}$ for some $d, g\in \F_{q^{2n}}^*$ and $l,j\in \{0,\cdots, 2n-1\}$.
For arbitrary $c_i\in \F_{q^{2n}}$ with $i\in \{1,\cdots, k-1 \}$,
\[ \varphi_1\circ c_iX^{q^{is}} \circ \varphi_2 = dc_i^{q^l} g^{q^{is+l}} X^{q^{is+l+j}}.\]
It follows that $is+l+j\in \{t,2t, \cdots, (k-1)t\}$. As $\gcd(2n,s)=\gcd(2n,t)=1$, we can assume that $s\equiv rt \pmod{2n}$ which means
\[ \{irt+l+j \pmod{2n}: i =1,\cdots, k-1 \}= \{t,2t, \cdots, (k-1)t\}. \]
As $k\le n$, it is straightforward to see that either $r=1$ and $l+j\equiv 0 \pmod{2n}$, or $r=-1$ and $l+j\equiv kt \pmod{2n}$.
When $r=1$, i.e.\ $s\equiv t\pmod{2n}$ and $j\equiv -l \pmod{2n}$, for $a,b\in \F_{q^n}$, applying $(\varphi_1, \varphi_2, \rho)$ onto $aX+ \gamma bX^{q^{ks}}$, we obtain
\begin{eqnarray*}
\lefteqn{\varphi_1\circ \left(a^\rho X+ \gamma^\rho b^\rho X^{q^{ks}} \right)\circ \varphi_2 }\\
&=& da^{\rho q^l}g^{q^l}X+ d\gamma^{\rho q^l}b^{\rho q^l}g^{q^{ks+l}}X^{q^{ks}},
\end{eqnarray*}
which belongs to $\cD_{k,t}(\theta)$ if and only if $dg^{q^l}\in \F_{q^n}$ and $d\gamma^{\rho q^l}g^{q^{ks+l}}\in \theta \F_{q^n}$. Let $\sigma$ denote the automorphism of $\F_{q^{2n}}$ defined by $x\mapsto x^{\rho q^l}$ and let $h = g^{q^l}$. Then there must be a solution $h$ of $\gamma^{\sigma} h^{q^{ks}-1}=\theta$.
When $r=-1$, i.e.\ $s\equiv -t\pmod{2n}$ and $j\equiv kt-l \pmod{2n}$, for $a,b\in \F_{q^n}$, we apply $(\varphi_1, \varphi_2, \rho)$ onto $aX+ \gamma bX^{q^{ks}}$ and get
\begin{eqnarray*}
\lefteqn{\varphi_1\circ \left(a^\rho X+ \gamma^\rho b^\rho X^{q^{ks}} \right)\circ \varphi_2}\\
&=& da^{\rho q^l}g^{q^l}X^{q^{kt}}+ d\gamma^{\rho q^l}b^{\rho q^l}g^{q^{ks+l}}X,
\end{eqnarray*}
which belongs to $\cD_{k,t}(\theta)$ if and only if $dg^{q^l}\in\theta \F_{q^n}$ and $d\gamma^{\rho q^l}g^{q^{ks+l}}\in \F_{q^n}$. Let $\sigma$ denote the automorphism of $\F_{q^{2n}}$ defined by $x\mapsto x^{\rho q^l}$ and let $h = g^{q^l}$. Then there must be a solution $h$ of $\gamma^{\sigma} h^{q^{ks}-1}=1/\theta$.
Therefore we have proved the necessity of the conditions in the statement for $k\neq n$ or $n\geq3$. The verification of sufficiency is routine.
\end{proof}
There are 3 cases which are not covered by Theorem \ref{th:equivalence}: $k=1$, $k=2n-1$ and $k=n=2$.
For $k=1$, $\cD_{1,s}(\gamma)$ defines a Hughes-Kleinfeld semifield, whose autotopism group has been completely determined in \cite{hughes_collineation_1960}. It appears that, using the same approach, the equivalence between $\cD_{1,s}(\gamma)$ and $\cD_{1,t}(\theta)$ can also be determined. Hence, in the rest of this section, we will skip the case $k=1$. Moreover, for $k=2n-1$, the MRD code is the Delsarte dual code of a Hughes-Kleinfeld semifield by Proposition \ref{prop:dual}. We will also skip this case, because its equivalence problem can be completely converted into the equivalence problem for Hughes-Kleinfeld semifields.
Next we investigate the last case in which $k=n=2$.
As $\gcd(2n,s)=\gcd(2n,t)=1$ and $n=2$, we have $t\equiv \pm s \pmod{2n}$; in fact, $t$ and $s$ can only be $1$ or $-1$ modulo $2n$.
\begin{theorem}\label{th:equivalence_n=k}
Let $s,t\in \Z^+$ satisfy $\gcd(4,s)=\gcd(4,t)=1$. Let $\gamma$ and $\theta$ be in $\F_{q^{4}}$ such that $N_{q^{4}/q}(\gamma)$ and $N_{q^{4}/q}(\theta)$ are both non-squares in $\F_q$.
The MRD code $\cD_{2,s}(\gamma)$ is equivalent to $\cD_{2,t}(\theta)$ if and only if one of the following collections of conditions is satisfied.
\begin{enumerate}[label=(\alph*)]
\item $s\equiv t \pmod{4}$, there exist $\sigma \in \Aut(\F_{q^{4}})$ and $h\in \F_{q^{4}}$ such that $\gamma^{\sigma} h^{q^{2s}-1}=\theta$.
\item $s\equiv t \pmod{4}$, there exist $c,d,g,h\in\F_{q^4}$, $\rho\in \Aut(\F_q)$ and $l\in \{0,1,2,3\}$ such that
\[ \left\{
\begin{array}{rcl}
cg^{q^{s+l}}-d^{q^2}h^{q^{s+l}}&=&0, \\
ch^{q^{s+l}}\theta^{q^2}-d^{q^2}g^{q^{s+l}}\theta&=&0, \\
cg^{q^l}+ dh^{q^{l+2}}&=&0,\\
ch^{q^{2s+l}}\gamma^{\rho q^l}+ dg^{q^{l}}\gamma^{\rho q^{l+2}}&=&0.
\end{array}
\right. \]
\item $s\equiv -t \pmod{4}$, there exist $\sigma \in \Aut(\F_{q^{4}})$ and $h\in \F_{q^{4}}$ such that $\gamma^{\sigma} h^{q^{2s}-1}=1/\theta$.
\item $s\equiv -t \pmod{4}$, there exist $c,d,g,h\in\F_{q^4}$, $\rho\in \Aut(\F_q)$ and $l\in \{0,1,2,3\}$ such that
\[ \left\{
\begin{array}{rcl}
cg^{q^{s+l}}-d^{q^2}h^{q^{s+l}}&=&0, \\
ch^{q^{s+l}}\theta^{q^2}-d^{q^2}g^{q^{s+l}}\theta&=&0, \\
ch^{q^l}+ dg^{q^{l+2}}&=&0,\\
cg^{q^{2s+l}}\gamma^{\rho q^l}+ dh^{q^{l}}\gamma^{\rho q^{l+2}}&=&0.
\end{array}
\right. \]
\end{enumerate}
\end{theorem}
\begin{proof}
In this proof, we will still write $n$ instead of $2$ in some equations even though we have assumed that $n=2$.
If $cX^{q^s}\in \cD_{2,s}(\gamma)$ is mapped to a monomial for every $c\in \F_{q^{2n}}$, then the same calculation as in Theorem \ref{th:equivalence} yields the necessary and sufficient conditions (a) and (c).
In the rest of the proof, we always assume that $cX^{q^s}\in \cD_{2,s}(\gamma)$ is mapped to a binomial for some $c$.
Taking $i=1$ in \eqref{eq:binomials}, we see that $j+s+l$ can take exactly two possible values: $j+s+l\equiv 0 \pmod{2n}$ or $j+s+l\equiv n \pmod{2n}$.
Let us consider the case $s\equiv t \pmod{2n}$.
First we assume that $j+s+l\equiv 0 \pmod{2n}$.
From $\varphi_1\circ c_1X^{q^{s}} \circ \varphi_2\in\cD_{2,t}(\theta)$ and \eqref{eq:binomials}, we derive that the coefficient of $X^{q^{j+s+l}}$ belongs to $\F_{q^n}$ and the coefficient of $X^{q^{j+s+l+n}}$ belongs to $\theta\F_{q^n}$, which means that
\begin{eqnarray}
\label{eq:k=n=2_qs_1}\lefteqn{cg^{q^{s+l}} c_1^{q^l} + dh^{q^{s+l+n}}c_1^{q^{l+n}}} \\
\nonumber & = & c^{q^n}g^{q^{s+l+n}} c_1^{q^{l+n}} + d^{q^n}h^{q^{s+l}}c_1^{q^{l}}
\end{eqnarray}
and
\begin{eqnarray}
\label{eq:k=n=2_qs_2} \lefteqn{(ch^{q^{s+l}}c_1^{q^l} + d g^{q^{s+l+n}} c_1^{q^{l+n}})\theta^{q^n}}\\
\nonumber & = &(c^{q^n}h^{q^{s+l+n}}c_1^{q^{l+n}} + d^{q^n} g^{q^{s+l}} c_1^{q^{l}})\theta
\end{eqnarray}
hold for every $c_1\in\F_{q^{2n}}$. If we view \eqref{eq:k=n=2_qs_1} as a polynomial in $c_1$, by comparing the coefficients of $c_1^{q^l}$ (or those of $c_1^{q^{l+n}}$), we obtain
\begin{equation}\label{eq:cgdh1}
cg^{q^{s+l}}=d^{q^n}h^{q^{s+l}}.
\end{equation}
Similarly, from \eqref{eq:k=n=2_qs_2} we derive
\begin{equation}\label{eq:chdg1}
ch^{q^{s+l}}\theta^{q^n} = d^{q^n}g^{q^{s+l}}\theta.
\end{equation}
Furthermore, from $\varphi_1\circ aX\circ \varphi_2\in \cD_{2,t}(\theta)$ with $a\in \F_{q^n}$ we can derive more conditions. As the coefficient of $X^{q^{j+l}}=X^{q^{2n-s}}$ in it must be zero, by plugging $a=c_i$ and $i=0$ into \eqref{eq:binomials}, we get
\begin{equation}\label{eq:cgdh2}
cg^{q^l}+ dh^{q^{l+n}}=0.
\end{equation}
Analogously, by checking the coefficient of $X^{q^{j+2s+l+n}}=X^{q^{j+l}}=X^{q^{3s}}$ in $\varphi_1\circ \gamma^\rho bX^{q^{2s}}\circ \varphi_2\in \cD_{2,t}(\theta)$ with $b\in \F_{q^n}$, we obtain
\begin{equation}\label{eq:chdg2}
ch^{q^{2s+l}}\gamma^{\rho q^l}+ dg^{q^{l}}\gamma^{\rho q^{l+n}}=0.
\end{equation}
For $j+s+l\equiv n \pmod{2n}$, the proof is similar. By checking the coefficients of $X^{q^{j+s+l}}=X^{q^n}$ and $X^{q^{j+s+l+n}}=X$ in $\varphi_1\circ c_1X^{q^{s}} \circ \varphi_2\in\cD_{2,t}(\theta)$, we obtain
\begin{align}
\label{eq:-cgdh1} ch^{q^{s+l}}&=d^{q^n}g^{q^{s+l}},\\
\label{eq:-chdg1} cg^{q^{s+l}}\theta^{q^n} &= d^{q^n}h^{q^{s+l}}\theta.
\end{align}
Furthermore, as the coefficient of $X^{q^{j+l+n}}=X^{q^{3s}}$ in $\varphi_1\circ aX\circ \varphi_2\in \cD_{2,t}(\theta)$ must be $0$ for every $a\in \F_{q^n}$,
\begin{equation}\label{eq:-cgdh2}
ch^{q^l}+ dg^{q^{l+n}}=0.
\end{equation}
By checking the coefficient of $X^{q^{j+2s+l}}=X^{q^{3s}}$ in $\varphi_1\circ \gamma^\rho bX^{q^{2s}}\circ \varphi_2\in \cD_{2,t}(\theta)$ with $b\in \F_{q^n}$, we get
\begin{equation}\label{eq:-chdg2}
cg^{q^{2s+l}}\gamma^{\rho q^l}+ dh^{q^{l}}\gamma^{\rho q^{l+n}}=0.
\end{equation}
Hence, \eqref{eq:-cgdh1}, \eqref{eq:-cgdh2}, \eqref{eq:-chdg1} and \eqref{eq:-chdg2} can be obtained simply by switching $g$ and $h$ in \eqref{eq:cgdh1}, \eqref{eq:cgdh2}, \eqref{eq:chdg1} and \eqref{eq:chdg2}, respectively. This finishes the proof of the necessity part of (b).
After a careful check of the previous calculations, we can see that if $c$, $d$, $g$, $h$, $\rho$ and $l$ satisfy \eqref{eq:cgdh1}, \eqref{eq:chdg1}, \eqref{eq:cgdh2} and \eqref{eq:chdg2} simultaneously, then the map $(\varphi_1, \varphi_2, \rho)$ is indeed an equivalence map between $\cD_{2,s}(\gamma)$ and $\cD_{2,t}(\theta)$. Therefore condition (b) is also sufficient.
For the case (d) in which $s\equiv -t \pmod{4}$, the proof is the same. For $j+s+l\equiv 0\pmod{2n}$, we can also get the same equations \eqref{eq:cgdh1} and \eqref{eq:chdg1}. However, now $g$ and $h$ are switched in \eqref{eq:cgdh2} and \eqref{eq:chdg2}. We omit the details of these calculations.
\end{proof}
\begin{remark}
The conditions in (b) and (d) can indeed be fulfilled. For instance, let $q=3$, $s=1$ and $\gamma=\theta=\omega$, where $\omega$ is a root of $X^4+2X^3+2\in \F_{q}[X]$. Taking $l=0$, $c=1$, $d=\omega^{36}$, $g=\omega^2$, $h=\omega^{54}$ and $\rho=\mathrm{id}$, we get an equivalence map from $\cD_{2,1}(\gamma)$ to itself via condition (b).
\end{remark}
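The data of this remark can be checked mechanically. The short Python sketch below (ours) realizes $\omega$ as the class of $X$ in $\F_3[X]/(X^4+2X^3+2)$ and verifies the four equations of condition (b) with $n=2$, $s=t=1$, $l=0$ and $\rho=\mathrm{id}$.
\begin{verbatim}
# Verify the Remark: c = 1, d = w^36, g = w^2, h = w^54, l = 0, rho = id
# satisfy the four equations of condition (b) for gamma = theta = w.
P = 3
def mul(x, y):
    prod = [0] * 7
    for i, u in enumerate(x):
        for j, v in enumerate(y):
            prod[i + j] = (prod[i + j] + u * v) % P
    for d in (6, 5, 4):               # X^4 = X^3 + 1 over F_3
        prod[d - 1] = (prod[d - 1] + prod[d]) % P
        prod[d - 4] = (prod[d - 4] + prod[d]) % P
    return tuple(prod[:4])
def fpow(x, e):                       # x^e by square and multiply
    r = (1, 0, 0, 0)
    while e:
        if e & 1:
            r = mul(r, x)
        x, e = mul(x, x), e >> 1
    return r
def neg(x):                           # additive inverse
    return tuple((-u) % P for u in x)
w = (0, 1, 0, 0)                      # omega, a root of X^4 + 2X^3 + 2
c, d, g, h = fpow(w, 0), fpow(w, 36), fpow(w, 2), fpow(w, 54)
theta = gamma = w
q = P
assert mul(c, fpow(g, q)) == mul(fpow(d, q**2), fpow(h, q))
assert mul(mul(c, fpow(h, q)), fpow(theta, q**2)) == \
       mul(mul(fpow(d, q**2), fpow(g, q)), theta)
assert mul(c, g) == neg(mul(d, fpow(h, q**2)))
assert mul(mul(c, fpow(h, q**2)), gamma) == \
       neg(mul(mul(d, g), fpow(gamma, q**2)))
print("all four equations of condition (b) hold")
\end{verbatim}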
\begin{remark}
By Theorem \ref{th:equivalence} (a) and Theorem \ref{th:equivalence_n=k} (a) (b), the automorphism group of an MRD code $\cD_{k,s}(\gamma)$ can also be determined.
\end{remark}
Recall that in Corollary \ref{coro:inequivalence}, the equivalence between $\cD_{n,s}(\gamma)$ and $\cH_{n,t}(\eta, n)$ is the unique open case. Finally, we settle this problem by using the same approach as in the proofs of Theorems \ref{th:equivalence} and \ref{th:equivalence_n=k}.
\begin{theorem}\label{th:n=k_equivalence_D_H}
Let $n,s,t\in \Z^+$ satisfy $\gcd(2n,s)=\gcd(2n,t)=1$. Let $\gamma$ and $\eta$ be in $\F^*_{q^{2n}}$ such that $N(\gamma)$ is a non-square in $\F_q$ and $N(\eta)\neq 1$. Then $\cD_{k,s}(\gamma)$ and $\cH_{k,t}(\eta, h)$ are not equivalent for any $k$ and $h$.
\end{theorem}
\begin{proof}
By Corollary \ref{coro:inequivalence}, we only need to consider the case $k=n=h$. Assume that $(\varphi_1, \varphi_2,\rho)$ defines an equivalence map from $\cD_{n,s}(\gamma)$ to $\cH_{n,t}(\eta, n)$. As we are going to show that such a map never exists, we may assume without loss of generality that $\rho=\mathrm{id}$; otherwise we consider the equivalence map from $\cD_{n,s}(\gamma^\rho)$ to $\cH_{n,t}(\eta, n)$.
We separate our proof into two parts depending on the value of $n$.
(a) When $n\geq 3$, the proof is quite similar to that of Theorem \ref{th:equivalence}. By Lemma \ref{lm:equivalence_map_D_H}, we can assume that $\varphi_1=dX^{q^l}$ and $\varphi_2 = g X^{q^j}$ for some $d, g\in \F_{q^{2n}}^*$ and $l,j\in \{0,\cdots, 2n-1\}$.
For arbitrary $c_i\in \F_{q^{2n}}$ with $i\in \{1,\cdots, n-1 \}$,
\[ \varphi_1\circ c_iX^{q^{is}} \circ \varphi_2 = dc_i^{q^l} g^{q^{is+l}} X^{q^{is+l+j}},\]
which should belong to $\cH_{n,t}(\eta, n)$. It follows that $is+l+j\in \{t,2t, \cdots, (n-1)t\}$. As $\gcd(2n,s)=\gcd(2n,t)=1$, we can assume that $s\equiv rt\pmod{2n}$, whence
\[ \{irt+l+j : i =1,2,\cdots, n-1 \}= \{t,2t, \cdots, (n-1)t\}. \]
It is straightforward to see that $r=1$ and $l+j\equiv 0 \pmod{2n}$, or $r=-1$ and $l+j\equiv nt \pmod{2n}$.
Whether $r=1$ or $-1$, applying $(\varphi_1, \varphi_2, \mathrm{id})$ to $aX$, we see that one of the coefficients of $X$ and $X^{q^{kt}}$ is zero and the other one is a function of $a$. This contradicts the assumption that $\varphi_1\circ aX\circ \varphi_2\in \cH_{n,t}(\eta, n)$ for every $a\in \F_{q^n}$.
(b) When $n=2$, it is clear that $s$ and $t$ are congruent to $\pm 1$ modulo $2n$. In fact, it is sufficient to consider the case $t=s$, because $\cH_{n,-s}(\eta, n)$ is equivalent to $\cH_{n,s}(1/\eta^{q^{2s}},n)$.
As the middle and right nuclei of both $\cD_{n,s}(\gamma)$ and $\cH_{n,t}(\eta, n)$ are $\F_{q^n}$, we can assume that $\varphi_1=cX^{q^l} + dX^{q^{l+n}}$ and $\varphi_2=gX^{q^j} + hX^{q^{j+n}}$. Our first goal is to show that $\varphi_1$ and $\varphi_2$ must be monomials.
Assume, by way of contradiction, that $c,d,g,h$ are all nonzero. Plugging $i=1$ and $c_i=w$ into \eqref{eq:binomials}, we get
\begin{align}
\label{eq:binomials-final} \varphi_1\circ wX^{q^s} \circ \varphi_2 = &(cg^{q^{s+l}} w^{q^l} + dh^{q^{s+l+n}}w^{q^{l+n}})X^{q^{j+s+l}}\\
\nonumber & + (ch^{q^{s+l}} w^{q^l} + dg^{q^{s+l+n}}w^{q^{l+n}}) X^{q^{j+s+l+n}},
\end{align}
which should belong to $\cH_{n,s}(\eta, n)$ for all $w\in \F_{q^{2n}}$.
As in the proof of Theorem \ref{th:equivalence_n=k}, we see that $j+s+l$ can only take two possible values: $j+s+l\equiv 0 \pmod{2n}$ or $j+s+l\equiv n \pmod{2n}$.
If $j+s+l\equiv 0 \pmod{2n}$, from
$\varphi_1\circ wX^{q^s} \circ \varphi_2\in \cH_{n,t}(\eta, n)$, we derive
\[\eta(cg^{q^{s+l}} w^{q^l} + dh^{q^{s+l+n}}w^{q^{l+n}})^{q^n} = ch^{q^{s+l}} w^{q^l} + dg^{q^{s+l+n}}w^{q^{l+n}}\]
for every $w\in \F_{q^{2n}}$, which means
\[\left\{
\begin{array}{l}
\eta c^{q^{n}}g^{q^{3s+l}} = dg^{q^{3s+l}},\\
\eta d^{q^n}h^{q^{s+l}}= ch^{q^{s+l}}.
\end{array}
\right. \]
As we have assumed that $g$ and $h$ are both nonzero, the two equations above imply that $\eta c^{q^n}=d$ and $\eta d^{q^n}=c$. Hence $\eta^{q^n+1}=1$, which implies that $N(\eta)= \eta \eta^q\eta^{q^2}\eta^{q^3}= (\eta \eta^{q^2})(\eta \eta^{q^2})^q=1$, contradicting the assumption that $N(\eta)\neq 1$.
For $j+s+l\equiv n\pmod{2n}$, the proof is analogous and we omit it.
Therefore we have proved that $\varphi_1=dX^{q^l}$ and $\varphi_2 = g X^{q^j}$ for some $d, g\in \F_{q^{2n}}^*$ and $l,j\in \{0,\cdots, 3\}$. As in the case $n\ge 3$ proved in part (a), it is routine to expand $\varphi_1 \circ aX \circ \varphi_2$ and to check that it cannot belong to $\cH_{n,t}(\eta, n)$. Hence there is no equivalence map from $\cD_{n,s}(\gamma)$ to $\cH_{n,t}(\eta, n)$.
\end{proof}
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
Let $BZ_{n}$ denote the classifying space of the cyclic group $Z_{n}$. The $K$-ring of $BZ_{n}$ is given by $K(BZ_{n})=Z\left[ \mu \right] \diagup ((1+\mu )^{n}-1)$ where $\mu =\eta -1$ is the reduction of the tautological complex line bundle (Hopf bundle) over $BZ_{n}.$ When $n$ is odd, the $KO$-ring of $BZ_{n}$ is described in the following way: Let $w=r(\mu )$ be the realification of $\mu $; below we write $\omega $ and $w$ interchangeably. Then $KO(BZ_{n})=Z\left[ w\right] \diagup (wf_{n}(w))$ where $f_{n}(\cdot )$ is the polynomial
\begin{equation*}
f_{n}(w)=n+\sum_{j=1}^{\frac{n-3}{2}}\frac{n(n^{2}-1^{2})(n^{2}-3^{2})\cdots (n^{2}-(2j-1)^{2})}{2^{2j}\,(2j+1)!}\,w^{j}+w^{\frac{n-1}{2}}.
\end{equation*}
The topological $K$-theory over the real numbers is a little more involved when $n$ is even; see [2] for details.
From now on, for simplicity, let $n=p$ be an odd prime number, although the very simple idea of this paper extends to all natural numbers. In this case, the topological $K$-theory of the lens spaces is very well studied; see [1].
In this note, we will define a reduction, called ``complete reduction'', for the relations of $\mu $ and $\omega $ coming from the generators of the principal ideals of the above rings. The complete reduction is geometrically the smallest way of writing these relations: it respects the Atiyah-Hirzebruch spectral sequence and detects the group cohomology of $Z_{p}$ by means of the filtrations of that spectral sequence.
In order to obtain the first few terms of the complete reduction, we use a division trick, and this gives some invariants (numbers), which we name $K_{n}$ for the complex case and $M_{n}$ for the real case. They are interesting not only for the $K\Lambda $-rings of lens spaces in topological $K$-theory, but also for the $R\Lambda $-rings of cyclic groups in representation theory and for cyclotomic rings of integers in number theory, due to the equivalence of the theories $K(BZ_{p})$, $R(Z_{p})$ and $Z\left[ \exp \frac{2\pi i}{p}\right] $.
\section{K-Reduction}
By iteration, the relation $(1+\mu )^{p}-1=0$ can be put in the form
\begin{equation*}
p\mu =-\mu ^{p}+\frac{p-1}{2}\mu ^{p+1}+a_{2}\mu ^{p+2}+\cdots +a_{n}\mu ^{p+n}+\cdots
\end{equation*}
\textrm{Definition 1.1. }The relation above is called completely reduced if $\left\vert a_{n}\right\vert \leq \frac{p-1}{2}$ for all $n\geq 2.$
Obviously the complete reduction is unique.
\textrm{Example 1.2.} For $p=3,$ the complete reduction is periodic with period 2 and repeating coefficients $-1,1.$ For $p=5,$ the complete reduction is periodic with period 6 and repeating coefficients $-1,2,-2,1,0,0.$ For $p=7,$ the first 28 coefficients of the complete reduction are as below, the last line being the still unreduced tail:
\begin{eqnarray*}
7\mu &=&-\mu ^{7}+3\mu ^{8}+3\mu ^{9}+2\mu ^{10}+2\mu ^{11}+3\mu ^{12}+\mu
^{13}-2\mu ^{14}+0.\mu ^{15}+\mu ^{16} \\
&&+\mu ^{17}-2\mu ^{18}-2\mu ^{19}+0.\mu ^{20}+3\mu ^{21}-\mu ^{22}+2\mu
^{23}+\mu ^{24}-3\mu ^{25} \\
&&-\mu ^{26}+3\mu ^{27}+0.\mu ^{28}+2\mu ^{29}+0.\mu ^{30}+2\mu ^{31}-\mu
^{32}-2\mu ^{33}+\mu ^{34} \\
&&-653\mu ^{35}-3662\mu ^{36}-5800\mu ^{37}-4373\mu ^{38}-1651\mu
^{39}-253\mu ^{40}
\end{eqnarray*}
Quite surprisingly, we could not observe periodicity of the coefficients of the complete reduction for $p=7.$ One should carry out further reduction to decide whether it is periodic or not. Although this experimentation left us with the opposite belief, we pose, as an interesting open problem in number theory, the following probably false conjecture.
\textbf{Conjecture 1.3.}\textrm{\ }The complete reduction is periodic for
all odd prime numbers.
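All computations in this section are easily automated. The following Python sketch is ours and is only an illustration: it obtains an (unbalanced) expansion of $p\mu $ by formal power-series division, as formalized in Definition 1.4 below, and then enforces $\left\vert a_{n}\right\vert \leq \frac{p-1}{2}$ by a balanced-residue substitution, which is our reading of the reduction procedure; it reproduces Example 1.2 above.
\begin{verbatim}
# Complete reduction of p*mu = sum a_n mu^{p+n} for an odd prime p.
from fractions import Fraction
from math import comb

def K_coeffs(p, N):
    """First N coefficients of -p*mu / ((1+mu)^p - 1 - mu^p)."""
    h = [Fraction(comb(p, j + 1)) for j in range(min(p - 1, N))]
    h += [Fraction(0)] * (N - len(h))          # h = g(mu)/mu, h[0] = p
    K = [Fraction(0)] * N
    num = [Fraction(-p)] + [Fraction(0)] * (N - 1)
    for n in range(N):                         # long division: K * h = num
        K[n] = (num[n] - sum(K[i] * h[n - i] for i in range(n))) / h[0]
    return K

def complete_reduction(p, N):
    """Balanced coefficients a_n, |a_n| <= (p-1)/2, for n < N."""
    K = K_coeffs(p, N + p)
    c = list(K)                                # start from the K-expansion
    for n in range(N):
        a = ((c[n] + p // 2) % p) - p // 2     # balanced residue of c[n]
        e = (c[n] - a) // p                    # excess, substituted via
        c[n] = a                               #  p mu^{p+n} = mu^{p+n-1}(p mu)
        for k in range(N - n):
            c[n + p - 1 + k] += e * K[k]
    return [int(x) for x in c[:N]]

assert complete_reduction(3, 8) == [-1, 1, -1, 1, -1, 1, -1, 1]
assert complete_reduction(5, 12) == [-1, 2, -2, 1, 0, 0] * 2
print(complete_reduction(7, 28))    # matches the table of Example 1.2
\end{verbatim}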
Next, we want to express the first few coefficients of the complete reduction in terms of $p.$ We introduce a very simple idea, probably used many times before in history: we will use the division trick from the computations above.
\textrm{Definition 1.4. }Define integers $K_{p,n}$ by
\begin{equation*}
\sum_{n=0}^{\infty }K_{p,n}\mu ^{n}=\frac{-p\mu }{\left( 1+\mu \right)
^{p}-1-\mu ^{p}}
\end{equation*}
Let us denote $K_{p,n}$ simply by $K_{n}$ when $p$ is understood. Then $p\mu =\sum_{n=0}^{\infty }K_{n}\mu ^{p+n}$ is a reduction of the relation of $\mu .$ But, of course, it is not the complete reduction except for the primes $3$ and $5$. On the other hand, the first $p+1$ coefficients of the complete reduction of the relation of $\mu $ are $K_{n} \pmod{p}$, $0\leq n\leq p.$
The numbers $K_{n}$ satisfy a recursive formula. By using this recursive formula, or by direct division, which is the same process, we can compute $K_{n}$ for all $n\leq p-2$ as a polynomial in $p.$ We computed up to $K_{6}$ as below:
\begin{eqnarray*}
K_{1} &=&\frac{p-1}{2},\text{ \ \ for }p\geq 3 \\
K_{2} &=&-\frac{p^{2}-1}{12},\text{ \ \ for }p\geq 4 \\
K_{3} &=&\frac{p^{2}-1}{24},\text{ \ \ for }p\geq 5 \\
K_{4} &=&\frac{(p^{2}-1)(p^{2}-19)}{720},\text{ \ \ for }p\geq 6 \\
K_{5} &=&-\frac{(p^{2}-1)(p^{2}-9)}{480},\text{ \ \ for }p\geq 7 \\
K_{6} &=&-\frac{(p-1)(2p^{5}+122p^{4}-1825p^{3}+8375p^{2}-17617p+15263)}{60480},\text{ \ \ for }p\geq 8
\end{eqnarray*}
The author does not know whether these, probably very well-known, polynomials are used somewhere in number theory. For large primes, we can use these tabulated formulas to find at least the first $p-1$ terms of the complete reduction for the prime number $p$.
\textrm{Example 1.5. }For $p=23,$ $K_{1}=11,$ $K_{2}=-44\equiv 2,$ $K_{3}=22\equiv -1,$ $K_{4}=374\equiv 6,$ $K_{5}=-572\equiv 3,$ $K_{6}=-10494\equiv -6$ $\pmod{23}$, and hence the first seven terms of the complete reduction are
\begin{equation*}
23\mu =-\mu ^{23}+11\mu ^{24}+2\mu ^{25}-\mu ^{26}+6\mu ^{27}+3\mu ^{28}-6\mu ^{29}+\cdots
\end{equation*}
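This example can be replayed with integer arithmetic only; the following short sketch (ours) evaluates the tabulated closed forms at $p=23$ and reduces them to balanced residues:
\begin{verbatim}
p = 23
K = [-1, (p - 1) // 2, -(p**2 - 1) // 12, (p**2 - 1) // 24,
     (p**2 - 1) * (p**2 - 19) // 720,
     -(p**2 - 1) * (p**2 - 9) // 480,
     -(p - 1) * (2*p**5 + 122*p**4 - 1825*p**3 + 8375*p**2
                 - 17617*p + 15263) // 60480]
assert K[1:] == [11, -44, 22, 374, -572, -10494]
balanced = [((k + p // 2) % p) - p // 2 for k in K]
assert balanced == [-1, 11, 2, -1, 6, 3, -6]   # coefficients of 23*mu
\end{verbatim}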
Here, we recall the famous Bernoulli numbers $B_{n}$. It immediately follows from the definitions that $B_{n}=\lim_{p\rightarrow \infty }\frac{-n!\,K_{p,n}}{p^{n}}$; for instance, $-2!\,K_{p,2}/p^{2}=(p^{2}-1)/(6p^{2})\rightarrow 1/6=B_{2}$.
\section{KO-Reduction}
By iteration, the relation $wf_{p}(w)=0$, explicitly,
\begin{equation*}
p\omega +\sum_{j=2}^{\frac{p-1}{2}}\frac{p(p^{2}-1^{2})(p^{2}-3^{2})\cdots (p^{2}-(2j-3)^{2})}{2^{2j-2}\,(2j-1)!}\,\omega ^{j}+\omega ^{\frac{p+1}{2}}=0
\end{equation*}
can be written in the form
\begin{equation*}
p\omega =-\omega ^{\frac{p+1}{2}}+b_{1}\omega ^{\frac{p+3}{2}}+b_{2}\omega ^{\frac{p+5}{2}}+\cdots
\end{equation*}
Similar to the complex case, we call the relation above completely reduced
if $\left\vert b_{n}\right\vert \leq \frac{p-1}{2}$ for all $n\geq 1.$
The complete reduction is clearly unique.
\textrm{Example 2.1. }For $p=3,$ the complete reduction is $3\omega =-\omega ^{2}$: it is periodic of period $1$, all coefficients after the leading $-1$ being $0$. For $p=5,$ one step of the iteration gives $5\omega =-\omega ^{3}+\omega ^{4}+5\omega ^{3}$, and iterating further yields the complete reduction, which is periodic of period $2$ with repeating coefficients $-1,+1.$ For $p=7,$ we did some reduction and found the first $16$ terms of the complete reduction as below, the last line being the unreduced tail:
\begin{eqnarray*}
7\omega &=&-\omega ^{4}+2\omega ^{5}-3\omega ^{6}-3\omega ^{7}+2\omega
^{8}-\omega ^{9}-\omega ^{10}-3\omega ^{11} \\
&&-\omega ^{12}-\omega ^{13}+\omega ^{14}+\omega ^{15}+\omega ^{16}+\omega
^{17}-\omega ^{18}-3\omega ^{19} \\
&&-2481\omega ^{20}-1627\omega ^{21}-266\omega ^{22}
\end{eqnarray*}
Again, similar to the complex case, we could not observe periodicity for the prime number $7.$ On the other hand, we conjecture that the complete reduction is periodic in the real case too.
Next, we define some numbers for computing the coefficients of the complete reduction in terms of $p$.
\textrm{Definition 2.2. }Define integers $M_{p,n}$ by
\begin{equation*}
\sum_{n=0}^{\infty }M_{p,n}\omega ^{n}=\frac{-p\omega }{\omega f_{p}(\omega )-\omega ^{\frac{p+1}{2}}}
\end{equation*}
Let us denote $M_{p,n}$ simply by $M_{n}$ when $p$ is understood. Then $p\omega =\sum_{n=0}^{\infty }M_{n}\omega ^{n+\frac{p+1}{2}}$ is a reduction for $\omega .$ Of course, it is not the complete reduction except for the primes $3$ and $5$. On the other hand, the first $\frac{p+1}{2}$ coefficients of the complete reduction of the relation of $\omega $ are $M_{n} \pmod{p}$, $0\leq n\leq \frac{p-1}{2}.$
Clearly $M_{0}=-1$ for all $p.$ We can calculate $M_{n}$ by writing a recursive formula, as we did in the complex case, or by direct division. We obtain formulas for $M_{n}$ in terms of $p$ which are valid for $p\geq 2n+3.$ The next three are
\begin{eqnarray*}
M_{1} &=&\frac{p^{2}-1}{24},\text{ \ \ \ \ \ }p\geq 5 \\
M_{2} &=&-\frac{(p^{2}-1)(7p^{2}+17)}{5760},\text{ \ \ \ \ \ }p\geq 7 \\
M_{3} &=&\frac{(p^{2}-1)(57p^{4}-34p^{2}+169)}{322560},\text{ \ \ \ \ \ }p\geq 9
\end{eqnarray*}
\textrm{Example 2.3. }For $p=23,$ $M_{1}=22\equiv -1,$ $M_{2}=-341\equiv 4,$ $M_{3}=26081\equiv -1$ $\pmod{23}$, and the first four terms of the complete reduction are
\begin{equation*}
23\omega =-\omega ^{12}-\omega ^{13}+4\omega ^{14}-\omega ^{15}+\cdots
\end{equation*}
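As in the complex case, the $M_{p,n}$ come from one formal power-series division. The Python sketch below is ours: it rebuilds the denominator of Definition 2.2 from the displayed relation and checks the division against $M_1$ and $M_2$ above and against the first coefficients of Example 2.1.
\begin{verbatim}
# M_{p,n} by formal division:  -p*w / (w f_p(w) - w^{(p+1)/2}).
from fractions import Fraction
from math import factorial

def M_coeffs(p, N):
    half = (p - 1) // 2
    h = [Fraction(0)] * max(N, half)           # h = denominator / omega
    h[0] = Fraction(p)
    prod = Fraction(p)
    for j in range(2, half + 1):               # h[j-1] = coeff of omega^j
        prod *= p * p - (2 * j - 3) ** 2
        if j - 1 < len(h):
            h[j - 1] = prod / (4 ** (j - 1) * factorial(2 * j - 1))
    M = [Fraction(0)] * N
    num = [Fraction(-p)] + [Fraction(0)] * (N - 1)
    for n in range(N):                         # long division: M * h = num
        M[n] = (num[n] - sum(M[i] * h[n - i] for i in range(n))) / h[0]
    return M

assert M_coeffs(23, 3) == [-1, 22, -341]       # M_1, M_2 of Example 2.3
balanced = [((m + 3) % 7) - 3 for m in M_coeffs(7, 4)]
assert balanced == [-1, 2, -3, -3]             # the start of Example 2.1
\end{verbatim}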
Note also that, by means of the realification map, $M_{n}$ can be expressed in terms of the $K_{n}$. So, if the conjecture is true for the complex case, it is also true for the real case.
Face recognition systems, which identify an individual by her/his face, have been widely used in practical applications such as mobile phone unlocking. However, existing face recognition techniques cannot differentiate between genuine faces (captured from a human) and spoofing faces (captured from faces in images, digital displays, etc.). Most face recognition systems are therefore vulnerable to Presentation Attack (PA), including print attacks and replay attacks. Attackers could bypass face recognition systems by presenting different types of spoofing faces, since face images are readily available to attackers from social platforms, e.g., Facebook and Instagram \cite{patel2016secure}. To guarantee the security of face recognition systems, there are increasing demands for developing FAS techniques.
\begin{figure}[t]
\centering
\includegraphics[width=0.85\linewidth]{figure_face.pdf}
\caption{Examples of Presentation Attack (PA). (a): display attack. The face is in a digital display screen \cite{replayattack}. (b): replay attack \cite{CASIAFASD}. The face is in a video. (c): print attack. The face is in a print photo \cite{patel2016secure}. (d): print attack. The face is in a print photo that is tailored \cite{ROSE}. }\label{fig:face}
\hspace{-5cm}
\end{figure}
\par Traditionally, image descriptors, such as Local Binary Pattern (LBP) and Scale Invariant Feature Transform (SIFT), are utilized to extract features for describing the data from the FAS databases. Recently, with their powerful ability to learn deep representations from data, Convolutional Neural Networks (CNNs) have been successfully exploited in various visual tasks, e.g., object classification \cite{krizhevsky2012imagenet}, face recognition \cite{zhu2013deep}, etc., and have achieved state-of-the-art performances. Attempts to apply CNNs to the FAS have also been reported and have achieved much improvement \cite{Yang2014Learn,krizhevsky2012imagenet,menotti2015deep, Xu2016Learning,sigportLSTMface}.
Although CNN-based methods have shown excellent capacities, it has been pointed out that they are vulnerable to adversarial attack \cite{adv_GoodFellow_2016_CoRR_intriguing}. Under such an attack, a CNN model fails to correctly classify adversarial examples, which are generated by imposing human-invisible perturbations on the original samples. What is more, though adversarial examples are usually manipulated in the digital world, they can still take effect even after a print-and-capture cycle \cite{adv_GoodFellow_2016_CoRR_PhysicalAttack, adv_DawnSong_2018_CVPR_AttackSign, adv_Sharif_2016_CCS_glassFace}. In other words, the adversarial attack can be conducted in the physical world. Worse still, adversarial examples are shown to be transferable. Empirical experiments in \cite{adv_DawnSong_2017_ICLR_Transfer, securekernel2016} and theoretical analysis in \cite{transfer_space} show that adversarial examples can be transferred to attack other models as long as they adopt the same or similar features, even if the classification models are different (Support Vector Machine, Random Forest, etc.). Therefore, it is likely for attackers to generate adversarial-spoofing examples to attack a CNN model for face liveness detection in a face recognition system.
Fortunately, using handcrafted feature-based methods could be a solution. In \cite{securekernel2016, transfer_space}, it is revealed that adversarial examples are non-transferable when the victim model takes input from a feature space different from that of the attacked model. This indicates that using handcrafted features extracted from RGB images as the input of a face anti-spoofing model could be a defense against adversarial-spoofing attacks targeted at CNN-based models. In cybersecurity applications, it is also suggested in \cite{DBLP} that ensembling a diverse pool of models built on different features could improve the security of a cyber system against adversarial attack. Hence, to alleviate the threats of the adversarial attack, handcrafted feature-based methods also deserve exploration.
\par In this paper, we introduce a new feature-based method, the deep forest \cite{gcforest}, to the FAS problem. The deep forest is an advanced synthesis of tree-ensemble methods. It consists of the Grained-Scanning Mechanism (GSM) for learning representations from data and a layer-cascade strategy for further processing the representations. The deep forest has been evaluated on several visual tasks, e.g., face recognition, handwriting recognition, etc., and achieves competitive performance \cite{gcforest}. Since the deep forest is newly published, there are not yet many works using it in applications related to biometrics. To the best of our knowledge, we are the first to introduce the deep forest to the problem of the FAS. However, the performance was not satisfactory in our initial attempt, in which the GSM proposed by \cite{gcforest} was directly used to learn representations for spoofing detection. This unsatisfactory result suggests that the GSM is not competent enough in capturing the cues for face spoofing detection. Inspired by texture analysis \cite{maatta2011face, Yang2013Face, Nosaka2011Feature}, the baseline approach in the research area of the FAS, we propose to employ Local Binary Pattern (LBP) descriptors to construct the representations of spoofing information. Experimental results show that the proposed approach achieves competitive performance.
\par $\bullet$ To the best of our knowledge, this is the first work that introduces the deep forest to the problem of FAS. Our method offers an important reference and a competitive option to those who want to fuse diverse methods into their schemes for system-level security.
\par $\bullet$ We re-devise the representation construction by utilizing the LBP descriptors instead of the GSM. The proposed scheme, which integrates LBP descriptors and the deep forest learning method, achieves better results than the GSM \cite{gcforest}.
\par $\bullet$ The proposed scheme shows competitive performance compared to state-of-the-art approaches. On the IDIAP REPLAY-ATTACK database \cite{replayattack}, 0\% Equal Error Rate (EER) is achieved. Also, extensive experiments have been conducted on the two newly-published databases, the MSU USSA database and the ROSE-YOUTU database. On the MSU database, an EER of 1.56\% is obtained, which is competitive compared to the Patch-based CNN (0.55\% EER) and the Depth-based CNN (2.62\% EER) proposed by \cite{Atoum2018Face}.
\par The rest of the paper is organized as follows: Section \ref{sec: review} presents a brief literature review of approaches to FAS and of forest-related learning methods. The proposed scheme is elaborated in Section \ref{sec:method}. The performance of the proposed scheme is evaluated in Section \ref{sec:exp}. Finally, Section \ref{sec:end} concludes this paper.
\section{Related Works} \label{sec: review}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.85\linewidth]{figure_disparities.pdf}
\caption{Examples of genuine faces and spoofing faces from the MSU USSA database \cite{patel2016secure}. Columns (a), (b), (c) are the genuine face, the display face and the printed photo face, with their corresponding magnified regions, respectively.}\label{fig:texture}
\end{center}
\hspace{-20cm}
\end{figure}
In this section, the literature on both traditional handcrafted feature-based methods and CNN-based methods in the problem of FAS is first reviewed, followed by the tree-ensemble learning methods.
\subsection{The Existing Works on Face Anti-Spoofing}
\subsubsection{The Traditional Methods}
Most of the traditional FAS approaches focus on designing handcrafted features and learning classifiers with traditional learning methods, e.g., the Support Vector Machine (SVM) \cite{Atoum2018Face}. Texture analysis is one of the main approaches to spoofing face detection, since there are inherent texture disparities between genuine faces and the spoofing faces of the print attack or of the replay attack. As can be seen in Fig.~\ref{fig:texture}, images of spoofing faces, compared to genuine faces, usually have lower quality and contain visual artifacts because of the recapturing process. These disparities can be described effectively by texture descriptors. Relevant methods aimed at capturing these disparities in the Fourier spectrum or in the spatial domain have been reported. Ref.\cite{Tan2010Face} uses Difference-of-Gaussian (DoG) features to describe the disturbance of frequency resulting from the recapturing. Besides, the Local Phase Quantization (LPQ), which analyzes distortion through the phase, is also discussed by \cite{Gragnaniello2015An}. In addition, in the spatial domain, a significant number of research works employ LBP-based features to describe the disparities from local texture information \cite{maatta2011face, Yang2013Face, Nosaka2011Feature}. Analogously, methods that utilize the Scale-Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF) \cite{Boulkenafet2017Face} are also reported. Besides, to utilize motion information from the temporal domain, the texture-based methods mentioned above have been extended to three orthogonal planes, e.g., LBP-TOP \cite{Pereira2012LBP} and LPQ-TOP \cite{Arashloo2017Face}. Moreover, the color information of spoofing faces, which is less abundant after the distortions of the recapturing process, is essential in discriminating spoofing faces. Therefore, color texture methods are proposed in \cite{Color2017Face} by extracting features from separate channels of a certain color space (e.g., extracting features of images in the HSV space from the three components H, S, and V individually) using the aforementioned methods.
\begin{figure}[t!]
\centering
\includegraphics[width=0.9\linewidth]{figure3.pdf}
\caption{The illustration of how the GSM learns representations for local information \cite{gcforest}. First, a sliding window with a certain stride is used to scan raw pixels. Then, all the scanned patches are fed to forests, a random forest (black) and a completely-random forest (rose). Finally, all the output results from the forests will be concatenated as the representations of the raw pixel data. For full details about the GSM, please refer to \cite{gcforest}.} \label{fig:scanning}
\hspace{-20cm}
\end{figure}
\subsubsection{The Deep-learning Based Methods}
Recently, CNN-based methods, with their powerful ability to learn deep representations from data, have attracted much research attention. Ref.\cite{Yang2014Learn} trains a CNN to learn deep representations for face anti-spoofing based on the AlexNet architecture \cite{krizhevsky2012imagenet}. After that, the feasibility of CNNs in learning deep representations for biometrics, including face anti-spoofing, was further demonstrated by \cite{menotti2015deep}, and more CNN-based methods are increasingly reported \cite{Atoum2018Face, Haoliang2}. In addition, efforts to exploit Long Short-Term Memory networks (LSTM) to utilize temporal information from frames of videos are also reported in \cite{Xu2016Learning,sigportLSTMface}.
\subsection{The Tree-Ensemble Methods}
\begin{figure}[t!]
\centering
\includegraphics[width=0.958\linewidth]{figure2.pdf}
\caption{Illustrations of constructing multi-scale representations. Panels (a) and (b) illustrate how the MGSM and the proposed scheme construct representations on multiple scales, respectively.}\label{fig:multiscale}
\end{figure}
The tree-ensemble methods are based on decision trees. The random decision forest was first proposed as a solution to the dilemma between performance and generalization of the decision tree \cite{Ho1998The}. It was later ameliorated into the Random Forest (RF) by introducing feature sampling and data bootstrapping \cite{Breiman2001Random}. The Completely-Random tree Forest (CRF) has a mechanism that is much more ``random'' than the RF, since it splits the nodes randomly, regardless of any criterion \cite{Liu2008Spectrum}.
Both the RF and the CRF project the original features into subspaces by sampling them. This reduces the dimensions of the features to process, which facilitates the handling of high-dimensional features \cite{gcforest}. The Gradient Boosting Decision Tree (GBDT) methods introduce loss functions for training, which are not included in the RF and the CRF. GBDT models are trained by boosting the gradients of the loss. An effective implementation of GBDT is proposed by \cite{xgboost}, namely the XGBoost. The XGBoost provides a more flexible and powerful scheme that approximates non-differentiable loss functions by the first two terms of their Taylor expansion, so users are enabled to define arbitrary loss functions in their problems. The XGBoost has achieved superior performance among many GBDT implementations.
\par The deep forest, proposed by \cite{gcforest}, can achieve state-of-the-art performance compared to CNN-based methods on several visual tasks, as reported by \cite{gcforest}. It uses a Grained-Scanning Mechanism (GSM) to learn representations from data and a cascade strategy for further processing the representations. The RF is a basis of the GSM of the deep forest, and the CRF offers another option. By combining different types of forests, the diversity of the representations learned by the deep forest can be improved \cite{gcforest}. The XGBoost and other implementations of GBDT can also serve as bases of the deep forest. More details about the deep forest can be found in \cite{gcforest}. Unlike CNNs, whose structures are fixed during the training process, the number of cascade levels of the deep forest depends on the scale of the data and grows as the training proceeds. Once the output scores (accuracy, loss, etc.) begin to converge, the growth stops. Hence, the complexity of the model can be adaptively adjusted according to the scale of the database. This ensures that the deep forest can maintain satisfactory results even on a small-scale database \cite{gcforest}.
\section{The Proposed Deep Forest with LBP Features} \label{sec:method}
\par The LBP is selected for two reasons. Firstly, LBP features cannot be reconstructed back into RGB pixel images, which is helpful against the adversarial-spoofing attack. Secondly, LBP is designed for texture description, which is appropriate for the face spoofing problem. This section will first elaborate on how to use the LBP descriptors \cite{menotti2015deep} to leverage texture information. Then, the proposed scheme integrating the deep forest and the LBP features will be presented.
\begin{figure*}[t]
\centering
\includegraphics[width=0.8\linewidth]{figure.pdf}
\caption{The procedure of the deep forest learning with multi-scale representations. The left part contains the LBP representations on three different scales, denoted by $S_{LBP}^1$, $S_{LBP}^2$ and $S_{LBP}^3$, respectively. The right part illustrates the cascading strategy. The ``black'' and ``red'' boxes are the output results of each forest from the previous layer. They will be concatenated with features on different scales in different layers as the input of the next layers.}\label{fig:architecture}
\hspace{-5cm}
\end{figure*}
\subsection{The LBP-based Features for Texture Analysis}
\par The Local Binary Pattern (LBP) descriptor proposed by \cite{Ojala2002Multiresolution} is a grey-scale descriptor that is effective for texture description. By calculating the LBP value of the binary pattern at each pixel and accumulating the occurrences into histograms, LBP features can be extracted to represent local texture information. The calculation of LBP can be described as
\[LBP_{P, R} = \sum_{n=1}^P {\rm sgn} (r_n-r_c) \times 2^{n-1} \tag{1}\]
where ${\rm sgn}(\cdot)$ is the thresholding function that outputs $1$ if the operand is non-negative and $0$ otherwise, $r_c$ denotes the intensity value of the central pixel, and $r_n (n=1, 2,..., P)$ denotes the intensity values of the $P$ adjacent pixels distributed symmetrically on a circle of radius $R (R > 0)$. An image can be divided into several patches, and LBP histograms are calculated for each patch. Then, all the histograms can be concatenated into a feature vector to represent the image in the texture domain. To fully exploit the color information, color LBP features are employed in this paper by referring to \cite{Color2017Face}: LBP features are extracted from each component of a color space individually (e.g., Red, Green, Blue in the RGB space or Hue, Saturation, Value in the HSV space) and the obtained results are concatenated into one feature vector \cite{Color2017Face}. These features based on LBP descriptors are called LBP features in this paper.
\par The GSM learns the representations of local information from adjacent pixels within a certain window, and similarly, the extraction of LBP-based features also considers the local information. On the other hand, the significant contrast between employing the GSM~\cite{gcforest} (illustrated in Fig.~\ref{fig:scanning}) and the LBP features lies in the representations constructing. The GSM constructs representations by learning from data while the LBP features construct representations with the domain knowledge of a researcher.
\subsection{The Proposed Multi-scale Representations}
Firstly, we propose to use the multi-scale LBP descriptor to construct multi-scale representations. Taking multiple scales into account is important because the image samples come from practical capturing conditions, so the textural disparities vary. For example, although both Fig.~\ref{fig:texture}~(b) and (c) are spoofing faces, they are captured under different conditions, i.e., different devices, different circumstances, etc., so they show different texture appearances in both pattern and scale. Therefore, local information on different scales should be taken into consideration.
As is illustrated in Fig.~\ref{fig:multiscale}~(a), the Multi-Grained Scanning Mechanism (MGSM) \cite{gcforest} is used to learn representations from data on multiple scales. By changing the size of the sliding windows and conducting the GSM, relationships among the pixels on different scales will be learned, and local information on different scales can be leveraged \cite{gcforest}. On the other hand, Fig.~\ref{fig:multiscale}~(b) illustrates our proposed scheme. In the proposed scheme, there is a sliding window for scanning patches of pixels, and ${\rm LBP}_{P, R}^{u2}$ descriptors \cite{Ojala2002Multiresolution} are used to obtain LBP features. By changing the parameters $P$ and $R$, representations on different scales can be obtained. To utilize color information, the color LBP features are adopted in the proposed scheme to construct representations on different color channels and scales according to \cite{Color2017Face}. One of the differences between the MGSM and our proposed scheme in constructing multi-scale representations lies in the selection of sliding windows. With the MGSM, windows of different sizes are needed to learn multi-scale representations, while multi-scale representations based on LBP features can be obtained with a window of fixed size. This is because a representation on a certain scale learned by the GSM depends only on the size of the window, while, when exploiting LBP descriptors, the representation on a certain scale is also determined by the parameters of the LBP descriptors, i.e., $P$ and $R$. Multiple sizes of windows are not adopted in this paper, for the consideration that when small windows are used to extract LBP histograms, many of the bins are empty, and the obtained features will be high-dimensional and sparse, i.e., less informative.
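The extraction just described can be sketched as follows with scikit-image (an illustration of ours, not the exact implementation; the \texttt{nri\_uniform} method realizes the $u2$ mapping and yields the 59-, 243- and 555-bin histograms used later in our experimental setup):
\begin{verbatim}
# Sketch of the proposed multi-scale color LBP representations.
import numpy as np
from skimage.feature import local_binary_pattern

SCALES = [(8, 1, 59), (16, 2, 243), (24, 3, 555)]    # (P, R, u2 bins)

def lbp_scale(channel, P, R, bins, win=32, stride=16):
    """One color channel (128x128) -> 49 concatenated patch histograms."""
    codes = local_binary_pattern(channel, P, R, method="nri_uniform")
    feats = []
    for y in range(0, channel.shape[0] - win + 1, stride):
        for x in range(0, channel.shape[1] - win + 1, stride):
            hist, _ = np.histogram(codes[y:y + win, x:x + win],
                                   bins=bins, range=(0, bins))
            feats.append(hist / max(hist.sum(), 1))  # normalized histogram
    return np.concatenate(feats)

def color_lbp(img):
    """img: HxWx3 (e.g. HSV or YCbCr) -> [S^1, S^2, S^3]."""
    return [np.concatenate([lbp_scale(img[..., c], P, R, b)
                            for c in range(3)])
            for (P, R, b) in SCALES]

# S1, S2, S3 = color_lbp(hsv_image)  # lengths 49*59*3, 49*243*3, 49*555*3
\end{verbatim}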
\par Secondly, instead of concatenating all the representations on these three scales into a single feature vector, as performed in some traditional methods \cite{Color2017Face, patel2016secure}, a circular cascading strategy is adopted in our proposed scheme by referring to \cite{gcforest}. This strategy is shown in Fig.~\ref{fig:architecture}. The $n$-th layer is identified as ${\rm L}_n$. The representations on the three scales are denoted by $S_{LBP}^1$, $S_{LBP}^2$ and $S_{LBP}^3$, respectively. They are individually fused with the output of each layer of the deep forest, and each layer focuses on the representation on a certain scale. The $S_{LBP}^1$ is fed to the first layer of the deep forest and fused with the output of ${\rm L_1}$. Then, the representation $S_{LBP}^2$ is fused with the output of ${\rm L_1}$ and becomes the input of ${\rm L_2}$; $S_{LBP}^3$ and ${\rm L_3}$ are treated likewise. It should be noted that this cascading process is circular. For instance, in the next circle, $S_{LBP}^1$ is concatenated in ${\rm L_4}$; in the $k$-th circle, $S_{LBP}^1$ is concatenated in ${\rm L}_{3k-2}, k\in \mathbb{N}$. Actually, the choices of the scales and of the cascade strategy are flexible according to the task.
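A minimal sketch of this circular cascade is given below. It is our own simplification: scikit-learn's RandomForestClassifier and ExtraTreesClassifier stand in for the random and completely-random forests of the deep forest, the out-of-fold probabilities play the role of the layer outputs, and the convergence-based stopping rule is replaced by a fixed number of layers.
\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.model_selection import cross_val_predict

def layer_output(X, y, n_each=4, n_trees=500):
    """Out-of-fold class-probability vectors of 4 RFs + 4 CRF stand-ins."""
    outs = []
    for seed in range(n_each):
        for Forest in (RandomForestClassifier, ExtraTreesClassifier):
            clf = Forest(n_estimators=n_trees, random_state=seed, n_jobs=-1)
            outs.append(cross_val_predict(clf, X, y, cv=3,
                                          method="predict_proba"))
    return np.hstack(outs)                     # (n_samples, 8 forests x 2)

def circular_cascade(S, y, n_layers=6):
    """S = [S1, S2, S3]; layer L_n receives the scale S[(n-1) mod 3]
    concatenated with the output of layer L_{n-1}."""
    prev = np.zeros((len(y), 0))
    for n in range(n_layers):
        X = np.hstack([S[n % 3], prev])
        prev = layer_output(X, y)
    return prev.reshape(len(y), -1, 2).mean(axis=1)   # final class scores
\end{verbatim}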
\section{Experiments}\label{sec:exp}
In this section, a brief introduction to the four databases on which the experiments are conducted is first given. Then, the details of the experimental settings are presented. Finally, the experimental results are reported and discussed.
\subsection{Databases}
In our experiments, several representative databases have been employed: two benchmark databases, CASIA FASD \cite{CASIAFASD} and IDIAP REPLAY-ATTACK \cite{replayattack}, and two newly-published databases, the ROSE-YOUTU LIVENESS database \cite{ROSE} and the MSU USSA database \cite{patel2016secure}. The IDIAP, CASIA and ROSE-YOUTU databases consist of videos covering replay attack, display attack, and print attack. The MSU database only contains images, i.e., it only includes display attack and print attack. More specifically, the scale of each database is summarized below.
\par The IDIAP REPLAY-ATTACK database \cite{replayattack} comprises about 50 subjects. There are 60 videos of genuine faces and 300 videos of spoofing faces in the training set. In the testing set, there are 80 videos of genuine faces and 400 videos of spoofing faces.
\par The CASIA database \cite{CASIAFASD} consists of 600 videos from 50 subjects, 20 subjects for the training set and 30 subjects for the testing set. For each subject, there are 3 videos of genuine faces and 9 videos of spoofing faces.
\par The ROSE-YOUTU LIVENESS database \cite{ROSE} contains 10 and 12 subjects in the training set and the testing set, respectively. For each subject, there are 180 videos, covering various types of attacks and lighting conditions. This database is the latest database concerning PA, and it includes a tailored print attack: as can be seen in Fig.~\ref{fig:face}, the background is not included in the recapturing process, making this database more challenging.
\par The MSU USSA database \cite{patel2016secure} includes 1,000 genuine faces (about 1,000 subjects) and about 6,000 spoof face images. There is no division of the training set and the testing set, so a 5-fold validation protocol is used to evaluate the performance of the FAS methods.
\subsection{Experimental Setups}\label{subsec:Preprocessing}
\par In the first place, it should be highlighted that some data preprocessing has been performed in our experiments. When conducting experiments on the MSU and IDIAP databases, the whole image frames are taken as the inputs so as to make full use of the information. This is because the PA places the spoofing media near the camera to achieve a high recapturing quality, and the ``background'' is also recaptured (as shown in (a) and (c) of Fig.~\ref{fig:face}). The recaptured background provides useful information which is beneficial to spoofing face detection. However, in the CASIA database and the ROSE-YOUTU database, the PA is far away from the camera and hence the background is not included in the recapturing process (as shown in (b) and (d) of Fig.~\ref{fig:face}). Under this circumstance, if the whole frame were used as the input, unnecessary interference (from the genuine background) would be introduced. Therefore, the Viola-Jones method \cite{viola} is utilized to detect the faces in frames from the ROSE-YOUTU and CASIA databases. The detected faces are cropped and employed as the inputs in the experiments. Then the resolution of all the inputs (i.e., whole frames from the IDIAP and MSU databases or cropped face regions from the CASIA and ROSE-YOUTU databases) is normalized to $128 \times 128$ pixels as a trade-off between computational complexity and performance, by referring to the prior work \cite{patel2016secure}.
\par Secondly, the settings of the experiments are elaborated. In \cite{gcforest}, three square sliding windows of different sizes are employed to evaluate the performance of the deep forest. Following this, three scales of windows, 16, 32 and 64 pixels, with strides of 8, 16, and 32 pixels, respectively, are used for the MGSM in this paper. The obtained representations on these three scales are denoted by $S_{GSM}^1$, $S_{GSM}^2$, $S_{GSM}^3$, respectively. In our proposed scheme, the size of the sliding window is fixed at 32 pixels and the stride at 16 pixels. For each image of $128 \times 128$ pixels, there are thus $7 \times 7 = 49$ overlapping sub-patches in total. Three ${\rm LBP}_{P, R}^{u2}$ descriptors \cite{Ojala2002Multiresolution}, ${\rm LBP}_{8,1}^{u2}$, ${\rm LBP}_{16,2}^{u2}$ and ${\rm LBP}_{24,3}^{u2}$, are utilized to construct representations on three scales, and the obtained representations are referred to as $S_{LBP}^1$, $S_{LBP}^2$, $S_{LBP}^3$, respectively. Also, color LBP features in the HSV and YCbCr spaces are considered in this paper; that is, features are extracted from each separate channel of an image. For one patch, the feature lengths of color (RGB, HSV, YCbCr) ${\rm LBP}_{8,1}^{u2}$, ${\rm LBP}_{16,2}^{u2}$ and ${\rm LBP}_{24,3}^{u2}$ are $59 \times 3$, $243 \times 3$ and $555 \times 3$, respectively. Since there are 49 sub-patches for each image, the lengths of the final $S_{LBP}^1$, $S_{LBP}^2$, $S_{LBP}^3$ are $49 \times 59 \times 3$, $49 \times 243 \times 3$ and $49 \times 555 \times 3$, respectively. During the cascading operation, $S_{LBP}^1$ / $S_{GSM}^1$ is fused with ${\rm L_1}$, $S_{LBP}^2$ / $S_{GSM}^2$ with ${\rm L_2}$ and $S_{LBP}^3$ / $S_{GSM}^3$ with ${\rm L_3}$. This cascading process continues in a round-robin fashion and stops automatically once the accuracy has converged for several rounds. As for the setting of the forests utilized in the deep forest, four random forests (RFs) and four completely-random forests (CRFs) are employed and there are 500 trees in each forest, following \cite{gcforest}. These are implemented with the gcForest package\footnote{https://github.com/kingfengji/gcForest} with the default settings of the forests. For more details on the mechanism of the deep forest, please refer to \cite{gcforest}.
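\par A minimal Python sketch of this multi-scale color LBP extraction, assuming scikit-image, is given below. The {\tt nri\_uniform} mode of {\tt local\_binary\_pattern} yields $P(P-1)+3$ histogram bins, i.e., 59, 243 and 555 bins for the three $(P,R)$ settings, matching the ${\rm LBP}_{P,R}^{u2}$ feature lengths above; the histogram normalization is an illustrative choice.
\begin{verbatim}
import numpy as np
from skimage.feature import local_binary_pattern

SCALES = [(8, 1), (16, 2), (24, 3)]
WIN, STRIDE = 32, 16  # 7 x 7 = 49 overlapping patches on 128 x 128

def lbp_histogram(channel, P, R):
    codes = local_binary_pattern(channel, P, R, method="nri_uniform")
    n_bins = P * (P - 1) + 3  # 59, 243 or 555
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins))
    return hist / max(hist.sum(), 1)

def multiscale_lbp(img):  # img: (128, 128, 3) in HSV, YCbCr or RGB
    feats = []            # one vector per scale: 49 * n_bins * 3 entries
    for P, R in SCALES:
        vec = []
        for r in range(0, img.shape[0] - WIN + 1, STRIDE):
            for c in range(0, img.shape[1] - WIN + 1, STRIDE):
                for ch in range(3):  # per-channel color LBP
                    vec.append(lbp_histogram(
                        img[r:r + WIN, c:c + WIN, ch], P, R))
        feats.append(np.concatenate(vec))
    return feats          # [S_LBP^1, S_LBP^2, S_LBP^3]
\end{verbatim}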
\subsection{Experimental Results} \label{sec:expresult}
\subsubsection{Comparisons between Multi-scale Representations} \label{exp-1}
\begin{table}[tbp]
\footnotesize
\begin{center}
\caption{Comparisons between two implementations of multi-scale representations on the MSU USSA, IDIAP and CASIA databases. Performance is evaluated in terms of EER (\%).}
\begin{tabular}{|l|c|c|c|}
\hline
\multicolumn{1}{|l|}{\multirow{2}[4]{*}{Multi-scale\newline{} representations}} & \multicolumn{3}{c|}{Database} \\\cline{2-4}
& \multicolumn{1}{c|}{MSU} & \multicolumn{1}{c|}{IDIAP} & \multicolumn{1}{c|}{CASIA} \\
\hline
GSM (RGB) \cite{gcforest} & 4.84 & 1.02 & 14.50 \\
\hline
proposed (RGB) & 4.17 & 0 & 11.82 \\
\hline
proposed (HSV) & 2.14 & 0.052 & 8.73 \\
\hline
proposed (YCBCR) & 1.56 & 0 & 9.66 \\
\hline
\end{tabular}%
\label{tab:cmp}
\end{center}
\hspace{-5cm}
\end{table}%
Table~\ref{tab:cmp} provides the experimental results of the GSM and of the proposed scheme in terms of the Equal Error Rate (EER). From Table~\ref{tab:cmp}, by integrating the LBP features (RGB) with the deep forest, the EERs on the MSU, IDIAP and CASIA databases are reduced from 4.84\% to 4.17\%, from 1.02\% to 0\% and from 14.50\% to 11.82\%, respectively. These results suggest that LBP-based features are more competent than the GSM in exploiting texture information to represent the degradation of the spoofing faces. Furthermore, across different color spaces, the performances of LBP features in the HSV and YCbCr color spaces are generally better than those in the RGB color space. This is because illumination changes should not interfere with chrominance information, which is crucial in color texture methods, and the HSV and YCbCr spaces cleanly separate the illumination and chrominance components. In contrast, the three RGB components remain highly correlated, and a slight illumination variation alters R, G and B jointly, which may result in unexpected chrominance changes and makes the features less effective \cite{Color2017Face}.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.9\linewidth]{ACC.pdf}
\caption{The convergence curves of the experiments on the MSU database. The results are averaged over the five validation folds. The $x$-axis refers to the index of the cascade layer, which increases along with the training process. The $y$-axis refers to the testing accuracy of the output of each layer.}\label{fig:acc}
\hspace{-5cm}
\end{figure}
To further probe the effectiveness of the proposed scheme, the curves of the accuracy output by each layer are drawn and shown in Fig.~\ref{fig:acc}. The accuracy exhibits an upward trend as the cascade structure grows. The curve of the GSM shows only limited improvement across layers, which indicates that the GSM is not able to efficiently capture the texture information of the spoofing cues over different scales. Meanwhile, despite inferior accuracies in the first two layers, the accuracies of the proposed scheme (RGB, HSV, YCbCr) eventually surpass those of the GSM. Moreover, the trend of the curves indicates that the cascading strategy enables the LBP features to be re-represented. For instance, $S_{LBP}^1$ is fed to the layers ${\rm L_1}$, ${\rm L_4}$ and ${\rm L_7}$, and the output accuracies improve. In layer ${\rm L_1}$, the deep forest model learns from $S_{LBP}^1$, the representation on a small scale. Then, after ${\rm L_2}$ and ${\rm L_3}$, where the model has perceived more information from representations on larger scales, the model attains a better understanding of the distortion on different scales.
\subsubsection{Comparisons with State-of-the-Art Approaches}
\begin{table}[tbp]
\footnotesize
\begin{center}
\caption{Comparisons between the proposed scheme and state-of-the-art approaches on the IDIAP, CASIA and ROSE-YOUTU databases, in terms of EER (\%).}
\begin{tabular}{|l|c|c|c|}
\hline
Method & IDIAP & CASIA & ROSE-YOUTU\\
\cline{2-4}
\hline
LBP-TOP \cite{Pereira2012LBP} & 7.9 & - & - \\
\hline
CoALBP (HSV) \cite{Color2017Face} & 3.7 & 5.5 &16.4\\
\hline
CoALBP (YCbCr) \cite{Color2017Face}& 1.4 & 10.0 & 17.7\\
\hline
Fine-tuned AlexNet \cite{Yang2014Learn}& 6.1 & 7.4 & 8.0\\
\hline
CNN+Conv-LSTM \cite{sigportLSTMface}& 5.12 & 22.40 &-\\
\hline
CNN+LSTM \cite{sigportLSTMface}& 1.28 & 14.60 &-\\
\hline
Patch-based CNN \cite{Atoum2018Face}& 2.5 & 4.44 & - \\
\hline
Depth-based CNN \cite{Atoum2018Face}& 0.86 & 2.85 & - \\
\hline
proposed (HSV) & 0.052 & 8.73 & 10.9 \\
\hline
proposed (YCbCr) & 0 & 9.66 & 11.9\\
\hline
\end{tabular}%
\label{tab:exp-both}%
\end{center}
\hspace{-5cm}
\end{table}%
Tables~\ref{tab:exp-both} and \ref{tab:exp-msu} provide the results of comparisons between the proposed scheme and the state-of-the-art approaches. From Table~\ref{tab:exp-both}, the proposed scheme with simple LBP features is demonstrated to be highly competitive. Firstly, on the CASIA database, the proposed scheme (HSV) achieves 8.73\% EER. Although this result is inferior to the results of some CNN-based methods, namely the patch-based CNN (4.44\%) \cite{Atoum2018Face}, the depth-based CNN (2.85\%) \cite{Atoum2018Face} and the fine-tuned AlexNet (6.1\%) \cite{Yang2014Learn}, it is better than the LSTM-based methods with 22.40\% and 14.60\% EERs presented in \cite{sigportLSTMface}. It is worth mentioning that, among the traditional methods (using SVM classifiers with handcrafted features), and particularly among LBP-based methods, the Co-occurrence of Adjacent Local Binary Patterns (CoALBP) method \cite{Nosaka2011Feature} has achieved state-of-the-art performance \cite{ROSE}. Experiments on the CASIA database show that the proposed scheme with LBP features achieves a better result (9.66\%) than CoALBP (10.0\%) in the YCbCr space. Moreover, experimental results on the ROSE-YOUTU database \cite{ROSE}, a more diverse and challenging database, are presented in the last column of Table~\ref{tab:exp-both}. The results show that the performance of CoALBP, which performs well on the IDIAP database (3.7\% in HSV and 1.4\% in YCbCr) and the CASIA database (5.5\% in HSV and 10.0\% in YCbCr), drops dramatically (16.4\% in HSV and 17.7\% in YCbCr) \cite{ROSE}. However, the proposed scheme, which is also based on the LBP, achieves 10.9\% (HSV) and 11.9\% (YCbCr). Furthermore, from Table~\ref{tab:exp-both}, the proposed scheme achieves 0\% (YCbCr) on the IDIAP REPLAY-ATTACK database, which is better than the results of all the presented CNN-based methods and CoALBP. Therefore, it is concluded that the proposed scheme achieves performance comparable to the state-of-the-art CNN methods and traditional methods.
\begin{table}[tbp]
\footnotesize
\begin{center}
\caption{Performance in terms of EER (\%) and HTER (\%) on the MSU USSA database. The results are obtained following the 5-fold validation protocol in \cite{patel2016secure}.} \label{tab:exp-msu}%
\begin{tabular}{|l|c|c|}
\hline
Method & EER & HTER \\
\hline
Patel et al. \cite{patel2016secure}& 3.84 & - \\
\hline
Patch-based CNN \cite{Atoum2018Face} & 0.55$\pm$0.26 & 0.41$\pm$0.32 \\
\hline
Depth-based CNN \cite{Atoum2018Face} & 2.62$\pm$0.73 & 2.22$\pm$0.66 \\
\hline
proposed (HSV) & 2.14$\pm$0.58 & 1.98$\pm$0.58 \\
\hline
proposed (YCbCr) & 1.56$\pm$0.61 & 1.33$\pm$0.51 \\
\hline
\end{tabular}%
\end{center}
\setlength{\textfloatsep}{0.1cm}
\end{table}%
\begin{table}[tbp]
\footnotesize
\begin{center}
\caption{Performance (EER \%) of different numbers of trees in each forest.} \label{tab:exp-number_of_trees}%
\begin{tabular}{|l|c|c|c|c|c|}
\hline
\multirow{2}*{Database} & \multirow{2}*{Color space} & \multicolumn{4}{c|}{Number of trees} \\ \cline{3-6}
 & & 64 & 128 & 256 & 500 \\
\hline
\multirow{2}*{CASIA} & HSV & 8.62 &8.59& 8.67& 8.73 \\ \cline{2-6}
& YCbCr &9.53 & 9.54 &9.61& 9.66 \\ \hline
\multirow{2}*{IDIAP} & HSV & 0.054& 0.047& 0.048& 0.052\\ \cline{2-6}
& YCbCr & 0.026 &0.017 &0.023 & 0 \\ \hline
\multirow{2}*{MSU} & HSV & 1.99& 1.96& 2.22& 2.14\\ \cline{2-6}
& YCbCr & 1.26& 1.28& 1.42& 1.51\\ \hline
\multirow{2}*{ROSE-YOUTU} & HSV & 10.4 & 10.4& 10.7 &10.9 \\ \cline{2-6}
& YCbCr & 11.4& 11.5 &11.3& 11.9 \\
\hline
\end{tabular}%
\end{center}
\setlength{\textfloatsep}{0.1cm}
\end{table}%
\par The experimental results on the MSU USSA database, in terms of the EER and the Half Total Error Rate (HTER), are provided in Table~\ref{tab:exp-msu}. According to Table~\ref{tab:exp-msu}, the patch-based CNN \cite{Atoum2018Face} achieves the best results in both EER (0.55\%) and HTER (0.41\%) on the MSU database, but our proposed scheme achieves 2.14\% EER and 1.98\% HTER in the HSV space as well as 1.56\% EER and 1.33\% HTER in the YCbCr space, which are better than those of the depth-based CNN \cite{Atoum2018Face} with 2.62\% EER and 2.22\% HTER.
\par In summary, taking Tables~\ref{tab:exp-both} and \ref{tab:exp-msu} together, our proposed method is highly competitive when compared with the state-of-the-art CNN-based methods and the traditional methods, e.g., CoALBP \cite{Nosaka2011Feature}.
\subsubsection{Comparisons of different numbers of trees in each forest}
\par In the above experiments, we follow \cite{gcforest} and adopt 500 trees in each forest. Within a certain range, the more trees in a forest, the better the performance. However, too many trees in a forest introduce heavy computational costs. In \cite{howmanytrees}, it is suggested that a trade-off between performance and computational cost can be achieved when the number of trees in a forest is in the range from 64 to 128, and that there are no significant performance gains when the number of trees increases to 512, 1024, 2048 or other larger numbers. The experimental results in Table~\ref{tab:exp-number_of_trees} show that when the number of trees is smaller than 500, the performance does not necessarily drop. This observation coincides with the conclusion in \cite{howmanytrees}.
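\par Such a sweep can be reproduced with a few lines of code. The sketch below uses a single scikit-learn random forest as a stand-in for one forest of the cascade; {\tt X\_train}, {\tt y\_train}, {\tt X\_test} and {\tt y\_test} are assumed to hold the extracted LBP features and their genuine/spoof labels.
\begin{verbatim}
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Sweep the number of trees per forest.
for n_trees in [64, 128, 256, 500]:
    rf = RandomForestClassifier(n_estimators=n_trees,
                                n_jobs=-1, random_state=0)
    rf.fit(X_train, y_train)
    acc = accuracy_score(y_test, rf.predict(X_test))
    print(f"{n_trees} trees: accuracy = {acc:.4f}")
\end{verbatim}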
\section{Conclusion and Future Work}\label{sec:end}
\par Given the concern about adversarial attacks, in this paper, we propose to utilize the deep forest \cite{gcforest} for the problem of FAS. To the best of our knowledge, this is the first attempt to introduce the deep forest into the FAS problem. Inspired by works on texture analysis, we re-devise the construction of multi-scale representations by integrating LBP descriptors with the deep forest learning scheme. Our proposed scheme achieves better results than the original GSM proposed in \cite{gcforest}. Furthermore, compared with the state-of-the-art approaches, competitive results have been achieved on several benchmark databases by the proposed scheme; for example, 0\% EER is achieved on the IDIAP database. This indicates the effectiveness and competitiveness of our proposed scheme. Hence, our method offers a competitive option to those who would like to improve the security of their systems by fusing diverse approaches at the system level. Moreover, there has been only a limited number of research works exploiting the deep forest on practical problems. This paper can therefore serve as a reference for researchers who want to explore methods beyond CNN-based schemes.
\par Admittedly, the results of our approach do not look as attractive as those of some CNN-based methods. In the future, various efforts can be made to improve the overall performance, such as investigating more cascading strategies and feature extraction methods. In this work, the LBP is utilized because it is common in the field of FAS and relatively simple to integrate with the deep forest. However, the LBP was designed by researchers in the computer vision community based on their domain knowledge, and such knowledge may not be fully applicable to the FAS problem. Some novel binary descriptor methods \cite{SensitiveLBP_TIP2015, DeepHash_TIP2017, LBF_PAMI2018, ContextLBP_PAMI2018} have attracted our strong interest and provide significant references: designed in a more data-driven manner, they learn features from data and are less dependent on human domain knowledge. Hopefully, better results can be achieved by building on these methods.
\bibliographystyle{ieeetr}
\section{Introduction}
Computer vision is based on mathematical foundations known as
{\em multiview geometry} \cite{FL, Grosshans} or {\em epipolar geometry} \cite[\S 9]{HartleyZisserman}.
In that subject one studies the
space of pictures of three-dimensional objects seen from $n \geq 2$ cameras.
Each camera is represented by a $3 \times 4$-matrix $A_i$ of rank $3$. The matrix
specifies a linear projection from $\PP^3$ to $\PP^2$, which is well-defined on
$\PP^3 \backslash \{f_i\}$, where the focal point $f_i $ is represented by a
generator of the kernel of~$A_i$.
The space of pictures from the $n$ cameras is the image of the rational~map
\begin{equation}
\label{eq:phiA}
\phi_A \,:\,\,\PP^3 \,\dashrightarrow\, (\PP^2)^n, \,\,\,\, \textbf{x} \,\mapsto \, (A_1 \textbf{x}, A_2 \textbf{x}, \ldots,A_n \textbf{x}).
\end{equation}
The closure of this image is an algebraic variety, denoted $V_A$
and called the {\em multiview variety} of the given $n$-tuple of $3 \times 4$-matrices $A = (A_1,A_2,\ldots,A_n)$.
In geometric language, the multiview variety $V_A$ is the blow-up of
$\PP^3$ at the cameras $f_1,\ldots,f_n$, and we here study
this threefold as a subvariety of $(\PP^2)^n$.
The {\em multiview ideal} $J_A$ is the prime ideal of all polynomials that vanish on the
multiview variety $V_A$. It lives in a polynomial ring $K[x,y,z]$ in $3n$ unknowns
$(x_i,y_i,z_i)$, $ i = 1,2,\ldots,n $, that serve as coordinates on $(\PP^2)^n$.
In Section 2 we give a determinantal representation of $J_A$ for generic $A$,
and identify a universal Gr\"obner basis consisting of
multilinear polynomials of degree $2$, $3$ and $4$.
This extends previous results of
Heyden and {\AA}str{\"o}m \cite{HA}.
The multiview ideal $J_A$ has
a distinguished initial monomial ideal $M_n$ that is independent
of $A$, provided the configuration $A$ is generic.
Section 3 gives an explicit description of $M_n$ and shows
that it is the unique Borel-fixed ideal with its $\ensuremath{\mathbb{Z}}^n$-graded Hilbert function.
Following \cite{CS}, we introduce the multigraded
Hilbert scheme $\mathcal{H}_n$
which parametrizes $\ensuremath{\mathbb{Z}}^n$-homogeneous ideals in
$K[x,y,z]$ with the same Hilbert function as $M_n$.
We show in Section 6 that, for $n \geq 3$, $\mathcal{H}_n$ has a distinguished component
of dimension $11n-15$ which compactifies the space
of camera positions studied in computer vision.
For two cameras, that space
is an irreducible cubic hypersurface in $ \mathcal{H}_2 \simeq \PP^8$.
Section 4 concerns the case when $n \leq 4$ and
the focal points $f_i$ are among the coordinate points
$(1{:}0{:}0{:}0), \ldots, (0{:}0{:}0{:}1)$. Here the multiview variety $V_A$
is a toric threefold, and its degenerations are parametrized by a certain
toric Hilbert scheme inside $\mathcal{H}_n$. Each initial monomial
ideal of the toric ideal $J_A$ corresponds to a three-dimensional mixed subdivision
as seen in Figure~\ref{V3_J8_Blowup}.
A classification of such mixed subdivisions for $n=4$ is given in
Theorem~\ref{thm:1068}.
\begin{figure}
\includegraphics[width=0.44\linewidth]{V3_J8_Blowup.png}
\includegraphics[width=0.55\linewidth]{V3_J8_Explode.png}
\caption{A multiview variety $V_A$ for $n = 3$ cameras degenerates into
six copies of $\PP^1 {\times} \PP^2$ and one copy of $\PP^1 {\times} \PP^1 {\times} \PP^1$.}
\label{V3_J8_Blowup}
\end{figure}
In Section 5 we place our $n$ cameras on a line in $\PP^3$.
Moving them very close to each other on that line induces
a two-step degeneration of the form
\begin{equation}
\label{eq:TriBiMono}
\hbox{trinomial ideal}
\ \longrightarrow \
\hbox{binomial ideal}
\ \longrightarrow \
\hbox{monomial ideal}.
\end{equation}
We present an in-depth combinatorial study of this curve of multiview ideals.
In Section 6 we finally define the Hilbert scheme $\mathcal{H}_n$,
and we construct the space of camera positions
as a GIT quotient of a Grassmannian.
Our main result (Theorem \ref{thm:component})
states that the latter is an irreducible
component of~$\mathcal{H}_n$.
As a key step in the proof,
the tangent space of $\mathcal{H}_n$
at the monomial ideal in (\ref{eq:TriBiMono}) is computed
and shown to have the correct dimension $11n-15$.
Thus, the curve (\ref{eq:TriBiMono})
consists of smooth points on the distinguished component of~$\mathcal{H}_n$.
For $n \geq 3$, our Hilbert scheme has multiple components.
This is seen from our classification of
monomial ideals on $\mathcal{H}_3$, which relates closely to
\cite[\S 5]{CS}.
\bigskip
\noindent{\bf Acknowledgments}. Aholt and Thomas thank Fredrik Kahl for hosting them
at Lund in February 2011 and pointing them to the work of Heyden and {\AA}str{\"o}m.
They also thank Sameer Agarwal for introducing them to problems in computer vision
and continuing to advise them in this field.
Sturmfels thanks the Mittag-Leffler Institute, where this project started,
and MATHEON Berlin for their hospitality.
All three authors were partially supported by the US National Science Foundation.
We are indebted to the makers of the software packages
{\tt CaTS}, {\tt Gfan}, {\tt Macaulay2} and {\tt Sage}
which allowed explicit computations that were crucial in discovering our results.
\section{A universal Gr\"obner basis}
Let $K$ be any algebraically closed field, $n \geq 2$, and consider the map $\phi_A$
defined as in (\ref{eq:phiA}) by a tuple $A = (A_1,A_2,\ldots,A_n) $ of $3 \times 4$-matrices
of rank $3$ with entries in $K$.
The subvariety $V_A = \overline{{\rm image}(\phi_A)}$ of $(\PP^2)^n$
is the {\em multiview variety}, and its ideal $J_A \subset K[x,y,z]$
is the {\em multiview ideal}. Note that $J_A$ is prime because
its variety $V_A$ is the image
under $\phi_A$ of an irreducible variety.
We say that the camera configuration $A$ is {\em generic} if all $4 \times 4$-minors of the
$(4 \times 3n)$-matrix $ \bmat{ A_1^T \! & \! A_2^T \! & \! \cdots \! & \! A_n^T }$
are non-zero. In particular, if $A$ is generic then the focal points of the $n$ cameras are
pairwise distinct in $\PP^3$.
For any subset $\sigma = \{\sigma_1,\ldots,\sigma_s\} \subseteq [n]$
we consider the $3s \times (s+4)$-matrix
$$ A_\sigma \,\,\, :=\,\,\, \begin{bmatrix}
A_{\sigma_1} & p_{\sigma_1} & \mathbf{0} & \cdots & \mathbf{0}\\
A_{\sigma_2} & \mathbf{0} & p_{\sigma_2} & \ddots & \mathbf{0} \\
\vdots& \vdots & \ddots &\ddots & \vdots \\
A_{\sigma_s} & \mathbf{0} & \cdots & \mathbf{0} & p_{\sigma_s}
\end{bmatrix},$$
where $p_i:= \bmat{x_i \! & \! y_i \! & \! z_i}^T$ for $i \in [n]$.
Assuming $s \geq 2$, each maximal minor of $A_\sigma$ is
a homogeneous polynomial of degree $s= |\sigma|$ that is linear
in $p_i$ for $i \in \sigma$. Thus for $ s= 2,3,\ldots$
these polynomials are bilinear, trilinear, etc.
The matrix $A_\sigma$ and its maximal minors are considered frequently
in multiview geometry \cite{HartleyZisserman, HA}.
Recall that a {\em universal Gr\"obner basis} of an ideal is a subset that is a Gr\"obner basis of the ideal under all term orders.
The following is the main result in this section.
\begin{theorem}
\label{thm:UGB}
If $A$ is generic then
the maximal minors of the matrices $A_\sigma$ for $2 \leq |\sigma| \leq 4$
form a universal Gr\"obner basis of the multiview ideal $J_A$.
\end{theorem}
The proof rests on a sequence of lemmas.
Here is the most basic~one.
\begin{lemma}
\label{lem:easyinclusion}
The maximal minors of $A_\sigma$ for $|\sigma| {\geq} 2$ lie in the prime ideal~$J_A$.
\end{lemma}
\begin{proof} If $ (p_1,\ldots, p_n) \in (K^3)^n$
represents a point in ${\rm image}(\phi_A)$
then there exists a non-zero vector $q \in K^4$
and non-zero scalars $c_1,\ldots,c_n \in K$ such that
$A_i q = c_i p_i$ for $i = 1,2,\ldots,n$.
This means that the columns of $A_\sigma$ are linearly dependent.
Since $A_\sigma$ has at least as many rows as columns,
the maximal minors of $A_\sigma$ must vanish at every point $ p \in V_A$.
\end{proof}
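Lemma \ref{lem:easyinclusion} can be checked symbolically for a concrete pair of cameras. The following sketch, assuming SymPy, verifies for $n=2$ that $\det(A_{\{1,2\}})$ vanishes identically after substituting $p_i = A_i q$; the two integer camera matrices are arbitrary rank-$3$ examples.
\begin{verbatim}
import sympy as sp

q = sp.Matrix(sp.symbols("q0:4"))
A1 = sp.Matrix(3, 4, [2, 1, 0, 3, 1, 4, 1, 0, 0, 2, 5, 1])
A2 = sp.Matrix(3, 4, [1, 0, 2, 1, 3, 1, 1, 2, 0, 1, 4, 3])
p1, p2 = A1 * q, A2 * q          # images of q under the two cameras
zero = sp.zeros(3, 1)
A12 = sp.BlockMatrix([[A1, p1, zero],
                      [A2, zero, p2]]).as_explicit()
# the columns satisfy A12 * (q, -1, -1)^T = 0, so the determinant vanishes
assert sp.expand(A12.det()) == 0
\end{verbatim}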
Later we shall see that when $A$ is generic, $J_A$ has only one initial monomial ideal up to symmetry.
We now identify that ideal.
Let $M_n$ denote the ideal in $K[x,y,z]$ generated by the ${n\choose 2}$ quadrics $x_ix_j$, the
$3{n\choose 3}$ cubics $x_iy_jy_k$,
and the ${n\choose 4}$ quartics $y_iy_jy_ky_l$, where $i,j,k,l$ runs over distinct indices in $[n]$.
We fix the lexicographic term order $\prec$ on $K[x,y,z]$ which is specified by
$x_1 {\succ} \cdots {\succ} x_n {\succ}
y_1 {\succ} \cdots {\succ} y_n {\succ} z_1 {\succ} \cdots {\succ} z_n$.
Our goal is to prove that the initial monomial ideal $\tin_\prec(J_A)$ is equal to $M_n$.
We begin with the easier inclusion.
\newpage
\begin{lemma} \label{lem:generic1}
If $A$ is generic then $M_n \subseteq \tin_\prec(J_A) $.
\end{lemma}
\begin{proof}
The generators of $M_n$ are the quadrics $x_ix_j$, the cubics $x_iy_jy_k$, and the quartics $y_iy_jy_ky_l$.
By Lemma \ref{lem:easyinclusion}, it suffices to show that these are the initial monomials
of maximal minors of $A_{\{ij\}}$, $A_{\{ijk\}}$ and $A_{\{ijkl\}}$ respectively.
For the quadrics this is easy. The matrix $A_{\{ij\}}$ is square and we have
\begin{equation}
\label{eq:zwei}
{\rm det}(A_{\{ij\}})
\,\, = \,\, {\rm det} \begin{bmatrix}
A_i^1 & \! x_i & \! 0\\
A_i^2 & \! y_i & \! 0\\
A_i^3 & \! z_i & \! 0\\
A_j^1 & \! 0 & \! x_j\\
A_j^2 & \! 0 & \! y_j\\
A_j^3 & \! 0 & \! z_j
\end{bmatrix} \,= \,\,
{\rm det} \! \begin{bmatrix} A_i^2 \\ A_i^3 \\ A_j^2 \\ A_j^3 \end{bmatrix} \! x_i x_j \,+\, \hbox{lex.~lower terms}.
\end{equation}
where $A_t^r$ is the $r$th row of $A_t$.
The coefficient
of $x_i x_j$ is non-zero because $A$ was assumed to be generic.
For the cubics, we consider the $9\times 7$-matrix
\begin{equation}
\label{eq:drei}
A_{\{ijk\}} \quad = \quad \begin{bmatrix}
A_i & p_i & 0 & 0\\
A_j & 0 & p_j & 0\\
A_k & 0 & 0 & p_k
\end{bmatrix}.
\end{equation}
Now, $x_i y_j y_k$ is the lexicographic initial monomial of the
$7\times 7$-determinant formed by removing the fourth and seventh rows of $A_{\{ijk\}}$.
Here we are using that, by genericity, the vectors $A_i^2, A_i^3, A_j^3, A_k^3$
are linearly independent.
Finally, for the quartic monomial $y_iy_jy_ky_l$ we consider the $12\times 8$ matrix
\begin{equation}
\label{eq:vier}
A_{\{ijkl\}} \quad = \quad \begin{bmatrix}
A_i & p_i & 0 & 0 & 0\\
A_j & 0 & p_j & 0 & 0\\
A_k & 0 & 0 & p_k & 0\\
A_l & 0 & 0 & 0 & p_l
\end{bmatrix}.
\end{equation}
Removing the first row from each of the four blocks, we obtain an $8 \times 8$-matrix
whose determinant has $y_iy_jy_ky_l$ as its lex.~initial monomial.
\end{proof}
The next step towards our proof of Theorem \ref{thm:UGB} is to express
the multiview variety $V_A$ as a projection of a
diagonal embedding of $\PP^3$. This will put us in a position to
utilize the results of Cartwright and Sturmfels in \cite{CS}.
We extend each camera matrix $A_i$ to an invertible
$4 \times 4$-matrix $B_i = \bmat{b_i\\ensuremath{\mathcal{A}}_i} $ by adding a row $b_i$ at the top.
Our diagonal embedding of $\PP^3$ is the map
\begin{equation}
\label{eq:psiB}
\psi_B: \PP^3 \,\to\, (\PP^3)^n, \,\,\,\, \textbf{x} \,\mapsto\, (B_1 \textbf{x}, B_2 \textbf{x}, \ldots,B_n \textbf{x}).
\end{equation}
Let $V^B := {\rm image}(\psi_B) \subset (\PP^3)^n$ and $J^B \subset K[w,x,y,z]$ its prime ideal.
Here $(w_i:x_i:y_i:z_i)$ are coordinates on the $i$th copy of $\PP^3$ and $(w,x,y,z)$ are
coordinates on $(\PP^3)^n$. The ideal $J^B$ is generated by the $2 \times 2$-minors of
\begin{equation} \label{eq:inverseB}
\left[
B_1^{-1} \! \begin{bmatrix} w_1 \\ x_1 \\ y_1 \\ z_1 \end{bmatrix} \,\,
B_2^{-1} \! \begin{bmatrix} w_2 \\ x_2 \\ y_2 \\ z_2 \end{bmatrix} \, \cdots \,\,\,
B_n^{-1} \! \begin{bmatrix} w_n \\ x_n \\ y_n \\ z_n \end{bmatrix} \,\right] .
\end{equation}
This is a $4 \times n$-matrix.
Now consider the coordinate projection
$$ \pi \,:\, (\PP^3)^n \dashrightarrow (\PP^2)^n \, , \,\,\,
(w_i:x_i:y_i:z_i) \mapsto (x_i:y_i:z_i) \textup{ for } i=1,\ldots, n. $$
The composition $\pi \circ \psi_B$ is a rational map, and it coincides with $\phi_A$
on its domain of definition $\PP^3 \backslash \{f_1, \ldots, f_n \}$.
Therefore, $V_A = \overline{\pi(V^B)}$ and
\begin{equation}
\label{eq:elimideal}
J_A \,\, = \,\, J^B \cap K[x,y,z] .
\end{equation}
The polynomial ring $K[w,x,y,z]$ admits the natural $\mathbb Z^n$-grading $\deg(w_i) = \deg(x_i) = \deg(y_i) = \deg(z_i) = e_i$ where $e_i$ is the standard unit vector in $\ensuremath{\mathbb{R}}^n$. Under this grading, $K[w,x,y,z]/J^B$ has the multigraded Hilbert function
$$ \mathbb N^n \to \mathbb N, \,\,\,(u_1, \ldots, u_n) \mapsto \left( \begin{array}{c}
u_1+\cdots+u_n+3 \\ 3 \end{array} \right).$$
The multigraded Hilbert scheme $H_{4,n}$ which parametrizes $\ensuremath{\mathbb{Z}}^n$-homogeneous ideals
in $K[w,x,y,z]$ with that Hilbert function was studied in \cite{CS}.
More generally, the multigraded Hilbert scheme $H_{d,n}$ represents
degenerations of the diagonal $\PP^{d-1}$ in $(\PP^{d-1})^n$ for any $d$ and $n$.
For the general definition of
multigraded Hilbert schemes see \cite{HaimanSturmfels}.
It was shown in \cite{CS} that $H_{d,n}$ has a unique Borel-fixed ideal $Z_{d,n}$.
Here {\em Borel-fixed} means that $Z_{d,n}$ is stable
under the action of ${\mathcal B}^n$ where ${\mathcal B}$ is the group of
lower triangular matrices in ${\PGL}(d,K)$.
Here is what we shall need
about the monomial ideal $Z_{4,n}$.
\begin{lemma} \label{lem:generators of Z} {\rm (Cartwright-Sturmfels \cite[\S 2]{CS} and
Conca \cite[\S 5]{Conca})}
\begin{enumerate}
\item The unique Borel-fixed monomial ideal $Z_{4,n}$ on $H_{4,n}$ is generated by the following
monomials where $i,j,k,l$ are distinct indices in $[n]$:
\begin{center}
$\begin{array}{l}
w_iw_j, \,w_ix_j,\, w_iy_j, \,x_ix_j,\,\,
\,x_iy_jy_k, \,\,
y_iy_jy_ky_l.
\end{array}$
\end{center}
\item This ideal $Z_{4,n}$ is the lexicographic
initial ideal of $J^B$ when $B$ is sufficiently generic. The lexicographic order here is $w \succ x \succ y \succ z$ with each block ordered lexicographically in increasing order of index.
\end{enumerate}
\end{lemma}
Using these results, it was deduced in \cite{CS} that all ideals on $H_{4,n}$ are radical and Cohen-Macaulay, and that $H_{4,n}$ is connected. We now use this distinguished Borel-fixed ideal $Z_{4,n}$ to
prove the equality in Lemma \ref{lem:generic1}.
\begin{lemma} \label{lem:generic2}
If $A$ is generic then $M_n = \tin_\prec(J_A) $.
\end{lemma}
\begin{proof}
We fix the lexicographic term order $\prec$ on $K[w,x,y,z]$
and its restriction to $K[x,y,z]$. Lemma \ref{lem:generators of Z} (1)
shows that $\,M_n = Z_{4,n} \cap K[x,y,z] $.
Lemma \ref{lem:generators of Z} (2) states that $Z_{4,n} = \tin_{\prec}(J^B)$ when $B$ is generic.
The lexicographic order has the important property that it allows the operations of taking initial ideals and intersections to commute \cite[Chapter 3]{CLO}. Therefore,
\begin{align*}
\tin_{\prec}(J_A) &\,=\, \tin_{\prec}(J^B\cap K[x,y,z]) & \\
&\,=\, \tin_{\prec}(J^B)\cap K[x,y,z] \\
&\,=\, Z_{4,n} \cap K[x,y,z] \,\,= \,\, M_n.
\end{align*}
This identity is valid whenever the conclusion of
Lemma \ref{lem:generators of Z} (2) is true.
We claim that, for this to hold, the appropriate genericity notion for $B$ is
that all $4 \times 4$-minors of the
$(4 \times 4n)$-matrix $ \bmat{ B_1^T \! & \! B_2^T \! & \! \cdots \! & \! B_n^T }$
are non-zero. Indeed, under this hypothesis, the maximal minors
of the $4s \times (s+4)$-matrix
$$ \quad B_\sigma \,\,\, :=\,\,\, \begin{bmatrix}
B_{\sigma_1} & \tilde p_{\sigma_1} & \mathbf{0} & \cdots & \mathbf{0}\\
B_{\sigma_2} & \mathbf{0} & \tilde p_{\sigma_2} & \ddots & \mathbf{0} \\
\vdots& \vdots & \ddots &\ddots & \vdots \\
B_{\sigma_s} & \mathbf{0} & \cdots & \mathbf{0} & \tilde p_{\sigma_s}
\end{bmatrix}\! ,\, \hbox{where $\tilde p_i:= \bmat{w_i \!\! & \!\! x_i \!\! & \!\! y_i \!\! & \!\! z_i }^{\! T}$ for $i \in [n]$,}
$$
have non-vanishing leading coefficients. We see that $Z_{4,n} \subseteq \tin_{\prec}(J^B)$
by reasoning akin to that in the proof of Lemma \ref{lem:generic1}. The equality
$Z_{4,n}=\tin_{\prec}(J^B)$ is then immediate since
$Z_{4,n}$ is the generic initial ideal of $J^B$.
Hence, for any generic camera positions $A$, we can add a row to $A_i$ and
get $B_i$ that are ``sufficiently generic'' for Lemma \ref{lem:generators of Z} (2).
This completes the proof.
\end{proof}
\smallskip
\noindent
{\em Proof of Theorem \ref{thm:UGB}:}
Lemma~\ref{lem:generic2} and the proof of Lemma~\ref{lem:generic1} show
that the maximal minors of the matrices $A_\sigma$ for $2 \leq |\sigma| \leq 4$
are a Gr\"obner basis of $J_A$ for the lexicographic term order.
Each polynomial in that Gr\"obner basis is multilinear,
thus the initial monomials remain the same for any
term order satisfying $x_i \succ y_i \succ z_i$ for $i = 1,2,\ldots,n$.
So, the minors form a Gr\"obner basis for that term order.
The set of minors is invariant under permuting
$\{x_i,y_i,z_i\}$ for each $i$.
Moreover, the genericity of $A$ implies that every monomial
which can possibly appear in the support of a minor does so.
Hence, these minors form a universal Gr\"obner basis of $J_A$.
\qed
\begin{remark}
Computer vision experts have known for a long time that multiview
varieties $V_A$ are defined set-theoretically by the above multilinear constraints of degree at most $4$.
We refer to work of Heyden and {\AA}str{\"o}m \cite{HA, Heyden}.
What is new here is that these constraints define $V_A$
in the strongest possible sense: they form a universal Gr\"obner basis
for the prime ideal $J_A$.
\end{remark}
The $n$ cameras are in {\em linearly general position} if no four focal points are coplanar and no three are collinear.
While the number of multilinear polynomials in our lex Gr\"obner basis of $J_A$ is
$\,\binom{n}{2} + 3 \binom{n}{3} + \binom{n}{4}$, far
fewer suffice to generate the ideal $J_A$ when $A$ is in linearly general position.
\begin{corollary}
If $A$ is in linearly general position then the ideal $J_A$ is minimally generated by $\,\binom{n}{2}$ bilinear and $\binom{n}{3}$ trilinear polynomials.
\end{corollary}
\begin{proof}
This can be shown for $n \leq 4$ by a direct calculation.
Alternatively, these small cases are covered by transforming to the toric ideals
in Section 4.
First map the focal points of the cameras to the
torus fixed focal points of the toric case, followed by multiplying each $A_i$ by a suitable
$g_i \in \PGL(3,K)$.
Now let $n \geq 5$.
For any three cameras $i, j, k$,
the maximal minors of (\ref{eq:drei}) are generated
by only one such maximal minor modulo
the three bilinear polynomials (\ref{eq:zwei}).
Likewise, for any four cameras $i$, $j$, $k$ and $l$,
the maximal minors of (\ref{eq:vier})
are generated by the trilinear and bilinear polynomials.
This implies that the resulting
$\binom{n}{2} + \binom{n}{3}$ polynomials generate $J_A$,
and, by restricting to two or three cameras, we see that they
minimally generate.
\end{proof}
\section{The Generic Initial Ideal}
We now focus on combinatorial properties of our special monomial ideal
$$ M_n \quad = \quad \bigl\langle \,
x_ix_j, \, x_iy_jy_k, \,y_iy_jy_ky_l \,\,:\,\, \forall \,\,i,j,k,l \in [n] \,\, \textup{distinct}\bigr\rangle. $$
We refer to $M_n$ as the {\em generic initial ideal} in multiview geometry because it is the lex initial ideal of
any multiview ideal $J_A$ after a generic coordinate change
via the group $G^n$ where $G = {\PGL}(3,K)$. Indeed, consider {\bf any} rank $3$ matrices
$A_1,A_2, \ldots,A_n \in K^{3 \times 4}$ with
pairwise distinct kernels $K \{f_i\}$. If $g = (g_1,g_2,\ldots,g_n) $ is
generic in $G^n$ then $g \circ A$ is generic in the sense that
all $4 \times 4$-minors of the matrix $ \bmat{ (g_1 A_1)^T \! \! & \!\! (g_2 A_2)^T \!\! & \!\! \cdots \!\! & \!\! (g_n A_n)^T }$
are non-zero.
Thus, by the results of Section 2, $M_n$ is the initial ideal of $J_{g \circ A}$, or, using standard
commutative algebra lingo, $M_n$ is the generic initial ideal of $J_A$.
Since $M_n$ is a squarefree monomial ideal, it is radical. Hence $M_n$ is the intersection of its minimal primes,
which are generated by subsets of the variables $x_i$ and $y_j$.
We begin by computing this prime decomposition.
\begin{proposition} \label{prop:prime_decomposition}
The generic initial ideal $M_n$ is the irredundant intersection of
$\binom{n}{3} + 2 \binom{n}{2}$ monomial primes. These are the monomial primes
$P_{ijk}$ and $Q_{ij}\subseteq K[x,y,z]$ defined below for any distinct indices $i,j,k\in[n]$:
\begin{itemize}
\item $P_{ijk}$ is generated by $x_1,\dots,x_n$ and all $y_l$ with $l\not\in\{i,j,k\}$,
\item $Q_{ij}$ is generated by all $x_l$ for $l\ne i$ and $y_l$ for $l\not\in\{i,j\}$.
\end{itemize}
\end{proposition}
\smallskip\noindent {\it Proof: \ }
Let $L$ denote the intersection of all $P_{ijk}$ and $Q_{ij}$.
Each monomial generator of $M_n$ lies in $P_{ijk}$ and in $Q_{ij}$, so
$M_n \subseteq L$. For the reverse inclusion, we will show that
$V(M_n)$ is contained in $V(L) = (\cup V(P_{ijk})) \cup (\cup V(Q_{ij}))$.
Let $(\tilde{x},\tilde{y}, \tilde{z})$ be any point in the variety $V(M_n)$. First suppose
$\tilde{x}_i=0$ for all $i\in[n]$. Since $\tilde{y}_i \tilde{y}_j \tilde{y}_k \tilde{y}_l=0$ for distinct indices,
there are at most three indices $i,j,k$ such that $\tilde y_i$, $\tilde y_j$ and $\tilde y_k$ are nonzero.
Hence $(\tilde{x},\tilde{y}, \tilde{z}) \in V(P_{ijk})$.
Next suppose $\tilde x_i \not = 0$. The index $i$ is unique because $x_i x_j \in M_n$ for all $j \not= i$.
Since $\tilde x_i \tilde y_j \tilde y_k=0$ for all $j,k\ne i$, we have $\tilde y_j \ne 0$
for at most one index $j\ne i$. These properties imply
$(\tilde{x},\tilde{y}, \tilde{z}) \in V(Q_{ij})$.
\hfill$\square$\medskip
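\par For small $n$, the decomposition of Proposition~\ref{prop:prime_decomposition} can be verified by brute force. The sketch below, in plain Python, checks for $n=4$ that a squarefree monomial lies in $M_n$ if and only if it lies in every prime $P_{ijk}$ and $Q_{ij}$; since neither the generators nor the primes involve the variables $z_i$, it suffices to run over subsets of $\{x_1,\dots,x_4,y_1,\dots,y_4\}$.
\begin{verbatim}
from itertools import combinations

n = 4
X = [("x", i) for i in range(n)]
Y = [("y", i) for i in range(n)]

gens = ([{a, b} for a, b in combinations(X, 2)]              # x_i x_j
      + [{x, u, v} for x in X for u, v in combinations(Y, 2)
         if x[1] not in (u[1], v[1])]                        # x_i y_j y_k
      + [set(c) for c in combinations(Y, 4)])                # y_i y_j y_k y_l

primes = ([set(X) | {y for y in Y if y[1] not in t}
           for t in combinations(range(n), 3)]               # P_{ijk}
        + [{x for x in X if x[1] != i}
           | {y for y in Y if y[1] not in (i, j)}
           for i in range(n) for j in range(n) if i != j])   # Q_{ij}

for k in range(2 * n + 1):
    for m in map(set, combinations(X + Y, k)):
        in_Mn = any(g <= m for g in gens)                    # divisibility
        in_cap = all(m & p for p in primes)                  # in every prime
        assert in_Mn == in_cap
print("M_4 is the intersection of the", len(primes), "primes")
\end{verbatim}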
We regard the monomial variety $V(M_n)$ as a threefold inside
the product of projective planes $(\PP^2)^n$.
If the focal points are distinct, $V_A$ has a Gr\"obner degeneration to the reducible threefold $V(M_n)$.
The irreducible components of $V(M_n)$ are
\begin{equation}
\label{eq:components} V(P_{ijk}) \,\simeq \,\PP^1 \times \PP^1 \times \PP^1
\quad \hbox{and} \quad
V(Q_{ij}) \,\simeq \,\PP^2 \times \PP^1.
\end{equation}
We find it convenient to regard $(\PP^2)^n$ as a toric variety, so as to
identify it with its polytope $(\Delta_2)^n$, a direct product of triangles.
The components in (\ref{eq:components}) are $3$-dimensional boundary
strata of $(\PP^2)^n$, and we identify them with faces of $(\Delta_2)^n$.
The corresponding $3$-dimensional polytopes are the {\em $3$-cube}
and the {\em triangular prism}. The following three examples illustrate this view.
\begin{figure}
\includegraphics[width=0.56\linewidth]{V2_Z_Monomial.png}
\caption{The variety of the generic initial ideal $M_2$ seen as two adjacent facets
of the $4$-dimensional polytope $\Delta_2 \times \Delta_2$.}
\label{V2_Z_Monomial_Figure}
\end{figure}
\begin{example}{\rm [Two cameras $(n=2)$] \ }
The variety of $\,M_2 = \langle x_1 \rangle \,\cap \, \langle x_2 \rangle \,$
is a hypersurface in $\PP^2 \times \PP^2$.
The two components are triangular prisms $\PP^2 \times \PP^1$,
which are glued along a common square $\PP^1 \times \PP^1$, as shown in
Figure~\ref{V2_Z_Monomial_Figure}. \qed
\end{example}
\begin{example}{\rm [Three cameras $(n=3)$] \ } \label{ex:M_3}
The variety of $M_3 $ is a threefold in $\PP^2 \times \PP^2 \times \PP^2$.
Its seven components are given by the prime decomposition
$$
\begin{matrix} M_3 \quad = & \quad
\langle x_1, x_2, y_1 \rangle \, \cap \,
\langle x_1, x_2, y_2 \rangle \, \cap \,
\langle x_1, x_3, y_1 \rangle & \\
& \, \cap \,\,
\langle x_1, x_3, y_3 \rangle \, \cap \,
\langle x_2, x_3, y_2 \rangle \, \cap \,
\langle x_2, x_3, y_3 \rangle & \!\! \cap \,\,
\langle x_1, x_2, x_3 \rangle .
\end{matrix}
$$
The last component is a cube $\PP^1 \times \PP^1 \times \PP^1$,
and the other six components are triangular prisms $\PP^2 \times \PP^1$.
These are glued in pairs along three of the six faces of the cube.
For instance, the two triangular prisms $V(x_1,x_2,y_1)$
and $V(x_1,x_3,y_1)$ intersect the cube $V(x_1,x_2,x_3)$
in the common square face $V(x_1,x_2,x_3,y_1)$ $\simeq \PP^1\times \PP^1$.
This polyhedral complex lives in the boundary of $(\Delta_2)^3$,
and it is shown in Figure~\ref{V3_Z_Monomial_Figure}.
Compare this picture with Figure \ref{V3_J8_Blowup}.
\qed
\end{example}
\begin{figure}
\includegraphics[width=0.43\linewidth]{V3_Z_Monomial.png}
\!\!\!\!\!
\includegraphics[width=0.55\linewidth]{M3_on_V3_explode.png}
\caption{The monomial variety $V(M_3)$ as a subcomplex of $(\Delta_2)^3$.}
\label{V3_Z_Monomial_Figure}
\end{figure}
\begin{example} {\rm [Four cameras $(n = 4)$] } \label{ex:M_4}
The variety $V(M_4) $ is a threefold in $(\PP^2)^4$,
regarded as a $3$-dimensional subcomplex
in the boundary of the $8$-dimensional polytope $(\Delta_2)^4$.
It consists of four cubes and twelve triangular prisms.
The cubes share a common vertex, any two cubes
intersect in a square, and each of the six squares
is adjacent to two triangular prisms. \qed
\end{example}
From the prime decomposition in Proposition \ref{prop:prime_decomposition}
we can read off the {\em multidegree} \cite[\S 8.5]{MS} of the ideal $M_n$.
Here and in what follows, we use
the natural $\ensuremath{\mathbb{Z}}^n$-grading on $K[x,y,z]$ given by
$\deg(x_i) = \deg(y_i) = \deg(z_i) = e_i$.
Each multiview ideal $J_A$ is homogeneous with respect to this
$\ensuremath{\mathbb{Z}}^n$-grading.
\begin{corollary}
The multidegree of the generic initial ideal $M_n $ is equal to
\begin{equation}
\label{eq:multidegree} \mathcal{C}\bigl(K[x,y,z]/M_n; {\bf t} )\bigr) \,\,\, = \, \,\,
t_1^2 t_2^2 \cdots t_n^2 \cdot \left( \sum_{1 \leq i < j < k \leq n} \! \frac{1}{t_i t_j t_k} \,+ \!
\sum_{1 \leq i,j \leq n} \! \frac{1}{t_i^2 t_j}\, \right)
\end{equation}
\end{corollary}
A more refined analysis also yields the Hilbert function in the $\ensuremath{\mathbb{Z}}^n$-grading.
\begin{theorem} \label{thm:hilbertfct}
The multigraded Hilbert function of $K[x,y,z]/M_n$ equals
\begin{equation}
\label{Vn_Hilbert_function}
\ensuremath{\mathbb{N}}^n \,\to \,\ensuremath{\mathbb{N}},\ (u_1,\ldots,u_n)\,\mapsto \,{u_1+\cdots+u_n+3\choose 3}-\sum_{i=1}^n{u_i+2\choose 3}.
\end{equation}
\end{theorem}
\smallskip\noindent {\it Proof: \ } Fix $u\in\ensuremath{\mathbb{N}}^n$. A $K$-basis $\mathfrak{B}_u$ for $(K[x,y,z]/M_n)_u$ is given by all monomials $x^ay^bz^c \not \in M_n$ such that $a+b+c=u$. Therefore, either (i) $a=0$ and at most three components
of $b$ are non-zero; or (ii) $a\ne 0$, in which case only one $a_i$ can be non-zero and $b_j \not= 0$ for at most
one $j \in [n] \backslash \{i\}$.
We shall count the monomials in $\mathfrak{B}_u$. Monomials of type (i) look like $y^bz^c$, with at most three nonzero entries in $b$. Also, $b$ determines $c$ since $c_i = u_i - b_i$ for all $i \in [n]$, and so we count the number of possibilities for $y^b$.
There are $u_i$ choices for $b_i \ne 0$, and thus $U := u_1+\cdots+u_n$ many monomials in the set ${\mathcal Y} := \{y_i^{b_i} \,:\, 1 \leq b_i \leq u_i, \,i=1,\ldots,n\}$. The factor $y^b$ in $y^bz^c$ is the product of $0$, $1$, $2$ or $3$ monomials from ${\mathcal Y}$
with distinct subscripts.
To resolve over-counting, consider a fixed index $i$. There are ${u_i\choose 2}$ ways of choosing two monomials from ${\mathcal Y}$ with subscript $i$ and ${u_i\choose 3}$ ways of choosing three monomials from ${\mathcal Y}$ with subscript $i$. Also, there are ${u_i\choose 2}(U- u_i)$ ways of choosing two monomials from ${\mathcal Y}$ with subscript $i$ and a third monomial with a different subscript. Hence, the number of choices for $y^b$ in $y^bz^c$ is
\begin{small}
$${U\choose 0}+{U\choose 1}+\left[{U\choose 2} - \sum_{i=1}^n \! {u_i\choose 2}\right]+
\left[{U\choose 3} -\sum_{i=1}^n \! {u_i\choose 3} -U\sum_{i=1}^n \! {u_i\choose 2}+\sum_{i=1}^nu_i{u_i\choose 2}\right]\!.$$
\end{small}
For case (ii) we count all monomials $x^ay^bz^c\in\mathfrak{B}_u$ with $a_i\ne 0$ and all other $a_j=0$.
It suffices to count the choices for the factor $x^ay^b$. For fixed $i$, there are ${u_i+1\choose 2}$ monomials of the form
$x_i^{a_i}y_i^{b_i}$ with $a_i+b_i \leq u_i$ and $a_i \geq 1$. Such a monomial may be multiplied with
$y_j^{b_j}$ such that $j \ne i$ and $0 \leq b_j \leq u_j$. This amounts to choosing zero or one monomial from ${\mathcal Y} \backslash \{y_i, y_i^2,
\ldots, y_i^{u_i} \}$ for which there are $1+U-u_i$ choices. Hence, there are
$$[1+U]\sum_{i=1}^n {u_i+1\choose 2} \,-\, \sum_{i=1}^nu_i{u_i+1\choose 2}$$
monomials in $\mathfrak{B}_u$ of type (ii).
Adding the two expressions, we get
\begin{smaller}
\begin{align*}
|\mathfrak{B}_u|
&= 1+U+{U\choose 2}+{U\choose 3}+(1+U)\sum_{i=1}^n{u_i\choose 1} -\sum_{i=1}^n u_i{u_i\choose 1} -\sum_{i=1}^n {u_i\choose 3}\\
&= 1+U+{U\choose 2}+{U\choose 3}+(1+U)U - \sum_{i=1}^n{u_i+2\choose 3}\\
&= {U+3\choose 3}-\sum_{i=1}^n{u_i+2\choose 3}.
\end{align*}
\end{smaller}
\vskip -0.9cm
\hfill$\square$\medskip
\smallskip
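\par The formula (\ref{Vn_Hilbert_function}) is easy to test numerically. The following sketch, in plain Python, enumerates the standard monomials of $M_n$ of each multidegree $u$ for $n=3$ and compares the count with the closed formula.
\begin{verbatim}
from itertools import product
from math import comb

def in_Mn(a, b):            # is x^a y^b z^c divisible by a generator?
    xs = [i for i, ai in enumerate(a) if ai > 0]
    ys = [i for i, bi in enumerate(b) if bi > 0]
    if len(xs) >= 2 or len(ys) >= 4:
        return True
    return any(sum(j != i for j in ys) >= 2 for i in xs)

def hilbert(u):             # count monomials of multidegree u not in M_n
    count = 0
    for a in product(*[range(ui + 1) for ui in u]):
        for b in product(*[range(ui - ai + 1)
                           for ui, ai in zip(u, a)]):
            count += not in_Mn(a, b)   # c is determined by (a, b)
    return count

def formula(u):
    U = sum(u)
    return comb(U + 3, 3) - sum(comb(ui + 2, 3) for ui in u)

for u in product(range(4), repeat=3):  # n = 3, all u with u_i <= 3
    assert hilbert(u) == formula(u)
print("Hilbert function verified for n = 3")
\end{verbatim}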
Our analysis of $M_n$ has the following implication for the multiview ideals $J_A$.
Note that these are $\ensuremath{\mathbb{Z}}^n$-homogeneous for any camera configuration $A$.
\begin{theorem} \label{thm:multiview ideals on Hilbert scheme}
For an $n$-tuple of camera matrices $A = (A_1,\ldots, A_n)$ with $\textup{rank}(A_i)=3$ for each $i$, the
multiview ideal $J_A$ has the Hilbert function (\ref{Vn_Hilbert_function})
if and only if the focal points of the $n$ cameras are pairwise distinct.
\end{theorem}
\begin{proof}
The if-direction follows from the argument in the first paragraph of this section.
If the $n$ camera positions $f_i = {\rm ker}(A_i)$ are distinct in $\PP^3$
then $M_n$ is the generic initial ideal of $J_A$, and hence both ideals have
the same $\ensuremath{\mathbb{Z}}^n$-graded Hilbert function.
For the only-if-direction we shall use:
\begin{equation} \label{lem:multiview ideal stays same}
\hbox{If $Q \in \PGL(4,K)$ and $AQ := (A_1Q, \ldots, A_nQ)$, then $J_A = J_{AQ}$.}
\end{equation}
This holds because $Q$ defines an isomorphism on $\PP^3 $
and hence $\phi_A$ as in (\ref{eq:phiA}) has the same image in $(\PP^2)^n$ as
$\phi_{AQ}$.
Suppose first that $n=2$ and $A_1$ and $A_2$ have the same focal point
and hence the same (three-dimensional) rowspace $W$.
We can map $W$ to the hyperplane $\{x_1 = 0\}$ by some
$Q \in \PGL(4,K)$, and (\ref{lem:multiview ideal stays same})
ensures that
$J_A = J_{AQ}$.
Thus we may assume that $ A_1 = \left[ \begin{array}{ll} \bf{0} & C_1 \end{array} \right]$ and $A_2 = \left[ \begin{array}{ll} \bf{0} & C_2 \end{array} \right]$ where $C_1$ and $C_2$ are invertible matrices and $\bf{0}$ is a column of zeros. Choosing
$f_1 = f_2 = (1,0,0,0)$ as the top row of $B_1$ and $B_2$ (as in Section 2), we have
$$ B_1^{-1} = \left[ \begin{array}{cc} 1 & \bf{0} \\ \bf{0} & C_1^{-1} \end{array} \right], \,\,\,B_2^{-1} = \left[ \begin{array}{cc} 1 & \bf{0} \\ \bf{0} & C_2^{-1} \end{array} \right].$$
The ideal $J^B$ is generated by the $2\times2$ minors of the matrix (\ref{eq:inverseB}) which is
$$D = \left[ \begin{array}{cc} w_1 & w_2 \\
p_1(x_1,y_1,z_1) & q_1(x_2,y_2,z_2) \\ p_2(x_1,y_1,z_1) & q_2(x_2,y_2,z_2) \\ p_3(x_1,y_1,z_1) & q_3(x_2,y_2,z_2) \end{array} \right]$$
where the $p_i$'s and $q_i$'s are linear polynomials. The ideal $I$ generated by the $2 \times 2$ minors of the submatrix of $D$ obtained by deleting the top row lies on the Hilbert scheme $H_{3,2}$ from \cite{CS} and hence $K[x,y,z]/I$ has Hilbert function
$$ \mathbb N^2 \to \mathbb N, \,\,\,(u_1,u_2) \mapsto \left( \begin{array}{c}
u_1+u_2+2 \\ 2 \end{array} \right).$$
For $(u_1,u_2) = (1,1)$, this has value
$6$. Since $I \subseteq J_A = J^B \cap K[x,y,z] $, the Hilbert function
of $ K[x,y,z]/J_A$ has value $\leq 6$, while (\ref{Vn_Hilbert_function}) evaluates to $8$.
If $n > 2$, we may assume without loss of generality that $A_1$ and $A_2$ have the same rowspace.
The argument for $n=2$ shows that $J_A = J^B \cap K[x,y,z] \supseteq I$. The Hilbert function value of $K[x,y,z]/J_A$ in degree $e_1+e_2$ is again $8$, while the Hilbert function value of $K[x,y,z]/I$ in degree $e_1+e_2$ coincides with
the value $6$ for $K[x_1,y_1,z_1,x_2,y_2,z_2]/I$. So we again conclude that
$K[x,y,z]/J_A$ does not have Hilbert function (\ref{Vn_Hilbert_function}).
\end{proof}
For $G = \PGL(3,K)$, the product $G^n$ acts on $K[x,y,z]$ by left-multiplication
$$ (g_1,\ldots,g_n) \cdot \left[ \begin{array}{c} x_i \\ y_i \\ z_i \end{array} \right]
\,\,=\,\, \, g_i \left[ \begin{array}{c} x_i \\ y_i \\ z_i \end{array} \right].$$
An ideal $I$ in $K[x,y,z]$ is said to be {\em Borel-fixed} if it is fixed
under the induced action of ${\mathcal B}^n$ where ${\mathcal B}$ is the subgroup of lower triangular
matrices in $G$.
\begin{proposition}
The generic initial ideal $M_n$ is the unique ideal in $K[x,y,z]$
that is Borel-fixed and has the Hilbert function (\ref{Vn_Hilbert_function})
in the $\ensuremath{\mathbb{Z}}^n$-grading.
\end{proposition}
\smallskip\noindent {\it Proof: \ }
The proof is analogous to that of
\cite[Theorem 2.1]{CS}, where $Z_{d,n}$ plays the role of $M_n$.
The ideal $M_n$ is Borel-fixed because it is a generic initial ideal.
The same approach as in \cite[\S 15.9.2]{E} can be used to prove this fact.
The multidegree of any $\ensuremath{\mathbb{Z}}^n$-graded ideal is determined by its Hilbert series \cite[Claim 8.54]{MS}.
Thus any ideal $I$ with Hilbert function (\ref{Vn_Hilbert_function}) has multidegree (\ref{eq:multidegree}).
Let $I$ be such a Borel-fixed ideal. This is a monomial ideal.
Each maximum-dimensional associated prime $P$ of $I$ has multidegree either
$t_1^2t_2^2\cdots t_n^2/(t_it_jt_k)$ or $t_1^2t_2^2\cdots t_n^2/(t_i^2t_j)$, by \cite[Theorem 8.53]{MS}.
In the first case $P$ is generated by $2n-3$ indeterminates, one associated with each
of the three cameras $i,j,k$ and two each from the other $n-3$ cameras.
Borel-fixedness of $I$ tells us that the generators indexed by each camera must be the most
expensive variables with respect to the order $\prec$. Hence $P=P_{ijk}$.
Similarly, $P=Q_{ij}$ in the case when $P$ has multidegree $t_1^2t_2^2\cdots t_n^2/(t_i^2t_j)$.
Every prime component of $M_n$ is among the minimal associated primes of $I$.
This yields the containments $I\subseteq \sqrt{I}\subseteq M_n$.
Since $I$ and $M_n$ have the same $\ensuremath{\mathbb{Z}}^n$-graded Hilbert function,
the equality $I=M_n$ holds.
\hfill$\square$\medskip
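\par Borel-fixedness of $M_n$ itself admits a quick combinatorial check: in characteristic $0$, a monomial ideal is fixed under ${\mathcal B}^n$ if and only if replacing a factor of a generator by a more expensive variable in the same block (here only the moves $y_i \mapsto x_i$ matter, since no generator involves a $z_i$) again yields a monomial in the ideal. The sketch below, in plain Python, verifies this for $n=4$.
\begin{verbatim}
from itertools import combinations

n = 4
X = [("x", i) for i in range(n)]
Y = [("y", i) for i in range(n)]
gens = ([{a, b} for a, b in combinations(X, 2)]
      + [{x, u, v} for x in X for u, v in combinations(Y, 2)
         if x[1] not in (u[1], v[1])]
      + [set(c) for c in combinations(Y, 4)])

def in_Mn(m):  # squarefree membership: divisible by some generator
    return any(g <= m for g in gens)

for g in gens:
    for var in list(g):
        if var[0] == "y":                      # Borel move y_i -> x_i
            moved = (g - {var}) | {("x", var[1])}
            assert in_Mn(moved)
print("all Borel moves stay in M_4")
\end{verbatim}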
The {\em Stanley-Reisner complex} of a squarefree monomial ideal $M$ in a polynomial ring
$K[t_1,\ldots,t_s]$ is the simplicial complex on $\{1, \ldots, s \}$ whose facets are the sets $[s] \backslash \sigma$ where
$P_\sigma := \{ t_i \,:\, i \in \sigma \}$ is a minimal prime of $M$. A {\em shelling} of a simplicial complex
is an ordering $F_1,F_2, \ldots, F_q$
of its facets such that, for each $1 < j \leq q$, there exists a unique
minimal face of $F_j$ (with respect to inclusion) among the faces of $F_j$ that are not faces of some earlier facet $F_i$, $i < j$; see \cite[Definition 2.1]{Stanley}.
If the Stanley-Reisner complex of $M$ is shellable, then
$K[t_1, \ldots, t_s]/M$ is Cohen-Macaulay \cite[Theorem 2.5]{Stanley}.
\begin{proposition}
The Stanley-Reisner complex of the generic initial ideal $M_n$ is shellable.
Hence the quotient ring $K[x,y,z]/M_n$ is Cohen-Macaulay.
\end{proposition}
\smallskip\noindent {\it Proof: \ }
This proof is similar to that for $Z_{d,n}$ given in \cite[Corollary 2.6]{CS}.
Let $\Delta_n$ denote the Stanley-Reisner complex of the ideal $M_n$.
By Proposition~\ref{prop:prime_decomposition}, there are two types of minimal primes for $M_n$,
namely $P_{ijk}$ and $Q_{ij}$, which we describe uniformly as follows. Let $P = (p_{ij})$ be the $3 \times n$ matrix whose $i$th column is $[x_i \,\, y_i \,\, z_i]^T$. For $u \in \{0,1,2\}^n$ define
$P_u := \langle p_{ij} \,:\, i \leq u_j, \, 1 \leq j \leq n \rangle$. Then the minimal primes $P_{ijk}$ of $M_n$ are precisely the primes $P_u$ as $u$ varies over all vectors with three coordinates equal to one and the rest equal to two, and the minimal primes $Q_{ij}$ are those $P_u$ where $u$ has one coordinate equal to zero, one coordinate equal to one and the rest equal to two. The facet of $\Delta_n$ corresponding to the minimal prime $P_u$ is then $F_u := \{ p_{ij} \,:\, u_j < i \leq 3, \, 1 \leq j \leq n \}$.
We claim that the ordering of the facets $F_u$ induced by ordering the $u$'s lexicographically starting with $(0,1,2,2, \ldots, 2)$ and ending with $(2,2, \ldots, 2,1,0)$ is a shelling of $\Delta_n$.
Consider the face $\eta_u := \{ p_{ij} \,: \, j > 1, i = u_j+1 \leq 2\}$ of the facet $F_u$.
We will prove that $\eta_u$ is the unique minimal one among the faces of $F_u$ that have not appeared in a facet $F_{u'}$ for $u' < u$. Suppose $G$ is a face of $F_u$ that does not contain $\eta_u$. Pick an element $p_{u_j+1,j} \in \eta_u \backslash G$. Then $j > 1$, $u_j \leq 1$ and so if $F_u$ is not the first facet in the ordering, then there exists $i < j$ such that $u_i > 0$ because $u >
(0,1,2,2,\ldots,2)$ and is of the form described above. Pick $i$ such that $i < j$ and $u_i > 0$, and consider
$F_{u+e_j-e_i} = F_u \backslash \{p_{u_j+1,j} \} \cup \{p_{u_i,i} \}$. Then $u+e_j-e_i < u$ and
$G$ is a face of $F_{u+e_j-e_i}$. Conversely, suppose $G$ is a face of $F_u$ that is also a face of $F_{u'}$ where $u' < u$. Since $\sum u'_j = \sum u_j$, there exists some $j > 1$ such that $u'_j > u_j$. Therefore, $G$ does not contain $p_{u_j+1,j}$ which belongs to $\eta_u$. Therefore, $\eta_u$ is not contained in $G$.
\hfill$\square$\medskip
\section{A Toric Perspective}
In this section we examine multiview ideals $J_A$ that are toric.
For an introduction to toric ideals we refer the reader to \cite{GBCP}.
We now
assume that, for each camera $i$, each of the
four torus fixed points in $\PP^3$ either is the camera position
or is mapped to a torus fixed point in $\PP^2$.
This implies $n \leq 4$. We
fix $n=4$ and $f_i = e_i$ for $i=1,2,3,4$. Up to permuting and rescaling columns,
our assumption implies that the configuration $A$ equals
$$
\begin{small}
A_1 = \begin{bmatrix} 0 \! & \! 1 \! & \! 0 \! & \! 0 \\
0 \! & \! 0 \! & \! 1 \! & \! 0 \\
0 \! & \! 0 \! & \! 0 \! & \! 1 \end{bmatrix} \! ,\,\,
A_2 = \begin{bmatrix} 1 \! & \! 0 \! & \! 0 \! & \! 0 \\
0 \! & \! 0 \! & \! 1 \! & \! 0 \\
0 \! & \! 0 \! & \! 0 \! & \! 1 \end{bmatrix} \!,\,\,
A_3 = \begin{bmatrix} 1 \! & \! 0 \! & \! 0 \! & \! 0 \\
0 \! & \! 1 \! & \! 0 \! & \! 0 \\
0 \! & \! 0 \! & \! 0 \! & \! 1 \end{bmatrix} \!,\,\,
A_4 = \begin{bmatrix} 1 \! & \! 0 \! & \! 0 \! & \! 0 \\
0 \! & \! 1 \! & \! 0 \! & \! 0 \\
0 \! & \! 0 \! & \! 1 \! & \! 0 \end{bmatrix} \! .
\end{small}
$$
For this camera configuration, the multiview ideal $J_A$ is indeed a toric ideal:
\begin{proposition} \label{prop:toricideal}
The ideal $J_A$ is obtained by eliminating the diagonal unknowns $w_1$, $w_2$, $w_3$ and $w_4$
from the ideal
of $2 \times 2$-minors of the $4 \times 4$-matrix
\begin{equation}
\label{eq:fourbyfour}
\begin{pmatrix}
w_1 & x_2 & x_3 & x_4 \\
x_1 & w_2 & y_3 & y_4 \\
y_1 & y_2 & w_3 & z_4 \\
z_1 & z_2 & z_3 & w_4
\end{pmatrix}.
\end{equation}
This toric ideal is minimally generated by six quadrics and four cubics:
\begin{small}
$$ \! \begin{matrix} J_A \,=\, \langle
y_1 y_4{-}x_1 z_4, y_3 x_4{-}x_3 y_4, y_2 x_4{-}x_2 z_4, z_1 y_3{-}x_1 z_3, z_2 x_3{-}x_2 z_3,
z_1 y_2{-}y_1 z_2 ,\\ \qquad \quad
y_2 z_3 y_4-z_2 y_3 z_4, \,y_1 z_3 x_4-z_1 x_3 z_4,\, x_1 z_2 x_4-z_1 x_2 y_4,\,
x_1 y_2 x_3-y_1 x_2 y_3 \rangle
\end{matrix}
$$
\end{small}
\end{proposition}
\begin{proof}
We extend $A_i$ to a $4 \times 4$-matrix $B_i$ as in Section~2 by adding the
row $b_i = e_i^T$. The $B_i$'s are then all permutation matrices,
and the matrix in (\ref{eq:inverseB}) equals the matrix in (\ref{eq:fourbyfour}).
The ideal $J^B$ is generated by the $2 \times 2$ minors of that matrix of unknowns.
The multiview ideal is $J_A = J^B \cap K[x,y,z]$. We find the listed
binomial generators by performing the elimination with a
computer algebra package such as {\tt Macaulay2}.
Toric ideals are precisely those prime
ideals generated by binomials and hence $J_A$ is a toric ideal.
\end{proof}
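\par The statement is also easy to check by hand, since here each camera with focal point $e_i$ simply deletes the $i$th coordinate of $(w:x:y:z)$. The following sketch, assuming SymPy, verifies that the ten listed binomials vanish identically under this monomial parameterization; the elimination itself was carried out with {\tt Macaulay2} as described above.
\begin{verbatim}
import sympy as sp

w, x, y, z = sp.symbols("w x y z")
# camera i forgets the i-th coordinate of (w : x : y : z)
(x1, y1, z1) = (x, y, z)
(x2, y2, z2) = (w, y, z)
(x3, y3, z3) = (w, x, z)
(x4, y4, z4) = (w, x, y)
gens = [y1*y4 - x1*z4, y3*x4 - x3*y4, y2*x4 - x2*z4,
        z1*y3 - x1*z3, z2*x3 - x2*z3, z1*y2 - y1*z2,
        y2*z3*y4 - z2*y3*z4, y1*z3*x4 - z1*x3*z4,
        x1*z2*x4 - z1*x2*y4, x1*y2*x3 - y1*x2*y3]
assert all(sp.expand(g) == 0 for g in gens)
\end{verbatim}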
\begin{remark}
The {\em normalized coordinate system in multiview geometry}
proposed by Heyden and {\AA}str{\"o}m \cite{HA} is different from ours
and does not lead to toric varieties. Indeed, if one uses the camera matrices in
\cite[\S 2.3]{HA}, then $J_A$ is also generated by six quadrics
and four cubics, but seven of the ten generators are not binomials.
One of the cubic generators has six terms. \qed
\end{remark}
In commutative algebra, it is customary to represent
toric ideals by integer matrices. Given $\mathcal{A} \in \mathbb N^{p \times q}$ with columns
$a_1, \ldots, a_q$, the {\em toric ideal} of $\mathcal{A}$ is
$$ I_\mathcal{A} \,\, := \,\, \langle t^u - t^v \,:\, \mathcal{A}u
\,\,= \,\, \mathcal{A}v, \, u, v \in \mathbb N^q \rangle \,\, \subset \,\, K[t] \,:= \,K[t_1, \ldots, t_q], $$
where $t^u$ represents the monomial $t_1^{u_1}t_2^{u_2} \cdots t_q^{u_q}$.
If $\mathcal{A'}$ is the submatrix of $\mathcal A$ obtained by deleting the columns indexed by $j_1, \ldots, j_s$ for some $s < q$, then the toric ideal $I_{\mathcal{A'}}$ equals the elimination ideal $I_\mathcal{A} \cap K[t_j \,:\,
j \not \in \{j_1, \ldots, j_s\}]$; see \cite[Prop.~4.13 (a)]{GBCP}. The integer matrix $\mathcal{A}$
for our toric multiview ideal $J_A$ in Proposition \ref{prop:toricideal}
is the following {\em Cayley matrix} of format $8 \times 12$:
$$
\mathcal{A} \,\,\, = \,\,\,
\begin{bmatrix}
A_1^T & A_2^T & A_3^T & A_4^T \\
{\bf 1} & {\bf 0} & {\bf 0} & {\bf 0} \\
{\bf 0} & {\bf 1} & {\bf 0} & {\bf 0} \\
{\bf 0} & {\bf 0} & {\bf 1} & {\bf 0} \\
{\bf 0} & {\bf 0} & {\bf 0} & {\bf 1}
\end{bmatrix}
$$
where ${\bf 1} = [1 \,1\,1 ]$ and ${\bf 0} = [0\,0\,0 ]$.
This matrix $\mathcal{A}$ is obtained from the following $8 \times 16$ matrix
by deleting columns $1, 6, 11$ and $16$:
\begin{equation}
\label{eq:transportation}
\begin{bmatrix}
I_4 & I_4 & I_4 & I_4 \\
{\bf 1} & {\bf 0} & {\bf 0} & {\bf 0} \\
{\bf 0} & {\bf 1} & {\bf 0} & {\bf 0} \\
{\bf 0} & {\bf 0} & {\bf 1} & {\bf 0} \\
{\bf 0} & {\bf 0} & {\bf 0} & {\bf 1}
\end{bmatrix}
\end{equation}
The vectors ${\bf 1}$ and ${\bf 0}$ now have length four, $I_4$ is the $4 \times 4$ identity matrix and we assume that the columns of (\ref{eq:transportation}) are indexed by $$w_1, x_1, y_1,z_1, x_2,w_2,y_2,z_2,x_3,y_3,w_3,z_3,x_4,y_4,z_4,w_4.$$ The matrix
(\ref{eq:transportation}) represents the direct product of two tetrahedra,
and its toric ideal is known (by \cite[Prop. 5.4]{GBCP})
to be generated by the $2 \times 2$ minors of (\ref{eq:fourbyfour}).
Its elimination ideal in the ring $K[x,y,z]$ is $I_\mathcal{A}$, and hence
$J_A = I_\mathcal{A}$.
\begin{figure}
\includegraphics[width=0.45\linewidth]{V4_Toric_Blowup.png}\hfill
\includegraphics[width=0.55\linewidth]{V4_Toric_Explode.png}
\caption{
Initial monomial ideals of the toric multiview variety correspond to mixed subdivisions of
the truncated tetrahedron $P$. These have $4$ cubes and $12$ triangular prisms.}
\label{V4_Toric_Blowup}
\end{figure}
The matrix $\mathcal{A}$ has rank $7$ and its columns determine a
$6$-dimensional polytope ${\rm conv}(\mathcal{A})$
with $12$ vertices.
The normalized volume of $ {\rm conv}(\mathcal{A})$ equals $16$, and this
is the degree of the $6$-dimensional projective toric variety in $\PP^{11}$ defined by $J_A$.
In our context, we don't care for the $6$-dimensional variety in $\PP^{11}$
but we are interested in the threefold in
$\PP^2 {\times} \PP^2 {\times} \PP^2 {\times} \PP^2$ cut out by $J_A$.
To study this combinatorially, we apply the {\em Cayley trick}. This means we
replace the $6$-dimensional polytope
${\rm conv}(\mathcal{A})$ by the $3$-dimensional polytope
$$ P \,\, = \,\, {\rm conv}(A_1^T) + {\rm conv}(A_2^T) + {\rm conv}(A_3^T) + {\rm conv}(A_4^T) . $$
This is the Minkowski sum of the four triangles that form the facets of the standard tetrahedron.
Equivalently, $P$ is the scaled tetrahedron $4 \Delta_3$ with its vertices sliced off.
Triangulations of $\mathcal{A}$ correspond to mixed subdivisions of $P$.
Each $6$-simplex in $\mathcal{A}$ becomes a cube or a triangular prism in $P$.
Each mixed subdivision has four cubes $\PP^1 \times \PP^1 \times \PP^1$
and twelve triangular prisms $\PP^2 \times \PP^1$.
Such a mixed subdivision of $P$ is shown in Figure \ref{V4_Toric_Blowup}.
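As a consistency check, each mixed subdivision has $4 + 12 = 16$ maximal cells, in bijection with the $6$-simplices of the corresponding unimodular triangulation, matching the normalized volume $16$ of ${\rm conv}(\mathcal{A})$ computed above.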
Note the similarities and differences relative to the complex $V(M_4)$ in Example \ref{ex:M_4}.
\smallskip
We worked out a complete classification of all mixed subdivisions of $P$:
\begin{theorem} \label{thm:1068}
The truncated tetrahedron $P$ has $1068$ mixed subdivisions, one
for each triangulation of the Cayley polytope ${\rm conv}(\mathcal{A})$.
Precisely $1002$ of the $1068$ triangulations are regular.
The regular triangulations form $48$ symmetry classes, and the
non-regular triangulations form $7$ symmetry classes.
\end{theorem}
We offer a brief discussion of this result and how it was obtained.
Using the software {\tt Gfan} \cite{Gfan}, we found that $I_\mathcal{A}$ has
1002 distinct monomial initial ideals. These ideals fall into 48 symmetry classes under the
natural action of $(S_3)^4 \rtimes S_4$
on $K[x,y,z]$ where the $i$-th copy of $S_3$ permutes the variables $x_i,
y_i,z_i$, and $S_4$ permutes the labels of the cameras.
The matrix $\mathcal{A}$ being unimodular, each initial ideal of $I_\mathcal{A}$
is squarefree and each triangulation of $\mathcal{A}$ is unimodular.
To calculate all non-regular triangulations, we used the
bijection between triangulations and $\mathcal{A}$-graded monomial ideals
in \cite[Lemma 10.14]{GBCP}. Namely, we ran a second
computation using the software package {\tt CaTS} \cite{CaTS}
that lists all $\mathcal{A}$-graded monomial ideals, and we
found their number to be $1068$; hence $\mathcal{A}$ has $1068 - 1002 = 66$ non-regular triangulations.
\begin{figure}
\includegraphics[width=0.36\linewidth]{adjacency_1.png}
\vskip -0.6cm
\caption{The dual graph of the mixed subdivision given by $Y_1$.}
\label{fig:dualgraph12gens}
\end{figure}
The $48$ distinct initial monomial ideals of the toric multiview ideal $J_A$
can be distinguished by various invariants. First, their
numbers of generators range from $12$ to $15$.
There is precisely one initial ideal with $12$ generators:
\begin{align*}
Y_1 \,\,\, = \,\,\, & \langle \,
y_1z_2, z_1 y_3, x_1z_4, z_2 x_3, y_2 x_4, x_3y_4, \\ & \,\,\,\,
x_1 y_2 x_3, z_1 y_2 x_3, x_1z_2 x_4,
z_1 x_3 z_4, z_2 y_3 x_4, z_2 y_3 z_4\,
\rangle.
\end{align*}
At the other extreme, there are two classes of initial ideals with $15$ generators.
These are the only classes having quartic generators, as all ideals with $\leq 14$ generators
require only quadrics and cubics. A representative is
\begin{align*}
Y_2 \,\,\, = \,\,\, & \langle \,
z_1 y_2, x_1 z_3, x_1 z_4, x_2 z_3, y_2 x_4, y_3 x_4,
\, y_1 z_2 x_3 y_4, \\ & \,\,\,\,
x_1 y_2 x_3, x_1 z_2 x_3,
x_1 z_2 x_4, x_4 z_2 y_1,
y_1 z_3 x_4, y_1 z_3 y_4,
y_2 x_3 y_4, y_2 z_3 y_4\,
\rangle.
\end{align*}
All non-regular $\mathcal{A}$-graded monomial ideals have $14$ generators.
One of them~is
\begin{align*}
Y_3 \,\,\, = \,\,\, & \langle \,
z_1y_2, z_1y_3, x_1z_4, x_2z_3, x_2z_4,
y_3x_4,\, x_1y_2z_3, y_1x_2y_3, \\ & \,\,\,\,
x_1y_2x_4, x_1z_2x_4,
x_1z_3x_4, y_1z_3x_4,
y_2z_3x_4, y_2z_3y_4\,
\rangle.
\end{align*}
A more refined combinatorial invariant of the $55$ symmetry classes
is the dual graph of the mixed subdivision of $P$. The $16$ vertices of this
graph are labeled with squares and triangles to denote cubes and triangular prisms respectively,
and edges represent common facets.
The graph for $Y_1$ is shown in Figure~\ref{fig:dualgraph12gens}.
For complete information on the classification in
Theorem \ref{thm:1068} see the website \
\texttt{www.math.washington.edu/$\sim$aholtc/HilbertScheme}.
\smallskip
That website also contains the same information for the
toric multiview variety in the easier case of $n=3$ cameras. Taking
$A_1, A_2$ and $A_3$ as camera matrices, the
corresponding Cayley matrix has format $7 \times 9$ and rank $6$:
$$
\mathcal{A} \,\,\, = \,\,\,
\begin{bmatrix}
A_1^T & A_2^T & A_3^T \\
{\bf 1} & {\bf 0} & {\bf 0} \\
{\bf 0} & {\bf 1} & {\bf 0} \\
{\bf 0} & {\bf 0} & {\bf 1}
\end{bmatrix}
\,\,\, = \,\,\,
\begin{bmatrix}
\,0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 \,\\
\,1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \,\\
\,0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \,\\
\,0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 \,\\
\, 1& 1 & 1 & 0 & 0& 0 & 0 & 0 & 0 \,\\
\, 0& 0 & 0 & 1 & 1& 1 & 0 & 0 & 0 \,\\
\, 0& 0 & 0 & 0 & 0& 0 & 1 & 1 & 1 \,
\end{bmatrix}
$$
This is the transpose of the matrix $A_{\{123\}}$ in (\ref{eq:drei}) when
evaluated at $x_1 = y_1 = \cdots = z_3 = 1$.
The corresponding $6$-dimensional Cayley polytope $ {\rm conv}(\mathcal{A})$ has
$9$ vertices and normalized volume $7$, and the toric multiview ideal equals
\begin{equation}
\label{eq:4binomials}
J_A \,\,= \,\, \langle z_1 y_3 - x_1 z_3, z_2 x_3 - x_2 z_3,
z_1 y_2 -y_1 z_2 , x_1 y_2 x_3-y_1 x_2 y_3 \rangle.
\end{equation}
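These binomials can be checked directly against the columns of the Cayley matrix displayed above, which we index by $x_1,y_1,z_1,\ldots,x_3,y_3,z_3$. For example, the quadric $z_1 y_3 - x_1 z_3$ lies in $I_\mathcal{A}$ because the exponent vectors of its two terms have the same image:
$$ \mathcal{A}\,(e_{z_1}{+}\,e_{y_3}) \,=\, \mathcal{A}\,(e_{x_1}{+}\,e_{z_3}) \,=\, (0,1,0,1,1,0,1)^T, $$
and likewise the cubic $x_1y_2x_3 - y_1x_2y_3$ satisfies $\,\mathcal{A}\,(e_{x_1}{+}e_{y_2}{+}e_{x_3}) = \mathcal{A}\,(e_{y_1}{+}e_{x_2}{+}e_{y_3}) = (1,1,1,0,1,1,1)^T$.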
We note that the quadrics cut out $V_A$ plus an extra component
$\PP^1 \times \PP^1 \times \PP^1$:
\begin{equation}
\label{eq:HAThm56}
\langle z_1 y_3 - x_1 z_3, z_2 x_3 - x_2 z_3,
z_1 y_2 -y_1 z_2 \rangle \,\, = \,\, J_A \cap \langle z_1,z_2,z_3 \rangle
\end{equation}
This equation is precisely \cite[Theorem 5.6]{HA}
but written in toric coordinates.
\smallskip
The toric ideal $J_A$ has precisely $20$ initial monomial ideals, falling into three symmetry classes,
with one initial ideal for each mixed subdivision of the $3$-dimensional polytope
$$ P \,\, = \,\, {\rm conv}(A_1^T) + {\rm conv}(A_2^T) + {\rm conv}(A_3^T) . $$
Thus $P$ is the Minkowski sum of three of the four triangular facets of the
regular tetrahedron. Each mixed subdivision of $P$
uses one cube $\PP^1 \times \PP^1 \times \PP^1$
and six triangular prisms $\PP^2 \times \PP^1$.
A picture of one of them is seen in Figure~\ref{V3_J8_Blowup}.
\begin{remark} \label{rmk:important}
Our toric study in this section is universal in the sense that {\bf every}
multiview variety $V_A$ for $n\leq 4$ cameras in linearly general
position in $\PP^3$ is isomorphic to the toric multiview variety under
a change of coordinates in $(\PP^2)^n$. This fact can be proved
using the coordinate systems for the Grassmannian ${\rm Gr}(4,3n)$
furnished by the construction in \cite[\S 4]{SZ}.
Here is how it works for $n=4$. The coordinate change via ${\PGL}(3,K)^4$ gives
\begin{equation}
\label{eq:supportset} \,\,
\bmat{ A_1^T \! & \! A_2^T \! & \! A_3^T \! & \! A_4^T } \,\,=\,\,
\left[ \,
\begin{matrix} 0 & 0 & 0 \\ * & * & * \\ * & * & * \\ * & * & * \\ \end{matrix} \quad\,\,
\begin{matrix} * & * & * \\ 0 & 0 & 0 \\ * & * & * \\ * & * & * \\ \end{matrix} \quad\,\,
\begin{matrix} * & * & * \\ * & * & * \\ 0 & 0 & 0 \\ * & * & * \\ \end{matrix} \quad\,\,
\begin{matrix} * & * & * \\ * & * & * \\ * & * & * \\ 0 & 0 & 0 \\ \end{matrix}
\, \right] \!,
\end{equation}
where the $3 \times 3$-matrices indicated by the stars in the four blocks are invertible.
Now, the $4 \times 12$-matrix (\ref{eq:supportset}) gives a {\em support set} $\Sigma$
that satisfies the conditions in \cite[Proposition 3.1]{SZ}. The corresponding
Zariski open set $\mathcal{U}_\Sigma$ of the Grassmannian ${\rm Gr}(4,12)$
is non-empty. In fact, by \cite[Remark 4.9(a)]{SZ}, the set $\mathcal{U}_\Sigma$
represents configurations whose cameras $f_1, f_2, f_3,f_4$ are not coplanar.
Now, Theorem 4.6 in \cite{SZ} completes our proof because (the universal Gr\"obner basis of)
the ideal $J_A$ depends only on
the point in $\mathcal{U}_\Sigma \subset {\rm Gr}(4,12)$ represented by (\ref{eq:supportset})
and not on the specific camera matrices $A_1,\ldots,A_4$. \qed
\end{remark}
\section{Degeneration of Collinear Cameras}
In this section we consider a family of collinear camera positions.
The degeneration of the associated multiview variety will play a key role in
proving our main results in Section 6, but it may also be of independent interest.
Collinear cameras have been studied in computer vision, for example in \cite{HartleyZisserman}.
Let $\varepsilon $ be a parameter and fix the configuration
$A(\varepsilon) := (A_1, \ldots, A_n)$ where
$$A_i \,:=\ \left[ \begin{array}{cccc} 1 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 \\ \varepsilon^{n-i} & 0 & 0 & 1 \end{array} \right]$$
The focal point of camera $i$ is $f_i = (-1:1:1:\varepsilon^{n-i})$ and hence the $n$ cameras given by $A(\varepsilon)$ are collinear in $\PP^3$. Note that these camera matrices stand in sharp contrast to the generic configurations $A$ that were the focus of Sections 2 and 3. They also differ from the toric situation in Section 4.
We consider the multiview ideal $J_{A(\varepsilon)}$ in the polynomial ring
$K(\varepsilon)[x,y,z]$, where $K(\varepsilon)$ is the field of rational functions in $\varepsilon$ with coefficients in $K$.
Then $J_{A(\varepsilon)}$ has the Hilbert function (\ref{Vn_Hilbert_function}),
by Theorem~\ref{thm:multiview ideals on Hilbert scheme}.
Let $\mathcal{G}_n$ be the set of polynomials in $K(\varepsilon)[x,y,z]$ consisting of the ${n \choose 2}$ quadratic
polynomials
\begin{equation}
\label{eq:sec5quadrics}
x_iy_j - x_jy_i \qquad \hbox{for} \,\,\,\, 1 \leq i < j \leq n
\end{equation}
and the $3{n \choose 3}$ cubic polynomials below for all choices of $1 \leq i < j < k \leq n$:
\begin{equation}
\label{eq:sec5cubics}
\begin{array}{c}
(\varepsilon^{n-k}-\varepsilon^{n-i}) x_iz_jx_k +
(\varepsilon^{n-j}-\varepsilon^{n-k}) z_ix_jx_k +
(\varepsilon^{n-i}-\varepsilon^{n-j}) x_ix_jz_k \\
(\varepsilon^{n-k}-\varepsilon^{n-i}) y_iz_jy_k +
(\varepsilon^{n-j}-\varepsilon^{n-k}) z_iy_jy_k +
(\varepsilon^{n-i}-\varepsilon^{n-j})y_iy_jz_k \\
(\varepsilon^{n-k}-\varepsilon^{n-i}) y_iz_jx_k +
(\varepsilon^{n-j}-\varepsilon^{n-k}) z_iy_jx_k +
(\varepsilon^{n-i}-\varepsilon^{n-j}) y_ix_jz_k
\end{array}
\end{equation}
Let $L_n$ be the ideal generated by (\ref{eq:sec5quadrics})
and the following binomials from the first two terms in (\ref{eq:sec5cubics}):
$$ L_n \,:=\, \bigl\langle x_iy_j - x_jy_i \,: \, 1 {\leq} i {<} j {\leq}n \bigr\rangle + \left\langle
\begin{array}{c}
\! x_iz_jx_k - z_ix_jx_k,\\
\! y_iz_jy_k - z_iy_jy_k,\\
\! y_iz_jx_k - z_iy_jx_k
\end{array} : \, 1 {\leq} i {<} j {<} k {\leq} n \right\rangle \! .$$
Let $N_n$ be the ideal generated by the leading monomials
in (\ref{eq:sec5quadrics}) and (\ref{eq:sec5cubics}):
$$ N_n \, \,:= \,\, \bigl\langle x_iy_j \,: \,1 {\leq} i {<} j {\leq} n \bigr\rangle
\, +\, \bigl\langle
x_iz_jx_k, \, y_iz_jy_k, \,y_iz_jx_k \,: \, 1 {\leq} i {<} j {<} k {\leq} n \bigr\rangle.
$$
The main result in this section is the following construction
of a two-step flat degeneration $J_{A(\varepsilon)} \rightarrow L_n \rightarrow N_n$. This
gives an explicit realization of~(\ref{eq:TriBiMono}).
We note that $V_{A(\varepsilon)}$ can be seen as a variant of the
{\em Mustafin varieties} in~\cite{CHSW}.
\begin{theorem} \label{thm:sec5main}
The three ideals $J_{A(\varepsilon)}$, $L_n$ and $N_n$ satisfy the following:
\begin{enumerate}
\item[(a)] The multiview ideal $J_{A(\varepsilon)}$ is generated by the set $\,\mathcal{G}_n$.
\item[(b)] The binomial ideal $L_n$ equals the special fiber of $J_{A(\varepsilon)}$ for $\varepsilon = 0$.
\item[(c)] The monomial ideal $N_n$ is the initial ideal of $L_n$,
in the Gr\"obner basis sense, with respect to the
lexicographic term order with $ x \succ y \succ z$.
\end{enumerate}
\end{theorem}
The rest of this section is devoted to explaining and proving these results.
Let us begin by showing that $\mathcal{G}_n$ is a subset of $J_{A(\varepsilon)}$.
The determinant of
$$ A(\varepsilon)_{\{ij\}} \,\, = \,\, \left[ \begin{array}{ccc} A_i & p_i & \bf{0} \\ A_j & \bf{0} & p_j \end{array} \right]$$
equals
$(\varepsilon^{n-j}-\varepsilon^{n-i})( x_iy_j - x_jy_i)$. Hence
$J_{A(\varepsilon)}$ contains (\ref{eq:sec5quadrics}), by the argument in
Lemma~\ref{lem:easyinclusion}.
Similarly, for any $1 {\leq} i {<} j {<} k {\leq} n$, consider the $9 \times 7$ matrix
$$A(\varepsilon)_{\{ijk\}} \,\,=\,\, \left[ \begin{array}{ccccccc}
1 & 1 & 0 & 0 & x_i & 0 & 0 \\
1 & 0 & 1 & 0 & y_i & 0 & 0 \\
\varepsilon^{n-i} & 0 & 0 & 1 & z_i & 0 & 0 \\
1 & 1 & 0 & 0 & 0 & x_j & 0 \\
1 & 0 & 1 & 0 & 0 & y_j & 0 \\
\varepsilon^{n-j} & 0 & 0 & 1 & 0 & z_j & 0 \\
1 & 1 & 0 & 0 & 0 & 0 & x_k \\
1 & 0 & 1 & 0 & 0 & 0 & y_k \\
\varepsilon^{n-k} & 0 & 0 & 1 & 0 & 0 & z_k
\end{array}
\right].
$$
The three cubics (\ref{eq:sec5cubics}), in this order and up to sign, are the determinants of the $7 \times 7$ submatrices of $A(\varepsilon)_{\{ijk\}}$ obtained by deleting the rows corresponding to $y_j$ and $y_k$, the rows corresponding to
$x_j$ and $x_k$, and the rows corresponding to $x_i$ and $y_k$ respectively.
We conclude that $\mathcal{G}_n$ lies in $J_{A(\varepsilon)}$.
\smallskip
We next discuss part (b) of Theorem \ref{thm:sec5main}.
Every nonzero rational function $c(\varepsilon) \in K(\varepsilon)$ has a unique expansion as a Laurent series
$c_1\varepsilon^{a_1} + c_2\varepsilon^{a_2} + \cdots $ where $c_i \in K$ and $a_1 < a_2 < \cdots$ are integers. The function ${\rm val}: K(\varepsilon) \rightarrow \ensuremath{\mathbb{Z}}$
given by $c(\varepsilon) \mapsto a_1$ is then a valuation on $K(\varepsilon)$, and $K[\![\varepsilon]\!] =
\{ c \in K(\varepsilon) \,:\, {\rm val}(c) \geq 0 \}$ is its valuation ring. The unique maximal ideal in
$K[\![\varepsilon]\!]$ is $m = \langle c \in K(\varepsilon) \,:\, {\rm val}(c) > 0 \rangle$.
The residue field $K[\![\varepsilon]\!]/m$ is isomorphic to $K$, so
there is a natural map $K[\![\varepsilon]\!] \rightarrow K$ that represents
the evaluation at $\varepsilon= 0$.
The {\em special fiber} of an ideal $I \subset K(\varepsilon)[x,y,z]$
is the image of $I \cap K[\![\varepsilon]\!][x,y,z]$ under the induced map
$K[\![\varepsilon]\!][x,y,z] \rightarrow K[x,y,z]$. The special fiber is denoted ${\rm in}(I)$.
It can be computed from $I$ by a variant of Gr\"obner bases (cf.~\cite[\S 2.4]{TropicalBook}).
What we are claiming in Theorem \ref{thm:sec5main} (b) is the following identity
$$ {\rm in}( J_{A(\varepsilon)} ) \,\, = \,\, L_n \qquad {\rm in} \,\, K[x,y,z]. $$
It is easy to see that the left hand side contains the right hand side:
indeed, by multiplying the trinomials in (\ref{eq:sec5cubics}) by $\varepsilon^{k-n}$
and then evaluating at $\varepsilon = 0$, we obtain the binomial
cubics among the generators of $L_n$.
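Explicitly, multiplying the first trinomial in (\ref{eq:sec5cubics}) by $\varepsilon^{k-n}$ yields
$$ (1-\varepsilon^{k-i})\, x_iz_jx_k \,-\, (1-\varepsilon^{k-j})\, z_ix_jx_k \,+\, (\varepsilon^{k-i}-\varepsilon^{k-j})\, x_ix_jz_k, $$
which lies in $K[\![\varepsilon]\!][x,y,z]$ because $i < j < k$, and whose image at $\varepsilon = 0$ is the binomial $x_iz_jx_k - z_ix_jx_k$. The other two trinomials behave analogously, while the quadrics (\ref{eq:sec5quadrics}) do not involve $\varepsilon$ at all.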
Finally, what is claimed in Theorem \ref{thm:sec5main} (c) is the following identity
$$ {\rm in}_\prec(L_n ) \,\, = \,\, N_n \qquad {\rm in} \,\, K[x,y,z]. $$
Here, ${\rm in}_\prec(L_n)$ is the lexicographic initial ideal
of $L_n$, in the usual Gr\"obner basis sense.
Again, the left hand side contains the right hand side
because the initial monomials of the binomial generators of $L_n$ generate $N_n$.
Note that $N_n$ is distinct from the generic initial ideal $M_n$.
Even though $M_n$ played a prominent role in Sections 2 and 3,
the ideal $N_n$ will be more useful in Section 6.
The reason is that $M_n$ is the most singular point on
the Hilbert scheme $\mathcal{H}_n$ while,
as we shall see, $N_n$ is a smooth point on $\mathcal{H}_n$.
In summary, what we have shown thus far is the following inclusion:
\begin{equation} \label{eq:thusfar}
N_n \,\, \subseteq \,\,{\rm in}_\prec \bigl( {\rm in}(J_{A(\varepsilon)}) \bigr)
\end{equation}
We seek to show that equality holds.
Our proof rests on the following lemma.
\begin{lemma}
\label{lem:N_Hilbert_function}
The monomial ideal $N_n$ has the $\ensuremath{\mathbb{Z}}^n$-graded Hilbert function~\eqref{Vn_Hilbert_function}.
\end{lemma}
\begin{proof}
Let $u=(u_1,\ldots,u_n)\in\ensuremath{\mathbb{N}}^n$, and let $\mathfrak{B}_u$ be the set of all monomials
of multidegree $u$ in $K[x,y,z]$ which are not in $N_n$. We need to show that
$$|\mathfrak{B}_u| \,=\, {u_1+\cdots+u_n+3\choose 3} - \sum_{i=1}^n{u_i+2\choose 3}.$$
It can be seen from the generators of $N_n$ that the monomials in $\mathfrak{B}_u$ are
of the form $z^a y^bx^c z^d$ for $a,b,c,d\in\ensuremath{\mathbb{N}}^n$ such that $u=a+b+c+d$ and
\begin{align*}
a &= (a_1,\ldots, a_i, 0,\ldots,0)\\
b &= (0,\ldots,0,b_i,\ldots,b_j,0,\ldots,0)\\
c &= (0,\ldots,0,c_j,\ldots, c_k, 0,\ldots,0)\\
d &= (0,\ldots, 0,d_k, \ldots, d_n)
\end{align*}
for some triple $i,j,k$ with $1\le i\le j\le k\le n$.
We count the monomials in $\mathfrak{B}_u$ using a combinatorial ``stars and bars'' argument.
Each monomial can be formed in the following way.
Suppose there are $u_1+\cdots+u_n+3$ blank spaces laid left to right.
Fill exactly three spaces with bars. This leaves $u_1+\cdots+u_n$
open blanks to fill in, which is the total degree of a monomial in $\mathfrak{B}_u$.
The three bars separate the blanks into four
compartments, some possibly empty. From these compartments we greedily form $a$, $b$, $c$, and $d$ to make
$z^ay^bx^cz^d$ as described below.
In what follows, $\star$ is
used as a placeholder symbol. Fill the first $u_1$ blanks with the symbol $\star_1$, the next
$u_2$ blanks with $\star_2$, and continue to fill up until the last $u_n$ blanks are filled with $\star_n$. Now we pass
once more through these symbols and replace each $\star_i$ with either $x_i$, $y_i$, or $z_i$ such that
all variables in the first compartment are $z$'s, those in the second are $y$'s, those in the third are $x$'s, and those in the fourth are $z$'s. Removing the bars gives $z^ay^bx^cz^d$ in $\mathfrak{B}_u$.
There are $\displaystyle{u_1+\cdots+u_n+3\choose 3}$ ways of choosing the three bars.
The monomials in
$\mathfrak{B}_u$ are overcounted only in the case $i=j=k$, when $z_i$ appears in both the
first and fourth compartments.
Indeed, in such cases if we require $a_i=0$, the monomial is uniquely represented, so we
are overcounting by the $\displaystyle{u_i+2\choose 3}$ choices when $a_i\ne 0$.
\end{proof}
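In small cases, the count in Lemma \ref{lem:N_Hilbert_function} is easily confirmed by brute force. The following script (our own verification sketch in Python, independent of the proof above) enumerates the monomials of each multidegree that avoid the generators of $N_n$ and compares with \eqref{Vn_Hilbert_function}:
\begin{verbatim}
import itertools
from math import comb

def is_standard(xe, ye, ze, n):
    # True iff x^xe y^ye z^ze avoids all generators of N_n
    if any(xe[i] and ye[j] for i in range(n)
                           for j in range(i + 1, n)):
        return False
    for i, j, k in itertools.combinations(range(n), 3):
        if ze[j] and ((xe[i] and xe[k]) or (ye[i] and ye[k])
                      or (ye[i] and xe[k])):
            return False
    return True

def count_standard(u):
    n = len(u)
    # all ways to split u_i among the exponents of x_i, y_i, z_i
    splits = [[(a, b, d - a - b) for a in range(d + 1)
                                 for b in range(d - a + 1)]
              for d in u]
    total = 0
    for combo in itertools.product(*splits):
        xe, ye, ze = zip(*combo)
        total += is_standard(xe, ye, ze, n)
    return total

def h(u):  # the Hilbert function (Vn_Hilbert_function)
    return comb(sum(u) + 3, 3) - sum(comb(d + 2, 3) for d in u)

for n in (3, 4):
    for u in itertools.product(range(3), repeat=n):
        assert count_standard(u) == h(u)
\end{verbatim}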
We are now prepared to derive the main result of this section.
\medskip
\noindent {\em Proof of Theorem \ref{thm:sec5main}:}
Lemma~\ref{lem:N_Hilbert_function} and Theorem~\ref{thm:multiview ideals on Hilbert scheme} tell us that
$N_n$ and $J_{A(\varepsilon)}$ have the same $\ensuremath{\mathbb{Z}}^n$-graded Hilbert function \eqref{Vn_Hilbert_function}.
We also know from \cite[\S 2.4]{TropicalBook} that ${\rm in}(J_{A(\varepsilon)})$ has this same Hilbert function: passing to
the special fiber preserves the Hilbert function, just as passing to an initial monomial ideal for a term order does.
Hence the inclusion
$N_n \,\, \subseteq \,\,{\rm in}_\prec \bigl( {\rm in}(J_{A(\varepsilon)}) \bigr) $ in (\ref{eq:thusfar}) is in fact an equality.
This proves parts (b) and (c).
We have shown that $\mathcal{G}_n$ is a Gr\"obner basis
for the homogeneous ideal $J_{A(\varepsilon)}$ in the
valuative sense of \cite[\S 2.4]{TropicalBook}. This implies
that $\mathcal{G}_n$ generates $J_{A(\varepsilon)}$, which proves part (a).
\qed
\medskip
\begin{remark} \label{rmk:decomp}
The polyhedral subcomplexes of $(\Delta_2)^n$ defined by
the binomial ideal $L_n$ and the monomial ideal $N_n$ are
combinatorially interesting. For instance,
$L_n$ has prime decomposition $I_3\cap I_4\cap\cdots\cap I_n\cap I_{n+1}$, where
\begin{align*}
I_t \,\,&:= \,\langle\, x_i,y_i :\, i=t,t+1,\ldots,n \,\rangle \ +\\
&\ \ \ \ \ \langle\, x_iy_j - x_jy_i : 1\le i < j < t \,\rangle\ + \\
&\ \ \ \ \ \langle\, x_iz_j - x_jz_i, \,y_iz_j - y_jz_i : 1\le i < j < t-1 \,\rangle.
\end{align*}
The monomial ideal $N_n$ is the intersection of
${\rm in}_\prec(I_t)$ for $t=3,\ldots,n+1$. \qed
\end{remark}
\section{The Hilbert Scheme}
We define $\mathcal{H}_n$ to be the multigraded Hilbert scheme which parametrizes all $\ensuremath{\mathbb{Z}}^n$-homogeneous ideals in $K[x,y,z]$ with the Hilbert function in (\ref{Vn_Hilbert_function}).
According to the general construction given in \cite{HaimanSturmfels},
$\mathcal{H}_n$ is a projective scheme.
The ideals $J_A$ and ${\rm in}_\prec(J_A)$ for $n$ distinct
camera positions, as well as the combinatorial ideals $M_n, L_n$ and $N_n$ all correspond to closed points on $\mathcal{H}_n$.
Our Hilbert scheme $\mathcal{H}_n$ is closely related to the Hilbert scheme $H_{4,n}$
which was studied in \cite{CS}. We already utilized results
from that paper in our proof of Theorem \ref{thm:UGB}.
Note that $H_{4,n}$ parametrizes degenerations of the
diagonal $\PP^3$ in $(\PP^3)^n$ while
$\mathcal{H}_n$ parametrizes blown-up images of that $\PP^3$
in $(\PP^2)^n$.
Let $G=\PGL(3,K)$ and ${\mathcal B}\subset G$ the Borel subgroup of lower-triangular $3\times 3$ matrices modulo scaling.
The group $G^n$ acts on $K[x,y,z]$ and this induces an action on the Hilbert scheme $\mathcal{H}_n$.
Our results concerning the ideal $M_n$ in Section 3 imply the following corollary,
which summarizes the statements analogous to Theorem 2.1 and Corollaries 2.4 and 2.6 in \cite{CS}.
\begin{corollary}
The multigraded Hilbert scheme $\mathcal{H}_n$ is connected.
The point representing the generic initial ideal $M_n$ lies on each irreducible component of $\mathcal{H}_n$.
All ideals that lie on $\mathcal{H}_n$ are radical and Cohen-Macaulay.
\end{corollary}
In particular, every monomial ideal in $\mathcal{H}_n$ is squarefree
and can hence be identified with its variety in $(\PP^2)^n$,
or, equivalently, with a subcomplex in the product of triangles $(\Delta_2)^n$.
One of the first questions one asks about any multigraded Hilbert scheme,
including $\mathcal{H}_n$, is to list its monomial ideals.
This task is easy for the first case, $n=2$. The Hilbert scheme $\mathcal{H}_2$
parametrizes $\ensuremath{\mathbb{Z}}^2$-homogeneous ideals in $K[x,y,z]$ having Hilbert function
$$ h_2:\ensuremath{\mathbb{N}}^2\to\ensuremath{\mathbb{N}}, \, (u_1,u_2)\mapsto {u_1+u_2+3\choose 3} - {u_1+2\choose 3} - {u_2+2\choose 3}. $$
There are exactly nine monomial ideals on $\mathcal{H}_2$, namely
$$ \langle x_1x_2 \rangle ,\ \langle x_1y_2 \rangle ,\ \langle x_1z_2 \rangle ,\ \langle y_1x_2 \rangle ,\
\langle y_1y_2 \rangle ,\ \langle y_1z_2 \rangle ,\ \langle z_1x_2 \rangle ,\ \langle z_1y_2 \rangle ,\ \langle z_1z_2 \rangle .$$
In fact, the ideals on $\mathcal{H}_2$ are precisely the
principal ideals generated by bilinear forms, and
$\mathcal{H}_2$ is isomorphic to an $8$-dimensional projective space
$$ \mathcal{H}_2 \,=\, \{ \langle c_0x_1x_2+c_1x_1y_2+\cdots+c_8z_1z_2 \rangle \
\,:
\, (c_0:c_1:\cdots:c_8)\in\PP^8\}.$$
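A quick computation confirms this description: a nonzero bilinear form is a nonzerodivisor on the polynomial ring, so the quotient by the principal ideal it generates has bigraded Hilbert function
$$ {u_1+2\choose 2}{u_2+2\choose 2} \,-\, {u_1+1\choose 2}{u_2+1\choose 2}, $$
and one checks that this expression equals $h_2(u_1,u_2)$; for instance, both sides equal $9 - 1 = 8$ in bidegree $(1,1)$.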
The principal ideals $J_A$ which actually arise from two cameras
form a cubic hypersurface in this $\mathcal{H}_2 \simeq \PP^8$. To see this, we
write $A^j_i$ for the $j$-th row of the $i$-th camera matrix
and $[A_{i_1}^{j_1} A_{i_2}^{j_2} A_{i_3}^{j_3} A_{i_4}^{j_4}]$
for the $4 \times 4$-determinant formed by four such row vectors.
The bilinear form can be written as
$$ \ensuremath{{\bf{x}}}_2^T F \ensuremath{{\bf{x}}}_1 \,= \,
\bmat{x_2 & y_2 & z_2}
\bmat{c_0 & c_3 & c_6\\ c_1 & c_4 & c_7 \\ c_2 & c_5 & c_8}
\bmat{x_1\\y_1\\z_1},
$$
where $F$ is the \emph{fundamental matrix} \cite{HartleyZisserman}. In terms of the camera matrices,
\begin{equation}
\label{eq:fundmatrix}
F \, = \,
\bmat{
\phantom{-}[A_1^2 A_1^3 A_2^2 A_2^3 ] & - [A_1^1 A_1^3 A_2^2 A_2^3 ] & \phantom{-}[A_1^1 A_1^2 A_2^2 A_2^3 ] \\
-[A_1^2 A_1^3 A_2^1 A_2^3 ] & \phantom{-}[A_1^1 A_1^3 A_2^1 A_2^3 ] & -[A_1^1 A_1^2 A_2^1 A_2^3 ] \\
\phantom{-}[A_1^2 A_1^3 A_2^1 A_2^2 ] & -[A_1^1 A_1^3 A_2^1 A_2^2 ] & \phantom{-}[A_1^1 A_1^2 A_2^1 A_2^2 ] }.
\end{equation}
This matrix has rank $\leq 2$, and every $3 \times 3$-matrix of rank $\leq 2$ can be
written in this form for suitable camera matrices $A_1$ and $A_2$ of size $3 \times 4$.
The formula in (\ref{eq:fundmatrix}) defines a map $(A_1,A_2) \mapsto F$
from pairs of camera matrices with distinct focal points into the Hilbert scheme $\mathcal{H}_2$.
The closure of its image is a compactification of the space of camera positions.
We now precisely define the corresponding map for arbitrary $n \geq 2$.
The construction is inspired by a construction of Thaddeus discussed in
\cite[Example 7]{CS}.
\smallskip
Let ${\rm Gr}(4,3n)$ denote the Grassmannian of
$4$-dimensional linear subspaces of $K^{3n}$.
The $n$-dimensional algebraic torus $(K^*)^n$ acts on
this Grassmannian by scaling the coordinates on $K^{3n}$,
where the $i$th factor $K^*$ scales the coordinates
indexed by $3i-2, 3i-1$ and $3i$. Thus, if we represent
each point in ${\rm Gr}(4,3n)$ as the row space of a
$(4 \times 3n)$-matrix $ \bmat{ A_1^T \! & \! A_2^T \! & \! \cdots \! & \! A_n^T }$, then
$\lambda = (\lambda_1,\ldots,\lambda_n) \in (K^*)^n$ sends this matrix to
$ \bmat{ \lambda_1 A_1^T \! & \! \lambda_2 A_2^T \! & \! \cdots \! & \! \lambda_n A_n^T }$.
The multiview ideal $J_A$ is invariant under this action by $(K^*)^n$. In symbols,
$J_{\lambda \circ A} = J_A$. In the next lemma,
GIT stands for {\em geometric invariant theory}.
\begin{lemma} \label{lem:cameramap}
The assignment $\, A \mapsto J_A \,$ defines
an injective rational map $\gamma$ from a
GIT quotient $ {\rm Gr}(4,3n) /\!/ (K^*)^n$ to the
multigraded Hilbert scheme~$\mathcal{H}_n$.
\end{lemma}
\begin{proof}
For the proof it suffices to check that $J_A \not= J_{A'}$ whenever
$A$ and $A'$ are generic camera configurations
that are not in the same $(K^*)^n$-orbit.
\end{proof}
We call $\gamma$ the camera map. Since we need
$\gamma$ only as a rational map, the choice of
linearization does not matter when we form the GIT quotient.
The closure of its image in $\mathcal{H}_n$ is well-defined
and independent of that choice of linearization.
We define the {\em compactified camera space}, for $n$ cameras, to be
$$ \Gamma_n \,\,:=\,\, \overline{\gamma({\rm Gr}(4,3n) /\!/ (K^*)^n)} \,\,\,\subseteq \,\, \mathcal{H}_n. $$
The projective
variety $\Gamma_n$ is a natural compactification of
the parameter space studied by Heyden in \cite{Heyden}.
Since the torus $(K^*)^n$ acts on ${\rm Gr}(4,3n)$ with a one-dimensional
stabilizer, Lemma \ref{lem:cameramap} implies that the
compactified space of $n$ cameras has the dimension we expect from \cite{Heyden}, namely,
$$ {\rm dim}(\Gamma_n) \,\, = \,\,
{\rm dim}({\rm Gr}(4,3n)) - (n - 1) \,\, = \,\, 4(3n-4) - (n - 1) \,\, = \,\, 11 n - 15. $$
We regard the following theorem as the main result in this paper.
\begin{theorem}
\label{thm:component}
For $n \geq 3$, the compactified camera space $\Gamma_n$ appears as
a distinguished irreducible component in the multigraded Hilbert scheme $\mathcal{H}_n$.
\end{theorem}
Note that the same statement is false for $n=2$:
$\Gamma_2$ is not a component of $\mathcal{H}_2 \simeq \PP^8$.
It is the hypersurface consisting of the fundamental matrices~(\ref{eq:fundmatrix}).
\begin{proof} By definition, the compactified camera space $\Gamma_n$ is a closed subscheme of $\mathcal{H}_n$.
The discussion above shows that the dimension of any
irreducible component of $\mathcal{H}_n$ that contains
$\Gamma_n$ is no smaller than $11n-15$.
We shall now prove the same $11n-15$ as an
upper bound for the dimension. This is done by exhibiting
a point in $\Gamma_n$ whose tangent space in
the Hilbert scheme $\mathcal{H}_n$
has dimension $11n-15$. This will imply the assertion.
For any ideal $I\in\mathcal{H}_n$, the tangent space to
the Hilbert scheme $\mathcal{H}_n$ at $I$ is the space
of $K[x,y,z]$-module homomorphisms $I\to K[x,y,z]/I$ of degree~{\bf 0}.
In symbols, this space is $\,{\rm Hom}(I,K[x,y,z]/I)_{\bf 0} $.
The $K$-dimension of the tangent space provides an upper bound
for the dimension of any component on which $I$ lies.
It remains to specifically identify a point on $\Gamma_n$ that is smooth on
$\mathcal{H}_n$, an ideal which has tangent space dimension exactly $11n-15$.
It turns out that the monomial ideal $N_n$ described in
the previous section has this desired property.
Lemmas \ref{lem:on_component} and \ref{lem:tangent_space}
below give the details.
\end{proof}
\begin{lemma}
\label{lem:on_component}
The ideals $L_n$ and $N_n$ from the previous section lie in $\Gamma_n$.
\end{lemma}
\begin{proof}
The image of $\gamma$ in $\mathcal{H}_n$ consists of
all multiview ideals $J_A$, where $A$ runs over configurations of $n$
distinct cameras, by Theorem \ref{thm:multiview ideals on Hilbert scheme}.
Let $A(\varepsilon)$ denote the collinear configuration in Section 5,
and consider any specialization of $\varepsilon$
to a non-zero scalar in $K$. The resulting ideal
$J_{A(\varepsilon)}$ is a $K$-valued point of $\Gamma_n$, for any
$\varepsilon \in K \backslash \{0\}$. The special fiber
$J_{A(0)} = L_n$ is in the Zariski closure of these points,
because, locally, any regular function vanishing on
the coordinates of $J_{A(\varepsilon)}$ for
all $\varepsilon \not= 0$ will vanish for $\varepsilon = 0$.
We conclude that $L_n$ is a $K$-valued point in the projective variety $\Gamma_n$.
Likewise, since $N_n = {\rm in}_\prec(L_n)$ is
an initial monomial ideal of $L_n$, it also lies on $\Gamma_n$.
\end{proof}
\begin{lemma}
\label{lem:tangent_space}
The tangent space of the multigraded Hilbert scheme $\mathcal{H}_n$ at the
point represented by the
monomial ideal $N_n$ has dimension $11n-15$.
\end{lemma}
\begin{proof}
The tangent space at $N_n$ equals
$\,{\rm Hom}(N_n,K[x,y,z]/N_n)_{\bf 0} $.
We shall present a basis for this space that is broken into three
distinct classes: those homomorphisms that act nontrivially only on the quadratic generators,
those that act nontrivially only on the cubics, and those with a mix of both.
Each $K[x,y,z]$-module homomorphism $\varphi:N_n\to K[x,y,z]/N_n$ below is described by its action
on the minimal generators of $N_n$. Any generator
not explicitly mentioned is mapped to 0 under $\varphi$.
One checks that each is in fact a well-defined $K[x,y,z]$-module homomorphism
from $N_n$ to $K[x,y,z]/N_n$.
\smallskip
\underline{Class I:} \
For each $1\le i < n$, we define the following maps
\begin{itemize}
\item $\alpha_i: x_iy_k\mapsto y_iy_k$ for all $i<k\le n$,
\item $\beta_i: x_iy_{i+1}\mapsto x_{i+1}y_i$.
\end{itemize}
For each $1<k\le n$, we define the following map
\begin{itemize}
\item $\gamma_k : x_iy_k \mapsto x_ix_k$ for all $1\le i < k$.
\end{itemize}
We define two specific homomorphisms
\begin{itemize}
\item $\delta_1: x_1y_2 \mapsto y_1z_2$,
\item $\delta_2:x_{n-1}y_n\mapsto z_{n-1}x_n$.
\end{itemize}
\smallskip
\underline{Class II:} \
For each $1<j<n$, we define the following maps. Each
homomorphism is defined on every pair $(i,k)$ such that
$1\le i < j < k\le n$.
\begin{itemize}
\item $\rho_j: x_iz_jx_k\mapsto x_ix_jx_k$ and $y_iz_jx_k \mapsto y_ix_jx_k$,
\item $\sigma_j: x_iz_jx_k \mapsto x_ix_jz_k$ and $y_iz_jx_k\mapsto y_ix_jz_k$,
\item $\tau_j: x_iz_jx_k \mapsto x_iz_jz_k$ and $y_iz_jx_k \mapsto y_iz_jz_k$,
\item $\nu_j: y_iz_jx_k \mapsto y_iy_jx_k$ and $y_iz_jy_k \mapsto y_iy_jy_k$,
\item $\mu_j: y_iz_jx_k \mapsto z_iy_jx_k$ and $y_iz_jy_k \mapsto z_iy_jy_k$,
\item $\pi_j: y_iz_jx_k \mapsto z_iz_jx_k$ and $y_iz_jy_k \mapsto z_iz_jy_k$.
\end{itemize}
\underline{Class III:} \
For each $1\le i < n$, we define the map
\begin{itemize}
\item $\epsilon_i: x_iy_k \mapsto z_iy_k$ and $x_iz_jx_k \mapsto z_iz_jx_k$ \
for $i<k\le n$ and $i<j<k$.
\end{itemize}
For each $1<k\le n$, we define the map
\begin{itemize}
\item $\zeta_k: x_iy_k \mapsto x_iz_k$ and $y_iz_jy_k\mapsto y_iz_jz_k$ \
for $1\le i < k$ and $i<j<k$.
\end{itemize}
\smallskip
All these maps are linearly independent over the field $K$. There are $n-1$
maps each of type $\alpha_i$, $\beta_i$, $\gamma_k$, $\epsilon_i$, and $\zeta_k$,
for a total of $5(n-1)$ different homomorphisms. Each subclass of maps in class II
has $n-2$ members, adding $6(n-2)$ more homomorphisms. Finally
adding $\delta_1$ and $\delta_2$, we arrive at the total count of
$5(n-1)+6(n-2)+2 = 11n-15$ homomorphisms.
We claim that any $K[x,y,z]$-module
homomorphism $N_n\to K[x,y,z]/N_n$ can be recognized as a $K$-linear combination
of those from the three classes described above.
To prove this, suppose that $\varphi:N_n\to K[x,y,z]/N_n$ is a module homomorphism.
For $1\le i < k \le n$, we can write $\varphi(x_iy_k)$ as a linear combination of monomials
of multidegree $e_i+e_k$ which are not in $N_n$. By subtracting appropriate multiples of
$\alpha_i$, $\epsilon_i$, $\gamma_k,$ and $\zeta_k$, we can assume that
$$\varphi(x_iy_k) = a\,y_ix_k + b\,y_iz_k + c\,z_ix_k + d\,z_iz_k$$
for some scalars $a,b,c,d\in K$. We show that this can be written as a linear combination
of the maps described above by considering a few cases.
In the first case we assume $i+1<k$.
We use $K[x,y,z]$-linearity to infer
$$\varphi(x_iy_{i+1}y_k) = a\,y_iy_{i+1}x_k + b\,y_iy_{i+1}z_k + c\,z_iy_{i+1}x_k + d\,z_iy_{i+1}z_k = y_k\, \varphi(x_iy_{i+1}).$$
Specifically, $y_k$ divides the middle polynomial. But none of the four monomials are
zero in the quotient $K[x,y,z]/N_n$. Hence, $0=a=b=c=d$.
For the subsequent cases we assume $k=i+1$. This allows us to further
assume that $a=0$, since we can subtract off $a\, \beta_i(x_iy_{i+1})$.
Now suppose that we have strict inequality $k<n$.
As before, the $K[x,y,z]$-linearity of $\varphi$ gives
$$\varphi(x_iy_ky_n) = d\,z_iz_ky_n = y_k\,\varphi(x_iy_n).$$
Specifically, $y_k$ divides the middle term. Hence, $d=0$. Similarly, $c=0$:
$$\varphi(x_iy_kz_kx_n) = c\,z_ix_kz_kx_n = y_k\,\varphi(x_iz_kx_n).$$
Suppose we further have the strict inequality $1<i$. Then necessarily $b=0$:
$$\varphi(y_1z_ix_iy_k) = b\, y_1z_iy_iz_k = x_i\,\varphi(y_1z_iy_k).$$
However, if $i=1$ and $k=2$, we have that $\varphi(x_1y_2) = b\, \delta_1(x_1y_2)$.
The only case that remains is $k=n$ and $i=n-1$. Here, we can also assume
that $c=0$ by subtracting $c\, \delta_2(x_{n-1}y_n)$. We will show that $d=0=b$
by once more appealing to the fact that $\varphi$ is a module homomorphism:
$$\varphi(x_1x_{n-1}y_n) = d\, x_1z_{n-1}z_n = x_{n-1}\, \varphi(x_1y_n),$$
which gives $d=0$. This subsequently implies the desired $b=0$, because
$$\varphi(y_1x_iz_iy_n) = b\, y_1y_iz_iz_n = x_i\, \varphi(y_1z_iy_n).$$
This has finally put us in a position where we can assume that
$\varphi(x_iy_k)=0$ for all $1\le i < k \le n$. To finish the proof that $\varphi$
is a linear combination of the $11n-15$ classes described above,
we need to examine what happens with the cubics.
Suppose $1\le i<j<k\le n$, and consider $\varphi(y_iz_jx_k)$.
This can be written as a linear sum of the 17 standard monomials
of multidegree $e_i+e_j+e_k$ which are not in $N_n$.
Explicitly, these standard monomials are:
$$\begin{array}{lllll}
x_ix_jx_k, & x_ix_jz_k, & x_iz_jz_k, & y_ix_jx_k,& y_ix_jz_k\\
y_iy_jx_k, & y_iy_jy_k, & y_iy_jz_k, & y_iz_jz_k,\\
z_ix_jx_k, & z_ix_jz_k, & z_iy_jx_k, & z_iy_jy_k,\\
z_iy_jz_k, & z_iz_jx_k, & z_iz_jy_k, & z_iz_jz_k.
\end{array}$$
By subtracting off multiples of the maps $\rho_j$, $\sigma_j$,
$\tau_j$, $\nu_j$, $\mu_j$, and $\pi_j$, we can assume that this is
a sum of the 11 monomials remaining after removing $y_ix_jx_k$,
$y_ix_jz_k$, $y_iz_jz_k$, $y_iy_jx_k$, $z_iy_jx_k$, and $z_iz_jx_k$.
However, now note that
$$\varphi(x_iy_iz_jx_k) = x_i\,\varphi(y_iz_jx_k) = y_i\,\varphi(x_iz_jx_k).$$
This means that for every one of the 11 monomials $m$ appearing in the sum,
either $x_im=0$ or $y_i$ divides $m$. Similarly,
$$\varphi(y_iz_jx_ky_k) = y_k\,\varphi(y_iz_jx_k) = x_k\,\varphi(y_iz_jy_k),$$
and so either $y_km=0$ or $x_k$ divides $m$.
Taking these both into consideration actually kills every one
of the 11 possible standard monomials (we spare the reader the explicit check),
and hence we can assume that $\varphi(y_iz_jx_k)=0$.
Now consider what happens with $\varphi(x_iz_jx_k)$. Indeed,
$$0=x_i\,\varphi(y_iz_jx_k) = \varphi(x_iy_iz_jx_k)=y_i\,\varphi(x_iz_jx_k).$$
So for every one of the 17 standard monomials $m$ which possibly appears
in the support of $\varphi(x_iz_jx_k)$ we must have that $y_im=0$
in $K[x,y,z]/N_n$. This actually leaves us with only two possible
such standard monomials -- namely $z_iz_jx_k$ and $z_iz_jy_k$.
We write $\varphi(x_iz_jx_k)=a\, z_iz_jx_k + b\, z_iz_jy_k$.
The fact that we assume $\varphi(x_iy_k)=0$ implies $a=0=b$.
This is because
$$0 = z_jx_k\, \varphi(x_iy_k) = \varphi(x_iz_jx_ky_k) = y_k\,\varphi(x_iz_jx_k).$$
To sum up, we have shown that, under our assumptions,
if $\varphi(y_iz_jx_k)=0$ holds then it also must be the case that
$\varphi(x_iz_jx_k)=0$. We can prove in a similar manner that
$\varphi(y_iz_jy_k)=0$, and this finishes the proof that $\varphi$
can be written as a $K$-linear sum of the $11n-15$ classes of maps described.
\end{proof}
We reiterate that Theorem \ref{thm:component} fails for $n=2$, since
$\mathcal{H}_2\simeq \PP^8$, and $\Gamma_2$ is a cubic hypersurface
cutting through $\mathcal{H}_2$. We offer a short report for $n=3$.
\begin{remark} \label{rmk:mysterious}
The Hilbert scheme $\mathcal{H}_3$ contains $13,824$ monomial ideals.
These come in $16$ symmetry classes under the action of $(S_3)^3\rtimes S_3$.
A detailed analysis of these symmetry classes and how we found the
$13,824$ ideals appears on the website
\texttt{www.math.washington.edu/$\sim$aholtc/HilbertScheme}.
For seven of the symmetry classes, the tangent space dimension
is less than ${\rm dim}(\Gamma_3) = 18$. From this
we infer that $\mathcal{H}_3$ has components other than~$\Gamma_3$.
We note that the number $13,824$ is exactly the number of monomial ideals
on $H_{3,3}$ as described in \cite{CS}. Moreover, the monomial ideals on $H_{3,3}$
also fall into $16$ distinct symmetry classes.
We do not yet fully understand the relationship between
$\mathcal{H}_n$ and $H_{3,n}$ suggested by this observation.
Moreover, it would be desirable to coordinatize
the inclusion $\Gamma_3 \subset \mathcal{H}_3$ and to relate it
to the equations defining {\em trifocal tensors}, as seen in \cite{AT, Heyden}.
It is our intention to investigate this topic in a subsequent publication.
\end{remark}
Our study was restricted to cameras that take $2$-dimensional
pictures of $3$-dimensional scenes. Yet, residents of {\em flatland} might be more interested
in taking $1$-dimensional pictures of $2$-dimensional scenes.
From a mathematical perspective, generalizing to arbitrary dimensions makes sense:
given $n$ matrices of format $r \times s$ we get a map
from $\PP^{s-1} $ into $(\PP^{r-1})^n$, and one could study
the Hilbert scheme parametrizing the resulting varieties.
Our focus on $r=3$ and $s=4$
was motivated by the context of computer vision.
Quantum computers can make predictions of nonperturbative quantum field theories beyond the reach of classical resources~\cite{Feynman:1981tf,doi:10.1126/science.273.5278.1073,Jordan:2017lea} such as neutron star equations of state and the out-of-equilibrium dynamics of the early universe~\cite{Davoudi:2022cah}.
However, quantum simulations are constrained by limited and noisy resources, and will continue to be so for the foreseeable future. Current estimates suggest $\sim 10$ logical qubits per gluon link should suffice to digitize $SU(3)$~\cite{Raychowdhury:2018osk,Raychowdhury:2019iki,Alexandru:2019nsa,Davoudi:2020yln,Ciavarella:2021nmj,Kan:2021xfc,Alexandru:2021jpm}, with similar requirements for $U(1)$ and $SU(2)$~\cite{Zohar:2012xf,Zohar:2013zla,Zohar:2014qma,Zohar:2016iic,Bender:2018rdp,Haase:2020kaj,Bauer:2021gek,Alexandru:2019nsa,Raychowdhury:2018osk,Raychowdhury:2019iki,Davoudi:2020yln,Ciavarella:2021nmj,Ji:2020kjk,PhysRevD.99.114507,Bazavov:2015kka,Zhang:2018ufj,Unmuth-Yockey:2018ugm,Unmuth-Yockey:2018xak,Wiese:2014rla,Luo:2019vmi,Brower:2020huh,Mathis:2020fuo,Liu:2021tef,Gustafson:2021qbt,Gustafson:2022xlj}. The total number of qubits required depends upon the phenomena one wishes to study; for a $3+1d$ lattice gauge theory, $\mathcal{O}((L/a)^3)$ links are usually required, exacerbating the qubit requirement. Due to quantum noise, quantum error correction is likely required, with an overhead of $\mathcal{O}(10^{1-5})$~\cite{ionq_2020,ibm_2021, google_2020} physical qubits per logical qubit depending on platform. Preliminary work in specialized error correction for lattice gauge theories may be able to reduce these costs~\cite{Rajput:2021trn,Klco2021Hierarchy}.
Resource estimates must also consider the gate costs to implement time evolution of the theory under a lattice Hamiltonian.
The time evolution operator $\mathcal U(t)$, a generically dense matrix, must usually be implemented approximately.
Most studies of gauge theories consider the Kogut-Susskind Hamiltonian $\hat H_{KS}$~\cite{PhysRevD.11.395} together with Trotterization. Currently, $\mathcal{O}(10^{49})$ T gates are estimated to be required to compute the shear viscosity with $\mathcal{O}(10^5)$ logical qubits~\cite{Kan:2021xfc}. This upper bound can be reduced, for example by controlling only errors on low-lying states~\cite{Sahinoglu2020hamiltonian,Hatomura:2022yga} or by requiring the algorithmic error be comparable to the $\mathcal{O}(1)$ theoretical uncertainties. Even with these reductions the quantum resources are far beyond near-term devices. Further, this estimate neglects state preparation, which often dominates the total gate costs~\cite{Jordan:2011ne}.
Given the large resource estimates, it is informative to consider the history of prime factorization~\cite{shor1994algorithms, PhysRevLett.76.3228, Zalka2006, Fowler2012,gidney2021factor} and quantum chemistry~\cite{doi:10.1126/science.1113479,doi:10.1073/pnas.1619152114,PhysRevLett.123.070503,doi:10.1073/pnas.1619152114,Lee:2020egw} where resources were reduced by using more clever quantum subroutines and performing better classical processing. Gate reductions may be possible via other approximations of $\mathcal U(t)$~\cite{PhysRevLett.123.070503,cirstoiu2020variational,gibbs2021longtime,yao2020adaptive,PhysRevLett.114.090502,Low2019hamiltonian}. Using other arithmetic subroutines can yield drastically fewer resources ~\cite{hadfield2016scientific,haner2018optimizing,Takahashi10,gidney2021factor}. Lattice-field-theory specific error correction~\cite{Rajput:2021trn,Klco2021Hierarchy} or mitigation~\cite{Stannigel:2013zka,Stryker:2018efp,Halimeh:2019svu,Lamm:2020jwv,Tran:2020azk,Halimeh:2020ecg,Halimeh:2020kyu,Halimeh:2020djb,Halimeh:2020xfd,VanDamme:2020rur,Kasper:2020owz,Halimeh:2021vzf} could help further. Recently, quantum circuits for Hamiltonians with reduced lattice artifacts~\cite{Luo:1998dx,Carlsson:2001wp} were constructed~\cite{Carena:2022kpg}.
A full accounting of resources should take advantage of any reductions through classical computations. In quantum chemistry, improved basis functions were found which render the Hamiltonian more amenable to quantum circuits~\cite{Lee:2020egw} at the cost of classical preprocessing. In the same way, digitization in gauge theories seeks to find efficient basis states. Here, Euclidean lattice simulations on classical computers help quantify the scheme-dependent systematic errors~\cite{Hackett:2018cel,Alexandru:2019nsa,Ji:2020kjk,Ji:2022qvr,Alexandru:2021jpm,Hartung:2022hoz}. We can draw another analogy to the case of prime factorization where Eker{\aa} and H{\aa}stad's modifications ~\cite{ekeraa2016modifying,ekeraa2017quantum,ekeraa2017pp,ekeraa2018general} of Shor's algorithms~\cite{shor1994algorithms} used classical processing to reduce qubits and quantum arithmetic while increasing the success rate. In the same way, lattice calculations have a number of steps that can potentially be offloaded to classical resources. The first suggested was to use Euclidean lattice ensembles to perform stochastic state preparation yielding shallower individual circuits~\cite{Lamm:2018siq,Harmalkar:2020mpd,Gustafson:2020yfe,Yang:2021tbp}. Further, classical simulations can be used to set the scales, which via analytical continuation~\cite{Osterwalder:1973dx,Osterwalder:1974tc} gives the lattice spacings of the quantum simulation with few or no quantum resources ~\cite{Carena:2021ltu,Clemente:2022cka}.
Although in~\cite{Carena:2021ltu} the connection between the lattice Hamiltonian at finite real-time temporal lattice spacing $a_t$ and the Euclidean temporal lattice spacing $a_\tau$ was made, the final step of connecting the Hamiltonian to the bare parameters used in the Euclidean action was missing. The brute-force approach would compute multiple anisotropies $\xi = a/a_\tau$ for a fixed spatial lattice spacing $a$ and then extrapolate to the desired $\xi$ used in the quantum simulation. This is analogous to studies of the relation between Euclidean and Hamiltonian limits~\cite{Hamer:1995zj,Byrnes:2003gg}. Continuum extrapolations require simulations at multiple lattice spacings. While this will become the practice as quantum lattice simulations become a precision endeavor, for now quantum noise and low shot rates dominate the error budget of calculations~\cite{Ciavarella:2021lel,Farrell:2022wyt,Rahman:2022rlg}, burying errors from determining $\xi$. Thus, the idea of a perturbative calculation of $\xi$ becomes attractive, as it directly gives a fixed-$\xi$ trajectory in terms of the bare parameters. This implies that only the measurement of the spatial lattice spacing $a$ through Euclidean simulation is required -- reducing the classical computing resources.
Through analytical continuation to Minkowski spacetime, the spatial and temporal spacings $a$ and $a_t$ of the quantum simulation are then determined~\cite{Carena:2021ltu}.
In this paper, we perform the one-loop perturbative calculation of $\xi$ using the background field method~\cite{DeWitt:1967ub,DeWitt:1967uc,Honerkamp:1972fd,tHooft:1973bhk}. In the early days of lattice QCD, this technique, along with other methods~\cite{Callan:1978bm,Hasenfratz:1980kn}, was used to compute the scale parameter $\Lambda$~\cite{Dashen:1980vm}. Of relevance to quantum simulations, this included matching isotropic $3+1d$ $SU(N)$ lattice results to the Hamiltonian limit ($\xi\rightarrow\infty$)~\cite{Hasenfratz:1981tw}. Later, this was extended to arbitrary anisotropy~\cite{Karsch:1982ve} and to the Hamiltonian limit in $2+1d$~\cite{Hamer:1996ub}. Here, we present a unified derivation of $\xi$ for $U(N)$ and $SU(N)$ for arbitrary dimensions and anisotropy.
Here we focus on the Wilson action and consider its connection to the Kogut-Susskind Hamiltonian. Similar studies can be carried out for quantum simulations of
improved Hamiltonians~\cite{Luo:1998dx,Carlsson:2001wp,Carena:2022kpg} following initial work in 3+1$d$ $SU(N)$~\cite{Iwasaki:1983zm,GarciaPerez:1996ft,Sakai:2000jm,Sakai:2003va,Drummond:2002yg}. Since continuous gauge theories can be digitized for quantum simulations with discrete subgroups, we further explore whether our perturbative calculations for the continuous group can predict discrete subgroup results.
This paper is organized as follows. In Sec.~\ref{sec:gencon}, we review the background field method and show how to perturbatively compute the renormalized anisotropy. This is followed by Sec.~\ref{sec:u1} and Sec.~\ref{sec:sun} where the special cases of $U(1)$ and $SU(N)$ respectively are investigated. We extend the calculations to $U(N)$ in Sec.~\ref{sec:un}. The anisotropy factors computed perturbatively are compared with Monte Carlo results for continuous and discrete groups in Sec.~\ref{sec:discretegroup}, to demonstrate the effectiveness of our perturbative computations. We leave Sec.~\ref{sec:Conclusions} to conclude and discuss future work. Details about the integrals involved in the perturbative calculations are in the Appendices.
\section{Background Field Method}\label{sec:gencon}
Euclidean anisotropic lattices are characterized by the anisotropy $\xi = a/a_\tau$. Throughout this work, we will use Greek indices $(\mu,\nu)$ to indicate spacetime dimensions, and Latin indices ($i,j$) to indicate spatial dimensions. Consider the anisotropic Wilson action:
\begin{eqnarray}
S(U) = \sum_x\bigg[ \beta_\sigma \sum_{i>j} \re\tr P_{ij} +\beta_\tau \sum_{i} \re\tr P_{0i}\bigg],
\label{eq:action}
\end{eqnarray}
with the plaquette term:
\begin{eqnarray}
P_{\mu\nu} =
\mathbb{1} - U_{x, x+\mu}U_{x+\mu, x+\mu+\nu}U^\dagger_{x+\nu, x+\mu+\nu}U^\dagger_{x, x+\nu}.
\end{eqnarray}
The two couplings in~\eq{action} are necessary in order to keep physics unchanged under independent variations of $a$ and $\xi$. They are parametrized as:
\begin{eqnarray}
\beta_\sigma = \frac{z}{g^2_\sigma(a, \xi)}\xi^{-1}~~
\text{ and }~~\beta_\tau = \frac{z}{g^2_\tau(a, \xi)} \xi.
\end{eqnarray}
We will use $z = 2$ for $SU(N)$ and $U(N)$ groups, and $z=1$ for $U(1)$ to ensure the canonical kinetic term in the continuum limit.
The speed of light is defined as $c = g_\sigma/g_\tau$. We will denote the two couplings as $g_\mu$, with $g_\mu = g_\sigma (g_\tau)$ for $\mu$ in the spatial direction (temporal direction). In the weak-coupling limit, the $g_\mu(a,\xi)$ can be expanded in terms of the isotropic value $\beta =z g^{-2}_E(a)$ as:
\begin{eqnarray}
\frac{1}{g^2_\mu(a, \xi)} = \frac{1}{g^2_E(a)} + c_\mu(\xi) + \mathcal{O}(g^2_E, \xi)
\label{eq:gmu}
\end{eqnarray}
and $\xi =1$ returns the usual isotropic formulation of a lattice gauge theory with $g_\sigma =g_\tau=g_E$.
In the weak-coupling regime, the speed of light is given by:
\begin{eqnarray}
c =\frac{g_\sigma(a, \xi)}{g_\tau(a,\xi)}.
\label{eq:cfactor}
\end{eqnarray}
In a more symmetric fashion, the action of \eq{action} can also be rewritten as:
\begin{eqnarray}
S(U) = \frac{z}{g^2_\xi} \sum_x \bigg[\bar{\xi}^{-1}\sum_{i>j}\re\tr P_{ij} +\bar{\xi}\sum_{i} \re\tr P_{0i}\bigg]
\end{eqnarray}
where the bare couplings $g^2_\xi = g_\sigma g_\tau \equiv z/\beta_\xi$ and the bare anisotropy $\bar{\xi} = c \xi$ are introduced; for every $(a, \xi)$ pair there is a corresponding pair of bare couplings $(\beta_\xi, \bar{\xi})$. Following \eq{gmu}, we have:
\begin{equation}
\frac{1}{g^2_\xi} \approx \frac{1}{g^2_E(a)} + \frac{c_\tau(\xi) + c_\sigma(\xi)}{2}.
\end{equation}
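This expression follows at one loop from $g^2_\xi = g_\sigma g_\tau$ and \eq{gmu}:
$$ \frac{1}{g^2_\xi} = \left(\frac{1}{g^2_\sigma}\,\frac{1}{g^2_\tau}\right)^{1/2} = \frac{1}{g^2_E}\Big[\bigl(1+g^2_E\, c_\sigma\bigr)\bigl(1+g^2_E\, c_\tau\bigr)\Big]^{1/2} = \frac{1}{g^2_E} + \frac{c_\sigma + c_\tau}{2} + \mathcal{O}(g^2_E). $$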
The functions $c_\tau(\xi)$ and $c_\sigma(\xi)$ can be found by calculating the effective action $S^{(\xi)}_{\rm eff}$ of the lattice gauge theory for the two different lattice regularization procedures with $\xi =1$ and $\xi \neq 1$. Requiring that in the continuum limit the effective action is independent of regularization, we have
\begin{eqnarray}
\Delta S_{\rm eff} = S^{(\xi = 1)}_{\rm eff} - S^{(\xi\neq 1)}_{\rm eff} = 0.
\end{eqnarray}
This leads to the determination of $c_\tau(\xi)$ and $c_\sigma(\xi)$. The effective action can be perturbatively calculated using the background field method on the lattice~\cite{Dashen:1980vm}.
We will denote $B_\mu(x)$ as the background field that solves the classical lattice equation of motion. With the fluctuating field $\alpha_\mu$, the lattice gauge variables can be parametrized as:
\begin{eqnarray}
U_{x, x+\mu} &=& e^{i u g_E a_\mu \alpha_\mu(x)}U^{(0)}_{x, x+\mu}\notag\\
U^{(0)}_{x, x+\mu} &=& e^{i u a_\mu B_\mu(x)}.
\end{eqnarray}
For general dimensions, the couplings and fields may not be dimensionless, thus we rescale the couplings by a factor of $u = a^{D/2-2}$. Note that for one-loop calculations, we can use the isotropic coupling $g_E$ in these exponents instead of $g_\mu$. The covariant derivatives are defined as:
\begin{eqnarray}
\label{eq:derivative}
D_\mu f(x) = \frac{1}{a_\mu}(U_{x, x+\mu}f(x+\mu)U^\dagger_{x, x+\mu}-f(x)),\notag\\
\overline{D}_\mu f(x) = \frac{1}{a_\mu}(U^\dagger_{x-\mu, x}f(x-\mu)U_{x-\mu, x}-f(x)).
\end{eqnarray}
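As a check of these definitions, take a smooth link field $U_{x,x+\mu} = e^{iu a_\mu A_\mu(x)}$, with $A_\mu$ a generic matrix-valued field introduced here only for illustration. Expanding \eq{derivative} in $a_\mu$ gives
$$ D_\mu f(x) \,=\, \partial_\mu f(x) + i u \left[A_\mu(x), f(x)\right] + \mathcal{O}(a_\mu), $$
the covariant derivative in the adjoint representation, while $\overline{D}_\mu f(x)$ produces the same expression with an overall sign flip.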
The lattice derivatives $\Delta_\mu f(x)$ and $\overline{\Delta}_\mu f(x)$ follow from~\eq{derivative} by setting $U_{x, x+\mu} = \mathbb{1}$. Taking $U_{x, x+\mu}\rightarrow U^{(0)}_{x, x+\mu}$ defines $D^{(0)}_\mu f(x)$, $\overline{D}^{(0)}_\mu f(x)$. The lattice action can be expanded around $U_{x, x+\mu} = U_{x, x+\mu}^{(0)}$ as:
\begin{equation}
S(U) = S_0 + S_2 + ...
\end{equation}
where $S_0 = S(U^{(0)})$ and $S_2$ includes terms quadratic in $\alpha_\mu$. To preserve the gauge symmetry of the background field, we work in the background Feynman gauge~\cite{Capitani:2002mp} which requires adding the gauge-fixing term
\begin{eqnarray}
\label{eq:gfix}
S_{\rm gf} = a^{D-1} a_\tau \sum_x\tr\bigg(\sum_\mu \overline{D}^{(0)}_\mu\alpha_\mu(x)\bigg)^2
\end{eqnarray}
and an associated ghost term $S_{\rm gh}(\phi)$ for a ghost field $\phi$ when a non-abelian gauge theory is considered:
\begin{eqnarray}
S_{\rm gh} = 2 a^{D-1} a_\tau \sum_{x,\mu} \tr[(D^{(0)}_\mu\phi(x))^\dagger (D^{(0)}_\mu\phi(x))].
\label{eq:ghost}
\end{eqnarray}
The partition function can be calculated as:
\begin{eqnarray}
\label{eq:effaction}
Z&\equiv& \int[dU] e^{-S(U)}\notag\\
&\approx& e^{-S_0}\int [d\alpha][d\phi] e^{-(S_2+ S_{\rm gf}+S_{\rm gh})}\bigg(1+\mathcal{O}(g_E^2)\bigg)\notag\\
&\approx& e^{-S_0}\int[d\phi]e^{-S_{\rm gh}}\int [d\alpha] e^{-S_{\rm free}} e^{-S'_2}\notag\\
&\approx& \int[d\phi] e^{-S_{\rm gh}}\int [d\alpha]e^{-S_{\rm free}}\left(1 - S_0 - S'_2 + \frac{S_2^{'2}}{2} + \hdots\right)\notag\\
&\propto& e^{-S^{(\xi)}_{\rm eff}} \approx 1 - S^{(\xi)}_{\rm eff} + ...,
\end{eqnarray}
where we have extracted the free action $S_{\rm free}$ for the fluctuating field $\alpha_\mu$ from $S_2 + S_{\rm gf}$ and denote the rest as $S'_2$. On the fourth line, we have Taylor expanded $e^{-S_0-S'_2}$.
In this article, we consider the $F^2_{\mu\nu}$ term in $S_{\rm eff}(\xi)$ at one loop. This gives the $\mathcal{O}(g_E^0)$ corrections $c_\tau(\xi)$ and $c_\sigma(\xi)$. Matching terms in \eq{effaction} we see $S^{(\xi)}_{\rm eff}$ is related to expectation values computed with respect to $S_{\rm free}$:
\begin{eqnarray}
\label{eq:seffgen}
S^{(\xi)}_{\rm eff} =&& S_0 + \left<S'_2\right> - \frac{1}{2}\langle S_2^{'2}\rangle \notag\\
&&+ \left<S_{\rm gh}\right>_\phi,
\end{eqnarray}
Similarly, the ghost contribution $\left<S_{\rm gh}\right>_\phi$ is evaluated with respect to the free part of the ghost action.
Higher-loop corrections carry additional factors of the coupling $g_E^2$ and are negligible at weak coupling.
\section{\texorpdfstring{$U(1)$}{U(1)} gauge theory}\label{sec:u1}
We apply the background field method to $U(N)$ and $SU(N)$ to compute the perturbative relations for Euclidean lattices
at any anisotropy and in any dimension. We initially consider the simpler $U(1)$ gauge theory, then the more involved case of $SU(N)$, followed by $U(N)$.
For the $U(1)$ gauge theory, $B_\mu$ and $\alpha_\mu$ are the single component electromagnetic fields and we can trivially perform the traces in \eq{gfix} to find
\begin{equation}
S_{\rm gf} = \frac{1}{2}a^{D-1} a_\tau \sum_{x}\bigg(\sum_\mu \bar{D}^{(0)}_\mu \alpha_\mu(x)\bigg)^2,
\end{equation}
while the $S_{\rm free}$ is found to be
\begin{eqnarray}
S_{\rm free} =\frac{1}{2} a^{D-1} a_\tau \sum_{x, \mu, \nu}(\Delta_\mu \alpha_\nu)(\Delta_\mu \alpha_\nu),
\end{eqnarray}
and $S'_2$ is given by
\begin{equation}
S'_{2} = -\frac{a^{2D-5}a_\tau}{8}\sum_{x,\mu,\nu}(a_\mu a_\nu F_{\mu\nu})^2(\Delta_\mu \alpha_\nu -\Delta_\nu\alpha_\mu)^2.
\end{equation}
The non-vanishing contributions to \eq{seffgen} are given by:
\begin{align}
\label{eq:effaction3}
S^{(\xi)}_{\rm eff} = & S_0 + \left<S'_2\right>,
\end{align}
where in $U(1)$ we can neglect the $\langle S_2^{'2}\rangle$ term, as it only contributes at higher orders in $F^2_{\mu\nu}$. Further, the ghost term vanishes in $U(1)$.
$S^{(\xi)}_{\rm eff}$ for $U(1)$ in arbitrary dimensions can then be written as:
\begin{eqnarray}
\label{eq:seff_f}
S^{(\xi)}_{\rm eff} = \frac{1}{4}\int d^D x \bigg(\sum_{i}&&[(F^a_{i0})^2 + (F^a_{0i})^2][g^{-2}_{\tau}- f_{\tau}(\xi)] \notag\\
&&+\sum_{i,k} (F^a_{ik})^2[g^{-2}_{\sigma} - f_{\sigma}(\xi)]\bigg)
\end{eqnarray}
with
\begin{eqnarray}
\label{eq:u1fun}
f_{\tau}(\xi)&=& \frac{1}{2\xi}\bigg(1-\frac{D-2}{D-1}\xi^{-1}I_1(\xi)\bigg), \notag\\
f_{\sigma}(\xi) &=& \frac{I_1(\xi)}{D -1}.
\end{eqnarray}
$I_1(\xi)$ and the other integrals required for this paper are defined in Appendix~\ref{sec:append}, following~\cite{Karsch:1982ve}. One can show that $I_1(1) = \frac{D-1}{D}$ and $f_{\tau}(\xi\rightarrow \infty)=0$. For $\xi=1$, both functions reduce to $f_\mu(1)=1/D$, and thus $g^2_E(\text{one-loop}) = g^2_E[1+ f_{\tau}(1)] = g^2_E[1+ 1/D]$, which agrees with previous $D=4$ calculations~\cite{Cella:1997hw}.
From $f_{\mu}(\xi)$, we obtain
\begin{eqnarray}
\label{eq:u1result}
c_\mu(\xi) = f_\mu(\xi) -f_\mu(\xi=1).
\end{eqnarray}
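For concreteness, a minimal Python sketch of \eq{u1fun} and \eq{u1result} follows. The integral $I_1(\xi)$ is defined in Appendix~\ref{sec:append} and is therefore taken here as a user-supplied callable; the identity $I_1(1)=(D-1)/D$ quoted above provides a consistency check. This is an illustrative sketch, not the code used for the results of this paper.
\begin{verbatim}
# Sketch: the U(1) one-loop functions f_tau, f_sigma and the
# anisotropy coefficients c_mu.  I1 is the appendix integral,
# supplied as a callable.
def f_tau(xi, D, I1):
    return (1.0 - (D - 2.0) / (D - 1.0) * I1(xi) / xi) / (2.0 * xi)

def f_sigma(xi, D, I1):
    return I1(xi) / (D - 1.0)

def c_coeffs(xi, D, I1):
    # c_mu(xi) = f_mu(xi) - f_mu(1)
    return (f_tau(xi, D, I1) - f_tau(1.0, D, I1),
            f_sigma(xi, D, I1) - f_sigma(1.0, D, I1))

# Consistency check at xi = 1: with I1(1) = (D-1)/D,
# both f_mu(1) reduce to 1/D.
D = 4
I1_at_1 = lambda x: (D - 1.0) / D
assert abs(f_tau(1.0, D, I1_at_1) - 1.0 / D) < 1e-12
assert abs(f_sigma(1.0, D, I1_at_1) - 1.0 / D) < 1e-12
\end{verbatim}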
These functions are shown in Fig.~\ref{fig:U1} for 3 and 4 dimensions. In the $\xi\rightarrow \infty$ limit, we show in Appendix~\ref{apx:series} that
\begin{align}
I_1(\xi\to \infty) \approx \sqrt{\frac{D-1}{2}-\frac{1}{16}}
\end{align}
and that $c_{\tau}(\xi\rightarrow \infty) =-1/D$. Specific numerical values when $D=3,4$ are:
\begin{eqnarray}
c_{\tau}(\xi\rightarrow \infty) &&=-\frac{1}{3},\,-\frac{1}{4}~~(D=3,~4)
\notag\\
c_{\sigma}(\xi\rightarrow\infty) &&= 0.146,\,0.148~~(D=3,~4).
\end{eqnarray}
\begin{figure}
\centering
\includegraphics[width=0.92\linewidth]{u1erik}
\caption{\label{fig:U1} Anisotropic coefficients for $U(1)$ in $D=3,4$.}
\end{figure}
\section{\texorpdfstring{$SU(N)$}{SU(N)} gauge theory}\label{sec:sun}
We now move to consider the more complicated case of $SU(N)$. $B_{\mu}$ and $\alpha_{\mu}$ can be expanded in terms of the group generators $\lambda^a, a = 1, ..., N^2-1$:
\begin{eqnarray}
B_\mu = B^a_\mu \lambda^a/2,~~~\alpha_\mu = \alpha^a_\mu \lambda^a/2,
\end{eqnarray}
with the generators normalized to $\tr \lambda^a\lambda^b = 2\delta_{ab}$. Using
the gauge-fixing term in \eq{gfix}
and the ghost term in \eq{ghost},
we can rewrite $S_2 + S_{\rm gf}$ in terms of a tensor $S_T$, a scalar $S_{\rm sc}$ and two vector interactions $S_A$ and $S_B$:
\begin{align}
S_{T} &= -\frac{a^{2D-5}a_\tau}{8 N}\sum_{x,\mu,\nu,a}(a_\mu a_\nu F^a_{\mu\nu})^2\tr(\Delta_\mu \alpha_\nu -\Delta_\nu\alpha_\mu)^2 \notag\\
S_{\rm sc} &= a^{D-1} a_\tau \sum_{x, \mu, \nu}\tr[(D^{(0)}_\mu \alpha_\nu)(D^{(0)}_\mu \alpha_\nu)]\notag\\
S_{A} &= a^{D-1} a_\tau\sum_{x,\mu\nu}a^{(D-4)/2}\tr(A_{\mu\nu}(x)F_{\mu\nu}(x)) \notag\\
S_{B} &= \frac{1}{2}a^{D-1}a_\tau\sum_{x,\mu,\nu}a^{(D-4)/2}\tr(B_{\mu\nu}(x)F_{\mu\nu}(x)).
\end{align}
$A_{\mu\nu}(x)$ and $B_{\mu\nu}(x)$ are anti-symmetric and symmetric in the vector indices, respectively, and given by:
\begin{eqnarray}
A_{\mu\nu}(x) = &&-i\{2[\alpha_\nu, \alpha_\mu] + a_\nu[\alpha_\nu, D^{(0)}_\nu\alpha_\mu]\notag\\
&&+ a_\mu[D^{(0)}_\mu\alpha_\nu, \alpha_\mu] + \frac{a_\mu a_\nu}{2}[D^{(0)}_\mu\alpha_\nu, D^{(0)}_\nu\alpha_\mu]\}\notag\\
B_{\mu\nu}(x)=&& -i (a_\mu[D^{(0)}_\nu \alpha_\mu, \alpha_\mu] + a_\nu[\alpha_\nu, D^{(0)}_\mu \alpha_\nu]).
\end{eqnarray}
From $S_{\rm sc}$, we extract the free action for the $\alpha_\mu$ field:
\begin{eqnarray}
\label{eq:free}
S_{\rm free} = a^{D-1} a_\tau \sum_{x, \mu, \nu}\tr[(\Delta_\mu\alpha_\nu)(\Delta_\mu\alpha_\nu)]
\end{eqnarray}
and define $S_{\rm sc,I} = S_{\rm sc} - S_{\rm free}$.
The non-vanishing contributions to the effective action are given by:
\begin{align}
\label{eq:effaction2}
S^{(\xi)}_{\rm eff} = & S_0 + \left<S_T\right>-\frac{1}{2}\left<S^2_A\right> -\frac{1}{2}\left<S^2_B\right>\notag\\ &+\frac{D-2}{D}\left[\left<S_{{\rm sc},I}\right>-\frac{1}{2}\left<S^2_{{\rm sc},I}\right>\right]
\end{align}
with other terms vanishing. Notice that, unlike $U(1)$, the $\langle S_2^2\rangle$ terms contribute at leading order. The factor $\frac{D-2}{D}$ comes from the fact that the ghost field contribution cancels $2$ out of $D$ degrees of freedom of $\alpha_\mu$. Again, the expectation values are calculated with respect to $S_{\rm free}$. The final one-loop corrected action is given by:
\begin{eqnarray}
\label{eq:seff_f2}
S^{(\xi)}_{\rm eff} = \frac{1}{4}\int d^D x \bigg(&&\sum_{i}[(F^a_{i0})^2 + (F^a_{0i})^2][g^{-2}_{\tau}- f_{\tau}(\xi)] \notag\\
&&+\sum_{i,k} (F^a_{ik})^2[g^{-2}_{\sigma} - f_{\sigma}(\xi)]\bigg).
\end{eqnarray}
For $SU(N)$, the functions $f_\mu(\xi)$ are defined as
\begin{eqnarray}
\label{eq:sufun}
&&f_\tau(\xi) =4 N\bigg[ \frac{N^2-1}{16N^2}\{\frac{\xi^{-2} I_1(\xi)}{(D-1)} + \xi^{-1}I_5(\xi)\} +\frac{1}{64}I_{2b}(\xi) \notag\\&& + \frac{D-14}{384(D-1)}I_{2a}(\xi) + \frac{1}{256}I_4(\xi)\xi^{-2} + \frac{D-8}{192}\xi^{-2} I_6(\xi)\notag\\&& +\frac{2-D}{384}\xi^{-2} I_7(\xi) +\frac{26-D}{24}{\rm DIV}(\xi)\bigg],\notag\\
\notag\\
&&f_\sigma(\xi) = 4 N\bigg[\frac{N^2-1}{8N^2}\frac{I_1(\xi)}{(D-1)} +
\frac{D-14}{192(D-1)}I_{2a}(\xi)\notag\\&&+\frac{8-D}{192} I_3(\xi) +\frac{1}{128}I_4(\xi)+\frac{26-D}{24}{\rm DIV}(\xi)\bigg].
\end{eqnarray}
The ${\rm DIV}(\xi)$ part, defined as
\begin{eqnarray}
{\rm DIV}(\xi) &&= \frac{2^{D-4}}{(2\pi)^D}\int^{\pi/2}_{-\pi/2}d^{D-1}x \int^{(\pi/2)\xi}_{(-\pi/2)\xi}dx_0\notag\\
&& \bigg(\sum^{D-1}_{i=1}\sin^2 x_i + \xi^2\sin^2(x_0/\xi)\bigg)^{-2},
\end{eqnarray}
is divergent. This divergence comes from the $S_A$ and $S_{\rm sc, I}$ terms, which do have a corresponding continuum limit and contain a logarithmic divergence as $a\rightarrow 0$. With the definition of $c_\mu(\xi)$ in \eq{u1result}, the divergent part of ${\rm DIV}(\xi)$ is subtracted out.
\begin{figure}
\centering
\includegraphics[width=0.92\linewidth]{su3erik}
\caption{\label{fig:SU3} Anisotropic coefficients for $SU(3)$ in $D=3,4$.}
\end{figure}
Our calculation gives the same results as~\cite{Karsch:1982ve} for $D=4$ and as~\cite{Hasenfratz:1981tw, Hamer:1996ub} in the $\xi\rightarrow \infty$ limit. The values of $c_\mu(\xi)$ for $SU(3)$ gauge theory are shown in~\fig{SU3} for different dimensions.
\section{\label{sec:un} \texorpdfstring{$U(N)$}{U(N)} gauge theory}
The Lie algebra of the $U(N)$ group can be constructed by introducing the additional generator $\lambda^0 = \sqrt{\frac{2}{N}}\mathbb{I}_{N\times N}$ to the $SU(N)$ group. Corresponding to any index $a$ of the $SU(N)$ group, we introduce the index $A = (0, a)$, so that $A$ runs from 0 to $N^2-1$. With this construction, we still have $\tr[\lambda^A \lambda^B] = 2\delta_{AB}$; special care has to be taken for the anti-symmetric structure constants, as $f_{0BC} = 0$. The final one-loop corrected action is given by:
\begin{eqnarray}
\label{eq:seff_f3}
S^{(\xi)}_{\rm eff} = \frac{1}{4}\int d^D x \bigg(&&\sum_{i}[(F^a_{i0})^2 + (F^a_{0i})^2][g^{-2}_{\tau}- f_{\tau}(\xi)] \notag\\
&&+\sum_{i,k} (F^a_{ik})^2[g^{-2}_{\sigma} - f_{\sigma}(\xi)]\notag\\&&+\sum_{i}[(F^0_{i0})^2 + (F^0_{0i})^2][g^{-2}_{\tau}- f_{0,\tau}(\xi)] \notag\\
&&+\sum_{i,k} (F^0_{ik})^2[g^{-2}_{\sigma} - f_{0,\sigma}(\xi)]\bigg),
\end{eqnarray}
with $f_\tau(\xi)$ and $f_\sigma(\xi)$ given by \eq{sufun} but replacing $N^2-1$ by $N^2$, which changes the factor $\frac{N^2-1}{N^2}$ to 1,
and $f_{0, \tau}(\xi)$ and $f_{0,\sigma}(\xi)$ corresponding to Eq.~(\ref{eq:u1fun}) multiplied by a factor of $N/2$.
\section{\label{sec:discretegroup}Comparing to Numerical Results}
Our values of $c_\sigma$ and $c_\tau$ computed in Sec.~\ref{sec:u1} and Sec.~\ref{sec:sun} can be used to calculate the renormalized anisotropy $\xi$ using the relation $\bar{\xi} = c\xi$, with $c$ given in \eq{cfactor} and expressions for $g_\mu$ in \eq{gmu}.
In this section, we compare our one-loop calculations of $\xi$ as well as the bare anisotropy with nonperturbative results obtained from two sets of Monte Carlo results. The first are existing 2+1d $U(1)$ and $SU(2)$ results produced in Refs. \cite{Loan:2003wy,Teper:1998te}. The second are new ensembles produced by us for the discrete cyclic groups $\mathbb{Z}_{10}$ and $\mathbb{Z}_{100}$, and the binary icosahedral group ($\mathbb{BI}$). These discrete groups are of interest because they are subgroups of $U(1)$ and $SU(2)$, respectively, and have been proposed as approximations for use on quantum computers. Thus, it is interesting to investigate how well perturbative lattice field theory for the continuous group can approximate $\xi$ for the discrete subgroups. In both previous works, smearing was used to reduce the need for higher statistics. This has the consequence of changing the lattice spacings by an unknown, potentially large amount and can introduce some discrepancy between the perturbative and the nonperturbative results. For this reason, in our simulations we avoided using smearing at the cost of a larger number of lattice configurations.
\begin{table}
\caption{Ensemble parameters for the lattice simulations: group $G$, coupling $\beta$, bare anisotropy $\bar{\xi}$, lattice dimensions $N_s^{D-1}\times N_t$, decorrelation length $n_{\text{decor}}$ and number of configurations $n_{\text{meas}}$.}
\label{tab:ensembleparams}
\centering
\begin{tabular}{c r c c c c}
\toprule
$G$&$\beta$&$\bar{\xi}$&$N_s^{D-1}\times N_t$&$n_{\rm decor}$&$n_{\rm meas}$\\
\hline
$\mathbb{Z}_{10}$, $\mathbb{Z}_{100}$&1.35&2.25&$16^2\times48$&10&$8\times10^6$\\
$\mathbb{Z}_{10}$, $\mathbb{Z}_{100}$&1.35&2.25&$24^2\times72$&10&$4\times10^6$\\
$\mathbb{Z}_{10}$, $\mathbb{Z}_{100}$&1.55&2.5&$16^2\times48$&10&$5\times10^6$\\
$\mathbb{Z}_{10}$, $\mathbb{Z}_{100}$&1.55&2.5&$24^2\times72$&10&$1\times10^6$\\
$\mathbb{Z}_{10}$, $\mathbb{Z}_{100}$&1.7&3.0&$16^2\times48$&10&$5\times10^6$\\
$\mathbb{Z}_{10}$, $\mathbb{Z}_{100}$&1.7&3.0&$20^2\times60$&10&$5\times10^6$\\
$\mathbb{Z}_{10}$, $\mathbb{Z}_{100}$&1.7&3.0&$24^2\times72$&10&$1\times10^6$\\
$\mathbb{Z}_{10}$, $\mathbb{Z}_{100}$&2.0&3.0&$16^2\times48$&10&$5\times10^6$\\
$\mathbb{Z}_{10}$, $\mathbb{Z}_{100}$&2.0&3.0&$20^2\times60$&10&$5\times10^6$\\
$\mathbb{Z}_{10}$, $\mathbb{Z}_{100}$&2.0&3.0&$24^2\times72$&10&$1\times10^6$\\
$\mathbb{BI}$ & 2.0 & 2.0 & $36^2\times72$ &10 & $5000$ \\
$\mathbb{BI}$ & 3.0 & 1.33 & $36^2\times72$ & 10& $5000$\\
$\mathbb{BI}$ & 3.0 & 1.33 & $36^3\times72$ & 10& $250$ \\
\hline
\end{tabular}
\end{table}
The discrete group ensembles were generated by sampling from the Wilson action using a multi-hit Metropolis update algorithm, which has been found to be as efficient as a heat-bath in terms of autocorrelation length but significantly cheaper to implement for discrete groups~\cite{Alexandru:2019nsa}. The various ensemble parameters are found in Tab.~\ref{tab:ensembleparams}. Using discrete groups, we must worry about crossing into the frozen phase, where all the links take the value of the group identity $\mathbb{1}$ at a critical coupling $\beta_f$ for isotropic lattices. For $\mathbb{Z}_n$ groups, it is known that~\cite{Petcher:1980cq}:
\begin{equation}
\label{eq:bfn}
\beta_{f,n}\approx\frac{A}{1-\cos\left(\frac{2\pi}{n}\right)}.
\end{equation}
In $3+1d$, the theoretical value of $A$ was obtained in \cite{Petcher:1980cq} to be $A^{\rm th}_{\rm 3d}\approx\log(1+\sqrt{2})$, while numerical simulations gave $A_{\rm 3d}=0.78$~\cite{Creutz:1979kf,Creutz:1979zg}.
For the case of $2+1d$, using the value $\beta_{f, 2} = 0.761412$ obtained from Monte Carlo simulations in \cite{Hasenbusch:1992zz,Caselle:1994df,Agostini:1996xy}, the theoretical value of $A$ is calculated following \eq{bfn} to be $A^{\rm th}_{\rm 2d}=1.52282$. As a comparison, we compute $A_{\rm 2d}$ with the following procedure. For a given $n$, we measure the average plaquette energy $\left<E\right>$ as a function of $\beta$. $\beta_{f,n}$ is determined as the $\beta$ value that maximizes the specific heat $\left|\frac{\partial\left<E\right>}{\partial\beta}\right|$. We compute $\beta_{f, n}$ for $n=2, 3, \ldots, 10$ on $10^3$ lattices. As an example, we show the measured value of $\left<E\right>$ in \fig{z10} for $n=10$ at different $\beta$ values. Fitting the $\beta_{f,n}$ values to Eq.~(\ref{eq:bfn}), we obtain $A_{\rm 2d}=1.450(12)$. Two additional values of $\beta_{f, n}$, for $n=12, 15$, are computed and agree well with the fit. These results are plotted in \fig{detA}.
Comparing with $A^{\rm th}_{\rm 2d}$, this discrepancy indicates that corrections to the theoretical value are needed.
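As an illustration of this procedure, the following sketch evaluates \eq{bfn} and fits $A$ with {\tt scipy}; the `measured' values below are placeholders generated from the model itself, not our simulation data.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def beta_f(n, A):
    # Freezing coupling of Eq. (bfn)
    return A / (1.0 - np.cos(2.0 * np.pi / n))

n_vals = np.arange(2, 11)              # n = 2, ..., 10
beta_meas = beta_f(n_vals, 1.45)       # placeholder "measurements"
A_fit, A_cov = curve_fit(beta_f, n_vals, beta_meas, p0=[1.5])
print(A_fit[0])                        # recovers A_2d ~ 1.45
print(beta_f(10, 1.45))                # ~ 7.6, cf. beta_{f,10}
\end{verbatim}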
\begin{figure}
\centering
\includegraphics[width=\linewidth]{u1z10.pdf}
\caption{Average plaquette energy $\langle E\rangle$ as a function of $\beta$ for $\mathbb{Z}_{10}$ and $U(1)$, with $\beta_{f,10}=7.6$ indicated by the vertical line.}
\label{fig:z10}
\end{figure}
In the case of the anisotropic lattices considered here, one should expect the effects of the freezing-out to occur when $\beta_{\xi}\bar{\xi} =\beta_{\xi, f}\bar{\xi} \approx\beta_f$. However, as we observe for the isotropic lattice, $\mathbb{Z}_{10}$ deviates from $U(1)$ around $\beta\approx5$, which is much smaller than $\beta_{f,10}=7.6$ (see Fig.~\ref{fig:z10}). Hence, we expect
that $\beta_{\xi}\bar{\xi}\ll \beta_f$ is necessary to ensure that discrete subgroups are a reasonable approximation in $2+1d$.
In $3+1d$, deviations occur at $\beta$ values relatively closer to $\beta_f$ compared to $2+1d$, as observed in~\cite{Alexandru:2019nsa,Alam:2021uuq}.
To compare with existing nonperturbative results for $U(1)$ in $2+1d$ studied by Loan et al.~\cite{Loan:2003wy}, we generate ensembles for the $\mathbb{Z}_{10}$ and $\mathbb{Z}_{100}$ groups at the same set of $(\beta_{\xi},\bar{\xi})$ used by them (see \tab{anisotropyu1}). The two largest pairs investigated in \cite{Loan:2003wy} are $\beta_{\xi}\bar{\xi}=1.7\times3.0=5.1$ and $\beta_{\xi}\bar{\xi}=2.0\times3.0=6.0$, not much smaller than $\beta_{f, 10}$, and therefore we expect to observe a breakdown of the agreement between $\mathbb{Z}_{10}$ and the $U(1)$ results. In contrast, $\beta_{f,100}>700$, so $\mathbb{Z}_{100}$ results should be indistinguishable from an equivalent $U(1)$ computation.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{plot1.pdf}
\caption{$\beta_{f,n}$ versus $n$. $\mathbb{Z}_n$ for $n\leq10$ (black squares) were used to perform the fit, while $n>10$ test the extrapolation.}
\label{fig:detA}
\end{figure}
The 2+1d $SU(2)$ results from \cite{Teper:1998te} consider values of $\beta_{\xi}\bar{\xi}$ (see \tab{gateperlink}) above, or too close to, $\beta_{f,\mathbb{BI}}=9.65(1)$, which we calculated by procedures similar to those described above. Hence, in the following we will not make direct comparisons between discrete groups and continuous groups, but instead compare the viability of the one-loop calculations with the $SU(2)$ continuous group. We then computed $\mathbb{BI}$ configurations at parameter values where $\beta_{\xi}\bar{\xi}=4$ and compare those results with our one-loop calculations.
Additionally, we performed one simulation of 3+1d $\mathbb{BI}$, which we also compare with our one-loop calculations.
Different methods are available for determining $\xi$ from lattice results. Loan et al.~\cite{Loan:2003wy} utilized the ratio of subtracted static potentials, where a subtraction point must be picked. Teper et al.~\cite{Teper:1998te} use two methods:
the first compares correlators in the spatial and temporal directions, which can also be used to determine $\xi$ in real-time simulations \cite{Carena:2021ltu}; the second computes the dispersion relation with low-lying momentum states and tunes $\xi$ to obtain $E(p)\approx \sqrt{m_P^2+p^2}$. These two results are then averaged to obtain a final estimate of $\xi$.
We determined $\xi$ for the discrete groups via the `ratio-of-ratios' method~\cite{Klassen:1998ua}. This method involves computing the ratios of Wilson loops:
\begin{equation}
R_{ss}(x,y)=\frac{W_{ss}(x,y)}{W_{ss}(x+1,y)},\, R_{st}(x,t)=\frac{W_{st}(x,t)}{W_{st}(x+1,t)},
\end{equation}
where $x,y,t$ are the integer lattice separations and the subscripts indicate the orientation of the Wilson loops, either spatial-spatial or spatial-temporal. In the large $x$ limit, where the excited-state contamination is suppressed, we have $W_{ss}(x,y)\propto e^{-a x V_s(y a)}$ and $W_{st}(x,t)\propto e^{-a x V_s(t a_\tau)}$, with $V_s$ being the static quark-antiquark potential. This leads to:
\begin{eqnarray}
R_{ss}(x,y)|_{x\to\infty}=e^{-a V_s(y a)},\\
R_{st}(x,t)|_{x\to\infty}=e^{-a V_s(t a_\tau)}.
\end{eqnarray}
We define a variable
\begin{equation}
\delta(x,y,t)=\frac{R_{ss}(x,y)}{R_{st}(x,t)}-1,
\end{equation}
such that $\delta(x,y,t)=0$ will be satisfied in the large $x$ limit when $y a = t a_\tau$. From the zero crossing we determine $\xi(y) = t/y$. \fig{interpolation} (top) shows the plateau behavior of $\delta(x, y, t)$ as we approach the large $x$ limit.
Typically, the zero crossing does not occur at integer $y,t$, and thus interpolation between values is required. An example of this calculation is shown in Fig.~\ref{fig:interpolation} (bottom) for $\mathbb{Z}_{100}$ using $y=3$, $\beta_{\xi}=1.7$, and $\bar{\xi}=3.0$ on a lattice of size $N_s^{D-1}\times N_t=20^2\times60$. The final step is to take the $\xi(y)$ value in the large $y$ limit as our renormalized $\xi$, to again remove excited-state contamination (see Fig.~\ref{fig:anisotropycalculation}). The increasing uncertainty at larger $y$ is due to the exponential decay of the Wilson loop $W_{ss}(x, y)$, leading to a signal-to-noise problem.
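The following Python sketch illustrates the ratio-of-ratios analysis described above; the Wilson-loop arrays {\tt W\_ss} and {\tt W\_st} are hypothetical inputs indexed by integer extents, and a simple linear interpolation locates the zero crossing.
\begin{verbatim}
import numpy as np

def delta(W_ss, W_st, x, y, t):
    # Ratio-of-ratios variable delta(x, y, t) from the text
    R_ss = W_ss[x, y] / W_ss[x + 1, y]
    R_st = W_st[x, t] / W_st[x + 1, t]
    return R_ss / R_st - 1.0

def xi_of_y(W_ss, W_st, x, y, t_max):
    # Locate the zero crossing of delta in t by linear
    # interpolation, at fixed (large) x and y; then xi(y) = t*/y.
    d = np.array([delta(W_ss, W_st, x, y, t)
                  for t in range(1, t_max)])
    k = int(np.argmax(d[:-1] * d[1:] < 0))   # first sign change
    t1, d1, d2 = k + 1, d[k], d[k + 1]
    t_star = t1 + d1 / (d1 - d2)             # linear zero crossing
    return t_star / y
\end{verbatim}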
\begin{figure}
\centering
\includegraphics[width=\linewidth]{anisotropies_extrapolationz100b=1_7xi=3_0ns=20nt=60y=3v2.pdf}
\caption{Example calculations of $\delta(x, y, t)$ for $y=3$ as a function of $x$ (top), and $\delta(x\to\infty, 3, t)$ for various values of $t$ (bottom), fitted to determine where $\delta(x\to\infty, y, t)=0$, for $\mathbb{Z}_{100}$ using $\beta_\xi=1.7$ and $\bar{\xi}=3.0$ on a lattice of size $N_s^{D-1}\times N_t=20^2\times60$.}
\label{fig:interpolation}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{anisotropy_fitZ100b=1_7xi0=3_0ns=20nt=60.pdf}
\caption{Measured $\xi$ as a function of $y$. The band corresponds to the 1$\sigma$ error band for the best fit to the plateau region. The ensemble parameters are the same as for Fig.~\ref{fig:interpolation}.}
\label{fig:anisotropycalculation}
\end{figure}
In \fig{u1matching}, we show the comparison between the $\xi$ values from our one-loop calculation and the results from nonperturbative Monte Carlo simulations by Loan et al.~\cite{Loan:2003wy} for $U(1)$ gauge theory in $2+1d$, alongside our results for $\mathbb{Z}_{10}$ and $\mathbb{Z}_{100}$, also in $2+1d$. It is encouraging to see that including one-loop effects shifts $\xi$ into better agreement with the nonperturbative results compared to $\bar{\xi}$. As a metric for comparison, we use the relative errors:
\begin{eqnarray}
\mathcal{F}_{g}&=&\left|1-\frac{\xi_{\rm 1-loop}}{\xi_{g}}\right|, \notag\\\mathcal{F}^{\bar{\xi}}_{g}&=&\left|1-\frac{\bar{\xi}}{\xi_{g}}\right|
\end{eqnarray}
where $g$ labels the nonperturbative data to which we are comparing our one-loop results. For the smeared $U(1)$ results, we find $\mathcal{F}_{U(1)}\leq 4.71(8)\%$ compared to $\mathcal{F}^{\bar{\xi}}_{U(1)}\leq 13.3(15)\%$.
The situation with $\mathbb{Z}_n$ is more involved. For $\beta_{\xi}<1.65$, we find that $\xi_{\mathbb{Z}_{10}}=\xi_{\mathbb{Z}_{100}}$, but both are systematically higher than the $U(1)$ results. At present, we do not understand why greater disagreement is found between the discrete and continuous groups at lower $\beta_{\xi}$, since lower values of $\beta_{\xi}$ are further from the freezing-out regime and should be in better agreement.
We investigated whether finite volume effects could be playing a role, but for all values of $\{\beta_{\xi},\bar{\xi}\}$ we observed no volume dependence, as seen in \tab{anisotropyu1} and \tab{gateperlink}. Two possible sources of the discrepancy could be the use of smearing in \cite{Loan:2003wy,Teper:1998te} or the different methods of measuring $\xi$. Future work should analyze the discrete and continuous groups under the same circumstances. As $\beta_{\xi}$ increases, $\xi_{\mathbb{Z}_{100}}$ approaches $\xi_{U(1)}$, with $\mathbb{Z}_{10}$ showing noticeable and growing disagreement.
Across $\beta$, we found $\mathcal{F}_{\mathbb{Z}_{10}}\approx\mathcal{F}_{\mathbb{Z}_{100}}\leq9.5(3)\%$
compared to $\mathcal{F}^{\bar\xi}_{\mathbb{Z}_{10}, \mathbb{Z}_{100}}\leq 18.5(3)\%$.
Higher-order loop corrections to $\xi$, of $\mathcal{O}(g^4_E)$, could be important for the $\beta_{\xi}$ regions considered, and the effects of monopoles may also be relevant~\cite{Cella:1997hw}.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{U1matchingErik.pdf}
\caption{\label{fig:u1matching} Comparison of one-loop $\xi$ to $\bar{\xi}$, the nonperturbative $\xi$ value from Loan et al. ($r_0 =\sqrt{2}$)~\cite{Loan:2003wy} for the $2+1d$ $U(1)$ theory, and for the $\mathbb{Z}_{n}$ discrete groups.}
\end{figure}
\begin{table}[ht]
\caption{Renormalized Anisotropies of $U(1)$ from 1-loop calculation, lattice simulations of $\mathbb{Z}_{10}$ and $\mathbb{Z}_{100}$, and $U(1)$~\cite{Loan:2003wy}.}
\label{tab:anisotropyu1}
\begin{tabular}{cccc|cccc}
\hline\hline
$\beta_{\xi}$& $N_s$ & $N_t$ & $\bar{\xi}$ &$\xi_{\rm 1-loop}$ & \multicolumn{3}{c}{$\xi$}\\\hline
&&&&&$\mathbb{Z}_{10}$&$\mathbb{Z}_{100}$&$U(1)$~\cite{Loan:2003wy}\\
\hline
1.35&16&48&2.25&2.493&2.738(20)&2.732(30)&2.39(4)\\
1.35&24&72&2.25&2.493& 2.762(10) & 2.753(20) & $\cdots$\\
1.55&16&48&2.50&2.750&2.939(29)&2.972(40)&2.72(9)\\
1.55&24&72&2.50&2.750& 2.984(50) & 2.972(22)&$\cdots$\\
1.70&16&48&3.00&3.302&3.513(11)&3.572(10)&3.46(6)\\
1.70&20&60& 3.00& 3.302 & 3.512(20)& 3.527(16)&$\cdots$ \\
1.70&24&72&3.00&3.302& 3.527(24) & 3.555(20)&$\cdots$\\
2.00&16&48&3.00&3.253&3.259(17)&3.421(15)&3.42(3)\\
2.00&20&60&3.00&3.253&3.252(38)&3.379(20)&$\cdots$\\
2.00&24&72&3.00&3.253&3.228(48)&3.389(23)&$\cdots$\\
\hline
\end{tabular}
\end{table}
We can also compare our $\xi_{\rm 1-loop}$ for $SU(2)$ in $2+1d$ to the results from non-perturbative Monte Carlo simulations \cite{Teper:1998te}, which are shown in \fig{su2matching} and \tab{gateperlink}, and to the $\xi$ values we computed for the $\mathbb{BI}$ group (\tab{gateperlink}).
The effect of the one-loop correction is to increase $\xi$ by about $10\%$. The largest error from using $\bar{\xi}$ was found to be $\mathcal{F}^{\bar{\xi}}_{SU(2), \mathbb{BI}}\leq 8(4)\%$. In contrast, we observe $\mathcal{F}_{SU(2)}\leq1(4)\%$ and $\mathcal{F}_{\mathbb{BI}}=1.30(13)\%$, both consistent with 0 -- albeit the $SU(2)$ Monte Carlo results have larger uncertainties compared to the $U(1)$ case. This agreement is found in both the $2+1d$ and $3+1d$ $\mathbb{BI}$ results (see \tab{gateperlink}).
\begin{figure}
\centering
\includegraphics[width=\linewidth]{SU2matchingErik}
\caption{\label{fig:su2matching} Comparison of one-loop $\xi$ to nonperturbative value of~\cite{Teper:1998te} for $2+1d$ $SU(2)$, with $\bar{\xi} = 4$.}
\end{figure}
\begin{table}[ht]
\caption{Renormalized Anisotropies from 1-loop calculation, discrete group $\mathbb{BI}$, and \cite{Teper:1998te}.}
\label{tab:gateperlink}
\begin{tabular}{cllc|ccc}
\hline\hline
$\beta_{\xi}$& $N_s$ & $N_t$ & $\bar{\xi}$ &$\xi_{\rm 1-loop}$ & \multicolumn{2}{c}{$\xi$}\\\hline
&&&&&$\mathbb{BI}$&$SU(2)$~\cite{Teper:1998te}\\
\hline
$D=3$&&&&&&\\
2.00&36&72&2.00&2.097&2.099(1)&$\cdots$\\
2.00&12\footnote{\label{tfv}This is the largest volume simulated}&60\footref{tfv}&4.00&4.278&$\cdots$&4.35(19)\\
2.65&16\footref{tfv}&64\footref{tfv}&4.00&4.207&$\cdots$&4.22(11)\\
3.00&36&72&1.33&1.351&1.369(19)&$\cdots$\\
4.00&24\footref{tfv}&96\footref{tfv}&4.00&4.136&$\cdots$&4.08(9)\\
\hline
$D=4$&&&&&&\\
3.0&36&72&1.33&1.351&1.36(1)&$\cdots$\\
\hline
\end{tabular}
\end{table}
In all the groups studied, $\mathcal{F}_g$ was found to decrease or remain constant as $\beta_{\xi}$ was increased -- in agreement with expectations for a weak-coupling calculation, with the caveat that for discrete groups $\beta_{\xi}$ should be away from the freezing-out regime.
For all the $\beta_{\xi}$ values investigated here, we find that the systematic error from approximating the nonperturbative results by the one-loop results is less than 10\% for both discrete and continuous groups. Given that these $\beta_{\xi}$ values correspond to relatively strong coupling, a 10\% systematic error is a conservative estimate for realistic simulations on quantum computers, where larger $\beta_{\xi}$ values are used.
\section{\label{sec:Conclusions}Conclusions}
Quantum field theories simulated with quantum computers are naturally lattice-regularized theories, requiring renormalization before comparisons to
experiments can be made. Quantum simulations are constructed within the Hamiltonian formalism, where a spatial lattice with spacing $a$ is
time-evolved. Further approximations are required, as the time evolution operator $\mathcal{U}(t)$ built from the Kogut-Susskind Hamiltonian usually cannot be exactly implemented in an efficient manner. One common method for these approximations is trotterization, which introduces a finite temporal lattice spacing $a_t$ and thus a finite anisotropy factor $a/a_t$ in the quantum simulations. As the trotterized $\mathcal{U}(t)$ can be related to the Euclidean transfer matrix on the anisotropic lattice via analytical continuation, it is beneficial to have the perturbative relations between the bare and renormalized quantities in Euclidean spacetime, e.g. the anisotropy factor $\xi$ as a function of $\beta_\xi$ and $\bar{\xi}$.
In the near term, studies of quantum field theory on quantum computers will be limited to low dimensions at coarse $a_t$. In this article, we extended the perturbative matching of coupling constants to general $SU(N)$ and $U(N)$ gauge theories for any anisotropy factor $\xi$ and general dimensions. The results presented here can easily be used for Euclidean measurements as well as inputs to quantum simulations through analytical continuation. As examples, we compared anisotropy factors obtained via the one-loop renormalization to those determined from Monte Carlo simulations,
and found good agreement for $SU(N)$ gauge theories. For $U(1)$ gauge theories, the one-loop calculation corrects most of the renormalization effects observed in the nonperturbative results. To the best of our knowledge, these comparisons were not previously performed, and they provide important guidance on the validity of the perturbative calculations. Taken holistically, our results suggest that the one-loop $\xi$ can serve as a replacement for the nonperturbative value in lattice calculations while inducing a systematic error $\leq10\%$, with $SU(2)$ appearing to have better agreement than $U(1)$ in $2+1d$. In the weak coupling regime at sufficiently small $a$, this error is subleading to quantum errors for near-term quantum simulations. Comparing the $\xi$ parameters calculated perturbatively for continuous groups with those calculated nonperturbatively for discrete groups, we find satisfactory agreement, suggesting that the perturbative relations derived in this paper are also applicable to discrete groups in the parameter space of interest.
\begin{acknowledgments}
This work is supported by the Department of Energy through the Fermilab QuantiSED program in the area of ``Intersections of QIS and Theoretical Particle Physics". Fermilab is operated by Fermi Research Alliance, LLC under contract number DE-AC02-07CH11359 with the United States Department of Energy.
\end{acknowledgments}
Among the potential sources of gravitational waves (GW), the merger of double compact objects (DCOs) is
considered the most promising one for the first detection. The next generation of gravitational
wave observatories (i.e., Advanced LIGO, Advanced Virgo, KAGRA)
will probe the Universe in search of DCO signatures at unprecedented distances,
reaching cosmological scales ($z>0.1$).
In this paper we present predictions for DCO merger rates from isolated (i.e.,
field population) DCO progenitors as a function of cosmological redshift.
The distribution of binary coalescence as a function of redshift has been investigated
by several authors. An important initial work was the investigation of the
redshift distribution of GRBs (e.g. \cite{totani}). Preliminary work on
the importance of GW measurements of chirp mass distributions was done
by \cite{bbr}, while initial studies of the GW confusion background have
been presented in \cite{reghugh}.
In the first paper in this series \citep{dominik} we investigated the sensitivity
of DCO formation to major uncertainties of binary evolution (regarding mostly supernovae and
common envelope episodes (CE)). We presented several models to bracket the
current uncertainty in the phenomena deciding the fate of DCO systems.
Building on this work, in the current study we present a set of four
evolutionary models. In addition to
a standard (reference) model, we have added models investigating a range of Hertzsprung gap (HG) CE donors,
supernova (SN) explosion engines, and black hole (BH) natal kicks (see Section \ref{suite} and
Table \ref{list}). Additionally, for each model we have performed the evolutionary calculations for 11
metallicity values, allowing us to cover the abundance of metals in Population I
and II stars (see Sections \ref{galpop} and \ref{binary}).
To account for the varied chemical composition of the Universe, we perform the cosmological
calculations for two scenarios of metallicity evolution, that we will call ``low--end'' and ``high--end''
, respectively. These yield distinct rates of average metallicity growth, allowing us to ``bracket'' the
associated uncertainties (see Section \ref{gmet}, Fig.~\ref{masmet} and Fig.~\ref{zmet}).
In this study we investigate field stellar populations only. However, recent studies (e.g.,
\cite{kluster}) suggest that mergers in globular clusters may add a significant contribution to
the overall coalescence rates in the Universe. In this sense, our results can be taken as
conservative lower limits.
We present the intrinsic merger rate densities and observer frame merger rates of all three types
of DCOs in Figures \ref{rest4high}, \ref{metform}, \ref{rest4low}, \ref{obs4high}, and \ref{obs4low}.
Figs.~\ref{egdishigh} and~\ref{egdislow} show the BH-BH merger rate densities as
a function of the total masses of the systems. The results acquired in this study are
available online at {\tt www.syntheticuniverse.org}.
\section{Stellar populations}
In this section we describe the properties of stellar populations, and their evolution
with redshift. The formalism is mostly adopted from \cite{grb}.
\subsection{Star Formation History}
In order to determine the merger rates of DCOs we need the star formation rate (SFR).
We adopt the formula provided by \cite{strolger}:
\begin{equation} \label{sfr}
SFR=10^9a \left(t^b e^{-t/c} +de^{d(t-t_0)/c} \right) \,{\rm M}_{\odot} \textrm{yr$^{-1}$ Gpc$^{-3}$},
\end{equation}
where $t$ is the age of the Universe (Gyr) as measured in the rest frame, $t_0$ is the
present age of the Universe ($13.47$ Gyr, see Section \ref{cosmo}) and the parameters
have values: $a=0.182$, $b=1.26$, $c=1.865$ and $d=0.071$. The SFR described above is expressed
in comoving units of length and time.
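As a reference implementation of Eq.~\ref{sfr} (a direct transcription, with $t$ in Gyr and the result in ${\rm M}_{\odot}$ yr$^{-1}$ Gpc$^{-3}$), one may use:
\begin{verbatim}
import numpy as np

def sfr(t, t0=13.47, a=0.182, b=1.26, c=1.865, d=0.071):
    # Star formation rate of Eq. (sfr), comoving units;
    # t = age of the Universe in Gyr
    return 1e9 * a * (t**b * np.exp(-t / c)
                      + d * np.exp(d * (t - t0) / c))
\end{verbatim}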
\subsection{Galaxy Mass Distribution}
For redshifts $z<4$ we describe the distribution of galaxy masses using a Schechter-type
probability density function, calibrated to observations~\citep{fontana}:
\begin{equation} \label{galmass}
\Phi(M_{{\rm gal}},z)=\Phi^*(z)\ln(10)a^{1+\alpha(z)}e^{-a},
\end{equation}
where $\Phi^*(z)=0.0035(1+z)^{-2.2}$, $a=M_{\rm gal} \cdot 10^{-M_{\rm z}}$
($M_{\rm z}=11.16+0.17z-0.07z^2$), and $\alpha(z)=-1.18-0.082z$. A galaxy mass
is drawn from this distribution in solar units (${\rm M}_{\odot}$) and in the range
$7 < \log(M_{\rm gal}) < 12$. Beyond redshift $z=4$ we assume no
further evolution in galaxy mass, fixing the mass distribution
to the value at $z=4$. This assumption reflects the lack of information on
galaxy mass distribution at high redshift.
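As a minimal sketch of this Monte Carlo draw, galaxy masses can be rejection-sampled in $\log M_{\rm gal}$ from the unnormalized Eq.~\ref{galmass}; the overall factor $\Phi^*(z)$ drops out of the sampling.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def phi(logM, z):
    # Unnormalized Schechter-type density of Eq. (galmass)
    a = 10.0**logM * 10.0**(-(11.16 + 0.17 * z - 0.07 * z**2))
    alpha = -1.18 - 0.082 * z
    return a**(1.0 + alpha) * np.exp(-a)

def draw_logM(z, n, lo=7.0, hi=12.0):
    # Rejection sampling over 7 < log10(M_gal) < 12
    grid = np.linspace(lo, hi, 1001)
    fmax = phi(grid, z).max()
    out = []
    while len(out) < n:
        x = rng.uniform(lo, hi, n)
        keep = rng.uniform(0.0, fmax, n) < phi(x, z)
        out.extend(x[keep])
    return np.array(out[:n])

logM = draw_logM(z=2.0, n=10_000)   # 10^4 galaxies per time bin
\end{verbatim}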
\subsection{Galaxy Metallicity} \label{gmet}
We assume the average oxygen to hydrogen number ratio
($F_{\rm OH}=\log(10^{12}{\rm O/H})$) in a typical galaxy to be given by
\begin{equation} \label{meteq}
F_{\rm OH}=s+1.847 \log(M_{\rm gal})- 0.08026 (\log(M_{\rm gal}))^2.
\end{equation}
As suggested
by \cite{erb} and \cite{ynf}, the functional form of this mass-metallicity relation is
redshift independent, with only the normalization factor $s$ varying with redshift.
We describe the redshift dependence of galaxy metallicity using the average metallicity
relation from \cite{pei}:
\begin{equation}
Z \propto \left\{
\begin{array}{l r}
10^{-a_2 z} & z < 3.2\\
10^{-b_1-b_2 z} & 3.2 \leq z < 5\\
10^{-c_1-c_2 z} & z \geq 5\\
\end{array} \right. ,
\end{equation}
which implies the evolution of $s$ with redshift:
\begin{equation}
s \propto \left\{
\begin{array}{l l}
-a_2z -1.492 & z < 3.2 \\
-b_2z -3.2(a_2-b_2)-1.492 & 3.2 \leq z < 5 \\
-c_2z -5(b_2-c_2) -3.2(a_2-b_2)-1.492 & z \geq 5 \\
\end{array} \right.
\end{equation}
We assume that the oxygen abundance (used in $F_{\rm OH}$)
correlates linearly with the average abundance of elements heavier than Helium
(encapsulated in the metallicity measure, $Z$).
In this paper we employ two distinct scenarios for metallicity evolution with redshift in order
to investigate the uncertainties of the chemical evolution of the Universe. The construction of
these scenarios consists of several steps. (1.) We utilize two normalizations of
Eq.~\ref{meteq}. In the first, provided by \cite{pei}, the coefficients are: $a_2=0.5$, $b_1=0.8$,
$b_2=0.25$, $c_1=0.2$, $c_2=0.4$. This grants a rate of average metallicity evolution, which we label
\textit{slow}. The second, provided by \cite{ynf}, uses $a_2=0.12$, $b_1=-0.704$, $b_2=0.34$,
$c_1=0.0$, $c_2=0.1992$. It is based on ultraviolet-GALEX, SDSS, infrared-Spitzer and
neutrino-Super Kamiokande observations \citep{hopkins}. This normalization
produces a faster rate of chemical evolution, and we label the
results \textit{fast}. At this point, for each galaxy mass value at a given redshift (Eq.~\ref{galmass}) we
have two metallicity values (Eq.~\ref{meteq}). (2.) We then combine these (\textit{slow}
and \textit{fast}) metallicities into a single value, the average of the two; we label this
profile as \textit{initial}. However, this profile yields an unrealistically high number of galaxies
with extrasolar (up to 3${\rm ~Z}_{\odot}$; ${\rm ~Z}_{\odot}=2\%$ of stellar mass) metallicities at
redshift $z\sim 0$.
(3.) In order to be consistent with observational data, we scale down the
profile so that it agrees with the
observed metallicities of galaxies in the local Universe (at $z\sim 0$). We explore two such ``extreme''
scalings resulting in a pair of final metallicity evolution
profiles. In the first, we divide the metallicity
values from the \textit{initial} profile by a factor of $1.7$. This grants a median value of metallicity
of $1.5{\rm ~Z}_{\odot}$ at $z\sim 0$ (see Fig.~\ref{masmet}), which corresponds to $8.9$ in the ``12+log(O/H)''
formalism. This calibration was designed to match the upper $1 \sigma$ scatter of metallicities according to
\cite{yuan} (see their Fig.~2, top-right panel). We label this profile as \textit{high--end}, as it
is the upper limit on metallicity at $z\sim 0$. In the second, we utilize SDSS observations \citep{panter},
from which we infer that one half of the star forming mass of the galaxies at $z\sim0$ has
$20\%$ solar metallicity, while the other
half has $80\%$ solar metallicity. This yields a median metallicity value of $0.8{\rm ~Z}_{\odot}$ and
requires the division of the \textit{initial} profile by a factor of $3$. We label this profile as
\textit{low--end}.
\subsection{Galaxy Stellar Populations} \label{galpop}
We distinguish three stellar populations:
\begin{equation}
\begin{array}{l l}
F_{\rm OH,gal} < 10^{-4} & \textrm{Population III} \\
10^{-4} \leq F_{\rm OH,gal} \leq 10^{-1} & \textrm{Population II} \\
F_{\rm OH,gal} > 10^{-1} & \textrm{Population I}
\end{array}.
\end{equation}
We choose $F_{\rm OH,gal}=10^{-4}$ as the delineation point between Population II and III stars.
A lower abundance of metals provides insufficient cooling in the collapse of gas
clouds, and thus significantly alters the star formation for Population III stars
\citep[e.g.][]{mackey,smith}. The border point between Population II and I stars is dictated by observations
in the Milky Way \citep[e.g.][]{binney,beers}.
We assume that the binary fraction is $50\%$: for each single star there exists one
binary. We additionally assume that all the stars within each galaxy share the same metallicity
value. The use of average metallicity seems to be appropriate since we draw a large
($10^4$) number of galaxies (Eq.~\ref{galmass}) via Monte Carlo simulations.
\section{Binary Star Modeling} \label{binary}
We present our calculations for a set of 4 models, each differing in major input
physics (see Table \ref{list} and the subsequent sections). For each model we use a
grid of 11 metallicity values ($Z=0.03$, $0.02$ (solar, ${\rm ~Z}_{\odot}$), $0.015$, $0.01$, $0.005$,
$0.002$, $0.0015$, $0.001$, $0.0005$, $0.0002$, $0.0001$) in order to accurately account for
the average metallicity evolution of the stellar populations with redshift.
\subsection{The {\tt StarTrack} code}
To calculate the evolution of the stellar populations we utilize the recently
updated \citep{onthemax,massgap,dominik} {\tt StarTrack} population synthesis code
\citep{comprehensive,startrack}. This code can evolve isolated binary stars that are interacting
in quasi-hydrostatic equilibrium from the Zero Age Main Sequence (ZAMS), through mass transfer,
to the formation of compact objects, and to the ultimate merger of the binary components. The
code makes use of an extensive set of formulae and prescriptions that adequately approximate more
detailed binary evolution calculations (see \cite{hurley}).
{\tt StarTrack} allows us to investigate stable and unstable mass transfers between
the binary components. Stable transfer calculations have been calibrated on massive binaries
that are relevant to DCO formation \citep[e.g.][]{tau1999,well2001}. It is yet unknown exactly
how conservative the stable mass transfer is. \cite{dp2003} suggest that in massive binaries the
fraction of the envelope of the donor accreted by its companion ranges between $40\%$ and $70\%$.
In our calculations we fix this value to be $50\%$, or in mathematical terms:
\begin{equation}
\dot{M_{\rm acc}}=f_{\rm a}\dot{M_{\rm don}},
\end{equation}
where $\dot{M_{\rm acc}}$ is the accretion rate, $\dot{M_{\rm don}}$ is the mass transfer rate from
the donor and $f_{\rm a}$ is the fraction of the rate transferred (here equal to $0.5$). The remaining
mass is expelled to infinity.
Dynamically unstable mass transfer (common envelope) is calculated according
to the energy balance formula \citep{webbink}, with the envelope binding energy parameter $\lambda$
adopted from \cite{chlambda}.
Tidal interactions and their influence on eccentricity, the semi-major axis, and rotation are also
evaluated. The calculations are done with the standard equilibrium-tide, weak-friction approximation
\citep{zahn77,zahn89}, using the formalism of \cite{hut81}. However, the code does not allow us to investigate
the influence of the rotation of the components on their internal structure.
Stellar winds are taken into account as a function of the metallicity and evolutionary stage of the
star. This piece of physics is especially important as it has a significant impact on the masses
of remnant objects, which are the centerpiece of this study. In short, the wind mass loss rates are
divided into categories specific to the evolutionary stage of the star: O/B--type, Red Giant, Asymptotic
Red Giant, Wolf-Rayet stars and Luminous Blue Variable (LBV) stars. The magnitude of the
rates increases with metallicity of the star except for the LBV phase. In this stage the winds are set
to be of the order of $10^{-4} {\rm M}_{\odot}$yr$^{-1}$. This value was calibrated to account for the highest
mass black holes in the Milky Way $\sim 15 {\rm M}_{\odot}$ (Cyg X-1 and GRS 1915). A detailed description of
wind mass loss rates can be found in \cite{onthemax}.
Besides stellar winds, the code also calculates changes of the angular momentum arising from gravitational
radiation and magnetic braking. The latter is adopted from \cite{ivan2003}.
Additionally, the code utilizes the convection driven, neutrino--enhanced supernovae engines
\citep{chrisija} to determine the properties of the remnant objects (neutron stars and black holes).
For each metallicity value in each model we evolve $2\times 10^6$ binaries,
assuming that both components are created at the same time. Each binary system is initialized
by four parameters which are assumed to be independent. These are: primary
mass, $M_1$ (initially more massive component), mass ratio, $q=M_2/M_1$, where
$M_2$ is the mass of the secondary component (initially less massive), the
semi-major axis, $a$, of the orbit, and the eccentricity, $e$. The mass of the primary component
is randomly chosen from the initial mass function (IMF) adopted from \cite{kro1} and
\cite{kro2}:
\begin{equation} \label{imf}
\Psi (M_1) \propto \left\{
\begin{array}{l c}
M_1^{-1.3} & \quad 0.08 \ {\rm M}_{\odot} \leq M_1 < 0.5 \ {\rm M}_{\odot} \\
M_1^{-2.2} & \quad 0.5 \ {\rm M}_{\odot} \leq M_1 < 1.0 \ {\rm M}_{\odot} \\
M_1^{-\alpha} & \quad 1.0 \ {\rm M}_{\odot} \leq M_1 < 150 \ {\rm M}_{\odot}, \\
\end{array}
\right.
\end{equation}
where $\alpha=2.7$ is our standard choice for field populations. The choice of the upper IMF
cutoff ($150{\rm M}_{\odot}$) is justified by observations of massive stars in the Milky Way \citep{figer,oey}.
Stars are generated from within an initial mass range, with the limits based on the targeted stellar
population. For example, studies of single neutron stars require their evolution within the range
$8$--$20\,{\rm M}_{\odot}$, while for single BHs the lower limit is $20\,{\rm M}_{\odot}$. Binary evolution
may broaden these ranges due to mass transfer episodes, and we therefore set the
minimum mass of the primary to $5\,{\rm M}_{\odot}$. We assume a flat mass ratio distribution,
$\Phi(q)=1$, over the range $q=0$--1, in agreement with recent observations~\citep{kob}.
Given a value of the primary mass and the mass ratio, we obtain the mass of the secondary
from $M_2=qM_1$. However, for the same reasons as for the primary, we do not consider binaries
where the mass of the secondary is below $3{\rm M}_{\odot}$. The distribution of initial binary
separations is assumed to be flat in $\log(a)$~\citep{abt}, and so $\propto 1/a$,
with $a$ ranging from values such that at ZAMS the primary fills no
more than 50\% of its Roche lobe to $10^5 \ {\rm ~R}_{\odot} $. For the initial eccentricity we adopt a thermal
equilibrium distribution \citep[e.g.][]{heggie,duq}: $\Xi(e)=2e$, with $e$ ranging from $0$ to $1$.
We find that the adopted parameters are in accordance with the most recent observations of O-star
populations \citep{sana}.
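A compact sketch of these initial-condition draws is given below, assuming the primary is drawn only from the high-mass IMF segment (the relevant one above our $5\,{\rm M}_\odot$ floor); the lower limit {\tt a\_min} on the separation stands in for the Roche-lobe condition at ZAMS, which in the actual calculation depends on the stellar radii.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def draw_binaries(n, alpha=2.7, m_lo=5.0, m_hi=150.0, a_min=10.0):
    u = rng.uniform(size=n)
    k = 1.0 - alpha
    # Inverse CDF for a single power law M^-alpha on [m_lo, m_hi]
    M1 = (m_lo**k + u * (m_hi**k - m_lo**k))**(1.0 / k)
    q = rng.uniform(0.0, 1.0, n)          # flat mass ratio
    M2 = q * M1
    ok = M2 >= 3.0                        # drop secondaries < 3 Msun
    e = np.sqrt(rng.uniform(size=n))      # thermal: Xi(e) = 2e
    loga = rng.uniform(np.log10(a_min), 5.0, n)  # flat in log(a)
    return M1[ok], M2[ok], e[ok], (10.0**loga)[ok]
\end{verbatim}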
\subsection{The model suite} \label{suite}
\subsubsection{The Standard Model} \label{smodel}
In this subsection we define a reference model for this paper.
This model is identical to the ``Standard model -- submodel B'' in the previous paper
in this series \citep{dominik}.
The list of major parameters describing the input physics of binary evolution in this
model begins with the \textit{Nanjing} $\lambda$ \citep{chlambda} common envelope (CE)
coefficient used in the energy balance prescription \citep{webbink}. This $\lambda$ value
depends on the evolutionary stage of the donor, its mass at ZAMS, the mass
of its envelope, and its radius. In addition, all of these quantities depend on metallicity,
which in our simulations changes within a broad range ($Z=10^{-4}$--$0.03$).
However, before calculating the aforementioned energy balance to determine the outcome
of the CE we check the evolutionary type of the donor star. For example, Main Sequence
(MS) stars do not have a clear core-envelope division, as the helium core is still in the
process of being developed. Donors on the HG behave similarly,
although it remains unclear whether such a division appears on the HG or only at
later stages, like Core Helium Burning (P. Eggleton, private
communication). In our previous work we investigated two possibilities of the CE
outcome associated with the type of the donor star:
an automatic (premature) merger if the donor is a HG star, regardless of the energy balance
(labeled as ``Submodel B") or allow the CE energy balance to unfold (``Submodel A'').
The case in which we allow for potential survival of systems with HG donors
results in very high Advanced LIGO/VIRGO detection rates \citep[$\sim 8000$
yr$^{-1}$;][]{bhkick}, exceeding even the empirically estimated rates based on
IC10 X-1 and NGC 300 X-1 \citep[$\sim 2000$ yr$^{-1}$;][]{bulik2011,cygx3}.
Therefore, we only show one model with this generous assumption on CE physics,
which leads to the most optimistic of our predictions. This model
(Optimistic CE) will be tested (and probably quickly eliminated) by the upper limits from the
Advanced LIGO/VIRGO engineering runs. For all the other models, including our reference model,
we make the conservative assumption that none of the HG donor CE phases leads to the
formation of DCOs.
Observations suggest \citep{hobbs} that neutron stars formed in supernovae receive natal kicks,
with velocities drawn from a Maxwellian distribution with $\sigma=265\,\mbox{km}/\mbox{s}$. We employ these
findings in our simulations, and extend them so that black hole natal kicks match this distribution
as well. However, it is possible that some matter ejected during the explosion
will not reach the escape velocity, and will thus fall back on the remnant object, potentially stalling
the initial kick. To account for this, we modify the Maxwellian kicks by the amount of matter falling
back on the newly formed compact object:
\begin{equation} \label{vkick}
V_{\rm k}=V_{\rm max}(1-f_{\rm fb}),
\end{equation}
where $V_{\rm k}$ is the final magnitude of the natal kick, $V_{\rm max}$ is
the velocity drawn from a Maxwellian kick distribution, and $f_{\rm fb}$
is the fallback factor describing the fraction of the ejecta returning to the object.
The values of $f_{\rm fb}$ range between 0 and 1, with 0 indicating no fallback/full kick
and 1 representing total fallback/no kick \citep[a ``silent supernova'',
e.g.][]{mirabel}. We label this the ``constant velocity'' formalism. An alternative approach
is the ``constant momentum'' one, where the kick velocity is inversely proportional to the mass
of the remnant object. In general, constant velocity provides larger natal kicks on average than
constant momentum, resulting in more frequent disruptions of binaries, especially for systems with BHs.
Therefore, we choose the ``constant velocity'' formalism over the ``constant momentum'' as it provides
a more conservative limit on the number and therefore merger rates of systems containing BHs.
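As an illustration, a natal kick in the adopted formalism can be drawn as follows (a sketch of Eq.~\ref{vkick}, with the Maxwellian realized as three Gaussian components):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)

def natal_kick(f_fb, sigma=265.0):
    # Maxwellian magnitude from three Gaussian components (km/s),
    # reduced by the fallback fraction f_fb of Eq. (vkick);
    # f_fb = 1 means total fallback and no kick.
    v = rng.normal(0.0, sigma, 3)
    v_max = np.linalg.norm(v)
    return v_max * (1.0 - f_fb)
\end{verbatim}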
This model also utilizes the ``Rapid'' convection driven, neutrino enhanced supernova
engine \citep{chrisija}. It allows for a successful explosion without the need for the
artificial injection of energy into the exploding star. In this scenario the explosion
starts from the Rayleigh-Taylor instability and occurs within the first $0.1$--$0.2\,\mbox{s}$.
For low mass stars ($M_{\rm zams} \lesssim 25 {\rm M}_{\odot}$) the result is a very strong (high
velocity kick) supernova, which generates a NS. For higher mass stars a BH is formed
through a direct collapse (failed supernova). This engine, incorporated into binary evolution,
successfully reproduces the mass gap \citep{massgap} observed in Galactic X-ray binaries
\citep{mg1,mg2}.
The list of major physical parameters used in this and subsequent models is given in
Table \ref{list}. More details on the physics described above can be found in \cite{dominik}.
\subsubsection{Variations on the standard model} \label{variations}
The uncertainties in the CE and the SN engine argue for exploring a range of
input physics beyond that in the standard model described in the previous subsection.
In this subsection we present three additional models which we have found to encapsulate the
full range of possible binary evolutions. All subsequent models use the same input physics as
the reference model, except for the parameters described below.
\textit{Optimistic Common Envelope}. In this model we allow HG stars to be CE donors
(see Section \ref{smodel}). When the donor initiates the CE phase the energy
balance determines the outcome. This model is identical to the ``standard
model -- submodel A'' from our previous paper in this series \citep{dominik}.
\textit{Delayed SN}. This model utilizes the ``Delayed'' supernova engine instead of the
Rapid one. The Delayed engine is also a convection driven, neutrino enhanced engine, but is
sourced from the standing accretion shock instability (SASI), and can produce an
explosion as late as $1\,\mbox{s}$ after bounce. The Delayed engine produces a continuous
mass spectrum of compact objects, from NSs, through light BHs, to massive BHs
(see \cite{massgap}). This model is identical to the ``Variation 10 -- submodel B''
model from our previous paper in this series \citep{dominik}.
\textit{High BH kicks}. In this model the BHs receive full natal kicks.
The newly formed BH acquires a velocity drawn from a Maxwellian distribution
(see Section \ref{smodel}) regardless of the fallback factor $f_{\rm fb}$ (see Eq.~\ref{vkick}).
This model is identical to the ``Variation 8 -- submodel B'' model in our previous paper
in this series \citep{dominik}.
\section{Cosmology calculations} \label{cosmo}
We utilize a flat cosmology with $H_0=70\,\mbox{km}\,\mbox{s}^{-1}\,\mbox{Mpc}^{-1}$, $\Omega_M=0.3$,
$\Omega_{\Lambda}=0.7$, and $\Omega_k=0.0$. The relationship between redshift and time
is given by:
\begin{equation}
t(z)=t_H\int^{\infty}_{z} \frac{dz'}{(1+z')E(z')},
\end{equation}
where $t_H=1/H_0=14$ Gyr is the Hubble time \citep[e.g.][]{hogg} and
$E(z)=\sqrt{\Omega_M(1+z)^3+\Omega_k(1+z)^2+\Omega_{\Lambda}}$.
The comoving volume element $dV$ is given by:
\begin{equation} \label{dvdz}
dV(z)=\frac{c}{H_0}\frac{D_c^2}{E(z)}d\Omega dz,
\end{equation}
where $c$ is the speed of light in vacuum, $d\Omega$ is the solid angle, and $D_c$ is
the comoving distance given by:
\begin{equation}
D_c(z)=\frac{c}{H_0} \int^z_0 \frac{dz'}{E(z')}.
\end{equation}
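For reference, these relations can be evaluated numerically as in the sketch below; with the adopted parameters, {\tt age(0)} reproduces the present age of $13.47$ Gyr used throughout.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

T_H = 13.97    # Gyr, 1/H0 for H0 = 70 km/s/Mpc
D_H = 4282.7   # Mpc, c/H0

def E(z, Om=0.3, Ok=0.0, OL=0.7):
    return np.sqrt(Om * (1 + z)**3 + Ok * (1 + z)**2 + OL)

def age(z):
    # Age of the Universe at redshift z, in Gyr
    return T_H * quad(lambda zp: 1.0 / ((1 + zp) * E(zp)),
                      z, np.inf)[0]

def D_c(z):
    # Comoving distance in Mpc
    return D_H * quad(lambda zp: 1.0 / E(zp), 0.0, z)[0]

print(age(0.0))   # ~ 13.47 Gyr, the present age used in the text
\end{verbatim}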
There are a series of steps to calculate the rates of events, as we now
describe.
We employ time as our reference coordinate and start by creating
time bins across the entire history of the Universe, each bin $100$
Myr wide, from $0.13$ Gyr (birth) to $13.47$ Gyr (today). At the center of each bin
we evaluate the star formation rate according to Eq.~\ref{sfr}. For the redshift value
corresponding to the center of a given time bin we generate a Monte Carlo sample of $10^4$
galaxies (a number sufficient to produce a smooth distribution) with masses drawn from the
distribution given in Eq.~\ref{galmass}. For each time bin we obtain a total mass
of galaxies $M_{\rm gal,tot}$.
For each galaxy we then estimate its average metallicity using
Eq.~\ref{meteq}. We assume that {\em all} stars within a given galaxy have identical
metallicity as obtained from Eq.~\ref{meteq}. Since we draw a large number of galaxies
in each time bin, and each galaxy has its own mass, and therefore is
described by its own average metallicity, we end up with a distribution of metallicity
in each time bin. This also yields
a total mass of galaxies with a specific metallicity
($M_{\rm gal,i}$) within each time bin. We then define the fraction of the total galaxy mass
capable of forming a stellar population with a specific metallicity by
\begin{equation}
F_i=\frac{M_{\rm gal,i}}{M_{\rm gal,tot}}.
\end{equation}
However, because we use a finite number of metallicity points in our simulations (see Section \ref{binary})
we need to extrapolate our results in order to account for the continuous spectrum given by Eq.~\ref{meteq}.
Therefore, the metallicity points are extended into bins delineated by the average value of
neighbouring points. For example, given the set of points $Z=0.01,0.015,0.02$, the value $Z=0.015$
now corresponds to a bin that extends from $0.0125$ to $0.0175$. The border points $Z=0.0001$ and $Z=0.03$
extend to lower and higher values, respectively, to cover the rest of the spectrum.
Population synthesis provides us with a representative sample of DCOs.
The formation of a single DCO within a time
bin corresponds to a fraction, $f_{\rm fr}$, of the total formation rate:
\begin{equation}
f_{\rm fr}(t)=\frac{F_i}{M_{\rm sim}}SFR(t),
\end{equation}
where $M_{\rm sim}$ is the total mass in our population synthesis calculations (see Section
\ref{binary}). Repeating this calculation of $f_{\rm fr}$ for each metallicity yields a total
formation rate, $f_{\rm fr,tot}$, within a given time bin.
We now need to know the delay time until merger, $t_{\rm del}$, for each
DCO formed. The delay time is defined
as the interval between the formation of the progenitors of a DCO and the coalescence of two
compact objects. For each DCO originating from a specific metallicity we randomly choose a birth
point, $t_0$ (ZAMS), within each time bin. We then propagate the system forward in time
toward its merger using the delay time:
\begin{equation}
t_{\rm mer}=t_0+t_{\rm del}.
\end{equation}
As long as we consider DCOs with $t_{\rm mer}<t_H$, and as long as the width of the time bins throughout the
time line is constant, the formation rate ($f_{\rm fr}$) of a DCO from a given bin translates into
a merger rate in a later bin, propagated forward in time by $t_{\rm del}$. Repeating
the above calculations for every time bin yields a total density of
rest frame merger events, $n_{\rm rest}(t)$, in units of
Gpc$^{-3}\,\mbox{yr}^{-1}$. In other words
\begin{equation}
n_{\rm rest}(t)=\sum^N_{i=1} f_{{\rm fr},i}(t-t_{{\rm del},i}),
\end{equation}
where the sum runs over the $N$ representative DCOs.
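A schematic Python version of this bookkeeping is given below; {\tt f\_fr} is a hypothetical list of (time-bin index, rate) formation contributions with matching delay times {\tt t\_del}, standing in for the population synthesis output.
\begin{verbatim}
import numpy as np

DT = 0.1                               # bin width: 100 Myr
edges = np.arange(0.13, 13.47, DT)     # bin edges in Gyr (approx.)
rng = np.random.default_rng(4)

def merger_rate_density(f_fr, t_del):
    # Push each formation contribution forward by its delay time.
    n_rest = np.zeros(len(edges) - 1)
    for (i, rate), td in zip(f_fr, t_del):
        t0 = rng.uniform(edges[i], edges[i + 1])  # random birth
        j = np.searchsorted(edges, t0 + td) - 1   # merger bin
        if 0 <= j < len(n_rest):
            n_rest[j] += rate                     # Gpc^-3 yr^-1
    return n_rest
\end{verbatim}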
\section{Results} \label{results}
We now provide results from our four models, presenting the intrinsic merger
rate densities and the observer frame merger rates, given by
\begin{equation}
n_{\rm obs}(<z)=4\pi \int^{z}_0 \frac{n_{\rm rest}(z')}{1+z'}\frac{dV}{dz'}dz' \quad [\rm{yr}^{-1}],
\end{equation}
with $dV/{dz}$ given by Eq.~\ref{dvdz}, integrated over the solid angle $d\Omega$
(hence the factor of $4\pi$).
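Given an interpolated rest-frame density $n_{\rm rest}(z)$ and the cosmology helpers sketched in Section~\ref{cosmo}, the observer-frame rate can be evaluated as follows (a sketch; distances converted from Mpc to Gpc):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def n_obs(z_max, n_rest, D_c, E, D_H=4282.7):
    # n_rest(z) in Gpc^-3 yr^-1; D_c(z) in Mpc; D_H = c/H0 in Mpc.
    def integrand(z):
        dVdz = (4.0 * np.pi * (D_H / 1e3)
                * (D_c(z) / 1e3) ** 2 / E(z))    # Gpc^3 per unit z
        return n_rest(z) / (1.0 + z) * dVdz      # yr^-1 per unit z
    return quad(integrand, 0.0, z_max)[0]
\end{verbatim}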
In the case of the standard/reference model (details in Section \ref{smodel}) we explain the
general redshift behavior of all three types of DCOs (NS-NS, BH-NS, and BH-BH) and compare the
reference model for two scenarios of metallicity evolution (\textit{high--end}
and \textit{low--end}). For our three variations (Optimistic CE, Delayed SN, and High BH
kicks, Section \ref{variations}) we investigate deviations from the
reference model, again incorporating our different metallicity evolution scenarios.
\subsection{Standard Model} \label{smodelres}
{\bf NS-NS}. As shown in Fig.~\ref{rest4high}, the intrinsic merger rate density of double
neutron star systems peaks at redshift
$z\approx 1$ ($\sim 200$ yr$^{-1}$Gpc$^{-3}$). As a general rule, the merger rates
of all types of DCO are directly related to the star formation rate. However, for
a given SFR value the formation efficiency of different DCO may vary. In other words,
the proportions of NS-NS, BH-NS, and BH-BH systems may differ beyond
the regime set by the IMF \citep[e.g.][]{dominik}. For example, NS-NS systems are on
average efficiently created in high metallicity environments (see Fig.~\ref{metform}).
When combined with the peak of the SFR at $z\sim 2$ (average \textit{high--end} metallicity is
$\sim 0.4{\rm ~Z}_{\odot}$, see Fig.~\ref{zmet}), high metallicity NS-NS formation efficiency is enhanced,
thus creating the profile shown on Fig.~\ref{rest4high}. What is characteristic for this profile is the
''hump'' that arises at $z\sim 1.6$, approaching from high redshifts.
As can be seen in Fig.~\ref{metform}, this shape is dominated by mergers originating from $0.75{\rm ~Z}_{\odot}$
environments. The reason for this increase in merger rate densities, when transiting from $0.5{\rm ~Z}_{\odot}$
environments (higher redshifts) to higher metallicity ones (lower redshifts), is a consequence of the applied
CE approach. Within the framework of the \textit{Nanjing} CE treatment adapted for the {\tt StarTrack}
code, the binding energy of the CE decreases at the $0.5$--$0.75{\rm ~Z}_{\odot}$ boundary, allowing for the survival
of a larger number of NS-NS progenitors.
By comparison, for the \textit{low--end} metallicity profile
the NS-NS systems dominate the rates only up to $z\approx 0.5$
(Fig.~\ref{rest4low}). This is a consequence of the adopted metallicity evolution scenario.
Specifically galaxies of a given metallicity are shifted to
lower redshift when compared with the \textit{high--end}
scenario, causing a shift of the NS-NS systems also to lower redshifts.
As shown in Fig.~\ref{obs4high}, in the observer frame the systems dominate the merger rates up to
redshift $z\approx 2.4$. However, decreased metallicity for the \textit{low--end} case
shifts this point to $z\approx 0.6$ (Fig.~\ref{obs4low}).
{\bf BH-NS}. For the \textit{high--end} metallicity evolution, the rest frame
merger rate density for BH-NS systems shown in Fig.~\ref{rest4high} peaks at a
value of $\sim 50\,\mbox{Gpc}^{-3}\,\mbox{yr}^{-1}$ at redshift $z\sim 3$. However,
the merger rate drops towards low ($z\sim 0$) and high ($z\sim 6$)
redshifts. This behavior stems from the properties of the progenitor masses.
For metallicities $\sim {\rm ~Z}_{\odot}$ the bulk of the progenitor masses
are in the range $45$--$60\,{\rm M}_{\odot}$ for the primary component, and
$22$--$32\,{\rm M}_{\odot}$ for the secondary. Pairs of progenitors outside these ranges
are unlikely. The upper mass limit marks the boundary between BH-NS and BH-BH systems; crossing it
results in the formation of the latter systems instead of the former. The lower mass
limit is set by a similar phenomenon, only this time through BH-NS/NS-NS formation. Progenitors of these
systems for metallicities a factor of $\sim 10$ lower than ${\rm ~Z}_{\odot}$ must have lower masses on average, primarily
because of the decreased stellar wind mass losses. Otherwise the binary would retain enough mass to
form a BH-BH system or go through a terminal CE event. Therefore, the mass ranges for BH-NS
progenitors for $Z\sim 10\%{\rm ~Z}_{\odot}$ are: $20$--$50\,{\rm M}_{\odot}$ for the primary and $12$--$25\,{\rm M}_{\odot}$ for the
secondary. Given that in this mass range the IMF scales as $M^{-2.7}$ (where
$M$ is the mass of the progenitor) there are more BH-NS progenitors available at moderately low
metallicity than at higher values. This, in turn, translates into increased merger rates arising from these
environments. Decreasing the metallicity to $\sim 1\% {\rm ~Z}_{\odot}$ decreases the masses of the
progenitors even further due to the same wind effects. However, in this case the BH progenitors are
closely approaching their lower mass limit ($\sim 20{\rm M}_{\odot}$), which leaves a narrow mass range:
$20$--$25{\rm M}_{\odot}$ for the primary and $18$--$22{\rm M}_{\odot}$ for the
secondary. There are fewer progenitors in these mass
ranges when compared to the previous case, and therefore we find a lower
merger rate. Overall, the peak in the BH-NS merger rates originates from systems
created at moderate metallicities (see Fig.~\ref{metform}).
{\bf BH-BH}. For these systems the intrinsic merger rate has a peak-plateau at a
rate of $\sim 300$--$400\,\mbox{Gpc}^{-3}\,\mbox{yr}^{-1}$ at $z\sim 4$--$8$,
for the \textit{high--end} case.
The low metallicity galaxies abundant at high redshifts are efficient black hole factories
(see Fig.~\ref{egdishigh} and \ref{egdislow}). This also means that adopting the \textit{low--end}
metallicity scenario will allow for more BH-BH systems to form at lower redshifts, when compared to the
\textit{high--end}. Additionally, environments with low amounts of metals favor massive BHs. For
example, the most massive BH-BH system acquired in this model consisted of a $62\,{\rm M}_{\odot}$ and a $74\,{\rm M}_{\odot}$
BH pair. These systems originate from the extremely low metallicity environments ($Z=0.0001$). We find that
such systems merge up until redshift $z\sim 3$ and $z\sim 2$ for the \textit{high--end} and \textit{low--end}
metallicity evolution models, respectively.
However, due to statistical uncertainties these redshift values may be even lower. These massive systems
originate through the standard BH-BH formation channel. As an instructive example, we detail
the formation scenario of an $8.3$--$5.8\,{\rm M}_{\odot}$ BH-BH system, for $Z=0.005$ -- a typical system for the average
metallicity acquired in our study: {\bf t=0 Myr.} The components start with masses $32\,{\rm M}_{\odot}$ and
$25\,{\rm M}_{\odot}$ for the primary and secondary, respectively and an orbital separation $a=995\,{\rm ~R}_{\odot}$.
{\bf t=6.7 Myr.} The primary, after becoming a Hertzsprung gap (HG) star, expands and initiates a mass transfer through
Roche lobe overflow (RLOF). The transfer continues until the primary loses almost all of its hydrogen
envelope and becomes a Wolf-Rayet star with $10\,{\rm M}_{\odot}$ (the secondary component has $35\,{\rm M}_{\odot}$).
The orbital separation prior to RLOF was $a=1000\,{\rm ~R}_{\odot}$ and $a=1600\,{\rm ~R}_{\odot}$ after.
{\bf t=7.0 Myr.} The primary explodes as a supernova, forming a $7.8\,{\rm M}_{\odot}$ BH. The orbital separation
after the explosion was $a=1760\,{\rm ~R}_{\odot}$. {\bf t=8.7 Myr.} The secondary ($34\,{\rm M}_{\odot}$) initiates a CE phase
and emerges from it as a Wolf-Rayet star with $11\,{\rm M}_{\odot}$ (the primary gained
$\sim 0.5\,{\rm M}_{\odot}$). The orbital separation prior to the CE was $a=1780\,{\rm ~R}_{\odot}$ and $a=2.6\,{\rm ~R}_{\odot}$ after.
{\bf t=9.4 Myr.} The secondary undergoes a SN explosion and becomes a $5.8\,{\rm M}_{\odot}$ BH. The orbital
separation prior to the explosion was $a=2.8\,{\rm ~R}_{\odot}$ and $a=3\,{\rm ~R}_{\odot}$ after.
{\bf t=26 Myr.} The coalescence of a $8.3\,{\rm M}_{\odot}$--$5.8\,{\rm M}_{\odot}$ BH-BH system occurs. This example is
illustrated by a diagram in Fig.~\ref{diagram}.
On a side note, the formation of the most massive BH-BH systems on close orbits may be questionable.
The progenitors of the aforementioned $62\,{\rm M}_{\odot}$--$74\,{\rm M}_{\odot}$ BH-BH system are massive stars
($140\,{\rm M}_{\odot}$--$150\,{\rm M}_{\odot}$ at ZAMS). A recent theoretical study by \cite{yusof2013} suggests that such objects
($150\,{\rm M}_{\odot}$--$500\,{\rm M}_{\odot}$) will not increase in size significantly during their evolution. Therefore,
it is more likely for such binaries to bypass the CE phase and avoid the reduction of orbital
separation. This in turn will prevent the resulting BH-BH system from merging within Hubble time.
In the observer frame, BH-BH systems begin to dominate the merger rates at $z\sim 2$. For the
\textit{low--end} case this happens closer to $z\sim 1$.
\subsection{Optimistic CE} \label{relaxed}
In this model we relax one of the conditions on CE survivability. Specifically, Hertzsprung
gap donors are now allowed to undergo full energy balance calculations. In the standard model, CEs
with HG donors resulted in an immediate merger, terminating further binary evolution. This has
been shown to have a significant impact on the number of DCOs, altering the merger rates
by orders of magnitude \citep{rarity,nasza}. When HG donors are allowed to survive the CE,
the number of resulting DCOs naturally increases, as do their merger rates.
{\bf NS-NS}. When compared with the standard model, \textit{high--end} case, the intrinsic merger
rate of NS-NS systems peaks at higher redshift ($z\sim 3$) and at higher values
($\sim 1000$ Gpc$^{-3}$ yr$^{-1}$).
The shift in the peak towards higher redshifts is associated with the systems having shorter
delay times on average, which allows them to merge more quickly after formation. As expected, the decrease
of average delay times for NS-NS systems is caused by the new CE condition. In the standard model
the only surviving binaries were those that did not initiate the CE while the potential donor was an
HG star. In order to prevent a rapidly expanding HG star from overfilling its Roche lobe these
binaries had to have a significant initial separation, which resulted in relatively large delay
times. In this model the CE phase with an HG donor is allowed, so initial separation is no longer
such a crucial issue. Therefore, binaries with smaller initial
separations are able (if they have sufficient orbital energy) to survive and form NS-NS systems.
This results in shorter delay times. In the
\textit{low--end} case the same mechanism causes the peak to shift towards $z\sim 2$.
In the observer frame the merger rate of NS-NS systems is a few times higher
than in the standard model for both \textit{high--end} and \textit{low--end} metallicity evolutions.
In the former case NS-NS systems dominate the merger rates up to $z\lesssim0.5$.
In the latter case they are always sub-dominant compared to BH-BH systems.
{\bf BH-NS}. The binaries forming these systems usually undergo two CE events in their lifetime,
due to their relatively high initial mass ratios \citep[for details see][]{dominik}. The two CEs
reduce the initial separation, which makes the relaxed CE condition much less relevant than for the
NS-NS case mentioned above. The result is that there are no significant changes
in the intrinsic merger rate density for BH-NS systems. As in the standard model, the mergers of BH-NS
systems are the rarest of all types of DCOs. This is true for both of the metallicity scenarios.
{\bf BH-BH}. These systems do not experience two CE events, unlike the BH-NS
systems, and therefore they do not reduce their initial separations as efficiently.
The peak of the intrinsic merger rate density shifts slightly towards
lower redshift ($z\sim 4$, \textit{high--end}) when contrasted with the
reference model (see Fig.~\ref{rest4high}). This is because of the
effect of metallicity on the outcome of the CE phase.
The larger the fraction of metals in a star, the bigger its radius
\citep[e.g.][]{hurley}. This effect is particularly strong during the HG phase. Therefore, high
metallicity BH progenitors are more likely to initiate CE on the HG.
In the standard model this is not allowed and such systems are removed from the population. However, here
we relax this condition, and as a consequence we add more BH-BH systems originating from higher
metallicities (see Figs.~\ref{egdishigh} and~\ref{egdislow}). For the \textit{low--end} case
this results in a peak-plateau between redshifts $3<z<4$. This is because of the higher
metallicities appearing at lower redshifts when compared with the \textit{high--end} case.
In the observer frame the BH-BH systems start to dominate the merger rates at $z\approx 0.5$ in
the \textit{high--end} case. For the \textit{low--end} case these DCOs dominate
the merger rates at all redshifts.
\subsection{Delayed SN}
In this model we change the supernova explosion engine with respect to the standard model.
The standard model uses the Rapid engine, which yields a gap between $2$ and $5\,{\rm M}_{\odot}$ in the
masses of the resulting compact objects. Here we utilize the Delayed scenario (for details
see Section \ref{variations}). The main feature of this engine is that it produces a continuous
mass spectrum of remnant objects \citep{massgap}. As suggested by \cite{kreidberg}, the presence
of the mass gap feature may be a result of systematic errors arising from the misinterpretation of
BH binary light curves. The resulting errors in the estimated inclination of the
binary may shift low-mass BHs out of the gap. The distinction
between the two engines is clearly visible in Fig.~\ref{egdishigh} and Fig.~\ref{egdislow}.
The minimal total mass for this model is $\sim 5{\rm M}_{\odot}$. Such a system is composed of two BHs of
$2.5{\rm M}_{\odot}$ each ($2.5{\rm M}_{\odot}$ being the boundary between the maximum NS mass and the minimum BH mass). For other
models the minimal BH mass is $\sim 5{\rm M}_{\odot}$, thus yielding a minimal total mass $\sim 10{\rm M}_{\odot}$.
However, the supernova engine effects do not play a significant role on the merger rates of any
type of DCOs.
\subsection{High BH kicks}
Here, we employ full natal kicks (as measured for NSs) just on BHs (see Section
\ref{variations}). This is performed regardless of the amount of fallback (see Eq.~\ref{vkick}).
The kicks for NS-NS systems remain unchanged, as does their population with respect to the standard model.
In this variation the velocity of the natal kick acquired upon BH formation will disrupt many
binaries that would otherwise (in the standard model) form coalescing BH-NS or
BH-BH systems, as is clearly visible in Fig.~\ref{rest4high}. As a consequence, the NS-NS systems will
dominate the merger rates in the observer frame.
In addition, the full natal kick will affect the most massive BHs. In the standard model,
stars with masses $M_{\rm zams} > 40 {\rm M}_{\odot}$ collapse directly into a black hole without
a SN explosion; with no asymmetric ejecta, they do not receive a kick ($f_{\rm fb}=1$, Eq.~\ref{vkick}).
However, in this model these stars receive a maximum velocity kick, and thus often disrupt the system.
As a consequence the probability of the formation and eventual merger of the most massive BH-BH systems
is lowered significantly, which can be seen in the bottom panel of Figs.~\ref{egdishigh} and~\ref{egdislow}.
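For reference, the difference between the two prescriptions can be sketched as follows (a minimal illustration assuming, as in the {\tt StarTrack} convention behind Eq.~\ref{vkick}, a Maxwellian kick with a one-dimensional dispersion of $265$ km s$^{-1}$ scaled by $1-f_{\rm fb}$; this is not the production code):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
SIGMA_KICK = 265.0   # km/s per component (Hobbs et al. 2005)

def natal_kick(f_fb, full_kick=False):
    """Natal kick magnitude in km/s at compact object formation.

    f_fb      : fallback fraction (1 = direct collapse)
    full_kick : True for the High BH kicks model, which ignores
                the fallback scaling
    """
    v = np.linalg.norm(rng.normal(0.0, SIGMA_KICK, size=3))
    return v if full_kick else v * (1.0 - f_fb)
\end{verbatim}
In the standard model a direct-collapse BH ({\tt f\_fb = 1}) thus receives no kick, while in this variation it receives the full Maxwellian draw.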
\section{Summary \& Discussion}
We have performed a series of cosmological calculations for four populations of DCOs.
Each population was generated with different input physics for describing binary
evolution and compact object formation. The first model (standard) utilizes the
current state-of-the-art description of physical mechanisms governing
DCOs. In particular, it uses a \textit{Rapid} explosion engine, which
accurately reproduces the observed mass distribution of compact objects in X-ray binaries (see Section
\ref{smodel} and references therein). Another major improvement in the model is the realistic
treatment of the common envelope parameter $\lambda$, which now depends on the evolutionary
stage, radius, mass, metallicity, etc. of the donor star. The three subsequent models explore
alternative outcomes of binary evolution, and the resulting properties of remnants.
The mechanisms investigated in these models are: the sensitivity of the CE outcome to the
type of donor, the Delayed SN explosion mechanism, and the natal kick survivability
of DCOs containing BHs (see Section \ref{variations}). Additionally, for each model we
have created a grid of 11 metallicities to account for the chemical evolution throughout
the lifetime of the Universe. We present both the intrinsic and the
observer frame merger rates as a function of redshift.
The variation in the rates of our different binary systems as a function of redshift
depends upon metallicity, as well as common envelope and supernova physics.
In this paper we have studied how these impact the rates for different types of
DCOs. Here we review our main findings.
We find that NS-NS systems merge most efficiently at low redshifts ($z\lesssim 1$;
see Figs.~\ref{rest4high} and~\ref{rest4low}), where metallicities become relatively high
($\sim 0.5{\rm ~Z}_{\odot}$). However, in the case of the Optimistic CE model the merger rate densities
peak at higher redshifts ($z\sim 2$--$3$). This results from relaxing the condition
for the termination of binaries initiating a CE with a Hertzsprung gap donor.
This optimistic CE treatment enriches the merging population with systems with short
merger times. As a result the overall number of NS-NS systems increases and, due to the shorter
merger times, these systems coalesce earlier (see Sections \ref{variations} and \ref{relaxed}).
BH-NS systems merge most infrequently in all but one of the models. The exception
is the High BH Kicks model, where full natal kicks are applied to BH
remnants. The kicks eliminate binaries containing BHs
from the populations by disrupting them. However, this does not affect BH-NS systems as strongly
as BH-BH systems, since the former contain only one BH. In general, the low merger rates of BH-NS
systems arise from their unique mixed nature. Forming two different compact objects
in a single binary generally requires the masses of the progenitors to be significantly separated.
This plays an important role at first contact between the components, since if
the mass ratio of the binary is larger than $2$--$3$ the otherwise stable mass transfer
through Roche lobe overflow may become a CE event. These episodes often cause
a premature merger and eliminate further binary evolution. Another important
factor keeping the number of BH-NS systems small is that the progenitors do not
have a large range of masses to draw from. The upper
limit is set by the binary containing enough mass to form a BH-BH system
instead, while the lower limit is set by not having enough mass and instead
forming a NS-NS system.
For BH-BH systems, the highest merging efficiency occurs earlier in the
Universe when compared with other DCOs ($z \sim 4$--$6$). This arises from the fact that these
systems form most efficiently at the lowest metallicities. For both
scenarios of metallicity evolution, the Optimistic CE model blurs this trend. In this case the population
is enriched by BHs, which originated from high metallicity environments (see Section
\ref{relaxed}). Another interesting case is the model with High BH Kicks, where BH-BH systems are
efficiently disrupted by natal kicks throughout the lifetime of the Universe. This is
clearly visible in the bottom panel of Figs.~\ref{rest4high} and~\ref{rest4low}. The
kicks affect high mass systems the most. As a consequence of the full natal kicks, the formation
and merger rates for BH-BH systems in low metallicity galaxies (higher redshifts) are reduced
significantly, and this effect is even more dramatic for high metallicity environments (lower redshifts;
see Figs.~\ref{egdishigh} and~\ref{egdislow}). The High BH kicks model produces a difference between
the merger rates in the observer frame of BH-BH and NS-NS systems that is
roughly 100 times larger than within the standard model. This may be a promising avenue for
determining the magnitude of the natal kicks imparted to BHs during their formation.
Since NS-NS systems are the only type of DCO observed so far, we can use observed rates to put constraints on our
models. The NS-NS merger rates in each of our models, at $z\sim 0$, fit within the
observational limits for NS-NS systems in the Milky Way:
$34.8$--$2204\,\mbox{yr}^{-1}\,\mbox{Gpc}^{-3}$~\citep{kimkal}, using the galaxy density
$\rho_{\rm gal}=0.0116\,\mbox{Mpc}^{-3}$.
\cite{petrillo} used the observed rate of short GRBs to calculate the merger rates of
NS-NS and BH-NS systems, since these systems are thought to be the progenitors
of short GRBs. The resulting merger rates of DCOs (NS-NS + BH-NS) in
the local Universe range between $500$ and
$1500\,\mbox{Gpc}^{-3}\,\mbox{yr}^{-1}$. At $z\sim 0$ our models find a NS-NS
merger rate of $\sim 100\,\mbox{Gpc}^{-3}\,\mbox{yr}^{-1}$, with a BH-NS rate
lower by a factor of $\sim10$. However, the authors of the
aforementioned study state that their results are sensitive primarily to the poorly constrained beaming
angle of the collimated emission from short GRBs. They used a beaming
angle of $\sim 20$ deg, while to match our rate the beaming
angle would have to be $\sim 50$ deg (see Fig.~3 therein).
In our previous study \citep{dominik}, we found one model that would reproduce the merger rates of
NS-NS + BH-NS from \cite{petrillo} ($\sim 900\,\mbox{Gpc}^{-3}\,\mbox{yr}^{-1}$ at
${\rm ~Z}_{\odot}$). It is the model described by fully conservative mass transfer episodes and optimistic
CE description (labeled ``Variation 12 -- submodel A'').
Additional constraints may be provided by observing the potential electromagnetic
signatures, other than GRBs,
of DCO mergers. One example is the optical/radio afterglow of the GRB, which can
be detected even if the GRB itself is not seen (an ``orphan afterglow''). Another
possibility is a ``kilonova'', resulting from the ejection of matter from a
neutron star. Since this matter may be enriched in heavy elements through
the r--process, the resulting radioactive decay may generate observable light,
thereby providing a promising electromagnetic counterpart to the gravitational wave emission
\citep{metzger,piran,barnes}.
Finally, it will be interesting to investigate how statistical ensembles of GW observations could constrain
properties of compact binary populations and of their formation scenarios
\citep[see e.g.][]{mandel09,oshaughnessy12,gerosa}.
\acknowledgements
We thank Alexander Heger for a helpful discussion on pair--instability supernovae.
We would also like to thank the N. Copernicus Astronomical Centre in Warsaw, Poland, and
the University Of Texas, Brownsville, TX, for providing computational
resources. DEH acknowledges support from National Science Foundation CAREER grant PHY-1151836.
KB and MD acknowledge support from MSHE grant N203 404939 and N203 511238, NASA Grant NNX09AV06A to
the UTB, Polish Science Foundation Master 2013 Subsidy and National Science Center DEC-2011/01/N/ST9/00383.
TB was supported by the DPN/N176/VIRGO/2009 grant. EB acknowledges support from National Science Foundation
CAREER Grant No. PHY-1055103. Work by CLF was done under the auspices of the National Nuclear Security
Administration of the U.S. Department of Energy under contract No. DE-AC52-06NA25396.
\clearpage
\bibliographystyle{aa}
\section{Introduction and Related Work} \label{sec:1}
\input{sections/intro}
\section{Method} \label{sec:2}
\input{sections/method}
\section{Evaluation} \label{sec:3}
\input{sections/eval}
\section{Conclusion} \label{sec:4}
\input{sections/disc}
\subsection{Localization and Segmentation}
We evaluated our approach on the CMU Kitchen Occlusion Dataset from~\citet{cmu_occlusion}. This dataset was chosen because (1) it provides extensive labelled training data in the form of images with bounding boxes and object masks, and (2) the dataset is challenging and offers the opportunity to compare against an algorithm designed specifically to handle occlusion. For the localization task we generated false positives per image (FPPI) vs. recall curves, while for the segmentation task we measured the mean segmentation error against ground truth as defined by the Pascal VOC segmentation challenge in~\citet{pas_voc}. $C = 25$ (see Eq.~\ref{eq:3.2}) was chosen by 5-fold cross-validation. While both results are presented for the single pose part of the dataset, multiple poses are easily handled in our algorithm as different components of the feature vector. Figure~\ref{fig:3} shows FPPI vs. recall curves compared with those reported by the rLINE2d+OCLP algorithm of~\citet{cmu_occlusion} and those generated from our implementation of~\citet{segaware}. Table~\ref{table:1} presents segmentation errors compared with~\citet{segaware};~\citet{cmu_occlusion} do not report a segmentation of the object.
\begin{figure}[!t]
\centering
\subfigure{\includegraphics[trim=0 5 0 7,clip,width=0.24\textwidth]{bakingpan_evals.pdf}}
\subfigure{\includegraphics[trim=0 5 0 7,clip,width=0.24\textwidth]{colander_evals.pdf}}
\subfigure{\includegraphics[trim=0 5 0 7,clip,width=0.24\textwidth]{thermos_evals.pdf}}
\subfigure{\includegraphics[trim=0 5 0 7,clip,width=0.24\textwidth]{pitcher_evals.pdf}}
\subfigure{\includegraphics[trim=0 5 0 7,clip,width=0.24\textwidth]{saucepan_evals.pdf}}
\subfigure{\includegraphics[trim=0 5 0 7,clip,width=0.24\textwidth]{cup_evals.pdf}}
\subfigure{\includegraphics[trim=0 5 0 7,clip,width=0.24\textwidth]{scissors_evals.pdf}}
\subfigure{\includegraphics[trim=0 5 0 7,clip,width=0.24\textwidth]{shaker_evals.pdf}}
\caption{Object localization results on the CMU Kitchen Occlusion dataset}
\label{fig:3}
\end{figure}
\begin{table}[h]
\renewcommand{\arraystretch}{1.3}
\parbox{0.5\linewidth} {
\caption{Mean object segmentation error}
\label{table:1}
\centering
\begin{tabular}{*3c}
\hline
\textbf{Object} & \textbf{\citet{segaware}} & \textbf{SD-HOP}\\ \hline
Bakingpan & 0.2904 & \textbf{0.1516}\\ \hline
Colander & 0.2095 & \textbf{0.1249}\\ \hline
Cup & 0.2144 & \textbf{0.1430}\\ \hline
Pitcher & 0.2499 & \textbf{0.1131}\\ \hline
Saucepan & 0.1956 & \textbf{0.1103}\\ \hline
Scissors & 0.2391 & \textbf{0.1649}\\ \hline
Shaker & 0.2654 & \textbf{0.1453}\\ \hline
Thermos & 0.2271 & \textbf{0.1285}\\ \hline
\end{tabular}
}
\hfill
\parbox{0.5\linewidth} {
\caption{Mean 3D pose estimation error}
\label{table:2}
\centering
\begin{tabular}{*3c}
\hline
\textbf{Pose parameter} & \textbf{IRLS} & \textbf{OR-IRLS}\\ \hline
X (cm) & 1.6874 & \textbf{0.5774}\\ \hline
Y (cm) & 1.4953 & \textbf{0.6516}\\ \hline
Z (cm) & 8.228 & \textbf{2.1506}\\ \hline
Roll (degrees) & 1.1711 & \textbf{0.7152}\\ \hline
Pitch (degrees) & 7.9100 & \textbf{2.3191}\\ \hline
Yaw (degrees) & 5.7712 & \textbf{2.6055}\\ \hline
\end{tabular}
}
\end{table}
Figure~\ref{fig:3} shows that while both SD-HOP and~\citet{segaware} have similar recall at 1.0 FPPI, SD-HOP consistently performs better in terms of area under the curve (AUC). Averaged over the 8 objects, SD-HOP achieves 16.13\% more AUC than~\citet{segaware}. Table~\ref{table:1} shows that SD-HOP consistently outperforms~\citet{segaware} in terms of segmentation error, achieving 42.44\% less segmentation error averaged over the 8 objects. Figure~\ref{fig:5} shows examples of the algorithm's output on various images from the CMU Kitchen Occlusion dataset.
\subsection{Ablation Study}
We conducted an ablation study on the `pitcher' object of the CMU Kitchen Occlusion dataset to determine the individual effect of our contributions. Using the loss function from~\citet{segaware} caused the segmentation error to increase from 0.1131 to 0.1547 and the AUC of the FPPI vs. recall curve to drop from 0.7877 to 0.7071. To discern the effect of 4-connected pairwise terms we removed the higher order terms from the model too. Using the pairwise terms as described in~\citet{segaware} caused the segmentation error to increase from 0.1547 to 0.2499 and the AUC to decrease from 0.7071 to 0.6414.\par
Lastly, to quantify the effect of higher order potentials, we compared the full SD-HOP model against one with higher order potentials removed. Removing higher order potentials caused the segmentation error to increase from 0.1131 to 0.1430 and the AUC to drop from 0.7877 to 0.7544. We hypothesize that for small objects like the ones in the CMU Kitchen Occlusion dataset, 4-connected pairwise terms are almost as informative as higher order terms. To check this hypothesis we tested the effect of removing higher order potentials on a close-up dataset of 41 images of a pasta box occluded by varying amounts by various household objects. Removing the higher order potentials caused the segmentation error to increase from 0.1308 to 0.1516 and the AUC to drop from 0.9546 to 0.9008. This indicates that higher order terms are more useful for objects with larger and hence more informative segments.
\subsection{3D Pose Estimation} \label{subsec:3.3}
We collected 3D pose estimation results produced by IRLS and OR-IRLS on a dataset of 17 images of a car door in an indoor environment. The ground truth pose for the car door was obtained using an ALVAR marker~\citep{alvar_website}. Table~\ref{table:2} shows the mean errors in the six pose parameters. To separate the errors inherent in the pose estimation process from the effect of occlusion reasoning, the pose of the car door was kept constant throughout the dataset, with various partial occlusions being introduced.\par
The granular HOG cell-level mask produced by SD-HOP caused some important silhouette edges to be missed for pose estimation. To solve this problem we utilized the unsupervised segmentation done earlier for defining higher order terms. If more than 80\% of the area within a segment was marked 1, we marked the whole segment with 1. Since segments follow object boundaries, this produced much cleaner masks for pose estimation. Figure~\ref{fig:4} shows the masks and pose estimation results for an example image from the dataset, with more such examples presented in the supplementary material. Note that the segmentation errors mentioned in Table~\ref{table:1} use the raw masks.
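A minimal sketch of this refinement rule (in Python; the array names are assumptions):
\begin{verbatim}
import numpy as np

def refine_mask(raw_mask, segments, thresh=0.8):
    """Snap the cell-level visibility mask to segment boundaries.

    raw_mask : (H, W) binary mask from SD-HOP, upsampled to pixels
    segments : (H, W) integer segment ids from the unsupervised
               over-segmentation
    """
    refined = np.zeros_like(raw_mask)
    for s in np.unique(segments):
        region = segments == s
        # mark the whole segment visible if > thresh of it is visible
        if raw_mask[region].mean() > thresh:
            refined[region] = 1
    return refined
\end{verbatim}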
\begin{figure}[!t]
\centering
\subfigure{\includegraphics[width=0.24\textwidth]{37_wo}}
\subfigure{\includegraphics[width=0.24\textwidth]{37_mask}}
\subfigure{\includegraphics[width=0.24\textwidth]{37_rmask}}
\subfigure{\includegraphics[width=0.24\textwidth]{37_w}}
\caption{3D pose estimation. Left to right: Pose estimation with IRLS, SD-HOP raw segmentation mask, SD-HOP refined segmentation mask, Pose estimation with OR-IRLS. Best viewed in colour.}
\label{fig:4}
\end{figure}
\begin{figure}[!t]
\centering
\subfigure{\includegraphics[width=0.3\textwidth]{bakingpan}}
\subfigure{\includegraphics[width=0.3\textwidth]{bakingpan_mask}}
\subfigure{\includegraphics[width=0.3\textwidth]{bakingpan_rmask}}\\[-2ex]
\subfigure{\includegraphics[width=0.3\textwidth]{colander}}
\subfigure{\includegraphics[width=0.3\textwidth]{colander_mask}}
\subfigure{\includegraphics[width=0.3\textwidth]{colander_rmask}}\\[-2ex]
\subfigure{\includegraphics[width=0.3\textwidth]{cup}}
\subfigure{\includegraphics[width=0.3\textwidth]{cup_mask}}
\subfigure{\includegraphics[width=0.3\textwidth]{cup_rmask}}\\[-2ex]
\subfigure{\includegraphics[width=0.3\textwidth]{pitcher}}
\subfigure{\includegraphics[width=0.3\textwidth]{pitcher_mask}}
\subfigure{\includegraphics[width=0.3\textwidth]{pitcher_rmask}}\\[-2ex]
\subfigure{\includegraphics[width=0.3\textwidth]{shaker}}
\subfigure{\includegraphics[width=0.3\textwidth]{shaker_mask}}
\subfigure{\includegraphics[width=0.3\textwidth]{shaker_rmask}}
\caption{Object localization and segmentation results on the CMU Kitchen Occlusion dataset. Left: Image, Center: Raw mask from SD-HOP, Right: Refined mask from SD-HOP}
\label{fig:5}
\end{figure}
\subsection{Notation}
\noindent
The label for an object in an image \textbf{x} is represented as $\textbf{y} = (\textbf{p}, \textbf{v}, a)$, where $\textbf{p}$ is the bounding box, \textbf{v} is a vector of binary variables indicating the visibility of HOG cells within $\textbf{p}$ and $a \in [1, A]$ indexes the discrete viewpoint. $\mathbf{p} = (p_x, p_y, p_{\sigma})$ indicates the position of the top left corner and the level in a scale-space pyramid. The width and height of the box are fixed per viewpoint as $w_a$ and $h_a$ HOG cells respectively. Hence \textbf{v} has $w_a \cdot h_a$ elements. All training images are also over-segmented to collect statistics for higher-order potentials. Any unsupervised algorithm can be used for this, e.g.~\citet{fz_seg} and~\citet{gpb_ucm}.
\subsection{Feature Extraction}\label{subsec:features}
\noindent
Given an image $\mathbf{x}$ and a labelling $\mathbf{y}$, a sparse joint feature vector $\Psi(\mathbf{x},\mathbf{y})$ is formed by stacking $A$ vectors. Each of these vectors has features for a different discretized viewpoint. All vectors except for the one corresponding to viewpoint $a$ are zeroed out. Below, we describe the components of this vector.
\begin{enumerate}
\itemsep0em
\item 31-dimensional HOG features are extracted for all cells of 8x8 pixels in $\mathbf{p}$ as described in \citet{dpm_main}. The feature vector is constructed by stacking two groups which are formed by zeroing out different parts, similarly to~\citet{vedaldi_structured}. The visible group $\phi_v(\mathbf{x},\textbf{p})$ has the HOG features zeroed out for cells labelled 0 and the occlusion group $\phi_{nv}(\mathbf{x},\textbf{p})$ has them zeroed out for cells labelled 1.
\item Complemented visibility labels, to learn a prior for a cell to be labelled 0: $\left[\textbf{1}_{wh} - \textbf{v}\right]$.
\item Count $c(\mathbf{p})$ of cells in bounding box \textbf{p} lying outside the image boundaries, to learn a cost for truncation by the image boundary, similarly to~\citet{vedaldi_structured}.
\item Number of 4-connected neighbouring cells in the bounding box that have different labels, to learn a pairwise cost.
\item Each segment in the bounding box obtained from unsupervised segmentation defines a clique of cells. To learn higher-order potentials, we need a vector $\theta_{HOP}$ that captures the distribution of 0/1 label agreement within cliques. A vector $\theta_c \in \mathbb{R}^{K+1}$ is constructed for each clique $c$ as $(\theta_c)_k = 1$ if $\sum_{i \in c}v_i = k$. The sum of all $\theta_c$ within $\mathbf{p}$ gives $\theta_{HOP}$. In practice, since cliques do not have the same size we employ the normalization strategy described in~\citet{low_lin} and transform statistics of all cliques to a standard clique size $K$ ($K=4$ in our experiments); a sketch of this computation is given after the list.
\item The constant $1$, used to learn a bias term for different viewpoints.
\end{enumerate}
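To make item 5 concrete, the clique statistics can be sketched as follows (a minimal Python illustration; the proportional rescaling to the standard clique size is a crude stand-in for the exact normalization strategy of~\citet{low_lin}):
\begin{verbatim}
import numpy as np

def clique_statistics(v, cliques, K=4):
    """Histogram of visible-cell counts over the cliques (item 5).

    v       : (wh,) binary visibility labels of the HOG cells
    cliques : list of integer index arrays, one per segment
    """
    theta_hop = np.zeros(K + 1)
    for c in cliques:
        k = int(np.sum(v[c]))             # cells labelled 1 in clique c
        k = int(round(k * K / len(c)))    # rescale to standard size K
        theta_hop[k] += 1
    return theta_hop
\end{verbatim}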
\subsection{Learning}
\noindent
Suppose $\mathbf{w}$ is a vector of weights for elements of the joint feature vector. We define $\mathbf{w}^T\Psi(\mathbf{x},\mathbf{y})$ as the `energy' of the labelling $\mathbf{y}$. The aim of learning is to find $\mathbf{w}$ such that the energy of the correct label is minimum. Hence we define the label predicted by the algorithm as
\begin{equation}\label{eq:3.1}
f(\mathbf{x}; \mathbf{w}) = \mathbf{y^*} = \argmin_{\mathbf{y}} \mathbf{w}^T\Psi(\mathbf{x},\mathbf{y})
\end{equation}
We use a labelled dataset $\left(\mathbf{x}_i, \mathbf{y}_i\right)_{i=1}^N$ and learn $\mathbf{w}$ by solving the following constrained Quadratic Program (QP)
\begin{equation}\label{eq:3.2}
\min_{\mathbf{w},\xi}\frac{1}{2}\|\textbf{w}\|_2^2+C\sum_{i=1}^{N}\xi_i
\end{equation}
\begin{align*}
\text{s.t.}\quad &\mathbf{w}^T (\Psi(\mathbf{x}_i, \hat{\mathbf{y}}_i) - \Psi(\mathbf{x}_i, \mathbf{y}_i)) + \xi_i \geq \Delta(\mathbf{y}_i, \hat{\mathbf{y}}_i)~\forall i, \hat{\mathbf{y}}\in Y_i\\
&\xi_i \geq 0~\forall i\\
&\mathbf{D}^2\mathbf{w} \geq \mathbf{0}
\end{align*}
Intuitively this formulation requires that the score $\mathbf{w}^T \Psi(\textbf{x}_i, \textbf{y}_i)$ of any ground truth labelled image $\textbf{x}_i$ must be smaller than the score $\mathbf{w}^T \Psi(\textbf{x}_i, \hat{\textbf{y}}_i)$ of any other labelling $\hat{\textbf{y}}_i$ by at least the distance between the two labellings $\Delta(\textbf{y}_i, \hat{\textbf{y}}_i)$ minus the slack variable $\xi_i$, where $\|\mathbf{w}\|_2^2$ and $\xi_i$ are minimized. The regularization constant $C$ adjusts the importance of minimizing the slack variables. The above formulation has exponential constraints for each training image. For tractability, training is performed by using the cutting plane training algorithm of~\citet{joa_svm}, which maintains a working set $Y_i$ of most violated constraints (MVCs) for each image.~\citet{low_lin} adapts this algorithm for training higher-order potentials. It uses $\mathbf{D}^2$ as a second order curvature constraint on the $K+1$ weights for the higher-order potentials, which forces them to form a concave lower envelope. This encourages most cells in the image segments to agree in visibility labelling. $\mathbf{D}^2$ is an appropriately 0-padded (to the left and right) version of
\begin{equation*}
\begin{bmatrix}
-1 & 2 & -1 & 0 &\ldots\\
\vdots & \vdots & \ddots & \vdots\\
0 & \ldots & -1 & 2 & -1
\end{bmatrix}.
\end{equation*}
The distance between two labels $\mathbf{y}$ and $\mathbf{\hat{y}}$ is called the loss function. It depends on the amount of overlap between the two bounding boxes and the Hamming distance between the visibility labellings
\begin{equation}\label{eq:3.3}
\Delta(\textbf{y}, \hat{\textbf{y}}) = \left(1-\frac{\text{area}(\mathbf{p} \bigcap \hat{\mathbf{p}})}{\text{area}(\mathbf{p} \bigcup \hat{\mathbf{p}})}\right)+\frac{\text{area}(\mathbf{p} \bigcap \hat{\mathbf{p}})}{\text{area}(\mathbf{p} \bigcup \hat{\mathbf{p}})}\cdot H(\mathbf{v}, \hat{\mathbf{v}})
\end{equation}
The mean Hamming distance $H(\mathbf{v}, \hat{\mathbf{v}})$ between two labellings $\mathbf{v}$ and $\hat{\mathbf{v}}$ (potentially having different sizes as they might belong to different viewpoints) is calculated after projecting them to the lowest level of the pyramid. By construction of the loss function, the difference in segmentation starts contributing to the loss only after the two bounding boxes start overlapping each other. It also has the nice property of decomposing over the energy terms, as described in Section~\ref{subsubsec:mvc}.
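A minimal sketch of Eq.~\ref{eq:3.3} (in Python; boxes are assumed to be {\tt (x, y, w, h)} tuples at a common pyramid level, and the visibility labellings are assumed to be already projected onto a common grid):
\begin{verbatim}
import numpy as np

def box_iou(p, p_hat):
    """Intersection over union of two (x, y, w, h) boxes."""
    x1, y1 = max(p[0], p_hat[0]), max(p[1], p_hat[1])
    x2 = min(p[0] + p[2], p_hat[0] + p_hat[2])
    y2 = min(p[1] + p[3], p_hat[1] + p_hat[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    return inter / (p[2] * p[3] + p_hat[2] * p_hat[3] - inter)

def label_loss(p, v, p_hat, v_hat):
    """Eq. (3.3): overlap term plus overlap-gated Hamming term."""
    iou = box_iou(p, p_hat)
    hamming = float(np.mean(v != v_hat))
    return (1.0 - iou) + iou * hamming
\end{verbatim}
Note that the Hamming term only contributes once the two boxes overlap, exactly as required.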
\subsection{Inference}\label{subsec:inference}
\noindent
To perform the inference as described in Eq.~\ref{eq:3.1} we have to search through $Y=A \times P \times V$ where $A$ is the set of viewpoints, $P$ is the set of all pyramid locations and $V$ is the exponential set of all combinations of visibility variables. We enumerate over $A$ and $P$ and use an $s-t$ mincut to search over $V$ at every location.\par
By construction, the weight vector \textbf{w} can be decomposed into weight vectors for the different viewpoints, i.e., $\mathbf{w} = [\mathbf{w}^1, \mathbf{w}^2, \ldots, \mathbf{w}^A]$. In the following description, we will consider one viewpoint and omit the superscript for brevity of notation. $\mathbf{w}$ can also be decomposed as $[\mathbf{w}_v, \mathbf{w}_{nv}, \mathbf{w}_{pr}, w_{trunc}, W, \mathbf{w}_{HOP}, w_{bias}]$ into the six components described in Section~\ref{subsec:features}. We define the following terms that are used to construct the graph shown in Figure~\ref{fig:2b}. $\phi_i(\mathbf{x},\mathbf{p})$ are the vectorized HOG features extracted at cell $i$ in bounding box $\mathbf{p}$. Unary terms $F_i(\mathbf{p})={\mathbf{w}_{v,i}}^T\phi_i(\mathbf{x},\mathbf{p})$ and $B_i(\mathbf{p})={\mathbf{w}_{nv,i}}^T\phi_i(\mathbf{x},\mathbf{p})$ are the responses at cell $i$ for object and occlusion filters respectively. $R_i = \mathbf{w}_{pr, i}$ is the prior for cell $i$ to be labelled 0. Constant term $C(\mathbf{y}) = w_{trunc} \cdot c(\mathbf{p}) + w_{bias}$ is the sum of image boundary truncation cost and bias. $\mathcal{E}$ is the set of 4-connected neighbouring cells in $\mathbf{p}$ and $W$ is the pairwise weight. $\mathcal{C}(\mathbf{p})$ is the set of all cliques in $\mathbf{p}$ and $\psi_c(\mathbf{v}_c)$ is the higher-order potential for clique $c$ having nodes with visibility labels $\mathbf{v}_c$. Combining these terms, the energy for a particular labelling is formulated as
\begin{align}\label{eq:3.4}
\begin{aligned}
E(\mathbf{x}, \mathbf{y}) = \textbf{w}^{T} \Psi(\textbf{x}, \textbf{y})
&= \sum_{i=1}^{wh}F_i(\mathbf{p})v_i+B_i(\mathbf{p})(1-v_i)+R_i(1-v_i)\\
&+\sum_{(i,j) \in \mathcal{E}}W|v_i-v_j|+\sum_{c \in \mathcal{C}(\mathbf{p})}\psi_c(\mathbf{v}_c) + C(\mathbf{y})
\end{aligned}
\end{align}
$\psi_c(\mathbf{v}_c)$, the higher-order potential for clique $c$ is defined as $\min_{k = 1 \ldots K}\left(s_k \sum_{i \in c}{v_i} + b_k\right)$, following~\citet{low_lin}. Intuitively, it is the lower envelope of a set of lines whose slope is defined as $s_k=\frac{M}{K}\left((w_{HOP})_k-(w_{HOP})_{k-1}\right)$ and intercept as $b_k=(w_{HOP})_k-s_kk$ (recall that $\mathbf{w}_{HOP}$ is a $K+1$ dimensional weight vector). $M$ is the size of the clique. The normalization in $s_k$ makes the potential invariant to the size of the clique (refer to~\citet{low_lin} for details). Figure~\ref{fig:2a} shows a sample higher-order potential curve for a clique of $K$ cells.\par
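A minimal sketch of evaluating $\psi_c$ for one clique (in Python; {\tt w\_hop} denotes the learned $(K+1)$-dimensional higher-order weight vector):
\begin{verbatim}
import numpy as np

def psi_c(v_c, w_hop, K=4):
    """Lower-envelope higher-order potential for one clique.

    v_c   : binary labels of the clique's cells (size M)
    w_hop : (K+1,) learned higher-order weights
    """
    M = len(v_c)
    n_vis = np.sum(v_c)
    ks = np.arange(1, K + 1)
    slopes = (M / K) * (w_hop[ks] - w_hop[ks - 1])
    intercepts = w_hop[ks] - slopes * ks
    return float(np.min(slopes * n_vis + intercepts))
\end{verbatim}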
Given an image, a location, and a viewpoint we use $s-t$ mincut on the graph construction shown in Figure~\ref{fig:2b} to find the labelling $\mathbf{v}$ that minimizes the energy in Eq.~\ref{eq:3.4}. Each variable $v_i$, $i \in \lbrace 1,2,\ldots ,wh\rbrace$ defines a node and each clique has $K-1$ auxiliary nodes in the graph, $z_1 \ldots z_{K-1}$. For a detailed derivation of this graph structure please see~\citet{kol_gc} and~\citet{low_lin}.
\begin{figure}[t]
\centering
\subfigure[]{\label{fig:2a}\includegraphics[width=0.45\textwidth]{concave}}
\subfigure[]{\label{fig:2b}\includegraphics[width=0.45\textwidth]{network}}
\caption{(a): Concave higher-order potentials encouraging cells in a clique to have the same binary label, (b): Construction of graph to compute the energy minimizing binary labelling of cells by $s-t$ mincut.}
\label{fig:2}
\end{figure}
After the maxflow algorithm finishes, the nodes $v_i$ still connected to $s$ are labelled $1$ and others are labelled $0$.
\subsubsection{Loss-augmented Inference}\label{subsubsec:mvc}
Loss-augmented inference is an important part of the cutting plane training algorithm (`separation oracle' in~\citet{joa_svm}) and is used to find the most violated constraints. It is defined as $\mathbf{y_{MVC}} = \argmin_{\mathbf{\hat{y}}} \mathbf{w}^T\Psi(\mathbf{x},\mathbf{\hat{y}}) - \Delta(\mathbf{y}, \mathbf{\hat{y}})$, where $\mathbf{y}$ is the ground-truth labelling. Our formulation of the loss function makes it solvable with the same complexity as normal inference (Eq.~\ref{eq:3.1}) by decomposing the loss over the terms in Eq~\ref{eq:3.4}. The first term of Eq.~\ref{eq:3.3} is added to $C(\mathbf{y})$, while the second term is distributed across $F_i(\mathbf{p})$ and $B_i(\mathbf{p})$ in Eq.~\ref{eq:3.4}.
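Concretely, since the most violated constraint minimizes energy minus loss, each cell label that disagrees with the ground truth earns an energy bonus; a minimal sketch of folding the gated Hamming term into the unaries (array names assumed):
\begin{verbatim}
import numpy as np

def loss_augment_unaries(F, B, v_gt, iou):
    """Fold the overlap-gated Hamming loss into the unary terms.

    F, B : (wh,) object and occlusion filter responses
    v_gt : (wh,) ground truth visibility labels
    iou  : overlap of the candidate box with the ground truth box
    """
    bonus = iou / v_gt.size
    F_aug = F - bonus * (v_gt == 0)   # labelling 1 disagrees where gt is 0
    B_aug = B - bonus * (v_gt == 1)   # labelling 0 disagrees where gt is 1
    return F_aug, B_aug
\end{verbatim}
The remaining $(1-\text{IoU})$ term of Eq.~\ref{eq:3.3} is constant in $\mathbf{v}$ and is absorbed into $C(\mathbf{y})$.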
\subsection{Detection of Multiple Objects} \label{subsec:multiple}
\noindent
Multiple objects of interest might overlap. Running the individual object detectors separately leaves regions of ambiguity in overlapping areas if multiple detectors mark the same location as visible. We find that running one iteration of $\alpha$-expansion (see~\citet{boy_ae}) in overlapping areas resolves ambiguities coherently. The detectors are run sequentially. We maintain a label map $\mathcal{V}$ that stores for each cell the label of the object that last marked it visible, and a collected response map $C$ that stores for each cell the object filter response ($F_i(\mathbf{p})$) from the object that last marked it visible. While running the location search for object $o$, we transfer object filter responses from $C$ to the occlusion filter response map ($B(\mathbf{p})$) for the current object as described in Algorithm~\ref{alg:multiple}.
\begin{algorithm}[t]
\caption{Response-transfer between object detectors in overlapping regions}\label{alg:multiple}
\begin{algorithmic}
\FORALL[$L$ is the number of objects, $\circ$ denotes the Hadamard product]{$o \in [1, L]$}
\FORALL{$\mathbf{p} \in P$}
\STATE{$B(\mathbf{p})=C(\mathbf{p}) \circ \mathbf{1}[\mathcal{V}(\mathbf{p}) \neq 0]$} \COMMENT{Transfer equation for all cells in $\mathbf{p}$}
\ENDFOR
\STATE{$\mathbf{y^*} = \argmin_{\mathbf{y}} \mathbf{w}^T\Psi(\mathbf{x},\mathbf{y})$}
\STATE{$C(\mathbf{y^*}(\mathbf{p})) = F(\mathbf{y^*}(\mathbf{p})) \circ \mathbf{y^*}(\mathbf{v})$} \COMMENT{Update equations for all cells in $\mathbf{p}$}
\STATE{$\mathcal{V}(\mathbf{y^*}(\mathbf{p})) = o \cdot \mathbf{y^*}(\mathbf{v})$}
\ENDFOR
\end{algorithmic}
\end{algorithm}
This is effectively one iteration of $\alpha$-expansion (see supplementary material for details). It causes decisions in overlapping regions to be made between responses of well-defined object filters rather than between responses of an object filter and a generic occlusion filter.\par
Such response-transfer requires the object models to be compatible with each other. We achieve this by training the object models together as if they were different viewpoint components of the same object. The bias term in the feature vector makes the filter responses of different components comparable.
\subsection{3D Pose Estimation}
\noindent
The basic principle of many model based 3D pose estimation algorithms is to fit a given 3D model of the object to its corresponding edges in the image; e.g., in~\citet{choi_pose}, the 3D CAD model is projected into the image and correspondences between the projected model edges and image edges are set up. The pose is estimated by solving an Iterative Re-weighted Least Squares (IRLS) problem. However, partial occlusion causes these approaches to fail by introducing new edges. We make the algorithm robust to partial occlusion by first identifying visible pixels of the object using SD-HOP and discarding correspondences outside the visibility mask. We call our extension of the algorithm Occlusion Reasoning-IRLS (OR-IRLS).
\section{Linear Magneto-optic Kerr effect (MOKE)}
The magneto-optical Kerr effect (MOKE) is a powerful tool to probe crystal magnetization. When time reversal symmetry in a material is broken due to the presence of an external magnetic field or intrinsic magnetization, linearly polarized light becomes elliptically polarized upon reflection from the sample surface. While MOKE is a standard tool to characterize magnetic ordering in ferromagnets, it can also be used to detect noncollinear magnetic structure in antiferromagnets \cite{siddiqui2020metallic, higo2018large}. Additionally, it has been shown that MOKE can also occur in collinear antiferromagnetic metals with a chiral crystal structure due to a non-zero Berry curvature of the occupied electronic bands \cite{zhou2021crystal}.
In this work, we perform polar MOKE in which the light is normally incident upon the sample a-b plane. As the lattice of Co$_{1/3}$NbS$_2$ corresponds to space group $P6_322$ [182], the dielectric tensor can be written in terms of the diagonal components $\epsilon_{xx}$ and $\epsilon_{zz}$ and the off-diagonal component $\epsilon_{xy}$ as follows:
\begin{equation*}
\epsilon =
\begin{pmatrix}
\epsilon_{xx} & \epsilon_{xy} & 0 \\
-\epsilon_{xy} & \epsilon_{xx} & 0 \\
0 & 0 & \epsilon_{zz} \\
\end{pmatrix}
\end{equation*}
Here the coordinate axes have been taken to lie along the principal axes of the crystal. A complete description of MOKE is then given by the complex refractive index:
\begin{equation*}
N(\omega) = \sqrt{\epsilon(\omega)} = n(\omega) + ik(\omega)
\end{equation*}
for light with a given polarization propagating along certain directions in the sample. Here $n$ and $k$ are the refractive index and the extinction coefficient, respectively. For circularly polarized light that is normally incident on the sample (propagating along the z-axis), the complex refractive index is given by:
\begin{equation*}
N_{\pm} = n_{\pm} + ik_{\pm} = \sqrt{\epsilon_{xx} \pm \epsilon_{xy}}
\end{equation*}
where $\pm$ denotes left and right circularly polarized light. Note that linearly polarized light can always be written as a combination of right and left circularly polarized light, and so the polar complex Kerr angle ($\theta_K + i\eta_K$) for normally incident linearly polarized light can be determined from:
\begin{equation*}
\frac{1+\mathrm{tan} \eta_K}{1-\mathrm{tan}\eta_K} e^{2i\theta_K} = \frac{(1+N_{+})(1-N_{-})}{(1-N_{+})(1+N_{-})}
\end{equation*}
For small $\theta_K$ and $\eta_K$, the expression above can be approximated as:
\begin{equation*}
\theta_K + i\eta_K \sim \frac{-\epsilon_{xy}}{(\epsilon_{xx} - 1)\sqrt{\epsilon_{xx}}}
\end{equation*}
The optical conductivity tensor $\sigma_{ij}$ is related to the dielectric tensor $\epsilon_{ij}$ as:
\begin{equation*}
\epsilon_{ij}(\omega) = \delta_{ij} + \frac{4 \pi i}{\omega}\sigma_{ij}(\omega)
\end{equation*}
Thus, the complex Kerr angle is directly related to the off-diagonal components of the conductivity tensor (i.e., the Hall conductivity) as follows:
\begin{equation*}
\theta_K + i\eta_K \sim \frac{-\sigma_{xy}}{\sigma_{xx}\sqrt{1+i(4\pi/\omega)\sigma_{xx}}}
\end{equation*}
Note that the above is a general result that applies whenever light is normally incident onto a sample surface with greater than three-fold rotational symmetry \cite{Kahn_1969}.
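For completeness, the exact (unapproximated) complex Kerr angle can be evaluated numerically from the dielectric tensor components; a minimal sketch (in Python, with complex-valued inputs assumed):
\begin{verbatim}
import numpy as np

def polar_kerr(eps_xx, eps_xy):
    """Complex polar Kerr angle theta_K + i*eta_K (radians)."""
    n_p = np.sqrt(eps_xx + eps_xy + 0j)   # N_+
    n_m = np.sqrt(eps_xx - eps_xy + 0j)   # N_-
    r = (1 + n_p) * (1 - n_m) / ((1 - n_p) * (1 + n_m))
    # r = (1 + tan(eta))/(1 - tan(eta)) * exp(2i*theta)
    theta = 0.5 * np.angle(r)
    eta = np.arctan((abs(r) - 1) / (abs(r) + 1))
    return theta + 1j * eta
\end{verbatim}
In the small-angle limit this reproduces the approximation given above.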
\section{Irreducible Representations for $\mathbf{k} = (0.5, 0, 0)$}
The magnetically ordered state in Co$_{1/3}$NbS$_{2}$ is described by the propagation vector $\bf{k}$ = (0.5,0,0). Assuming moments on cobalt sites of the lattice structure described in the main text, the magnetic space groups associated with this value of $\bf{k}$ were generated systematically using the ISODISTORT software\cite{isodistort_1, isodistort_2}. The simplest models with only one propagation vector, $\bf{k}$, were examined closely. The four different magnetic space groups consistent with the single propagation vector, $\bf{k}$, are P$_C$222$_{1}$, P$_C$2$_{1}$2$_{1}$2$_{1}$, P$_B$2$_{1}$2$_{1}$2 and P$_B$2$_{1}$2$_{1}$2. The spin arrangements associated with these groups are shown in Fig.~\ref{fig:state} using a conventional hexagonal chemical unit cell and cell doubling along the a-axis. These models are in one-to-one correspondence to irreducible representations (IRs) of the little group G$_{\bf{k}}$, and are labelled as $\Gamma_{1}$, $\Gamma_{2}$, $\Gamma_{3}$, and $\Gamma_{4}$, respectively. Two of the models, $\Gamma_{1}$ and $\Gamma_{4}$, have collinear spins lying purely along the a-axis. The other two, $\Gamma_{2}$ and $\Gamma_{3}$, allow tilting of spins along the c-axis and form more complex non-collinear arrangements in the bc-plane.
\newline
\begin{figure*}[h]
\includegraphics[width=0.75\textwidth]{CoNb3S6_magneticState_p4_v4.pdf}
\caption{\label{fig:state} Plots of the four irreducible representations (IRs) associated with the P6$_{3}$22 space group and a $\mathbf{k} = (0.5, 0, 0)$ propagation vector. Arrows denote Co$^{2+}$ spin directions, and octahedra denote the local crystal environments set by sulfur positions. All models are displayed in the hexagonal chemical unit cell and show antiparallel spins between neighboring chemical unit cells in the a-axis direction. $\Gamma_{1}$ and $\Gamma_{4}$ are two different in-plane antiferromagnetic models with moments parallel to the a-axis. $\Gamma_{2}$ and $\Gamma_{3}$ are canted models, where spins lie in the bc-plane. The canting angle away from the ab-plane is a free parameter in these IRs that is fitted during refinement. The MSG for each IR is labelled at the top of the respective panel. Plots were made using the VESTA structure visualization program \cite{momma2011vesta}. }
\end{figure*}
\clearpage
\section{Refinements of neutron diffraction data}
As described in the main text, diffraction patterns were acquired from our single crystal sample with both the DEMAND and WAND$^2$ instruments at the High Flux Isotope Reactor (HFIR) at Oak Ridge National Laboratory. Data from the two instruments were analyzed separately to independently confirm conclusions about the symmetry of the lattice and the ordered spin state. Refinements of the lattice structure were performed assuming the P6$_{3}$22 structural model with Co at the 2c Wyckoff sites. Refined sulfur positions were (0.3223, -0.0026, 0.1333) when considering WAND$^2$ data, and (0.33815753, -0.0972, 0.1329) when considering DEMAND data. Refinements of the magnetic structure were performed by considering each of the irreducible representations for the $\mathbf{k} = (0.5, 0, 0)$ propagation vector in turn, assuming spins on the cobalt sublattice of the lattice structure. For both instruments, fits employing the $\Gamma_2$ model provided the best description of the data. For data taken with DEMAND, the determination of magnetic Bragg peak intensities was complicated by scattering at the same angular positions from nuclear Bragg peaks due to a small number of $\lambda/2$ neutrons. The improvement of $\Gamma_2$ over competing models was therefore marginal, owing to large statistical error bars. When considering the WAND$^2$ data, the preference for the $\Gamma_2$ model was more definitive. Comparison of calculated and measured Bragg peak intensities for both lattice and magnetic scattering on the two instruments is plotted in Fig.~\ref{fig:fitting}.
\begin{figure*}[h]
\includegraphics[width=\textwidth]{CoNb3S6_fitting_p5_v3.pdf}
\caption{\label{fig:fitting} Plots of observed versus calculated structure factors for magnetic (right) and nuclear (left) refinements of neutron scattering data presented in this Letter. Panels (a) and (b) represent fits of data taken using the DEMAND instrument and panels (c) and (d) represent fits of data taken using the WAND$^2$ instrument, both of the HFIR at ORNL. }
\end{figure*}
\clearpage
\section{Density Functional Theory calculations}
As described in the main text, we used density functional theory (DFT) to check the stability of the magnetic ground state observed with neutron diffraction and to calculate the associated Kerr signal. Beginning with the lattice and spin configurations obtained from the neutron analysis, we allowed the system to relax to the lowest energy state. The resultant magnetic order was very similar to that determined by our neutron scattering analysis. The main differences were a slightly smaller ordered moment predicted by DFT and a slightly larger tilting angle. The exact DFT predictions for the four moments in the magnetic unit cell are shown in Table \ref{tab:dft}, alongside those determined by neutron diffraction.
\begin{table}[h]
\caption{Non-collinear magnetic ordering of the four Co magnetic sites as determined from the neutron scattering measurements, together with the DFT-relaxed result obtained starting from the experimental values. Magnetization components are given in Cartesian coordinates in units of $\mu_B$.
}
\begin{center}
\begin{tabular}{c|c c c c}
\hline
Exp. & $M_x$ & $M_y$ & $M_z$ & $|\textbf{M}|$ \\
\hline
Co (\MakeUppercase{\romannumeral 1})& $-0.965$ & 1.671 & 0.358 & 1.962\\
Co (\MakeUppercase{\romannumeral 2})& 0.965 & $-1.671$ & 0.358 & 1.962\\
Co (\MakeUppercase{\romannumeral 3})& $-0.965$ & 1.671 & $-0.358$ & 1.962\\
Co (\MakeUppercase{\romannumeral 4})& 0.965 & $-1.671$ & $-0.358$ & 1.962\\
\hline
DFT & $M_x$ & $M_y$ & $M_z$ & $|\textbf{M}|$ \\
\hline
Co (\MakeUppercase{\romannumeral 1})& $-0.712$ & 1.252 & $-0.103$ & 1.444\\
Co (\MakeUppercase{\romannumeral 2})& 0.723 & $-1.247$ & $-0.102$ & 1.445\\
Co (\MakeUppercase{\romannumeral 3})& $-0.726$ & 1.245 & 0.080 & 1.443\\
Co (\MakeUppercase{\romannumeral 4})& 0.716 & $-1.249$ & 0.124 & 1.445\\
\hline
\end{tabular}
\end{center}
\label{tab:dft}
\end{table}
Using the same numerical convergence parameters, we further calculated the Kerr signals for several additional magnetic and lattice configurations for comparison:
Two cases used the same magnetic ordering as discussed in the paper but, instead of the fractional sulfur coordinate of 0.31 used in the main text, placed sulfur at fractional coordinates of 0.33 and 0.66 of the $a$-lattice dimension, which are the positions for completely undistorted CoS$_6$ octahedra. These two values represent opposite choices of chirality. For sulfur at 0.31 and 0.66, we generated two more cases by reversing the sign of all magnetic moments. All four of these additional cases show large Kerr signals of the same order of magnitude as the case presented in the main text. See Fig.\ \ref{fig:fig1} for these results.
\begin{figure}
\includegraphics[width=12cm]{SI_moke.png}
\caption{\label{fig:fig1} Kerr rotation $\theta$ and ellipticity $\eta$ calculated from density functional theory using four magnetic unit cells based on the refinement of neutron diffraction measurements.
}
\end{figure}
In Fig.\ \ref{fig:fig2} we show the \textbf{k}-point convergence of the Kerr signals computed in this work for the atomic geometry and magnetic configuration used in the main text.
\begin{figure}
\includegraphics[width=12cm]{SI_convergence.pdf}\caption{\label{fig:fig2} $\bf{k}$ convergence of Kerr rotation $\theta$ and ellipticity $\eta$ is shown with 5$\times$10$\times$5 and 4$\times$8$\times$4 Monkhorst-Pack grid.
From this figure we determine the remaining error due to \textbf{k}-point convergence as described in the main text.
}
\end{figure}
\clearpage
\section{Introduction}
\label{s:intro}
A \textit{ranking} is an ordered sequence resulting from the comparative evaluation of a given set of \textit{items}.
This type of data
is common in a large number of research fields, as testified by the
widespread
applications of ranking data analysis.
For example, various political election systems
allow voters to express multiple preferences in the ballots rather than the only most-liked candidate
~\citep{Stern1993,Gormley:Murphy-American},
implying that a method to aggregate the resulting rankings into a final \textit{consensus} is needed, as in~\cite{Davenport} and~\cite{Meila:Phadnis:al}.
Furthermore, marketing and psychological studies are typically aimed at investigating individual preferences~\citep{Vigneau1999,Vitelli} and attitudes~\citep{Yu2005,Gormley:Murphy-Royal} towards multiple alternatives.
In the biomedical context,~\cite{Mollica:Tardella} proposed the conversion of the quantitative
multivariate profiles resulting from a bioassay experiment into
ranking sequences.
The ranking transformation was motivated as a
possible data normalization method
when a well-established pre-processing technique is lacking.
Moreover, sports and competitions
naturally
motivate the development of methods for describing ranking outcomes, in order to quantify the ability of the competitors from the ordinal evidence, see for example~\cite{Henery-Royal},~\cite{Stern1990} and~\cite{Caron:Doucet}. Further references
can be found in the
recent review
of the ranking literature supplied by~\cite{Alvo}.
A public data repository devoted to
preference data, although not necessarily in the form of rankings, is
available at \url{http://www.preflib.org/} \citep{Mattei:Walsh:2013}.
From a mathematical perspective, ranking data represent a well-characterized form of multivariate ordinal data, with a direct correspondence with the set of permutations~\citep{Plackett1968,Stern1990}.
Following the reviews in \cite{Critch:Flig:Verd},~\cite{Marden} and~\cite{Alvo},
the main approaches to the construction of parametric ranking models
can be classified into four classes:
\begin{enumerate}
\item \textit{order statistics models} (OS), also known
as random utility models,
whose cornerstone is
the \textit{Thurstone model}
\citep{Thurstone};
\item \textit{paired comparison models},
where the
most popular parametric family is the Bradley-Terry model (BT) described in~\cite{Bradley:Terry} and~\cite{Bradley84};
\item \textit{distance-based models} (DB), originally introduced by \cite{Mallows} and also referred to as Mallows' models;
\item \textit{stagewise models}.
\end{enumerate}
Each model class corresponds to a specific generative process for the ordinal judgment
on the given set of alternatives.
This work concentrates on the last parametric family, which rests on the decomposition of the ranking process into consecutive stages, that is, the sequential selection of the items in order of preference.
In particular, our interest is in
the \textit{Plackett-Luce model} (PL) and its finite mixture extension from the Bayesian inferential perspective.
Despite the numerous methodological contributions of the last decades enhancing the flexibility of the aforementioned parametric classes, the application of more sophisticated ranking models is still limited in practice.
The main reason likely lies in the computational complexity arising from the peculiar multivariate structure of ranking data, which requires the development of specialized software and might have slowed down a wider use of the most recent model proposals.
The \pkg{PLMIX} package version 1.0.1, released on CRAN (Comprehensive R Archive Network) and available at \url{https://CRAN.R-project.org/package=PLMIX}, enriches the \proglang{R} environment~\citep{Rsoft} with one of the recent methodological advances in modeling and clustering partially ranked data, while adequately accounting for the related computational issues.
The paper is organized as follows: Section~\ref{s:review} provides a detailed review of the existing packages
in \proglang{R} for ranking data and highlights the
main differences with the novel \pkg{PLMIX} package. The PL is briefly
reviewed in Section~\ref{s:pl} and its Bayesian extension to the finite mixture setting is detailed in Section~\ref{s:bayinf} with the related inferential procedures. Section~\ref{s:package} describes the application of the functions included in \pkg{PLMIX}
on simulated and real data examples.
The paper ends
in Section~\ref{s:concl}
with some
remarks and suggestions for future developments.
\section{Review of R packages for ranking data analysis}
\label{s:review}
\begin{table}[t]
\centering
\begin{threeparttable}
\caption{Characteristics of the existing \proglang{R} packages for ranking data compared with the novel \pkg{PLMIX} package.}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{lcccc}
Package & Ranking type & Model class & Mixture & Inference \\
\hline
\pkg{PerMallows} & Complete & DB, GMM & No & MLE \\
\pkg{PlackettLuce} & Partial & PL & No & MLE \\
\pkg{pmr} & Complete & DB, WDB, PL & No & MLE \\
\pkg{prefmod} & Partial & BT & Yes & MLE \\
\pkg{Rankcluster} & Partial & ISR & Yes & MLE \\
\pkg{rankdist} & Partial & DB & Yes & MLE \\
\pkg{RMallow} & Partial & DB & Yes & MLE \\
\pkg{StatMethRank} & Complete & MNOS, WDB & Yes & MLE$^*$ \\
\pkg{StatRank} & Partial & OS & Yes & MLE \\
\hline
\pkg{PLMIX} & Partial & PL & Yes & Bayesian$^{**}$ \\
\hline
\end{tabular}
\label{t:pckchar}
\begin{tablenotes}
\small
\item $^*$ a single function to fit the Bayesian MNOS is available
\item $^{**}$ MLE can be recovered as special case under noninformative prior setting
\end{tablenotes}
\end{threeparttable}
\end{table}
Several \proglang{R} packages are currently available to conduct model-based
analysis of ranking data and their main features are summarized in
Table~\ref{t:pckchar}, in comparison with the novel \pkg{PLMIX}
package. Some
essential features
of
each package are provided in the following:
\begin{itemize}
\item[-] \pkg{PerMallows}, described in \cite{Irurozki}, provides a suite of functions for the MLE of DBs and their multiparametric extensions, referred to as \textit{Generalized Mallows models} (GMM) in the seminal work by \cite{Fligner:Verducci-Royal}. Various metrics on the ranking space are considered, but partial rankings and finite mixtures are not contemplated;
\item[-] \pkg{PlackettLuce}, recently released on CRAN, performs ML inference of the PL from complete and partial rankings and includes methods to derive point estimates and standard errors even in critical situations when the likelihood function is not well-behaved. Additionally, the package can handle ties and admits the inclusion of covariates to accomplish a model-based partitioning of the sample units via PL trees. A full description of the package can be found in the vignette by \cite{PlackettLuce};
\item[-] \pkg{pmr}, presented in \cite{Lee:Yu2013}, applies standard
MLE methods to infer several parametric ranking models, such as the
DB,
the \textit{weighted distance-based model} (WDB) proposed by \cite{Lee:Yu2010} and the PL. Ranking models are considered in their basic form (no mixture) and only complete rankings are allowed;
\item[-] \pkg{prefmod}, introduced in \cite{Hatz:Ditt:2012}, focuses on the analysis of preference data
expressed in the form of
paired comparisons
(PC) and on the application of the BT and extensions thereof under the MLE approach.
This package allows for the handling of partial observations, ties and the inclusion of individual and item-specific covariates. The generalization to
latent class settings
is also possible via a nonparametric method, but it is limited to complete rankings;
\item[-] \pkg{Rankcluster},
widely described in \cite{Rankcluster}, implements the mixture of \textit{Insertion Sort Rank data models} (ISR), see \cite{Jacques2014}.
The ISR mixture is motivated as a model-based clustering tool of partial and potentially multivariate (hierarchical) ranking data;
\item[-] \pkg{rankdist}, based on the methodological contribution by~\cite{Murphy:Martin}, fits mixtures of DBs with various metrics through the EM algorithm; it accepts both complete and partial rankings;
\item[-] \pkg{RMallow}
implements the mixture of DBs with the Kendall distance as metric on the ranking space. Both complete and partial rankings are allowed;
\item[-] \pkg{StatMethRank} accompanies the monograph by \cite{Alvo}.
Regarding the parametric distributions,
it implements the mixture of WDBs from the MLE perspective and the Bayesian Multivariate Normal ordered statistics model (MNOS) described in \cite{Yu}, but exclusively on complete rankings;
\item[-] \pkg{StatRank}
covers the class of random utility models,
involving the PL as special instance, and its generalization to the finite mixture context.
Frequentist estimation is carried out by means of the Generalized Method-of-Moments~\citep{Soufiani2014} and can also be performed on partial observations.
\end{itemize}
The overview in Table~\ref{t:pckchar} shows that the existing libraries cover a wide range of the parametric options reviewed in Section~\ref{s:intro}.
Most of them also account for the possible presence of incomplete observations and for the generalization of the ranking generative mechanism to the mixture framework.
Nevertheless, with the only exception of the function
\code{mvnos.model}
of the \pkg{StatMethRank} package implementing the Bayesian MNOS model on complete rankings via MCMC methods, all the available packages address inference from the frequentist point of view. Moreover, although \pkg{pmr} and \pkg{StatRank} encompass the PL distribution and its mixture extension, they either work only with complete observations or lack computational efficiency, sometimes making it prohibitive to perform a partial ranking analysis based on the PL mixture. The novelties introduced by the \pkg{PLMIX} package to overcome these limitations are widely described in Section~\ref{s:package}. An account of the methodological aspects implemented by \pkg{PLMIX} is provided in the next section.
\section{The Plackett-Luce model for partial orderings}
\label{s:pl}
\subsection{Preliminaries and data format}
\label{ss:data}
Let us first clarify the basic terminology for the data input, in
particular the difference between
ranking and ordering.
Formally, a \textit{full} (or \textit{complete}) \textit{ranking}
$\pi: I \to R$
is a bijective mapping of a finite set
$I=\{1,\dots,K\}$ of labeled \textit{items} (or alternatives) into a set of \textit{ranks} $R=\{1,\dots,K\}$,
resulting from the attribution of a position to each item according to a determined criterion.
The result of
the mapping
can be represented in terms of
the
$K$-tuple $\pi=(\pi(1),\dots,\pi(K))$, where the generic entry $\pi(i)$ indicates the rank assigned to the $i$-th item.
If $\pi(i)<\pi(i')$, then item $i$ is said to be ranked higher than/preferred to item $i'$.
Ranking data admit an alternative format in terms of orderings. Specifically,
the \textit{full} (or \textit{complete}) \textit{ordering} $\pi^{-1}: R \to I$ is simply the inverse function of the ranking $\pi$, yielding the ordered vector $\pi^{-1}=(\pi^{-1}(1),\dotsc,\pi^{-1}(K))$ whose generic component $\pi^{-1}(j)$ denotes the item ranked in the $j$-th position.
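For complete sequences, the correspondence between the two formats can be checked directly in base \proglang{R}, since the \code{order} function inverts a permutation (a minimal sketch; partial sequences are handled by the dedicated functions described in Section~\ref{s:package}):
\begin{CodeChunk}
\begin{CodeInput}
> pi_s <- c(2, 3, 1)   # ranking: item 1 has rank 2, item 2 rank 3, item 3 rank 1
> order(pi_s)          # the corresponding ordering
\end{CodeInput}
\begin{CodeOutput}
[1] 3 1 2
\end{CodeOutput}
\begin{CodeInput}
> order(order(pi_s))   # inverting twice recovers the ranking
\end{CodeInput}
\begin{CodeOutput}
[1] 2 3 1
\end{CodeOutput}
\end{CodeChunk}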
In many real applications, for example when $K$ is large,
the ranking elicitation may not be completely carried out. A typical
situation is when the ranker specifies only her most-liked $t<K$ items
and leaves the remaining $K-t$ positions undefined. In this case, the
generic observation consists of the so-called \textit{top-$t$ partial
ordering} of the form
$\pi^{-1}=(\pi^{-1}(1),\dots,\pi^{-1}(t))$. With a slight abuse of
notation, the remaining $K-t$ alternatives are tacitly assumed to be
ranked lower, formally $\pi(i)>t$ for all
$i\notin\{\pi^{-1}(1),\dots,\pi^{-1}(t)\}$. Notice that a complete
ordering is a special instance of top-$t$ partial ordering with
$t=K-1$,
since the single missing
$K$-th entry
can be unambiguously determined.
Finally, we remark that in the present context ties, i.e., the case when multiple items occupy the same position,
are not contemplated.
\subsection{The Plackett-Luce model}
\label{ss:pl}
The PL is one of the most successfully applied \textit{stagewise models} for describing partially ranked data; its paternity is jointly attributed to~\cite{Luce} and~\cite{Plackett}.
The ranking elicitation is conceived as a random sampling without replacement from an urn:
at each stage the most-liked item is specified among
the
alternatives
not selected at the previous stages. The sequential draws of the items are governed by the \textit{support parameters} $\underline{p}=(p_1,\dots,p_K)$, that is, positive constants representing a measure of liking toward each item.
Let $\pi^{-1}_s=(\pi^{-1}_s(1),\dots,\pi^{-1}_s(n_s))$ be a generic top partial ordering, where $n_s$ is the number of items ranked by unit $s$ in the first $n_s$ positions. The PL postulates
\begin{equation}
\label{e:pl}
\PP_{\text{PL}}(\pi^{-1}_s|\underline{p})=\prod_{t=1}^{n_s}\dfrac{p_{\pi_s^{-1}(t)}}{\sum_{i=1}^{K}p_i-\sum_{\nu=1}^{t-1}p_{\pi_s^{-1}(\nu)}}.
\end{equation}
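As a toy illustration of~\eqref{e:pl}, consider $K=3$ items with $\underline{p}=(0.5,0.3,0.2)$ and the top-2 partial ordering $\pi^{-1}_s=(2,3)$; its probability is $\frac{0.3}{1}\cdot\frac{0.2}{0.7}\approx 0.086$ and can be computed step by step in base \proglang{R} with a minimal sketch of the sequential sampling scheme:
\begin{CodeChunk}
\begin{CodeInput}
> p <- c(0.5, 0.3, 0.2)       # support parameters
> ord <- c(2, 3)              # top-2 ordering: item 2 first, then item 3
> prob <- 1
> avail <- rep(TRUE, length(p))
> for (i in ord) {
+   prob <- prob * p[i] / sum(p[avail])   # stagewise selection probability
+   avail[i] <- FALSE                     # sampling without replacement
+ }
> prob
\end{CodeInput}
\begin{CodeOutput}
[1] 0.08571429
\end{CodeOutput}
\end{CodeChunk}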
For a given random sample $\underline{\pi}^{-1}=\{\pi_s^{-1}\}_{s=1}^N$ of $N$ partial top orderings with varying lengths, the observed-data log-likelihood turns out to be
\begin{equation}
\label{e:plobsloglik}
l(\underline{p})=\sum_{i=1}^K\gamma_i\log{p_i}-\sum_{s=1}^N\sum_{t=1}^{n_s}\log\sum_{i=1}^K\delta_{sti}p_i,
\end{equation}
where $\gamma_i=\sum_{s=1}^Nu_{si}$ with $u_{si}=\mathbb{I}_{[i\in\{\pi_s^{-1}(1),\dots,\pi_s^{-1}(n_s)\}]}$ and $\delta_{sti}=\mathbb{I}_{[i\notin\{\pi_s^{-1}(1),\dots,\pi_s^{-1}(t-1)\}]}$ with $\delta_{s1i}=1$ for all $s=1,\dots,N$ and $i=1,\dots,K$.
\section{The Bayesian Plackett-Luce mixture model}
\label{s:bayinf}
In this section we give a brief outline of the Bayesian approach based on the data augmentation strategy to make inference on the PL parameters, both in the case of homogeneous population without an underlying group structure and in the more general finite mixture framework. It represents the methodological background implemented in the \pkg{PLMIX} package.
\subsection{The homogeneous case}
\label{ss:homo}
Because of the normalization term $\sum_{i=1}^K\delta_{sti}p_i$, the direct maximization of the log-likelihood~\eqref{e:plobsloglik} is not straightforward. In the Bayesian setting,
simple and effective estimation
procedures
were introduced by \cite{Caron:Doucet} to overcome this inconvenience.
Their crucial idea relies on a data augmentation step with continuous latent variables associated with each entry of the observed data matrix.
More specifically,
\cite{Caron:Doucet} suggest to employ
auxiliary variables $\underline{y}=(y_{st})$ for $s=1,\dots,N$ and $t=1,\dots,n_s$
with a suitable parametric assumption for their joint conditional distribution,
given by
\begin{equation}
\label{e:fullcond}
f(\underline{y}|\upi^{-1},\underline{p})=\prod_{s=1}^N\prod_{t=1}^{n_s}f_{\Exp}\Bigl(y_{st}\Big\vert\sum_{i=1}^K\delta_{sti}p_i\Bigr),
\end{equation}
where $f_{\Exp}(\cdot|\lambda)$ is the Negative Exponential density
function
indexed by the rate parameter
$\lambda$.
Additionally, assumption~\eqref{e:fullcond}
is conveniently combined with a conjugate prior distribution $f_0(\underline{p})=\prod_{i=1}^Kf_{\text{Ga}}(p_{i}|c,d)$ for the support parameters, where $c$ and $d$ denote the shape and rate parameters of the Gamma densities, leading to a straightforward Bayesian inference.
\subsubsection{MAP estimation via EM algorithm}
\label{s:MAPhomo}
In the presence of latent variables, the popular EM algorithm introduced by~\cite{Demp:Lai:Rub} can be applied to optimize the posterior distribution and achieve the Maximum A Posteriori (MAP) estimate of
the PL parameters, i.e., the posterior mode.
At the generic iteration $l+1$, the EM algorithm described by~\cite{Caron:Doucet} updates the support parameters as follows
\begin{equation*}
p_i^{(l+1)}=\dfrac{c-1+\gamma_i}{d+\sum_{s=1}^N\sum_{t=1}^{n_s}\dfrac{\delta_{sti}}{\sum_{i'=1}^K\delta_{sti'}p_{i'}^{(l)}}}\qquad i=1,\dots, K.
\end{equation*}
By setting noninformative hyperparameters $c=1$ and $d=0$, the EM procedure reduces to the
Minorization-Maximization algorithm
described by~\cite{Hunter}
for the MLE of the PL.
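For concreteness, a single update of the above recursion can be coded in a few lines of base \proglang{R}. The sketch below is not the package implementation: it assumes top orderings stored row-wise with zero entries for the unranked positions, and it names the hyperparameters after the components of the \code{hyper} argument used by \pkg{PLMIX} (see Section~\ref{ss:estimate}).
\begin{CodeChunk}
\begin{CodeInput}
> em_update <- function(ord_data, p, shape0 = 1, rate0 = 0) {
+   K <- ncol(ord_data)
+   gamma <- tabulate(ord_data[ord_data > 0], nbins = K)   # gamma_i
+   denom <- rep(rate0, K)
+   for (s in 1:nrow(ord_data)) {
+     avail <- rep(TRUE, K)
+     for (t in 1:sum(ord_data[s, ] > 0)) {
+       denom[avail] <- denom[avail] + 1 / sum(p[avail])   # delta_sti terms
+       avail[ord_data[s, t]] <- FALSE
+     }
+   }
+   (shape0 - 1 + gamma) / denom                           # updated support
+ }
\end{CodeInput}
\end{CodeChunk}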
\subsubsection{Gibbs sampling}
\label{s:GShomo}
\cite{Caron:Doucet} describe also the Gibbs sampling (GS) procedure, that is, a
simulation-based
method to approximate the joint posterior distribution
and to
assess the uncertainty of the parameter estimates with empirical summaries of posterior variability.
At the generic iteration $l+1$, the GS alternates
the following two sampling steps
\begin{eqnarray*}
y_{st}^{(l+1)}|\pi_s^{-1},\underline{p}^{(l)} & \sim & \Exp\left(\sum_{i=1}^K\delta_{sti}p_i^{(l)}\right),\\
p_{i}^{(l+1)}|\underline\pi^{-1},\underline{y}^{(l+1)} & \sim & \text{Ga}\left(c+\gamma_{i},d+\sum_{s=1}^{N}\sum_{t=1}^{n_s}\delta_{sti}y_{st}^{(l+1)}\right),
\end{eqnarray*}
where
the full-conditional of $\underline{y}$
is imposed by the data augmentation assumption~\eqref{e:fullcond} and
the full-conditionals of the $p$'s
belong to the Gamma
family, thanks to the conjugate
prior specification.
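Analogously, one sweep of the two steps above admits a compact base \proglang{R} sketch, with the same data layout and hyperparameter names as in the EM sketch of Section~\ref{s:MAPhomo} (again, not the package implementation):
\begin{CodeChunk}
\begin{CodeInput}
> gibbs_sweep <- function(ord_data, p, shape0 = 1, rate0 = 0.001) {
+   K <- ncol(ord_data)
+   gamma <- tabulate(ord_data[ord_data > 0], nbins = K)
+   rate <- rep(rate0, K)
+   for (s in 1:nrow(ord_data)) {
+     avail <- rep(TRUE, K)
+     for (t in 1:sum(ord_data[s, ] > 0)) {
+       y_st <- rexp(1, rate = sum(p[avail]))       # latent variable step
+       rate[avail] <- rate[avail] + y_st           # accumulates delta_sti * y_st
+       avail[ord_data[s, t]] <- FALSE
+     }
+   }
+   rgamma(K, shape = shape0 + gamma, rate = rate)  # support parameter step
+ }
\end{CodeInput}
\end{CodeChunk}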
\subsection{The finite PL mixture}
\label{ss:hetero}
We now review the proposal recently developed by \cite{Mollica:Tardella2017} to extend the data augmentation approach~\eqref{e:fullcond}
to the finite mixture context.
Formally, the $G$-component PL mixture model assumes that observations are sampled from a heterogeneous population composed of $G$ subpopulations called \textit{mixture components}
\begin{equation}
\label{e:mpl}
\pi_s^{-1}|\underline{p},\uomega\,\overset{\text{iid}}{\sim}\,\sum_{g=1}^G\omega_g\PP_{\text{PL}}(\pi^{-1}_s|\underline{p}_g)
\end{equation}
where each component $g$ follows a basic PL distribution with a specific support parameter vector $\underline{p}_g$ and $\uomega=(\omega_1,\dots,\omega_G)$ are
the \textit{mixture weights}.
Let $\underline{z}_s=(z_{s1},\dots,z_{sG})|\uomega\sim\text{Multinom}(1,\uomega)$ be the vector describing the latent group membership of unit $s$,
such that
\begin{equation*}
z_{sg}=\begin{cases}
1\qquad \text{if unit $s$ belongs to the $g$-th mixture component}, \\
0\qquad \text{otherwise}.
\end{cases}
\end{equation*}
To account for the latent group structure $\underline{z}$, \cite{Mollica:Tardella2017} generalize~\cite{Caron:Doucet}'s approach
with the following conjugate Bayesian model setup
\begin{eqnarray*}
\uomega & \sim & \text{Dir}(\alpha_1,\dots,\alpha_G)\\
p_{gi} & \overset{\text{i}}{\sim} & \text{Ga}(c_{gi},d_g) \\
\underline{z}_s|\uomega & \overset{\text{iid}}{\sim} & \text{Multinom}(1,\uomega)\\
\pi_s^{-1}|\underline{z}_s,\underline{p} & \overset{\text{i}}{\sim} & \prod_{g=1}^G\PP_{\text{PL}}(\pi^{-1}_s|\underline{p}_g)^{z_{sg}} \\
y_{st}|\pi_s^{-1},\underline{z}_s,\underline{p} & \overset{\text{i}}{\sim} & \text{Exp}\left(\prod_{g=1}^G\left(\sum_{i=1}^K\delta_{sti}p_{gi}\right)^{z_{sg}}\right).
\end{eqnarray*}
\subsubsection{MAP estimation via EM algorithm}
\label{sss:MAPhete}
In the mixture setting, the $(l+1)$-th iteration of the EM algorithm consists of updating the unknown quantities, until convergence, according to the following formulas
\begin{align*}
\hat z_{sg}^{(l+1)}&=\dfrac{\omega_g^{(l)}\PP_{\text{PL}}(\pi^{-1}_s|\underline{p}_g^{(l)})}{\sum_{g'=1}^G\omega_{g'}^{(l)}\PP_{\text{PL}}(\pi^{-1}_s|\underline{p}_{g'}^{(l)})},\\[10pt]
\omega_g^{(l+1)}&=\dfrac{\alpha_g-1+\sum_{s=1}^N \hat z_{sg}^{(l+1)}}{\sum_{g'=1}^G\alpha_{g'}-G+N},\\[10pt]
p_{gi}^{(l+1)}&=\dfrac{c_{gi}-1+\hat\gamma_{gi}^{(l+1)}}{d_g+\sum_{s=1}^N \hat z_{sg}^{(l+1)}\sum_{t=1}^{n_s}\dfrac{\delta_{sti}}{\sum_{i'=1}^{K}\delta_{sti'}p_{gi'}^{(l)}}},
\end{align*}
where $\hat\gamma_{gi}^{(l+1)}=\sum_{s=1}^N \hat z_{sg}^{(l+1)}u_{si}$.
Interestingly, under the
noninformative prior setting
($c_{gi}=1$, $d_g=0$ and $\alpha_g=1$), the above MAP procedure recovers the MLE method to infer the PL mixture described by \cite{Gormley:Murphy-Royal}.
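In \proglang{R}, the E-step reduces to a weighted normalization over components. A two-line sketch is given below, where \code{likmat} is a hypothetical $N\times G$ matrix with entries $\PP_{\text{PL}}(\pi^{-1}_s|\underline{p}_g^{(l)})$ and \code{w} holds the current mixture weights:
\begin{CodeChunk}
\begin{CodeInput}
> zhat <- sweep(likmat, 2, w, "*")   # numerators: omega_g times PL likelihood
> zhat <- zhat / rowSums(zhat)       # normalize each row over the G components
\end{CodeInput}
\end{CodeChunk}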
\subsubsection{Gibbs sampling}
\label{sss:GShete}
Thanks to the conjugate prior specification, all the full-conditional distributions have a known form and are easy to sample from.
At the generic iteration $l+1$,
the GS algorithm
consists in iteratively generating random values
from the following full-conditionals
\begin{eqnarray*}
\underline\omega^{(l+1)}|\underline{z}^{(l)} & \sim & \text{Dir}\left(\alpha_1+\sum_{s=1}^Nz_{s1}^{(l)},\dots,\alpha_G+\sum_{s=1}^Nz_{sG}^{(l)}\right)\\[10pt]
y_{st}^{(l+1)}|\pi_s^{-1},\underline{z}_s^{(l)},\underline{p}^{(l)} & \sim & \text{Exp}\left(\prod_{g=1}^G\left(\sum_{i=1}^{K}\delta_{sti}p_{gi}^{(l)}\right)^{z_{sg}^{(l)}}\right)\\[10pt]
p_{gi}^{(l+1)}|\underline\pi^{-1},\underline{y}^{(l+1)},\underline{z}^{(l)} & \sim & \text{Ga}\left(c_{gi}+\gamma_{gi}^{(l)},d_g+\sum_{s=1}^Nz_{sg}^{(l)}\sum_{t=1}^{n_s}\delta_{sti}y_{st}^{(l+1)}\right)\\[10pt]
\underline{z}_s^{(l+1)}|\pi_s^{-1},\underline{y}_s^{(l+1)},\underline{p}^{(l+1)},\underline\omega^{(l+1)} &
\sim & \text{Multinom}\left(1,\left(m_{s1}^{(l+1)},\dots,m_{sG}^{(l+1)}\right)\right)
\end{eqnarray*}
where $\gamma_{gi}^{(l)}=\sum_{s=1}^N z_{sg}^{(l)}u_{si}$ and
$m_{sg}^{(l+1)}\propto\omega_g^{(l+1)}\prod_{i=1}^K(p_{gi}^{(l+1)})^{u_{si}}
e^{-p_{gi}^{(l+1)}\sum_{t=1}^{n_s}\delta_{sti}y_{st}^{(l+1)}},$
see \cite{Mollica:Tardella2017} for more analytical details.
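As an illustration, the last step can be sketched in base \proglang{R} for a single unit $s$, where \code{w}, \code{p_mat} (the $G\times K$ support matrix), \code{u_s} (the 0/1 vector with entries $u_{si}$) and \code{Dy_s} (the vector with entries $\sum_{t=1}^{n_s}\delta_{sti}y_{st}^{(l+1)}$) are hypothetical objects holding the current values:
\begin{CodeChunk}
\begin{CodeInput}
> logm_s <- log(w) + as.vector(log(p_mat) %*% u_s - p_mat %*% Dy_s)
> z_s <- rmultinom(1, size = 1, prob = exp(logm_s - max(logm_s)))
\end{CodeInput}
\end{CodeChunk}
The subtraction of the maximum stabilizes the exponentiation, and \code{rmultinom} normalizes the \code{prob} vector internally.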
The MAP solution represents a suitable starting point to
initialize the GS algorithm.
\subsubsection{Label-switching issue}
\label{sss:LS}
The \textit{label switching} (LS) is an identifiability issue that can hamper the straightforward use of the MCMC simulations for the Bayesian estimation of mixture models \citep{Marin:Mengersen:Robert}.
It reflects the arbitrary attribution of the indices $\{1,\dots,G\}$ to the mixture components, meaning that a relabeling of the latent classes does not modify the resulting sampling distribution.
To solve the LS problem
in the GS output,
we focus
on the relabeling algorithms (RA), where the basic idea
is the \textit{ex-post} relabeling of the raw MCMC samples in order to derive
meaningful posterior estimates.
A comprehensive review
can be found in~\cite{Papastamoulis},
describing their implementation in the
\proglang{R}
package \pkg{label.switching}, that we exploited
to handle the LS in our Bayesian PL mixture applications.
\subsection{Bayesian model comparison criteria}
\label{ss:mc}
A crucial step in the finite mixture analysis
is the determination of the optimal number $\hat G$ of components
that, in general, is not known \textit{a priori}.
\begin{table}[t]
\caption{Model selection criteria implemented in the \pkg{PLMIX} package.}
\renewcommand{\arraystretch}{1.8}
\label{t:Bayselection}
\centering
\begin{tabular}
{ccccc}
\textbf{DIC}$_\textbf{1}$ & \quad & \textbf{BPIC}$_\textbf{1}$ & \quad & \textbf{BICM}$_\textbf{1}$ \\
\hline
$\bar D + (\bar D-D(\hat\theta_{\text{MAP}}))$ & \quad & $\bar D + 2(\bar D-D(\hat\theta_{\text{MAP}}))$ & \quad & $\bar D + \frac{\mathbb{VAR}[D(\theta)|\underline\pi^{-1}]}{2}(\log N -1)$\\
\textbf{DIC}$_\textbf{2}$ & \quad & \textbf{BPIC}$_\textbf{2}$ & \quad & \textbf{BICM}$_\textbf{2}$ \\
\hline
$\bar D + \frac{\mathbb{VAR}[D(\theta)|\underline\pi^{-1}]}{2}$ & \quad & $\bar D + \mathbb{VAR}[D(\theta)|\underline\pi^{-1}]$ & \quad & $D(\hat\theta_{\text{MAP}}) + \frac{\mathbb{VAR}[D(\theta)|\underline\pi^{-1}]}{2}\log N$
\end{tabular}
\end{table}
The \pkg{PLMIX} package includes several Bayesian model selection criteria to compare PL mixture models with a different number of components fitted on the same data set. The considered measures include two alternative versions of each of the following criteria: (i) \textit{Deviance Information Criterion} (DIC), originally defined in~\cite{Spiegelhalter}; (ii) \textit{Bayesian Predictive Information Criterion} (BPIC), proposed by~\cite{Ando} and (iii) \textit{Bayesian Information Criterion-Monte Carlo} (BICM), described in~\cite{Raftery2007}. Their formulas
are recalled in Table~\ref{t:Bayselection},
where
$D(\theta)=-2\log L(\theta)$ denotes the \textit{deviance function} and
$\bar D=\mathbb{E}[D(\theta)|\underline\pi^{-1}]$ is its posterior expectation. For analytic details, see~\cite{Mollica:Tardella2017}.
As apparent in Table~\ref{t:Bayselection}, we advocate
the use of $\hat\theta_\text{MAP}$ as point estimate for the mixture model parameters,
instead of the posterior mean $\mathbb{E}[\theta|\underline\pi^{-1}]$, since
the MAP estimate: (i) straightforwardly provides a meaningful estimate not affected by the LS problem;
(ii) guarantees a positive value of model complexity (\textit{effective number of parameters}); (iii) coincides with the MLE solution $\hat\theta_\text{MLE}$ under uninformative prior specification.
It follows that, given the likelihood invariance described in Section~\ref{sss:LS}, all the considered model comparison measures do not suffer from the presence of LS. Thus, their estimation does not require the preliminary relabeling of the MCMC output.
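In practice, all six criteria are elementary summaries of the posterior deviance sample returned by \code{gibbsPLMIX} (see Section~\ref{ss:estimate}). A sketch is given below, where \code{dev}, \code{D_map} and \code{N} are hypothetical placeholders for the deviance trace, the deviance evaluated at $\hat\theta_{\text{MAP}}$ and the sample size:
\begin{CodeChunk}
\begin{CodeInput}
> D_bar <- mean(dev)   # posterior expected deviance
> V_D <- var(dev)      # posterior variance of the deviance
> DIC1 <- D_bar + (D_bar - D_map)
> DIC2 <- D_bar + V_D / 2
> BPIC1 <- D_bar + 2 * (D_bar - D_map)
> BPIC2 <- D_bar + V_D
> BICM1 <- D_bar + V_D / 2 * (log(N) - 1)
> BICM2 <- D_map + V_D / 2 * log(N)
\end{CodeInput}
\end{CodeChunk}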
\subsection{Bayesian model assessment}
\label{ss:ma}
Evaluating the fitting performance of a parametric model can be less
straightforward in ranking data applications than in other
multivariate
contexts.
In the frequentist domain, for example,
model assessment
is typically addressed with the computation of
the \textit{p-value} associated to a goodness-of-fit statistic,
such as
the likelihood ratio
or Pearson's chi-squared test.
However, in sparse data situations
serious issues arise with this approach, since the chi-squared
distribution of the test statistics under the posited model $H$
no longer applies.
\cite{Cohen:Mallows} suggested to overcome this difficulty by comparing observed and expected frequencies regarding relevant partitions of the ranking space.
The same approach
has been successfully applied also within the Bayesian paradigm,
where the classical test statistic can be generalized into a parameter-dependent quantity,
referred to
as \textit{discrepancy variable} \citep{Gelman:Meng:Stern,Meng}.
In order to assess the adequacy of the Bayesian PL mixture, the \pkg{PLMIX} package provides diagnostic tools
derived from two
significant
summary statistics:
\begin{enumerate} %
\item the most-liked item frequency vector
$\underline{r}(\upi^{-1})$, whose generic entry is
$$r_i(\upi^{-1}) =\sum_{s=1}^N I_{[\pi^{-1}_{s}(1)=i]}$$
corresponding to the number of times that item $i$
is ranked first;
\item
the PC frequency matrix
$\tau(\upi^{-1})$,
whose generic entry is
$$\tau_{ii'}(\upi^{-1})=\sum_{s=1}^N(u_{si}+u_{si'}-u_{si}u_{si'})I_{[\pi_s(i)<\pi_s(i')]}$$
corresponding to the number of times that item $i$
is preferred to item $i'$.
\end{enumerate}
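In practice, the first statistic is a one-liner in base \proglang{R}, whereas the second is returned by the \code{paired_comparisons} function of the package (see Section~\ref{s:package}). For an ordering matrix \code{pi_inv} with zero entries for the unranked positions, one has
\begin{CodeChunk}
\begin{CodeInput}
> r <- tabulate(pi_inv[, 1], nbins = K)   # first-place frequencies r_i
\end{CodeInput}
\end{CodeChunk}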
One could then employ the two
sample quantities to define the
chi-squared discrepancies $X^2_{(1)}(\upi^{-1};\theta)$ and $X^2_{(2)}(\upi^{-1};\theta)$ comparing observed and expected frequencies under the PL mixture scenario $H$, see~\cite{Mollica:Tardella2017} for the explicit formulas.
For a given discrepancy variable $X^2(\upi^{-1};\theta)$,
the posterior predictive check of model goodness-of-fit relies on the computation of the
\textit{posterior predictive $p$-value}
\begin{equation}
\label{e:postp}
p_B=\PP(X^2(\upi_{\text{rep}}^{-1};\theta)\geq X^2(\upi^{-1}_\text{obs};\theta)|\upi^{-1}_\text{obs},H),
\end{equation}
that can be easily approximated
once an MCMC sample
from the posterior distribution is available \citep{Gelman:Meng:Stern}.
Clearly, an efficient simulation device is needed to assist the drawing of replicated datasets $\upi_{\text{rep}}^{-1}$ from the posterior predictive distribution.
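Operationally, once the discrepancy has been evaluated at each MCMC draw on both the observed and the replicated data, the approximation of~\eqref{e:postp} is a one-line Monte Carlo proportion, sketched below with hypothetical vectors \code{X2_obs} and \code{X2_rep} collecting such evaluations:
\begin{CodeChunk}
\begin{CodeInput}
> p_B <- mean(X2_rep >= X2_obs)   # share of draws exceeding the observed value
\end{CodeInput}
\end{CodeChunk}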
\cite{Mollica:Tardella2017} also showed the usefulness of model assessment conditionally on the number $m=1,\dots,K-1$ of ranked items.
To this aim, they introduced two additional discrepancies $\tilde X^2_{(1)}$ and $\tilde X^2_{(2)}$ that
parallel $X^2_{(1)}$ and $X^2_{(2)}$,
given by
\begin{equation*}
\label{e:chitilde}
\tilde X^2_{(1)}(\upi^{-1};\theta)=\sum_{m=1}^{K-1}X^2_{(1)}(\upi^{-1}_m;\theta)
\qquad\qquad
\tilde X^2_{(2)}(\upi^{-1};\theta)= \sum_{m=1}^{K-1}X^2_{(2)}(\upi^{-1}_m;\theta),
\end{equation*}
where the presence of $m$ in the subscript refers to the evaluation of
the discrepancies in the subsample $\upi^{-1}_m=\{\pi^{-1}_s:
n_s=m\}$. The computation of $\tilde p_B$, obtained
from equality~\eqref{e:postp} by replacing $X^2$ with
$\tilde X^2$,
makes it possible to assess the adequacy of the model estimated
on the entire dataset in recovering the considered summary statistics within the subsets of partial orderings of the same length.
Finally, similarly to the model comparison step, the LS adjustment of the posterior samples is not necessary for the posterior predictive check. This is due to
the use of the marginal support parameters $p_i=\sum_{g=1}^G\omega_gp_{gi}$ in the computation of the expected frequencies,
which are invariant to the LS phenomenon.
\section{The PLMIX package}
\label{s:package}
\begin{table}[t]
\centering
\begin{threeparttable}
\caption{Classification of the 24 objects included in the novel \pkg{PLMIX} package.}
\addtolength{\tabcolsep}{-3pt}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{ccccc}
\multicolumn{2}{c}{\textbf{Ranking data manipulation}} & & \multicolumn{2}{c}{\textbf{Ranking data simulation}} \\
\cline{1-2}\cline{4-5}
\color{cyan}{\code{binary_group_ind}} & \color{cyan}{\code{freq_to_unit}} & & \multicolumn{2}{c}{\color{cyan}{\code{rPLMIX}}} \\
\color{cyan}{\code{make_complete}} & \color{cyan}{\code{make_partial}} & & \multicolumn{2}{c}{\textbf{Ranking data description}} \\
\cline{4-5}
\color{cyan}{\code{rank_ord_switch}} & \color{cyan}{\code{unit_to_freq}} & &
\color{cyan}{ \code{paired_comparisons}} & \color{cyan}{\code{rank_summaries}} \\
\multicolumn{2}{c}{\textbf{Model estimation}} & & \multicolumn{2}{c}{\textbf{Model selection}} \\
\cline{1-2}\cline{4-5}
\color{cyan}{ \code{likPLMIX}} & \color{cyan}{\code{loglikPLMIX}} & & \color{cyan}{ \code{selectPLMIX}} & \color{cyan}{\code{bicPLMIX}} \\
\color{cyan}{ \code{mapPLMIX}} & \color{cyan}{\code{mapPLMIX_multistart}} & & \multicolumn{2}{c}{\textbf{Model assessment \& \textbf{LS}}} \\
\cline{4-5}
\color{cyan}{\code{gibbsPLMIX}} & & & \color{cyan}{\code{ppcheckPLMIX}} & \color{cyan}{\code{ppcheckPLMIX_cond}} \\
& & & \color{cyan}{\code{label_switchPLMIX}} & \\
\multicolumn{5}{c}{\textbf{Data}} \\
\hline
\multicolumn{5}{c}{\color{BurntOrange}{\code{d\char`_apa}}\qquad\color{BurntOrange}{\code{d\char`_carconf}}\qquad\color{BurntOrange}{\code{d\char`_dublinwest}}\qquad\color{BurntOrange}{\code{d\char`_german}}\qquad\color{BurntOrange}{\code{d\char`_nascar}}} \\
\hline
\end{tabular}
\label{t:overview}
\begin{tablenotes}
\small
\item[\color{cyan}{$\blacksquare$}] object of class \code{"function"}\qquad\color{BurntOrange}{$\blacksquare$} \color{black}{object of class \code{"matrix"}}
\end{tablenotes}
\end{threeparttable}
\end{table}
The novel \pkg{PLMIX} is
the first \proglang{R} package
devoted to Bayesian inference for partially ranked data.
More specifically, \pkg{PLMIX} performs Bayesian estimation of ranking models
by focusing on the PL and its finite mixture extension as the sampling distribution.
In the present setting,
the MLE approach is recovered as a special case of the Bayesian analysis
with a noninformative (flat) prior specification.
To address the issue of computationally demanding procedures, typical in ranking contexts,
\pkg{PLMIX} can take advantage of a hybrid code linking
the \proglang{R} environment with the \proglang{C++} programming language. The parallelization option is also implemented, such that finite mixtures with a different number of components can be simultaneously analyzed.
\pkg{PLMIX} contains 24 objects visible to the user, classified according to their task in Table~\ref{t:overview}. There are 19 objects of class \code{"function"} and 5 datasets.
As revealed by the overview,
the novel
package provides a suite of functions assisting each step of the ranking data
analysis.
In fact, in addition to data manipulation tools, descriptive summaries and estimation techniques, the package assists other fundamental phases of the PL mixture analysis, such as the selection of the optimal number of components and the goodness-of-fit assessment, aimed at a more critical exploration of the group structure in the sample. The treatment of the LS problem is also supported in our package.
The 5 datasets are all provided in ordering format as objects of class \code{"matrix"}.
Missing positions/items in the partial top orderings are denoted with zero entries and Rank $=$ 1 indicates the most-liked alternative.
The next subsections illustrate in greater detail the application of the \pkg{PLMIX} commands to simulated and real ranking data.
\subsection{Ranking data manipulation: Dublin West and German sample data}
\label{ss:manipulation}
Before performing a ranking data analysis, it is important to know exactly the format of the data at hand and to employ the suitable one, in order to avoid erroneous implementations or misleading interpretations.
The preliminary conversion of the data into the appropriate format
can be performed by means of the
\code{rank_ord_switch} function,
switching
from orderings to rankings and vice-versa for both complete and partial
observations.
The following instructions show the simple application of the \code{rank_ord_switch} routine to the first 6 partial orderings of the 2002 Dublin West election dataset \citep{Mattei:Walsh:2013} called \code{d_dublinwest}, in order to convert them into the ranking format. After loading the package and the data
\begin{CodeChunk}
\begin{CodeInput}
> data(d_dublinwest)
> head(d_dublinwest)
\end{CodeInput}
\begin{CodeOutput}
rank1 rank2 rank3 rank4 rank5 rank6 rank7 rank8 rank9
[1,] 7 9 4 2 8 0 0 0 0
[2,] 5 3 7 6 0 0 0 0 0
[3,] 5 7 3 0 0 0 0 0 0
[4,] 9 2 7 0 0 0 0 0 0
[5,] 3 2 0 0 0 0 0 0 0
[6,] 5 3 2 0 0 0 0 0 0
\end{CodeOutput}
\end{CodeChunk}
one can apply the command as follows
\begin{CodeChunk}
\begin{CodeInput}
> rank_ord_switch(data=head(d_dublinwest), format="ordering")
\end{CodeInput}
\begin{CodeOutput}
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9]
[1,] 0 4 0 3 0 0 1 5 2
[2,] 0 0 2 0 1 4 3 0 0
[3,] 0 0 3 0 1 0 2 0 0
[4,] 0 2 0 0 0 0 3 0 1
[5,] 0 2 1 0 0 0 0 0 0
[6,] 0 3 2 0 1 0 0 0 0
\end{CodeOutput}
\end{CodeChunk}
where the input arguments are: i) \code{data}: the numeric $N\times K$ data
matrix of partial sequences to be converted, ii) \code{format}: the character string
indicating the format of the input \code{data}
and iii) \code{nranked}: the optional numeric vector of length $N$ with the number of items ranked by each sample unit (default is \code{NULL}).
Another useful task is the aggregation of the replicated sequences in the observed dataset. The \code{unit_to_freq} routine constructs the frequency distribution of the observed sequences from the dataset of individual rankings/orderings supplied in the single argument \code{data}. Here is the output of \code{unit_to_freq} when applied to the German Sample dataset \code{d_german}, collecting complete orderings of $K=4$ political goals
\begin{CodeChunk}
\begin{CodeInput}
> data(d_german)
> unit_to_freq(data=d_german)
\end{CodeInput}
\begin{CodeOutput}
[,1] [,2] [,3] [,4] [,5]
[1,] 1 2 3 4 137
[2,] 1 2 4 3 29
[3,] 1 3 2 4 309
[4,] 1 3 4 2 52
[5,] 1 4 2 3 255
[6,] 1 4 3 2 93
[7,] 2 1 3 4 48
[8,] 2 1 4 3 23
[9,] 2 3 1 4 330
[10,] 2 3 4 1 21
[11,] 2 4 1 3 294
[12,] 2 4 3 1 30
[13,] 3 1 2 4 61
[14,] 3 1 4 2 33
[15,] 3 2 1 4 117
[16,] 3 2 4 1 29
[17,] 3 4 1 2 70
[18,] 3 4 2 1 35
[19,] 4 1 2 3 55
[20,] 4 1 3 2 59
[21,] 4 2 1 3 69
[22,] 4 2 3 1 52
[23,] 4 3 1 2 34
[24,] 4 3 2 1 27
\end{CodeOutput}
\end{CodeChunk}
The observed frequencies are indicated in the last $(K+1)$-th column. The frequency distribution helps to explore the possible presence of multimodal patterns in the sample and to compare the observed frequencies with those expected under specific parametric assumptions. Additionally, it can be exploited to prepare the data for the analysis with methods implemented in other \proglang{R} packages requiring the aggregate format.
Conversely, the \code{freq_to_unit} function expands the frequency distribution supplied in the argument \code{freq_distr}
into the dataset of individual rankings/orderings.
In the following toy example, we consider a synthetic sample of size $N=6$ with 2 top-1, 1 top-2 and 3 top-3 partial rankings
\begin{CodeChunk}
\begin{CodeInput}
> obs_rankings <- rbind(c(0,0,1,0), c(0,1,0,2), c(4,1,2,3))
> freq_to_unit(freq_distr=cbind(obs_rankings, c(2,1,3)))
\end{CodeInput}
\begin{CodeOutput}
[,1] [,2] [,3] [,4]
[1,] 0 0 1 0
[2,] 0 0 1 0
[3,] 0 1 0 2
[4,] 4 1 2 3
[5,] 4 1 2 3
[6,] 4 1 2 3
\end{CodeOutput}
\end{CodeChunk}
Further helpful commands for data manipulation are \code{make_partial} and \code{make_complete}, which can be regarded as specular operations.
The former allows for the truncation of complete sequences according
to different censoring patterns, either in a
deterministic
or a random way.
The deterministic approach requires the user to specify the number of top positions to be retained for each sample unit in the \code{nranked} argument. The random approach, instead, makes use of the probabilities of top-1, top-2, \dots, top-$(K-1)$ censoring patterns, supplied in the \code{probcens} vector, to perform a stochastic truncation of the complete sequences.
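A deterministic truncation of the \code{d_german} dataset to the top-2 positions, for instance, can be sketched as
\begin{CodeChunk}
\begin{CodeInput}
> d_german_top2 <- make_partial(data=d_german, format="ordering",
+                               nranked=rep(2, nrow(d_german)))
\end{CodeInput}
\end{CodeChunk}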
For example,
a random truncation
of the \code{d_german} dataset with a 60\% overall rate of censored observations and equal chance of top-1 and top-2 orderings can be obtained with the following code
\begin{CodeChunk}
\begin{CodeInput}
> set.seed(57524)
> d_german_cens <- make_partial(data=d_german, format="ordering",
+ probcens=c(0.3, 0.3, 0.4))
\end{CodeInput}
\end{CodeChunk}
It returns a list with two named objects given by the numeric data matrix \code{partialdata} of censored sequences and the numeric vector \code{nranked} with the number of items ranked by each sample unit after the random censoring. Here is the code to extract them and to verify the consistency of the resulting censored dataset with the nominal probability values specified in the \code{probcens} argument
\begin{CodeChunk}
\begin{CodeInput}
> head(d_german_cens$partialdata)
\end{CodeInput}
\begin{CodeOutput}
[,1] [,2] [,3] [,4]
[1,] 1 2 0 0
[2,] 1 0 0 0
[3,] 1 2 0 0
[4,] 1 2 3 4
[5,] 1 2 0 0
[6,] 1 0 0 0
\end{CodeOutput}
\begin{CodeInput}
> round(table(d_german_cens$nranked)/nrow(d_german), 2)
\end{CodeInput}
\begin{CodeOutput}
1 2 4
0.30 0.29 0.41
\end{CodeOutput}
\end{CodeChunk}
The \code{make_partial} function is especially useful in simulation studies to investigate the impact of the censoring mechanism on the ability of the estimation procedures to recover the true generating distribution and, additionally, to verify their robustness to the censoring rate. See, for example, the simulation study in~\cite{Mollica:Tardella2017}.
Conversely, the \code{make_complete} function is conceived for the completion of partial orderings by filling in the missing (zero) positions/items
with the remaining not-selected alternatives. More specifically, the
completion of the partial data is performed with the random procedure
determined by the Plackett-Luce scheme, that is, with a sampling without
replacement of the
unranked items. To this aim, the positive values specified in the \code{probitems} argument are used as support parameters. For instance, the random completion of the \code{d_dublinwest} dataset with decreasing support over the $K=9$ candidates can be implemented as follows
\begin{CodeChunk}
\begin{CodeInput}
> set.seed(57524)
> d_dublinwest_compl <- make_complete(data=d_dublinwest, format="ordering",
+ probitems=ncol(d_dublinwest):1)
> head(d_dublinwest_compl$completedata)
\end{CodeInput}
\begin{CodeOutput}
rank1 rank2 rank3 rank4 rank5 rank6 rank7 rank8 rank9
[1,] 7 9 4 2 8 3 6 5 1
[2,] 5 3 7 6 2 8 1 4 9
[3,] 5 7 3 4 1 2 8 6 9
[4,] 9 2 7 1 4 3 6 5 8
[5,] 3 2 5 6 4 7 1 8 9
[6,] 5 3 2 1 7 8 4 6 9
\end{CodeOutput}
\end{CodeChunk}
Other possible input values for the vector \code{probitems} could be the observed frequencies with which each item has been ranked in the first position, in order to preserve the univariate features of the observed sample.
\subsection{Ranking data simulation and likelihood function: simulated data}
\label{ss:simulation}
Data simulation and likelihood evaluation are essential tasks that must be suitably implemented in a model-oriented statistical package.
The random generation of complete orderings is accomplished with the \code{rPLMIX} routine.
A random sample of $N=5$ complete orderings of $K=6$ items can be drawn from a $3$-component PL mixture with parameters
$$\underline{p}=
\begin{pmatrix}
1 & 2 & 3 & 4 & 5 & 6 \\
6 & 5 & 4 & 3 & 2 & 1 \\
1 & 1 & 1 & 1 & 1 & 1 \\
\end{pmatrix}
\qquad\uomega=(0.50, 0.25, 0.25)$$
with the following instructions
\begin{CodeChunk}
\begin{CodeInput}
> K <- 6
> p_par <- rbind(1:K, K:1, rep(1, K))
> w_par <- c(0.50, 0.25, 0.25)
> set.seed(57524)
> simulated_data <- rPLMIX(n=5, K=K, G=3, p=p_par, weights=w_par,
+ format="ordering")
\end{CodeInput}
\end{CodeChunk}
where the argument \code{p} requires the numeric $G\times K$ matrix of the component-specific support parameters
and \code{weights} is the vector of mixture weights.
If $G>1$, the \code{rPLMIX} function returns a list of two
named objects corresponding, respectively, to the
vector \code{comp} of simulated
component memberships and to the
matrix \code{sim_data} of
simulated orderings, given by
\begin{CodeChunk}
\begin{CodeInput}
> simulated_data$comp
\end{CodeInput}
\begin{CodeOutput}
[1] 1 2 3 1 1
\end{CodeOutput}
\begin{CodeInput}
> simulated_data$sim_data
\end{CodeInput}
\begin{CodeOutput}
[,1] [,2] [,3] [,4] [,5] [,6]
[1,] 2 6 5 4 1 3
[2,] 2 4 3 1 5 6
[3,] 3 6 2 1 5 4
[4,] 4 3 5 6 1 2
[5,] 3 2 5 6 4 1
\end{CodeOutput}
\end{CodeChunk}
As evident in equation~\eqref{e:plobsloglik}, the calculation of the PL log-likelihood is computationally intensive, especially for large data sets, since the normalization of the support parameters varies across sample units and is
performed sequentially
in the ranking process. Of course, the computational demand increases in the finite mixture setting. On the other hand, an efficient evaluation of the likelihood is crucial for the application of iterative optimization methods such as the EM algorithm, both in the MLE perspective and in the MAP estimation detailed in Section~\ref{sss:MAPhete}.
In this regard, the \code{loglikPLMIX} function included in the \pkg{PLMIX} package
calls a \proglang{C++} routine from \proglang{R}
to reduce the computational
burden. To show
the efficiency of the \code{loglikPLMIX} function
for the
evaluation of the log-likelihood~\eqref{e:plobsloglik}, we first simulated a large
dataset of $N=15000$ orderings of $K=6$ items
from the (default) uniform ranking model, corresponding to the PL with
constant support parameters
\begin{CodeChunk}
\begin{CodeInput}
> K <- 6
> set.seed(57524)
> unif_data <- rPLMIX(n=15000, K=K, G=1, format="ordering")
\end{CodeInput}
\end{CodeChunk}
Then we compared the time needed to evaluate the log-likelihood with \code{loglikPLMIX} and with the
\code{Likelihood.PL} command of the \pkg{StatRank} package
\begin{CodeChunk}
\begin{CodeInput}
> PLpar <- rep(1, K)
> system.time(loglikPLMIX(p=t(PLpar), ref_order=t(1:K), weights=1,
+ pi_inv=unif_data))
\end{CodeInput}
\begin{CodeOutput}
user system elapsed
0.005 0.000 0.005
\end{CodeOutput}
\begin{CodeInput}
> library(StatRank)
> system.time(Likelihood.PL(Data=unif_data, parameter=list(m=K, Mean=PLpar)))
\end{CodeInput}
\begin{CodeOutput}
user system elapsed
0.181 0.002 0.182
\end{CodeOutput}
\end{CodeChunk}
Finally, notice that the \code{rPLMIX} and \code{loglikPLMIX} functions share the
\code{ref_order} argument for the \textit{reference order}
parameters of the mixture of Extended Plackett-Luce models (EPL)
introduced by~\cite{Mollica:Tardella}. The traditional PL is a special
instance of the EPL with reference order parameter equal to the identity
permutation $(1,\dots,K)$. Since the current version of \pkg{PLMIX}
implements the mixture of PL models, the \code{ref_order} argument
must be a matrix whose $G$ rows are all equal to the identity permutation.
\subsection{Ranking data description: CARCONF data}
\label{ss:descriptive}
Useful utilities to conduct a preliminary exploratory analysis
are included in \pkg{PLMIX}. Unlike similar functions from other packages, these utilities can handle partial observations. To this purpose,
the main command is \code{rank_summaries}, which computes summary statistics and censoring patterns for a partial ordering/ranking dataset.
The basic application of the \code{rank_summaries} routine requires the same inputs (\code{data} and \code{format})
as the \code{rank_ord_switch} function. For the \code{d_carconf} dataset, the command returns the following information
\begin{CodeChunk}
\begin{CodeInput}
> data(d_carconf)
> rank_summaries(data=d_carconf, format="ordering")
\end{CodeInput}
\begin{CodeOutput}
$nranked
[1] 6 6 6 6 4 4 3 6 6 6 3 6 6 4 6 2 6 6 6 6 6 6 2 6 6 6 6 6 6 6 6 6 6
[34] 3 6 6 6 6 6 6 6 6 6 3 6 6 6 6 6 4 6 4 6 3 3 4 6 6 6 3 6 4 6 6 6 6
[67] 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 4 6 6 6 6 6 6 6 3 3 6 6
[100] 6 4 4 6 3 4 4 6 6 6 6 6 6 4 6 6 6 6 4 6 6 4 6 6 4 6 6 6 6 6 6 6 6
[133] 6 6 6 6 6 6 4 4 4 6 6 6 6 4 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 4 4 6 3
[166] 6 6 6 6 6 6 6 3 6 6 3 6 6 6 6 6 6 6 4 6 6 6 4 6 6 4 6 6 6 6 6 6 6
[199] 6 6 3 3 6 6 6 6 6 6 6 2 6 6 2 6 6 4 6 6 6 6 2 6 6 6 6 6 6 6 6 6 6
[232] 6 4 6 6 6 6 6 6 6 6 2 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 4 6 6 4 6 3 4
[265] 4 6 6 6 6 6 6 6 6 6 2 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 4 6
[298] 6 6 6 1 6 6 6 6 6 6 6 6 6 6 6 6 6 4 6 6 6 6 6 6 6 6 4 6 2 6 6 6 4
[331] 6 6 6 4 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6
[364] 6 6 6 6 6 6 6 6 6 4 6 6 4 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6
[397] 6 6 6 6 6 6 6 6 4 4 6 6 6 6 6 3 4 3 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6
[430] 6 6 4 6 4 6
$nranked_distr
Top-1 Top-2 Top-3 Top-4 Top-6
1 8 18 43 365
$missing_pos
[1] 42 17 0 29 62 27
$mean_rank
[1] 3.559796 2.882775 3.165517 3.113300 4.493298 3.203431
$marginal_rank_distr
Item 1 Item 2 Item 3 Item 4 Item 5 Item 6
Rank 1 86 101 87 78 28 55
Rank 2 53 87 85 86 27 96
Rank 3 46 84 78 76 46 96
Rank 4 61 74 81 69 51 72
Rank 5 57 50 62 72 74 50
Rank 6 90 22 42 25 147 39
$pairedcomparisons
Item 1 Item 2 Item 3 Item 4 Item 5 Item 6
Item 1 0 171 179 178 250 181
Item 2 257 0 238 237 329 242
Item 3 256 197 0 230 324 226
Item 4 243 193 205 0 306 225
Item 5 148 92 111 105 0 106
Item 6 239 187 209 199 307 0
\end{CodeOutput}
\end{CodeChunk}
The resulting list includes the following named objects:
\begin{description}[leftmargin=!,labelwidth=\widthof{\code{marginal\char`_rank\char`_distr}}]
\item[\code{nranked}] numeric vector with the number of items ranked by each sample unit;
\item[\code{nranked\char`_distr}] the frequency distribution of the \code{nranked} vector;
\item[\code{missing\char`_pos}] numeric vector with the number of missing positions for each item;
\item[\code{mean\char`_rank}] numeric vector with the mean rank of each item;
\item[\code{marginal\char`_rank\char`_distr}] numeric $K\times K$ matrix of the marginal rank distributions;
\item[\code{pairedcomparisons}] numeric $K\times K$ matrix of PCs.
\end{description}
Specifically, the first row of the matrix \code{marginal\char`_rank\char`_distr}, labeled as \code{Rank 1}, corresponds to the vector $\underline{r}(\upi^{-1})$, whereas the matrix \code{pairedcomparisons} is $\tau(\upi^{-1})$.
The command \code{rank_summaries} has additional logical arguments
indicating whether the mean rank vector, the marginal rank distribution and the PC frequencies have to be actually computed (default is \code{TRUE}). The PC matrix is implemented in \proglang{C++} to speed up the execution and can also be computed separately with the \code{paired_comparisons} function.
As better detailed in Section~\ref{ss:assess}, descriptive summaries are also involved in the model assessment step to investigate the compatibility between the observed dataset and specific parametric assumptions. Thus, their efficient implementation is crucial to reduce the computational time needed for the goodness-of-fit diagnostics.
\subsection{Model estimation: APA data}
\label{ss:estimate}
The core
inferential
part of the \pkg{PLMIX} package consists of the following three functions, fitting a Bayesian $G$-component PL mixture according to the estimation procedures reviewed in Sections \ref{sss:MAPhete} and \ref{sss:GShete}
\begin{description}[leftmargin=!,labelwidth=\widthof{\code{mapPLMIX\char`_multistart}}]
\item[\code{mapPLMIX}]
maximizes the posterior
distribution
via EM algorithm and returns the MAP point estimate of the PL mixture parameters;
\item[\code{mapPLMIX\char`_multistart}]
does the same with multiple
starting
values, in order to address the issue of possible local maxima in the posterior distribution;
\item[\code{gibbsPLMIX}] implements the MCMC posterior simulation via GS,
aimed at quantifying estimation uncertainty from a fully Bayesian perspective.
\end{description}
The above functions can be conveniently applied in a sequential way:
first the MAP procedure can be launched with multiple starting values by using
\code{mapPLMIX_multistart} and, then, the resulting MAP estimate
can be employed to initialize the MCMC chain in the \code{gibbsPLMIX}
command.
Since the PL
is parametrized by the item-specific quantities $\underline{p}$
governing the sequential
drawings of the items in order of preference,
the ordering format $\underline\pi^{-1}$
is the natural choice for the input dataset
of the inferential process.
For this reason, all the functions concerning model estimation
share the \code{pi_inv} argument, indicating the numeric $N\times K$ matrix of observed partial top orderings.
Here is an example illustrating how to obtain the posterior
mode for a Bayesian 3-component PL mixture fitted to the \code{d_apa}
dataset
under the
noninformative
prior scenario
\begin{CodeChunk}
\begin{CodeInput}
> data(d_apa)
> set.seed(57524)
> MAP_3 <- mapPLMIX_multistart(pi_inv=d_apa, K=5, G=3,
+ n_start=30, n_iter=400*3, centered_start=TRUE, parallel=TRUE)
\end{CodeInput}
\end{CodeChunk}
We run the EM algorithm with \code{n_start=30} starting values
which, if not supplied by the user in the \code{init} argument, are
randomly generated from a uniform distribution (default). The optional \code{centered_start} input is a logical value to constrain the random starting values to be centered around the
observed relative frequency that each item has been ranked
first. Additionally, the \code{hyper} argument contains the
hyperparameters values ($c_{gi}$, $d_g$ and $\alpha_g$) of the
conjugate prior setting arranged in a list of objects named \code{shape0}, \code{rate0} and \code{alpha0}. By default, flat priors are assumed, implying that the MAP estimate
coincides with the MLE solution. From a computational point of view,
note the logical argument
\code{parallel}
that allows to parallelize
the initializations and,
hence,
to
significantly
reduce
the execution time.
The \code{mapPLMIX_multistart} function automatically selects the best solution in terms of maximum value of the posterior distribution and returns a list containing the main information on the implemented MAP procedure. The MAP estimates of the component-specific support parameters and the mixture weights can be extracted by accessing the corresponding list elements
as follows
\begin{CodeChunk}
\begin{CodeInput}
> MAP_3$mod$P_map
\end{CodeInput}
\begin{CodeOutput}
[,1] [,2] [,3] [,4] [,5]
[1,] 0.06247449 0.03295813 0.01664217 0.51188738 0.37603783
[2,] 0.27331708 0.04903217 0.61671929 0.02382562 0.03710584
[3,] 0.18807113 0.22080423 0.14093403 0.22727853 0.22291209
\end{CodeOutput}
\begin{CodeInput}
> MAP_3$mod$W_map
\end{CodeInput}
\begin{CodeOutput}
[1] 0.1035369 0.2732693 0.6231937
\end{CodeOutput}
\end{CodeChunk}
The model-based clustering of the sample units into the $G=3$ mixture components based on the MAP allocation
is recorded in the list element named \code{class_map}. For the \code{d_apa} example,
the class distribution turns out to be
\begin{CodeChunk}
\begin{CodeInput}
> table(MAP_3$mod$class_map)
\end{CodeInput}
\begin{CodeOutput}
1 2 3
621 4106 10722
\end{CodeOutput}
\end{CodeChunk}
Notice that a PL mixture can be fitted in \proglang{R} with the function \code{Estimation.RUM.MultiType.MLE} of the \pkg{StatRank} package, by specifying the exponential distribution for the latent random utility. Unfortunately, the long computational time makes the implementation of PL mixtures unfeasible
for a large dataset such as \code{d_apa}. Indeed, the comparison of the timings elapsed for fitting the PL reported in \cite{PlackettLuce} shows that \pkg{PLMIX} remarkably outperforms all the other packages dealing with the PL in terms of computational efficiency.
Subsequently, we can perform an approximation of the posterior distribution by means of the GS simulation
implemented in the \code{gibbsPLMIX} command. An example to run the GS initialized with the MAP estimates just obtained from the EM algorithm is
\begin{CodeChunk}
\begin{CodeInput}
> set.seed(57524)
> GIBBS_3 <- gibbsPLMIX(pi_inv=d_apa, K=5, G=3, init=list(p=MAP_3$mod$P_map,
+ z=binary_group_ind(MAP_3$mod$class_map,G=3)), n_iter=22000,
+ n_burn=2000)
\end{CodeInput}
\end{CodeChunk}
In the \code{init} argument, the user can provide the list of initial values for the support parameters \code{p} and the binary component membership indicators \code{z}. For the latter, \pkg{PLMIX} offers the utility \code{binary_group_ind} converting the vector of group labels into the binary matrix $\underline{z}$. If \code{init} values are not supplied, random initialization from the uniform distribution is performed (default). Additionally, \code{n_iter} and \code{n_burn} correspond to the total number of GS drawings and the length of the burn-in phase, implying that the final posterior MCMC sample has size $L=\text{\code{n\char`_iter}}-\text{\code{n\char`_burn}}$. The output is a list of named objects including
the parameter drawings
\begin{CodeChunk}
\begin{CodeInput}
> round(head(GIBBS_3$P), 3)
\end{CodeInput}
\begin{CodeOutput}
p1,1 p2,1 p3,1 p1,2 p2,2 p3,2 p1,3 p2,3 p3,3
[1,] 0.110 0.656 0.390 0.250 0.119 0.022 0.196 0.030 0.307
[2,] 0.099 0.655 0.339 0.244 0.083 0.022 0.204 0.033 0.427
[3,] 0.097 0.543 0.358 0.260 0.100 0.012 0.192 0.036 0.413
[4,] 0.104 0.647 0.346 0.258 0.083 0.022 0.177 0.032 0.385
[5,] 0.107 0.633 0.336 0.249 0.061 0.032 0.199 0.031 0.337
[6,] 0.114 0.580 0.426 0.237 0.098 0.032 0.201 0.044 0.306
p1,4 p2,4 p3,4 p1,5 p2,5 p3,5 p1,6 p2,6 p3,6
[1,] 0.199 0.094 0.064 0.078 0.008 0.004 0.168 0.093 0.212
[2,] 0.206 0.107 0.068 0.075 0.008 0.005 0.171 0.114 0.138
[3,] 0.203 0.127 0.060 0.076 0.008 0.004 0.172 0.185 0.153
[4,] 0.207 0.106 0.081 0.077 0.011 0.005 0.176 0.121 0.160
[5,] 0.202 0.102 0.094 0.077 0.012 0.006 0.166 0.161 0.195
[6,] 0.193 0.115 0.057 0.078 0.022 0.005 0.177 0.141 0.173
\end{CodeOutput}
\begin{CodeInput}
> round(head(GIBBS_3$W), 3)
\end{CodeInput}
\begin{CodeOutput}
w1 w2 w3
[1,] 0.858 0.070 0.072
[2,] 0.888 0.066 0.046
[3,] 0.913 0.045 0.042
[4,] 0.885 0.073 0.042
[5,] 0.868 0.080 0.051
[6,] 0.893 0.051 0.056
\end{CodeOutput}
\end{CodeChunk}
and the posterior likelihood and deviance values at each iteration
\begin{CodeChunk}
\begin{CodeInput}
> head(GIBBS_3$log_lik)
\end{CodeInput}
\begin{CodeOutput}
[1] -2715.182 -2714.959 -2720.991 -2716.492 -2716.751 -2715.400
\end{CodeOutput}
\begin{CodeInput}
> head(GIBBS_3$deviance)
\end{CodeInput}
\begin{CodeOutput}
[1] 5430.365 5429.919 5441.981 5432.983 5433.502 5430.799
\end{CodeOutput}
\end{CodeChunk}
\subsection{Model comparison: APA data}
\label{ss:comparison}
The \code{selectPLMIX} function assists the user in the choice of the number of mixture components
via computation
of the
criteria described in Section~\ref{ss:mc}.
Let us suppose that Bayesian PL mixtures have been fitted to the \code{d_apa} dataset with $G$ varying from 1 to 3 with the code just described in Section \ref{ss:estimate}. The comparison of the three estimated mixtures can be performed with the following instruction
\begin{CodeChunk}
\begin{CodeInput}
> SELECT <- selectPLMIX(pi_inv=d_apa, seq_G=1:3, parallel=TRUE,
+ MAPestP=list(MAP_1$mod$P_map, MAP_2$mod$P_map, MAP_3$mod$P_map),
+ MAPestW=list(MAP_1$mod$W_map, MAP_2$mod$W_map, MAP_3$mod$W_map),
+ deviance=list(GIBBS_1$deviance, GIBBS_2$deviance, GIBBS_3$deviance))
\end{CodeInput}
\end{CodeChunk}
Besides the number of components of the competing mixtures specified in the vector \code{seq_G}, the command requires the lists of the point estimates and the posterior \code{deviance} values. More specifically, the function privileges the use of the MAP estimates \code{MAPestP} and \code{MAPestW} but, by setting them to \code{NULL}, the user can alternatively compute the selection measures by relying on a different posterior summary (\code{"mean"} or \code{"median"}) specified in the \code{post_summary} argument. In the latter case, the command also needs the MCMC samples to compute the desired posterior summary, which have to be supplied in the \code{MCMCsampleP} and \code{MCMCsampleW} arguments. The drawback when working with point estimates other than the MAP is that the presence of LS has to be removed from the traces beforehand to obtain meaningful results. Notice also the \code{parallel} option to parallelize the computation over the alternative numbers of groups specified in the \code{seq_G} argument.
The final values of the criteria can be extracted by typing
\begin{CodeChunk}
\begin{CodeInput}
> SELECT$selection_criteria
\end{CodeInput}
\begin{CodeOutput}
DIC1 DIC2 BPIC1 BPIC2 BICM1 BICM2
G=1 103204.4 103204.3 103208.3 103208.0 103233.0 103232.9
G=2 100771.9 100772.7 100779.4 100780.9 100835.6 100836.3
G=3 100591.1 100593.0 100601.3 100605.1 100685.7 100687.6
\end{CodeOutput}
\end{CodeChunk}
In this example, the decreasing trend of all the measures clearly
suggests that more complex mixtures with additional components should be explored. Finally, in the case of an uninformative analysis, a comparison with the frequentist solution is allowed. In this regard, the BIC value is returned by the \code{mapPLMIX_multistart} when flat priors are adopted. For the three mixtures, one has
\begin{CodeChunk}
\begin{CodeInput}
> rbind(MAP_1$mod$bic, MAP_2$mod$bic, MAP_3$mod$bic)
\end{CodeInput}
\begin{CodeOutput}
[,1]
[1,] 5475.685
[2,] 5484.724
[3,] 5504.845
\end{CodeOutput}
\end{CodeChunk}
Alternatively, the computation of the BIC can be accomplished with the \code{bicPLMIX} utility which, similarly to the \code{loglikPLMIX} function, accommodates the more general EPL mixture setting.
\subsection{Model assessment: APA data}
\label{ss:assess}
The posterior predictive check, unconditionally and conditionally on
the
length of the partial sequences, can be performed,
respectively, with the
\code{ppcheckPLMIX} and \code{ppcheckPLMIX_cond} functions. As
described in Section~\ref{ss:ma}, the model assessment tools require
the simulation of a replicated dataset from the posterior predictive
distribution for each
GS drawing.
This means
that the execution time depends on both the sample sizes $N$ and $L$
and, hence, the computation of goodness-of-fit diagnostics is
particularly time-consuming. Thanks to the combination of the
\proglang{R}
and \proglang{C++} languages, the assessment
of ranking models becomes feasible with the \pkg{PLMIX} package,
even for moderately large datasets.
The code to perform the posterior predictive check based on
$ X^2_{(1)}$ and $ X^2_{(2)}$ and to extract the corresponding $p$-values is
\begin{CodeChunk}
\begin{CodeInput}
> set.seed(57524)
> CHECK <- ppcheckPLMIX(pi_inv=d_apa, seq_G=1:3, parallel=TRUE,
+ MCMCsampleP=list(GIBBS_1$P, GIBBS_2$P, GIBBS_3$P),
+ MCMCsampleW=list(GIBBS_1$W, GIBBS_2$W, GIBBS_3$W))
> CHECK$post_pred_pvalue
\end{CodeInput}
\begin{CodeOutput}
post_pred_pvalue_top1 post_pred_pvalue_paired
G_1 0 0.0000
G_2 0 0.6330
G_3 0 0.4805
\end{CodeOutput}
\end{CodeChunk}
The syntax is similar to that shown for the \code{selectPLMIX} command, with the difference that the lists \code{MCMCsampleP} and \code{MCMCsampleW} collecting the MCMC samples are necessary inputs for the posterior predictive simulation.
Similarly, the script for the conditional posterior predictive check based on
$\tilde X^2_{(1)}$ and $\tilde X^2_{(2)}$ is
\begin{CodeChunk}
\begin{CodeInput}
> set.seed(57524)
> CHECKCOND <- ppcheckPLMIX_cond(pi_inv=d_apa, seq_G=1:3, parallel=TRUE,
+ MCMCsampleP=list(GIBBS_1$P, GIBBS_2$P, GIBBS_3$P),
+ MCMCsampleW=list(GIBBS_1$W, GIBBS_2$W, GIBBS_3$W))
> CHECKCOND$post_pred_pvalue
\end{CodeInput}
\begin{CodeOutput}
post_pred_pvalue_top1_cond post_pred_pvalue_paired_cond
G_1 0 0
G_2 0 0
G_3 0 0
\end{CodeOutput}
\end{CodeChunk}
Recall that under correct model specification, $p_B$ values are expected to be centered around 0.5, whereas values smaller than 0.05 are typically regarded as an indication of model lack-of-fit. In this example, the posterior predictive check conditionally on the number of ranked items reveals the inadequacy of all the estimated mixtures for both summary statistics. This should be interpreted as an indication that a better account of the missingness mechanism is needed and, hence, a separate PL mixture analysis on each subsample $\upi^{-1}_m$ would be preferable. See \cite{Mollica:Tardella2017} for a more in-depth analysis of the \code{d_apa} dataset.
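As a minimal sketch of such a subsample analysis (with illustrative tuning settings only, and assuming that unranked positions in \code{d_apa} are coded as 0, with $K=5$ candidates in the APA election), the subsamples can be formed from the number of ranked items of each sequence and then fitted separately
\begin{CodeChunk}
\begin{CodeInput}
> n_ranked <- rowSums(d_apa > 0)
> for(m in sort(unique(n_ranked))){
+ set.seed(57524)
+ assign(paste0("MAP_sub_", m),
+ mapPLMIX_multistart(pi_inv=d_apa[n_ranked == m, ], K=5, G=3,
+ n_start=2, n_iter=400, parallel=TRUE))
+ }
\end{CodeInput}
\end{CodeChunk}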
\subsection{Label switching adjustment: simulated data}
\label{ss:ls}
The \code{label_switchPLMIX} command can be employed to remove the possible presence of LS in the posterior MCMC samples. This step is necessary to derive meaningful point estimates other than the MAP and the related uncertainty measures. The function relies on the application of the Pivotal Reordering Algorithm (PRA) proposed by~\cite{Marin:Mengersen:Robert} by means of a call to the \code{pra} routine of the \proglang{R} package \pkg{label.switching}~\citep{Papastamoulis}.
To illustrate the LS adjustment, we first generated a sample of $N=300$ orderings of $K=4$ items from a 2-component PL mixture
\begin{CodeChunk}
\begin{CodeInput}
> p_par <- rbind(c(.7,.2,.08,.02), c(.55,.3,.03,.12))
> w_par <- c(0.7, 0.3)
> set.seed(70476)
> sim_orderings <- rPLMIX(n=300, K=4, G=2, p=p_par,
+ weights=w_par, format="ordering")$sim_data
\end{CodeInput}
\end{CodeChunk}
With this
parameter setting, the component-specific
modal orderings turn out to be adjacent
in terms of the Kendall distance, since only their last two positions are switched. Of course, the closeness of the PL components facilitates the occurrence of LS.
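As a quick check in base \proglang{R}, the component-specific modal orderings can be recovered by sorting the support parameters of each component in decreasing order
\begin{CodeChunk}
\begin{CodeInput}
> t(apply(p_par, 1, order, decreasing=TRUE))
\end{CodeInput}
\begin{CodeOutput}
     [,1] [,2] [,3] [,4]
[1,]    1    2    3    4
[2,]    1    2    4    3
\end{CodeOutput}
\end{CodeChunk}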
Then, we fitted the 2-PL mixture with uninformative priors
by means of
the EM algorithm
and finally we used the resulting MAP solutions to initialize the GS
\begin{CodeChunk}
\begin{CodeInput}
> set.seed(70476)
> MAP <- mapPLMIX_multistart(pi_inv=sim_orderings, K=4, G=2,
+ n_start=30, n_iter=1000, parallel=TRUE)
> MAP$mod$P_map
\end{CodeInput}
\begin{CodeOutput}
[,1] [,2] [,3] [,4]
[1,] 0.6535795 0.252212857 0.061412559 0.03279511
[2,] 0.5023897 0.001954253 0.002959641 0.49269641
\end{CodeOutput}
\begin{CodeInput}
> MAP$mod$W_map
\end{CodeInput}
\begin{CodeOutput}
[1] 0.96190009 0.03809991
\end{CodeOutput}
\begin{CodeInput}
> set.seed(70476)
> GIBBS <- gibbsPLMIX(pi_inv=sim_orderings, K=4, G=2,
+ init=list(p=MAP$mod$P_map, z=binary_group_ind(MAP$mod$class_map, G=2)))
\end{CodeInput}
\end{CodeChunk}
\begin{figure}[t]
\centering
\includegraphics[scale=.6]{Rplot-LSeffect.pdf}
\caption{Traceplots of mixture weights before and after the application of the PRA.}
\label{fig:LS}
\end{figure}
The two samples of the mixture weights are shown in Figure~\ref{fig:LS} (left). The occurrence of LS is evidenced by the multiple swaps of the traceplots, indicating several transitions of the sampler from one artificial mode to another. Indeed, a remarkable percentage of the chain is affected by LS, leading to similar (and invalid) posterior means. Those of the support parameters are
\begin{CodeChunk}
\begin{CodeInput}
> matrix(colMeans(GIBBS$P), ncol=4)
\end{CodeInput}
\begin{CodeOutput}
          [,1]      [,2]       [,3]       [,4]
[1,] 0.5761867 0.2621617 0.07650548 0.08514619
[2,] 0.5415158 0.2795709 0.08259536 0.09631793
\end{CodeOutput}
\end{CodeChunk}
The post-processing of the raw MCMC output with the PRA
can be implemented as follows
\begin{CodeChunk}
\begin{CodeInput}
> LS <- label_switchPLMIX(pi_inv=sim_orderings, seq_G=2,
+ MCMCsampleP=list(GIBBS$P), MCMCsampleW=list(GIBBS$W),
+ MAPestP=list(MAP$mod$P_map), MAPestW=list(MAP$mod$W_map))
\end{CodeInput}
\end{CodeChunk}
whose input values are defined as for the model selection and posterior predictive check functions.
The traceplots in Figure~\ref{fig:LS} (right) reveal a very good performance of the PRA in removing the artificial multimodality. The adjusted
posterior summaries are
\begin{CodeChunk}
\begin{CodeInput}
> apply(LS$final_sampleP$G_2, 2, rowMeans)
\end{CodeInput}
\begin{CodeOutput}
[,1] [,2] [,3] [,4]
[1,] 0.6742229 0.2278699 0.06266152 0.03524561
[2,] 0.4434795 0.3138627 0.09643932 0.14621852
\end{CodeOutput}
\begin{CodeInput}
> colMeans(LS$final_sampleW$G_2)
\end{CodeInput}
\begin{CodeOutput}
[1] 0.8511395 0.1488605
\end{CodeOutput}
\end{CodeChunk}
We can
detect
a certain discrepancy between the adjusted GS estimates and the true parameter values, although the actual order of the support parameters within each group is fully recovered.
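For instance, the discrepancy can be quantified by differencing the adjusted posterior means and the true support parameters
\begin{CodeChunk}
\begin{CodeInput}
> round(apply(LS$final_sampleP$G_2, 2, rowMeans) - p_par, 3)
\end{CodeInput}
\begin{CodeOutput}
       [,1]  [,2]   [,3]  [,4]
[1,] -0.026 0.028 -0.017 0.015
[2,] -0.107 0.014  0.066 0.026
\end{CodeOutput}
\end{CodeChunk}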
As expected, when the two mixture components considerably overlap, it is more difficult to reconstruct the actual group membership of the sample units,
with consequent negative effects on the final estimates.
On the other hand, the performance of the GS turns out to be better than the MAP estimate
(MLE solution), since the latter completely fails to infer the minor mixture component.
\subsection{A comparison with the prefmod package: CARCONF data}
\label{ss:apprealCARCONF}
To further highlight the possible advantages of the \pkg{PLMIX} package, a comparison with some methods implemented in the \proglang{R} package \pkg{prefmod}~\citep{Hatz:Ditt:2012} is provided. \pkg{prefmod} is a flexible package for the analysis of preference data expressed in the form of PCs. The same framework is also applicable to ranking data. A ranking of $K$ items, in fact, can be decomposed into the equivalent \textit{pattern} of $K(K-1)/2$ PCs, where the alternatives are compared two at a time and the preferred one is specified. For this reason, ranking models based on PCs are also referred to as \textit{pattern models}.
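As a toy illustration with base \proglang{R} commands only (not part of \pkg{prefmod}), the pattern associated with a single ranking can be derived by comparing the ranks of all pairs of items: in the matrix below, entry $(i,j)$ equals 1 when item $i$ is preferred to item $j$, so the $K(K-1)/2$ entries above the diagonal encode the whole ranking
\begin{CodeChunk}
\begin{CodeInput}
> r <- c(2, 1, 3)
> 1*outer(r, r, "<")
\end{CodeInput}
\begin{CodeOutput}
     [,1] [,2] [,3]
[1,]    0    0    1
[2,]    1    0    1
[3,]    0    0    0
\end{CodeOutput}
\end{CodeChunk}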
To explore the unobserved sample heterogeneity of the CARCONF data with the \pkg{prefmod} package, we considered the nonparametric maximum likelihood approach (NPML) described in \cite{Hatz:Ditt:2012} and estimated pattern models with discrete random effects. In this way, the resulting NPML clustering of the sample units into latent classes can be more straightforwardly compared with the classification via finite PL mixtures.
Since the NPML method in \pkg{prefmod} accepts only full observations as input data, we first performed a completion of the partial ordering dataset \code{d_carconf} with the function \code{make_complete}, by using the frequencies $\underline{r}(\upi^{-1})$ for the random imputation
\begin{CodeChunk}
\begin{CodeInput}
> N <- nrow(d_carconf)
> K <- ncol(d_carconf)
> summaries <- rank_summaries(data=d_carconf, format="ordering",
+ mean_rank=FALSE, pc=TRUE)
> top_freq <- summaries$marginals["Rank_1",]
> set.seed(57524)
> d_carconf_compl <- make_complete(data=d_carconf, format="ordering",
+ probitems=top_freq)$completedata
\end{CodeInput}
\end{CodeChunk}
and we then converted the dataset into a \code{data.frame} of rankings with labeled columns denoting the $K=6$ car modules
\begin{CodeChunk}
\begin{CodeInput}
> d_carconf_compl_r <- data.frame(rank_ord_switch(d_carconf_compl,
+ format="ordering"))
> names(d_carconf_compl_r) <- c("price", "exterior", "brand",
+ "tech.equip", "country", "interior")
\end{CodeInput}
\end{CodeChunk}
After constructing the design matrix needed for the \pkg{prefmod} commands
\begin{CodeChunk}
\begin{CodeInput}
> library(prefmod)
> dsg <- patt.design(obj=d_carconf_compl_r, nitems=K,
+ objnames=names(d_carconf_compl_r), resptype="ranking")
\end{CodeInput}
\end{CodeChunk}
four random effects pattern models were estimated
with the function \code{pattnpml.fit}, by varying the number of latent classes from $G=1$ to $G=4$
\begin{CodeChunk}
\begin{CodeInput}
> npml1 <- pattnpml.fit(formula= y ~ price + exterior + brand +
+ tech.equip + country + interior, k=1, design=dsg, seed=57524)
> npml2 <- pattnpml.fit(formula= y ~ 1, random= ~price + exterior + brand +
+ tech.equip + country + interior, k=2, design=dsg, seed=57524)
> npml3 <- update(npml2, k=3)
> npml4 <- update(npml2, k=4)
\end{CodeInput}
\end{CodeChunk}
The corresponding BIC values are listed below
\begin{CodeChunk}
\begin{CodeInput}
> BIC(npml1, npml2, npml3, npml4)
\end{CodeInput}
\begin{CodeOutput}
df BIC
npml1 6 1385.398
npml2 12 1398.626
npml3 18 1431.977
npml4 24 1468.038
\end{CodeOutput}
\end{CodeChunk}
suggesting the homogeneous ($G=1$) pattern model as the optimal one (minimum BIC value).
For comparison purposes, we re-fitted the selected 1-class pattern model within the MLE framework and computed the corresponding BIC
\begin{CodeChunk}
\begin{CodeInput}
> patt.mod <- pattR.fit(obj=d_carconf_compl_r, nitems=K,
+ obj.names=names(d_carconf_compl_r))
> -2*patt.mod$ll + (K-1)*log(N)
\end{CodeInput}
\begin{CodeOutput}
[1] 5509.968
\end{CodeOutput}
\end{CodeChunk}
By adopting the MAP procedure with flat priors to fit $G$-component PL mixtures with $G=1,\dots,4$ and to recover the MLE solutions, we obtained the following BIC results
\begin{CodeChunk}
\begin{CodeInput}
> for(i in 1:4){
+ set.seed(57524)
+ assign(paste0("MAP_",i), mapPLMIX_multistart(pi_inv=d_carconf_compl, K=K,
+ G=i, n_start=30, n_iter=400*i, parallel=TRUE))
+ }
> rbind(MAP_1$mod$bic, MAP_2$mod$bic, MAP_3$mod$bic, MAP_4$mod$bic)
\end{CodeInput}
\begin{CodeOutput}
[,1]
[1,] 5475.685
[2,] 5484.724
[3,] 5504.845
[4,] 5530.541
\end{CodeOutput}
\end{CodeChunk}
Interestingly, the minimum BIC value is still achieved in correspondence with the homogeneous model, but it turns out to be significantly smaller than that associated with the pattern model, meaning that the PL assumption considerably improves the fit of the CARCONF data. To stress the importance of goodness-of-fit diagnostics, we also checked the ability of the two frequentist models to recover the sample statistics described in Section~\ref{ss:ma}, computed as follows
\begin{CodeChunk}
\begin{CodeInput}
> summaries <- rank_summaries(data=d_carconf_compl, format="ordering",
+ mean_rank=FALSE, pc=TRUE)
> top_freq <- summaries$marginals["Rank_1",]
> pc_freq <- summaries$pairedcomparisons
> pc_freq <- pc_freq[lower.tri(pc_freq)]
\end{CodeInput}
\end{CodeChunk}
By adopting the traditional chi-squared test, for the 1-class pattern model we obtained
\begin{CodeChunk}
\begin{CodeInput}
> worthPATT <- patt.worth(patt.mod)
> chisq.test(x=top_freq, p=c(worthPATT), correct=FALSE,
+ rescale.p=TRUE)$p.value
\end{CodeInput}
\begin{CodeOutput}
[1] 0.000244741
\end{CodeOutput}
\begin{CodeInput}
> df2 <- K*(K-1)/2-1
> n.tot.matches <- rep(N, df2+1)
> exp.freq.pcPATT <- Freq_th(p=worthPATT, n.matches=n.tot.matches)[,2]
> obs.chisq.pcPATT <- sum((pc_freq-exp.freq.pcPATT)^2/exp.freq.pcPATT)
> pchisq(q=obs.chisq.pcPATT, df=df2, lower.tail=FALSE)
\end{CodeInput}
\begin{CodeOutput}
[1] 1.39225e-13
\end{CodeOutput}
\end{CodeChunk}
where the function \code{patt.worth} returns the estimated support parameters of the pattern model, needed for the computation of the expected frequencies. As is evident, both $p$-values are well below the critical threshold 0.05, indicating a remarkably poor fit.
However, some deficiencies in recovering the marginal most-liked item distribution can be highlighted also for the 1-component PL mixture, whereas they do not seem to emerge for the PCs
\begin{CodeChunk}
\begin{CodeInput}
> chisq.test(x=top_freq, p=c(MAP_1$mod$P_map), correct=FALSE,
+ rescale.p=TRUE)$p.value
\end{CodeInput}
\begin{CodeOutput}
[1] 0.000244741
\end{CodeOutput}
\begin{CodeInput}
> exp.freq.pcPL <- Freq_th(p=MAP_1$mod$P_map, n.matches=n.tot.matches)[,2]
> obs.chisq.pcPL <- sum((pc_freq-exp.freq.pcPL)^2/exp.freq.pcPL)
> pchisq(q=obs.chisq.pcPL, df=df2, lower.tail=FALSE)
\end{CodeInput}
\begin{CodeOutput}
[1] 0.9834508
\end{CodeOutput}
\end{CodeChunk}
We finally estimated Bayesian PL mixtures up to $G=4$ components
by means of the GS algorithm. The MCMC chains were initialized with the MAP solutions and a sample of $\text{\code{n\char`_iter}}=22000$ drawings was obtained for each mixture, including a burn-in phase of $\text{\code{n\char`_burn}}=2000$ iterations
\begin{CodeChunk}
\begin{CodeInput}
> for(i in 1:4){
+ set.seed(57524)
+ assign(paste0("GIBBS_",i), gibbsPLMIX(pi_inv=d_carconf_compl, K=K, G=i,
+ init=list(p=get(paste0("MAP_",i))$mod$P_map,
+ z=binary_group_ind(get(paste0("MAP_",i))$mod$class_map,G=i)),
+ n_iter=22000, n_burn=2000))
+ }
\end{CodeInput}
\end{CodeChunk}
The Bayesian model selection criteria are equal to
\begin{CodeChunk}
\begin{CodeInput}
> selectPLMIX(pi_inv=d_carconf_compl, seq_G=1:4,
+ MAPestP=list(MAP_1$mod$P_map, MAP_2$mod$P_map,
+ MAP_3$mod$P_map, MAP_4$mod$P_map),
+ MAPestW=list(MAP_1$mod$W_map, MAP_2$mod$W_map,
+ MAP_3$mod$W_map, MAP_4$mod$W_map),
+ deviance=list(GIBBS_1$deviance, GIBBS_2$deviance,
+ GIBBS_3$deviance, GIBBS_4$deviance))$selection_criteria
\end{CodeInput}
\begin{CodeOutput}
DIC1 DIC2 BPIC1 BPIC2 BICM1 BICM2
G_1 5455.352 5455.504 5460.374 5460.678 5476.590 5476.742
G_2 5442.707 5442.993 5455.114 5455.684 5494.715 5495.000
G_3 5443.550 5445.584 5464.543 5468.612 5539.429 5541.463
G_4 5453.448 5446.901 5484.768 5471.674 5547.859 5541.312
\end{CodeOutput}
\end{CodeChunk}
where, with the exception of the BICMs, the minimum values are reached by the 2-component PL mixture. The evidence in favour of unobserved sample heterogeneity is reinforced by the posterior predictive $p$-values, given by
\begin{CodeChunk}
\begin{CodeInput}
> set.seed(57524)
> ppcheckPLMIX(pi_inv=d_carconf_compl, seq_G=1:4,
+ MCMCsampleP=list(GIBBS_1$P, GIBBS_2$P, GIBBS_3$P, GIBBS_4$P),
+ MCMCsampleW=list(GIBBS_1$W, GIBBS_2$W, GIBBS_3$W, GIBBS_4$W),
+ parallel=TRUE)$post_pred_pvalue
\end{CodeInput}
\begin{CodeOutput}
post_pred_pvalue_top1 post_pred_pvalue_paired
G_1 0.00025 0.33580
G_2 0.09930 0.51095
G_3 0.10675 0.50740
G_4 0.10385 0.49965
\end{CodeOutput}
\end{CodeChunk}
which, for $G>1$, are all above the reference threshold 0.05.
The posterior samples for the optimal Bayesian 2-component PL mixture can finally be adjusted to remove LS
\begin{CodeChunk}
\begin{CodeInput}
> LS <- label_switchPLMIX(pi_inv=d_carconf_compl, seq_G=2,
+ MCMCsampleP=list(GIBBS_2$P), MCMCsampleW=list(GIBBS_2$W),
+ MAPestP=list(MAP_2$mod$P_map), MAPestW=list(MAP_2$mod$W_map))
\end{CodeInput}
\end{CodeChunk}
The final posterior means and standard deviations can be easily computed as follows
\begin{CodeChunk}
\begin{CodeInput}
> round(colMeans(LS$final_sampleW$G_2), 3)
\end{CodeInput}
\begin{CodeOutput}
[1] 0.769 0.231
\end{CodeOutput}
\begin{CodeInput}
> round(apply(LS$final_sampleP$G_2, 2, rowMeans), 3)
\end{CodeInput}
\begin{CodeOutput}
[,1] [,2] [,3] [,4] [,5] [,6]
[1,] 0.092 0.261 0.182 0.195 0.069 0.201
[2,] 0.445 0.113 0.176 0.135 0.041 0.090
\end{CodeOutput}
\begin{CodeInput}
> round(apply(LS$final_sampleW$G_2, 2, sd), 3)
\end{CodeInput}
\begin{CodeOutput}
[1] 0.098 0.098
\end{CodeOutput}
\begin{CodeInput}
> round(apply(LS$final_sampleP$G_2, c(1,2), sd), 3)
\end{CodeInput}
\begin{CodeOutput}
[,1] [,2] [,3] [,4] [,5] [,6]
[1,] 0.014 0.019 0.016 0.014 0.007 0.020
[2,] 0.124 0.039 0.059 0.036 0.018 0.027
\end{CodeOutput}
\end{CodeChunk}
\section{Conclusions}
\label{s:concl}
When approaching a ranking data analysis, several issues may arise, mainly due to the peculiar structure of ranked sequences as multivariate ordinal data. First, ranking data take values in the finite discrete set of permutations, whose size $K!$ explodes with the total number of items to be ranked. In this perspective, some related difficulties are the possible occurrence of sparse data situations and the need for a manageable exploration of the ranking space. Secondly, the presence of partial observations adds further complications. When the sample is composed of complete sequences, in fact, the PL could be estimated with methods related to log-linear models~\citep{Fienberg} but, in the case of partial orderings, the likelihood has to be suitably updated and the existing methods are no longer applicable. All these issues lead to computationally demanding methods and to the need for developing specialized software to avoid prohibitive execution times. This, in turn, has traditionally been an obstacle to a wider use of more sophisticated models.
In order to efficiently address the aforementioned practical issues
and
promote the effective
exploration of methodological
advances,
we developed the
\proglang{R} package \pkg{PLMIX}. This is the first package in \proglang{R} that implements
ranking data analysis from a Bayesian inferential perspective. It relies on a hybrid code combining \proglang{R} and \proglang{C++} and exploits the advantages of both programming languages, in particular the flexibility of the \proglang{R} environment and the speed of compiled \proglang{C++} code to guarantee computational efficiency. \pkg{PLMIX} is not limited to inferential techniques but represents a comprehensive toolkit, paying special attention to each step of the
Bayesian PL mixture approach to identify clusters of rankers with similar preferences.
In this regard, the comparative application in Section~\ref{ss:apprealCARCONF} demonstrates the effectiveness of the Bayesian PL mixture as a profitable parametric alternative for model-based clustering of partial ranking data in \proglang{R}, and the usefulness of the novel goodness-of-fit diagnostics introduced by \cite{Mollica:Tardella2017}.
As possible directions for future work, the functions of the \pkg{PLMIX} package could be further extended to the Bayesian analysis of EPL mixtures, introduced by \cite{Mollica:Tardella} in the frequentist domain, or to accommodate the additional information provided by subject- and/or item-specific covariates. We also note that \proglang{R} routines to handle the presence of ties in the ranking outcome are scarcely available, and this can stimulate a further improvement. Finally, visualization techniques for the graphical illustration of ranking data analyses could also be included in a forthcoming version of the \pkg{PLMIX} package.
\newcommand{\noop}[1]{}
|
2,869,038,154,949 | arxiv | \section{introduction}
Gamma-ray burst (GRB) 060218 associated with SN 2006aj discovered by
Swift (Campana et al. 2006) provides a new example of low-luminosity
GRBs (LL-GRBs), as its isotropic equivalent energy
($\sim6\times10^{49}$ erg) is 100 to 1000 times smaller, and its
duration ($T_{90}=2100\pm100$ s) much longer, than those of
conventional high-luminosity GRBs. More interestingly, besides the
usual non-thermal component in its early X-ray spectrum, a
surprising thermal component was observed by the Swift XRT during
both the burst and afterglow phases. Fitting with a blackbody spectrum,
the temperature of this thermal component was inferred to be
$kT\sim0.17$ keV during the first 3 ks. When $t>$10 ks, however, the
peak energy of the blackbody decreased and then passed through the
Swift UVOT energy range at $\sim$100 ks (Campana et al. 2006;
Blustin 2007).
To explain the prompt emission, in principle, a model based on the
internal dissipation of relativistic ejecta may be valid in the case
of GRB 060218. The relativistic GRB ejecta, which interacts with a
dense wind surrounding the progenitor, has also been required to
understand a power-law decaying afterglow in X-ray and radio bands
about $\sim$10 ks after the burst, although some complications
beyond the standard afterglow model are involved (Soderberg et al.
2006; Fan et al. 2006; Waxman et al. 2007). Furthermore, as
suggested by Wang \& M\'esz\'aros (2006), the soft X-ray thermal
emission could arise from a shock breakout, namely, a hot cocoon
that breaks out from the supernova ejecta and the stellar wind. In
detail, a more rapid part of a jet moving in the envelope and the
dense wind is accelerated to a highly-relativistic velocity to
produce the GRB, while a slower part of the jet together with the
outermost parts of the envelope becomes a mildly relativistic cocoon
(M\'esz\'aros \& Rees 2001; Ramirez-Ruiz et al. 2002; Zhang et al.
2003), which is located behind the GRB blast wave.
Although the GRB blast wave runs in front of the shock breakout, the
thermal emission from the latter outshines the former persistently
until the emission of the breakout is switched off. Thus, the
emission properties of the GRB blast wave (consisting of external
shock-accelerated electrons and protons) should be influenced by the
incoming thermal photons significantly during both burst and early
afterglow phases. On one hand, the cooling of the relativistic
electrons, which upscatter the thermal photons, could be dominated
by inverse-Compton (IC) radiation rather than synchrotron radiation
(Wang \& M\'esz\'aros 2006). On the other hand, as inferred from the
observations, the intensity of the thermal emission could be
comparable in the same band to that of the prompt emission due to
internal dissipation and much larger than that of the afterglow
emission due to an external shock. Therefore, the thermal photons as
target photons for photopion interactions could also play an
important role in the energy loss of the relativistic protons and
thus influence or even dominate neutrino emission of the GRB blast
wave.
It has been widely studied that conventional GRBs in the standard
internal-external shock model emit high energy neutrinos during the
burst, early afterglow, and X-ray flare phases (Waxman \& Bahcall
1997, 2000; Dai \& Lu 2001; Dermer 2002; Dermer \& Atoyan 2003;
Asano 2005; Murase \& Nagataki 2006a, 2006b; Murase 2007; Gupta \&
Zhang 2007). In contrast to the conventional GRBs, LL-GRBs may have
a much higher event rate (several hundred to a thousand events $\rm
Gpc^{-3}yr^{-1}$), as inferred from the fact that two typical
nearby LL-GRBs, i.e., GRB 060218 and GRB 980425, have been observed
within a relatively short period of time (Cobb et al. 2006;
Soderberg 2006; Liang et al. 2007). The high event rate implies that
the contribution from LL-GRBs to the diffuse neutrino background may
be important and that we have more opportunities to detect neutrinos
from a very nearby single LL-GRB event. Therefore, Murase et al.
(2006) and Gupta \& Zhang (2006) recently studied the neutrino
emission properties of LL-GRBs during their burst phase using the
internal shock model, in which the target photons for photopion
interactions are mainly provided by internal shock-driven
non-thermal emission. In this paper, however, we will focus on the
early afterglow neutrino emission. During this phase, relativistic
protons are accelerated by an external shock (rather than internal
shocks) and target photons are dominated by the incoming thermal
emission and even its IC scattered component.
This paper is organized as follows. In section 2 we briefly describe
the dynamics of a GRB blast wave propagating into a surrounding
dense wind. In section 3, we give the photon distribution in the
blast wave by considering the incoming thermal emission and its IC
scattered component, but the weak synchrotron radiation of the
electrons is ignored. In section 4, neutrino spectra are derived
formally with an energy loss timescale of protons due to photopion
interactions. In section 5, we calculate the timescale and then the
neutrino spectra using the target photon spectra obtained in section
3 and an empirical fitting formula for the cross section of
photopion interactions. In addition, the peak of a neutrino spectrum
is also estimated analytically by using the $\Delta$-approximation.
Finally, a summary is given in section 6.
\section{dynamics of a GRB blast wave}
We consider a GRB jet with isotropic equivalent energy
$E=10^{50}E_{50}~{\rm erg}$ (hereafter $Q_{x}=Q/10^{x}$) expanding
into a dense wind medium with density profile $\rho(r)=Ar^{-2}$.
Here, the coefficient $A$ is determined by the mass loss rate and
velocity of the wind of the progenitor, i.e., $A=\dot{M}/4\pi v_{\rm
w}=5.0\times10^{11}{\rm g~cm^{-1}}A_{*}$, where
$A_{*}\equiv[\dot{M}/(10^{-5}{M_{\odot}~\rm yr^{-1}})][v_{\rm
w}/(10^3\rm km~s^{-1})]^{-1}$. From Dai \& Lu (1998) and Chevalier
\& Li (2000), we get the Lorentz factor and radius of the GRB blast
wave (i.e., the external-shocked wind gas) respectively as
\begin{equation}
\Gamma=\left({9E\over128\pi
Ac^3t}\right)^{1/4}=3.6~E_{50}^{1/4}A_{*}^{-1/4}t_3^{-1/4},
\end{equation}
\begin{equation}
r=\left({9Et\over2\pi Ac}\right)^{1/2}=3.1\times10^{15}{\rm
cm}~E_{50}^{1/2}A_{*}^{-1/2}t_3^{1/2}.
\end{equation}
They satisfy $r=8\Gamma^2ct$, which gives rise to a relationship,
$t'=(16/3)\Gamma t$, between the dynamic time $t'$ measured in the
rest frame of the blast wave and the observed time $t$ (Dai \& Lu
1998).
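This relation follows directly from the above dynamics: $r=8\Gamma^2ct$ with $\Gamma\propto t^{-1/4}$ implies $dr/dt=r/(2t)=4\Gamma^2c$, and the comoving time element is $dt'\simeq dr/(\Gamma c)=4\Gamma\,dt$, so that
\begin{displaymath}
t'=\int_0^t4\Gamma(\tilde{t})\,d\tilde{t}
=4\Gamma(t)\,t^{1/4}\int_0^t\tilde{t}^{-1/4}\,d\tilde{t}
={16\over3}\Gamma(t)\,t.
\end{displaymath}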
As the circum-burst wind materials are swept up and shocked, most of the heated electrons before cooling
concentrate at the minimum Lorentz factor $\gamma'_{e, m}\sim\bar{\epsilon}_{e}{m_{p}\over
m_{e}}\Gamma=659~\bar{\epsilon}_{e,-1}E_{50}^{1/4}A_{*}^{-1/4}t_3^{-1/4}$. Here
$\bar{\epsilon}_{e}\equiv\epsilon_{e}(p-2)/(p-1)$, where $\epsilon_{e}$ is the usual equipartition factor of the
hot electrons and $p$ is the electrons' energy distribution index (only $p>2$ is considered here). Meanwhile, a
fraction $\epsilon_{B}$ of the internal energy is assumed to be occupied by a magnetic field, and then the
strength of the magnetic field is calculated by $B'\sim(32\pi \epsilon_{B}\Gamma^2\rho c^2)^{1/2}=78{\rm
G}~\epsilon_{B,-1}^{1/2}E_{50}^{-1/4}A_{*}^{3/4}t_3^{-3/4}$. Finally, the remaining energy (a fraction
$\epsilon_p=1-\epsilon_e-\epsilon_B$) is carried by the accelerated protons. For these protons, we can estimate
their maximum energy, $E_{p,\rm max}=2eB'r/3=4.8\times10^{10}{\rm
GeV}~\epsilon_{B,-1}^{1/2}E_{50}^{1/4}A_{*}^{1/4}t_3^{-1/4}$, by equating the acceleration time to the shorter of
the dynamic time and the synchrotron cooling time (Razzaque et al. 2006). However, the minimum energy of the
protons is unknown, but the corresponding Lorentz factor $\gamma'_{p,\rm min}$ is thought to be close to
$\sim\Gamma$.
\section{photon emission}
The photons in the GRB blast wave have two origins, i.e., the blast
wave itself and the inner supernova shock breakout. The electrons in
the blast wave emit photons via synchrotron and IC scattering
processes. Moreover, as analyzed by Wang \& M\'esz\'aros (2006), the
synchrotron radiation (peaking within the X-ray band) of the blast wave
electrons is inferred from the observations to be much weaker than
the incoming thermal emission, and thus the cooling of the electrons
should be dominated by their IC scattering off the thermal photons.
Therefore, in the following calculations, we consider the thermal
emission and its subsequent IC scattered component only.
The properties of the supernova shock breakout remain unclear to
date. We suppose that it has a constant blackbody temperature of
$kT=0.1 {\rm keV}(kT)_{-1}$ and a constant radius of the emission
region of $R=10^{12}{\rm cm}R_{12}$. The lifetime $t_{\rm SB}$ of
this high-temperature emission is about thousands of seconds, which
is considered to be several to several tens of times longer than the
duration of the GRB. Then, the isotropic equivalent luminosity and
energy of the shock breakout can be estimated by $L_{\rm SB}=4\pi
R^2\sigma T^4=1.3\times 10^{45}{\rm erg~s^{-1}}~(kT)_{-1}^4R_{12}^2$
and $E_{\rm SB}=1.3\times 10^{48}{\rm erg}~(kT)_{-1}^4R_{12}^2t_{\rm
SB,3}$, respectively. Meanwhile, it is easy to write the
monochromatic number density of these thermal photons at the
breakout as
\begin{equation}
n(E_{\gamma})={8\pi\over h^3c^3}{E_{\gamma}^2\over
\exp(E_{\gamma}/kT)-1}= {8\pi k^2T^2\over
h^3c^3}\phi\left({E_{\gamma}\over kT}\right),\label{nphbo}
\end{equation}
where the function $\phi(x)=x^2/(e^{x}-1)$. Assuming that the
photons propagate freely before they reach the GRB blast wave at
radius $r$, we can calculate the density of the thermal photons in
the blast wave by multiplying a factor $(R/r)^2$ to Eq.
(\ref{nphbo}). Subsequently, after Lorentz transformation, we obtain
the density of the incoming photons in the blast wave measured in
its rest frame by
\begin{equation}
n'_{\rm in}(E'_{\gamma,\rm in})={R^2\over r^2}n(\Gamma
E'_{\gamma,\rm in})={R^2\over r^2}{8\pi k^2T^2\over
h^3c^3}\phi\left({3E'_{\gamma,\rm in}\over E'_{\gamma,\rm
pk1}}\right),\label{nphin}
\end{equation}
where $E'_{\gamma,\rm pk1}\equiv 3kT/\Gamma$ is the peak energy of
the blackbody spectrum. When these photons cross the blast wave, a
part of them should be upscattered by the relativistic electrons.
The energy of the IC scattered photons can be estimated by
${E'}_{\gamma, \rm IC}=2{\gamma'}_{e,m}^2E'_{\gamma,\rm in}$ and the
corresponding density by
\begin{eqnarray}
n'_{\rm IC}({E'}_{\gamma, \rm IC}) & = & {\tau\over
2{\gamma'}_{e,m}^2} n'_{\rm in}\left({{E'}_{\gamma, \rm IC}}\over
2{\gamma'}_{e,m}^2\right)\nonumber \\ & = & {\tau\over
2{\gamma'}_{e,m}^2}{R^2\over r^2}{8\pi k^2T^2\over
h^3c^3}\phi\left({3E'_{\gamma,\rm IC}\over E'_{\gamma,\rm
pk2}}\right),\label{nphic}
\end{eqnarray}
where $E'_{\gamma,\rm pk2}\equiv 6{\gamma'}_{e,m}^2kT/\Gamma$. The
probability of the scattering is represented by the photon optical
depth of the blast wave, $ \tau=\sigma_{\rm
T}({A/m_pr})=6.5\times10^{-5}~E_{50}^{-1/2}A_{*}^{3/2}t_3^{-1/2}$,
where $\sigma_{\rm T}$ is the Thomson cross section. According to
this estimation, Wang \& M\'esz\'aros (2006) predicted that the
early afterglow spectra of GRB 060218-like GRBs may have a bimodal
profile peaking at
\begin{equation}
{E}_{\gamma,\rm pk1}=0.3~{\rm keV}~(kT)_{-1}\label{egb1}
\end{equation}
and
\begin{equation}
{E}_{\gamma,\rm pk2}=0.26~{\rm GeV}~\bar{\epsilon}_{\rm
e,-1}^{2}(kT)_{-1}E_{50}^{1/2}A_{*}^{-1/2}t_3^{-1/2}.\label{egb2}
\end{equation}
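As a quick consistency check, Eqs. (\ref{egb1}) and (\ref{egb2}) satisfy ${E}_{\gamma,\rm pk2}=2{\gamma'}_{e,m}^2{E}_{\gamma,\rm pk1}$; with ${\gamma'}_{e,m}\simeq659$ for the fiducial parameters, this gives $2\times659^2\times0.3~{\rm keV}\simeq0.26$ GeV, in agreement with Eq. (\ref{egb2}).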
Thus, a significant sub-GeV or GeV emission component accompanying
the thermal emission would be detectable with the upcoming
\textit{Gamma-ray Large Area Space Telescope}, which could provide
evidence for the GRB jet.
\section{neutrino production}
Since relativistic protons in the GRB blast wave are immersed in the photon field described above, the protons
would lose their energy to produce mesons such as $\pi^0$ and $\pi^\pm$, and subsequently generate neutrinos
by the decay of $\pi^{\pm}$, i.e., $\pi^{\pm}\rightarrow \mu^{\pm}+\nu_{\mu}(\bar{\nu}_{\mu})\rightarrow
e^{\pm}+\nu_{e}(\bar{\nu}_{e})+\bar{\nu}_{\mu}+\nu_{\mu}$. During these processes, the energy loss rate of a
proton with energy $E'_{p}=\gamma'_{p}m_{p}c^2$ can be calculated by (Waxman \& Bahcall 1997)\footnote{To obtain
this expression, an isotropic target photon field is required. However, in our model, the radially incoming
photon field is seen by the protons in the blast wave anisotropically. This gives an extra complication for a
more realistic consideration. For simplicity, we ignore the anisotropic effect in our calculations.}
\begin{eqnarray}
{t'}_{\pi}^{-1} & \equiv & -{1\over E'_{p}}{dE'_{p}\over dt'}\nonumber \\
& = & {c\over 2{\gamma'}_{p}^2}\int_{\tilde{E}_{\rm
th}}^{\infty}\sigma_{\pi}(\tilde{E})\xi(\tilde{E})\tilde{E}\nonumber
\\& & \times \left[\int_{\tilde{E}/2\gamma'_{p}}^{\infty}n'(E'_{\gamma})
{E'}_{\gamma}^{-2}dE'_{\gamma}\right]d\tilde{E},\label{tpg1}
\end{eqnarray}
where $\sigma_{\pi}(\tilde{E})$ is the cross section of photopion
interactions for a target photon with energy $\tilde{E}$ in the
proton's rest frame, $\xi$ is the inelasticity defined as the
fraction of energy loss of a proton to the resultant pions, and
$\tilde{E}_{\rm th}=0.15\rm GeV$ is the threshold energy of the
interactions. Equation (\ref{tpg1}) yields that the energy of the
protons decreases as
$\exp\left[-\int_0^{t'}({dt'/{t'}_{\pi}})\right]$. In our scenario,
if the shock-breakout emission could last for a period of $t_{\rm
SB}$, the fraction of the energy loss of the protons to pions could
be calculated by
\begin{equation}
f_{\pi}=1-\exp\left(-\int_0^{t'_{\rm
SB}}{dt'\over{t'}_{\pi}}\right),\label{fpi1}
\end{equation}
where $t'_{\rm SB}=(16/3)\Gamma t_{\rm SB}$. In order to calculate
$t'_{\pi}$, the crucial input in the model is the target photon
spectrum $n'(E'_{\gamma})$. From Eqs. (\ref{nphin}) and
(\ref{nphic}), we know that $n'(E'_{\gamma})$ depends on both $r$
and $\Gamma$ and thus the value of $t'_{\pi}$ could evolve with
time. However, if $t'_{\pi}$ is independent of time or varies with
time slowly, Eq. (\ref{fpi1}) can also be approximated by
\begin{equation}
f_{\pi}\approx 1-\exp(-t'_{\rm SB}/{t'}_{\pi})\approx \min[t'_{\rm
SB}/{t'}_{\pi},1]\label{fpiapp}
\end{equation}
as usual, especially for analytical calculations.
To be specific, the energy loss of the protons is shared by
$\pi^{\pm}$ and $\pi^{0}$ with a certain ratio. Unfortunately, it is
not easy to fix this ratio due to the complications arising from
various single-pion and multipion production processes. In the following
calculations, we simply take it to be a constant,
$\pi^{\pm}:\pi^0=2:1$, as in Asano (2005). Furthermore, two
resultant muon-neutrinos from the decay of a $\pi^{\pm}$ could
inherit half of the pion's energy roughly evenly. Therefore, we can
relate the neutrino energy $E_{\nu}$ to the energy loss of the
primary proton by
\begin{equation}
E_{\nu}={1\over4}\xi E_{p},\label{enu}
\end{equation}
and give an observed time-integrated muon-neutrino spectrum by
\begin{equation}
E_{\nu}^2\phi_{\nu}\equiv {1\over 4\pi
D_{l}^2}E_{\nu}^2{dN_{\nu}\over dE_{\nu}}={1\over 4\pi
D_{l}^2}{f_{\pi}\over 3} {E}_{p}^2{dN_{p}\over
dE_{p}}\label{nuspectra},
\end{equation}
where $D_l$ is the luminosity distance of the burst. As usual, we
assume the energy distribution of the shock-accelerated protons to
be $({dN_{p}/ dE_{p}})\propto{E}_{p}^{-2}$, where the proportionality
coefficient can be calculated as $\epsilon_pE/\ln(E_{p,\rm
max}/E_{p,\rm min})$.
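This coefficient follows from normalizing the total proton energy: writing $dN_p/dE_p=C{E}_p^{-2}$ and requiring $\int_{E_{p,\rm min}}^{E_{p,\rm max}}E_p({dN_p/dE_p})\,dE_p=\epsilon_pE$, one finds
\begin{displaymath}
C\int_{E_{p,\rm min}}^{E_{p,\rm max}}{dE_p\over E_p}
=C\ln\left({E_{p,\rm max}\over E_{p,\rm min}}\right)=\epsilon_pE.
\end{displaymath}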
In addition, because of the presence of the magnetic field, the
ultrahigh energy pions and muons would lose their energy via
synchrotron radiation before decay. This leads to breaks in the
neutrino spectrum at (Murase 2007)
\begin{eqnarray}
E_{\nu, \rm b}^{s\pi} & = & {1\over4}E_{\rm
\pi,b}={1\over4}\Gamma\left({6\pi m_{\pi}^5c^5\over \sigma_{\rm
T}m_{\rm e}^2B'^2\tau_{\pi}}\right)^{1/2}\nonumber \\ & = &
1.2\times 10^{9}{\rm
GeV}~\epsilon_{B,-1}^{-1/2}E_{50}^{1/2}A_{*}^{-1}t_3^{1/2},
\end{eqnarray}
\begin{eqnarray}
E_{\nu, \rm b}^{s\mu} & = & {1\over3}E_{\mu, \rm
b}={1\over3}\Gamma\left({6\pi m_{\mu}^5c^5\over \sigma_{\rm T}m_{\rm
e}^2B'^2\tau_{\mu}}\right)^{1/2}\nonumber \\ & = & 8.9\times
10^{7}{\rm
GeV}~\epsilon_{B,-1}^{-1/2}E_{50}^{1/2}A_{*}^{-1}t_3^{1/2},
\end{eqnarray}
where $\tau_{\pi}=2.6\times 10^{-8}$s and $\tau_{\mu}=2.2\times 10^{-6}$s are the mean lifetimes of pions and
muons in their rest frames. Above $E_{\nu, \rm b}^s$, the neutrino flux would be suppressed by a factor
$(E_{\nu}/E_{\nu, \rm b}^s)^{-2}$ (Rachen \& M\'esz\'aros 1998; Razzaque et al. 2006). However, as pointed out
by Asano \& Nagataki (2006), neutral kaons can survive in the magnetic field, while the ultrahigh-energy
charged pions and muons cool rapidly. Moreover, because kaons have a larger rest mass than pions and muons,
charged kaons can reach higher energy although they also suffer from synchrotron cooling. Thus, decay of kaons,
which is not taken into account in our calculations, may dominate neutrino emission above $\sim10^8-10^9$GeV.
Now, by inserting Eqs. (\ref{nphin}) and (\ref{nphic}) into Eq.
(\ref{tpg1}) and then into Eq. (\ref{fpi1}) to get $f_\pi$, we can
easily obtain the observed neutrino spectra from Eq.
(\ref{nuspectra}) for our scenario. The remaining task is only to
express the cross section $\sigma_{\pi}(\tilde{E})$ and inelasticity
for photopion interactions.
\section{results}
Since the cross section of photopion interactions peaks at
$\tilde{E}_{\Delta}\simeq0.3\rm GeV$ due to the
$\Delta$(1232)-resonance, the integration over $\tilde{E}$ in Eq.
(\ref{tpg1}) can be roughly approximated by
\begin{equation}
{t'}_{\pi}^{-1}\approx{c\over 2{\gamma'}_{p}^2}\sigma_{\pi,\Delta}\xi_{
\Delta}\tilde{E}_{\Delta}\delta\tilde{E}\int_{\tilde{E}_{\Delta}/2\gamma'_{
p}}^{\infty}n'(E'_{\gamma}){E'}_{\gamma}^{-2}dE'_{\gamma},\label{tpg}
\end{equation}
where $\sigma_{\pi,\Delta}\approx0.5\rm mbarn$,
$\xi_{\Delta}\approx0.2$, and the peak width is about
$\delta\tilde{E}\approx0.2\rm GeV$. Inserting Eq. (\ref{nphin}) or
(\ref{nphic}) into Eq. (\ref{tpg}), we use the approximate formula
$f_{\pi}=\min[t'_{\rm SB}/t'_{\pi},1]$ to obtain
\begin{eqnarray}
{f}_{\pi} &=& \min\left\{{16\over3}\varsigma{R^2\over r^2}{8\pi
k^2T^2\over h^3c^2}{2E'_{\gamma,\rm pk}\over
3\tilde{E}_{\Delta}}\sigma_{\pi,\Delta}\xi_{\Delta}\delta\tilde{E}\Gamma
t_{\rm SB}\right. \nonumber\\
&&\left.\times\varepsilon_{*}^2\left[\varepsilon_{*}-\ln
(e^{\varepsilon_{*}}-1)\right],~1\right\}
.\label{fpi}
\end{eqnarray}
where $\varsigma=1$ and $\tau/(2{\gamma'}_{e,m}^2)$ for pre- and
post-upscattered target photons, respectively. The dimensionless
variable $\varepsilon_{*}$ is defined by
$\varepsilon_{*}\equiv3\tilde{E}_{\Delta}/(2\gamma'_{p}E'_{\gamma,\rm
pk})=3\xi_{\Delta}\Gamma^2\tilde{E}_{\Delta}m_{p}c^2/(8E_{\nu}E_{\gamma,\rm
pk})$. In the case of $f_{\pi}<1$,
the peak value of $f_{\pi}$, reading
\begin{equation}
f_{\pi,\rm pk}=3\varsigma{R^2\over r^2}{8\pi k^2T^2\over h^3c^2}{2E'_{\gamma,\rm pk}\over
3\tilde{E}_{\Delta}}\sigma_{\pi,\Delta}\xi_{\Delta}\delta\tilde{E}\Gamma t_{\rm SB}
\end{equation}
is at $\varepsilon_*=1.8$, which gives rise to the relationship between the peak energies of the neutrino and
photon spectra as $E_{\nu,\rm pk}E_{\gamma,\rm pk}=0.01\Gamma^2\rm GeV^2$. Considering the bimodal distribution
of the target photons peaking at $E_{\gamma,\rm pk1}$ and $E_{\gamma,\rm pk2}$, two peaks are also expected in
the resultant neutrino spectrum but only the one determined by $E_{\gamma,\rm pk1}$ could fall into the high
energy range ($E_{\nu}>\rm TeV$) of our interest at
\begin{equation}
{E}_{\nu,\rm pk}=4.9\times10^5{\rm
GeV}~(kT)_{-1}^{-1}E_{50}^{1/2}A_{*}^{-1/2}t_{\rm SB,3}^{-1/2}.
\end{equation}
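As a rough numerical check, using $\Gamma\simeq3.6$ at $t_{\rm SB}=10^3$ s and ${E}_{\gamma,\rm pk1}=0.3~{\rm keV}=3\times10^{-7}$ GeV, one finds ${E}_{\nu,\rm pk}\approx0.01\Gamma^2{\rm GeV^2}/{E}_{\gamma,\rm pk1}\approx4\times10^{5}$ GeV, consistent (to within the rounding of the 0.01 prefactor) with the value quoted above.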
In other words, the target photons for the photopion interactions of
interest are mainly contributed by the incoming thermal emission.
The value of the differential neutrino fluence at ${E}_{\nu,\rm pk}$
reads
\begin{equation}
[E_{\nu}^2\phi_{\nu}]_{\rm pk}=2.0\times 10^{-6}{\rm
erg~cm^{-2}}~\epsilon_p(kT)_{-1}^3R_{12}^2A_{*}D_{l,25.5}^{-2},
\end{equation}
which is calculated by using the peak value of $f_{\pi}$ as
\begin{equation}
f_{\pi,\rm pk}=0.02~(kT)_{-1}^3R_{12}^{2}E_{50}^{-1}A_{*}.\label{fpipk}
\end{equation}
On the other hand, when $f_{\pi}=1$, $E_{\nu}^2\phi_{\nu}$ would
reach an upper limit of $1.2\times10^{-4}{\rm
erg~cm^{-2}}~\epsilon_pE_{50}D_{l,25.5}^{-2}$, which is determined
by the total energy carried by the protons in the GRB blast wave.
Although it is convenient and effective to use the $\Delta$-approximation to estimate the peak of a neutrino
spectrum, it would lead to a remarkable underestimation of the neutrino flux above the
peak energy due to the non-zero cross section of photopion interactions in high energy regions. So, for more
careful calculations, we provide an empirical fitting formula for the cross section as shown in Eq.
(\ref{sigtotal}), which is extrapolated from experimental data taken from the Particle Data Group (Yao et al. 2006).
However, since we cannot find a simple expression for the inelasticity, we roughly take $\xi =0.2$ for all energy
regions, which may lead to a mild underestimation of the neutrino flux in the high energy regions.
Finally, with these inputs, we plot the observed time-integrated muon-neutrino spectra in Fig. 1. Obviously, two
plateaus exist in the neutrino spectra. To be specific, as shown in the upper panel of Fig. 1, the high-energy
plateau is produced by the lower energy thermal photons, while the low-energy plateau is produced by the higher
energy IC scattered photons. In addition, from the comparison shown in the lower panel of Fig. 1, we can see that
the approximation for $f_{\pi}$ in Eq. (\ref{fpiapp}) is feasible to some extent for the thermal seed
photon-dominated photopion interactions, but not for the IC scattered photon-dominated interactions. This
difference between the two kinds of interactions arises from the different temporal behaviors of $t'_{\pi}$.
Next, let us discuss the detectability of the afterglow neutrinos,
using the following fitting formula for the probability of detecting
muon-neutrinos
by IceCube (Ioka et al. 2005; Razzaque et al. 2004)
\begin{equation}
P_{\nu}=7\times10^{-5}\left({E_{\nu}\over 3.2\times10^4\rm
GeV}\right)^{\beta},\label{deprob}
\end{equation}
where $\beta=1.35$ for $E_{\nu}<3.2\times10^4\rm GeV$, while
$\beta=0.55$ for $E_{\nu}\geq3.2\times10^4\rm GeV$. The number of
muon events from muon-neutrinos above TeV energy is given by
\begin{equation}
N_{\mu}=A_{\rm det}\int_{\rm TeV}\phi_{\nu}P_{\nu}dE_{\nu},
\end{equation}
where $A_{\rm det}\sim1\rm km^2$ is the geometrical detector area.
Inserting Eqs. (\ref{nuspectra}) and (\ref{deprob}) into the above
integral, we obtain $N_{\mu}\sim0.1$ for the parameter set
($E_{50}=1$, $A_{*}=10$, $(kT)_{-1}=2$, $R_{12}=1$ and $t_{\rm
SB,3}=3$) inferred from GRB 060218 for a very nearby LL-GRB at 50
Mpc; such an event is expected to be observed within a multi-year
observation period. According to this estimate, we optimistically
expect that IceCube may be able to detect afterglow
neutrinos from one LL-GRB event in the coming decades. If such a
detection comes true, the afterglow neutrino emission accompanying
the soft X-ray thermal and sub-GeV or GeV emissions from a GRB
060218-like event would provide strong evidence for the picture
that a supernova shock breakout lies behind a relativistic GRB
jet, and could further be used to tightly constrain the model
parameters.
Besides the possible detection of neutrinos from a single LL-GRB
event, the contribution to the neutrino background from LL-GRBs is
also expected to be important. We can estimate the diffuse
muon-neutrino flux arising from afterglow neutrino emission of
LL-GRBs by (Waxman \& Bahcall 1998; Murase et al. 2006)
\begin{eqnarray}
E_{\nu}^2\Phi_{\nu} & \sim & {c\over 4\pi
H_0}{f_{\pi}\over3}f_b\epsilon_pE_p^2{dN_p\over dE_p}R_{\rm
LL}(0)f_{z}\nonumber \\
& = & 2.5\times 10^{-11}{\rm
GeV~cm^{-2}s^{-1}sr^{-1}}~\epsilon_p(kT)_{-1}^3R_{12}^2A_{*}
\nonumber
\\ & & \times f_b\left({R_{\rm LL}(0)\over 500{\rm
Gpc^{-3}yr^{-1}}}\right)\left({f_z\over 3}\right),\label{diffuse}
\end{eqnarray}
where $H_0=71 \rm km~s^{-1}Mpc^{-1}$, $f_b$ is the beaming factor, and $f_z$ is the correction factor for the
possible contribution from high-redshift sources. In the above estimation, the approximate value of $f_{\pi}$
in Eq. (\ref{fpipk}) is applied. By comparing Eq. (\ref{diffuse}) to Eq. (3) of Murase et al. (2006), we find
that, for LL-GRBs, the contribution to the diffuse neutrino background by the early afterglow neutrino emission
may be relatively smaller than or even comparable to (e.g., for model parameters $\epsilon_p=0.6$,
$(kT)_{-1}=2$, $R_{12}=1$, and $A_{*}=10$) that by the burst neutrino emission.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{fig1.eps}} \caption{The
time-integrated afterglow muon-neutrino
($\nu_{\mu}+\bar{\nu}_{\mu}$) spectra for one GRB event. The solid
lines are calculated by using the expressions for $f_{\pi}$ in Eq.
(\ref{fpi1}) and for $\sigma_{\pi}(\tilde{E})$ in Eq.
(\ref{sigtotal}). \textit{Upper panel}: The contributions to the
total neutrino emission by the two target photon components are
represented by the dashed and dotted lines, respectively.
\textit{Lower panel}: The dashed line is obtained by an
approximation for the time integration as in Eq. (\ref{fpiapp}), and
the peak estimated by the $\Delta-$approximation is labeled by an
open circle. In all cases, we take the model parameters $E_{50}$,
$A_{*}$, $(kT)_{-1}$, $R_{12}$, $D_{l,25.5}$ and $t_{\rm SB,3}$ to
be unity and the equipartition factors ${\epsilon}_{e}=0.3,
\epsilon_{B}=0.1$ and thus $\epsilon_{p}=0.6$.}
\end{figure}
Finally, we would like to remind the reader of neutrino oscillations, which change the neutrino flavor ratio from
$\nu_e : \nu_{\mu} : \nu_{\tau} \simeq 1 : 2 : 0$ at the source to $1 : 1 : 1$ at the Earth. As a consequence,
the observed muon-neutrino fluxes estimated above should be further reduced by a factor of
$\sim2$.
\section{Summary}
The surprising soft X-ray thermal emission during both the burst and afterglow phases of GRB 060218/SN 2006aj was
proposed to be due to the breakout of a radiation-dominated shock from a strong stellar wind. This shock
breakout was further thought to be located behind a relativistic GRB jet, which is required to understand the
burst emission and the power-law decaying afterglow emission. Wang \& M\'esz\'aros (2006) suggested that a
sub-GeV or GeV emission produced by IC scattering of the thermal photons off the relativistic electrons in the
GRB blast wave could give evidence for this astronomical picture. In this paper, we studied another possible
implication, namely, afterglow neutrino emission. The neutrinos are produced by photopion interactions of
relativistic protons, which could be accelerated by a relativistic external shock. The target photons in the
interactions are contributed by the incoming thermal emission and its upscattered component. By considering the
high event rate of LL-GRBs, we argue optimistically that the afterglow neutrinos from very nearby (several tens
of Mpc) LL-GRBs may be detected by IceCube in the coming decades. We believe the detection of these expected
afterglow neutrinos would help to uncover the nature of GRB 060218-like GRBs.
\section*{Acknowledgements}
We would like to thank the referee for helpful comments and
suggestions. This work is supported by the National Natural Science
Foundation of China (grants 10221001 and 10640420144) and the
National Basic Research Program of China (973 program) No.
2007CB815404. Y.W.Y. is also supported by the Visiting PhD Candidate
Foundation of Nanjing University and partly by the National Natural
Science Foundation of China (grants 10603002 and 10773004).
|
2,869,038,154,950 | arxiv | \section*{Acronyms and Abbreviations}
\begin{center}
\begin{tabular}{c l l}
\hline
Characters & Expansion & First Use \\
\hline
AA & Adaptive Aggressive (adaptive trading strategy) & Section~\ref{sec:intro} \\
AF & Asymptotic Form ({\em PMF} envelope) & Section~\ref{sec:przi-motivation} \\
AI & Artificial Intelligence & Section~\ref{sec:intro} \\
AKA & Also Known As & Section~\ref{sec:intro} \\
ABM & Agent-Based Model & Section~\ref{sec:intro} \\
ACE & Agent-Based Computational Economics & Section~\ref{sec:intro} \\
BG & Balanced Group (experiment design) & Section~\ref{sec:coevolve} \\
BSE & Bristol Stock Exchange (simulation platform) & Section~\ref{sec:shvr} \\
CDA & Continuous Double Auction & Section~\ref{sec:background} \\
CDF & Cumulative Distribution Function & Section~\ref{sec:przi-details} \\
CIAO & Current Individual vs.\ Ancestral Opponent & Section~\ref{sec:prsh_coev} \\
GD & Gjerstad-Dickhaut (adaptive trading strategy) & Section~\ref{sec:intro} \\
GDX & {\em GD} Extended (adaptive trading strategy) & Section~\ref{sec:intro} \\
GVWY & Giveaway (nonadaptive trading strategy) & Section~\ref{sec:intro} \\
HBL & Heuristic-Based Learning (adaptive trading strategy) & Section~\ref{sec:intro} \\
IBM & International Business Machines Corp. & Section~\ref{sec:intro} \\
IID & Independent and Identically Distributed & Section~\ref{sec:prsh_solo} \\
IPRZI & Imbalance-sensitive {\em PRZI} (nonadaptive trading strategy) & Section~\ref{sec:impact} \\
ISHV & Imbalance-sensitive {\em SHVR} (nonadaptive trading strategy) & Section~\ref{sec:impact} \\
JPE & Journal of Political Economy & Section~\ref{sec:background} \\
LOB & Limit Order Book & Section~\ref{sec:background} \\
LOI & Line of Identity & Section~\ref{sec:coevolve} \\
LUT & Look-Up Table & Section~\ref{sec:przi-details} \\
MAB & Multi-Armed Bandit & Section~\ref{sec:prsh_defn} \\
MGD & Modified {\em GD} (adaptive trading strategy) & Section~\ref{sec:intro} \\
MI & Minimal Intelligence & Section~\ref{sec:intro} \\
ML & Machine Learning & Section~\ref{sec:intro} \\
MLOFI & Multi-Level Order-Flow Imbalance & Section~\ref{sec:impact} \\
NZI & Near-Zero Intelligence (nonadaptive trading strategy) & Section~\ref{sec:opinions} \\
OD & Opinion Dynamics & Section~\ref{sec:opinions} \\
OIM & One-in-Many (experiment design) & Section~\ref{sec:coevolve} \\
ONZI & Opinionated {\em NZI} (nonadaptive trading strategy) & Section~\ref{sec:opinions} \\
OPRZI & Opinionated {\em PRZI} (nonadaptive trading strategy) & Section~\ref{sec:opinions} \\
OZIC & Opinionated {\em ZIC} (nonadaptive trading strategy) & Section~\ref{sec:opinions} \\
PMF & Probability Mass Function & Section~\ref{sec:ZIC} \\
PRSH & {\em PRZI} Stochastic Hillclimber (adaptive trading strategy) & Section~\ref{sec:coevolve} \\
PRZI & Parameterized-Response {\em ZI} (nonadaptive trading strategy)& Section~\ref{sec:intro} \\
RDA & Replicator Dynamics Analysis & Section~\ref{sec:coevolve} \\
RP & Recurrence Plot & Section~\ref{sec:prsh_coev} \\
RMS & Root Mean Square & Section~\ref{sec:ZIC} \\
RQA & Recurrence Quantification Analysis & Section~\ref{sec:prsh_coev} \\
SHVR & Shaver (nonadaptive trading strategy) & Section~\ref{sec:intro} \\
SNPR & Sniper (nonadaptive trading strategy) & Section~\ref{sec:intro} \\
ZI & Zero Intelligence (class of trader-agents) & Section~\ref{sec:intro} \\
ZIC & {\em ZI} Constrained (nonadaptive trading strategy) & Section~\ref{sec:intro} \\
ZIP & {\em ZI} Plus (adaptive trading strategy) & Section~\ref{sec:intro} \\
ZIU & {\em ZI} Unconstrained (nonadaptive trading strategy) & Section~\ref{sec:ZIC}
\end{tabular}
\end{center}
\section*{Notations}
\begin{center}
\begin{tabular}{c l l}
\hline
Symbol & Denotes & First Use \\
\hline
$\alpha$ & Smith's equilibration metric & Section~\ref{sec:ZIC} \\
$\delta_p$ & The market's tick-size & Section~\ref{sec:3strats} \\
$\Delta_m(t)$ & Top-of-LOB imbalance metric $\Delta_m(t)=p_\mu(t) - p_m(t)$ & Section~\ref{sec:impact} \\
$\Delta_P$ & Difference between minimum and maximum price on a PMF & Section~\ref{sec:przi-motivation} \\
$\Delta_s$ & Step-size in strategy space when mapping fitness landscape & Section~\ref{sec:coevolve} \\
$\Delta_t$ & Timestep duration in the BSE simulation: $\Delta_t=1/N_T$ seconds & Section~\ref{sec:3strats} \\
$\epsilon$ & Tiny threshold value to avoid divide-by-zero errors in $\theta(x)$ & Section~\ref{sec:przi-details} \\
$F_P(p)$ & Cumulative Distribution Function (CDF) for PRZI trader & Section~\ref{sec:przi-details} \\
$ \widehat{F_P^{-1}}(c)$ & Approximation to inverse CDF, via reverse table-lookup & Section~\ref{sec:przi-details} \\
${\cal F}(\cdot)$ & Fitness function used in stochastic hillclimber & Section~\ref{sec:coevolve} \\
${\cal G}(\cdot)$ & Genesis function used in stochastic hillclimber to create ${\cal S}_{i,0}$ & Section~\ref{sec:coevolve} \\
${\cal I}_i(\cdot)$ & Market-impact function for trader $i$ & Section~\ref{sec:impact} \\
$k$ & Number of different strategies held by a PRSH trader & Section~\ref{sec:coevolve} \\
$\lambda_b$ & Buyer's limit-price & Section~\ref{sec:3strats} \\
$\lambda_s$ & Seller's limit-price & Section~\ref{sec:3strats} \\
$\lambda_{s(i:\text{max})}(t_0)$ & Largest limit price assigned to seller $i$ at any time $t\leq t_0$ & Section~\ref{sec:3strats} \\
${\cal M}(\cdot)$ & Mutation function used in stochastic hillclimber & Section~\ref{sec:coevolve} \\
$N_\text{Buy}$ & Number of buyers in the market & Section~\ref{sec:prsh_solo} \\
$N_\text{Sell}$ & Number of sellers in the market & Section~\ref{sec:prsh_solo} \\
$N_R$ & Number of IID repetitions of an experiment & Section~\ref{sec:coevolve} \\
$N_S$ & Number of discrete strategies available to choose between & Section~\ref{sec:coevolve} \\
$N_T$ & Number of traders in the market: $N_T = N_\text{Buy} + N_\text{Sell}$ & Section~\ref{sec:coevolve} \\
$\Omega$ & Opinionated limit price & Section~\ref{sec:opinions} \\
$\omega_i$ & Opinion value for trader $i$ & Section~\ref{sec:opinions} \\
$\pi_B$ & Total profit/surplus extracted by the set of buyers & Section~\ref{sec:coevolve} \\
$\pi_S$ & Total profit/surplus extracted by the set of sellers & Section~\ref{sec:coevolve} \\
$\pi_T$ & Total profit/surplus extracted by all traders: $\pi_T = \pi_B + \pi_S$ & Section~\ref{sec:coevolve} \\
$p_0$ & Equilibrium price & Section~\ref{sec:background} \\
$p^*_{\text{ask}}(t)$ & Price of the best ask on the LOB at time $t$ & Section~\ref{sec:background} \\
$p^*_{\text{bid}}(t)$ & Price of the best bid on the LOB at time $t$ & Section~\ref{sec:background} \\
$p_{\text{max}}$ & Arbitrary maximum price allowed in the market & Section~\ref{sec:ZIC} \\
$p_m(t)$ & Mid-price on the LOB at time $t$ & Section~\ref{sec:impact} \\
$p_\mu(t)$ & Micro-price on the LOB at time $t$ & Section~\ref{sec:impact} \\
$P_{bq(\text{STRAT})}(t)$ & Price quoted at time $t$ by a buyer of strategy-type {\sc strat} & Section~\ref{sec:3strats} \\
$P_{sq(\text{STRAT})}(t)$ & Price quoted at time $t$ by a seller of strategy-type {\sc strat} & Section~\ref{sec:3strats} \\
${\mathbb P}(P=p)$ & Probability that random variable $P$ is equal to price $p$ & Section~\ref{sec:przi-motivation} \\
${\cal P}(\cdot)$ & PMF envelope profile function & Section~\ref{sec:przi-details} \\
${\cal PMF}_i(\cdot)$ & PMF for trader $i$ & Section~\ref{sec:przi-details} \\
$q_0$ & Equilibrium quantity & Section~\ref{sec:background} \\
$q^*_{\text{ask}}(t)$ & Quantity available at price $p^*_{\text{ask}}(t)$ on the LOB at time $t$ & Section~\ref{sec:impact} \\
$q^*_{\text{bid}}(t)$ & Quantity available at price $p^*_{\text{bid}}(t)$ on the LOB at time $t$ & Section~\ref{sec:impact} \\
$s_i$ & PRZI strategy-value for trader $i$: $s_i \in [-1,+1] \in {\mathbb R}$ & Section~\ref{sec:intro} \\
$\hat{s}$ & moving average of $s$ & Section~\ref{sec:prsh_solo} \\
$\widehat{S_{T}}$ & The set of terminal $\hat{s}$ values from a population of traders & Section~\ref{sec:prsh_solo} \\
$\vec{S}(t)$ & Strategy-vector for all-PRSH market: $|\vec{S}(t)|=N_T$; $[\vec{S}(t)]_i=s_i$ & Section~\ref{sec:coevolve} \\
${\cal S}_{i,t} $ & Set of $k$ different $s_i$-values at time $t$ for individual PRSH trader $i$ & Section~\ref{sec:coevolve} \\
$t$ & Time: $t \geq 0 \in {\mathbb R}$ & Section~\ref{sec:background} \\
$\theta(x)$ & Linear rectifier function & Section~\ref{sec:przi-details} \\
${\cal U}(a, b)$ & Draws from a uniform random distribution over the range $[a,b]$ & Section~\ref{sec:3strats} \\
\end{tabular}
\end{center}
\section{Introduction}
\label{sec:intro}
In attempting to understand and predict the fine-grained dynamics of financial markets, there is a long tradition of studying simulation models of such markets. Simulation studies nicely complement the two primary alternative lines of enquiry: analysis of real market data recorded at fine-grained temporal resolution, as is studied in the branch of finance known as {\em market microstructure}; and running carefully planned experiments where human subjects interact in artificial markets under controlled laboratory conditions, i.e.\ {\em experimental economics}. Simulation modelling of financial markets very often involves creating agent-based models (ABMs) that populate a market mechanism with some number of {\em trader-agents}: autonomous entities that have ``agency'' in the sense that they are empowered to buy and/or sell items within the particular market mechanism that is being simulated. This approach, known as {\em agent-based computational economics} (ACE), has a history stretching back for more than 30 years. Over that multi-decade history, a small number of specific zero-intelligence (ZI) and/or minimal-intelligence (MI) trader-agent algorithms, i.e. precise mathematical and procedural specifications of particular trading strategies, have been frequently used for modelling various aspects of financial markets, and the convention that has emerged is to refer to each such strategy via an acronym or short sequence of letters, reminiscent of a stock-market ticker-symbol.\footnote{Notable trading strategies in this literature include (in chronological sequence): SNPR ({\sc aka} {\em Kaplan's Sniper}, as described in \cite{rust_etal_1992}), ZIC ({\em Zero Intelligence Constrained}: \cite{gode_sunder_1993}); ZIP ({\em Zero Intelligence Plus}: \cite{cliff_1997_zip}); RE ({\em Roth-Erev}: \cite{erev_roth_1998}); GD ({\em Gjerstad-Dickhaut}: \cite{gjerstad_dickhaut_1998}); MGD ({\em Modified GD}: \cite{tesauro_das_2001}); GDX ({\em GD eXtended}: \cite{tesauro_bredin_2002}); HBL ({\em Heuristic Based Learning}: \cite{gjerstad_2003}); AA ({\em Adaptive Aggressive}: \cite{vytelingum_etal_2008}) and IEL ({\em Individual Evolutionary Learning}: \cite{arifovic_ledyard_2011}); several of which are explained in more detail later in this paper. }
Of these, ZIC \cite{gode_sunder_1993} is notable for being both highly stochastic and extremely simple, and yet it gives surprisingly human-like market dynamics; GD \cite{gjerstad_dickhaut_1998} and ZIP \cite{cliff_1997_zip} were the first two strategies to be demonstrated as superior to human traders, a fact first established in a landmark paper by IBM researchers \cite{das_etal_2001} (see also: \cite{deluca_cliff_2011_icaart,deluca_cliff_2011_ijcai,deluca_etal_2011_foresight}), which is now commonly pointed to as initiating the rise of algorithmic trading in real financial markets; and until very recently AA \cite{vytelingum_etal_2008} was widely considered to be the best-performing strategy in the public domain. With the exception of SNPR \cite{rust_etal_1992} and ZIC, all later strategies in this sequence are adaptive, using some kind of machine learning (ML) or artificial intelligence (AI) method to modify their responses over time, better-fitting their trading behavior to the specific market circumstances that they find themselves in, and details of these algorithms were often published in major AI/ML conferences and journals.
The supposed dominance of AA has recently been questioned in a series of publications \cite{vach_2015,cliff_2019,snashall_cliff_2019,rollins_cliff_2020,cliff_rollins_2020} which demonstrated AA to have been less robust than was previously thought. Notably, \cite{rollins_cliff_2020,cliff_rollins_2020} report on trials where AA is tested against two novel nonadaptive algorithms that each involve no AI or ML at all: these two newcomer strategies are known as GVWY and SHVR \cite{cliff_2012_bse,cliff_2018_bse}, and each shares the pared-back minimalism of Gode \& Sunder's ZIC mechanism. In the studies that have been published thus far, depending on the circumstances, it seems (surprisingly) that GVWY and SHVR can each outperform not only AA but also many of the other AI/ML-based trader-agent strategies in the set listed above. Given this surprising recent result, there is an appetite for further zero-intelligence ACE-style market-simulation studies involving GVWY and SHVR. One compelling issue to explore is the co-adaptive dynamics of markets populated by traders that can choose to play one of the three strategies from GVWY, SHVR, and ZIC, in a manner similar to that studied by \cite{walsh_etal_2002} who employed `replicator dynamics' modelling techniques borrowed from theoretical evolutionary biology to explore the co-adaptive dynamics of markets populated by traders that could choose between SNPR, ZIP, and GD.
One way of studying co-adaptive dynamics in markets where the traders can choose to deploy either GVWY, SHVR, or ZIC is to give each trader a discrete choice of one from that set of three strategies, such that at any one time any individual trader is operating according to either GVWY or SHVR or ZIC. However it is appealing to instead design experiments where the traders can {\em continuously vary} their trading strategy, exploring a potentially infinite range of differing strategies, where the space of possible strategies includes GVWY, SHVR, and ZIC; and that is the motivation for this paper. Here, I introduce a new trading strategy that has a {\em parameterised response}: that is, its trading behavior is determined by a strategy parameter $s \in [-1,+1] \in {\mathbb R} $: when $s=0$, the trader behaves identically to ZIC; when $s=+1$ it behaves identically to GVWY; and when $s=-1$ it behaves identically to SHVR; but $s$ can also take on any other value in its range, such as $-0.75$ or $+0.5$, which gives novel ``hybrid'' trading behavior part-way between ZIC and either GVWY or SHVR. As is explained in more detail later in this paper, GVWY, ZIC, and SHVR are each members of the class of {\em zero intelligence} (ZI) trading strategies, and hence I've named the new strategy described here the Parameterised-Response Zero-Intelligence (PRZI) trading strategy. The acronym PRZI is pronounced like ``pressie''.
To provide a zero-intelligence model trader for studying evolutionary or adaptive markets (as discussed in, for example: \cite{friedman_1991_evolecon,blume_easley_1992,friedman_1998_evolecon,lo_2004,lo_2019,nelson_2020_evolecon}), each PRZI trader needs some adaptation mechanism, allowing it to adjust its individual $s$-value over time, to be better suited to the prevailing market conditions that the particular trader finds itself in. There are many potential adaptation mechanisms that could be used, but the results in this paper come -- in the minimalist style of ZI traders -- from a very basic adaptive algorithm (arguably, the simplest possible): a $k$-point stochastic hill-climber, the operation of which is described in detail below, in Section~\ref{sec:coevolve}. I refer to traders with this adaptation mechanism, {\bf PR}ZI with {\bf S}tochastic {\bf H}ill-climbing, as PRSH traders (pronounced ``pursh''). PRSH is offered here as an absolutely minimal model of an adaptive trader -- it has only a single parameter (unlike, for example, ZIP which has between 8 and 60 parameters, depending on which version is used: see \cite{cliff_2009_zip60}), and it does usefully adapt the value of that parameter over time (although, as discussed further below, there are many better ways of doing adaptation). Section~\ref{sec:coevolve} presents results and analysis from multiple experiments with markets populated entirely by PRSH traders -- these are zero-intelligence {\em adaptive markets}, in the sense popularised by Lo \cite{lo_2004,lo_2019}.
One motivation for devising PRZI has just been described: enabling explorations of co-adaptive dynamics in continuous strategy-spaces. That is not the only motivation, however: two other compelling reasons for wanting a ZI-style trader with a variable response are as follows:
\begin{itemize}
\item
Recent publications \cite{church_cliff_2019,zhang_cliff_2021} have described methods for making these simple trader-agents exhibit a form of sensitivity to temporary imbalances between supply and demand in the marketplace, and that imbalance-sensitivity then gives rise to so-called ``market impact'' effects, in which the prices quoted by traders shift in the {\em anticipated} direction of future transaction prices, where the shift is anticipated from the degree of supply/demand imbalance instantaneously evident in the market. Market impact is a significant issue for traders in real markets who are looking to buy or sell an unusually large amount of some asset, and major exchange operators such as the London Stock Exchange have introduced specialised mechanisms to try to reduce the ill-effect of market impact (see e.g. \cite{church_cliff_2019} for further discussion). For markets populated by ZI trader-agents to be able to exhibit impact effects, the traders need to be able to modulate their trading activity according to the direction and degree of imbalance in the market, becoming either more ``urgent'' or more ``relaxed'' in their trading: varying PRZI's strategy-parameter $s$ implements exactly this kind of tuneable response, as is illustrated in Section~\ref{sec:impact}.
\item
Shiller has recently proposed \cite{shiller_2017,shiller_2019} that certain economic phenomena which defy easy explanation via classical assumptions of individual economic rationality can be better understood by reference to the {\em narratives} (i.e., the stories) that economic agents tell themselves and each other about current and future economic factors. Shiller refers to this new approach as {\em narrative economics}. Noting that stories are merely externalisations of an agent's internally-held {\em opinions}, a recent publication \cite{lomas_cliff_2021} described an agent-based modelling platform for studying issues in narrative economics, in which two types of ZI traders were extended to also each include a real-valued {\em opinion} variable (the value of which could be altered by interactions with other agents, thereby modelling the way in which an agent's opinions are shifted by the narratives it is exposed to) and adapted so that their trading strategies alter as a function of their individual opinion: this also gives rise to a need for ZI traders that can smoothly vary the nature of their trading behavior, and PRZI was designed with the intention of being used in such opinion dynamics modelling work: exploring the use of PRZI in ACE-style agent-based models is a topic of current research, discussed further in Section~\ref{sec:opinions}.
\end{itemize}
This paper is intended merely as an introduction to PRZI; it is beyond the scope of this document to provide a comprehensive and detailed literature review. Readers seeking further details of market microstructure are referred to \cite{ohara_1998,harris_2002,lehalle_laruelle_2018}, and for overviews of experimental economics see for example \cite{kagel_roth_1997,smith_2000,plott_smith_2008}. For reviews of ACE research, see \cite{tesfatsion_judd_2006,chen_2011_jedc,chen_2018,hommes_lebaron_2018}; and for discussions of ZI traders in finance research, see e.g.\ \cite{farmer_etal_2005,ladley_2012,cartlidge_etal_2012}.
\section{Background: Experimental Economics}
\label{sec:background}
In a landmark 1962 paper \cite{smith_1962} published in the {\em Journal of Political Economy} (JPE), Vernon Smith described a seminal set of experiments in which human traders interacted within a {\em continuous double auction} (CDA), the mechanism embodied in most major real-world financial markets, under laboratory conditions. The introduction to Smith's 1962 JPE paper rightly cited the earlier work of Chamberlin who had described results from an experimental market in a JPE paper published in 1948 \cite{chamberlin_1948}. Smith's 1962 paper, and Chamberlin's before that, are widely regarded as marking the birth of experimental economics, and Smith's work in this field led to him being awarded the 2002 Nobel Prize in Economics.
In the simplest case, such experimental economics work involves a market with only one type of a tradeable asset (think of it as a stock market on which only one stock is listed), and each human trader can buy or sell a specific quantity of the asset by issuing one or more {\em quotes} to the market's central exchange. A quote would specify: the trader's desired {\em direction}\/ (i.e., buy or sell) for the transaction; the quantity (number of units) that the trader is seeking to transact; and the price-per-unit that they want to pay or be paid. Each trader would be given instructions, referred to here as {\em assignments}, that are private and specific to that individual trader, and each trader is told to keep these instructions secret. Some traders will be instructed to buy some quantity of the asset, paying no more than a trader-specific maximum price per unit (i.e., these traders are {\em buyers} each with a specific {\em limit price}); other traders will have been instructed to sell some quantity of the asset, accepting no less than some trader-specific minimum price per unit (so they are {\em sellers}, again each with their own limit-price). In this way, instructing some traders to be buyers and other traders to be sellers, and by varying the limit-prices in each trader's instructions, the market's underlying supply and demand curves could be controlled, along with that market's competitive equilibrium price (denoted by $p_0$) and equilibrium quantity ($q_0$).
Smith's 1962 paper (which reported on a sequence of experiments that he had commenced several years earlier) was notable for establishing a set of experiment methods that have since been reproduced and replicated by researchers around the world, and also for being the first empirical demonstration that CDA markets could show reliable equilibration even when populated with only very small numbers of buyers and sellers.
In many experimental economics studies, equilibration is not the only factor of interest. Another significant question is how much {\em surplus} is extracted from the market by the specific trading behaviors of the traders. In everyday language, surplus can be thought of as a seller's profit or a buyer's saving: if a seller has been given a limit-price of \$10 per unit, but manages to agree a transaction at \$15, then the \$5 difference between the seller's limit-price and the sale-price is that seller's surplus, her profit; similarly if a buyer is given a limit-price of \$10 but manages to instead buy for \$8, then her saving, her surplus, is \$2.
The initial experiments reported by Smith in 1962 were conducted with entirely manual issuing of assignments to traders, and with the traders simply shouting out their quote-prices in a laboratory version of an open-outcry trading-pit, a common sight on the trading floors of major exchanges for many decades prior to the arrival of automated market-trading technology. More recently, as in real financial exchanges, so in experimental economics: most experimental economics studies for the past 30 years or more have involved the human traders interacting with one another by each being sat at an individual trader-terminal (e.g. a PC running specialised trader-interface software, networked to a central exchange-server PC). The display-screen on a trader-terminal (whether in a real market or in a laboratory experiment) will often show a summary of all the currently outstanding bid-quotes received from buyers, and all the currently outstanding ask-quotes received from sellers, in a tabular form known as a {\em Limit Order Book} (LOB). Whole books have been written on the LOB (see e.g.\ \cite{osterrieder_2009,nolte_etal_2014,abergel_etal_2016}) but for the purposes of this paper, all we need to know is that the LOB allows all traders in the market to see the best (lowest) ask-price from a seller and the best (highest) bid-price from a buyer. In the rest of this paper I'll use $p^*_{\text{ask}}(t)$ to denote the price of the best ask at time $t$, and $p^*_{\text{bid}}(t)$ to denote the price of the best bid.
Any study in experimental economics where human traders interact with one another, via some market mechanism, subject to the constraints imposed by the design of the experiment, gives rise to the question of just how big a role the intelligence of the human traders plays in determining the market's equilibration behavior. A genuinely shocking answer to that question was provided in 1993 by Gode \& Sunder, also publishing in the JPE \cite{gode_sunder_1993}, who presented results which appeared to show that the answer was simple: zero. That is, Gode \& Sunder showed that markets populated entirely by so-called {\em zero-intelligence} (ZI) traders could give rise to market dynamics, to equilibration behavior, which was statistically indistinguishable from that of human traders, when measured by the then-standard metric known as {\em allocative efficiency}, i.e. how much of the available surplus in the market was extracted by the traders. Gode \& Sunder's ZI traders manifestly have no intelligence at all, and are discussed in more detail in the next section.
\section{Three ZI Trading Strategies}
\label{sec:3strats}
This section describes the three trading strategies {\em Zero-Intelligence-Constrained} (ZIC), {\em Shaver} (SHVR), and {\em Giveaway} (GVWY). As will be seen, all three of these can very reasonably be described as ZI trading strategies.
In the following text, ${\cal U}(a,b)$ is used to denote draws from a uniform random distribution over the range $[a, b]$; the integer $\delta_p$ denotes the market's {\em tick-size}, the minimum price change allowed in the market (very often -- but not always -- one cent of the national currency in real financial exchanges, see e.g.\ \cite{darley_outkin_2007,chung_lee_roesch_2020,chung_chuwonganant_2022,cartea_chang_penalva_2022}); all prices are integer multiples of $\delta_p$, and hence members of ${\mathbb Z}$; and $P_{bq(\text{\sc strat})}(t)$ and $P_{sq(\text{\sc strat})}(t)$ denote the prices quoted at time $t$ by a buyer and by a seller, respectively, of strategy-type {\sc strat} (where {\sc strat} is one of a set of known strategy-types, for example {\sc strat} $\in$ \{GD, ZIC, ZIP\}). Note that a capital $P$ is used to denote a price that is (or could be, in principle) randomly generated, i.e.\ a random variable; and the various lower-case $p$ values are nonrandom. Time proceeds in discrete steps of $\Delta_t \in {\mathbb R}$, and each strategy is summarised as an equation that specifies the price that will be quoted by a trader at time $t+\Delta_t$ on the basis of information available, the state of the market, at time $t$. The limit-price assigned to a buyer is denoted by $\lambda_b$, and the seller's limit-price is $\lambda_s$. The subscripted index $i$ is used where necessary to distinguish values that are specific to trader $i$.
\subsection{ZIC}
\label{sec:ZIC}
In their seminal 1993 JPE paper \cite{gode_sunder_1993}, Gode \& Sunder presented results from three sets of experimental economics studies: in the first, groups of human traders interacted in an electronic CDA, the mechanism found in real-world financial markets, under laboratory conditions, as described above. As is commonplace in much experimental economics work, Gode \& Sunder fixed the limit-quantity to one for all traders, so that transactions always involved a single unit of the asset changing hands, and hence the primary variable of interest was the transaction prices in the market. This first set of experiments established baseline data from the human traders, and Gode \& Sunder recorded a key metric, $\alpha$, first introduced in Smith's 1962 paper, which measures the variation of transaction prices around the market's $p_0$ value, as the RMS difference between $p_0$ and the transaction prices over some period, i.e. $\alpha$ is a measure of equilibration; and they also recorded the {\em allocative efficiency} for each market, which is a measure of how much of the total theoretically available surplus is actually extracted by the traders in that market.
In their second set of experiments, they replaced all the human traders with simple autonomous software agents that could electronically issue quotes to the market's central exchange mechanism: as with the human traders in the first experiments, the software agents were each assigned a direction (buy or sell) and given a limit price for their transactions. Gode \& Sunder performed two sets of experiments with these software agents: one set with a type of trader-agent that they named {\em ZI-Unconstrained} (ZIU); and another set with a modified version of the ZIU strategy, one that involved imposition of an additional constraint, and so those traders were named {\em ZI-Constrained} (ZIC). If ever a ZI seller issued a quote-price below the current best bid-price (i.e., in the terminology of financial markets, if the quote is for a price that {\em crosses the spread}), that seller sold its unit to the buyer who had issued that best bid; and similarly if ever a ZI buyer issued a quote-price that was higher than the current best ask-price issued by a seller, the buyer would buy from that seller, because the buyer's quote crossed the spread. (The {\em spread} is the difference between the current best ask price and the current best bid price).
ZIU traders were very basic: they generated a randomly-selected quote-price drawn from ${\cal U}(\delta_p, p_{\text{max}})$, where $p_{\text{max}}$ is an arbitrarily-chosen maximum-allowable price in the market. That is, ZIU traders were designed to ignore their limit prices. Unsurprisingly, the time-series of transaction prices in markets populated by ZIU traders looked much like random noise, and ZIU traders would often enter into loss-making deals, because they were buying at prices above their limit price or selling at prices below their limit price.
Gode \& Sunder then modified the ZIU strategy, giving the ZIC strategy, by introducing just one constraint: to not quote prices that were potentially loss-making. ZIC traders still quote random prices drawn from a uniform distribution over some range, but now there is a difference depending on whether the ZIC trader is a buyer or a seller:
\begin{equation}
P_{bq(\text{ZIC})}(t+\Delta_t) = {\cal U}(\delta_p, \lambda_b);
\end{equation}
\begin{equation}
P_{sq(\text{ZIC})}(t+\Delta_t) = {\cal U}(\lambda_s, p_{\text{max}}).
\end{equation}
That is, a ZIC buyer randomly generates its quote-prices equiprobably from $[\delta_p, \lambda_b]$, where $\lambda_b$ is that buyer's limit price; and a ZIC seller with limit-price $\lambda_s$ generates its quote-prices equiprobably over the range $[\lambda_s, p_{\text{max}}]$. Surprisingly, markets populated by ZIC traders showed equilibration behaviors (as measured by Smith's $\alpha$ metric) and allocative efficiency scores that were virtually indistinguishable from those of the comparable human-populated markets. This notable result quickly became highly-cited, as it seemed to demonstrate that if there was any `intelligence' in the system at all, it was in the CDA market mechanism rather than residing in the traders. Gode \& Sunder were also careful to note that a different measure of surplus-extraction, called {\em profit dispersion}, was worse for ZIC traders than for human traders.
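To make the mechanics concrete, the following short Python sketch generates ZIC quote-prices. It is illustrative only: the function and variable names are my own, not those of any reference implementation, and prices are treated as integers, i.e.\ as multiples of $\delta_p$:
\begin{verbatim}
import random

def zic_quote(is_buyer, limit_price, tick_size, p_max):
    # ZIC: draw a quote-price uniformly at random from the
    # trader's loss-free range of prices (bounds inclusive).
    if is_buyer:
        # buyer: equiprobable over [tick_size, limit_price]
        return random.randint(tick_size, limit_price)
    else:
        # seller: equiprobable over [limit_price, p_max]
        return random.randint(limit_price, p_max)
\end{verbatim}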
Because quote-prices in any market are quantized by that market's tick-size $\delta_p$, the prices quoted by a ZIC trader are samples of a discrete random variable, and the probability mass function (PMF) for that variable has a rectangular profile, because the distribution is uniform, as illustrated in Figure~\ref{fig:ZIC_uniform}.
\begin{figure}
\begin{center}
\includegraphics[width=0.5\linewidth]{ZICuniform.jpg}
\end{center}
\caption{Illustrative Probability Mass Function (PMF) for the uniformly-distributed discrete quote-prices generated by a ZIC buyer (upper figure) and seller (lower figure). In this and all other PMF graphs in this paper, the horizontal axis is price, and the vertical axis is probability.}
\label{fig:ZIC_uniform}
\end{figure}
One problematic aspect of ZIC traders that becomes clear to anyone who actually implements them to run live in an operational market is that while ZIC buyers have a PMF domain that is bounded from below by the smallest nonzero price $\delta_p$ and bounded from above by the trader's limit price $\lambda_b$, the PMF domain for a ZIC seller is bounded from below by its limit price $\lambda_s$ but the upper bound is an arbitrary exogenous system limit $p_\text{max}$, the highest price allowed in the market. In theory, $p_\text{max}$ might appear to be unimportant, but if its value is set to a large multiple of the largest buyer's limit price $\lambda_{b:\text{max}}$ then the ZIC sellers will spend an awful lot of their time quoting at prices $P_{sq}(t)>\lambda_{b:\text{max}} $, i.e. at prices that can never lead to a transaction, and so the market is flooded with unfulfillable ask-quotes, and you have to wait a long time before a ZIC seller just happens to randomly generate a plausible ask, i.e. one for which $P_{sq}(t)\leq \lambda_{b:\text{max}}$. If the only issue of interest in the market is the temporally-ordered sequence of transaction prices, this is not a problem; but if you care about the actual time intervals between successive transactions, that can be greatly affected by setting $p_\text{max}$ too high. Experimenters also need to be careful to ensure that $p_\text{max}$ is not set below the highest limit-price assignable to a seller in the market, which can be a problem in practice if the seller limit-prices are generated by an unbounded random-walk process such as geometric Brownian motion with positive drift. We'll return to these issues, the ``$p_\text{max}$ problem'', in Section~\ref{sec:PRZI}.
\subsection{SHVR}
\label{sec:shvr}
Source-code for the {\em Shaver} strategy, abbreviated to SHVR, was made public in 2012 \cite{cliff_2012_bse} when it was released as one of the several trader-agent strategies available in the {\em Bristol Stock Exchange} (BSE) which is a freely-available open-source simulation of a LOB-based financial exchange, written in Python. BSE was initially developed as a teaching resource, used by masters-level students studying on a Financial Technology module taught at the University of Bristol. In the years since it was first released, it has been used by hundreds of students and has increasingly also found use as a trusted and stable test-bed for exploring research questions in ACE.
Like ZIC, SHVR is minimally simple, involving no intelligence at all. Unlike ZIC, SHVR is entirely deterministic. SHVR can be explained very simply: a SHVR buyer sets its quote-price at time $t+\Delta_t$, denoted by $P_{bq(\text{SHVR})}(t+\Delta_t)$, to be one tick (i.e., $\delta_p$) more than the current best bid $p^*_{\text{bid}}(t)$, so long as that does not exceed its limit price $\lambda_b$, i.e.:
\begin{equation}
P_{bq(\text{SHVR})}(t+\Delta_t) = \min(p^*_{\text{bid}}(t)+\delta_p, \lambda_b);
\label{eq:SHVRbid}
\end{equation}
and similarly a SHVR seller sets its quote-price to be:
\begin{equation}
P_{sq(\text{SHVR})}(t+\Delta_t) =\max(p^*_{\text{ask}}(t)-\delta_p, \lambda_s).
\label{eq:SHVRask}
\end{equation}
If the best bid or ask is undefined at time $t$, e.g. because no quotes have yet been issued, a SHVR buyer will start with a very low $P_{bq}(t)$, and a SHVR seller with a very high $P_{sq}(t)$. If there is no prior trading data available in the market, the low value could be $\delta_p$ and the high value $p_{\text{max}}$, i.e. the same lower and upper bounds as for ZIC traders; if there is prior trading data, the low and high initial values could instead be the lowest and highest prices seen in prior trading over some recent time-window.
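As an illustration, a SHVR quote can be sketched in Python as follows (again, the names here are purely illustrative, with \texttt{None} standing for an empty side of the LOB):
\begin{verbatim}
def shvr_quote(is_buyer, limit_price, best_bid, best_ask,
               tick_size, p_max):
    # SHVR: quote a one-tick improvement on the current best
    # price, but never go beyond the trader's own limit price.
    if is_buyer:
        if best_bid is None:
            return tick_size   # no bids yet: start very low
        return min(best_bid + tick_size, limit_price)
    else:
        if best_ask is None:
            return p_max       # no asks yet: start very high
        return max(best_ask - tick_size, limit_price)
\end{verbatim}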
SHVR was introduced to BSE as something of a joke, as a tongue-in-cheek approximation to a high-frequency trading algorithm. It does serve the purpose of being a minimal illustrative implementation of a trader that actually uses information available from the LOB. The surprising result (discussed in more detail in \cite{rollins_cliff_2020,cliff_rollins_2020}) is that if the circumstances are in its favour then SHVR can out-perform well-known strategies like ZIP or AA, which had previously been hailed as examples of AI-powered super-human robot-trader systems. And, in one further surprise, the same study that revealed SHVR can outperform ZIP and AA also revealed that an even simpler strategy, called GVWY, can do just as well as SHVR and sometimes better.
\subsection {GVWY}
Like SHVR, source-code for the {\em Giveaway} strategy (GVWY) was made public when the first version of BSE was released as open-source in 2012 \cite{cliff_2012_bse}.
The correlates of Equations~\ref{eq:SHVRbid} and~\ref{eq:SHVRask} for GVWY are as follows:
\begin{equation}
P_{bq(\text{GVWY})}(t+\Delta_t) = \lambda_b;
\label{eq:GVWYbid}
\end{equation}
\begin{equation}
P_{sq(\text{GVWY})}(t+\Delta_t) = \lambda_s.
\label{eq:GVWYask}
\end{equation}
As can be seen, a GVWY trader simply sets its quote price to whatever its currently-assigned limit price is, regardless of the time. As the name implies, {\em prima facie} this trading strategy gives away any chance of surplus, because there is no difference between its quote price and its limit price: if its quote results in a transaction at that price, it yields zero surplus.
However, the spread-crossing rule (which is standard in most LOB-based markets) means that it is possible for a GVWY trader to enter into surplus-generating trades. For example, consider a situation in which a GVWY buyer has a limit price $\lambda_b= \$10$, and the current best ask is $p^*_\text{ask}=\$7$: when the GVWY buyer quotes its limit price (i.e., $P_{bq}(t)=\lambda_b$), the \$10 price on the quote crosses the spread and so the GVWY buyer is matched with whichever seller issued that best ask, and the transaction goes through at a price of \$7, yielding a \$3 surplus for the GVWY buyer (and yielding whatever surplus is arising for the seller, dependent on that seller's value of $\lambda_s$).
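The spread-crossing logic can be sketched in a few lines of Python. This is a minimal illustration, assuming the usual LOB convention that a crossing quote transacts at the standing order's price; the worked example above is reproduced in the final two lines:
\begin{verbatim}
def gvwy_quote(limit_price):
    # GVWY: simply quote the assigned limit price
    return limit_price

def buyer_execution_price(buyer_quote, best_ask):
    # spread-crossing rule: a bid at or above the best ask
    # transacts at the best ask (the standing order's price)
    if best_ask is not None and buyer_quote >= best_ask:
        return best_ask
    return None  # quote rests on the LOB; no transaction yet

price = buyer_execution_price(gvwy_quote(10), 7)  # -> 7
surplus = 10 - price                              # -> 3
\end{verbatim}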
In the next section, I explain how PRZI traders have a continuously-variable strategy space that includes GVWY, ZIC, and SHVR: by setting a strategy-parameter $s$ to an appropriate value, PRZI traders will act either as one of those three ZI strategies, or as some kind of novel hybrid, intermediate between two of them; that is, a PRZI trader's response is a parameterised version of one or more of these three ZI trading strategies, hence the name.
\subsection{Summary}
\label{sec:3strats_summary}
Table~\ref{tbl:stratcompare} summarises the three ZI strategies (GVWY, SHVR, and ZIC) that are integrated within PRZI, and also includes Gode \& Sunder's ZIU, for completeness.
\begin{table}[h]
\begin{center}
\begin{tabular}{|l|l|l|l|l|}
& \multicolumn{2}{l}{\bf Buyer} & \multicolumn{2}{l}{\bf Seller} \\
\hline
{\bf Strategy} & $p_{\text{lo}}$ & $p_{\text{hi}}$ & $p_{\text{lo}}$ & $p_{\text{hi}}$ \\
\hline
GVWY & $\lambda_{b}$ & $p_\text{lo} $ & $\lambda_{s}$ & $p_\text{lo} $ \\
SHVR & $\min(p^*_{\text{bid}}(t)+\delta_p,\lambda_b)$ & $p_\text{lo} $ & $\max(p^*_{\text{ask}}(t)-\delta_p,\lambda_s)$ & $p_\text{lo}$ \\
ZIC & $\delta_p$ & $\lambda_{b}$ & $\lambda_{s}$ & $p_{\text{max}}$ \\
ZIU & $\delta_p$ & $p_\text{max}$ & $\delta_p$ & $p_{\text{max}}$ \\
\hline
\end{tabular}
\end{center}
\caption{Summary of the three ZI strategies (GVWY, SHVR, and ZIC) that are integrated within PRZI, and also of Gode \& Sunder's ZIU. Each ZI strategy generates quote-prices at random from a uniform distribution ${\cal U}(p_\text{lo}, p_\text{hi})$, although for both GVWY and SHVR $p_\text{lo}=p_\text{hi}$. The bounds on the quote-price generator distribution are different for buyers and sellers: $\lambda_b$ is the buyer's limit price; $\lambda_s$ is the seller's; $\delta_p$ is the market's tick-size; $p^*_\text{bid}(t)$ and $p^*_\text{ask}(t)$ are the best bid-price and best ask-price on the LOB at time $t$, respectively; $p_\text{max}$ is an arbitrary system constant, the largest price quotable in the market.}
\label{tbl:stratcompare}
\end{table}
\section{Parameterised-Response ZI Traders}
\label{sec:PRZI}
\subsection{Motivation}
\label{sec:przi-motivation}
The initial motivation for developing PRZI came from a desire to give ZI traders a sense of {\em urgency}, of how keen they are to find a counterparty for a transaction. Intuitively, there is a tradeoff between time-to-transact and the expected surplus (profit or saving) on that transaction. In human-populated markets, over time, while working a trade, an individual buyer will typically announce increasing bid prices, in the hope that the better prices increase the chance of finding a counterparty seller to transact with, but each of those increases in the bid-price reduces the surplus that the transaction will generate for that buyer; the situation is the same for sellers, gradually reducing their ask prices, again in the hope that each price-cut makes it more likely that a buyer will come forward, but each cut slices away at the seller's final profit for this transaction. In both cases, if the trader is urgent to get a deal done they can increase their chances of finding a counterparty by cutting their potential surplus, i.e. by making bigger step-changes in the prices that they're quoting.
Lacking the intelligence of human traders, ZIC traders just issue their next quote price by drawing from a uniform distribution over a specified range of prices; for an individual ZIC trader, the change in price from one quote to the next may be positive or may be negative, and there is no control over the step-size. ZIC traders have no sense of urgency.
In contrast, a SHVR agent {\em can} be given some approximation to urgency: in \cite{church_cliff_2019}, SHVR was extended by applying a multiplying coefficient $k \in {\mathbb Z}^+ $ to the $\delta_p$ term in Equations~\ref{eq:SHVRbid} and~\ref{eq:SHVRask}, such that when the market circumstances dictate it, a value of $k>1$ allows SHVR to make larger step-changes in its quote prices. Work on PRZI started as a response to asking: how can we do something similar for ZIC?
Figure~\ref{fig:ZIC_triangles} illustrates the reasoning underlying PRZI, when viewed as a way of making ZIC traders quote in a way that is more or less likely to lead to a transaction within any particular time-window: the rectangular PMF from the uniform distribution of the original ZIC can be replaced by a PMF whose profile is a right-angled triangle, sloping either upward or downward; depending on which way the triangle is oriented, the ZIC trader is either more or less likely to generate a quote-price that leads to a transaction. Let's refer to these two right-triangle PMFs as the {\em urgent} and {\em relaxed} variants of ZIC.
\begin{figure}
\begin{center}
\includegraphics[width=0.5\linewidth]{ZICtriangular.jpg}
\end{center}
\caption{Illustrative PMF for the variants of ZIC traders that are either {\em urgent} (upper two figures) or {\em relaxed} (lower two figures).}
\label{fig:ZIC_triangles}
\end{figure}
But why stop at these variants? Figure~\ref{fig:ZICcurves} illustrates two more extreme ZIC variants, where the PMF profile is nonlinear: let's refer to these as the {\em super-urgent} and {\em super-relaxed} variants of ZIC.
\begin{figure}
\begin{center}
\includegraphics[width=0.5\linewidth]{ZICnonlinear.jpg}
\end{center}
\caption{Illustrative PMF for the variants of ZIC traders that are either {\em super-urgent} (upper two figures) or {\em super-relaxed} (lower two figures).}
\label{fig:ZICcurves}
\end{figure}
Clearly, the degree of nonlinearity in the variant-ZIC PMFs shown in Figure~\ref{fig:ZICcurves} could be made even more extreme, and eventually when the degree of curvature is at its most extreme the PMF would have only one price at nonzero probability: the price that had the highest probability in the triangular PMF. And, as there is only one price with nonzero probability, that probability must be one (i.e., 100\%, total certainty). Let's call these absolute extremes the {\em urgent-AF} and {\em relaxed-AF} (`AF' might stand for {\em asymptotic form}); they're illustrated in Figures~\ref{fig:ZIC-AF-GVWY} and~\ref{fig:ZIC-AF-SHVR}.
\begin{figure}
\begin{center}
\includegraphics[width=0.5\linewidth]{GVWY_PMF.jpg}
\end{center}
\caption{Illustrative PMF for the {\em urgent-AF} variant of ZIC: a single price (the trader's limit-price) is quoted with probability one; the probability for all other prices is zero. This is identical to GVWY, as discussed in the text.}
\label{fig:ZIC-AF-GVWY}
\end{figure}
But the shapes of the urgent-AF PMFs in Figures~\ref{fig:ZIC-AF-GVWY} and~\ref{fig:ZIC-AF-SHVR} are familiar: the urgent-AF PMFs are simply a probabilistic representation of GVWY, because equations \ref{eq:GVWYbid} and \ref{eq:GVWYask} can be expressed in (redundantly) probabilistic language as: {\em with probability one, the price quoted by a GVWY trader is its current limit-price}.
This then prompts the question: can a useful link be made from the relaxed-AF form of ZIC to SHVR? The PMFs of SHVR and relaxed-AF ZIC have essentially the same shape; the difference is in where they occur on the number-line of prices: SHVR as specified in Equations~\ref{eq:SHVRbid} and~\ref{eq:SHVRask} quotes a price that is a one-tick (i.e.\ $1 \times \delta_p$) improvement on the current best price on the LOB; whereas a relaxed-AF ZIC would quote the most extreme price available, either the lowest nonzero price (i.e., $\delta_p$) for a buyer, or the arbitrary system maximum $p_{\text{max}}$ for a seller, neither of which is going to do anything at all to help equilibrate the market's transaction prices. Because the relaxed-AF versions of ZIC would not equilibrate, it makes most sense to move the relaxed-AF price away from the most extreme value, and toward the best price on the LOB, as the degree of nonlinearity in the variant-ZIC PMF increases. This would mean that by the time the nonlinearity is maximally extreme, i.e.\ when the PMF has the -AF shape, the variant ZIC is doing the same thing as a SHVR.
\begin{figure}
\begin{center}
\includegraphics[width=0.5\linewidth]{SHVR_PMF.jpg}
\end{center}
\caption{Illustrative PMF for SHVR which can be considered as the equilibrating {\em relaxed-AF} variant of ZIC. The only price with a nonzero probability is a one-tick improvement on the best bid or ask at time $t$, and that price has a probability of one. The arrow pointing right (left) indicates the direction of travel of the lower (upper)-bound on the PMF domain; the upper (lower) bound is given by the trader's limit-price $\lambda$.}
\label{fig:ZIC-AF-SHVR}
\end{figure}
And that is the motivation for creating PRZI: now all that is needed is a function that smoothly varies from the urgent-AF variant that is GVWY, on through the nonlinear super-urgent variant, on through the triangular-PMF of the urgent variant, then on to the original ZIC, then onwards to the triangular-PMF of the relaxed variant, and then on through the super-relaxed nonlinear PMF to the relaxed-AF PMF that implements SHVR: this progression is controlled by PRZI's strategy parameter, denoted by $s \in [-1,+1] \in {\mathbb R}$, as illustrated in Figure~\ref{fig:PRZI-PMFs}.
\begin{figure}[ph]
\begin{center}
\includegraphics[width=0.75\linewidth]{PRZI_PMFs.png}
\end{center}
\caption{The full spectrum of quote-price PMFs for PRZI: left-hand column is Buyer PMFs; right-hand column is Seller PMFs. Top row (where $s=+1.0$) is the {\em urgent-AF} PMF, equivalent to GVWY; then on each successive row the PMF envelope warps to become closer to the shape of the original ZIC PMF, which is in the middle row ($s=0.0$); after that, each lower row warps closer to the {\em relaxed-AF} PMF that implements SHVR (at $s=-1.0$). Note that the scale of the vertical axis is the same for the two graphs in each row, but varies between rows. At the upper and lower extremes where $s= \pm1$, the PMF is a single point at ${\mathbb P}(P=p)=1$; in the middle row ($s=0$), the PMF is a rectangle of height ${\mathbb P}(P=p)=1/\Delta_P$ where $\Delta_P$ is the difference between the minimum and maximum prices that form the bounds of the PMF on the horizontal axis. }
\label{fig:PRZI-PMFs}
\end{figure}
\subsection{Details}
\label{sec:przi-details}
The various seller PMFs shown in Figure~\ref{fig:PRZI-PMFs} have a domain that is bounded from below (the left-hand limit of the PMF) by the individual seller's limit-price $\lambda_s$, but the upper bound on that domain, the right-hand limit of the PMF, brings us back to ``the $p_{\text{max}}$ problem'' discussed previously in Section~\ref{sec:ZIC}. Each PRZI seller $i$ addresses this problem by determining its own private estimate of the highest plausible price, denoted by $p_{i:\text{max}}$, according to the following set of heuristic criteria:
\begin{itemize}
\item
If the PRZI seller has no other information available to it (e.g., it is the start of the first session in a market experiment, and the LOB is empty) then the only available information it has is the highest limit price it has been assigned thus far, denoted by $\lambda_{s(i:\text{max})}(t)$ (and if it has not been assigned a limit price, it is unable to participate in the market). In this situation, the PRZI seller sets $p_{i:\text{max}}=c_i \lambda_{s(i:\text{max})}(t)$ for some coefficient $c_i>1$. Informally, this models a naive and uninformed seller making a wild guess at the highest price that the market might tolerate. Different sellers can have different values of $c_i$ -- that is, some might guess cautiously while others might be much more optimistic.
\item
If ever another seller issues an ask-quote at a price $P_{sq} > p_{i:\text{max}}$ then seller $i$ sets $p_{i:\text{max}}=P_{sq}$. Informally, this models a seller realising that her current best guess of the highest price the market will tolerate was too low, because there is some other seller in the market who has quoted a higher price.
\end{itemize}
In principle, if reliable price information is available from an earlier market session, then that information could be used instead. However, in any one previous market session there are likely to be multiple potential candidate values (e.g: the highest ask-price quoted in that session, or the highest transaction-price in that session, or the final transaction price, or the linear-regression prediction of the next transaction price in the market, or a nonlinear prediction, and so on), and that would soon take us away from the appealing minimalism of ZI traders.
This approach, of letting each seller form its own private estimate of the highest tolerable ask-price, could be criticised as merely swapping one arbitrary system parameter (the global constant $p_{\text{max}}$) for a bunch of new arbitrary exogenously-imposed parameters (the set of individual $c_i$ values, one per trader). If all we care about is minimising parameter-count, that would be a valid criticism. However, this approach has the advantage that it uses only information that is locally available to an individual trader (i.e., it does not require knowledge of a system-wide constant) and it never requires manual re-calibration to the highest price in the market's supply curve. In practice, setting the $c_i$ values at random over a moderate range is sufficient to generate useful results: the results presented here used $c_i={\cal U}(1,10)^{0.5}$; this makes it unlikely that all traders will have the same $c_i$ value, and the fractional exponent gives a nonlinear bias toward smaller $c_i$ values.
Finally note that, if introduced alone, this solution to the $p_\text{max}$ problem can give rise to behavioral asymmetries between PRZI buyers and sellers when $s_i \approx -1$ (i.e., SHVR-style strategies), because PRZI buyers still have the lower-bound of their PMF set to the system minimum price $\delta_p$. That is, unlike the sellers, the buyers are not using some multiple of their lowest limit-price or any prices observed in the market to form their initial lowest bid-price. To illustrate the problem, consider a market populated entirely by PRZI traders all with $s_i=-1$ and in which the supply and demand schedules are set such that the equilibrium price is a large multiple of $\delta_p$ relative to the number of buyers: the sellers, no longer limited by having to start their quotes at some arbitrary system maximum price, will drop their quote prices to near-equilibrium values relatively quickly, but in contrast the SHVR-style buyers will start by quoting $\delta_p$, then $2\delta_p$, then $3\delta_p$ and so on, potentially making much slower progress toward the equilibrium if that lies at, say, $10,000\delta_p$: there is, in this sense, also a $p_\text{min}$ problem. To counter this, PRZI buyers can set their $p_\text{min}$ in the same style that PRZI sellers set their $p_\text{max}$: initially using $p_{i:\text{min}} = \max(\frac{1}{c_i}\lambda_{b(i:\text{min})},\delta_p)$ in the absence of any other information, and using the lowest price quoted by another buyer if that quote is less than $i$'s initial $p_{i:\text{min}}$ value.\footnote{To illustrate the behavioral asymmetry that arises when the $p_{\text{min}}$ problem is not remedied in this way, in all the experiments described in Section~\ref{sec:coevolve}, the PRZI buyers simply use the original ZIC-style $p_{\text{min}}=\delta_p$: the strategies of the buyer population then consistently diverge from those of the sellers. This nevertheless does yield rich co-evolutionary dynamics, serving to illustrate issues in the visualization and analysis of such high-dimensional co-evolutionary systems. }
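The seller-side heuristic just described can be sketched as follows (a minimal Python illustration, with names of my own choosing; the buyer-side $p_{i:\text{min}}$ version is the mirror image):
\begin{verbatim}
import random

def draw_c():
    # trader-specific coefficient c_i > 1; the fractional
    # exponent biases the draw toward smaller values
    return random.uniform(1, 10) ** 0.5

def init_pmax(highest_limit_so_far, c):
    # naive uninformed guess: a multiple c > 1 of the highest
    # limit price this seller has been assigned so far
    return int(round(c * highest_limit_so_far))

def update_pmax(p_i_max, observed_ask_price):
    # another seller quoted higher: raise the estimate
    return max(p_i_max, observed_ask_price)
\end{verbatim}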
As described thus far, we have a method for setting the upper limit of the PRZI seller PMF when $s \geq 0$,
i.e., for the PRZI range of strategies from ZIC to GVWY. However, for the $s=-1$ seller-case that implements
SHVR, we need the upper limit of the PMF to be set such that $p_{i:\text{max}}=p^*_{\text{ask}}-\delta_p$, and
we need to get there smoothly from the $s=0$ case where PRZI is implementing ZIC and $p_{i:\text{max}} $ and $p_{i:\text{min}} $ are
set by the method just described. The simplest way of doing that is to have $p_{i:\text{max}}$ be a linear
combination of the two. For a PRZI seller, let $p_{i:\text{max:ZIC}}$ denote the $p_{i:\text{max}} $ value at $s=0$, then for $s \in [-1,0]$
we use:
\begin{equation}
p_{i:\text{max}} = (1+s)p_{i:\text{max:ZIC}} - s \cdot \max(p^*_{\text{ask}}-\delta_p,\lambda_{s:i})
\end{equation}
\noindent
and the same form of linear combination for PRZI buyers, to pull the lower limit on the buyer PMFs progressively away from $p_{i:\text{min:ZIC}} \rightarrow \min(p^*_\text{bid}+\delta_p,\lambda_{b:i})$. In extremis, this approach will narrow the PMF interval to just a single discrete price, i.e.\ $p_\text{min}=p_\text{max} = \lambda_{b|s}$, generated with probability one.
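In code, the seller-side blend might be sketched as follows (illustrative only; the buyer-side case is symmetric):
\begin{verbatim}
def seller_pmax(s, pmax_zic, best_ask, tick_size, limit_price):
    # for s in [-1, 0]: linear blend between the ZIC-style upper
    # bound (at s = 0) and the SHVR-style one-tick improvement
    # on the best ask (at s = -1)
    assert -1.0 <= s <= 0.0
    shvr_bound = max(best_ask - tick_size, limit_price)
    # round to an integer tick in a real implementation
    return (1 + s) * pmax_zic - s * shvr_bound
\end{verbatim}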
Next, let $s_i$ denote the strategy-value for trader $i$; and let $p_{i:\text{min}} \in {\mathbb Z}^+$ and $p_{i:\text{max}} \in {\mathbb Z}^+$ be the bounds on trader $i$'s discrete-valued price-range $[p_{i:\text{min}}, p_{i:\text{max}}]$, with $p_{i:\text{min}} < p_{i:\text{max}}$ and let the extent of that range be $r_i=p_{i:\text{max}}-p_{i:\text{min}}$. Also define a price-range normalization function $N(p)$:
\[
N: [p_{i:\text{min}}, p_{i:\text{max}}] \in {\mathbb Z}^+ \mapsto [0,1] \in {\mathbb R} ; N(p)=(p-p_{i:\text{min}})/r_i
\]
Then note that over the domain $x \in [0,1] \in {\mathbb R}$, the function ${\cal P}$ in Equation~\ref{eq:P-curves} has the right profile for the buyer PMFs that we want (as illustrated in Figure~\ref{fig:PRZI-PMFs}):
\begin{equation}
{\cal P}(x,s_i) =
\begin{cases}
\frac{e^{cx}-1}{e^c-1} & \text{if } s_i>0 \\
\frac{1}{r_i} & \text{if } s_i=0 \\
1-\frac{e^{cx}-1}{e^c-1} & \text{if }s_i<0
\end{cases}
\label{eq:P-curves}
\end{equation}
\noindent
where
\begin{equation}
c = \theta(m\tan(\pi(s_i+\frac{1}{2})))
\label{eq:c_fn}
\end{equation}
and $\theta(x)$ is the linear-rectifier threshold function symmetrically bounded by a cutoff constant $\theta_0$, which also (to avoid divide-by-zero errors) clips near-zero values at $\pm \epsilon$ for some sufficiently small value of $\epsilon$ (e.g.\ $\epsilon=10^{-6}$):
\begin{equation}
\theta(x) =
\begin{cases}
\max(-\theta_0, \min(\theta_0, x)) & \text{if } |x|>\epsilon \\
\epsilon & \text{if } 0 \leq x \leq \epsilon \\
-\epsilon & \text{if } -\epsilon \leq x < 0
\end{cases}
\label{eq:threshold_fn}
\end{equation}
\noindent
and then finally use:
\begin{equation}
{\cal PMF}_i(p,s_i) =
\begin{cases}
{\cal P}(N(p),s_i) & \text{when $i$ is a buyer} \\
{\cal P}(1-N(p),s_i) & \text{when $i$ is a seller}
\end{cases}
\label{eq:PMFs}
\end{equation}
After a little trial-and-error exploration, it was found that values for the constants $m=4$ and $\theta_0=100$ give the desired shapes, i.e.\ similar to the qualitative PMF envelopes illustrated in Figure~\ref{fig:PRZI-PMFs}: these are illustrated in Figure~\ref{fig:P_raw}. Also, turning the ${\cal PMF}_i$ envelope into a usable PMF requires scaling and normalisation such that:
\begin{equation}
{\mathbb P}(P=p)=\frac{{\cal PMF}_i(p, s_i)}{\sum\limits_{j=0}^{r_i}{\cal P}(j/r_i,s_i)}: p \in [p_{i:\text{min}}, p_{i:\text{max}}]
\label{eq:P-PMF}
\end{equation}
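Pulling Equations~\ref{eq:P-curves} to~\ref{eq:P-PMF} together, the following Python sketch computes a normalised PMF for a PRZI trader. This is a compact illustration rather than the BSE reference implementation, and it assumes $p_{i:\text{min}} < p_{i:\text{max}}$:
\begin{verbatim}
import math

M, THETA0, EPS = 4.0, 100.0, 1e-6  # m, theta_0, epsilon

def theta(x):
    # linear rectifier: clip at +/-THETA0, push near-zero
    # values out to +/-EPS to avoid divide-by-zero
    if abs(x) > EPS:
        return max(-THETA0, min(THETA0, x))
    return EPS if x >= 0 else -EPS

def cal_P(x, s, r):
    # envelope function P(x, s) over x in [0, 1]
    if s == 0:
        return 1.0 / r
    c = theta(M * math.tan(math.pi * (s + 0.5)))
    shape = (math.exp(c * x) - 1.0) / (math.exp(c) - 1.0)
    return shape if s > 0 else 1.0 - shape

def przi_pmf(s, p_min, p_max, is_buyer):
    # normalised PMF over the integer prices [p_min, p_max];
    # sellers use the mirror-image envelope, per Eq. for PMF_i
    r = p_max - p_min
    xs = [(p - p_min) / r for p in range(p_min, p_max + 1)]
    raw = [cal_P(x if is_buyer else 1.0 - x, s, r) for x in xs]
    total = sum(raw)
    return [v / total for v in raw]
\end{verbatim}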
\begin{figure}
\includegraphics[width=0.99\linewidth]{P_raw_envelopes.png}
\caption{Plots for ${\cal P}(x, s_i)$ (Equation~\ref{eq:P-curves}) for values of $s_i$ over the range $[0,1]$ in steps of $0.1$: horizontal axis is $x \in [0,1]$ in steps of 0.01; vertical axis is ${\cal P}$. The ${\cal P}$ curves have the desired shape, but require normalisation before use as PMF envelopes.}
\label{fig:P_raw}
\end{figure}
\begin{figure}
\includegraphics[width=0.99\linewidth]{P_PMF_envelopes.png}
\caption{Log-linear plots for ${\mathbb P}(P=p)$ (Equation~\ref{eq:P-PMF}) for values of $s_i$ varying from 0 to 1 in steps of $0.1$: horizontal axis is the interval $p \in [0,1]$ in steps of 0.01; vertical axis is ${\mathbb P}$. The ${\mathbb P}$ curves are normalised to have a discrete definite integral over $p=[0,1]$ equal to 1.0, and hence are valid as PMF envelopes. }
\label{fig:P_PMF}
\end{figure}
From this we can then compute the cumulative distribution function (CDF) for trader $i$ as $F_P(p)={\mathbb P}(P\leq p)$, which is defined over the domain $[p_{i:\text{min}}, p_{i:\text{max}} ]$ as:
\begin{equation}
F_P(p)=\sum\limits_{k=0}^{p-p_{i:\text{min}}}{\mathbb P}(P=p_{i:\text{min}}+k)
\label{eq:PRZI_CDF}
\end{equation}
The $F_P(p)$ CDF in Equation~\ref{eq:PRZI_CDF} maps from its domain $[p_{i:\text{min}}, p_{i:\text{max}} ]$ to cumulative probabilities in the interval $[0,1]\in{\mathbb R}$: for each of the $r_i+1$ discrete points in the domain, exact values of $F_P(p)$ can be computed and stored in a look-up table (LUT). It is then simple to use reverse-lookup on the LUT to give an inverse CDF function $\widehat{F_P^{-1}}(c)$ such that:
\begin{equation}
\widehat{F_P^{-1}}(c):[0,1]\in {\mathbb R} \mapsto [p_{i:\text{min}}, p_{i:\text{max}} ]\in {\mathbb Z}
\end{equation}
\noindent
Separate versions of $\widehat{F_P^{-1}}(c)$ need to be generated for buyers and sellers, which will be denoted by subscripted parenthetic $s$ and $b$ characters, and can then be fed samples from a uniformly distributed pseudo-random number generator to produce quote-prices for PRZI traders:
\begin{equation}
P_{bq({\text {PRZI}})}(t+\Delta_t)=\widehat{F_{P(b)}^{-1}}({\cal U}(0,1))
\label{eq:inv_cdf_buy}
\end{equation}
\begin{equation}
P_{sq({\text {PRZI}})}(t+\Delta_t)=\widehat{F_{P(s)}^{-1}}({\cal U}(0,1))
\label{eq:inv_cdf_sell}
\end{equation}
And note that in the special case when $s=0$, $\widehat{F_{P}^{-1}}$ reduces to a simple linear map from $[0,1]$ onto $[p_{i:\text{min}}, p_{i:\text{max}}]$, recovering ZIC's uniform distribution of quote-prices.
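The LUT construction and reverse-lookup can be sketched as follows, continuing the illustrative \texttt{przi\_pmf} sketch above (here the reverse lookup is done with a binary search via Python's \texttt{bisect} module):
\begin{verbatim}
import bisect, random

def build_cdf_lut(probs):
    # LUT[k] = F_P(p_min + k), by cumulative summation
    lut, acc = [], 0.0
    for pr in probs:
        acc += pr
        lut.append(acc)
    lut[-1] = 1.0  # guard against floating-point drift
    return lut

def przi_quote(lut, p_min):
    # inverse-CDF sampling: reverse lookup of U(0,1) in the LUT
    u = random.random()
    return p_min + bisect.bisect_left(lut, u)

# usage:
#   probs = przi_pmf(s, p_min, p_max, is_buyer=True)
#   quote = przi_quote(build_cdf_lut(probs), p_min)
\end{verbatim}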
That completes the definition of PRZI traders. Section~\ref{sec:implement} discusses implementation issues, and Section~\ref{sec:results} presents illustrative results.
\section{PRZI Implementation}
\label{sec:implement}
A reference implementation of PRZI written in {\em Python 3.8} has been added to the BSE source-code repository on GitHub \cite{cliff_2012_bse}. For ease of intelligibility, the implementation follows the mathematics laid out in the previous section of this paper, computing an individual LUT for each trader:
this approach is conceptually simple, but is manifestly inefficient in space and in time. To illustrate this, consider the case where all traders in the market have the same value $s$ for their PRZI strategy parameter, and all sellers are assigned the same limit price $\lambda_s$ and all buyers the same $\lambda_b$: in such a situation, only two LUTs are needed: one to be shared among all buyers, and another to be shared among all sellers; but the current implementation wastes time and space by blindly computing an entire LUT for each trader. A more efficient implementation could be built around compiling any one LUT for a particular instance of an $(s, p_{\text{min}}, p_{\text{max}})$ triple only once, when it is first required, and storing it in some central shared key-value store or document database where the triple is the key and the LUT is the associated value or document. This would be considerably more efficient, but would add significantly to the complexity of the code. As the Python code in BSE is intended to be simple, as an illustrative aid for non-expert programmers, the current BSE implementation does not use this approach.
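To sketch the shared-LUT optimisation just described (again illustrative only, and not how the current BSE code works): memoise one LUT per distinct triple, using a dictionary as the central key-value store, and reusing the \texttt{przi\_pmf} and \texttt{build\_cdf\_lut} sketches from the previous section:
\begin{verbatim}
_lut_cache = {}  # key: (s, p_min, p_max, is_buyer) -> LUT

def get_lut(s, p_min, p_max, is_buyer):
    # compile each LUT only once, on first use, then share it;
    # s is rounded so that float keys compare reliably
    key = (round(s, 6), p_min, p_max, is_buyer)
    if key not in _lut_cache:
        probs = przi_pmf(s, p_min, p_max, is_buyer)
        _lut_cache[key] = build_cdf_lut(probs)
    return _lut_cache[key]
\end{verbatim}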
\section{Example Use-Cases}
\label{sec:results}
Section~\ref{sec:intro} listed three motivations for developing PRZI: to provide a mechanism for ZI traders to give a market-impact response; to enable ZI traders to be {\em opinionated}, thereby enabling the creation of ACE models exploring matters arising from Shiller's notion of {\em narrative economics}; and to facilitate the study of coevolutionary dynamics in markets populated by adaptive agents that can smoothly vary their trading strategies through a continuous space. Here I briefly summarise current work-in-progress on all three of those fronts, in sequence.
\subsection{PRZI as a Generator of Market-Impact Responses}
\label{sec:impact}
In \cite{church_cliff_2019} I introduced an altered version of the SHVR zero-intelligence trader strategy that is extended to be imbalance-sensitive, altering its behavior in response to instantaneous imbalances in market supply and demand, thereby giving ZI-populated ACE models in which the population of traders exhibit a market-impact effect. Market impact is here defined as the situation where the prices quoted by traders in a market shift in the direction of anticipated change in the equilibrium price, before any transactions have occurred, where the coming change in equilibrium price is anticipated because of an imbalance in the orders in the market, a sudden shift to excess demand or supply; and where it's safe to assume that actual market transaction prices are typically close to the equilibrium price. For an extensive and insightful analysis of market impact in financial markets, see \cite{farmer_etal_2013}.
As the basic SHVR was made sensitive to {\em imbalance} for the purpose of market {\em impact}, the extended SHVR was named ISHV (pronounced ``eye-shave''). Here I briefly show how exactly the same mechanism developed for ISHV can be used to create an impact-sensitive version of PRZI, which I'll refer to as IPRZI.
ISHV's impact-sensitivity is based on the difference between the current market {\em mid-price}, denoted here by $p_m(t)=(p^*_{\text{bid}}(t)+p^*_{\text{ask}}(t))/2$, and the current market {\em micro-price}, denoted here by $p_\mu(t)$, where
\begin{equation}
p_\mu(t)=\frac{ p^*_{\text{ask}}(t) q^*_{\text{bid}}(t) + p^*_{\text{bid}}(t) q^*_{\text{ask}}(t) } {q^*_{\text{bid}}(t)+q^*_{\text{ask}}(t)}
\label{eq:p_mu}
\end{equation}
\noindent
in which $p^*_{\text{ask}}(t)$ is the price of the best ask at time $t$ -- i.e., it is the price at the top of the ask-side of the CDA market's limit order book (LOB); $p^*_{\text{bid}}(t)$ is the price of the best bid at time $t$ -- i.e.\ the price at the top of the bid-side of the LOB; $q^*_{\text{ask}}(t)$ is the total quantity available at $p^*_{\text{ask}}(t)$; and $q^*_{\text{bid}}(t)$ is the total quantity available at $p^*_{\text{bid}}(t)$. Equation~\ref{eq:p_mu} is how the micro-price is defined by \cite{cartea_etal_2015}.
When there is zero supply/demand imbalance at the top of the LOB (i.e., $q^*_{\text{bid}}(t) = q^*_{\text{ask}}(t)$), Equation~\ref{eq:p_mu} reduces to the equation for the market mid-price, and hence the difference between the two prices, denoted by $\Delta_m(t)=p_\mu(t) - p_m(t)$, is zero. However if $\Delta_m(t) \gg 0$ then the imbalance indicates that subsequent transaction prices are likely to increase (which should increase urgency in IPRZI buyers, and reduce it in IPRZI sellers); and if $\Delta_m(t) \ll 0$ then the indication is that subsequent transaction prices are likely to fall (so IPRZI sellers should increase urgency, while buyers relax). The mapping from $\Delta_m(t)$ to IPRZI $s$-value is achieved by giving each IPRZI trader $i$ an {\em impact function}, denoted here by ${\cal I}_i$, such that $s_i(t)={\cal I}_i(\Delta_m(t))$ and ${\cal I}_i : {\mathbb Z} \mapsto [-1.0,+1.0] \in {\mathbb R}$. This form of IPRZI was recently implemented by my student Owen Coyne in his Masters thesis \cite{coyne_2021}.
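As a minimal sketch, the micro-price calculation and one hypothetical impact function might look as follows; the clipping function and its \texttt{scale} parameter here are my own illustrative choices, not a specification of ${\cal I}_i$:
\begin{verbatim}
def micro_price(best_bid, best_ask, q_bid, q_ask):
    # quantity-weighted blend of the best bid and best ask
    return ((best_ask * q_bid + best_bid * q_ask)
            / (q_bid + q_ask))

def delta_m(best_bid, best_ask, q_bid, q_ask):
    # micro-price minus mid-price
    mid = (best_bid + best_ask) / 2.0
    return micro_price(best_bid, best_ask, q_bid, q_ask) - mid

def impact_fn(d, scale=50.0):
    # hypothetical I_i: clip Delta_m / scale into [-1, +1]
    return max(-1.0, min(1.0, d / scale))

# e.g. an IPRZI buyer: s_i = impact_fn(delta_m(...));
# a seller might use the negation, -impact_fn(...)
\end{verbatim}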
As illustration, Figure~\ref{fig:przi_mktimpact} shows the change in strategy of an IPRZI buyer when it reacts to a sudden change in imbalance, a sudden injection of excess demand at the top of the LOB.
\begin{figure}
\begin{center}
\includegraphics[width=0.6\linewidth]{przi_scatter.png}
\end{center}
\caption{Illustration of shift in quote-prices from an IPRZI buyer reacting as the supply/demand imbalance changes. The horizontal axis is time $t$ in seconds, and the vertical axis is price. Each marker shows a price quoted by a single IPRZI buyer with a current limit price $\lambda=\$150$. Initially there is no imbalance, so the IPRZI buyer has $s=0$ and is generating quote-prices from a uniform distribution, i.e.\ it is playing the ZIC strategy. At $t=10$ an imbalance is deliberately introduced into the market, a sudden injection of excess demand, and the IPRZI buyer reacts by shifting its $s$ value to increase {\em urgency}, with the PMF becoming heavily skewed toward $\lambda$: the resultant change in the distribution of actual quote prices is manifest.
}
\label{fig:przi_mktimpact}
\end{figure}
This version of IPRZI is the simplest to articulate, but it suffers from the vulnerability that Equation~\ref{eq:p_mu} is sensitive only to imbalances at the very top of the LOB -- any imbalance at deeper levels of the LOB is simply ignored, and hence this is quite a fragile measure of imbalance. This is discussed at more length by \cite{zhang_cliff_2021} who instead use {\em multi-level order-flow imbalance}\/ (MLOFI) as a more robust imbalance metric. Briefly, MLOFI is a specific implementation of the novel findings of
\cite{cont_cucuringu_zhang_2021} who used a principal component analysis of {\em order-flow imbalance} (i.e., whether the number/amount of buy orders submitted to the exchange/LOB is in balance with the number/amount of sell orders, or not) to show that taking into account multiple levels of the LOB when defining order-book supply/demand imbalance leads to higher explanatory power in short-term prediction of market-impact price movements.
It is straightforward to extend IPRZI by replacing the ISHV-like $s_i(t)={\cal I}_i(\Delta_m(t))$ method described here with the MLOFI method developed by Zhang \& Cliff, thereby making IPRZI more robustly sensitive to order imbalances: early results from exploring this approach are presented in \cite{cliff_zhang_taylor_2022}.
\subsection{PRZI as Opinionated Traders}
\label{sec:opinions}
Lomas \& Cliff \cite{lomas_cliff_2021} describe results from extending two well-known types of ZI trader, Gode \& Sunder's ZIC \cite{gode_sunder_1993} and Duffy \& {\"U}nver's NZI \cite{duffy_unver_2006}, where in both cases the extension adds an {\em opinion}\/ variable to each trader. This forms a novel intersection between work on ZI traders and work in {\em opinion dynamics} (see e.g. \cite{krause_2000,meadows_cliff_2012,meadows_cliff_2013}). Lomas \& Cliff prefix the ZIC and NZI acronyms with an O (for `opinionated') to give OZIC and ONZI. As was discussed in Section~1 of this paper, Lomas \& Cliff's work was motivated by the desire to develop ZI-trader agent-based models (ABMs) in the ACE tradition that would facilitate study of Shiller's recent notion of {\em narrative economics} \cite{shiller_2017,shiller_2019}; and one of the motivations for developing PRZI was to address some deficiencies in the original OZIC model.
To recap, in brief: Shiller argues that traditional economic analyses too often under-emphasise (or wholly ignore) the extent to which the buying and/or selling behavior of agents within a particular market-based system is influenced by the {\em narratives} (i.e., the stories) that the agents tell each other -- and themselves -- about the past, present, and future states of that market system: if the narratives that the agents are telling each other are consistent with conventional economic theory, then there is nothing much to report; but if the stories in circulation run counter to the predictions of theory, then sometimes the actual market outcomes are difficult or impossible to explain by reference only to those factors favoured in orthodox economic analyses. Shiller's 2019 book discusses at length the extent to which the phenomenal rise in prices of cryptocurrencies such as Bitcoin is much better explained by reference to the narratives circulated and believed by the active participants in the markets for such crypto ``assets'' than it is by any conventional analysis or estimates of ultimate value.\footnote{Shiller's 2019 book pre-dates the explosive rise of trading in blockchain-backed non-fungible tokens and the roller-coaster price gyrations in ``meme stocks'' such as GameStop (see e.g.\ \cite{jakab_2022}) and hence now seems ripe for a second edition, or at least for an epilogue to the first edition.} Shiller argues for an empirical approach to studying narrative economics: gathering as much data (e.g., records of the texts of news-articles and social-media discussions) as is practicable about the narratives circulating among agents in real economic systems, and then tying analysis of these narratives to analyses of the actual market dynamics and eventual outcomes. As Kenny Lomas and I argued in \cite{lomas_cliff_2021}, what Shiller proposes is solely an {\em a posteriori} analysis of the roles of narratives in economic systems, but there is an alternative approach, which is to build {\em constructive} models of economic systems in which narratives are an important factor, using ZI/MI ABM/ACE methods. This can be done by recognising that what Shiller describes as {\em narratives} are nothing more than the external expressions, the verbalizations, of agents' internally-held {\em opinions}, and hence that issues in narrative economics can be studied by developing ABMs in which the trader-agents each hold an opinion that can to some extent influence the opinion of other traders in the system, and that can in turn to some extent be influenced by the opinions of other agents that the agent interacts with; so long as each agent's opinion then also to some extent influences its economic behavior, the overall ABM can act as a test-bed for exploring aspects of narrative economics, in much the same way as laboratory experiments involving a few tens of human subjects acting as traders in a CDA can provide genuine insights into the dynamics of real financial markets. The \cite{lomas_cliff_2021} paper was our first report on extending ZI/MI traders so that they also held opinions that not only influenced their own trading behaviors but also could influence the opinions of other traders too; but, as is the case with very many exploratory first-attempts, analysis of our initial results revealed problems that needed to be addressed in further work.
Figure~\ref{fig:zic_ozic_przi} illustrates the problem with OZIC traders that PRZI is intended to remedy: OZIC uses the value of the trader's opinion variable to set an {\em opinionated limit price} (here denoted by $\Omega$) which becomes a bound on the trader's PMF, introducing a region on the PMF between $\Omega$ and the trader's original limit price $\lambda$ in which the PMF is at the zero probability level, rather than at the positive uniform-probability level that it would be in a ZIC trader. While this does give a desirable link between the trader's opinion and the prices that it can quote, it is too easy for the opinion-dependent ranges of zero-probability in the buyer and seller PMFs to eliminate the overlap that is required for there to be any likelihood of the randomly-generated ZIC quote-prices crossing and leading to transactions. When this happens, each side issues quotes that are unacceptable to the counterparty side, and the market simply grinds to a halt, with traders on both the buyer-side and the seller-side quoting prices that their respective counterparty side can no longer transact at.
This problem is at its most acute if the supply and demand schedules are each perfectly elastic, as used e.g.\ by \cite{smith_1965_swastika} (in this case both the supply and demand curves are horizontal and flat over the range of available quantities, giving a graph of the supply and demand curves that various authors have referred to as {\em swastika-} or {\em box-} shaped: see e.g. \cite{smith_1994}). As illustration, consider such a perfectly elastic pair of schedules and assume the best-case overlap in buyer and seller PMFs such that all buyers have the same limit price $\lambda_b$ that is set to the system maximum price ($\bar{M}$ in Lomas \& Cliff's terminology, or $P_{\text{max}}$ here) and all sellers have the same limit price $\lambda_s$ that is set to the system minimum price ($\underbar{\em M}$ in Lomas \& Cliff's terminology, or $\delta_p$ here). It is easy to prove that in these circumstances, if the market is populated entirely by OZIC traders and each trader holds a neutral opinion (i.e., has an opinion value of zero) then the buyer and seller PMFs cease to overlap for all traders, at which point -- given that we're talking about a situation in which all buyers have the same PMF and all sellers have the same PMF -- transactions cease to occur.
This issue in OZIC is a consequence of what could be characterised as the binary, thresholded nature of OZIC's implementation of opinion-influenced quote-price generation: the PMF for an OZIC trader is starkly divided into two zones by the trader's current value of $\Omega$: in one zone the PMF is a simple uniform distribution (as in ZIC) and in the other the PMF is a constant zero. PRZI's smoothly-varying PMFs obviate this problem because they allow a graded response, with probabilities being reduced to much lower values in the OZIC ``zero-zone'' than in the OZIC ``uniform zone'', while maintaining a nonzero possibility of a transaction actually occurring.
\begin{figure}
\begin{center}
\includegraphics[width=0.5\linewidth]{zic_ozic_przi.png}
\end{center}
\caption{Comparison of PMFs for ZIC, OZIC, and PRZI buyers and sellers. Three pairs of illustrative PMFs are shown here, stacked vertically for ease of comparison: $\lambda_s$ is the seller's limit price and $\lambda_b$ is the buyer's limit price; $\delta_p$ is the minimum price quotable in the market (here, one tick above zero) and $P_{\text{max}}$ is the market's maximum allowable price. The upper pair of PMFs show ZIC: transactions can only occur between the buyer and the seller if $\lambda_s < \lambda_b$: this overlap zone in the two PMFs is illustrated by the area of diagonal hatching. The middle pair of PMFs show OZIC: the two PMFs are each truncated by the trader's {\em opinionated limit price}, denoted here by $\Omega_b$ and $\Omega_s$ for the buyer and the seller respectively: this truncation can readily eliminate the PMF overlap zone, thereby setting the probability of any future transactions to zero. The lower pair of PMFs show how PRZI PMFs can be set to attenuate toward the overlap zone, diminishing its relative proportion of the overall PMF, while retaining a nonzero probability of transactions occurring.
}
\label{fig:zic_ozic_przi}
\end{figure}
As in Lomas \& Cliff's work, here we give PRZI traders a real-valued opinion variable, denoted here as $\omega_i$ for trader $i$, s.t. $\omega_i \in [-1,+1] \in {\mathbb R}$, where negative values of $\omega$ represent the opinion that prices are set to fall, and positive values represent the opinion that prices will rise. The linkage between $\omega$ and the PRZI $s$-value is straightforward: if trader $i$'s opinion is that prices will rise, then if $i$ is a buyer it needs to bias its PMF toward {\em urgency}\/ but if it is a seller then the rational thing to do when prices look set to rise is to bias the PMF toward {\em relaxed}; and if the trader's opinion is that prices will fall, similar reasoning applies {\em mutatis mutandis}.
As should by now be obvious, we need some function $F_i$ that maps from trader $i$'s opinion $\omega_i$ to its PRZI strategy $s_i$, i.e.\ $s_i = F_i(\omega_i)$. In the very simplest case, given that both $s_i$ and $\omega_i \in [-1,+1] \in {\mathbb R}$, the mapping can be the identity function, or the negative of the identity function, depending on whether $i$ is a buyer or a seller. For a buyer, the simplest $F_i$ is identity: $F_i(+1)=+1$; $F_i(-1)=-1$. For a seller, the simplest $F_i$ is negative identity: $F_i(+1)=-1$; $F_i(-1)=+1$. However, this fairly rapidly moves the trader's strategy to extremes (either SHVR or GVWY) as $|\omega| \rightarrow 1$, which may not always be desirable: instead, some nonlinear mapping resembling a logit/probit function is generally a better choice for $F_i$.
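One plausible such mapping (a sketch: the $\tanh$ form and the gain $\beta$ are illustrative assumptions, not part of the PRZI definition) is
\[
F_i(\omega_i) = \pm\tanh(\beta\,\omega_i),
\]
taking the positive sign for buyers and the negative sign for sellers: for moderate gains (e.g.\ $\beta \approx 1.5$) this is near-linear around $\omega_i=0$ but approaches the extreme strategies $s_i=\pm 1$ only asymptotically, softening the rush toward SHVR or GVWY.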
The link $F_i$ provides between $\omega_i$ and $s_i$ is, as stated thus far, necessary but not entirely sufficient: it provides a linkage between a trader's opinion and its PRZI strategy, and if this work was conducted in the manner familiar from much of the opinion dynamics (OD) literature, that {\em would}\/ be sufficient, because a lot of research in OD has been directed at the study of shifts in opinion among a population of agents where the {\em only}\/ factor that might change an individual's opinion is some set of one or more interactions with one or more other agents in the population. That approach makes sense if the opinions being modelled are {\em solely}\/ matters of individual subjective choice, such as personal religious or political views, where there is no external referent, no absolute {\em ground truth} that could prove the individual's opinion to in fact be wrong. But in financial markets there {\em is}\/ a ground truth: the actual dynamics of the actual market; actual prices, actual volumes. If an entire population of agents believes that the price of some asset will rise tomorrow, that can be a self-fulfilling prophecy because all traders will act in a way consistent with the collective belief (this takes us onto the well-trodden path toward sunspot equilibria \cite{cass_shell_1983} and all the way back to Merton's classic work in the 1940's e.g.\ \cite{merton_1948}).
However, if half the population believes that the price will rise while the other half believes it will fall and the next day the price does actually rise, then half the population got it wrong and may need to revise their faith in their own opinions to reduce the mismatch between their opinions and the ground truth. Given that another motivation for designing PRZI, another use-case (as discussed in Section~\ref{sec:impact}), was to explore market-impact effects by giving PRZI traders a sensitivity to supply/demand imbalances, currently I'm working with students to explore ABMs in which the PRZI trader's opinion-value is influenced by an appropriate mix of its OD-style interactions with other agents in the population, and its analysis of the currently observable market situation (e.g. its calculation of simple imbalance metrics such as the ISHV-style $\Delta_m$ value, or MLOFI). In the spirit of ZI and minimal-intelligence modelling, a simple weighted linear combination of the two is as good a place to start as any, but this is a rich seam for further research with many alternative approaches to explore.
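As a sketch of what that starting point might look like (an illustrative assumption, not a tested model design), trader $i$'s opinion could be updated as
\[
\omega_i(t+1) = (1-w_i)\,\omega^{\mathrm{OD}}_i(t) + w_i \, {\cal I}_i(\Delta_m(t)),
\]
where $\omega^{\mathrm{OD}}_i(t)$ is the opinion value that trader $i$'s chosen OD-style update rule would produce from its social interactions alone, and the weight $w_i \in [0,1]$ determines how strongly the observable market ground-truth overrides the social signal.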
\subsection{Co-evolutionary Dynamics in Markets of Adaptive PRZI Traders}
\label{sec:coevolve}
The continuously-variable strategy parameter $s$ in PRZI allows for studies of co-evolutionary dynamics in markets populated entirely by adaptive ZI agents. To do this, we need to set up markets populated entirely by PRZI traders where each individual trader can alter/adapt its $s$ value in response to market conditions, always trying to fine-tune it to generate higher trading profits.
Doing this then provides a ZI-style ABM/ACE test-bed for exploring issues in evolutionary economics -- the adaptive PRZI traders are manifestly engaged in a form of evolutionary game, each adapting their strategies over time to try to maximise their local measure of fitness, a topic explored extensively (although not always using exactly that terminology) since the dawn of economics, predating even \cite{vonneumann_morgenstern_1944}: see for example the reviews by \cite{friedman_1991_evolecon,blume_easley_1992,friedman_1998_evolecon,lo_2004,lo_2019,nelson_2020_evolecon}.
If we were to allow {\em only}\/ a single PRZI trader $i$ to adaptively vary its $s_i$ value, trying to find the best setting of $s_i$ relative to whatever distribution of $s_{j \neq i}$ values is present in the market (i.e., relative to the current mix of other strategies in the market) then we could say that $i$ is {\em evolving}\/ its value of $s_i$ to try to find an optimum, the most profitable setting for its strategy parameter, given the unchanging set of fixed strategies that it is pitted against in the market. But when {\em every}\/ PRZI trader in the market is simultaneously adapting its $s$-value, the system is {\em co-evolutionary}\/ because what is an optimal setting of the $s$ parameter for any one trader will likely depend on the $s$-values currently chosen by many or perhaps all of the other traders in the market. That is, the profitability of $i$ is dependent not only on its own strategy value $s_i$ but also on many or perhaps all other $s_{j \neq i}$ values in play at any particular time, and in principle all the strategy values will be altering all the time.
A primary motivation for studying such co-evolutionary markets with adaptive PRZI traders is the desire to move beyond prior studies of markets populated by adaptive automated traders in which the ``adaptation'' merely involves selecting between one of typically only two or three fixed strategies (as in, e.g., \cite{walsh_etal_2002,vytelingum_etal_2008,vach_2015}). The aim here is to create minimal model markets in which the space of possible ZI strategies is infinite, as a better approximation to the situation in real financial markets with high degrees of automated trading.
Prior researchers' concentration on markets in which the traders can choose one of only two or three fixed strategies can be traced back to the sequence of publications that launched the trading strategies MGD, GDX, and AA (i.e., \cite{tesauro_das_2001,tesauro_bredin_2002,vytelingum_etal_2008}), and the papers in which these strategies were shown to outperform human traders (i.e., \cite{das_etal_2001,deluca_cliff_2011_icaart,deluca_cliff_2011_ijcai}). All of these works relied on comparing the strategy of interest with a small number of other strategies in a series of carefully devised experiments. For example, GDX was introduced in \cite{tesauro_bredin_2002}, and was compared only to ZIP and GD.
In aiming for a fair and informative comparison, experimenters were immediately faced with issues in {\em design of experiments}\/ (see e.g.\ \cite{montgomery_2019}): how best to compare strategy $S_1$ with strategies $S_2$ and $S_3$ (and $S_4$ and $S_5$ and so on), given the finite time and compute-power available for simulation studies, and the need to control for the inherent noise in the simulated market systems.
Early comparative studies such as \cite{das_etal_2001} limited themselves to running experiments that studied the performance of a selection of trading strategies in three fixed experiment designs: {\em homogeneous}\/ (in which the market is populated entirely by traders of a single strategy-type); {\em one-in-many}\/ (OIM: in which a homogeneous market was altered so that all the traders were of strategy type $S_1$ except one, which was of type $S_2$); and {\em balanced-group} (BG: in which there was a 50:50 split of $S_1$ and $S_2$, balanced across buyers and sellers, with allocation of limit-prices set in such a way that for each trader of type $S_1$ with a limit price of $\lambda_1$ there would be a corresponding trader of type $S_2$ also assigned a limit price of $\lambda_1$). There were good reasons for this experiment design, and the results were informative, but they rested on only ever comparing two strategies $S_1$ and $S_2$ in markets with a total number of traders $N_T$ where the ratio of $S_1$:$S_2$ was one of either $N_T$:$0$ (i.e., homogeneous); or $(N_T-1)$:$1$ (i.e., OIM); or $\frac{N_T}{2}$:$\frac{N_T}{2}$ (i.e., BG). This approach left open the question of whether the performance witnessed in one of these three special cases generalised to other possible ratios, other relative proportions of the two strategies in the market.
A method by which that open question could be resolved was developed by \cite{walsh_etal_2002} who borrowed the technique of {\em replicator dynamics analysis} (RDA) from evolutionary game theory (see e.g.\ \cite{maynardsmith_1982}). In a typical RDA, the population of traders is initiated with some particular ratio of the $N_S$ strategies being compared, and the traders are allowed to interact in the market as per usual, but every now and again an individual trader will be selected via some stochastic process and will be allowed to {\em mutate} its current strategy $S_i$ to one of the other available strategies $S_{j\neq i}$ if that new strategy appears to be more profitable than $S_i$. In this way, given enough time, the market system can be started with any possible ratio of the $N_S$ strategies, and in principle it can evolve from that starting point through other system state-vectors (i.e., other ratios of the $N_S$ strategies) to any other possible ratio of those strategies. However in practice the nature of the evolutionary trajectories of the system, i.e. the paths traced by the time-series of state-vectors of the system, will be determined by the profitability of the various strategies that are in play: some points in the state-space (i.e., some particular ratios of $N_S$ strategies) will be unprofitable {\em repellors}, with the evolutionary system evolving away from them; others will be profitable {\em attractors}, with the system converging towards them; and if the system converges to a stable attractor then it's at an {\em equilibrium point}, or potentially on a repeating sequence of equilibrium points, i.e.\ a {\em limit cycle}. Walsh et al's 2002 paper showed the results of RDA for market systems in which $N_S=3$, comparing the trading strategies GD, SNPR, and ZIP, and visualised the evolutionary dynamics as plots of the two-dimensional {\em unit simplex}, an equilateral triangular plane with a three-variable barycentric coordinate frame.
Similar plots of the evolutionary dynamics on the 2D unit simplex were subsequently used by other authors when comparing trading strategies: see e.g.\ \cite{vytelingum_etal_2008,vach_2015}, and those authors also limited themselves to studies in which the traders in the market could switch between one of only $N_S=3$ different discrete strategies. And, in this strand of research, three-way comparisons seem then to have become the method of choice primarily because evolutionary trajectories through state-space, and the location and nature of any attractors and repellors in that space, are readily renderable on a 2D simplex when dealing with an $N_S=3$ system, but this rapidly gets very difficult, to the point of impracticability, as soon as $N_S>3$. Higher-dimensional simplices are mathematically well-defined, but very difficult to visualise: the four-variable simplex is a 3D volume, a tetrahedron; and more generally the $N_S$-variable simplex is an $(N_S-1)$-dimensional volume -- so if we wanted to study the evolutionary dynamics of a six-strategy system, we would need to find a way of usefully rendering projections of the 5-D simplex, or to find alternative methods of visualisation and analysis.
However, as first shown by \cite{vach_2015} and later confirmed in more detailed studies by \cite{snashall_cliff_2019,rollins_cliff_2020} and \cite{cliff_rollins_2020}, when the complete state-space of all possible ratios of discrete strategies is exhaustively explored, the dominance hierarchies indicated by the simple OIM/BG analyses are sometimes overturned. That is, if strategy $S_1$ outperformed strategy $S_2$ in both the OIM and the BG tests, that would usually be taken as evidence that $S_1$ generally outperformed $S_2$, that $S_1$ was ``dominant'' in that sense; but actually if markets were set up with some ratio of $S_1$:$S_2$ {\em other}\/ than the OIM or BG ratios, then in those markets $S_2$ might instead dominate $S_1$ -- that is, the direction of the dominance relationship between $S_1$ and $S_2$ can often depend on the ratio of $S_1$:$S_2$, their relative proportions of the overall population. Furthermore, while $S_1$ might dominate $S_2$ in two-strategy experiments (i.e., where $N_S=2$), plausibly $S_2$ would dominate $S_1$ in experiments where $N_S>2$: the indications are that as yet there is no single master-strategy that dominates all others in all situations; what strategy is best will depend on the specific circumstances.
By populating a model market entirely with adaptive PRZI traders we create a minimal test-bed for exploring issues of market efficiency and stability in situations where all traders are simultaneously co-evolving in an infinite continuous space of strategies. The state at time $t$ of such a market with $N_T$ traders in it can be characterised as an $N_T$-dimensional vector of $s$-values, denoted by $\vec{S}(t)$, identifying a single point in the $N_T$-dimensional hypercube that is the space of all possible system states, and that point will move over time as the traders each adapt their $s$ values. We can attempt to identify attractors and repellors in this hypercube, but we will need new visualisation techniques: we'll need to leave simplices behind.
There are many ways in which a PRZI trader could be made to dynamically adapt its $s$-value in response to market conditions. Here, in the spirit of minimalism associated with studies of ZI traders, I use a crude and simple stochastic hill-climbing algorithm, of the sort that might be found as an introductory illustrative straw-man sketch in the opening chapter of a book on machine learning. In keeping with the tradition of naming ZI/MI trading algorithms with short acronyms, I've named this the {\em {\bf PR}ZI {\bf S}tochastic {\bf H}ill-Climber}, or PRSH (pronounced ``pursh''). PRSH is defined in Section~\ref{sec:prsh_defn}, and then Section~\ref{sec:prsh_solo} presents some illustrative baseline results from experiments in which a {\em single} PRSH trader adapts in markets where all other traders play fixed strategies. After that, Section~\ref{sec:prsh_coev} shows results from experiments in which {\em all}\/ traders are PRSH, and hence in which the market is maximally co-evolutionary. The Python source-code for PRSH has been released as free-to-use open-source, in BSE (see \cite{cliff_2012_bse}), to enable other researchers to replicate and extend the preliminary results shown here.
\subsubsection{PRSH: a minimal PRZI Stochastic Hill-Climber}
\label{sec:prsh_defn}
At any time $t$, a PRSH trader $i$ has a set of strategies ${\cal S}_{i, t_m}$ that was created at time $t_m \leq t$ and that consists of $k\in {\mathbb Z}^{+}$ different PRZI strategy values $s_{0,t_m}$ to $s_{{k-1},t_m}$ (i.e., $|{\cal S}_{i,t_m}|=k>1$). Although $t$ is continuous in this model, alterations to ${\cal S}_{i,t}$ happen only occasionally. After an initialisation step in which the $k$ strategies are each assigned a value $s_{i,t_0} \in [-1,+1] \in {\mathbb R}$ via a {\em genesis} function ${\cal G}(.)$, PRSH enters into an infinite loop: let $t_m$ denote the time at which a new iteration of the loop is initiated; in each cycle of the loop a PRSH trader first {\em evaluates} each of its $k$ strategies in turn, trading with each of them as the sole exclusive strategy for at least a minimum period of time $\Delta_t$, such that all $k$ have been evaluated by time $t_n \geq t_m + k\Delta_t$; after that, it {\em ranks} the strategies by some performance or {\em fitness} metric $\cal F$, and copies the top-ranked strategy (the {\em elite}) at time $t_n$ into $s_{0,t_n}$; it then creates $k-1$ new `mutants' of $s_{0,t_n}$, via a stochastic {\em mutation} function ${\cal M}(s_{0,t_n})$, and this set of new strategies $s_{j,t_n:1\leq j \leq k-1}$ then replaces the old ${\cal S}_{i,t_m}$, becoming ${\cal S}_{i, t_n}$, at which point it loops back for the next iteration (and hence in that next iteration the value $t_m$ is what was $t_n$ in the prior iteration).
This definition leaves the experimenter free to decide certain key details when implementing PRSH:
\begin{itemize}
\item The choice of $k$ and of $\Delta_t$ together determine the speed of adaptation: PRSH will generate a new ${\cal S}_{i,t}$ at most once every $k\Delta_t$ seconds: i.e., $k\Delta_t$ is the minimum time-period between successive mutations, where each mutation is an {\em adaptive step} on the underlying {\em fitness landscape}. If you want a PRSH to make $N_{\text{steps}}$ adaptive steps on the fitness landscape in the course of an experiment, that experiment needs to run for $>k \Delta_t N_{\text{steps}}$ seconds.
\item Exactly how the set ${\cal S}_0$ is created at initialisation is left open. Naturally $s_{i,0} = \mathcal{U}( -1, +1) \in {\mathbb R}; i \in \{0,\ldots,k-1\}$ is the least constrained, but there may be circumstances where it is informative to use some other method, e.g. $s_{i,0} = c; \forall i$ for some constant $c$ such as zero or $\pm 1$.
\item The stochastic function ${\cal M}: [-1, +1] \in {\mathbb R} \mapsto [-1, +1] \in {\mathbb R} $ that creates new mutants of the elite $s_{0,t_n}$ is similarly unspecified. Treating each mutation as the addition of a random draw from a distribution with zero mean and nonzero variance makes intuitive sense, with the result then either truncated or wrapped via ring-arithmetic to ensure that the output lies in $[-1,+1]$. In the experiments shown below, ${\cal M}(s_{0,t_n}) = s_{0,t_n} + \mathcal{N}(0,\sigma)$ with $\sigma=0.01$. Plausibly a simulated-annealing approach could be introduced, steadily reducing $\sigma$ as time progresses, but that is not explored here.
\item For $k>2$, questions immediately arise over what is the best way of generating the $k-1$ mutants. For instance if $k=3$ we could arrange a set of two different ${\cal M}$ functions, one per mutant, such that $s_{1,t_n} < s_{0,t_n}$ and $s_{2,t_n} > s_{0,t_n}$ and hence PRSH is always sampling $s$-values at random magnitudes either side of the current elite strategy; and for $k=5$ we could similarly arrange the mutants such that two are generated either side of the elite, one a small random distance away, and the other a much larger random distance away; such decisions are left as an implementation issue. In the work reported here we simply generate $k-1$ mutants via $\cal M$ with no additional constraints.
\item Finally, each iteration of the loop requires deciding which of the $k$ strategies is the current elite, via the fitness function $\cal F$, and there are many possible ways to do that. The method used here was to rank the $k$ strategies at time $t_n$ by the amount of profit generated per unit of time, denoted by {\sc pps} (profit per second), such that the elite $s_{0,t_n}$ strategy has the highest {\sc pps}. To help prevent the hill-climber becoming trapped on local maxima, if the difference between the {\sc pps} scores of the two highest-ranked $s$-values in ${\cal S}$ is less than some threshold $\epsilon_s$, then one of the two is chosen at random to be the elite for that iteration of the loop (a minimal code sketch of the complete loop is given below, after this list).
\end{itemize}
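To make the definition and design decisions above concrete, here is a minimal Python sketch of the PRSH loop. Note that the market-based strategy evaluation (in BSE: trading with each strategy for at least $\Delta_t$ seconds and recording its {\sc pps}) is stubbed out here with a noisy toy fitness function, so this is an illustrative sketch rather than a drop-in implementation:
\begin{verbatim}
import random

K = 4             # number of strategies held, k = |S| > 1
SIGMA = 0.01      # mutation std-dev, as used in the experiments here
EPSILON_S = 1e-3  # pps threshold for the random elite tie-break

def genesis():
    # G(.): unconstrained initialisation, s ~ U(-1, +1)
    return [random.uniform(-1.0, +1.0) for _ in range(K)]

def mutate(elite):
    # M(.): Gaussian perturbation, truncated back into [-1, +1]
    return max(-1.0, min(+1.0, elite + random.gauss(0.0, SIGMA)))

def evaluate(s):
    # Stub for F: in BSE this would be the pps earned by trading with
    # strategy s for at least Delta_t seconds. Here: a noisy toy
    # landscape with a single maximum, purely for illustration.
    return -(s - 0.8) ** 2 + random.gauss(0.0, 0.001)

strategies = genesis()
for _ in range(1000):  # the infinite loop, truncated for this sketch
    ranked = sorted(((evaluate(s), s) for s in strategies), reverse=True)
    if ranked[0][0] - ranked[1][0] < EPSILON_S:
        elite = random.choice((ranked[0][1], ranked[1][1]))
    else:
        elite = ranked[0][1]
    strategies = [elite] + [mutate(elite) for _ in range(K - 1)]

print('final elite s = %+.3f' % strategies[0])
\end{verbatim}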
In essence, PRSH with $k$ strategies is a very primitive $k$-armed bandit, and all of the extensive multi-armed bandit (MAB) literature (such as \cite{gittins_etal_2011,myleswhite_2012,lattimore_szepesvari_2020}) is potentially of relevance here, but is set aside: again, the intention here is not to create the best adaptive-PRZI trader; instead it is merely to have a simple minimal adaptive-PRZI algorithm to act as a proof of concept and to enable an initial set of exploratory and illustrative experiments involving populations of adaptive-PRZI traders: PRSH does that job.
\subsubsection{Adaptive Evolution of Strategy in a Single PRSH}
\label{sec:prsh_solo}
Before studying co-evolving populations of PRSH traders, it is informative to explore situations in which there is only a single PRSH trader in the market, and all other traders are one or more of the three ZI strategies that are spanned by PRSH/PRZI, i.e. GVWY, SHVR, and ZIC. In such situations we can talk of how the PRSH trader's strategy evolves over time, but not of co-evolution because the rest of the traders in the market are non-adaptive. A single-PRSH-trader market is sufficiently simple that it eases the introduction of concepts that become significantly more complex in fully co-evolutionary markets.
First, we can visualise the fitness landscape for a single PRSH trader by setting up a market in which, purely for the sake of generating appropriate visualisation data, we give the PRSH a large $k$, and initialize ${\cal S}_0$ to a set of regularly-spaced $s_{i,0}$ values across the range $[-1,+1]$, and then plot the {\sc pps} fitness of each strategy in the first evaluation. Specifically, set
$ {\cal S}_0 = \{ s_{i,0}: s_{i,0}=\frac{2i}{k-1}-1 ; i \in \{0, \ldots, k-1\}\} $
and let $\Delta_S=2/(k-1)$ be the step-size in our mapping of the fitness landscape.
So for example with $k=21$ we have $\Delta_S=0.1$ and ${\cal S}_0=\{-1,-0.9,-0.8,\ldots,+0.9,+1.0\}$.
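In code, this regularly-spaced initialisation is a one-line comprehension (a minimal sketch; the variable names are illustrative):
\begin{verbatim}
k = 21
S0 = [2 * i / (k - 1) - 1 for i in range(k)]  # [-1.0, -0.9, ..., +0.9, +1.0]
delta_S = 2 / (k - 1)                         # = 0.1 for k = 21
\end{verbatim}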
For brevity, and without loss of generality, the discussion that follows in the rest of this section concentrates only on the case of a single PRSH seller in a market that is otherwise entirely populated by traders running nonadaptive strategies. The arguments that are made here for a single PRSH seller could just as easily be made for a single PRSH buyer, but to do both here would be overkill.
Figure~\ref{fig:SHVRlandscape} shows fitness landscapes plotted at $\Delta_S=0.05$ for a single PRSH seller when all other traders in the market are either (from top to bottom) SHVR, ZIC, or GVWY: i.e., a progression from all other traders in the market being maximally relaxed (SHVR) through to maximally urgent (GVWY). In all experiments reported in this paper, all buyers had the same limit price $\lambda_b$ and all sellers had the same limit price $\lambda_s < \lambda_b$, i.e. the supply and demand schedules were `box' style, with perfect elasticity of supply and of demand.\footnote{Specifically, in all the experiments reported here, $\lambda_s=60$ for all sellers and $\lambda_b=100$
for all buyers; and the number of buyers and sellers are the same (i.e., $N_{\text{Buy}}=N_{\text{Sell}}$), so there is no clearly defined equilibrium price in these market sessions: transactions can be expected to take place at any price in the range $[\lambda_s, \lambda_b]\in{\mathbb Z}$, and in principle all traders can expect to find a counterparty to transact with -- i.e., there are no extramarginal traders.}
When generating the landscapes for SHVR and ZIC the number of buyers ($N_{\text Buy}$) and the number of sellers ($N_{\text Sell}$) were each 30, i.e. $N_T=60$, but in the landscape for GVWY results from $N_T=60$ are overlaid with additional results from {\sc iid} repetitions of the same experiment where $N_T=30$ and where $N_T=120$ (in each case $N_{\text Buy} = N_{\text Sell} = N_T/2$), to demonstrate that the overall shape of the fitness landscape varies very little with respect to the $N_T=60$ case when the number of traders is halved or doubled.
As can be seen from Figure~\ref{fig:SHVRlandscape}, in the single-PRSH case the fittest (most profitable) strategies -- i.e., the global maxima -- are all at the high end of the range, at or close to $s=+1$, but in each landscape there is also a local maximum at/near $s=-1$.
\begin{figure}
\begin{center}
\includegraphics[width=0.7\linewidth]{LScape_1PrshSell_SHVR.png}
\includegraphics[width=0.7\linewidth]{LScape_1PrshSell_ZIC.png}
\includegraphics[width=0.7\linewidth]{Lscape_1PrshSell_GVWY_3060120.png}
\end{center}
\caption{Fitness landscapes for a single PRSH seller in a market where all other traders are homogeneously playing the same fixed strategy: horizontal axis is PRSH strategy value $s$; vertical axis is profit per second ({\sc pps}) recorded by the single PRSH trader using $s$ as its strategy. Strategy evaluation time $\Delta_t$ is 7200s. Data points are plotted at strategy-steps of $\Delta_S=0.01$. Upper graph is when all other traders are playing the fixed SHVR strategy; middle graph is when all other traders are ZIC; lower graph is when all other traders are GVWY. In the lower graph only, data is shown for {\sc iid} repetitions of the experiment with the number of traders in the market (denoted by $N_T$) being set to 30 (data-points marked by open triangles), 60 (marked by open circles), and 120 (marked by plus-symbols).}
\label{fig:SHVRlandscape}
\end{figure}
The GVWY fitness landscape for a single PRSH seller shown at the bottom of Figure~\ref{fig:SHVRlandscape} clearly has a global maximum at $s \approx 0.8$. If the PRSH adaptation mechanism is operating as intended, when the single PRSH seller is initialised with $s=0$ and allowed to adapt for sufficiently long then its $s$ value should converge to roughly $0.8$, and then hold at that value. To demonstrate this, Figure~\ref{fig:gvwy_evolve_raw} shows the PRSH trader's $s$ value, plotted once per hour, in a simulation of 30 continuous days of 24-hour trading: as can be seen, from its initial value of zero there is a steady rise in $s$ over the first $\approx$750,000 seconds of trading (i.e., roughly the first 8.5 days), after which the system stabilises to $s$-values that noisily fluctuate around the $0.85$ level. To smooth out some of the noise, define $\hat{s}$ as the 12-hour simple moving average of the raw hourly $s$ data: Figure~\ref{fig:gvwy_evolve_smooth} shows the $\hat{s}$ line for the raw hourly data shown in Figure~\ref{fig:gvwy_evolve_raw}, along with $\hat{s}$ lines from a further four {\sc iid} repetitions of the same experiment. For the discussion that follows, let's call trader $i$'s $\hat{s}_i$ value at the end of an experiment the {\em terminal strategy} for $i$ in that experiment, and define the set $\widehat{S}_T$ as the set of terminal strategies from a population of PRSH traders that have co-evolved in a particular market environment. For the current discussion of the merely evolutionary (i.e., not co-evolutionary) adaptation of single PRSH traders, we can fill $\widehat{S}_T$ with the set of terminal strategy values arising from $N_R$ {\sc iid} repetitions of a particular experiment: in Figure~\ref{fig:gvwy_evolve_smooth}, we have $N_R=5$ and $\widehat{S}_T=\{ 0.86, 0.87, 0.88, 0.88, 0.93 \}$. As $N_R$ takes on larger values, it is natural to summarise values in the terminal strategy set $\widehat{S}_T$ as a frequency histogram or kernel density estimate, and from there to note whether the distribution of values in the terminal strategy set is unimodal or multimodal, either by eyeballing the distribution or density estimate, or by applying a test of modality such as those proposed by \cite{hartigan_hartigan_1985} or \cite{chasani_likas_2022}.
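For reference, the smoothing used here is nothing more elaborate than a windowed mean over the hourly $s$ samples; a minimal sketch, assuming the $s$-series is held in a NumPy array:
\begin{verbatim}
import numpy as np

def s_hat(s_hourly, window=12):
    # 12-hour simple moving average of the hourly s values
    kernel = np.ones(window) / window
    return np.convolve(s_hourly, kernel, mode="valid")
\end{verbatim}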
\begin{figure}
\begin{center}
\includegraphics[width=0.8\linewidth]{gvwy_evolve_raw.png}
\end{center}
\caption{Hourly strategy value over 30 days of round-the-clock trading for a single PRSH seller in a market populated with 29 GVWY sellers and 30 GVWY buyers: horizontal axis is time in seconds; vertical axis is the PRSH trader's strategy value $s$, which is initialized at the start of the experiment to $s=0$, i.e.\ to the ZIC strategy. The $s$-value evolves steadily toward a range of values close to the global optimum strategy identified in the bottom fitness-landscape plot of Figure~\ref{fig:SHVRlandscape}, and then stabilises to that range of values for the remainder of the experiment.
}
\label{fig:gvwy_evolve_raw}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.9\linewidth]{gvwy_evolve_smooth.png}
\end{center}
\caption{Smoothed PRSH strategy values from multiple 30-day experiments, each with a single PRSH seller in a market populated by 29 GVWY sellers and 30 GVWY buyers: horizontal axis is time in seconds; vertical axis is 12-hour moving-average strategy value (denoted by $\hat{s}$). Black line is the $\hat{s}$ trace for the raw hourly $s$-data shown in Figure~\ref{fig:gvwy_evolve_raw}; the four grey lines are the $\hat{s}$ traces from a further four {\sc iid} repetitions of the same experiment. After $\approx$1,000,000 seconds (roughly 11 days) of trading, all five $\hat{s}$ traces have evolved to a steady state close to the global optimum strategy identified in the bottom fitness-landscape plot of Figure~\ref{fig:SHVRlandscape}, and remain clustered around that value for the remainder of the experiment. The set of final $\hat{s}$ values recorded at the end of each experiment is referred to as the {\em terminal strategy set}, denoted by $\widehat{S}_T$. Here, $\widehat{S}_T=\{ 0.86, 0.87, 0.88, 0.88, 0.93 \}$: see text for further discussion.
}
\label{fig:gvwy_evolve_smooth}
\end{figure}
\subsubsection{Co-Evolution of Strategies in All-PRSH Markets}
\label{sec:prsh_coev}
As a first illustration of the dynamics of a fully co-evolutionary ZI market system, Figure~\ref{fig:prsh_coev_0init_s-hat} shows the $\widehat{s_i}$ values over time for a 30-day experiment in which the market is populated by 30 PRSH sellers and 30 PRSH buyers, all of which are initialized to have $s_{i,0}=0$: i.e.\ an experiment directly comparable to the results from the zero-initialized single-PRSH system explored in the previous section, except that here the fitness landscape for any one trader will depend on the distribution of strategy-values for all the other traders in the market, and will vary over time, in principle altering each time any one PRSH trader changes its strategy to a new value. Again, a terminal strategy set $\widehat{S}_T$ can be assembled from the final $\widehat{s_i}$ values of the individual traders that co-evolved against each other in the single market experiment, and the corresponding terminal strategy set distribution is again unimodal: in this experiment, all sellers converge on strategy-values in the range $\approx+0.55$ to $\approx+0.85$; multiple {\sc iid} repetitions of this market experiment generate much the same results.
\begin{figure}
\begin{center}
\includegraphics[width=0.8\linewidth]{Strats_30PrshSellers_init_000_000.png}
\end{center}
\caption{Smoothed ($\widehat{s_{i,t}}$) strategy values for each of 30 PRSH sellers in a market experiment lasting for 30 days of continuous trading, where all traders are initialized to have $s_{i,0}=0$. Horizontal axis is time in seconds; vertical axis is the 12-hour moving average strategy $\widehat{s_{i,t}}$ of individual traders. The co-evolutionary dynamic is biphasic: in the initial ``adaptive transient'' phase over the first $\approx12$ days (i.e., $\approx$1,000,000 seconds) the system settles to a unimodal steady-state centered on $s_i\approx0.7$; in the steady-state phase the strategy values of individual traders rise and fall but the overall distribution does not vary significantly.
}
\label{fig:prsh_coev_0init_s-hat}
\end{figure}
Further investigation reveals that the unimodal distribution of terminal strategies in experiments like the one illustrated in Figure~\ref{fig:prsh_coev_0init_s-hat} is an artefact of the decision to initialize all traders with $s_{i,0}=0$: if instead we set $s_{i,0}={\cal U}(-1.0,+1.0)$ so that the initial set of strategy values in the population of traders is uniformly distributed over the entire range of possible strategies, we see qualitatively different results:
for both the buyers and the sellers the distribution of terminal strategy values is then multimodal.
The development of multimodal terminal strategy distributions is not the only change resulting from switching the initial state from $s_{\forall i}=0.0$ to $s_{\forall i}={\cal U}(-1.0,+1.0)$. In Figure~\ref{fig:prsh_coev_0init_s-hat}, over the 30 simulated days, the dynamics of the system's co-evolution through strategy space are biphasic: an initial {\em adaptive transient}\/ phase of roughly 12 days in which all traders increased their $s$ values from zero to $\approx0.7$; followed by a steady-state phase lasting for the remainder of the experiment where the population of $s$ values wandered randomly around the $0.7$ level. In contrast, when $s_{\forall i}={\cal U}(-1.0,+1.0)$ the system shows no such long-term stability over the same time-period, as is illustrated in Figure~\ref{fig:prsh_coev_M1P1init_s-hat_30} and explained in the caption to that figure: even after the system's distribution of strategies has been relatively stable for a period of nine days, an equilibrium or stasis in which the traders have each executed roughly 150,000 transactions, chance co-evolutionary interactions can result in the stasis ending and the system entering a fresh period in which the strategies are in flux.
\begin{figure}
\begin{center}
\includegraphics[width=0.99\linewidth]{prsh_coev_M1P1init_s-hat.png}
\end{center}
\caption{Smoothed ($\widehat{s_{i,t}}$) strategy values for each of 30 PRSH buyers in a market experiment lasting for 30 days of continuous trading, simulated at 60Hz time-resolution, where all traders are initialized to have $s_{i,0}={\cal U}(-1.0,+1.0)$. Horizontal axis is time $t$, with a vertical gridline every 5 days; vertical axis is the 12-hour moving average strategy $\widehat{s_{i,t}}$ of individual traders, with horizontal gridlines at $s$ intervals of 0.2: for $t\geq 0.5$~days (i.e., 12 hours) the trader's average strategy value over the preceding 12 hours is plotted; for $t<0.5$~days the trader's average strategy since the start of the experiment is plotted. By roughly Day~13 the system has settled into a state that then persists as a temporary equilibrium or stasis until roughly Day~22: during the equilibrium phase the modes are at roughly $s=-0.9$ $(n=6)$, $s=-0.1$ $(n=8)$, $s=+0.3$ $(n=3)$, and $s=+0.7$ $(n=13)$. After that, the equilibrium ``punctuates'', entering a new phase where first the mode at $-0.9$ loses its stability, then the mode at $+0.3$ seems to merge up into the mode that was at $+0.7$ but which now seems to be generally heading lower, and then the mode at $-0.1$ seems to dissipate in various directions. In the nine-day stasis/equilibrium, each trader would execute approximately 150,000 transactions. Clearly the dynamics have not reached a stable state after 30 days of trading, and longer simulations should be explored.
}
\label{fig:prsh_coev_M1P1init_s-hat_30}
\end{figure}
To illustrate the longer-term dynamics of this system, Figure~\ref{fig:prsh_coev_M1P1init_s-hat_300} shows buyer-strategy co-evolutionary time series similar to that illustrated in Figure~\ref{fig:prsh_coev_M1P1init_s-hat_30} from eight {\sc iid} repetitions of an experiment that lasted 10 times longer, i.e. 300 simulated days. As is clear from the figure, although stable modes do occur in each experiment, individual traders' strategy-values will sometimes transition from one mode to another, with no clear pattern or predictability to the timing and/or direction of these transitions. In particular, the upper four graphs in Figure~\ref{fig:prsh_coev_M1P1init_s-hat_300} appear to show that, after an initial adaptive transient phase, the population of traders settles into a steady-state bimodal distribution; but the lower four graphs show that the system does not always quickly converge to such a steady-state distribution and that co-evolutionary interactions can result in major changes in the strategy distributions (e.g., a trader switching from one mode to another) even after 200 or more days of continuous trading, a period over which each trader would execute roughly 3,500,000 transactions.
\begin{figure}
\begin{center}
\includegraphics[width=0.49\linewidth]{Buy_Strat_300_C7_1.png}
\includegraphics[width=0.49\linewidth]{Buy_Strat_300_C7_2.png}
\includegraphics[width=0.49\linewidth]{Buy_Strat_300_C6_4.png}
\includegraphics[width=0.49\linewidth]{Buy_Strat_300_C6_2.png}
\includegraphics[width=0.49\linewidth]{Buy_Strat_300_C7_3.png}
\includegraphics[width=0.49\linewidth]{Buy_Strat_300_C7_4.png}
\includegraphics[width=0.49\linewidth]{Buy_Strat_300_C6_1.png}
\includegraphics[width=0.49\linewidth]{Buy_Strat_300_C6_3.png}
\end{center}
\caption{Results from eight {\sc iid} experiments, each otherwise the same as that illustrated in Figure~\ref{fig:prsh_coev_M1P1init_s-hat_30}, but continued for 300 days. Data lines show smoothed ($\widehat{s_{i,t}}$) strategy values for each of 30 PRSH buyers in a market experiment over 300 days of continuous trading, simulated at 60Hz time-resolution, where all traders are initialized to have $s_{i,0}={\cal U}(-1.0,+1.0)$. Horizontal axis is time $t$, with a vertical gridline every 50 days; vertical axis is the 7-day moving average strategy $\widehat{s_{i,t}}$ of individual traders, with horizontal gridlines at $s$ intervals of 0.2. See text for further discussion.
}
\label{fig:prsh_coev_M1P1init_s-hat_300}
\end{figure}
Thus far, to save space, only the co-evolutionary trajectories of the strategies in the population of buyers have been shown. Naturally, each of the eight buyer-strategy time-series graphs shown in Figure~\ref{fig:prsh_coev_M1P1init_s-hat_300} has a corresponding seller-strategy
time-series graph, but in this specific set of experiments there was much less variation in the outcomes for the seller population: rather than showing all eight, Figure~\ref{fig:sell_strat_300} shows one representative example; qualitatively, the other seven are all essentially identical to this.
\begin{figure}
\begin{center}
\includegraphics[width=0.75\linewidth]{Sell_Strat_300_C7_4.png}
\end{center}
\caption{Time-series of co-evolving seller strategies from one of the eight experiments for which the buyer strategies were illustrated in Figure~\ref{fig:prsh_coev_M1P1init_s-hat_300}: qualitatively, all eight experiments have time series essentially the same as this one, so only the one is illustrated here. The vast majority of sellers rapidly shift their strategy-values to around $+0.9$, but in any one experiment a small number of sellers instead settle on strategy values close to $-1.0$. In all cases, these two modes are stable for the remainder of the experiment.
}
\label{fig:sell_strat_300}
\end{figure}
The co-evolutionary dynamics of strategy values in these model markets are not the only factor of interest: an equally significant concern is the efficiency of the markets populated by traders with co-evolving strategies. This is illustrated in Figure~\ref{fig:prsh_coev_M1P1init_Prof_300}, which shows, for each of the eight 300-day experiments illustrated in Figure~\ref{fig:prsh_coev_M1P1init_s-hat_300}, the total surplus/profit extracted by the traders. Data-lines show the collective total profit extracted by the 30 buyers (denoted here by $\pi_B$), the collective total profit extracted by the 30 sellers (denoted here by $\pi_S$), and the total profit extracted by the entire set of 60 traders (denoted here by $\pi_T = \pi_B + \pi_S$). In each case, after the initial adaptive transient over the first 50 days or less, the buyers' and sellers' profit levels stabilise to an approximately constant-sum relationship, where if $\pi_B$ goes up then $\pi_S$ goes down, and {\em vice versa}. The sum $\pi_T$ is notably unvarying within any one experiment, but the value that $\pi_T$ settles on varies across experiments: for example, the experiments at upper-left and lower-left both have $\pi_T \approx 93$--$95$, whereas the upper-right and the left-hand experiment in the third row from the top never see $\pi_T$ go above 90. The underlying reason for this variation in total profit extracted is illuminated in Figure~\ref{fig:run_300_relaxed_corr}, which shows the inverse relationship between the number of traders with `relaxed' strategy values ($s_i<0$) in the terminal strategy set and the total profit extracted: the more relaxed traders there are present in the market, the less profit is extracted; despite their constant striving to improve profitability, traders with strategy values in the relaxed mode seem to be stuck on a local maximum in the fitness landscape.
\begin{figure}[hp]
\begin{center}
\includegraphics[width=0.45\linewidth]{Buy_Prof_300_C7_1.png}
\includegraphics[width=0.45\linewidth]{Buy_Prof_300_C7_2.png}
\includegraphics[width=0.45\linewidth]{Buy_Prof_300_C6_4.png}
\includegraphics[width=0.45\linewidth]{Buy_Prof_300_C6_2.png}
\includegraphics[width=0.45\linewidth]{Buy_Prof_300_C7_3.png}
\includegraphics[width=0.45\linewidth]{Buy_Prof_300_C7_4.png}
\includegraphics[width=0.45\linewidth]{Buy_Prof_300_C6_1.png}
\includegraphics[width=0.45\linewidth]{Buy_Prof_300_C6_3.png}
\end{center}
\caption{Total extraction of surplus/profit for the eight 300-day experiments illustrated in Figure~\ref{fig:prsh_coev_M1P1init_s-hat_300}: horizontal axis is time in days; vertical axis is total profit extracted by a group of traders. Data-lines show collective total profit extracted by the 30 buyers, collective total profit extracted by the 30 sellers, and total profit extracted by the entire set of 60 traders. In each case, after the initial adaptive transient over the first 50 days or less, the buyers' and sellers' profit levels stabilise to an approximately constant-sum relationship, where if buyers' profits go up then sellers' profits go down, and {\em vice versa}. The sum that the two populations' profit-levels add up to is notably unvarying within any one experiment, but varies across experiments: for example, the experiments at upper-left and lower-left both have the sum consistently around 93--95, whereas the upper-right and the left-hand experiment in the third row from the top both never see their sum go above 90. See text for further discussion.
}
\label{fig:prsh_coev_M1P1init_Prof_300}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.8\linewidth]{Run300_relaxed_corr.png}
\end{center}
\caption{Inverse relationship between the percentage of traders in the market playing relaxed strategies (i.e., $s_i<0$) and total profit extracted by all the traders in the market: horizontal axis is percentage of traders with $s_i<0$; vertical axis is profit extracted over the final 50 days' trading (i.e., days 250-300) in the experiment. Markers show the arithmetic mean over that period, with error bars at $\pm$ one standard deviation, for the eight experiments illustrated in Figure~\ref{fig:prsh_coev_M1P1init_s-hat_300}. The dashed line shows a linear-regression fit; $R^2\approx 70\%$.
}
\label{fig:run_300_relaxed_corr}
\end{figure}
Although the time-series of co-evolving strategy values and histograms of strategy frequency distributions have served the purposes of this discussion thus far, there is a need for more sophisticated visualisation and analysis techniques. Our very first studies of co-evolutionary dynamics with a preliminary $k=2$ PRSH-like system, reported in \cite{alexandrov_cliff_figuero_2022}, explored the prospects of producing phase portraits, graphical characterisations of the global dynamics of the system, for market sessions in which there are only two evolving traders, each adjusting their $s$-values with the intent of improving their profitability, while all other traders play fixed strategies: in such a two-PRSH market the phase-space of interest is two-dimensional, just the two evolving strategies, and hence very easy to plot as a 2D graphic. But for the all-PRSH $N_T=60$ market sessions studied here, we need a useful way of plotting the trajectory of the dynamical system through its 60-dimensional real-valued phase-space: that is, the strategy vector $\vec{S}(t) \in [-1.0,+1.0]^{N_T} \in {\mathbb R}^{N_T}$.
Thankfully, in recent decades researchers in physics have developed a set of visualisation and analysis tools and techniques for such high-dimensional real-valued dynamical systems: the dynamics of such systems can be characterised visually, as a square array of pixels, via the creation of a {\em recurrence plot} (RP), which will often display macro-scale features that are obvious to the human eye; and then straightforward image-processing techniques can be used to generate quantitative statistics that summarise the nature of the RP and the features within it, an approach known as {\em Recurrence Quantification Analysis} (RQA). For readers unfamiliar with RPs and RQA, Appendix~\ref{appendix:RP} presents a brief introduction.
Figure~\ref{fig:prsh_RP168x168} shows an RP for a single $N_T=60$ all-PRSH market session lasting for 7 days of continuous round-the-clock trading, with the strategy-vector $\vec{S}(t)$ recorded hourly, resulting in a $168\times168$-pixel plot (i.e., $7\times24=168$) where the time-difference between rows and columns is one hour. In all the RPs plotted here, $\vec{S}(t)$ is considered a recurrence of the state $\vec{S}(t-\Delta_t)$ when $|\vec{S}(t)-\vec{S}(t-\Delta_t)|<\epsilon$, using $\epsilon = \sqrt{60\times0.05^2} \approx 0.387$: the maximum distance possible in this system (i.e., the {\em diameter of the phase-space} in the terminology of the physics literature) is $\sqrt{60\times 2^2}\approx15.492$
(e.g., if $[\vec{S}(t)]_i = +1.0; \forall i $ and $[\vec{S}(t-\Delta_t)]_i = -1.0; \forall i $), so the value of $\epsilon$ used here is $\approx$2.5\% of the maximum distance.
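Computing the RP itself is straightforward; the sketch below (a minimal sketch, with random stand-in data in place of the recorded hourly strategy-vectors) builds the boolean recurrence matrix for the $\epsilon$ threshold used here:
\begin{verbatim}
import numpy as np

def recurrence_matrix(S, eps):
    # S has shape (T, N_T): one strategy-vector per hourly sample.
    # Entry [t1, t2] is True when |S(t1) - S(t2)| < eps (Euclidean norm).
    dists = np.linalg.norm(S[:, None, :] - S[None, :, :], axis=-1)
    return dists < eps

hours, n_traders = 168, 60
S = np.random.uniform(-1.0, +1.0, size=(hours, n_traders))  # stand-in data
eps = np.sqrt(n_traders * 0.05 ** 2)   # ~0.387, as used in the text
rp = recurrence_matrix(S, eps)
\end{verbatim}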
\begin{figure}
\begin{center}
\includegraphics[trim=80 30 80 40,clip,width=0.8\linewidth]{rp_plot_168x168.png}
\end{center}
\caption{
Example recurrence plot (RP) for a PRSH co-evolutionary market session: in this experiment (as with the experiments illustrated in Figures~\ref{fig:prsh_coev_M1P1init_s-hat_300} and~\ref{fig:prsh_coev_M1P1init_Prof_300}) there are 30 PRSH buyers and 30 PRSH sellers (i.e., $N_T=60$) each co-evolving their individual strategy $s$-values, so the collective state of the system of co-evolving strategy values at time $t$ is a strategy-vector $\vec{S}(t)\in[-1.0, +1.0]^{60} \in {\mathbb R}^{60}$. The traders interact continuously, simulated at 60Hz, trading around the clock 24 hours per day, but the $\vec{S}$ strategy-vector is recorded only once every hour. This RP shows the first 7 days of the market session (i.e., $7\times24=168$ hours): numeric labels on the axes are hour-number. The state $\vec{S}(t)$ is considered a recurrence of the state $\vec{S}(t-\Delta_t)$ when $|\vec{S}(t)-\vec{S}(t-\Delta_t)|<\epsilon$, using $\epsilon = \sqrt{60\times0.05^2} \approx 0.387$ (here the diameter of the phase-space is $\sqrt{60\times 2^2}\approx15.492$, so the value of $\epsilon$ used here is $\approx$2.5\% of that diameter). See text for further discussion.
}
\label{fig:prsh_RP168x168}
\end{figure}
As is clear from visual inspection of the RP in Fig.~\ref{fig:prsh_RP168x168}, there are almost always recurrences to the left and below the diagonal line of identity (LOI), and these recurrences are typically short-lasting, being roughly 10 pixels or less (i.e., 10 hours or less) in the first 50 hours of the session, and then lengthening as the session continues, such that by the end of the session recurrences are recorded to states visited as much as roughly 48 hours previously. A commonly-used RQA summary statistic for this kind of observation is the {\em trapping time} (denoted by $TT$: see Appendix~\ref{appendix:RP} for the definition): for the RP in Fig.~\ref{fig:prsh_RP168x168}, the overall $TT\approx 7.25$ hours: i.e., the system typically spends 7.25 hours within $\epsilon$ distance of any particular $\vec{S}(t)$, before co-evolution drives it away from that area of phase-space; and, given the large unshaded areas in the RP, we can see that once it co-evolves away from a particular state after a few hours, it never returns to that state (i.e., no further recurrences are recorded), indicating {\em acyclic} evolution -- i.e., continuous ``progress'' of the co-evolutionary dynamic.
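As an indication of how such a statistic can be extracted from the recurrence matrix, the following sketch computes $TT$ under one common RQA convention (the mean length of vertical line structures of length at least two; the precise convention in Appendix~\ref{appendix:RP} may differ in details such as the minimum line length):
\begin{verbatim}
import numpy as np

def trapping_time(rp, lmin=2):
    # rp: boolean (T, T) recurrence matrix; returns the mean length
    # of vertical runs of recurrent points with length >= lmin.
    lengths = []
    T = rp.shape[0]
    for j in range(T):
        run = 0
        for i in range(T):
            if rp[i, j]:
                run += 1
            else:
                if run >= lmin:
                    lengths.append(run)
                run = 0
        if run >= lmin:
            lengths.append(run)
    return float(np.mean(lengths)) if lengths else 0.0
\end{verbatim}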
Figure~\ref{fig:prsh_long_RP_plots} shows a set of six RPs, from six {\sc iid} market sessions with all parameters set to the same values as used in the experiments illustrated in Figures~\ref{fig:prsh_coev_M1P1init_s-hat_300} and~\ref{fig:prsh_coev_M1P1init_Prof_300}, except these six experiments have each been left to run for 1,500 days. As before, $\vec{S}(t)$ data is recorded hourly, and the traders interact second-by-second simulated at 60Hz, trading around the clock, 24hrs/day; and hence these RPs in their full incarnation are $36000\times36000$ pixels (i.e., $1500\times24=36000$), which of necessity are then downsampled for printed reproduction here. As is discussed in the caption to Figure~\ref{fig:prsh_long_RP_plots}, five of the six sessions show clear evidence of the co-evolutionary process being {\em cyclic}, in the sense that the system is continuously co-evolving, taking a very large sequence of adaptive steps in the 60-dimensional strategy-space, but eventually it returns to points in strategy space that it previously occupied at an earlier time in the session. And, surprisingly, the path-length of these cyclic transits can be extremely long: more than 1,000 days in one instance. And remember that each trading day in the session is simulated at 24hrs/day, at 60 frames per second resolution (i.e., the simulation timestep is 0.0167s), so the 1,000-day cycle occurred after 5.18Bn timesteps, during which more than a billion transactions will probably have taken place. Simulations run for shorter durations would simply not have revealed these long-term cycles.
\begin{figure}[hp]
\begin{center}
\includegraphics[width=0.49\linewidth]{RP_slide_C1_sq.png}
\includegraphics[width=0.49\linewidth]{RP_slide_C6_sq_annotated.png}
\includegraphics[width=0.49\linewidth]{RP_slide_C2_sq.png}
\includegraphics[width=0.49\linewidth]{RP_slide_C4_sq_annotated.png}
\includegraphics[width=0.49\linewidth]{RP_slide_C3_sq.png}
\includegraphics[width=0.49\linewidth]{RP_slide_C5_sq_annotated.png}
\end{center}
\caption{Recurrence Plots (RPs) for six {\sc iid} market sessions, each running for $\approx$1,500 simulated days of continuous (24hr/day) trading, each simulated at 60Hz, and each involving multiple transactions per second, i.e.\ involving on the order of one billion transactions per 1,500-day session. Simulating each market session took approximately 280 hours of wall-clock continuous CPU time on a 16GB Apple Mac Mini (M1 Silicon, 2020), with data frames recorded once per simulated hour, yielding complete RPs that are 36,000$\times$36,000 pixels. For each RP, the numeric labels on both axes show the number of days elapsed. The RP at upper-left shows the population of traders drifting in one region of strategy space over days $\approx100$ to $\approx200$, then another region over days $\approx200$ to $\approx700$, before evolving into a new region that holds from days $\approx900$ to $\approx1300$, and then continuing to evolve along a transient into previously unvisited areas of strategy space: this can reasonably be described as {\em acyclic} evolution. However in all five of the other sessions, there are clear recurrences, i.e.\ evidence of {\em cyclic} evolution: in the plot at mid-left, the region of strategy-space visited around days $\approx300$ to $\approx500$ is revisited in days $\approx1200$ to $\approx1500$; in the plot at lower-left, the region of strategy-space first visited over days $\approx10$ to $\approx 100$ is revisited sporadically around roughly days 400--600, 700--900, and 1000--1300, as evidenced by the corresponding thin ``trail of dust'' in the RP; for the three plots in the right-hand column, regions of strategy-space first visited in the opening 100--300 days are returned to after many hundreds of days spent in other regions: the recurrences have been highlighted with freehand-drawn ellipses. The lower-right plot is notable in that it shows a recurrence after a transit of more than 1,000 days of co-evolution.
}
\label{fig:prsh_long_RP_plots}
\end{figure}
\section{Discussion and Conclusion}
The results presented here are the first from market simulations populated wholly by co-evolving parameterised-response zero-intelligence (PRZI) traders using stochastic hill-climbing as their strategy optimization process (i.e., PRSH), and there are three notable points to highlight:
\begin{itemize}
\item
Despite interacting with each other at sub-second time-resolution, such minimally simple adaptive trader models can exhibit surprisingly rich dynamics, over extremely long timescales, with sequences of punctuated equilibria and with the system's co-evolutionary dynamic cycling back to previously-visited points in its phase-space over periods measured in hundreds of days of simulated trading, in which millions of transactions occur.
\item
The stable attractors in strategy-space are reasonably often neither at the extreme points of the range (i.e., $s_i=\pm1.0$) nor at the mid-point ($s_i=0.0$) but instead are at `hybrid' points along the strategy-space, resulting in trading behaviors (quote-price distributions) with no precedents in the prior ZI-trader literature.
\item
Even though each trading entity is forever engaged in attempting to improve its profitability or efficiency, forever making local adjustments to its own trading strategy, system-level inefficiencies can lock in and persist, apparently indefinitely, because some number of the entities stay trapped on local maxima in the fitness landscape.
\end{itemize}
There is a wide range of factors that could be explored in further work. For example: the particular form of adaptation used here, the simple stochastic hill-climber of PRSH, will affect the co-evolutionary dynamics; i.e., it might be more likely to result in traders being stuck on local maxima in the fitness landscape, in comparison to other more sophisticated adaptation/optimisation techniques.\footnote{See \cite{cliff_2022_prde} for recent results which show that switching to a different optimization technique does indeed result in fewer traders being trapped on local maxima.} Also, the nature of the supply and demand curves in the market can be expected to affect the dynamics: in the experiments reported here, there was an obvious asymmetry in response, with the vast majority of the population of sellers rapidly co-evolving to be super-urgent (as shown in Figure~\ref{fig:sell_strat_300}) and the buyers then co-evolving toward multi-modal distributions of mainly relaxed strategies in response; with a different supply/demand schedule, this asymmetry could plausibly be reversed.
One compelling avenue for further research is to conduct experiments that explore the interplay between adaptive PRZI traders such as PRSH and human traders, interacting and co-adapting in the same market (see \cite{bao_etal_2022} for a recent review), and/or to study markets populated by heterogeneous mixes of adaptive strategies, pitting adaptive PRZI traders against other adaptive traders with higher-dimensional strategy-spaces (e.g., \cite{cliff_2009_zip60}); and another is to revisit the possibilities for the market's auction mechanism to be co-evolving along with the set of strategies played by the traders active in that market (see, e.g.: \cite{walia_byde_cliff_2003,phelps_mcburney_parsons_2010}).
Future papers will explore these and other issues.
\section*{Conflict of interest}
The author declares that he has no conflicts of interest.
\section*{Acknowledgements}
I am very grateful to the anonymous reviewers whose suggestions for changes to an earlier version of this paper improved it significantly, and to Conor Mullaney who pointed out an error in the revised version, now corrected.
\clearpage
\section{Introduction}\label{s1}
It is well-known that in dense-in-themselves $T_1$-spaces, all
scattered subsets are nowhere dense. This result was established
by Kuratowski in the proof that in $T_1$-spaces the finite union
of scattered subsets is scattered.
In a recent paper Coleman asked the following question
\cite[Question 4]{C1}: Is it true that in dense-in-themselves,
$T_D$-spaces all scattered sets are nowhere dense? In what
follows, we will show that even in dense-in-themselves
semi-$T_D$-spaces all $\alpha$-scattered sets are nowhere dense.
The question of Coleman is in fact very well motivated not only
because it is interesting to know how low one can go on the
separations below $T_1$ and still have the scattered sets being
nowhere dense but also from a `digital point of view'. In terms
of Digital Topology, we will prove that in semi-$T_D$-spaces with
empty open screens, trace spaces have empty consolidations.
In Digital Topology several spaces that fail to be $T_1$ are very
often important in the study of the geometric and topological
properties of digital images \cite{KR1,K1}. Such is in fact the
case with the major building block of the digital n-space -- the
{\em digital line} or the so called {\em Khalimsky line}. This
is the set of the integers, $\mathbb Z$, equipped with the
topology $\cal K$, generated by ${\cal G}_{\cal K} = \{ \{ 2n-1,
2n, 2n+1 \} \colon n \in {\mathbb Z} \}$.
A {\em fenestration} \cite{K1} of a space $X$ is a collection of
disjoint nonempty open sets whose union is dense. The {\em
consolidation} $A^+$ \cite{K1} of a set $A$ is the interior of
its closure. When there is a fenestration of a space $(X,\tau)$
by singletons, the space $(X,\tau)$ is called {\em
$\alpha$-scattered} \cite{DGR1} or a {\em trace space} \cite{K1}.
For example, in the digital line the collection $\{ \{ n \} :
n \in {\mathbb Z}$ and $n$ is odd$\}$ is a fenestration of
$({\mathbb Z},{\cal K})$. All scattered sets are
$\alpha$-scattered but not vice versa \cite{DGR1}. In $T_0$-spaces
without isolated points, we may encounter a trace space which
fails to be nowhere dense \cite{C1}. Nevertheless, as we will
show, with the presence of the very weak separation `semi-$T_D$',
in spaces with no isolated points we have all $\alpha$-scattered
sets being nowhere dense.
A topological space $X$ is called a {\em $T_{D}$-space} if
every singleton is locally closed or equivalently if the derived
set $d(x)$ is closed for every $x \in X$. Recall that $X$ is a
{\em semi-$T_{D}$-space} if every singleton is open or
semi-closed \cite{D1}. Recall that a subset $A$ of a space
$(X,\tau)$ is called {\em locally dense} \cite{CM1} if $A
\subseteq A^+$. Note that every open and every dense set is
locally dense.
\section{When is $\cal N$ finer than $\cal S$?}\label{s2}
Recall that a topological ideal $\cal I$ is a nonempty collection
of sets of a space $(X,\tau)$ closed under heredity and finite
additivity. For example, the families $\cal N$ (of all nowhere
dense sets) and $\cal F$ (of all finite sets) always form ideals
while the family $\cal S$ of all scattered sets is an ideal if
and only if the space is $T_0$.
\begin{proposition}\label{p1}
For a topological space $(X,\tau)$ the following conditions are
equivalent:
{\rm (1)} $X$ is a dense-in-itself semi-$T_D$-space.
{\rm (2)} Every singleton is nowhere dense.
{\rm (3)} ${\cal F} \subseteq {\cal N}$.
{\rm (4)} There are no locally dense singletons in $X$.
\end{proposition}
{\em Proof.} (1) $\Rightarrow$ (2) Let $x \in X$. Since $X$ is
a semi-$T_D$-space, $\{ x \}$ is open or semi-closed \cite{D1}.
Since $X$ is dense-in-itself, $\{ x \}$ is semi-closed. On the
other hand, in any topological space every singleton is locally
dense (= preopen) or nowhere dense \cite{JR1}. If $\{ x \}$ is
preopen, then it must be (due to semi-closedness) regular open.
As $X$ has no isolated points, we conclude that $\{ x \}$ is
nowhere dense.
(2) $\Rightarrow$ (3) Obvious, since the ideal of nowhere dense
sets is closed under finite additivity.
(3) $\Rightarrow$ (4) Follows easily from the fact that
singletons are either locally dense or nowhere dense.
(4) $\Rightarrow$ (1) If some point $x \in X$ were isolated, then
it would be locally dense. This shows that $X$ is
dense-in-itself. That $X$ is a semi-$T_D$-space follows easily
from the fact that nowhere dense sets are semi-closed. $\Box$
Recall that a subset $A$ of a topological space $(X,\tau)$ is
called {\em $\beta$-open} if $A$ is dense in some regular closed
subspace of $X$. Note that every locally dense set is
$\beta$-open.
\begin{observation}\label{p2}
{\rm (i)} Every $\beta$-open subset of a dense-in-itself
semi-$T_D$-space is also dense-in-itself and semi-$T_D$.
{\rm (ii)} Let $(X_i,\tau_i)_{i \in I}$ be a family of
topological spaces such that at least one of them is a
dense-in-itself semi-$T_D$-space. Then the product space $X =
\prod_{i \in I} X_i$ is also dense-in-itself and semi-$T_D$.
\end{observation}
\begin{theorem}\label{t1}
If a topological space $(X,\tau)$ is dense-in-itself and
semi-$T_D$, then every $\alpha$-scattered subset of $X$ is
nowhere dense.
\end{theorem}
{\em Proof.} Let $A \subseteq X$ be $\alpha$-scattered. Assume
that $A^+$ is nonempty, i.e., there exists a nonempty $U \in
\tau$ such that $U \subseteq {\rm cl} (A)$. Since $(A,\tau|A)$
is $\alpha$-scattered, $U$ meets $I(A)$, the set of all isolated
points of $(A,\tau|A)$. Let $x \in U \cap I(A)$ and let $V$ be
an open subset of $(X,\tau)$ such that $V \cap A = \{ x \}$. Set
$W = U \cap V$. Note that $W \subseteq V \cap \overline{A}
\subseteq \overline{V \cap A} = \overline{\{ x \}}$ and so $\{
x \}$ has nonempty consolidation, i.e. it is not nowhere dense
in $X$. By Proposition~\ref{p1}, we have a contradiction. Hence,
$A$ is nowhere dense. $\Box$
The digital interpretation of Theorem~\ref{t1} is as follows: In
semi-$T_D$-spaces with no isolated points, i.e.,
semi-$T_D$-spaces with empty open screens, the trace spaces have
empty consolidations.
Now, we can apply the result above in order to show that the
$\alpha$-scattered subsets of the density topology are in fact
precisely its Lebesgue null sets.
\begin{definition}\label{d2}
{\em A measurable set $E \subseteq {\mathbb R}$ has density $d$
at $x \in {\mathbb R}$ if $$\lim_{h \rightarrow 0} \frac{m(E \cap
[x-h,x+h])}{2h}$$ exists and is equal to $d$. Set $\phi(E) = \{
x \in {\mathbb R} \colon d(x,E) = 1 \}$. The open sets of the
density topology $\cal T$ are those measurable sets $E$ that
satisfy $E \subseteq \phi(E)$. Note that the density topology
$\cal T$ is finer than the usual topology on the real line.}
\end{definition}
\begin{corollary}\label{c1}
The trace spaces (i.e., the $\alpha$-scattered subsets) of the
density topology are precisely its Lebesgue null sets.
\end{corollary}
{\em Proof.} Follows from Theorem~\ref{t1} and the fact that a
subset $A$ of the density topology is nowhere dense if and only
if it is a Lebesgue null set \cite{T1}. $\Box$
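As a concrete illustration of Corollary~\ref{c1}, consider the set $\mathbb Q$ of rationals: being countable, it is a Lebesgue null set and hence a trace space of $({\mathbb R},{\cal T})$. This can also be seen directly. If $N$ is any null set, then for every $x \in {\mathbb R}$
$$\lim_{h \rightarrow 0} \frac{m(({\mathbb R}\setminus N) \cap [x-h,x+h])}{2h} = 1,$$
so $\{ x \} \cup ({\mathbb R}\setminus N)$ is $\cal T$-open for each $x \in N$. Consequently, every null set is $\cal T$-closed and discrete in its subspace topology, hence scattered and, in particular, $\alpha$-scattered.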
\section{Binary condensates and vortex dipoles}
The dynamics of a weakly interacting binary condensate at zero temperature is well described by a set of coupled GP equations
\begin{equation}
\left[ \frac{-\hbar^2}{2m}\nabla^2 + V_i({\mathbf r},t) +
\sum_{j=1}^2U_{ij}|\Psi_j({\mathbf r},t)|^2 -
i\hbar\frac{\partial}{\partial t}\right]\Psi_i ({\mathbf r},t) = 0
\label{eq.gp}
\end{equation}
in the mean-field approximation, where $i = 1, 2$ is the species index. Here
$U_{ii} = 4\pi\hbar^2a_{ii}/m_i$, where $m_i$ is the mass and $a_{ii}$ is
the $s$-wave scattering length, is the intra-species interaction,
$U_{ij}=2\pi\hbar^2a_{ij}/m_{ij}$, where $m_{ij}=m_i m_j/(m_i+m_j)$ is the
reduced mass and $a_{ij}$ is the inter-species scattering length, is the
inter-species interaction, and $V_i({\mathbf r})$ is the trapping potential
experienced by the $i$th species. In the present work, we consider a binary condensate consisting of $^{85}$Rb and $^{87}$Rb, for which $m_1\approx m_2$. Furthermore, we also consider identical trapping potentials, which are axially symmetric, for both species. The total potential is then
$$
V({\mathbf r},t) = \frac{m\omega^2}{2}(x^2 + y^2 +
\beta ^2 z^2) + V_{\rm obs}(x,y,t),
$$
where $V_{\rm obs}(x,y,t) =
V_0 (t)\exp\lbrace -2([x-x_0(t)]^2+y^2)/w_0^2\rbrace$ is the blue detuned
Gaussian obstacle potential and $\beta$ is the anisotropy parameter. Define
the oscillator length of the trapping potential
$a_{\rm osc} = \sqrt{\hbar/(m\omega)}$, and consider $\hbar\omega$ as the unit
of energy. We can then rewrite the equations in dimensionless form with
transformations
$\tilde{{\mathbf r}} = \mathbf r/a_{\rm osc}$,
$\tilde{t} = t\omega $ and
$\phi_{i}(\tilde{{\mathbf r}},\tilde{t})= \sqrt{a_{\rm osc}^3/N_i}
\Psi_i({\mathbf r},t)$.
In pancake-shaped traps ($\beta\gg1$), $\phi(\mathbf{r},t)=
\psi(x,y,t)\zeta(z)\exp({-i\beta t/2})$ \cite{Muruganandam}, where
$\zeta=(\beta/(2\pi))^{1/4}\exp(-\beta z^2/4)$ is the ground state wave
function in the axial direction. Eq.~(\ref{eq.gp}) can then be reduced to the two-dimensional form
\begin{eqnarray}
\left[ -\frac{1}{2}\left(\frac{\partial^2}{\partial x^2} +
\frac{\partial^2}{\partial y^2}\right) + \frac{x^2+\alpha_i^2y^2}{2} +
V_{\rm obs}(x,y,t) + \right. \nonumber \\
\sum_{j=1}^2 u_{ij}|\psi_j({\mathbf r},t)|^2 \left. -
i\frac{\partial}{\partial t}\right]
\psi_i ({\mathbf r},t) = 0,
\label{gp_2d}
\end{eqnarray}
where $ u_{ii} = 2 a_{ii}N_i\sqrt{2\pi\beta}/a_{\rm osc}$ and $ u_{ij} = 2 a_{ij}N_j\sqrt{2\pi\beta}/a_{\rm osc}$. Here we have neglected a constant term corresponding to the energy along the axial direction, as it only shifts the energies and chemical potentials by a constant without affecting the dynamics. In the present work, we
consider $u_{12}>\sqrt{u_{11}u_{22}}$ so that the ground state of the binary
condensate is phase-separated. The geometry of the density distribution is such
that the species with the lower repulsion energy forms a core and the other
species forms a shell around it. For convenience, we identify the former and
latter as the first and second species, respectively. With this labelling,
interaction energies $u_{11}<u_{22}$ and for equal populations, this implies
$a_{11}<a_{22}$.
To be specific, we consider a $^{85}$Rb-$^{87}$Rb binary condensate
with $a_{11} = 460a_0$, $a_{22} = 99a_0$, and $a_{12} = 214a_0$ as the
scattering length values and $2 N_1 = N_2 = 10^6$ as the number of atoms. Here
$a_{11}$ is tunable with magnetic Feshbach resonance \cite{Cornish}.
With this set of parameters, the stationary state of the $^{85}$Rb-$^{87}$Rb binary condensate is just phase-separated. The trapping potential and obstacle laser potential parameters are the same as those considered in Ref. \cite{Neely},
i.e. $\omega/(2\pi) = 8$Hz, $\alpha = 1$, $\beta = 11.25$,
$V_0(0) = 93.0\hbar\omega$, and $w_0 = 10\mu$m. Hereafter we term this set of
scattering lengths, number of atoms and trapping potential parameters as
{\em set a}.
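As a quick numerical check that these parameters put the system just inside the phase-separated regime, note that the condition $u_{12}>\sqrt{u_{11}u_{22}}$ reduces, up to the population factors, to $a_{12}>\sqrt{a_{11}a_{22}}$; a short Python sketch (using the $^{87}$Rb mass for both isotopes, since $m_1 \approx m_2$; all names are illustrative):
\begin{verbatim}
import numpy as np

hbar, amu = 1.054571817e-34, 1.66053907e-27
a0 = 5.29177e-11                      # Bohr radius (m)
m = 86.909 * amu                      # ~mass of 87Rb; m1 ~= m2
omega = 2 * np.pi * 8.0               # trap frequency (rad/s)
a_osc = np.sqrt(hbar / (m * omega))   # oscillator length, ~3.8e-6 m

a11, a22, a12 = 460 * a0, 99 * a0, 214 * a0
print(a12 / np.sqrt(a11 * a22))       # ~1.003: just phase-separated
\end{verbatim}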
In hydrodynamics, the velocity field of a vortex
dipole is the vector sum of two component fields. One of the fields arises due
to the inhomogeneous density of the condensate and leads to the precession of an
off-center vortex around the trap center \cite{Jackson-2,Svidzinsky}. In addition to this, each vortex has a velocity field which varies inversely with the distance from its center and which is experienced by the other vortex of the vortex dipole. In the present work, we move the obstacle
along $x$-axis and generate vortex dipoles located symmetrically about
$x$-axis. If $(x,y)$ and $(x,-y)$ are the locations of the positively and
negatively charged vortices of the vortex dipole, respectively, then the
velocity field of the positively charged vortex is
$$
\mathbf v(x,y) = \omega_{\rm pr}\hat k \times \mathbf r + \frac{1}{2y}\hat i,
$$
where $\omega_{\rm pr}$ is the rotational frequency of a vortex with charge
$+1$ in the condensate. A similar equation describes the velocity field of the
negatively charged vortex.
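As a minimal sketch of this velocity field (in Python, in the scaled units of the text; the names are illustrative):
\begin{verbatim}
import numpy as np

def dipole_velocity(x, y, omega_pr):
    # Velocity of the positively charged vortex at (x, y), whose
    # partner sits at (x, -y): the precession term omega_pr*(k-hat x r)
    # plus the partner's field 1/(2y) along the x direction.
    vx = -omega_pr * y + 1.0 / (2.0 * y)
    vy = omega_pr * x
    return vx, vy
\end{verbatim}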
\begin{figure}
\includegraphics[width=8.5cm] {plot1}
\caption{(Color online) Stationary state $|\psi|$ of binary condensate with
obstacle potential at (a) $-6.0a_{\rm osc}$, (b) $-5.9a_{\rm osc}$, and (c) $-5.0a_{\rm osc}$.
}
\label{stat_fig}
\end{figure}
\section{Obstacle modified density}
To examine the density perturbations from the obstacle beam,
let $R_{\rm in}$ be the radius of the inner species or the interface boundary.
And, let $R_{\rm out}$ be the radial extent of the outer species. In the
absence of the obstacle beam, the chemical potentials of the first and second species in scaled units are $\mu_1=R_{\rm in}^2/4 + u_{11}/(\pi R_{\rm in}^2)$ and $\mu_2=R_{\rm out}^2/2$, respectively. The obstacle beam initially ($t=0$) located
at $(-R_{\rm out},0)$ traverses towards the center with velocity $v_{\rm obs}$
and the intensity is ramped down at the rate $-\partial V_0/\partial t=\eta$.
The location of the beam at a later time is
$x_0(t) = -R_{\rm out} + v_{\rm obs}t$,
and the intensity of the beam is $V_0(t) = V_0(0) -\eta t$,
where $V_0(0)$ is the initial intensity of the obstacle beam. At the starting
point, the total potential $V(-R_{\rm out},0,0) > R_{\rm out}^2/2$ and the
density of the outer species $|\psi_2|^2$ is zero around the center of the
obstacle beam. However, as it traverses the condensates with decreasing
intensity, at some later time $t'$, $V(x_0(t'),0,t') < R_{\rm out}^2/2$.
The density $|\psi_2|^2$ is then finite within the obstacle. For compactness of notation, hereafter we drop the explicit time dependence when writing $x_0(t)$ and $V_0(t)$.
A critical requirement to form coreless vortices is complete immersion of
the obstacle beam within $n_1$, the density of the first species. Based on the previous discussion, as the beam approaches the origin, the last point of contact between the beam and the interface at $R_{\rm in}$ lies along the $x$-axis. To determine the condition under which complete immersion occurs, consider the total potential along the $x$-axis around the
obstacle potential
\begin{eqnarray}
V(x,0,t) &\approx & \frac{x^2}{2} + V_0(t) \left[1 -2\frac{(x-x_0(t))^2}{w_0^2}
\right. \nonumber \\
&& \left . + 2\frac{(x-x_0(t))^4} {w_0^4}\right],
\label{V_x0t}
\end{eqnarray}
where the Gaussian beam potential has been expanded up to the second-order term in $2(x-x_0(t))^2/w_0^2$. The expression is appropriate in the neighborhood of the beam, and along the $x$-axis it has one local minimum ($x_{\rm min}$) and one local maximum. There is also a global minimum; however, it is not the correct solution, as it lies in the domain where $x-x_0(t)>w_0/\sqrt{2}$ and hence outside the domain of validity of Eq.~(\ref{V_x0t}). The correct global minimum is located at $x\approx 0$ and is associated with the harmonic potential. The obstacle is considered well immersed when $x_{\rm min}$ is located at the interfacial radius $R_{\rm in}$, and we let $t_{\rm im}$ denote the time when this occurs.
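The coefficients of this expansion can be checked symbolically; e.g., with the following SymPy sketch (symbol names are illustrative), writing $s=x-x_0(t)$:
\begin{verbatim}
import sympy as sp

s, w0, V0 = sp.symbols('s w_0 V_0', positive=True)
obstacle = V0 * sp.exp(-2 * s**2 / w0**2)   # Gaussian beam, s = x - x0(t)
print(sp.series(obstacle, s, 0, 6))
# V_0 - 2*V_0*s**2/w_0**2 + 2*V_0*s**4/w_0**4 + O(s**6)
\end{verbatim}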
When the obstacle beam is well inside the inner species, within the obstacle
beam $n_1$ is zero but $n_2$ is nonzero. It then forms a second interface layer, which embeds a bubble of $n_2$ within $n_1$. Recall that the first interface layer is located at $R_{\rm in}$ and it is where $n_2$ encloses $n_1$. The second interface, unlike the one at $R_{\rm in}$, is a
deformed ellipse, and we label it as $\Gamma$. Around the interface, the two
condensates mix with a penetration depth
$$
\Lambda_i = \xi_i\left [ \frac{\sqrt{a_{11}a_{22}}}
{a_{12}-\sqrt{a_{11}a_{22}}}\right ] ^{1/2},
$$
and the density of the minority species decays exponentially.
The transition from a single continuous interface to two separate boundaries
at $R_{\rm in}$ and $\Gamma$, when the obstacle crosses $R_{\rm in}$, is
smooth in the TF approximation, and that is how we have defined $t_{\rm im}$. There
are, however, strong perturbations when surface tension is considered, and the
separation of the two interfaces occurs when the beam is deep inside $n_1$.
Prior to the separation, the interface is deformed to accommodate a long neck
region where both $n_1$ and $n_2$ are nonzero. As the interface splits into two,
there are large deformations from the equilibrium interface geometry, and
surface tension generates a restoring force to bring it to equilibrium
geometry. This creates density patterns with high curvature and initiates the formation of coreless vortex dipoles.
\section{Obstacle-assisted bubble}
At a time $\Delta t$ after the obstacle is immersed in $n_1$, the location
and amplitude of the obstacle potential are
\begin{eqnarray}
x_0(t_{\rm im}+\Delta t) & = & -R_{\rm out} + v_{\rm ob}\times(t_{\rm im}
+ \Delta t), \nonumber \\
V_0(t_{\rm im}+\Delta t) & = & V_0(0) - \eta\times(t_{\rm im} + \Delta t).
\nonumber
\end{eqnarray}
The equilibrium TF $n_2$ within the obstacle potential at this instant of time is
\begin{equation}
n_{2\Gamma} (x, y, t_{\rm im} + \Delta t ) = \frac{\mu_2
- V(x, y, t_{\rm im} + \Delta t)}{u_{22}}.
\label{n2_den}
\end{equation}
This, however, is higher than the density distribution at $t_{\rm im}$, that is, $n_{2\Gamma} (x, y, t_{\rm im} + \Delta t )> n_{2\Gamma} (x, y, t_{\rm im})$, as the potential $V$ is lower. This is on account of two factors: first,
the amplitude of the obstacle potential decreases with time; and second, the
harmonic oscillator potential is lower at $x_0(t_{\rm im}+\Delta t)$. The
number of atoms, however, does not change from the value at $t_{\rm im}$
unless there is a strong Josephson current. Density $n_{2\Gamma}$ is thus
below the equilibrium value once the obstacle beam is well within $n_1$. This creates a stable bubble of $n_2$, trapped within and assisted by the beam, which is transported through $n_1$.
Departure of $n_2$ from the equilibrium is not the only density evolution
within the beam. There is a progressive change of $n_{1\Gamma} $ (density
of the first species within the obstacle beam) as the beam moves deeper into $n_1$. At time $t_{\rm im}$, when the obstacle is completely immersed in $n_1$, the effective potential experienced by $n_1$, $V(x, y, t_{\rm im}) + n_{2\Gamma}u_{12}$, is larger than $\mu_1$. So, $n_1$ is
zero within the beam. However, if the rate of ramping $\eta$ is such that at
a later time $V(x, y, t_{\rm im} + \Delta t) + n_{2\Gamma}u_{12}< \mu_1$,
while the beam is still within $n_1$, there is a finite $n_1$ within the beam.
Since $a_{12}>\sqrt{a_{11}a_{22}}$ for the condensate, in the TF approximation the bulk values of $n_{1\Gamma}$ and $n_{2\Gamma}$ cannot be simultaneously nonzero. At the same time, $n_{2\Gamma}$ is forbidden to migrate to the bulk $n_2$ due to the $n_1$-generated potential barrier in the region between
interfaces $\Gamma$ and $R_{\rm in}$. To accommodate both $n_1$ and $n_2$
within the beam, the shape of the interface $\Gamma$ is transformed to increase $n_{2\Gamma}$, so that $n_2$ is zero in certain regions within the beam where the condition $V(x, y, t_{\rm im} + \Delta t) + n_{2\Gamma}u_{12}<\mu_1$ is satisfied. This mechanism is responsible for the obstacle-assisted transport of $n_2$ across $n_1$.
\begin{figure}
\includegraphics[width=8.5cm] {plot2}
\caption{(Color online) $|\psi_i|$ and phase of binary condensates after the
creation of coreless vortex dipole. (a) Coreless vortex dipole is
fully formed but yet to dissociate from the obstacle, (b) center of
obstacle beam is at origin and coreless vortex dipole is separated,
(c) additional coreless vortex dipoles are generated when the obstacle
reaches the interface, (d) phase of the binary condensate
corresponding to (b), and (e) densities of the condensates along a line parallel to the $x$-axis and passing through the center of the coreless vortex. Blue
arrow marks the center of the coreless vortex dipole.
}
\label{vortex_fig}
\end{figure}
\section{Energetic stability of normal versus coreless vortex dipoles}
We use the TF approximation to compare the energetic stabilities of normal
and coreless vortex dipoles.
\subsubsection{Normal vortex dipole}
Assuming that the vortex affects the density of the condensate only within the core
regions, we can adopt the following {\em ansatz} for the binary condensate with a
normal vortex dipole at $(v_1,\pm v_2)$
\begin{eqnarray}
\psi_1(r) & = & \left \{
\begin{aligned}
& 0 \!&& x^2+y^2 > R_{\rm in}^2\\
& 0 \!&& [(x-v_1)^2+(y\pm v_2)^2] \leqslant \xi^2\\
& \sqrt{\frac{\mu_1 - V(x,y)}{u_{11}}}
&& \left \{
\begin{aligned}
& x^2+y^2 \leqslant R_{\rm in}^2~\&\\
&~[(x-v_1)^2+(y\pm v_2)^2] > \xi^2
\end{aligned} \right .
\end{aligned} \right . \\
\psi_2(r) &=& \left \{
\begin{aligned}
& \sqrt{\frac{\mu _2-V(x,y)}{u_{22}}}
&&R_{\rm in}^2\leqslant (x^2+y^2) \leqslant R_{\rm out}^2\\
& 0&&(x^2+y^2) > R_{\rm out}^2\\
& 0&&(x^2+y^2) < R_{\rm in}^2.
\end{aligned} \right .
\end{eqnarray}
The vortex dipole contributes mainly through the kinetic energy of $\psi_1$,
which may be approximated by the value for a single-species condensate given in Ref.~\cite{Zhou}
\begin{equation}
E_{\rm vd} = \frac{2\mu _1}{u_{11}}\ln\left(\frac{2v_2}{\xi}\right),
\end{equation}
where $\xi = 1/\sqrt{2\mu_1}$ is the coherence length of the inner species.
Using this {\em ansatz}, the numbers of atoms are
\begin{eqnarray}
N_1 & = & \frac{\pi \left(1+4 v_1^2 \mu _1+4 v_2^2 \mu _1-8
\mu _1^2-2 R_{\rm in}^4 \mu _1^2+8 R_{\rm in}^2
\mu _1^3\right)}{8 u_{11} \mu _1^2},\nonumber \\
N_2 & = & \frac{\pi \left(R_{\rm in}^2-2 \mu _2\right){}^2}{4 u_{22}}.
\end{eqnarray}
In a similar way, we can evaluate the energy of the entire condensate.
\subsubsection{Coreless vortex dipole}
For the coreless vortex dipole, we adopt the {\em ansatz}
\begin{eqnarray}
\psi_1(r) & = & \left \{
\begin{aligned}
& 0 && x^2+y^2 > R_{\rm in}^2\\
& 0 && [(x-v_1)^2+(y\pm v_2)^2] \leqslant \xi^2\\
& \sqrt{\frac{\mu_1 - V(x,y)}{u_{11}}}
&& \left \{
\begin{aligned}
& x^2+y^2 \leqslant R_{\rm in}^2~\&\\
&~[(x-v_1)^2+(y\pm v_2)^2] > \xi^2,
\end{aligned} \right .
\end{aligned} \right . \\
\psi_2(r) & = & \left \{
\begin{aligned}
& \sqrt{\frac{\mu _2-V(x,y)}{u_{22}}}
&&\left \{
\begin{aligned}
& R_{\rm in}^2\leqslant (x^2+y^2) \leqslant R_{\rm out}^2~||~\\
& [(x-v_1)^2+(y\pm v_2)^2] \leqslant \xi^2
\end{aligned} \right . \\
& 0&&(x^2+y^2) > R_{\rm out}^2\\
& 0&& \left \{
\begin{aligned}
& x^2+y^2 < R_{\rm in}^2~\&\\
&~[(x-v_1)^2+(y\pm v_2)^2] > \xi^2.
\end{aligned} \right .
\end{aligned} \right .
\end{eqnarray}
Using this {\em ansatz}, the modified expression for $N_2$ is
\begin{eqnarray}
N_2 &=& \frac{\pi}{8 u_{22} \mu _1^2}\left(2 R_{\rm in}^4 \mu _1^2
+ 8 \mu _1 \mu _2-8 R_{\rm in}^2 \mu _1^2 \mu _2+8 \mu _1^2 \mu _2^2 -1
\right. \nonumber \\
& &\left . -4 v_1^2 \mu _1-4 v_2^2 \mu _1\right).
\end{eqnarray}
As done earlier, we can also calculate the total energy $E$ of the system. The important change in $E$ is the inclusion of the interface interaction energy $E_{\rm int}$.
It arises from the interface interactions at the cores of the vortex and
antivortex. Based on Ref.~\cite{Timmermans},
\begin{equation}
E_{\rm int} = \frac{8}{3}Pb\pi\xi\left (\frac{a_{12}}{\sqrt{a_{11}a_{22}}}-1
\right ),
\end{equation}
where $P$ is the pressure on the circumference of the cores and
\begin{equation}
b = 2\left [ \frac{3(\mu_1+\mu_2)\sqrt{a_{11}a_{22}}}{4\mu_1\mu_2
(a_{12}-\sqrt{a_{11}a_{22}})} \right ] ^{1/2}.
\end{equation}
In both cases, i.e., with normal and coreless vortex dipoles, the energy can be minimized with the constraint of a fixed number of atoms, using $R_{\rm in}$ as the minimization parameter. For the parameters {\em set a} without the obstacle potential, the coreless vortex dipole has lower energy than the normal vortex dipole, as shown in Fig.~\ref{fig_coreless_vortex_dipole1} for a vortex dipole located at $(0,\pm1)$.
\begin{center}
\begin{figure}[ht]
\includegraphics[width=8.5cm] {coreless_vortex_dipole1}
\caption{(Color online) The energy of the binary condensate with $V_0=0$ and the rest of the parameters the same as those in parameters {\em set a}, as a function of $R_{\rm in}$. The condensate has a vortex dipole located at $(0,\pm 1)$. Black and blue curves are for coreless and normal vortex dipoles, respectively.}
\label{fig_coreless_vortex_dipole1}
\end{figure}
\end{center}
For $N_1=N_2=10^6$, $a_{11}=51a_0$, $a_{22} = 99a_0$, and the rest of the parameters the same as in parameters {\em set a}, the condensate with a normal vortex dipole has lower energy than the one with a coreless vortex dipole
(see Fig.~\ref{fig_coreless_vortex_dipole2}). These results are in very good
agreement with the numerical results.
\begin{figure}[ht]
\includegraphics[width=8.5cm] {coreless_vortex_dipole2}
\caption{(Color online) The energy of the binary condensate with $N_1=N_2=10^6$,
$a_{11}=51a_0$, $a_{22}=99a_0$, $V_0 = 0$, and the rest of the parameters the same as those in parameters {\em set a}, as a function of $R_{\rm in}$. The condensate has a vortex dipole located at $(0,\pm 1)$. Black and blue curves are for coreless and normal vortex dipoles, respectively.}
\label{fig_coreless_vortex_dipole2}
\end{figure}
\section{Numerical results and conclusions}
To examine the formation of coreless vortex dipoles in finer detail, we resort to numerical solution of Eq. (\ref{gp_2d}) with a modified version of the split-step Crank-Nicolson code reported in Ref. \cite{Muruganandam}. Consider first the case in which the obstacle potential is initially located in the outer component and is moved across the interface, towards
with parameters {\em set a}, however, with maximum value of obstacle laser
potential $V_0(0) = 125.0$. The obstacle potential is initially located at
$x = -15 a_{\rm osc}$. The obstacle moves with the speed of $180\mu$m/s,
progressively decreases in strength with rate constant $\eta = 10.1$
(in scaled units), and vanishes at $x = 8a_{\rm osc}$. The obstacle potential
creates a normal vortex dipole as it traverses $n_2$. As the obstacle
penetrates the interface, it carries the vortex dipole generated in the outer
component in its region of influence. Further motion of the obstacle, in
$n_1$, creates coreless vortices.
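For orientation, the skeleton of one propagation step for Eq.~(\ref{gp_2d}) is sketched below in Python. Note that this sketch uses a split-step {\em Fourier} (spectral) kinetic step for brevity, rather than the Crank-Nicolson finite-difference step of the actual code of Ref.~\cite{Muruganandam}, and all names are illustrative:
\begin{verbatim}
import numpy as np

def gp_step(psi1, psi2, V, u11, u22, u12, u21, dt, kx, ky):
    # One Strang-split step of the coupled 2D GP equations in scaled
    # units: half kinetic step (in k-space), full potential/nonlinear
    # step (in real space), half kinetic step.
    K = 0.5 * (kx[:, None]**2 + ky[None, :]**2)
    half_T = np.exp(-0.5j * dt * K)
    psi1 = np.fft.ifft2(half_T * np.fft.fft2(psi1))
    psi2 = np.fft.ifft2(half_T * np.fft.fft2(psi2))
    n1, n2 = np.abs(psi1)**2, np.abs(psi2)**2
    psi1 = psi1 * np.exp(-1j * dt * (V + u11 * n1 + u12 * n2))
    psi2 = psi2 * np.exp(-1j * dt * (V + u22 * n2 + u21 * n1))
    psi1 = np.fft.ifft2(half_T * np.fft.fft2(psi1))
    psi2 = np.fft.ifft2(half_T * np.fft.fft2(psi2))
    return psi1, psi2
\end{verbatim}
Here $V$ is the (time-dependent) total trap-plus-obstacle potential evaluated at the current step, and imaginary-time propagation for the stationary states follows the same pattern with $dt \rightarrow -i\,dt$ plus renormalization of the wave functions after each step.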
The key factor which influences the generation of coreless vortex dipoles is
the deformation at the aft region of the obstacle-confined $n_2$. The deformation, accompanied by large mixing, is initiated when the interface
is about to break up. This is evident even in the stationary state density
distribution shown in Fig. \ref{stat_fig}(b). At break-up, the interface
repulsion and potential gradient are highest along the $x$-axis and lead to
the formation of a dimple. Curvature is large around the deformed interface,
and flow of $n_1$ past it generates vortex dipoles. However,
as the vortex dipoles are generated within the penetration zone, the build up
of $n_1$ around the vortex core drives $n_2$ from the penetration zone to the
core of the vortex. For the parameters considered, the first coreless vortex dipole is formed soon after the interface break-up and is shown in Fig. \ref{vortex_fig}(a) at $t=0.21$\,s. The vortex dipole is almost detached when the obstacle reaches the origin, as shown in Fig. \ref{vortex_fig}(b). From the
phase, Fig. \ref{vortex_fig}(d), it is evident that the phase singularity
is associated with $n_1$ and $n_2$ is non-zero at the core. This is seen in
the plot of densities along a line parallel to the $x$-axis and passing through the
vortex core, Fig. \ref{vortex_fig}(e). In the figure, the blue arrow marks the
location of the vortex core. Another important density modification is that, although the obstacle potential is repulsive, $n_2$ has a maximum at the center. This is due to the repulsion energy from $n_1$. More coreless vortex dipoles are created when the obstacle crosses the center of the harmonic potential, Fig.~\ref{vortex_fig}(c).
The other initial configuration is to place the obstacle potential within
$n_1$. If $V_{\rm obs}$ is such that $\mu_1 < V(\mathbf r, t_0) < \mu_2$, then within the obstacle potential $n_2$ is nonzero. The initial density distribution is as in Figs. \ref{stat_fig}(c) and \ref{triply_charged_vdipole}(a).
Due to inertia, $n_2$ lags behind the beam
when the obstacle suddenly starts to move. When the obstacle has shifted from
its initial position $x_0$ to $x_0 + \delta x$, the points of intersection of the inner interfaces (assumed circular with radius $R_{\Gamma}$ and centered around $x_0$ and $x_0 + \delta x$) are
\begin{equation}
(x_c,y_c) = \left(\frac{1}{2} (2 x_0+\delta x),
\pm \frac{1}{2} \sqrt{4 R_{\Gamma}^2- \delta x^2}\right).
\label{cross_over}
\end{equation}
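Equation~(\ref{cross_over}) is just the intersection of the two circles, and can be verified symbolically, e.g.\ with the following SymPy sketch (names illustrative):
\begin{verbatim}
import sympy as sp

x0, dx, R = sp.symbols('x_0 delta_x R', positive=True)
xc = (2 * x0 + dx) / 2
yc = sp.sqrt(4 * R**2 - dx**2) / 2
c1 = (xc - x0)**2 + yc**2 - R**2        # circle centred at (x0, 0)
c2 = (xc - x0 - dx)**2 + yc**2 - R**2   # circle centred at (x0 + dx, 0)
print(sp.simplify(c1), sp.simplify(c2))  # -> 0 0
\end{verbatim}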
\begin{center}
\begin{figure}
\includegraphics[width=8.5cm] {plot3}
\caption{(Color online) The generation of a triply charged vortex dipole in the
$^{85}$Rb-$^{87}$Rb binary condensate with parameters {\em set a}.
The obstacle potential, initially located at $x = -5.0 a_{\rm osc}$,
is moved with a velocity of $220\mu$m/s up to $x = 5 a_{\rm osc}$.
First, second, and third columns are the solutions at $t=0$s,
$t=0.14$s and $t = 0.32$s, respectively.}
\label{triply_charged_vdipole}
\end{figure}
\end{center}
Due to inertia, $n_2$ still occupies the region $\delta \Gamma$ defined as
$$
(x-x_0)^2+y^2 < R_{\Gamma}^2<(x-x_0-\delta x)^2+y^2.
$$
The potential experienced by $n_2$ along the left interface of $\delta \Gamma$ decreases as one moves away from the $x$-axis. This leads to a redistribution of $n_2$ in the $\delta \Gamma$ region and the creation of a pressure difference $\delta P = (n_1^2u_{11}-n_2^2u_{22})/2\geqslant0$ at the left bounding arc of $\delta\Gamma$. Along this arc, $\delta P$ decreases from $(x_0-R_{\Gamma},0)$ to $(x_c,y_c)$ and tends to flatten it. Another equally important dynamical process is the redistribution of $n_1$ as $V_{\rm eff} = V(x,y) + n_2(x,y)u_{12}$ increases along the left interface from the $x$-axis to $(x_c,y_c)$. Due to this, $n_1$ starts to penetrate the interface from the point where $V_{\rm eff} \leqslant \mu_1$. Thus the repulsive interaction at the interface and the gradient of the harmonic potential combine to form a dimple. The dimple formation initiates the formation of
(coreless) ghost vortices in the obstacle region \cite{Fujimoto}, which
detach to form coreless vortex dipoles as is shown in
Figs.~\ref{triply_charged_vdipole}(b)-(c).
We have studied the motion of the Gaussian obstacle across a phase-separated
binary condensate. With the possibility of tuning one of the scattering lengths
using Feshbach resonances, these condensates can be used to experimentally
realise the obstacle-assisted transport of one species across another, as well as coreless vortices. Using both the TF approximation and exact numerical solutions of the coupled GP equations, we have shown that coreless vortex dipoles
can be energetically more preferable than normal vortex dipoles.
\begin{acknowledgements}
We thank S. A. Silotri, B. K. Mani, and S. Chattopadhyay for very useful
discussions. The numerical computations reported in the paper were done on
the 3 TFLOPs cluster at PRL. The work of PM forms a part of Department of
Science and Technology (DST), Government of India sponsored research project.
\end{acknowledgements}
\section*{Reproducibility Statement}
We use PyTorch~\citep{pytorch} and the transformers library~\citep{huggingface} from Huggingface to implement all the baselines and our proposed method in the experiments. We have described our method of self-distillation for further pre-training in Algorithm~\ref{algo:self-distillation} and specified the full experimental setup, including hyperparameters, in Section~\ref{sec:exp} and Appendix~\ref{appendix:hparams}. For the theoretical analysis, we have provided all the proofs in Appendix~\ref{app:1}.
\section*{Appendix}
\section{Proofs} \label{app:1}
We also define the model output matrix $f_t \in \RR^{n \times p}$ by $(f_t)_{ij} =f(x_{i},w_{t})_j$. For example, $f_0$ is the initial teacher label matrix. Let $[n]=\{1,\dots,n\}$. Denote the rank of $[I_p \otimes \Phi]$ by $r=\rank([I_p \otimes \Phi]) \le np$.
Define ${\tilde U}=[u_1,u_{2},\dots,u_{r}] \in \RR^{dp\times r}$ and $\Pb_{r}=I-{\tilde U} {\tilde U}\T$, which is the projection matrix onto the null space of ${\tilde U}\T$.
We first prove the following lemma, which will be used in the proofs of Theorem \ref{thm:1} and Theorem \ref{prop:2} later:
\begin{lemma} \label{lemma:1}
For any $t\in \NN_0$,
$w_{t,0} = \sum_{i=1}^r \alpha_{i,t} \tilde u_i + \one\{t=0\}v$, where $\alpha_{i,t}=\frac{ 1}{\sigma_i}\left(\frac{1}{1+ (n\lambda/\sigma_i^2 )}\right)^t\in \RR$, $\tilde u_i={\tilde y}_i u_{i} \in \RR^{dp}$, ${\tilde y}_i = ( V\T \vect[f_0])_{i} \in \RR$, and $v=\Pb_{r} w_{0,0}$.
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lemma:1}]
Define $w_{t}\coloneqq w_{t,0}$ for $t \in \NN_+$.
The necessary condition on the solution $w_t$ at step $t$ is that $\nabla L(w_{t})=0$. Thus, by solving $\nabla L(w_{t})=0$ for $w_t$, we have that $w_{t}=([I_p \otimes \Phi] [I_p \otimes \Phi]\T+n\lambda I)^{-1}[I_p \otimes \Phi] \vect[f_{t-1}]$.
By using the singular value decomposition $[I_p \otimes \Phi]=U \Sigma V\T$, since $UU\T =U\T U= I$ and $V\T V=I$, we have that
\begin{align*}
w_t =(U \Sigma \Sigma\T U\T+n\lambda I) ^{-1}U \Sigma V\T\vect[ f_{t-1}]&=(U (\Sigma \Sigma\T +n\lambda I)U\T) ^{-1}U \Sigma V\T \vect[f_{t-1}]
\\ & =U(\Sigma \Sigma\T +n\lambda I)^{-1} \Sigma V\T \vect[f_{t-1}].
\end{align*}
Therefore, $w_t =U(\Sigma \Sigma\T +n\lambda I)^{-1} \Sigma V\T \vect[f_{t-1}$]. Using this and $[I_p \otimes \Phi]=U \Sigma V\T$,
\begin{align*}
\vect[f_t ]= \vect[\Phi\T W\T _{t}I_p]=[I_p \otimes \Phi]\T w_t &= [I_p \otimes \Phi]\T U(\Sigma \Sigma\T +n\lambda I)^{-1} \Sigma V\T \vect[f_{t-1} ]
\\ & = V\Sigma\T (\Sigma \Sigma\T +n\lambda I)^{-1} \Sigma V\T \vect[f_{t-1}].
\end{align*}
Therefore, $\vect[f_t]=VA V\T \vect[f_{t-1}]$ where $A=\Sigma\T (\Sigma \Sigma\T +n\lambda I)^{-1} \Sigma$
. Repeating this process for $\vect[f_{t-1}]$, since $V\T V=I$,
$$
\vect[f_t ]=VA V\T VA V\T \cdots VA V\T \vect[f_0 ]=VA^{t} V\T \vect[f_0].
$$
Plugging this equation of $\vect[f_{t-1}]=VA^{t-1} V\T \vect[f_0]$ into the equation of $w_t =U(\Sigma \Sigma\T +n\lambda I)^{-1} \Sigma V\T \vect[f_{t-1}]$, we have that
$$
w_t =U(\Sigma \Sigma\T +n\lambda I)^{-1} \Sigma V\T VA^{t-1} V\T \vect[f_0 ]=U BA^{t-1} V\T \vect[f_0]
$$
where $B = (\Sigma \Sigma\T +n\lambda I)^{-1} \Sigma$. Here, we can rewrite the matrix $B \in \RR^{dp \times np}$ as
$$
B= \begin{bmatrix} \bar B \\
\mathbf{0}_{(dp-np )\times np} \\
\end{bmatrix}
$$
where $\mathbf{0}_{(dp-np )\times np} $ is the $(dp-np)$ by $np$ matrix with all entries being zero, and $\bar B \in \RR^{np \times np}$ is a diagonal matrix defined by
$$
\bar B_{ii} \coloneqq \sigma_i (\sigma_i^2 +n\lambda)^{-1}.
$$
Using this $B$ in the above equation of $w_t =U BA^{t-1} V\T \vect[f_0]$,
\begin{align*}
w_t &=U \begin{bmatrix} \bar B \\
\mathbf{0}_{(dp-np )\times np} \\
\end{bmatrix} A^{t-1} V\T\vect[ f_0]\\
&= \begin{bmatrix}u_1 & u_{2} & \cdots & u_{dp}\end{bmatrix} \begin{bmatrix} \bar BA^{t-1} \\
\mathbf{0}_{(dp-np )\times np} \\
\end{bmatrix} V\T \vect[f_0 ]\\
&=\bar U \bar BA^{t-1}V\T \vect[f_0]
\end{align*}
where $\bar U=[u_1,u_{2},\dots,u_{np}] \in \RR^{dp\times np}$. Since the matrix $A=\Sigma\T (\Sigma \Sigma\T +n\lambda I)^{-1} \Sigma\in \RR^{np \times np}$ is a diagonal matrix with its entry being
$
A_{ii} = \sigma_i ^{2}(\sigma_i^2 +n\lambda)^{-1},
$
this can be further simplified as
\begin{align*}
w_t &=\bar U \bar BA^{t-1}V\T\vect[ f_0]
\\ & = \bar U\begin{bmatrix}\frac{\sigma^{1}_1}{\sigma_1^2 +n\lambda} & & \\
& \ddots & \\
& & \frac{\sigma^{1}_{np}}{\sigma_{np}^2 +n\lambda} \\
\end{bmatrix} \begin{bmatrix} \frac{\sigma_1 ^{2}}{\sigma_1^2 +n\lambda} & & \\
& \ddots & \\
& & \frac{\sigma_{np} ^{2}}{\sigma_{np}^2 +n\lambda} \\
\end{bmatrix}^{t-1} \begin{bmatrix}{\tilde y}_1 \\
{\tilde y}_2 \\
\vdots \\
{\tilde y}_{np} \\
\end{bmatrix}
\\ & = \begin{bmatrix}u_1 & u_{2} & \cdots & u_{np}\end{bmatrix} \begin{bmatrix}\sigma_1 (\sigma_1^2 +n\lambda)^{-1}(\sigma_1^{2}(\sigma_1^2 +n\lambda)^{-1})^{t-1}{\tilde y}_1 \\
\sigma_2 (\sigma_2^2 +n\lambda)^{-1}(\sigma_2 ^{2}(\sigma_2^2 +n\lambda)^{-1})^{t-1}{\tilde y}_2 \\
\vdots \\
\sigma_{np} (\sigma_{np}^2 +n\lambda)^{-1}(\sigma_{np} ^{2}(\sigma_{np}^2 +n\lambda)^{-1})^{t-1}{\tilde y}_{np} \\
\end{bmatrix}
\\ & = \sum_{i=1}^{np}\sigma_i (\sigma_i^2 +n\lambda)^{-1}(\sigma_i ^{2}(\sigma_i^2 +n\lambda)^{-1})^{t-1} {\tilde y}_i u_i
\\ & = \sum_{i=1}^r\sigma_i (\sigma_i^2 +n\lambda)^{-1}(\sigma_i ^{2}(\sigma_i^2 +n\lambda)^{-1})^{t-1} {\tilde y}_i u_i
\end{align*}
where the last line follows from the fact that $\sigma_i (\sigma_i^2 +n\lambda)^{-1}(\sigma_i ^{2}(\sigma_i^2 +n\lambda)^{-1})^{t-1} =0$ for all $i > r$.
Since $\sigma_i (\sigma_i^2 +n\lambda)^{-1}(\sigma_i ^{2}(\sigma_i^2 +n\lambda)^{-1})^{t-1} =\sigma_i ^{2t-1}(\sigma_i^2 +n\lambda)^{-t}=\frac{1}{\sigma_i}(\frac{\sigma_i^{2}}{\sigma_i^2 + n\lambda})^t$ for $i \le r$, this implies that
$$
w_t = \sum_{i=1}^r \frac{1}{\sigma_i} \left(\frac{\sigma_i^{2}}{\sigma_i^2 + n\lambda}\right)^t {\tilde y}_i u_i = \sum_{i=1}^r \frac{1}{\sigma_i} \left(\frac{1}{1+ (n\lambda/\sigma_i^2 )}\right)^t {\tilde y}_i u_i
$$
Since $t\in \NN_+$ was arbitrary, this proves the claimed expression for any $t\in \NN_+$. For $t=0$, since
$$
{\tilde y}_i = ( V\T \vect[f_0])_{i}= ( V\T [I_p \otimes \Phi]\T w_{0,0})_{i}=( V\T V\Sigma\T U\T w_{0,0})_{i}=(\Sigma\T U\T w_{0,0})_{i} =\sigma_i u_i\T w_{0,0},
$$
we have that
$$
\sum_{i=1}^r \frac{1}{\sigma_i} \left(\frac{1}{1+ (n\lambda/\sigma_i^2 )}\right)^t {\tilde y}_i u_i=\left(\sum_{i=1}^r u_i u_i\T\right) w_{0,0} = {\tilde U} {\tilde U} \T w_{0,0}.
$$
Thus,
$$
w_{0,0} = {\tilde U} {\tilde U} \T w_{0,0}+(I- {\tilde U} {\tilde U} \T )w_{0,0}=\sum_{i=1}^r \frac{1}{\sigma_i} \left(\frac{1}{1+ (n\lambda/\sigma_i^2 )}\right)^t {\tilde y}_i u_i+(I- {\tilde U} {\tilde U} \T )w_{0,0}.
$$
Since $(I- {\tilde U} {\tilde U} \T )w_{0,0}=\Pb_{r} w_{0,0}$, this completes the proof of the statement for any $t\in \NN_0$.
\end{proof}
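The closed form in Lemma~\ref{lemma:1} can be sanity-checked numerically against the ridge-regression recursion it was derived from; the following NumPy sketch (toy sizes, illustrative names) should print \texttt{True}:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, d, p, lam, T = 5, 8, 3, 0.1, 6
Phi = rng.standard_normal((d, n))      # rank n a.s., since d >= n
Z = np.kron(np.eye(p), Phi)            # [I_p (x) Phi], shape (dp, np)
w0 = rng.standard_normal(d * p)        # w_{0,0}
f0 = Z.T @ w0                          # vec[f_0]: teacher labels

# iterate w_t = (Z Z^T + n*lam*I)^{-1} Z vec[f_{t-1}]
A, f, w = Z @ Z.T + n * lam * np.eye(d * p), f0, None
for _ in range(T):
    w = np.linalg.solve(A, Z @ f)
    f = Z.T @ w

# Lemma 1: w_T = sum_i (1/s_i) (1/(1 + n*lam/s_i^2))^T ytilde_i u_i
U, s, Vt = np.linalg.svd(Z, full_matrices=False)  # r = np values
ytil = Vt @ f0
alpha = (1.0 / s) * (1.0 / (1.0 + n * lam / s**2))**T
print(np.allclose(w, U @ (alpha * ytil)))          # -> True
\end{verbatim}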
\subsection{Proof of Theorem \ref{thm:1}}
\begin{proof}
Define $Z\coloneqq [I_p \otimes \Phi]\T \in \RR^{{\bar n} \times {\bar d}}$ where ${\bar n} = np$ and ${\bar d}=dp$.
Then,$$
\Lcal(w)=\frac{1}{2}\sum_{i=1}^n \|f(x_{i},w)-y_{i}\|^{2}_2=\frac{1}{2} \|Zw-Y\|_{2}^2
$$
where $Y = \vect[[y_1 ,\dots, y_n]\T] \in \RR^{{\bar n}}$. Since $\nabla \Lcal(w_{t,\tau})= Z\T(Zw_{t,\tau}-Y)$,
\begin{align*}
\frac{d w_{t,\tau}}{d \tau} = - Z\T(Zw_{t,\tau}-Y)
\end{align*}
Since $\rank(\Phi)=n$ and $d \ge n$, we have $\rank(Z)={\bar n}$ by the property of the Kronecker product with the identity matrix. Since $\rank(Z)={\bar n}$, there exists $v\in \RR^{{\bar d}}$ such that $Y=Zv$. Thus,
\begin{align*}
\frac{d w_{t,\tau}}{d\tau} &= - Z\T(Z w_{t,\tau}-Zv)
\\ & = - Z\T Z( w_{t,\tau}-v).
\end{align*}
Since $Z\T=U \Sigma V\T$, we have $Z\T Z=U \Sigma \Sigma\T U\T= \sum_{i=1}^{\bar n} \sigma_i^2 u_i u_i\T$. Thus,
\begin{align*}
\frac{d w_{t,\tau}}{d \tau} &= - \left( \sum_{i=1}^{\bar n} \sigma_i^2 u_i u_i\T \right)(w_{t,\tau}-v)= - \sum_{i=1}^{\bar n} \sigma_i^2 u_i u_i\T (w_{t,\tau}-v).
\end{align*}
Since the columns of $U$ forms the basis of $\mathbb{R}^{\bar d}$ and $w,v \in \RR^{\bar d}$, we can write $w_{t,\tau}= \sum_{k=1}^{{\bar d}} c_k ^{(t, \tau)}u_k$ and $v=\sum_{k=1}^{\bar d} q_k u_k$ for some $c_k ^{(t, \tau)}$ and $q_k$. Thus,
\begin{align*}
\frac{d w_{t,\tau}}{d \tau} &= - \sum_{i=1}^{\bar n} \sigma_i^2 u_i u_i\T \sum_{k=1}^{{\bar d}} (c_k ^{(t, \tau)}-q_k)u_k
\\ & =- \sum_{i=1}^{\bar n} \sum_{k=1}^{{\bar d}} \sigma_i^2 (c_k ^{(t, \tau)}-q_k)u_i u_i\T u_k
\\ & =- \sum_{i=1}^{\bar n} \sigma_i^2 (c_i ^{(t, \tau)}-q_i)u_i.
\end{align*}
Using $w_{t,\tau}= \sum_{k=1}^{{\bar d}} c_k ^{(t, \tau)}u_k$ for the right-hand side too, we have that
$$
\frac{d}{d \tau} \sum_{i=1}^{{\bar d}} c_i ^{(t, \tau)}u_i=- \sum_{i=1}^{\bar n} \sigma_i^2 (c_i ^{(t, \tau)}-q_i)u_i.
$$
This implies that for all $i \in \{1,\dots,{\bar n}\}$,
$$
\frac{d}{d \tau} c_i ^{(t, \tau)}=-\sigma_i^2 (c_i ^{(t, \tau)}-q_i),
$$
and $\frac{d}{d \tau} c_i ^{(t, \tau)}=0$ for all $i \notin \{1,\dots,{\bar n}\}$. This can be also seen by the fact that $\frac{d w_{t,\tau}}{d\tau} = - Z\T(Z w_{t,\tau}-Zv)$ with $Z\T=U \Sigma V\T$ and thus the dynamics only adds components of $u_i$ for $i \in \{1,\dots,{\bar n}\}$,
and not for $i \notin \{1,\dots,{\bar n}\}$. Thus, for components of $u_i$ for $i \notin \{1,\dots,{\bar n}\}$, the initial values stays. In other words, for $i \notin \{1,\dots,{\bar n}\}$,
$$
c_i ^{(t, \tau)}= c_{i}^{(t, 0)}.
$$
On the other hand, for $i \in \{1,\dots,{\bar n}\}$, since $\frac{d}{d \tau} q_i=0$,
\begin{align*}
\frac{d}{d \tau} (c_i ^{(t, \tau)}-q_i) = \frac{d}{d \tau} c_i ^{(t, \tau)}=-\sigma_i^2 (c_i ^{(t, \tau)}-q_i).
\end{align*}
Solving this for $(c_i ^{(t, \tau)}-q_i)$, we have that for $i \in \{1,\dots,{\bar n}\}$,
$$
c_i ^{(t, \tau)}-q_i = (c_i ^{(t, 0)}-q_i) e^{-\sigma_i^2 \tau}.
$$
This implies that
$$
c_i ^{(t, \tau)} =q_i+(c_i ^{(t, 0)}-q_i) e^{-\sigma_i^2 \tau}=q_i(1- e^{-\sigma_i^2 \tau})+c_i ^{(t, 0)} e^{-\sigma_i^2 \tau}.
$$
Combining these with $w_{t,T}= \sum_{k=1}^{{\bar d}} c_k ^{(t,T)}u_k$,
\begin{align} \label{eq:4}
w_{t,T}= \sum_{i=1}^{{\bar d}} c_i ^{(t,T)}u_i = \sum_{i=1}^{{\bar n}}q_i(1- e^{-\sigma_i^2T})u_i+\sum_{i=1}^{{\bar n}}c_i ^{(t, 0)} e^{-\sigma_i^2T}u_i+\sum_{i={\bar n}+1}^{{\bar d}}c_{i}^{(t, 0)} u_i.
\end{align}
Therefore, for any particular $s \in \Scal$,
since $U=[u_1,u_{2},\dots,u_{dp}]\in \RR^{{dp}\times{dp}}$ is an orthogonal matrix,\begin{align} \label{eq:1}
\|\Acal_{t}(s)\|_{2}^2=\|w_{t,T}\|_2^2 \le \sum_{i=1}^{{\bar n}}\left(q_i(1- e^{-\sigma_i^2T})\right)^2 + \sum_{i=1}^{{\bar n}}(c_i ^{(t, 0)})^2 e^{-2\sigma_i^2 T}+\sum_{i={\bar n}+1}^{{\bar d}}(c_{i}^{(t, 0)})^2.
\end{align}
where $q_i, \sigma_i$, and $c_i^{(t, 0)}$ all depend on $s$.
By using Lemma 4 of \citep{pham2021combined} and taking a union bound with $\PP(s \notin \Scal) \le \delta$, with probability at least $1-\delta$, we have that $w_{t,T} \in \Fcal_{t}$ and the following holds:
\begin{align} \label{eq:2}
\EE_{x,y}[\ell_{ }(w_{t,T},x,y)]
\le \frac{1}{n}\sum_{i=1}^{n} \ell_{ }(w_{t,T},x_{i},y_{i})+2\Rcal_{n}(\Fcal_t)+M \sqrt{\frac{\ln(2/\delta)}{2n}}, \end{align}
where $\Rcal_{n}(\Fcal_t)=\EE_{s,\xi}[\sup_{w\in\Fcal_t}\frac{1}{n} \sum_{i=1}^n \xi_i \|W\varphi(x_{i})-y_{i}\|^{2}_2]$, $s=((x_i,y_i))_{i=1}^n$, $w=\vect[W\T]$, and $\xi_1,\dots,\xi_n$ are independent uniform random variables taking values in $\{-1,1\}$.
By using Corollary 4 of \citep{maurer2016vector},
there exists a constant $c$ (only depending on $M$) such that,
\begin{align*}
\Rcal_{n}(\Fcal_t) &\le \frac{c}{n}\EE_{s,\xi}[\sup_{w\in\Fcal_t} \sum_{i=1}^n \sum_{k=1}^p\xi_{ik} W_{k}\varphi(x_{i})]
\\ & = \frac{c}{n}\EE_{s,\xi}[\sup_{w\in\Fcal_t} \sum_{k=1}^p W_{k}\sum_{i=1}^n \xi_{ik} \varphi(x_{i})]
\\ & = \frac{c}{n}\EE_{s,\xi}[\sup_{w\in\Fcal_t} w\T h]
\end{align*}
where $W_{k}$ is the $k$-th row of $W$, $\xi_{ik}$ are independent uniform random variables taking values in $\{-1,1\}$, $h=\vect[H]\in \RR^{dp}$, and $H \in \RR^{d\times p}$ with $H_{jk}=\sum_{i=1}^n \xi_{ik} \varphi(x_{i})_{j}$. Thus,
\begin{align*}
\Rcal_{n}(\Fcal_t) \le \frac{c}{n}\EE_{s,\xi}[\sup_{w\in\Fcal_t} \|w\|_2 \|h\|_2] =\frac{c(\sup_{w\in\Fcal_t} \|w\|_2)}{n}\EE_{s,\xi}[ \|h\|_2]
\end{align*}
Here,
\begin{align}
\EE_{s,\xi}[ \|h\|_2]=\EE_{s,\xi} \sqrt{\sum_{j=1}^d \sum_{k=1}^p \left(\sum_{i=1}^n \xi_{ik} \varphi(x_{i})_{j}\right)^2} & \le \sqrt{\sum_{j=1}^d \sum_{k=1}^p \EE_{s,\xi}\left(\sum_{i=1}^n \xi_{ik} \varphi(x_{i})_{j}\right)^2} \nonumber
\\ & = \sqrt{ \sum_{j=1}^d \sum_{k=1}^p \EE_{s}\sum_{i=1}^n(\varphi(x_{i})_{j})^2} \label{eq:cross}
\\ & = \sqrt{ \sum_{k=1}^p \sum_{i=1}^n \EE_{s}\sum_{j=1}^d(\varphi(x_{i})_{j})^2} \nonumber
\\ & = \sqrt{ \sum_{k=1}^p \sum_{i=1}^n \EE_{s}\norm{\varphi(x_{i})}^2_2} \nonumber
\\ & \le R \sqrt{ pn } \nonumber
\end{align}
Equation~\ref{eq:cross} holds since
$$
\EE_{s, \xi}\left[\left(\xi_{ik}\phi(x_i)_j\right)\cdot \left(\xi_{lk}\phi(x_l)_j\right)\right]=\EE_s\left[\one\{i=l\}\phi(x_i)_j \phi(x_l)_j\right]
$$
for all $i,l\in [n]$.
Thus,
\begin{align} \label{eq:3}
\Rcal_{n}(\Fcal_t) \le\frac{cR\sqrt{p}(\sup_{w\in\Fcal_t} \|w\|_2)}{\sqrt{n}}.
\end{align}
Define
$$
\zeta_{t}(s) \coloneqq \sqrt{\sum_{i=1}^{{\bar n}}\left(q_i(1- e^{-\sigma_i^2T})\right)^2 + \sum_{i=1}^{{\bar n}}(c_i ^{(t, 0)})^2 e^{-2\sigma_i^2 T}+\sum_{i={\bar n}+1}^{{\bar d}}(c_{i}^{(t, 0)})^2}.
$$
where $q_i, \sigma_i$, and $c_i^{(t, 0)}$ all depend on $s$. With this, we define $$
\zeta(t) \coloneqq \sup_{s \in \Scal} \zeta_{t}(s).
$$
Then, by combining \eqref{eq:1}, \eqref{eq:2}, and \eqref{eq:3}, with probability at least $1-\delta$, the following holds:
$$
\EE_{x,y}[\ell_{ }(w_{t,T},x,y)]
\le \frac{1}{n}\sum_{i=1}^{n} \ell_{ }(w_{t,T},x_{i},y_{i})+\zeta(t) \sqrt{\frac{4c^2 R^2p}{n}}+M \sqrt{\frac{\ln(2/\delta)}{2n}}.
$$
Finally, from Lemma \ref{lemma:1}, for any $t\in \NN_0$ and $i \in\{1,\dots,{\bar n}\}$,
$$
(c_i ^{(t, 0)})^2 =\left(\frac{1}{\sigma_i} \left(\frac{1}{1+ (n\lambda/\sigma_i^2 )}\right)^t {\tilde y}_{i} \right)^2 .
$$
Since $\frac{1}{1+(n \lambda/\sigma_i^{2})}< 1$ (because $n \lambda/\sigma_i^{2}>0$), the value of $\left(\frac{1}{1+ (n\lambda/\sigma_i^2 )}\right)^{2t}$ strictly decreases as $t$ increases. Since $\frac{1}{\sigma_i^2}>0$ and ${\tilde y}_i^2\ge0$, this implies that $(c_i ^{(t, 0)})^2$ is strictly decreasing in $t \in \NN_0$ unless $c_i ^{(t, 0)}=0$. Moreover, from Lemma \ref{lemma:1}, we have that
$$
w_{t,0} = \sum_{i=1}^{\bar n} \alpha_{i,t} {\tilde y}_i u_i + \one\{t=0\}(I-{\tilde U}\bU\T) w_{0,0}.
$$
Since $\{u_1, \ldots, u_{\bar d}\}$ is an orthonormal basis for $\mathbb{R}^{\bar d}$ with inner product $\ip{x}{y} = y\T x$, we get
$$
w_{0,0} = \sum_{i=1}^{\bar d} (u_i\T w_{0,0})u_i.
$$
Since ${\tilde U}\bU\T w_{0,0} = \sum_{i=1}^{{\bar n}} (u_{i}\T w_{0,0}) u_i$, we have that
$$
(I-{\tilde U}\bU\T) w_{0,0}= \sum_{i=1}^{{\bar d}} (u_{i}\T w_{0,0}) u_i-\sum_{i=1}^{{\bar n}} (u_{i}\T w_{0,0}) u_i = \sum_{i={\bar n}+1}^{{\bar d}} (u_{i}\T w_{0,0}) u_i,
$$
which implies that the $u_i$ component to span $w_{t,0}$ for $i \in\{{\bar n}+1,\dots,{\bar d}\}$ is only present in $(I-{\tilde U}\bU\T) w_{0,0}$.
In other words,
$$
w_{t,0} = \sum_{i=1}^{\bar n} \alpha_{i,t} {\tilde y}_i u_i + \sum_{i={\bar n}+1}^{{\bar d}} \one\{t=0\}(u_{i}\T w_{0,0}) u_i.
$$
Thus, for any $t\in \NN_0$ and $i \in\{{\bar n}+1,\dots,{\bar d}\}$, we have that
$$
(c_i ^{(t, 0)})^2 =\one\{t=0\}(u_{i}\T w_{0,0})^2.
$$
These implies that $\zeta(t)$ is strictly decreasing in $t \in \NN_0$ unless $w_{0,0}=0$.
\end{proof}
\subsection{Proof of Theorem \ref{prop:2}}
\begin{proof}
In this proof, we continue to use the results and the notation from the proof of Theorem~\ref{thm:1}. By using \eqref{eq:4} in the proof of Theorem \ref{thm:1}, we have that
$$
\norm{{w_{\mathrm{init}}} - w_{t,T}}_2= \norm{{w_{\mathrm{init}}} -v_{t}}_2,
$$
where
$$
v_{t} = \sum_{i=1}^{{\bar n}}q_i(1- e^{-\sigma_i^2T})u_i+\sum_{i=1}^{{\bar n}}c_i ^{(t, 0)} e^{-\sigma_i^2T}u_i+\sum_{i={\bar n}+1}^{{\bar d}}c_{i}^{(t, 0)} u_i.
$$
If ${w_{\mathrm{init}}} = -\alpha v_{t}$ for some $\alpha>0$, then
\begin{align*}
\norm{{w_{\mathrm{init}}} -v _{t}}_2&=\norm{v _{t}+\alpha v_t}_2
\\ &=(1+\alpha)\norm{v_{t}}_2
\\ &=\norm{v_{t}}_2+\norm{\alpha v_t}_2
\\ &= \sqrt{\sum_{i=1}^{{\bar n}}q_i^{2}(1- e^{-\sigma_i^2T})^2+\sum_{i=1}^{{\bar n}}(c_i ^{(t, 0)} )^{2}e^{-2\sigma_i^2T}+\sum_{i={\bar n}+1}^{{\bar d}}(c_{i}^{(t, 0)} )^{2}}+\norm{{w_{\mathrm{init}}}}_2.
\end{align*}
On the other hand, for any ${w_{\mathrm{init}}} \in \RR^{dp}$,
\begin{align*}
\norm{{w_{\mathrm{init}}} - v_t}_2 &\le\norm{v_{t}}_2+\norm{{w_{\mathrm{init}}}}_2
\\ &\le \sqrt{\sum_{i=1}^{{\bar n}}q_i^{2}(1- e^{-\sigma_i^2T})^2+\sum_{i=1}^{{\bar n}}(c_i ^{(t, 0)} )^{2}e^{-2\sigma_i^2T}+\sum_{i={\bar n}+1}^{{\bar d}}(c_{i}^{(t, 0)} )^{2}}+\norm{{w_{\mathrm{init}}}}_2.
\end{align*}
Thus, setting $\psi (t)$ to be the following function satisfies conditions (1) and (2) in the statement:
$$
\psi (t)\coloneqq\sqrt{\sum_{i=1}^{{\bar n}}q_i^{2}(1- e^{-\sigma_i^2T})^2+\sum_{i=1}^{{\bar n}}(c_i ^{(t, 0)} )^{2}e^{-2\sigma_i^2T}+\sum_{i={\bar n}+1}^{{\bar d}}(c_{i}^{(t, 0)} )^{2}}+\norm{{w_{\mathrm{init}}}}_2
$$
Finally, from Lemma \ref{lemma:1}, for any $t\in \NN_0$ and $i \in\{1,\dots,{\bar n}\}$,
$$
(c_i ^{(t, 0)})^2 =\left(\frac{1}{\sigma_i} \left(\frac{1}{1+ (n\lambda/\sigma_i^2 )}\right)^t {\tilde y}_{i} \right)^2,
$$
which is strictly decreasing in $t \in \NN_0$ unless $c_i ^{(t, 0)}=0$, as shown in the proof of Theorem \ref{thm:1}. Moreover, from Lemma \ref{lemma:1}, for any $t\in \NN_0$ and $i \in\{{\bar n}+1,\dots,{\bar d}\}$,
$$
(c_i ^{(t, 0)})^2 =\one\{t=0\}(u_{i}\T w_{0,0})^2.
$$
That is,
$$
\psi (t)=\sqrt{G_1+\psi_1 (t) +\sum_{i={\bar n}+1}^{{\bar d}}\one\{t=0\}(u_{i}\T w_{0,0})^2} + G_2,
$$
where
$$
G_1\coloneqq\sum_{i=1}^{{\bar n}}q_i^{2}(1- e^{-\sigma_i^2T})^2,
$$
$$
\psi_1 (t) \coloneqq \sum_{i=1}^{{\bar n}}\left(\frac{1}{\sigma_i} \left(\frac{1}{1+ (n\lambda/\sigma_i^2 )}\right)^t {\tilde y}_{i} \right)^2e^{-2\sigma_i^2T},
$$
and
$$
G_2 \coloneqq\norm{{w_{\mathrm{init}}}}_2.
$$
Since $e^{-2\sigma_i^2T} > 0$ is constant in $t$, and since we have shown above that $\left(\frac{1}{\sigma_i} \left(\frac{1}{1+ (n\lambda/\sigma_i^2 )}\right)^t {\tilde y}_{i} \right)^2$ is strictly decreasing in $t\in\NN_0$ unless $c_i ^{(t, 0)}=0$, it follows that both $\psi_1 (t)$ and $\psi(t)$ are strictly decreasing in $t \in \NN_0$ unless $w_{0,0}=0$.
\end{proof}
\section{Masked Auto-Encoding}\label{appendix:mae}
In this section, we describe the masked auto-encoding objective from~\eqref{eq:mae} in more detail. Given a sequence ${\mathbf{x}}=(x_1, \ldots, x_K)$ of length $K$, we sample a mask ${\mathbf{z}}=(z_1, \ldots, z_K)$ from a Binomial distribution $p_{\gamma, K}$ with success probability $\gamma \in (0,1)$ and $K$ trials. For each $x_k$, we replace it with the special token ``mask" if $z_k=1$; otherwise, we keep $x_k$ unchanged in the masked input. Let $\hat{{\mathbf{x}}}=(\hat{x}_1, \ldots, \hat{x}_K)$ be the masked input and let $f_\theta, g_\phi$ be the encoder and decoder, respectively. We want to compute the log-likelihood of the reconstructed input $\sum_{k=1}^K z_k \log p_{\theta, \phi}(x_k | \hat{{\mathbf{x}}})$.
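As a concrete illustration, the masking step above can be sketched in a few lines of Python; the function name \texttt{mask\_sequence} and the literal ``mask" string are illustrative stand-ins rather than the exact implementation:
\begin{verbatim}
import random

def mask_sequence(x, gamma, mask_token="mask"):
    # Sample z_k ~ Bernoulli(gamma) independently for each position,
    # so that z = (z_1, ..., z_K) follows p_{gamma,K}.
    z = [1 if random.random() < gamma else 0 for _ in x]
    # Replace x_k with the special "mask" token where z_k = 1;
    # otherwise keep x_k unchanged.
    x_hat = [mask_token if z_k == 1 else x_k for x_k, z_k in zip(x, z)]
    return x_hat, z

# Example: x_hat, z = mask_sequence(["the", "cat", "sat"], gamma=0.15)
\end{verbatim}
The loss in~\eqref{eq:mae} is then evaluated only at positions with $z_k=1$.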
For language models, reconstructing the masked input $\hat{x}_k$ amounts to predicting which token was masked, out of a pre-defined vocabulary of size $V$ in which each token is represented as an integer from $\{1,\ldots, V\}$. Thus the conditional probability of $x_k \in \{1,\ldots, V\}$ given $\hat{{\mathbf{x}}}$ is parameterized as follows:
\begin{equation*}
\begin{gathered}
p_{\theta, \phi}(x_k | \hat{{\mathbf{x}}})= \frac{\exp(u_{x_k})}{\sum_{j=1}^V\exp(u_j)} \\
\text{where } (u_1, \ldots, u_V) = g_\phi({\mathbf{h}}_k) \in \mathbb{R}^V, \quad
\begin{bmatrix}
\vert & & \vert \\
{\mathbf{h}}_1 & \cdots & {\mathbf{h}}_K \\
\vert & & \vert
\end{bmatrix}=f_\theta (\hat{{\mathbf{x}}}) \in \mathbb{R}^{h\times K}.
\end{gathered}
\end{equation*}
For Vision Transformers, the sequence ${\mathbf{x}}$ consists of image patches, and the reconstruction of the masked input is to predict the pixel values of each masked patch, which is a regression problem. Thus, we parameterize the conditional probability of a patch $x_k = (x_{k,1}, \ldots, x_{k,m}) \in \mathbb{R}^m$ given $\hat{{\mathbf{x}}}$ as follows:
\begin{equation*}
\begin{gathered}
p_{\theta, \phi}(x_k | \hat{{\mathbf{x}}}) = \prod_{i=1}^m \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left( -\frac{(x_{k,i} - \mu_{k,i})^2}{2\sigma^2} \right) \\
\text{ where } \boldsymbol{\mu}_k = (\mu_{k,1}, \ldots, \mu_{k,m}) \in \mathbb{R}^m, \quad \begin{bmatrix}
\vert & &\vert \\
\boldsymbol{\mu}_1 & \cdots &\boldsymbol{\mu}_K \\
\vert & &\vert \\
\end{bmatrix} = f_\theta (\hat{{\mathbf{x}}}) \in \mathbb{R}^{m \times K}.
\end{gathered}
\end{equation*}
Since $\sigma >0$ and $\pi$ are constants with respect to $\theta$ and $\phi$,
\begin{align*}
\argmin_{\theta, \phi} -\sum_{k=1}^K\log p_{\theta, \phi}(x_k|\hat{{\mathbf{x}}})
&= \argmin_{\theta, \phi} \sum_{k=1}^K\left(\frac{1}{2\sigma^2}\sum_{j=1}^m (x_{k,j}- \mu_{k,j})^2 - m\log\left(\frac{1}{\sqrt{2\pi\sigma^2}}\right) \right) \\
&=\argmin_{\theta,\phi}\sum_{k=1}^K\left(\frac{1}{2\sigma^2}\sum_{j=1}^m (x_{k,j}- \mu_{k,j})^2 \right)\\
&=\argmin_{\theta,\phi}\sum_{k=1}^K\left(\sum_{j=1}^m (x_{k,j}- \mu_{k,j})^2 \right)\\
&=\argmin_{\theta, \phi} \sum_{k=1}^K\norm{x_k-\boldsymbol{\mu}_k}_2^2.
\end{align*}
\section{Dataset}
We describe statistics of all the image and text classification datasets used for our experiments in Table~\ref{tab:img-stat} and~\ref{tab:text-stat}.
\input{tables/imag_data.tex}
\input{tables/text_data.tex}
\label{appendix:dataset}
\clearpage
\section{Hyperparameters}
\label{appendix:hparams}
In Table~\ref{tab:hparams}, we summarize all the hyperparameters for Vision Transformer and RoBERTA.
\input{tables/hparams.tex}
\section{Experiment}
\label{sec:exp}
\paragraph{Dataset}
For image classification, we use six datasets --- FGVC Aircraft (Aircraft)~\citep{aircraft-dataset}, Caltech UCSD Birds 200 (CUB)~\citep{cub-dataset}, Chest X-ray~\citep{chest-xray-dataset}, Describable Textures Dataset (DTD)~\citep{dtd-dataset}, Stanford Dogs~\citep{stanford-dogs}, and Oxford 102 Flower~\citep{flower-dataset}. For text classification, we use four datasets --- Chemprot~\citep{chemprot}, ACL-ARC~\citep{acl-arc}, SCIERC~\citep{scierc}, and Twitter-Emotion~\citep{twitter-emotion}. Please see Appendix~\ref{appendix:dataset} for more detail.
\paragraph{Implementation Detail}
For the image classification problem, we use Vision Transformer pre-trained on unlabeled ImageNet dataset with masked auto-encoding~\citep{mae} and fine-tune it on the downstream task with AdamW optimizer~\citep{adamw} for 10,000 steps with batch size 32. Regarding further pre-training and self-distillation, we continue to pre-train the model for 20,000 steps with batch size 64. We evaluate the Vision Transformers with accuracy. For text classification, following the experimental setup from~\cite{dont-pt}, we use RoBERTA~\citep{roberta} as a backbone network and fine-tune it on the target labeled dataset with AdamW optimizer for 10 epochs with batch size 32. In terms of further pre-training and self-distillation, we further pre-train RoBERTA for 100 epochs with batch size 128. We evaluate the models with macro F1 for SCIERC, ACL-ARC, and Twitter-Emotion dataset, and micro F1 for Chemprot dataset.
\paragraph{Baselines}
We compare our method against the following baselines for fine-tuning pre-trained models. All the models are initialized with the pre-trained weights $\theta_\texttt{init}$ and $\phi_\texttt{init}$.
\vspace{-0.1in}
\begin{enumerate}[itemsep=1.0mm, parsep=0pt, leftmargin=*]
\item \textbf{Fine-tuning}: The model fine-tuned on target labeled dataset $\mathcal{D}^\texttt{tr}$ without any further pre-training or regularization except dropout and weight decay.
\item \textbf{RecAdam~\citep{recadam}}: The model trained with RecAdam optimizer which is a variant of Adam optimizer~\citep{adam} and additionally penalizes $L_2$ distance between the fine-tuned and the initial pre-trained weight.
\item \textbf{MARS~\citep{mars}}: The model trained to minimize the cross-entropy loss along with a regularization projecting the fine-tuned weight to lie within a sphere centered on the initial pre-trained weights. For each layer, the distance induced by the maximum absolute row sum (MARS) matrix norm $(\max_j \sum_{i}\lvert W_{j,i}-U_{j,i}\rvert)$ is used for the regularization (see the sketch after this list).
\item \textbf{R3F~\citep{r3f}}: The model trained to minimize the cross-entropy loss as well as symmetric KL-divergence between softmax output of the original input and that of the input perturbed by Gaussian noise.
\item \textbf{Further Pre-training~\citep{dont-pt}}: Task adaptive pre-training where we further pre-train the model on the unlabeled target dataset $\mathcal{D}^u$ with masked auto-encoding objective and fine-tune it on the target labeled dataset $\mathcal{D}^\texttt{tr}$.
\item \textbf{Self-Distillation}: This is our model which is further pre-trained on unlabeled target dataset $\mathcal{D}^u$ with~\eqref{eq:self-distill} and fine-tuned on the target labeled dataset $\mathcal{D}^\texttt{tr}$.
\end{enumerate}
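For concreteness, the MARS distance used in the baseline above can be computed as in the following sketch; this illustrates the distance only, not the per-update projection of the original method:
\begin{verbatim}
import numpy as np

def mars_distance(W, U):
    # Maximum absolute row sum of (W - U):
    #   max_j sum_i |W[j, i] - U[j, i]|
    D = np.abs(np.asarray(W) - np.asarray(U))
    return float(D.sum(axis=1).max())
\end{verbatim}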
\input{tables/image_exp}
\subsection{Main Results}
As shown in Table~\ref{tab:img_exp}, self-distillation consistently outperforms all the regularization methods and the further pre-training method on image datasets. Notably, our method significantly improves the performance on the Chest X-ray dataset, which consists of grey-scaled images for diagnosis of pneumonia. In addition, self-distillation effectively tackles the Flower dataset, which contains only 2,040 labeled examples. In contrast, the other baselines do not show consistent improvement across all the image datasets. For instance, further pre-training is effective for the Aircraft dataset but significantly degrades the test accuracy on the DTD dataset. Regularization methods such as RecAdam, MARS, and R3F barely improve generalization performance on most datasets or underperform the simple fine-tuning strategy on certain datasets. This empirical evidence supports the view that regularizations forcing the fine-tuned model to stay close to the initial pre-trained weights are not effective for adapting a pre-trained model to target datasets from specific domains.
\input{tables/text_exp}
Furthermore, as shown in Table~\ref{tab:text_exp}, we provide additional experimental results for text classification tasks. Again, self-distillation significantly outperforms all of the baselines across all four datasets, except RecAdam on the Chemprot dataset. In contrast to the previous experiment, the further pre-training method improves the test F1 score over the simple fine-tuning method, yet it still underperforms our model. The regularization methods --- RecAdam, MARS, and R3F --- do not achieve consistent improvement across the datasets. RecAdam moderately improves the F1 score on the SCIERC and Chemprot datasets but significantly degrades the generalization performance on the ACL-ARC dataset. Both MARS and R3F show poor performance on the SCIERC and ACL-ARC datasets, and their performance is slightly worse than the fine-tuning method on the Chemprot dataset.
\paragraph{Result for Low Resource Data}
We further perform experiments to show how effectively self-distillation handles scarce labeled data. Given the full CIFAR-100 dataset~\citep{cifar}, which contains 50,000 training pairs of an image and its corresponding label, we plot the test accuracy of each model while varying the number of training instances. Note that we also reduce the number of unlabeled images used for further pre-training or self-distillation accordingly. As shown in Figure~\ref{fig:low-resource}, self-distillation consistently improves the generalization performance of both the fine-tuning method and the model further pre-trained on the images from the CIFAR-100 dataset. Notably, the gain from self-distillation becomes larger when the models are trained with an extremely small number of instances. For example, self-distillation achieves $13\%$ and $6\%$ improvements in test accuracy over simple fine-tuning when there are 1,000 and 2,500 labeled examples, respectively. These empirical results verify that self-distillation can effectively adapt the pre-trained model to the target dataset even with extremely small amounts of labeled data.
\paragraph{Ablation Study}
We perform an ablation study to verify the effectiveness of each component of self-distillation. In Table~\ref{tab:ablation}, we show empirical results on both the CUB dataset and the SCIERC dataset while removing or replacing various components of self-distillation. Firstly, we remove the masked auto-encoding objective $\mathcal{L}_\texttt{MAE}$ and train the model with only the distillation loss $\mathcal{L}_\texttt{Distill}$ before fine-tuning. On the image dataset CUB, this does not make a significant difference; however, removing the masked auto-encoding objective degrades the generalization performance of the language model on the text classification dataset SCIERC. Alternatively, we remove the distillation loss $\mathcal{L}_\texttt{Distill}$ in~\eqref{eq:self-distill}, which recovers the further pre-training method. Furthermore, we continue to pre-train the model for twice as many steps as the original further pre-training method, denoted Further Pre-train$\times2$, to show that the higher test accuracy of self-distillation is not a consequence of longer pre-training. Both of these models significantly underperform self-distillation, which shows the effectiveness of the self-distillation loss. Lastly, we perform experiments with variants of the distillation loss $\mathcal{L}_\texttt{Distill}$ in~\eqref{eq:self-distill}.
Instead of matching the representations of teacher and student, we enforce the reconstructions of masked inputs by teacher and student to be consistent, i.e., $\minimize_{\theta, \phi }\norm{g_{\phi}\circ f_{\theta}(\hat{{\mathbf{x}}}) - g_{\phi_0} \circ f_{\theta_0}(\hat{{\mathbf{x}}})}^2_2$ for ViT or $ \minimize_{\theta,\phi} \sum_{k=1}^K D_{\mathrm{KL}}\left(p_{\theta_0,\phi_0}(x_k|\hat{{\mathbf{x}}})\parallel p_{\theta,\phi}(x_k|\hat{{\mathbf{x}}}) \right)$ for RoBERTA, denoted as Prediction-Matching.
Furthermore, we replace the distillation loss with one minimizing the $L_2$ or MARS distance between the parameters of student and teacher, denoted Weight-Matching. As shown in Table~\ref{tab:ablation}, none of these variants is as effective as minimizing the distance between the hidden representations of the student and teacher.
\input{figures/exp_allinone}
\paragraph{Multi-Round of Self-Distillation} Lastly, we empirically show that the first round of self-distillation plays the most significant role in improving generalization performance. Specifically, we fine-tune each model after $t$ round of self-distillation and plot the test accuracy on Oxford 102 Flower dataset, where $0$ round of self-distillation $(t=0)$ denotes the model with further pre-training. As shown in Figure~\ref{fig:multi-round-acc}, the first round of self-distillation significantly improves the test accuracy of the model with further pre-training and the gain by self-distillation becomes marginal after the first round. Considering the extra computational cost and marginal improvement of multi-round self-distillation, we perform a single round of self-distillation for all the experiments.
\subsection{Further Analysis}
In this subsection, we present numerical experiments to analyze why self-distillation can potentially help improve the generalization performance of downstream tasks compared to further pre-training, and we empirically show that Theorems~\ref{thm:1} and~\ref{prop:2} can be extended to deep neural networks --- transformers.
\input{figures/analysis.tex}
\textbf{(a) Generalization gap:} In Figure~\ref{fig:gap}, we plot the generalization gap, which is test loss minus training loss on each labeled dataset, of self-distillation and further pre-training method. Self-distillation improves the generalization gap of the further pre-training method across all the datasets. It is consistent with Theorem~\ref{thm:1} showing that self-distillation with a simplified model strictly decreases the generalization bound on the supervised loss of the fine-tuning stage.
\textbf{(b) Effect of self-distillation on distance:} To empirically validate Theorem~\ref{prop:2} about regularization effects by self-distillation on $L_2$ distance between the initial pre-trained weight $\theta_\texttt{init}$ and the final weight after fine-tuning, we plot the distance obtained from self-distillation and further pre-training. Specifically, we compare the distance $\norm{\theta_\texttt{init}- \theta_{1, T}}_2$ and $\norm{\theta_\texttt{init}-\theta_{0,T}}_2$, where $\theta_{t,\tau}$ is the parameter after $t$ round of self-distillation and $\tau$ steps of gradient descent for fine-tuning. As shown in Figure~\ref{fig:distance}, self-distillation consistently decreases the distance and the reduced distance correlates with the better generalization gap in Figure~\ref{fig:gap}. These empirical results also confirm the connection between the $L_2$ distance from the initialization and generalization bound~\citep{nagarajan2019generalization}.
\textbf{(c) Effect of multi-round self-distillation:} Lastly, we empirically verify the part of Theorem~\ref{prop:2} showing that the first round of self-distillation plays the most critical role in regularizing the $L_2$ distance between the initial pre-trained weight $\theta_\texttt{init}$ and the final weight $\theta_{t,T}$, the parameter after $t$ rounds of self-distillation and $T$ steps of gradient descent for fine-tuning, on the VGG flower 102 dataset. As shown in Figure~\ref{fig:round-distance}, self-distillation significantly decreases the distance at the first round $(t=1)$ and the regularization effect on the distance diminishes afterward, where $0$ rounds of self-distillation $(t=0)$ denotes the model with further pre-training but without self-distillation.
\section{Theoretical Analysis}
In this section, we analyze how self-distillation affects the final model after fine-tuning in terms of generalization and regularization. This section proves a generalization bound on the supervised loss for our method and shows that the generalization bound strictly decreases as the number of self-distillation increases. Moreover, we show that self-distillation acts as a regularizer on the distance between the initial weight before further pre-training and the final weight after fine-tuning. The regularization effect is shown to have the largest impact in the first round of self-distillation, which suggests that the first round of self-distillation plays a more significant role in the final performance when compared to the other rounds.
We consider the dynamics of the weight vector $w_{t,\tau}$ over time $\tau$ of fine-tuning after $t$ rounds of self-distillation, where $w_{0,0}$ is the result of further pre-training,
and $w_{t,0} \in \mini_{w} L_{t}(w)$ is the result of the self-distillation of $t$ rounds with $L_{t}(w) = \frac{1}{n}\sum_{i=1}^n \|f(x_{i},w)-f(x_i,w_{t-1,0})\|^{2}_2+\lambda \|w\|^{2}_2$ for some $\lambda>0$. After $t$ rounds of self-distillation, we consider the dynamics over fine-tuning time $\tau$ via gradient flow \citep{saxe2013exact,kawaguchi2021theory}:
$
\frac{d w_{t,\tau}}{d \tau} = - \nabla \Lcal(w_{t,\tau}),
$
with the initialization $w_{t,0}$ obtained by the self-distillation, where $\Lcal(w)=\frac{1}{2}\sum_{i=1}^n \ell(w,x_{i},y_{i})$ with $\ell(w,x,y)=\|f(x,w)-y\|^{2}_2$ and $y \in \RR^p$. Here, self-distillation and fine-tuning share the same training dataset $s=\{(x_i,y_i)\}_{i=1}^n$. In this section, to obtain theoretical insights, we consider the regime of $d> n$ and a simple abstract model, $f(x,w)= W\varphi(x) \in \RR^p$, with some nonlinear map $\varphi$ and the weight matrix $W\in \RR^{p \times d}$, where $w=\vect[W\T] \in \RR^{dp}$ and $\varphi(x) \in \RR^{d}$. Here, $\vect[W\T]$ is a vectorization of the matrix $W\T$. Let us fix the fine-tuning time length $T$ as $1<\tau \le T<\infty$. Since $d>n$, there are infinitely many solutions to the problem of minimizing $\Lcal(w)$. Thus, both the finite length $T$ and the over-parameterization $d>n$ imply that the initialization $w_{t,0}$ of the fine-tuning phase, obtained via self-distillation, plays an important role.
Let $\delta > 0$ and $t\in \NN_0$. We then define $
\Fcal_{t}= \{\Acal_{t}(s) :s \in \Scal \},
$
where $\Scal$ is a set of all training datasets of size $n$ such that with probability at least $1-\delta$, the training dataset $s$ is in $\Scal$. For each training dataset $s \in \Scal$, $\Acal_{t}(s)=w_{t,T}$ is the final weight vector of the model after $t$ rounds of self-distillation and $T$ time of fine-tuning.
Let us define the matrix $\Phi \in \RR^{d \times n}$ by $\Phi_{ij}=\varphi(x_{j})_{i}$. We assume that $\Phi$ is of full rank, \ie, $\rank(\Phi)=n$ since $d \ge n$. This is typically satisfied, because $\rank(\Phi)<n$ would mean that there is some redundancy among the columns of $\Phi$, \ie, among the feature vectors of the training inputs.
Denote by $[I_p \otimes \Phi] \in \RR^{dp \times np}$ the Kronecker product of the identity matrix $I_p \in \RR^{p \times p}$ and the matrix $\Phi$.
We write its singular value decomposition by $[I_p \otimes \Phi]=U \Sigma V\T$ where $U=[u_1,u_{2}\dots,u_{dp}] \in \RR^{dp\times dp}$ contains the left-singular vectors $u_i \in \RR^{dp}$ for $i \in \{1,\dots, dp\}$ and $\Sigma \in \RR^{dp \times np}$ is a rectangular diagonal matrix with $\Sigma_{ii}=\sigma_i \in \RR_{\geq 0}$ for $i \in \{1,\dots, np\}$ and $\sigma_1 \ge \sigma_2\ge \dots \ge \sigma_{np}\geq 0$. Define $M$ to be an upper bound on the loss as $\ell_{}(w,x,y) \le M$. Define $R$ to be an upper bound on the expected norm of the features as $\EE_{x}\|\varphi(x)\|_2 \leq R$.
We assume that $w_{0,0} \neq 0$; if $w_{0,0}=0$, then the target function in the self-distillation phase is always zero as $f(x_i,w_{0,0})=0$ for all $i$, which is unlikely the case in practice. We define ${w_{\mathrm{init}}}\in \RR^{dp}$ to be the weight before further pre-training and define $Y = \vect[[y_1 ,\dots, y_n]\T] \in \RR^{np}$.
The following theorem shows that the generalization bound on the supervised loss $\ell_{ }(w_{t,T},x,y)$ of the fine-tuning phase strictly decreases as we increase the number $t$ of self-distillation rounds in the further pre-training phase:
\begin{theorem} \label{thm:1}
There exists a constant $c$ (that only depends on $M$) such that with probability at least $1-\delta$, the following holds:
\begin{align} \label{eq:5}
\EE_{x,y}[\ell(w_{t,T},x,y)]
\le \frac{1}{n}\sum_{i=1}^{n} \ell(w_{t,T},x_{i},y_{i})+\zeta(t) \sqrt{\frac{4c^2 R^2p}{n}}+M \sqrt{\frac{\ln(2/\delta)}{2n}},
\end{align}
where the function $\zeta(t)$ is strictly decreasing in $t \in \NN_0$.
\end{theorem}
The proofs of all results in this section are presented in Appendix \ref{app:1}.
Moreover, the following theorem shows that the tight upper bound on the distance between the initial weight ${w_{\mathrm{init}}}$ and the final weight $w_{t,T}$ after $T$ steps of fine-tuning (i.e., $\|{w_{\mathrm{init}}} - w_{t,T}\|_2$) strictly decreases as the number $t$ of self-distillation rounds increases:
\begin{theorem} \label{prop:2}
There is a function $\psi:\NN_0 \to \mathbb{R}_{\geq 0}$ such that (1) $\|{w_{\mathrm{init}}} - w_{t,T}\|_2 = \psi(t)$ for some ${w_{\mathrm{init}}}\in \RR^{dp}$, (2) $\|{w_{\mathrm{init}}} - w_{t,T}\|_2 \le \psi(t)$ for all ${w_{\mathrm{init}}} \in \RR^{dp}$, (3) the function $\psi(t)$ is strictly decreasing in $t \in \NN_0$,
(4) the function $\psi(t)$ can be decomposed to $\psi(t)= \sqrt{G_{1}+\psi_1(t)+\one\{t=0\}\Bcal}+G_2$ with constants $G_{1},G_2\ge 0$ in $t$ where $\psi_1(t)$ is strictly decreasing in $t\in\NN_0$ and $\Bcal=\sum_{i=np+1}^{dp}(u_{i}\T w_{0,0})^2 \ge 0$.
\end{theorem}
Theorem~\ref{prop:2} shows that self-distillation acts as a regularizer on the distance between the initial weight ${w_{\mathrm{init}}}$ and the final weight $w_{t,T}$. Since the Rademacher complexity of a set of vectors is invariant to a shift by a constant vector, this distance has been shown to control generalization bounds for various models and settings, including deep neural networks \citep{bartlett2002rademacher,bartlett2017spectrally,nagarajan2019generalization}. This suggests that self-distillation helps generalization via a regularization effect on the distance. Moreover, the first round of self-distillation is expected to have the largest impact based on Theorem \ref{prop:2}, since the theorem shows that the unnecessary component $\Bcal$ of $w_{0,0}$ is completely removed in the first round of self-distillation. We have verified these theoretical predictions in the experiments, where we show the correlation between the improvement from self-distillation and the distance appearing in the generalization bound of previous work~\citep{nagarajan2019generalization}.
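The monotone behavior predicted by Theorems~\ref{thm:1} and~\ref{prop:2} can be checked numerically in the simplified linear model. The following Python sketch uses hypothetical dimensions and a random feature matrix standing in for $\Phi$; it computes $w_{t,0}$ via the closed-form ridge solution of $L_t$ and approximates the fine-tuning gradient flow by Euler integration. In a typical run, the distance $\|{w_{\mathrm{init}}} - w_{t,T}\|_2$ drops sharply at $t=1$, when the component $\Bcal$ of $w_{0,0}$ outside the range of $\Phi$ is removed, and decreases only slightly afterwards:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, d, p, lam = 20, 50, 3, 0.1      # over-parameterized: d > n
Phi = rng.normal(size=(d, n))      # Phi[i, j] = phi(x_j)_i
Y = rng.normal(size=(p, n))        # labels, one column per example
W_init = rng.normal(size=(p, d))   # weight before further pre-training
W = rng.normal(size=(p, d))        # W_{0,0}: toy further pre-training result

def self_distill(W_prev):
    # Closed-form minimizer of L_t: ridge regression onto the
    # teacher outputs W_prev @ Phi.
    A = Phi @ Phi.T + n * lam * np.eye(d)
    return (W_prev @ Phi) @ Phi.T @ np.linalg.inv(A)

def fine_tune(W0, T=1.0, eta=1e-3):
    # Euler integration of the gradient flow dW/dtau = -(W Phi - Y) Phi^T.
    W = W0.copy()
    for _ in range(int(T / eta)):
        W = W - eta * (W @ Phi - Y) @ Phi.T
    return W

for t in range(4):
    dist = np.linalg.norm(W_init - fine_tune(W))
    print(f"t={t}: ||w_init - w_(t,T)||_2 = {dist:.4f}")
    W = self_distill(W)
\end{verbatim}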
\section{Introduction}
\vspace{-0.12in}
Pre-trained transformer models~\citep{bert, gpt-3, roberta, mae} have been effective on various vision and natural language processing tasks. The pre-trained models learn general representations from a large volume of unlabeled data so that they generalize well to various downstream tasks when they are fine-tuned on each task with a labeled dataset. However, many real-world applications require considerable effort to adapt the pre-trained model to a specific downstream task domain, since there is a significant distributional discrepancy between the data for the pre-training and fine-tuning stages. Moreover, it is difficult to collect a large amount of labeled data for such specific domains, which renders adaptation of the pre-trained model to downstream tasks even more challenging.
\input{figures/overfitting.tex}
Several works have proposed to tackle the problem of adapting pre-trained models to a specific domain. A prevalent approach for adaptation of the pre-trained model is \textit{further pre-training} where we continue to update the parameters of the pre-trained model on additionally curated domain-specific unlabeled data with self-supervision~\citep{scibert, biobert}, before fine-tuning it on the target labeled data as depicted in Figure~\ref{fig:concept}\textcolor{Red}{b}. \cite{dont-pt} also show that further pre-training only with the target unlabeled data is still effective without any extra data. However, most of the existing further pre-training approaches have focused on language models, and we find that the further pre-training strategy is not effective for Vision Transformer (ViT)~\citep{vit}. As shown in Figure~\ref{fig:overfitting},
ViT is vulnerable to overfitting and does not generalize well to downstream tasks when we continue to pre-train it on the target unlabeled data.
Several regularization methods~\citep{recadam, mars, r3f} have been proposed to tackle the overfitting issue of large pre-trained models; however, they do not consider the adaptation process, such as further pre-training. Instead, they enforce a small distance between the final fine-tuned weight and the pre-trained weight to promote the transfer of the knowledge acquired from pre-training to downstream tasks for better generalization. However, these regularizations hinder the adaptation of pre-trained models to downstream tasks, especially when there is a significant distributional shift between the pre-training data and the target data. This eventually results in worse generalization than the simple fine-tuning strategy.
\input{figures/concept}
To tackle these limitations, we propose \emph{self-distillation} as a regularization for further pre-training on a target unlabeled dataset, so that we can effectively adapt pre-trained models to downstream tasks of various domains with a limited amount of labeled data. For self-supervision, we focus on masked auto-encoding as the pre-training objective, since it does not depend on any data augmentations, in contrast to other self-supervised learning methods~\citep{simclr,moco, byol, barlow-twins, simsiam} that require data augmentations to construct positive pairs for objectives such as contrastive learning. This is especially useful when it is hard to define meaningful data augmentations for a target domain.
Specifically, we take the pre-trained model with an encoder $f_{\theta_\texttt{init}}$ and a decoder $g_{\phi_\texttt{init}}$ which are pre-trained on a massive amount of unlabeled data from general domain, and continue to pre-train it with masked auto-encoding (MAE)~\citep{bert,mae} objective on the target unlabeled data to obtain $f_{\theta_0} \text{ and } g_{\phi_0}$. After that, we set the encoder $f_{\theta_0}$ as a teacher for self-distillation. Then we take the copy of the pre-trained model $(f_{\theta_\texttt{init}}, g_{\phi_\texttt{init}})$ as a student, and match the representations of the student encoder and those of the teacher encoder while optimizing the student with the MAE on the target unlabeled data. Finally, we fine-tune the self-distilled student $f_{\theta_1}$ on the target labeled data for the downstream task. We illustrate the overview of our method in Figure~\ref{fig:concept}\textcolor{Red}{c}.
To verify the efficacy of our method, we empirically show that it significantly improves the generalization performance of a pre-trained Vision Transformer and language model RoBERTA~\citep{roberta}, and outperforms the relevant baselines on various image and text classification datasets. Moreover, we theoretically analyze the proposed method with a simplified model to understand how self-distillation for further pre-training can potentially help improve the generalization performance on the target tasks after fine-tuning.
Our contribution is threefold:
\vspace{-0.15in}
\begin{itemize}
\item We propose self-distillation for further pre-training on the target unlabeled dataset, where we enforce representations of the student to be close to those of the further pre-trained teacher while training the student with the masked auto-encoding objective.
\item We theoretically analyze the proposed method with a simplified model to understand how self-distillation for further pre-training can potentially lead to better generalization performance of downstream tasks.
\item We extensively validate our method on various image and text classification datasets with pre-trained transformers and show that ours outperforms the relevant baselines.
\end{itemize}
\section{Conclusion}
To effectively adapt pre-trained transformers to a target domain, we proposed self-distillation as a regularization for further pre-training. Specifically, we first took the initial pre-trained transformer and continued to pre-train it with the masked auto-encoding objective on the target unlabeled dataset and considered the encoder part of the model as a teacher for self-distillation. Then we took the copy of the same initial pre-trained model as a student and enforced representations of the student to be close to those of the teacher while optimizing the student with the masked auto-encoding objective on the target unlabeled dataset. Finally, we fine-tuned the self-distilled student on the target labeled dataset. We empirically verified our method on various image and text classification benchmark datasets, showing that self-distillation consistently improved the generalization performance compared to the relevant baselines we considered. Lastly, we provided the theoretical analysis of the proposed method with a simplified model to understand how self-distillation for further pre-training can potentially help improve the generalization performance of the downstream tasks.
\section{Related Work}
\vspace{-0.1in}
\paragraph{Self-Distillation}
Knowledge distillation transfers the knowledge of a teacher to a student by minimizing a divergence between the outputs of the teacher and the student~\citep{kd}. When the parameterization of the student and the teacher is identical, we call it \emph{self-distillation}, a special case of knowledge distillation. Although no new information is introduced during the self-distillation process,~\citet{born-again} have shown that the student obtained by self-distillation achieves better generalization performance than the teacher.
A similar phenomenon has been consistently observed in other works~\citep{self-distill-2, self-distill-3}.~\cite{self-distill-analysis} theoretically analyze how self-distillation induces regularization and reduces overfitting in Hilbert space. However, all of them focus on self-distillation for supervised learning. Instead, we empirically and theoretically show that self-distillation for further pre-training with self-supervision leads to better generalization of downstream tasks after fine-tuning the self-distilled model with target labeled data.
\vspace{-0.1in}
\paragraph{Further Pre-training}
\citet{biobert, scibert, ernie} have shown the success of continually pre-training a language model on large corpora collected from the target domain and fine-tuning the model on the target labeled dataset. However, it is computationally expensive to further pre-train the model on a large amount of unlabeled text data, and it may not be feasible to collect such large-scale unlabeled data in certain domains. Instead,~\citet{dont-pt} devise task-adaptive pre-training, where only target unlabeled data are used to further pre-train the language model before fine-tuning it on the target labeled data. To improve the effectiveness of further pre-training, \cite{nmg, maml-pt} propose learning to mask the input for masked auto-encoding with bilevel optimization, which requires a prohibitive computational cost. However, all of these works focus solely on pre-trained language models, and we empirically find that naive further pre-training is not effective for Vision Transformers.
\vspace{-0.1in}
\paragraph{Regularization for Fine-tuning}
There are several works proposing regularization for fine-tuning a pre-trained model. \cite{recadam} propose to modify the Adam~\citep{adam} optimizer, called RecAdam, which enforces the fine-tuned model to stay close to the initial pre-trained model by minimizing the $L_2$ distance between the fine-tuned and initial pre-trained weights. Similarly, \cite{mars} project the fine-tuned weight at every gradient descent update such that it lies within a sphere centered on the initial pre-trained weights, with the distance induced by the norm of maximum absolute row sums (MARS). Instead of explicitly minimizing the distance, motivated by trust region theory, \cite{r3f} propose to minimize the symmetric KL-divergence between the model output of an original input and that of the input perturbed by Gaussian noise. However, none of them considers adaptation of pre-trained models to a specific target domain, which results in worse generalization performance on downstream tasks than a simple fine-tuning strategy.
\section{Method}
\subsection{Preliminaries}
\paragraph{Problem Statement}
We assume that we are given parameters $(\theta_\texttt{init}, \phi_\texttt{init})$ of the neural network $g_{\phi_\texttt{init}}\circ f_{\theta_\texttt{init}}$, which is pre-trained on a large volume of unlabeled data with the masked auto-encoding objective, where $f_{\theta_\texttt{init}}$ is an encoder extracting a hidden representation of an input and $g_{\phi_\texttt{init}}$ is a decoder reconstructing a masked input. Our goal is to fine-tune the pre-trained model $f_{\theta_\texttt{init}}$ with a randomly initialized task-specific head $h_{\omega}$ on a labeled dataset $\mathcal{D}^\texttt{tr}=\{({\mathbf{x}}^{(i)}, y^{(i)})\}_{i=1}^n$ of a downstream classification task such that the model generalizes well to an unseen test dataset $\mathcal{D}^\texttt{test}$. A typical approach to achieve this goal is empirical risk minimization as follows:
\begin{align} \label{eq:fine-tune}
\begin{split}
& \underset{\theta, \omega}{\text{minimize}}~\mathcal{L}_{\texttt{CE}}(\theta, \omega; \mathcal{D}^{\texttt{tr}}) \ \text{ via algorithm $\Acal$ as }
\\ & (\theta^{*}, \omega^{*}) = \mathcal{A}(\mathcal{L}_\texttt{CE} ; \theta_\texttt{init},\mathcal{D}^\texttt{tr}),
\end{split}
\end{align}
where $\mathcal{L}_\texttt{CE}$ is a cross-entropy loss and $\mathcal{A}$ denotes a stochastic gradient descent algorithm to minimize $\mathcal{L}_\texttt{CE}$ on the dataset $\mathcal{D}^\texttt{tr}$ with the initialization $\theta_\texttt{init}$.
\paragraph{Further Pre-training}
However, the pre-trained model is prone to overfitting when it is fine-tuned on a small amount of domain-specific labeled data. \cite{dont-pt} have shown that further pre-training, where we continue to pre-train the model $g_{\phi_\texttt{init}}\circ f_{\theta_\texttt{init}}$ on the target unlabeled dataset $\mathcal{D}^u=\{{\mathbf{x}}^{(i)}\}_{i=1}^n$ and then fine-tune it on $\mathcal{D}^\texttt{tr}$, is effective for improving generalization performance when there is not enough domain-specific labeled data. Note that $\mathcal{D}^u$ is exactly the same as $\mathcal{D}^{\texttt{tr}}$ except that we remove the labels $y^{(i)}$. In this work, we focus on masked auto-encoding~\citep{bert, mae} as the pre-training objective because of its generality compared to other self-supervised methods~\citep{simclr, moco, byol, simsiam}, which require well-defined data augmentations to construct positive pairs for self-supervised learning.
\paragraph{Masked Auto-Encoding} We briefly describe the masked auto-encoding objective~\citep{roberta, mae} for a language model such as RoBERTA~\citep{roberta} and for Vision Transformer (ViT)~\citep{vit}. Let ${\mathbf{x}}^{(i)}=(x^{(i)}_1, \ldots, x^{(i)}_K)$ be a sequence of patches for an image or tokens for a sentence, with length $K$. We independently sample a binary mask from a Bernoulli distribution with probability $\gamma$ for each $x^{(i)}_k$, denoted ${\mathbf{z}}^{(i)}=(z^{(i)}_1, \ldots, z^{(i)}_K)$. If $z^{(i)}_k=1$, then $x^{(i)}_k$ is replaced with a special ``mask" token. Otherwise, we use the same $x^{(i)}_k$ for the masked input. Let $\hat{{\mathbf{x}}}^{(i)}=(\hat{x}^{(i)}_1, \ldots, \hat{x}^{(i)}_K)$ be a masked input and let $f_{\theta}, g_{\phi}$ be the encoder and decoder, respectively. Then the final objective for masked auto-encoding is defined as follows:
\begin{equation}
\mathcal{L}_{\texttt{MAE}}(\theta, \phi; \mathcal{D}^u)= \frac{1}{n}\sum_{i=1}^n\mathbb{E}_{{\mathbf{z}}^{(i)}\sim p_{\gamma, K}({\mathbf{z}})}\left[-\sum_{k=1}^K \frac{z^{(i)}_k}{Z^{(i)}}\cdot\log p_{\theta, \phi}(x_k^{(i)}|\hat{{\mathbf{x}}}^{(i)})\right], \: Z^{(i)} = \sum_{k=1}^K z^{(i)}_k,
\label{eq:mae}
\end{equation}
where $p_{\gamma,K}({\mathbf{z}})$ denotes a Binomial distribution with success probability $\gamma$ (\ie, the probability that $z_k=1$) and $K$ trials. Note that the negative log-likelihood is instantiated as the cross-entropy loss for language models or the mean squared error for Vision Transformers. See Appendix~\ref{appendix:mae} for more detail.
\input{algorithm/distillation.tex}
\subsection{Self-Distillation for Further Pre-training}
Although further pre-training strategy has been effective on text domain~\citep{dont-pt, biobert, ernie}, we empirically find that ViT with further pre-training overfits the target unlabeled data and does not generalize well to downstream image classification tasks.
In order to tackle the issue, we propose self-distillation as a regularization for further pre-training. Specifically, given a pre-trained model $g_{\phi_\texttt{init}}\circ f_{\theta_\texttt{init}}$, we first continue to train the model on the target unlabeled data $\mathcal{D}^u$ with the masked auto-encoding objective as described in equation~\ref{eq:mae} to obtain the encoder $f_{\theta_0}$ and decoder $g_{\phi_0}$. We discard the decoder and consider the encoder $f_{\theta_0}$ as a teacher for self-distillation. Then we take the copy of the pre-trained initial network $g_{\phi_\texttt{init}}\circ f_{\theta_\texttt{init}}$ as a student and further pre-train the student with masked auto-encoding objective but enforce hidden representation of the encoder of the student $f_{\theta_{\texttt{init}}}$ to be close to that of the teacher $f_{\theta_0}$ as follows:
\begin{equation}
\begin{gathered}
(\theta_1, \phi_1) \in \argmin_{\theta, \phi} \left(\mathcal{L}_\texttt{MAE}(\theta, \phi;\mathcal{D}^u) + \mathcal{L}_\texttt{Distill} (\theta; \theta_0, \mathcal{D}^u )\right)\\
\mathcal{L}_\texttt{Distill}\left(\theta; \theta_0, \mathcal{D}^u\right) = \frac{1}{n}\sum_{i=1}^n \norm{f_{\theta}({\mathbf{x}}^{(i)})-\texttt{StopGrad}\left(f_{\theta_0}({\mathbf{x}}^{(i)})\right)}_2^2
\end{gathered}
\label{eq:self-distill}
\end{equation}
where $\theta$ and $\phi$ are initialized with the pre-trained parameters $\theta_\texttt{init}$ and $\phi_\texttt{init}$, respectively, and $\texttt{StopGrad}$ denotes the stop-gradient operation, which treats its argument as a constant and does not back-propagate through it. As described in Algorithm~\ref{algo:self-distillation}, we can repeat this process to perform multiple rounds of self-distillation $(T^\prime >1)$, where the student of the previous round becomes the teacher and a new student is initialized with the pre-trained weights $\theta_\texttt{init}$ and $\phi_\texttt{init}$ for the next round. We empirically find that the first round of self-distillation plays the most significant role in improving the final generalization performance on downstream tasks. Further, our theoretical analysis shows that the first round of self-distillation has the largest regularization impact. Thus, we perform a single round of self-distillation for computational efficiency. After self-distillation, we discard the decoder $g_{\phi_1}$ and fine-tune the encoder of the student $f_{\theta_1}$ along with a randomly initialized task-specific head $h_\omega$ by minimizing $\mathcal{L}_\texttt{CE}(\theta,\omega; \mathcal{D}^\texttt{tr})$ with the initialization $\theta_1$, as described in equation~\ref{eq:fine-tune}.
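A minimal PyTorch-style sketch of the resulting loss is given below, assuming hypothetical \texttt{student\_enc}, \texttt{student\_dec}, and \texttt{teacher\_enc} modules and an \texttt{mae\_loss} function implementing~\eqref{eq:mae}; the $\texttt{StopGrad}$ of equation~\ref{eq:self-distill} is realized with \texttt{torch.no\_grad()}:
\begin{verbatim}
import torch

def self_distillation_loss(student_enc, student_dec, teacher_enc,
                           x, x_masked, mae_loss):
    # L_MAE: the student reconstructs the masked input.
    loss_mae = mae_loss(student_dec(student_enc(x_masked)), x)
    # L_Distill: match student and teacher representations on the
    # input; the teacher output is a constant (StopGrad), so
    # gradients flow only through the student.
    with torch.no_grad():
        h_teacher = teacher_enc(x)
    h_student = student_enc(x)
    loss_distill = ((h_student - h_teacher) ** 2).sum(dim=-1).mean()
    return loss_mae + loss_distill
\end{verbatim}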
\section{Introduction}\label{chap:intro}
\label{sec:intro}
The Compact Muon Solenoid (CMS)~\cite{Chatrchyan:2008zzk} is a
multipurpose detector designed for the precision measurement of
leptons, photons, and jets, among other physics objects, in
proton-proton as well as heavy ion collisions at the CERN
LHC~\cite{LHC}. The LHC is designed to collide protons at a
center-of-mass energy of 14\TeV and a luminosity of $10^{34}\percms$.
At design luminosity, the pp interaction rate exceeds
1\unit{GHz}. Only a small fraction of these collisions contain events
of interest to the CMS physics program, and only a small fraction of
those can be stored for later offline analysis. It is the job of the
trigger system to select the interesting events for offline storage
from the bulk of the inelastic collision events.
To select events of potential physics interest~\cite{DAQ-TDR}, the
CMS trigger utilizes two levels while, for comparison, ATLAS uses a
three-tiered system~\cite{ATLAS-trig}. The first level (L1) of the CMS
trigger is implemented in custom hardware, and selects events
containing candidate objects, \eg, ionization deposits consistent with
a muon, or energy clusters consistent with an electron, photon, $\tau$ lepton,
missing transverse energy (\MET), or jet. Collisions with possibly
large momentum transfer can be selected by, \eg, using the scalar sum
of the jet transverse momenta (\HT).
The final event selection is based on a programmable menu where, by
means of up to 128 algorithms utilizing those candidate objects,
events are passed to the second level
(high-level trigger, HLT). The
thresholds of the first level are adjusted during data taking in
response to the value of the LHC instantaneous luminosity so as to
restrict the output rate to 100\unit{kHz}~\cite{DAQ-TDR}, the upper
limit imposed by the CMS readout electronics. The HLT, implemented in
software, further refines the purity of the physics objects, and
selects an average rate of 400\unit{Hz} for offline storage. The
overall output rate of the L1 trigger and HLT can be adjusted by
prescaling the number of events that pass the selection criteria of
specific algorithms. In addition to collecting collision data, the
trigger and data acquisition systems record information for the
monitoring of the detector.
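In software terms, such a prescale amounts to a simple counter that accepts every $N$th event satisfying a given algorithm, reducing its accept rate by a factor of $N$. The following minimal Python sketch is only a schematic analogue of the actual prescalers, which are implemented in the trigger hardware and software:
\begin{verbatim}
class Prescaler:
    """Accept every n-th event that fires a given trigger algorithm."""
    def __init__(self, n):
        self.n = n        # prescale factor
        self.count = 0    # events seen so far

    def accept(self):
        self.count += 1
        return self.count % self.n == 0
\end{verbatim}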
After commissioning periods at 0.9 and 2.36\TeV in 2009, the first
long running periods were at a center-of-mass energy of 7\TeV in 2010
and 2011, and 8\TeV in 2012. These proton-proton data, together with
the first ion running periods (PbPb at 2.76\TeV, and
pPb at 5.02\TeV), are referred to collectively as Run~1. During this
period, the CMS trigger system selected interesting pp physics events at
maximum instantaneous luminosities of $2.1\times 10^{32}\percms$
(2010), $4\times 10^{33}\percms$ (2011), and
$7.7\times 10^{33}\percms$ (2012), corresponding to 0.2, 4, and
$7.7\,\mathrm{Hz\,nb}^{-1}$. Figure~\ref{fig:lumi2012} shows the pp
integrated and peak luminosities as a function of time for calendar
years 2010, 2011 and 2012. While the nominal bunch crossing (BX)
frequency is 40\unit{MHz}, corresponding to 25\unit{ns} between individual bunch
collisions, the bunch spacing during regular running was never less
than 50\unit{ns} through Run~1. The highest number of
collisions per BX (known as ``pileup'') averaged over a data run in
2011 and 2012 was 16.15 and 34.55, respectively, while the pileup
averages over the year were 9 (21) in 2011 (2012).
\begin{figure}[btp]
\centering
\includegraphics[width=\textwidth]{figures/int_lumi_cumulative_pp_2}
\includegraphics[width=\textwidth]{figures/peak_lumi_pp}
\caption{Integrated (top) and peak (bottom) proton-proton
luminosities as a function of time for calendar years
2010--2012. The 2010 integrated (instantaneous) luminosity is
multiplied by a factor of 100 (10). In the lower plot,
$1\unit{Hz/nb}$ corresponds to $10^{33}\percms$.}
\label{fig:lumi2012}
\end{figure}
The trigger system is also used during heavy ion running. The
conditions for PbPb collisions are significantly different from those
in the pp case. The instantaneous luminosity delivered by the LHC in
the 2010 (2011) PbPb running period was $3\times 10^{25}$
($5\times 10^{26}$)\percms, resulting in maximum interaction rates of
250\unit{Hz} (4\unit{kHz}), much lower than in pp running, with a negligible
pileup probability and an inter-bunch spacing of 500\unit{ns}
(200\unit{ns}). During the pPb run in 2013, an instantaneous luminosity of
$10^{29}\percms$ was achieved, corresponding to an interaction
rate of 200\unit{kHz}, again with a very low pileup probability. Due to
the large data size in these events, the readout rate of the
detector is limited to 3\unit{kHz} in heavy ion collisions.
This document is organized as follows.
Section~\ref{sec:cmstrigger_intro} describes the CMS trigger system
(L1 and HLT) in detail. Section~\ref{sec:objid} gives an overview of
the methods, algorithms, and logic used to identify physics signatures
of interest in LHC collisions, and to select events accordingly. The
physics performance achieved with the CMS trigger system is outlined
in Section~\ref{sec:hpa} based on examples of several physics
analyses. In Section~\ref{sec:triggermenus}, details of the L1 and HLT
menus are given, together with the objectives and strategies to assemble
those menus. The operation and evolution of the trigger system during
the first years of the LHC running is described in
Section~\ref{sec:operations}. A summary is given in
Section~\ref{sec:summary}.
\subsection{The CMS detector}
\label{sec:cmsdetector}
The central feature of the CMS apparatus is a superconducting solenoid, of
6\unit{m} internal diameter, providing a magnetic field of
3.8\unit{T}. Within the superconducting solenoid volume are a silicon
pixel and strip tracker, a lead tungstate crystal electromagnetic
calorimeter (ECAL), and a brass/scintillator hadron calorimeter (HCAL). Muons are
measured in gas-ionization detectors embedded in the steel return
yoke. Extensive forward calorimetry complements the coverage provided
by the barrel and endcap detectors. The missing transverse momentum
vector is defined as the projection on the plane perpendicular to the
beams of the negative vector sum of the momenta of all reconstructed
particles in an event. Its magnitude is referred to as
\ETmiss. A more detailed description of the CMS detector, together
with a definition of the coordinate system used and the relevant
kinematic variables, can be found in Ref.~\cite{Chatrchyan:2008zzk}.
\section{The trigger system}
\label{sec:cmstrigger_intro}
The trigger system consists of an L1 hardware trigger and an HLT
array of commercially available computers running high-level physics
algorithms. In this section we describe the design of the combined
L1--HLT system.
\subsection{The L1 trigger overview}
\label{sec:l1overview}
The L1 trigger is a hardware system with a fixed latency. Within
4\mus of a collision, the system must decide if an
event should be tentatively accepted or rejected using
information from the calorimeter and muon detectors.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.8\textwidth]{figures/L1-overview.pdf}
\caption{Overview of the CMS L1 trigger system. Data from the
forward (HF) and barrel (HCAL) hadronic calorimeters, and from
the electromagnetic calorimeter (ECAL), are processed first
regionally (RCT) and then globally (GCT). Energy deposits (hits)
from the resistive-plate chambers (RPC), cathode strip chambers
(CSC), and drift tubes (DT) are processed either via a pattern
comparator or via a system of segment- and track-finders and
sent onwards to a global muon trigger (GMT). The
information from the GCT and GMT is combined in a global trigger
(GT), which makes the final trigger decision. This decision is
sent to the tracker (TRK), ECAL, HCAL or muon systems (MU) via
the trigger, timing and control (TTC) system. The data
acquisition system (DAQ) reads data from various subsystems for
offline storage. MIP stands for minimum-ionizing particle.}
\label{fig:L1-over}
\end{figure}
A schematic of the L1 trigger is shown in
Fig.~\ref{fig:L1-over}. The trigger primitives (TP) from
electromagnetic and hadron calorimeters (ECAL and HCAL) and from the
muon detectors (drift tubes (DT), cathode strip chambers (CSC) and
resistive-plate chambers (RPC)) are processed in several steps before
the combined event information is evaluated in the global trigger (GT)
and a decision is made whether to accept the event or not.
The L1 calorimeter trigger comprises two stages, a regional
calorimeter trigger (RCT) and a global calorimeter trigger (GCT). The
RCT receives the transverse energies and quality flags from over
8000 ECAL and HCAL towers (Sec.~\ref{sec:ecaltpg} and
\ref{sec:hcaltpg}), giving trigger coverage over $\abs{\eta}<5$.
The RCT processes this information in parallel and sends as output
$\Pe/\Pgg$ candidates and regional \ET sums based on $4{\times}4$
towers~\cite{Trig-TDR}.
The GCT sorts the $\Pe/\Pgg$ candidates further, finds
jets (classified as central, forward, and tau) using the \ET sums, and
calculates global quantities such as \MET. It sends as output four
$\Pe/\Pgg$ candidates each of two types, isolated and
nonisolated, four each of central, tau, and forward jets, and several
global quantities.
Each of the three muon detector systems in CMS participates in the L1
muon trigger to ensure good coverage and redundancy. For the DT and
CSC systems ($\abs{\eta}<1.2$ and $\abs{\eta}>0.9$, respectively), the
front-end trigger electronics identifies track segments from the hit
information registered in multiple detector planes of a single
measurement station. These segments are collected and then transmitted
via optical fibers to regional track finders in the electronics
service cavern, which then apply pattern recognition algorithms that
identify muon candidates and measure their momenta from the amount
they bend in the magnetic field of the flux-return yoke of the
solenoid. Information is shared between
the DT track finder (DTTF) and CSC track finder (CSCTF) for efficient
coverage in the region of overlap between the two systems at
$\abs{\eta}\approx1$. The hits from the RPCs ($\abs{\eta}<1.6$) are directly
sent from the front-end electronics to pattern comparator trigger
(PACT) logic boards that identify muon candidates. The three regional
track finders sort the identified muon candidates and transmit to the
global muon trigger (GMT) up to 4 (CSCTF, DTTF) or 8 (RPC) candidates
every bunch crossing. Each candidate is assigned a \PT and quality
code as well as an ($\eta$,$\phi$) position in the muon system (with a
granularity of ${\approx}0.05$). The GMT then merges muon candidates found
by more than one system, so that a single muon cannot pass
multiple-muon triggers as several candidates (with several options on
how to select the \PT among the candidates). The GMT also performs a further quality
assignment so that, at the final trigger stage, candidates can be
discarded if their quality is low and they are reconstructed only by
one muon track finder.
The GT is the final step of the CMS L1 trigger system and implements
a menu of triggers, a set of selection requirements
applied to the final list of objects (\ie, electrons/photons,
muons, jets, or $\tau$ leptons), required by the algorithms of the HLT
algorithms to meet the physics data-taking objectives. This menu
includes trigger criteria ranging from simple single-object selections
with \ET above a preset threshold to selections requiring coincidences
of several objects with topological conditions among them. A maximum
of 128 separate selections can be implemented in a menu.
\subsection{The L1 calorimeter trigger system}
The following describes the reconstruction of ECAL and HCAL energy deposits used in the L1 trigger chain, followed by the RCT and GCT processing steps that operate on these trigger primitives.
\subsubsection{The ECAL trigger primitives}
\label{sec:ecaltpg}
The ECAL trigger primitives are computed from a barrel (EB) and two
endcaps (EE), comprising 75\,848 lead tungstate (PbWO$_4$) scintillating
crystals equipped with avalanche photodiode (APD) or vacuum
phototriode (VPT) light detectors in the EB and EE, respectively. A
preshower detector (ES), based on silicon sensors, is placed in front
of the endcap crystals to aid particle identification. The ECAL is
highly segmented, is radiation tolerant and has a compact and hermetic
structure, covering the pseudorapidity range of $\abs{\eta} < 3.0$. Its
target resolution is 0.5\% for high-energy electrons/photons. It
provides excellent identification and energy measurements of electrons
and photons, which are crucial to searches for many new physics
signatures.
In the EB, five strips of five crystals (along the azimuthal direction) are
combined into trigger towers (TTs) forming a $5{\times}5$ array
of crystals. The
transverse energy detected by the crystals in a single TT is summed
into a TP by the front-end electronics and sent to off-detector
trigger concentrator cards (TCC) via optical fibers. In the EE,
trigger primitive computation is completed in the TCCs, which must
perform a mapping between the collected pseudo-strips trigger data
from the different supercrystals and the associated trigger towers.
\paragraph{Mitigation of crystal transparency changes at the trigger level}
\label{sec:trans}
Under irradiation, the ECAL crystals lose some of their transparency,
part of which is recovered when the radiation exposure stops (\eg,
between LHC fills). The effect of this is that the response of the
ECAL varies with time. This variation is accounted for by the use of a
laser system that frequently monitors the transparency
of each crystal~\cite{ecallaser} and allows for offline corrections to the
measured energies to be made~\cite{Chatrchyan:2013dga}.
In 2011, the levels of radiation in ECAL were quite small, and no
corrections to the response were made at L1. From 2012 onwards, where
the response losses were larger, particularly in the EE, corrections
to the TT energies were calculated
and applied on a weekly basis in order to maintain high trigger
efficiency and low trigger thresholds.
\subsubsection{HCAL trigger primitives}
\label{sec:hcaltpg}
The HCAL TPs are computed out of the digital samples of the detector
pulses by the trigger primitive generator (TPG). In the barrel, one
trigger primitive corresponds to one HCAL readout, whereas raw data
from the two depth-segmented detector readout elements are summed in
the endcap hadron calorimeter. For the forward hadron calorimeter
(HF), up to 12 readouts are summed to form one trigger primitive. One
of the most important tasks of the TPG is to assign a precise bunch
crossing to detector pulses, which span over several clock
periods. The bunch crossing assignment uses a digital filtering
technique applied to the energy samples, followed by a peak finder
algorithm. The amplitude filters are realized using a sliding sum of
2 consecutive samples. A single sample is used for HF where the
signals are faster. The peak finder selects those samples of the
filtered pulse that are larger than the two nearest neighbors. The
amplitudes of the peak and peak+1 time slices are used as an estimator
of the pulse energy. The position of the peak-filtered sample in the data
pipeline flow determines the timing. The transverse energy
of each HCAL trigger tower is calculated on a 10-bit linear scale. In
case of overflow, the \ET is set to the scale maximum. Before
transmission to the RCT, the 10-bit trigger tower \ET is converted to
a programmable 8-bit compressed nonlinear scale in order to minimize
the trigger data flux to the regional trigger. This data compression
leads to a degradation in the trigger energy resolution of less than
5\%. The energy in\GeV is obtained by converting
the ADC count into fC, subtracting the pedestal, and correcting for the
gain of each individual channel. Finally, a correction factor is
applied to compensate for the fraction of signal charge not captured
in the two time-slice sum.
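To make the bunch crossing assignment logic concrete, the following Python fragment is a minimal sketch of the two-sample sliding sum and peak finder described above. It is an illustration only, not the TPG firmware; in particular, summing the filtered peak and peak+1 amplitudes as the energy estimator is an assumption based on the description above.
\begin{verbatim}
def hcal_tp_energy(adc_samples):
    # Amplitude filter: sliding sum of 2 consecutive samples.
    filtered = [adc_samples[i] + adc_samples[i + 1]
                for i in range(len(adc_samples) - 1)]
    # Peak finder: keep filtered samples larger than both nearest neighbors.
    peaks = [i for i in range(1, len(filtered) - 1)
             if filtered[i] > filtered[i - 1] and filtered[i] > filtered[i + 1]]
    if not peaks:
        return None, None
    peak = peaks[0]  # position of the peak sets the BX timing
    # Assumption: the energy estimator sums the filtered amplitudes of
    # the peak and peak+1 time slices.
    if peak + 1 < len(filtered):
        energy = filtered[peak] + filtered[peak + 1]
    else:
        energy = filtered[peak]
    return min(energy, 1023), peak  # 10-bit linear scale saturates at maximum

print(hcal_tp_energy([0, 1, 8, 14, 6, 2, 1, 0]))
\end{verbatim}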
\subsubsection{Regional calorimeter trigger system}
The CMS L1 electron/photon ($\Pe/\Pgg$), $\tau$ lepton, jet,
$\HT$ (where $\HT = \sum \pt^{\text{jets}}$ is the scalar sum of the \pt of all jets with
$\pt>10$\GeV and $\abs{\eta}<3$),
and missing \ET
trigger decisions are based on input from the L1 regional calorimeter trigger
(RCT)~\cite{Trig-TDR,klab2007,klab2008,klab2009}. Eighteen crates of custom RCT
electronics process data for the barrel, endcap,
and forward calorimeters, with a separate crate for LHC clock distribution.
Twenty-four bits comprising two 8-bit calorimeter energies, either two ECAL fine-grain (FG)
bits or two HCAL minimum ionizing particle (MIP) bits,
an LHC bunch crossing bit, and 5 bits of error detection code, are sent from the ECAL, HCAL,
and HF calorimeter back-end electronics to the nearby RCT racks on 1.2\unit{Gbaud} copper links.
This is done using one of the four 24-bit channels of the Vitesse 7216-1 serial transceiver
chip on the calorimeter output and the RCT input, for 8 channels of calorimeter data per chip.
The RCT V7216-1 chips are mounted on receiver mezzanine cards located
on each of the 7 receiver cards (RC) and on the single jet summary card
(JSC) in each of the 18 RCT crates.
The RCT design includes five high-speed custom GaAs
application-specific integrated circuits (ASICs), which were designed
and manufactured by Vitesse Semiconductor: a phase ASIC, an adder
ASIC, a boundary
scan ASIC, a sort ASIC, and an electron isolation
ASIC~\cite{smith2000}.
The RC has eight receiver mezzanine cards for the HCAL and ECAL data,
four per subsystem. On the mezzanine, the V7216-1 converts the serial
data to 120\unit{MHz} TTL parallel data. Eight phase ASICs on the RC align
and synchronize the data received on four channels of parallel data
from the Vitesse 7216-1, check for data transmission errors, and
convert 120\unit{MHz} TTL to 160\unit{MHz} emitter-coupled logic (ECL) parallel
data. Lookup tables (LUTs) convert 17 bits of input (8 bits each from
ECAL and HCAL, plus the FG bit) for two separate paths. They rescale the incoming
ECAL energies, and set quality bits for the $\Pe/\Pgg$ path (a
tower-level logical OR of the ECAL FG bits and a limit on fractional
energy in the HCAL), and rescale and sum HCAL and ECAL for the
regional sums path. On the RC, the boundary scan ASIC aligns the
$\Pe/\Pgg$ tower energy data with data shared on cables between
RCT crates adjacent in $\eta$ and $\phi$, and makes copies so that
each of 7 electron isolation cards (EIC) receives 28 central and 32
adjacent towers via the custom 160\unit{MHz} backplane. The HCAL+ECAL
summed towers are added together to form $4{\times}4$ trigger tower sums
by three adder ASICs, which sum up eight 11-bit energies in 25\unit{ns},
while providing bits for overflows. The tower sums are then sent to the JSC via the
backplane for further processing. A logical OR of the MIP bits over
the same $4{\times}4$ trigger tower regions is sent to the JSC.
The EIC receives the 32 central tower and 28 neighboring trigger tower
data from the RCs via the backplane. The electron isolation algorithm
is implemented in the electron isolation ASIC, which can handle four
7-bit electromagnetic energies, a veto bit, and nearest neighbor
energies every 6.25\unit{ns}. It finds up to four electron candidates
in two $4{\times}4$ trigger tower regions, two isolated and two
non-isolated. These candidates are then transmitted via the backplane
to the JSC for further processing. In this way the $\Pe/\Pgg$
algorithm is seamless across the entire calorimeter.
The JSC receives 28 $\Pe/\Pgg$ candidates, 14 sums, and has a
single mezzanine card to receive eight HF TPs and quality bits. The JSC
rescales the HF data using a lookup table and delays the data so
that it is in time with the 14 regional \ET sums when they are sent
to the GCT for the jet finding and calculation of global quantities
such as $\HT$ and missing \ET. In addition, for muon isolation, a
quiet bit is set for each region and forwarded with the MIP bits on
the same cables as the electron candidates. The 28 electron
candidates (14 isolated and 14 non-isolated) are sorted in \ET in two
stages of sort ASICs on the JSC, and the top four of each type are
transmitted to the GCT for further sorting. A block diagram of this
dataflow is shown in Fig.~\ref{fig:rct}.
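The two-stage \ET sort can be sketched as follows. This is a toy model in Python, not the sort-ASIC logic; the grouping of candidates into sets of seven per first-stage sort is an assumption for illustration.
\begin{verbatim}
def top_four(cands):
    return sorted(cands, key=lambda c: c["et"], reverse=True)[:4]

def jsc_sort(candidates, group_size=7):
    # First stage: sort within groups of candidates (group size assumed);
    # second stage: sort the stage-one winners down to the final four.
    groups = [candidates[i:i + group_size]
              for i in range(0, len(candidates), group_size)]
    stage_one = [c for g in groups for c in top_four(g)]
    return top_four(stage_one)

iso = [{"et": et} for et in (12, 40, 7, 33, 21, 5, 18, 26, 9, 44, 3, 30, 15, 2)]
print(jsc_sort(iso))  # the four highest-ET isolated candidates
\end{verbatim}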
\begin{figure}[tbph]
\centering
\includegraphics[width=4in]{figures/RCTDataFlow.png}
\caption{Block diagram of the regional calorimeter trigger (RCT)
system showing the data flow through the different cards in a RCT
crate. At the top is the input from the calorimeters; at the bottom
is the data transmitted to the global calorimeter trigger
(GCT). Data exchanged on the backplane is shown as arrows between
cards. Data from neighboring towers come via the backplane, but may
come over cables from adjoining crates. }
\label{fig:rct}
\end{figure}
Finally, a master clock crate (MCC) and cards are located in one of
the ten RCT racks to provide clock and control signal
distribution. Input to the system is provided by the CMS trigger
timing and control (TTC) system. This provides the LHC clock, bunch
crossing zero (BC0), and other CMS synchronization signals via an
optical fiber from a TTC VME interface board which can internally generate or
receive these signals from either a local trigger controller board
(LTC) or from the CMS GT.
The MCC includes a clock input card (CIC) with an LHC TTC receiver
mezzanine (TTCrm) to receive the TTC clocks and signals via the fiber
and set the global alignment of the signals. The CIC feeds fan-out
cards, a clock fan-out card midlevel (CFCm) and a clock fan-out card
to crates (CFCc) to align and distribute the signals to the individual
crates via low-skew cable. Adjustable delays on these two cards allow
fine-tuning of the signals to the individual crates.
\subsubsection{Global calorimeter trigger system}
\label{sec:gct}
The GCT is the last stage of the L1 calorimeter trigger chain. A detailed description of the GCT design, implementation and commissioning is provided in several conference papers~\cite{stettler:2006,iles:2006,foudas:2007,iles:2008,tapper:2008,brooke:2009} that describe the changes in design since the CMS trigger technical design report~\cite{Trig-TDR}.
The trigger objects computed by the GCT from data supplied by the RCT
are listed below and described in subsequent paragraphs:
\begin{itemize}
\item four isolated and four non-isolated electrons/photons of highest transverse energy;
\item four central, four forward, and four tau jets of highest transverse energy;
\item total transverse energy ($S_\mathrm{T}$),
\begin{equation*}
S_\mathrm{T} \equiv \sum \et,
\end{equation*}
calculated as the scalar sum of the \ET of all calorimeter deposits,
and total missing transverse energy ($\ETmiss$);
\item jet transverse energy ($\HT$, defined above) and missing jet
transverse energy;
\item sums of feature bits and of transverse energies in the HF calorimeter.
\end{itemize}
The electron/photon sort operation must determine the four highest transverse energy objects from 72 candidates supplied by the RCT,
for both isolated and non-isolated electrons/photons.
To sort the jets, the GCT must first perform jet finding and calibrate the clustered jet energies. The jets are created from the 396 regional transverse energy sums supplied by the RCT. These are the sum of contributions from both the hadron and electromagnetic calorimeters. This is a substantial extension of the GCT capability beyond that specified in Ref.~\cite{Trig-TDR}. The jet finding and subsequent sort is challenging because of the large data volume and the need to share or duplicate data between processing regions to perform cluster finding. The latter can require data flows of a similar magnitude to the incoming data volume depending on the clustering method used. The clusters, defined as the sum of $3{\times}3$ regions, are located using a new method~\cite{iles:2006} that requires substantially less data sharing than the previously proposed sliding window method~\cite{oldGCT}. Jets are subdivided into central, forward, and tau jets based on the RCT tau veto bits and the jet
pseudorapidity.
The GCT must also calculate some additional quantities. The total transverse energy is the sum of all regional transverse energies. The total missing transverse energy \MET is calculated by splitting each regional transverse energy into its $x$ and $y$ components and summing these components separately over all regions; the magnitude of \MET is the quadrature sum of the two component totals, and its direction is that of the resulting vector rotated by $180^\circ$. The jet transverse energy \HT and missing jet transverse energy are the corresponding sums over all clustered jets found.
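A minimal sketch of the missing-\ET computation over regional sums is given below; floating-point trigonometry replaces the integer component LUTs of the hardware, and the region format (\ET, $\phi$) is assumed for illustration.
\begin{verbatim}
import math

def gct_met(regions):
    # Split each regional ET into x and y components, sum the components
    # separately, and rotate the resulting vector by 180 degrees.
    sum_x = sum(et * math.cos(phi) for et, phi in regions)
    sum_y = sum(et * math.sin(phi) for et, phi in regions)
    met = math.hypot(sum_x, sum_y)
    met_phi = math.atan2(-sum_y, -sum_x)  # 180 degree rotation
    return met, met_phi

print(gct_met([(30.0, 0.0), (20.0, 2.5)]))
\end{verbatim}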
Finally two quantities are calculated for the forward calorimeters. The transverse energy is summed for the two rings of regions
closest to the beam pipe in both positive and negative pseudorapidities. The number of regions in the same rings with the
fine-grain bit is also counted.
In addition to these tasks the GCT acts as a readout device for both
itself and the RCT by storing information until receipt of an L1 accept
(L1A) and then sending the information to the DAQ.
The GCT input data volume and processing requirements did not allow all data to be concentrated in one processing unit. Thus, many large field-programmable gate arrays (FPGAs) across multiple discrete electronics cards are necessary to reduce the data volume in stages. The cards must be connected
together to allow data sharing and to eventually concentrate the data into a single location for the sort algorithms.
The latency allowed is 24 bunch crossings for jets and 15 bunch crossings for electrons/photons. Using many layers of high-speed
serial links to transport the large data volumes between FPGAs was not possible since these typically require several clock cycles
to serialize/deserialize the data and thus they have to be used sparingly to keep the latency low. The final architecture uses
high-speed optical links (1.6\unit{Gb/s}) to transmit the data and then concentrates the data in the main processing FPGAs, followed by standard FPGA I/O to connect to downstream FPGAs.
\begin{figure}[tbph]
\centering
\includegraphics[width=0.6\textwidth]{figures/GCT.pdf}
\caption{A schematic of the global calorimeter trigger (GCT) system,
showing the data flow through the various component cards.}
\label{fig:gct}
\end{figure}
Figure~\ref{fig:gct} shows a diagram of the GCT system data flow. The input to the GCT comes from the 18 RCT crates.
The 63 source cards retransmit the data on optical high-speed serial links (shown by dashed arrows).
For each RCT crate, the electron data are transmitted on 3 fibers and the jet data on 10 fibers.
There are two main trigger data paths: electron and jet.
The jet data are sent to leaf cards (configured for jet finding)
mounted on the wheel cards. The leaf cards are connected in a circle
to search for clustered jets in one half of the CMS calorimeter
(either in the positive or the negative $\eta$). The wheel card
collects the results from three leaf cards, sorts the clustered jets,
and forwards the data to the concentrator card. A more detailed description of each component is given below.
\begin{itemize}
\item Source card. The 6 differential ECL cables per RCT crate are fed into source cards, each receiving up to two RCT cables and transmitting the data over four fiber links. This has several advantages: it allows the source cards to be electrically isolated from the main GCT system, the different data within the RCT cables to be rearranged, a large amount of information to be concentrated so that it can be delivered to the processing FPGAs on leaf cards, and data to be duplicated.
\item Leaf card. The leaf card is the main processing block in the GCT design. The most difficult task in the GCT is the jet finding. This is made simpler by concentrating the data in as few FPGAs as possible. Consequently, each leaf card has two Xilinx Virtex II Pro FPGAs each with 16 multi-gigabit transceivers that are used to bring the raw data in. Three Agilent 12 channel receivers provide the opto-electronic interface. The large standard I/O capacity is used to transmit the data to the wheel card.
\item Wheel card. There are two wheel cards, one for each half of the detector. They act as carriers for three leaf cards and further concentrate the data. They sum the energy values, sort the 54 clustered jets by transverse energy into the three types (forward, central, tau). The wheel cards then forward the information to the concentrator card via high-speed Samtec low-voltage differential signal (LVDS) cables.
\item Concentrator card. The concentrator card performs similar
actions to that of the wheel card after which it transmits the
resulting trigger objects to the GT and stores the information in a
pipeline until receipt of an L1A signal. The concentrator card also
carries two leaf cards that process the electron data. These leaf
cards record the incoming RCT data in a pipeline memory until
receipt of an L1A signal and perform a fast sort on the incoming data. The interface to the GT is via a mezzanine card which transmits data over 16 fiber links running at 3\unit{Gb/s}.
\end{itemize}
The CMS L1 calorimeter trigger chain does not use information from
the other L1 subsystems, \ie, the L1 muon trigger, which is
described in the next section. The L1 calorimeter and muon information is
combined into a final L1 trigger decision in the GT~(Sec.~\ref{sec:global_trigger_desc}).
\subsection{The L1 muon trigger system}
\label{sec:l1muon}
All three CMS muon detectors contribute to the L1 trigger decision.
Details on how the flow of information from the DTs, CSCs, and RPCs is
processed to build full muon tracks within each system, and how tracks
are combined together by the GMT to provide final muon trigger candidates,
are given below.
\subsubsection{Muon local trigger segments}
Whereas RPC trigger tracks are built by the pattern comparator trigger
(PACT) using information coming from detector hits directly, local
trigger track segments (primitives) are formed within DT and CSC
detectors prior to the transmission to the respective track finders.
In the case of the DTs, local trigger (DTLT) track segments are
reconstructed by electronics installed on the detector. Each of the
250 DTs is equipped with a mini-crate hosting readout and trigger
electronics and implemented with custom ASIC~\cite{DTbti,DTTraco}
and
programmable ASIC~\cite{DTTSM} devices. Up to two DTLTs per BX in the
transverse plane can be generated by one chamber; the DTLT information
includes the radial position, the bending angle, and information about
the reconstruction quality (\ie, the number of DT layers used to build
a track segment). Additionally, hits along the longitudinal direction are
calculated; in this case only a position is calculated as the track is
assumed to be pointing to the vertex. The DTLT electronics is capable
of highly efficient (94\%) BX identification~\cite{DTtestbeam,Chatrchyan:2008zzk}, which is a challenging
task given that single hits are collected with up to ${\approx}400\unit{ns}$ drift
time. A fine grained synchronization of the DTLT clock to the LHC
beams is needed to ensure proper BX identification~\cite{DTtestbeamFineSync, DTcraftSync}.
The DTLT segments are received by the trigger sector collector (TSC) system,
installed on the balconies surrounding the detector and implemented using
flash-based FPGAs~\cite{DTtsc}.
The TSC consists of 60 modules, each receiving local trigger data from one
DT sector (the four or five detectors within the same muon barrel slice, called a wheel,
and covering $30^\circ$ in azimuthal angle): trigger segments are synchronized and
transmitted over 6\unit{Gb/s} optical links per sector, to the underground
counting room, where optical receiver modules perform deserialization and
deliver data to the DT track finder (DTTF) system.
For the CSCs, local charged-track (LCT) segments, constructed
separately from the cathode (CLCT) and anode (ALCT) hits of a detector,
are correlated in the trigger motherboard (TMB) when both segments
exist within a detector. A CLCT provides information on the azimuthal
position of a track segment, while an ALCT provides information on the
radial distance of a segment from the beam line, as well as precise
timing information. A maximum of two LCTs can be sent from each
detector per bunch crossing. The segments from nine detectors are collected
by a muon port card (MPC) residing in the same VME crate as the
TMBs. The MPC accepts up to 18 LCTs and sorts them down to the best
three before transmission over an optical fiber to the CSC
track finder (CSCTF). There are 60 MPCs, one in each peripheral crate.
A more detailed description of the DT and CSC local trigger segment reconstruction and
of the performance in LHC collisions is given in Ref.~\cite{Chatrchyan:2013sba}.
\subsubsection{Drift tube track finder}
The DTTF processes the DTLT information in order to reconstruct muon
track candidates measured in several concentric rings of detectors,
called stations, and assigns a transverse momentum value to the track
candidates~\cite{DTdttf}. First, the position and bending of each DTLT is
used to compute, via a LUT, the expected position at the outer
stations (in the case of the fourth station, the extrapolation is
done inward, towards the third one). The position of actual DTLTs is
compared to the expected one and accepted if it falls within a
programmable tolerance window. These windows can be tuned to achieve
the desired working point, balancing the muon identification
efficiency against the accepted background. To enable triggering on
cosmic muon candidates, the windows can be as large as a full DT
detector in order to also accept muons that are not pointing to the
interaction point. All possible station pairs are linked this way and
a track candidate is built. Then, the difference in azimuthal
positions of the two inner segments is translated into a transverse
momentum value, again using LUTs. Also the azimuthal and longitudinal
coordinates of the candidate are computed, while a quality code based
on the number and positions of the stations participating in the track
is generated. The hardware modules are VME 9U boards hosted in 6
crates with custom backplanes and VME access; there are 72 such track
finding boards, called sector processors (SP). Each SP finds up to
two tracks from one DT sector. Two separate SPs analyze DTLTs from the
sectors of the central wheel, to follow tracks at positive or negative
pseudorapidity. Each SP also receives a subset of the DTLT information
from its neighboring SPs, through parallel electrical connections, in
order to perform track finding for tracks crossing detectors in
different sectors. SPs from the external wheels also receive track segments
from the CSC trigger.
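The extrapolation and linking logic can be illustrated with a short sketch. The following Python fragment is a toy model under stated assumptions: a linear extrapolation and a simple inverse relation stand in for the hardware LUTs, and the tolerance value is assumed.
\begin{verbatim}
TOLERANCE = 0.05  # programmable tolerance window in radians (assumed value)

def extrapolate(segment):
    # Stand-in for the LUT predicting the azimuthal position at the
    # outer station from the position and bending of an inner segment.
    return segment["phi"] + segment["bending"]

def link_segments(inner, outer_candidates):
    # Accept the outer segment closest to the extrapolated position,
    # provided it falls inside the tolerance window.
    expected = extrapolate(inner)
    matches = [s for s in outer_candidates
               if abs(s["phi"] - expected) < TOLERANCE]
    return min(matches, key=lambda s: abs(s["phi"] - expected)) if matches else None

def assign_pt(inner, outer):
    # The azimuthal difference between linked segments is translated
    # into pT; a toy inverse relation replaces the hardware LUT.
    dphi = abs(outer["phi"] - inner["phi"])
    return 1.0 / max(dphi, 1e-3)  # small bending corresponds to high pT

inner = {"phi": 0.10, "bending": 0.02}
outer = link_segments(inner, [{"phi": 0.13}, {"phi": 0.50}])
if outer is not None:
    print(assign_pt(inner, outer))
\end{verbatim}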
The last stage of the DTTF system consists of the muon sorter
(MS)~\cite{DToffdetector}. First, a module called the wedge sorter (WS)
collects up to 12 track candidates from the 6 SPs of one ``wedge" (5 DT
sectors at the same azimuthal position) through parallel backplane
connections, and selects two of them based on the magnitude of the transverse
momentum and on their reconstruction quality. The resulting 24 muon
candidates from 12 wedge sorters are collected via parallel LVDS cables into the final
sorting module, called the barrel sorter (BS), which selects the final
four muon candidates to be delivered to the GMT. Both
the WS and BS perform ghost cancellation algorithms before the track
sorting, in order to remove duplicate tracks, \eg, multiple
track candidates originating from the same muon crossing from
neighboring SPs. Two WS modules are installed in each DTTF crate,
while the BS is located in a separate crate, called the central crate.
Readout information (DTLT track segments and DTTF track candidates in
a $\pm1$ BX window) is also provided by each DTTF module and concentrated in
a readout module, equipped with a serial link output and TTS inputs,
called the data concentrator card (DCC) and located in the central crate.
\subsubsection{Cathode strip chambers track finder}
The CSCTF logic consists of pairwise comparisons of
track segments in different detector stations that test for the
compatibility in $\phi$ and $\eta$ of a muon emanating from the
collision vertex within certain tolerance windows. These comparisons
are then analyzed and built into tracks consisting of two or more
stations.
The track finding logic has the ability to accept segments in
different assigned bunch crossings by analyzing across a sliding time
window of programmable length (nominally 2 BX) every bunch
crossing. Duplicate tracks found on consecutive crossings are
canceled. The reported bunch crossing of a track is given by the
second arriving track segment. The reported \pt of a candidate muon is
calculated with
large static random-access memory (SRAM) LUTs that take
information such as the track type, track $\eta$, the segment $\phi$
differences between up to 3 stations, and the segment bend angle in
the first measurement station for two-station tracks.
In addition to identifying muons from proton collisions, the CSCTF
processors simultaneously identify and trigger on beam halo muons for
monitoring and veto purposes by looking for trajectories approximately
parallel to the beam line. A beam halo muon is created when a proton
interacts with either a gas particle in the pipe or accelerator
material upstream or downstream of the CMS interaction point, and the
produced hadrons decay. The collection of halo muons is an
interesting initial data set; the trajectory of these muons is highly
parallel to the beam pipe and hence also parallel to the solenoidal
magnetic field; therefore, they are minimally deflected and their
unbent paths are a good tool for aligning different slices of the
detector disks. Additionally, these muons are a background whose rate
needs to be known, as they have the potential to interact with multiple
detector subsystems. The halo muon trigger also allows monitoring of
the stability of the proton beam.
The CSCTF system is partitioned into sectors that correspond to a
$60^\circ$ azimuthal region of an endcap. Therefore 12 ``sector
processors'' are required for the entire system, where each sector
processor is a 9U VME card that is housed in a single crate. Three
1.6\unit{Gb/s} optical links from each of five MPCs are received by each sector
processor, giving a total of 180 optical links for the entire
system. There is no sharing of signals across neighbor boundaries,
leading to slight inefficiencies. There are several FPGAs on each
processor, but the main FPGA for the track-finding algorithms is from
the Xilinx Virtex-5 family. The conversion of strip and wire positions
of each track segment to $\eta$, $\phi$ coordinates is accomplished
via a set of cascaded SRAM LUTs (each 512k$\times$16 bits). The final
calculation of the muon candidate \pt is also accomplished by SRAM
LUTs (each 2M$\times$16 bits). In the same VME crate there is also one
sorter card that receives over a custom backplane up to 3 muons from
each sector processor every beam crossing and then sorts this down to
the best four muons for transmission to the GMT. The crate also
contains a clock and control signal distribution card, a DAQ card with
a serial link interface, and a PCI-VME bridge~\cite{CSCAcosta,Trig-TDR}.
\subsubsection{Resistive plate chambers trigger system}
\label{sec:rpc}
The RPCs provide a complementary, dedicated triggering detector system
with excellent time resolution ($\mathcal{O}(1\mathrm{ns})$), to
reinforce the measurement of the correct beam-crossing time, even at
the highest LHC luminosities. The RPCs are located in both the barrel
and endcap regions and can provide an independent trigger over a large
portion of the pseudorapidity range ($\abs{\eta} < 1.6$). The RPCs are
double-gap chambers, operated in avalanche mode to ensure reliable
operation at high rates. They are arranged in six layers in the barrel
and three layers in the endcaps. Details of the RPC chamber design,
geometry, gas mixtures used and operating conditions can be found in
Refs.~\cite{Chatrchyan:2008zzk,Chatrchyan:2012xi}. The RPC trigger is based on
the spatial and temporal coincidence of hits in different layers. It
is segmented into 25 towers in $\eta$ which are each subdivided into
144 segments in $\phi$. The pattern comparator trigger
(PACT)~\cite{pact} logic compares signals from all RPC chamber layers
to predefined hit patterns in order to find muon candidates. The RPCs
also assign the muon \pt, charge, $\eta$, and $\phi$ to the matched
pattern.
Unlike the CSCs and DTs, the RPC system does not form trigger
primitives, but the detector hits are used directly for muon trigger
candidate recognition. Analog signals from the chambers are
discriminated and digitized by front-end boards (FEBs), then assigned
to the proper bunch crossing, zero-suppressed, and multiplexed by a
system of link boards located in the vicinity of the
detector. They are then sent via optical links to 84
trigger boards in 12 trigger crates located in the underground
counting room. Trigger boards contain the complex PAC logic, which fits
into a large FPGA. The strip pattern templates to be compared with
the particle track are arranged in segments of approximately $0.1$ in
$\abs{\eta}$ and $2.5^\circ$ (44\unit{mrad}) in $\phi$, called logical cones. Each
segment can produce only one muon candidate. The trigger algorithm
imposes minimum requirements on the number and pattern of hit planes,
which varies with the position of the muon. As the baseline, in the
barrel region ($\abs{\eta} \le$ 1.04), a muon candidate is created by at
least a 4-hit pattern, matching a valid template. To improve
efficiency, this condition is relaxed and a 3-hit pattern with at
least one hit found in the third or fourth station may also create a
muon candidate. In addition, low-\pt muons often do not penetrate
all stations. Muon candidates can also arise when three
hits are found in four layers of the first and second station. In this case,
only low-\pt candidates will be reconstructed. In the endcap region
($\abs{\eta} > 1.04$) there are only 3 measurement layers available, thus
any 3-hit pattern may generate a muon candidate. A muon quality value
is assigned, encoded in two bits, that reflects the number of hit
layers (0 to 3, corresponding to 3 to 6 planes with hits).
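The pattern comparison can be sketched as follows. This Python fragment is a toy model of the PACT logic under stated assumptions: templates are represented as layer-to-strip maps, and the relaxed 3-hit barrel conditions described above are not modeled.
\begin{verbatim}
def pact_candidate(hits_by_layer, templates, barrel=True):
    # hits_by_layer maps layer -> fired strip; templates is a list of
    # {layer: strip} patterns. A barrel candidate requires at least
    # 4 matching planes, an endcap candidate requires 3.
    min_planes = 4 if barrel else 3
    best = None
    for template in templates:
        matched = sum(1 for layer, strip in template.items()
                      if hits_by_layer.get(layer) == strip)
        if matched >= min_planes and (best is None or matched > best[0]):
            best = (matched, template)
    if best is None:
        return None
    # Two-bit quality reflects the number of hit layers
    # (0 to 3, corresponding to 3 to 6 planes with hits).
    quality = min(best[0] - 3, 3)
    return {"quality": quality, "pattern": best[1]}

hits = {1: 5, 2: 5, 3: 6, 4: 6}
templates = [{1: 5, 2: 5, 3: 6, 4: 6}, {1: 4, 2: 4, 3: 5, 4: 5}]
print(pact_candidate(hits, templates))  # quality 1: four matched planes
\end{verbatim}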
Hits produced by a single muon may be visible in several logical cones
which overlap in space. Thus the same muon may be reconstructed,
typically with different momentum and quality, in a few segments. In
order to remove the duplicated candidates a special logic, called the
RPC ghost buster (GB), is applied in various steps during the
reconstruction of candidates. The algorithm assumes that among the
muon candidates reconstructed by the PACT there is a best one,
associated with the segment penetrated by a genuine muon. Since the
misreconstructed muons appear as a result of hit sharing between
logical cones, these muons should appear in adjacent segments. The
best muon candidate should be characterized by the highest number of
hits contributing to a pattern, hence highest quality. Among
candidates with the same quality, the one with highest \pt is
selected. The muon candidates from all the PACTs on a trigger board
are collected in a GB chip. The algorithm searches for groups of
adjacent candidates from the same tower. The one with the best rank,
defined by quality and \pt, is selected and other candidates in the
cluster are abandoned. In the second step the selected candidate is
compared with candidates from the three contiguous segments in each of the
neighboring towers. In the last step, the candidates are sorted based
on quality criteria, and
the best ranked four are forwarded to the trigger crate sorter. After
further ghost rejection and sorting, the four best muons are sent to
system-wide sorters, implemented in two half-sorter boards and a
final-sorter board. The resulting four best muon candidates from the
barrel and the four best muon candidates from the endcap region are sent to
the GMT for subtrigger merging.
The RPC data record is generated on the data concentrator card that
receives data from individual trigger boards.
\subsubsection{Global muon trigger system}
The GMT fulfills the following functions: it synchronizes incoming
regional muon candidates from DTTF, CSCTF, and RPC trigger systems,
merges or cancels duplicate candidates, performs $\pt$ assignment
optimization for merged candidates, sorts muon candidates according to
a programmable rank, assigns quality to outgoing candidates and stores
the information about the incoming and outgoing candidates in the
event data. The GMT is implemented as a single 9U VME module with a
front panel spanning four VME slots to accommodate connectors for 16
input cables from regional muon trigger systems. Most of the GMT logic
is implemented in the form of LUTs, enabling a high level
of flexibility and functional adaptability without changing the FPGA
firmware, \eg, to adjust selection requirements, such as
transverse momentum, pseudorapidity, and quality, of the regional muon
candidates~\cite{Sakulin:687846}.
The input synchronization
occurs at two levels. The phase of each input with respect to the on-board clock can be adjusted
in four steps
corresponding to a quarter of the 25\unit{ns} clock cycle to latch correctly the incoming data.
Each input can be then delayed by up to 17 full
clock cycles to compensate for latency differences in regional systems such
that the internal GMT logic receives in a given clock cycle
regional muon candidates from the same bunch crossing.
The muon candidates from different regional triggers are then matched
geometrically, according to their pseudorapidity and azimuthal angle
with programmable tolerances, to account for differences in
resolutions. In addition, the input $\eta$ and $\pt$ values are
converted to a common scale and a sort rank is assigned to each
regional muon candidate. The assignment of the sort rank is
programmable and in the actual implementation it was based on a
combination of input quality and estimated transverse momentum.
The matching candidates from the DT and barrel RPC and similarly from
the CSC and endcap RPC triggers are then merged. Each measured
parameter ($\eta$, $\phi$, $\pt$, charge, sort rank) is merged
independently according to a programmable algorithm. The $\eta$, charge, and rank were taken from
either the DT or the CSC candidate. For $\pt$ merging, the
initial setting to take the lowest $\pt$ measurement was optimized
during the data taking to become input quality dependent in certain
pseudorapidity regions. In case of a match between DT and CSC,
possible in the overlap region ($0.9<\abs{\eta}<1.2$), one of the candidates
is canceled according to a programmable logic, dependent, for example,
on an additional match with RPC.
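A minimal sketch of the geometric matching and merging follows; the tolerance values are assumptions, $\phi$ wrap-around is ignored for brevity, and the sketch models only the initial setting in which the lower \pt measurement is taken.
\begin{verbatim}
def gmt_match(dt_cands, rpc_cands, deta=0.1, dphi=0.1):
    # Toy matching of DT and barrel-RPC candidates with programmable
    # eta/phi tolerances (values assumed). Eta and rank are taken from
    # the DT candidate; pT merging takes the lower measurement.
    merged, unmatched = [], list(rpc_cands)
    for dt in dt_cands:
        match = next((r for r in unmatched
                      if abs(dt["eta"] - r["eta"]) < deta
                      and abs(dt["phi"] - r["phi"]) < dphi), None)
        if match is not None:
            unmatched.remove(match)
            merged.append({**dt, "pt": min(dt["pt"], match["pt"])})
        else:
            merged.append(dt)
    return merged + unmatched

dt = [{"eta": 0.50, "phi": 1.20, "pt": 25.0}]
rpc = [{"eta": 0.52, "phi": 1.17, "pt": 20.0}]
print(gmt_match(dt, rpc))  # one merged candidate with pT = 20
\end{verbatim}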
Each of the output candidates is assigned a three-bit quality value
which is maximal for a merged candidate. If the candidate is not
merged, its quality depends on the input quality provided by the
regional trigger system and on the pseudorapidity. The quality
assignment is programmable and allows for flexibility in defining
looser or tighter selection of muon candidates in GT
algorithms. Typically, muon candidates in double-muon triggers were
allowed to have lower quality.
The final step in the GMT logic is the sorting according to the sort
rank. Sorting is first done independently in the barrel and in the
endcap regions and four candidates in each region with the highest
rank are passed to the final sort step. Four candidates with the
highest rank are then sent to the GT.
Since the GMT module and the GT system are located in the same VME
crate, the two systems share a common readout. The data recorded from
GMT contains a complete record of the input regional muon candidates,
the four selected muon candidates from the intermediate barrel and
endcap sorting steps, as well as the complete information about the
four output candidates. This information is stored in five blocks
corresponding to five bunch crossings centered around the trigger
clock cycle.
\subsection{The L1 global trigger system}
\label{sec:global_trigger_desc}
The GT is the final step of the L1 trigger system. It consists of several VME boards mounted in a VME 9U crate together with the GMT and the central trigger control system (TCS)~\cite{GTref1,GTref2}.
For every LHC bunch crossing, the GT decides to reject or accept a
physics event for subsequent evaluation by the HLT. This decision is
based on trigger objects from the L1 muon and calorimeter systems,
which contain information about transverse energy \ET or transverse
momentum \pt, location (pseudorapidity and azimuthal angle), and quality. Similarly, special trigger signals delivered by various subsystems are used either to trigger or veto the trigger decision in a standalone way (``technical triggers'') or to be combined with other trigger signals into logical expressions (``external conditions''). These technical triggers (up to 64) are also used for monitoring and calibration of the various CMS subdetectors, including the L1 trigger system itself.
The trigger objects received from the GCT and GMT, and the input data
from the other subsystems are first synchronized to each other and to
the LHC orbit clock and then sent via the crate backplane to the global
trigger logic (GTL) module, where the trigger algorithm calculations
are performed. For the various trigger object inputs of each type
(four muons, four non-isolated and four isolated $\Pe/\Pgg$
objects, four central and four forward jets, four tau jets) certain
conditions are applied such as \ET or \pt being above a certain
threshold, pseudorapidity and/or azimuthal angle being within a
selected window, or requiring the difference in pseudorapidity and/or
azimuthal angle between two particles to be within a certain range. In addition, ``correlation conditions'' can be calculated,
\ie, the difference in pseudorapidity and azimuthal angle between two objects of different kinds. Conditions can also be applied to the trigger objects formed
using energy sums such as \ETm and \HT.
Several conditions are then combined by simple combinatorial logic (AND-OR-NOT) to form up to 128 algorithms. Any condition bit can be used either as a trigger or as a veto condition. The algorithm bits for each bunch crossing are combined into a ``final-OR'' signal by the final decision logic (FDL) module, where each algorithm can also be prescaled or blocked. An arbitrary number of
sets of prescales can be defined for the algorithms in a given
logic firmware version. A set of 128 concrete algorithms
form an L1 menu which together with the set of prescales
completely specifies the L1 trigger selection. The algorithms and the
thresholds of the utilized input objects (such as transverse momentum
or spatial constraints) are defined and hard-coded in firmware and are only changed by loading another firmware version.
Different prescale settings allow adjustment of the trigger rate
during a run by modifying the prescale values for identical
copies of algorithms differing only in input thresholds.
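The prescale mechanism can be sketched with a short toy model. The following Python fragment is an illustration only, not the FDL firmware; the convention that a prescale of zero blocks an algorithm is an assumption for the sketch.
\begin{verbatim}
class PrescaledAlgorithm:
    # Toy FDL prescale counter: with prescale N, every Nth positive
    # algorithm decision contributes to the final-OR.
    def __init__(self, prescale):
        self.prescale = prescale
        self.counter = 0

    def fire(self, raw_decision):
        if not raw_decision or self.prescale == 0:
            return False
        self.counter += 1
        if self.counter >= self.prescale:
            self.counter = 0
            return True
        return False

algos = [PrescaledAlgorithm(1), PrescaledAlgorithm(100)]
raw_bits = [True, True]  # per-algorithm decisions for one bunch crossing
decisions = [a.fire(bit) for a, bit in zip(algos, raw_bits)]
final_or = any(decisions)
print(final_or)
\end{verbatim}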
In case of a positive ``final-OR'' decision and if triggers are not blocked by trigger rules or detector deadtime, the TCS sends out an L1A signal to
trigger the readout of the whole CMS detector and forward all data to the HLT for further scrutiny.
Trigger rules are adjustable settings to suppress trigger requests coming too soon after one or several triggers, as in this case subsystems may not be ready to accept additional triggers~\cite{Varela:687458}. Sources of deadtime can be subsystems asserting ``not ready'' via the trigger throttling system~\cite{DAQ-TDR}, the suppression of physics triggers for calibration cycles, or the trigger rules described above.
The GT system logs all trigger rates and deadtimes in a database to allow for the correct extraction of absolute trigger cross sections from data. The trigger cross section is defined as $\sigma = R/\mathcal{L}$, where $R$ is the trigger rate and $\mathcal{L}$ is the instantaneous luminosity.
Over the years of CMS running, the GT system has proved to be a highly flexible tool: the trigger logic implemented in the firmware of two ALTERA FPGAs
(the L1 menu) was frequently updated to adapt to changing beam conditions, increasing data rates, and modified physics requirements
(details in Section~\ref{sec:triggermenus}). Additional subsystems (\eg, the TOTEM detector~\cite{TOTEMJINST}) have also been configured as a part of the L1 trigger system.
\subsection{Beam position timing trigger system}
\label{sec:bptx}
The two LHC beam position monitors closest to the interaction point
for each LHC experiment are reserved for timing measurements and are
called the Beam Pick-up Timing eXperiment (BPTX) detectors. For CMS,
they are located at a distance of approximately 175~m on either side
of the interaction point (BPTX+ and BPTX-).
The trigger selects valid bunch crossings using the digitized BPTX
signal by requiring a coincidence of the signals from the detectors on
either side (``BPTX\_AND'', the logical AND of BPTX+ and BPTX-).
To suppress noise in triggers with high background, a coincidence with
BPTX\_AND is required. Another important application has been the
suppression of pre-firing from the forward hadron calorimeter caused
by particles interacting in the photomultiplier anodes, rather than
the detector itself. As the LHC was mostly running with a bunch
spacing of 50~ns and thus there was at least one 25~ns gap without
proton collisions between two occupied bunch crossings, the trigger
discarded pre-firing events by vetoing the trigger for the ``empty
bunch crossing'' before a valid bunch crossing. This is achieved by
advancing the BPTX\_AND signal by one bunch crossing (25~ns time unit)
and using this signal to veto the L1 trigger signal (dubbed the ``pre-BPTX
veto''). This solution also improved the physics capabilities of the L1
trigger by enabling a search for heavy stable charged particles
(see Sec.~\ref{sec:HSCP} for details).
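A minimal sketch of the veto logic is given below; the per-BX boolean representation is an assumption for illustration.
\begin{verbatim}
def apply_pre_bptx_veto(l1_bits, bptx_and):
    # The BPTX_AND signal is advanced by one bunch crossing and used to
    # veto the L1 trigger, discarding pre-firing in the empty BX that
    # precedes a filled one.
    vetoed = []
    for bx, l1 in enumerate(l1_bits):
        advanced = bptx_and[bx + 1] if bx + 1 < len(bptx_and) else False
        vetoed.append(l1 and not advanced)
    return vetoed

# With 50 ns spacing, filled BXs alternate with empty ones:
bptx = [False, True, False, True, False]
print(apply_pre_bptx_veto([True] * 5, bptx))  # empty BXs before filled ones vetoed
\end{verbatim}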
\subsection{High-level trigger system }
\label{sec:HLTDAQ}
The event selection at the HLT is performed in a similar way to that used in the offline
processing. For each event, objects such as
electrons, muons, and jets are reconstructed and identification
criteria are applied in order to select only those events which are of
possible interest for data analysis.
The HLT hardware consists of a single processor farm composed of
commodity computers, the event filter farm (EVF), which runs Scientific Linux.
The event filter farm consists of filter-builder units. In the builder
units, individual event fragments from the detector are assembled to
form complete events. Upon request from a filter unit, the builder
unit ships an assembled event to the filter unit. The filter unit in
turn unpacks the raw data into detector-specific data structures and
performs the event reconstruction and trigger filtering. Associated
builder and filter units are located in a single multi-core machine
and communicate via shared memory. In total, the EVF executed on
approximately 13\,000 CPU cores at the end of 2012. More information
about the hardware can be found elsewhere~\cite{daqhlt2012}.
The filtering process uses the full precision of the data from the
detector, and the selection is based on offline-quality
reconstruction algorithms.
With the 2011 configuration of the EVF, the CPU power available
allowed L1 input rates of 100\unit{kHz} to be sustained for an average HLT
processing time of up to about 90\unit{ms} per event. With the increased CPU
power available in 2012, we were able to accommodate a time
budget of 175\unit{ms} per event.
Before data-taking started, the HLT was commissioned extensively using
cosmic ray data~\cite{cmsCollaboration:2010gj}. The HLT design
specification is described in detail in~\cite{HLT-2006}.
The data processing of the HLT is structured around the concept of an
\emph{HLT path}, which is a set of algorithmic processing steps run in
a predefined order that both reconstructs physics objects and makes
selections on these objects. Each HLT path is implemented as a
sequence of steps of increasing complexity, reconstruction refinement,
and physics sophistication. Selections relying on information from the
calorimeters and the muon detectors reduce the rate before the
CPU-expensive tracking reconstruction is performed. The
reconstruction modules and selection filters of the HLT use the
software framework that is also used for offline reconstruction and
analyses.
Upon completion, accepted events are sent to another software process,
called the storage manager, for archival storage. The event data are
stored locally on disk and eventually transferred to the CMS Tier-0
computing center for offline processing and permanent storage. Events
are grouped into a set of non-exclusive streams according to the HLT
decisions. Most data are processed as soon as possible; however, a
special ``parked'' data stream collected during 2012 consisted of
lower-priority data that was collected and not analyzed until after
the run was over~\cite{parked}. This effectively increased the amount
of data CMS could store on tape, albeit with a longer latency than
regular, higher-priority streams. Example physics analyses enabled by
the parked data stream include generic
final states created via vector boson fusion, triggered by four
low-momentum jets ($\ET> 75,55,38,20\GeV$, for the four jets) and
parton distribution function studies via Drell--Yan events at low
dimuon mass, triggered by two low-\pt muons ($\pt>17,8\GeV$, for the
two muons).
Globally, the output rate of the HLT is limited by the size of the
events and the ability of the downstream systems (CMS Tier-0) to
process the events. In addition to the primary physics stream,
monitoring and calibration streams are also written. Usually these
streams comprise triggers that record events with reduced
content, or with large prescales in order to avoid saturating the
data taking bandwidth. One example is the stream set up for
calibration purposes. These streams require very large data samples
but typically need information only from a small portion of the
detector, such that their typical event size is around 1.5\unit{kB}, while the
full event size is around 0.5\unit{MB}. Among the triggers that
define the calibration stream, two select events that are used for the
calibration of the ECAL. The first one collects minimum bias events
and only the ECAL energy deposits are recorded. By exploiting the
$\phi$ invariance of the energy deposition in physics events, this
sample allows inter-calibration of the electromagnetic calorimeter
within a $\phi$ ring. The second ECAL calibration trigger reconstructs
$\Pgpz$ and $\Pgh$ meson candidates decaying into two photons. Only
the ECAL energy deposits associated with these photons are kept. Due
to the small event size, CMS was able to record up to 14\unit{kHz} of
$\Pgpz/\Pgh$ candidates in this fashion~\cite{Chatrchyan:2013dga}. Figure~\ref{fig:calib_etapi}
shows the reconstructed masses for $\Pgpz$ and $\Pgh$ candidates
obtained from these calibration triggers during the 2012 run.
\begin{figure}[tbp]
\centering
\includegraphics[width=\textwidth]{figures/EB_Eta_Pi_2012}
\caption{Neutral pion (left) and $\eta$ (right) invariant mass
peaks reconstructed in the barrel with 2012 data. The spectra
are fitted with a combination of a double (single) Gaussian for
the signal and a 4th (2nd) order polynomial for the
background. The entire 2012 data set, collected with the special
online $\pi^0/\eta$ calibration streams, is used. The sample size is
determined by the rate of this calibration stream. Signal over
background (S/B) and the fitted resolution are indicated on the
plots. The fitted peak positions are not exactly at the nominal
$\pi^0/\eta$ mass values mainly due to the effects of
selective readout and leakage outside the $3{\times}3$ clusters
used in the mass reconstruction; however, the absolute mass
values are not used in the inter-calibration.}
\label{fig:calib_etapi}
\end{figure}
\label{sec:hltintro}
\section{Object identification}
\label{sec:objid}
In this section, the L1 and HLT selection of each object is discussed,
as well as the related main single- and double-object triggers using
those objects.
The object reconstruction is as similar as possible to the offline
one, but has more rigorous timing constraints imposed by the
limited number of CPUs. Section~\ref{sec:hpa} describes how these objects
are used in a
representative set of physics triggers.
We emphasize the track reconstruction in particular as it is used in
most of the trigger paths, either for lepton isolation or for particle-flow
(PF) techniques~\cite{CMS-pf1,CMS-pf2}.
\subsection{Tracking and vertex finding}
\label{sec:HLTTrack}
Tracking and vertex finding
are very important for reconstruction at the HLT.
A robust
and efficient tracking algorithm can help the reconstruction of
particles in many ways, such as improving the momentum resolution of
muons, tracking-based isolation, and $\rm b$-jet tagging. Since
track reconstruction is a CPU-intensive task, many strategies have
been developed to balance the need for tracks with the increase in CPU
time. In this section we describe the algorithm for reconstructing the
primary vertex of the collision in an efficient and fast manner using
only the information from the pixel detector, as well as the algorithm
for reconstructing HLT tracks.
More details about the tracking
algorithm used in CMS, both online and offline, can be found
elsewhere~\cite{Chatrchyan:2014fea}.
It is worth emphasizing that since the tracking detector data are not
included in the L1 trigger, the HLT is the first place where charged
particle trajectories can be included in the trigger decision.
\subsubsection{Primary vertex reconstruction}
\label{sec:primvtx}
In many triggers, knowledge of the position of the primary vertex is
required. To reconstruct the primary vertex without having to run the full (and
slow) tracking algorithm, we employ a special track reconstruction
pass requiring only the data from the pixel detector.
With these tracks, a simple gap-clustering algorithm is used for
vertex reconstruction~\cite{Chatrchyan:2014fea}. All tracks are ordered by the $z$ coordinate of
their point of closest approach to the $\Pp\Pp$ interaction point. Wherever two
neighboring elements in this ordered set of $z$ coordinates have a gap
exceeding a distance requirement $z_\text{sep}$, tracks on either side
are split into separate vertices. In such an algorithm, interaction
vertices separated by a distance less than $z_\text{sep}$ are merged.
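A minimal sketch of this gap clustering is given below; the value of $z_\text{sep}$ and the use of the mean $z$ as the vertex position are assumptions for illustration.
\begin{verbatim}
def gap_cluster_vertices(z_values, z_sep=0.2):
    # Track z coordinates are ordered, and a new vertex is started
    # wherever two neighboring z values differ by more than z_sep
    # (0.2 cm is an assumed value). Assumes a non-empty input.
    zs = sorted(z_values)
    vertices, current = [], [zs[0]]
    for z in zs[1:]:
        if z - current[-1] > z_sep:
            vertices.append(current)
            current = [z]
        else:
            current.append(z)
    vertices.append(current)
    # Report each vertex as the mean z of its tracks.
    return [sum(v) / len(v) for v in vertices]

print(gap_cluster_vertices([0.01, -0.03, 0.05, 5.1, 5.2]))  # two vertices
\end{verbatim}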
\begin{figure}[tbph]
\centering
\includegraphics[width=0.6\textwidth]{figures/vtx_pu}
\caption{Number of vertices as a function of the number of
$\Pp\Pp$ interactions as measured by the forward calorimeter, for fills
taken in two different periods of the 2012 $\Pp\Pp$ run. A linear relation
can be seen between the two quantities, demonstrating good
performance of the HLT pixel vertex algorithm.}
\label{Fig:vtx_pu}
\end{figure}
Figure~\ref{Fig:vtx_pu} represents the estimated number of interactions
versus the number of reconstructed pixel vertices for two periods,
with different pileup conditions. The number of interactions is
measured using the information from the HF,
which covers the pseudorapidity range $3 < \abs{\eta} < 5$. The method
used is the so-called ``zero counting'', which relies on the fact that
the mean number of interactions per bunch crossing ($\mu$)
has a probability density described by the Poisson
distribution. The average fraction of empty HF
towers is measured and then $\mu$ is calculated by inverting the
Poisson zero probability. Figure~\ref{Fig:vtx_pu} shows that in the 2012 data,
where the number of interactions per bunch crossing reached 30, the
number of reconstructed vertices depends linearly on the
number of pileup events for a wide range of values,
demonstrating no degradation of performance due to pileup.
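The zero-counting estimate itself amounts to inverting the Poisson zero probability, as in the short sketch below; the tower-dependent calibration constant is absorbed and set to 1, which is an assumption for illustration.
\begin{verbatim}
import math

def mu_from_zero_counting(fraction_empty):
    # For a Poisson-distributed number of interactions, the probability
    # that a given HF tower is empty is exp(-k*mu) for a tower-dependent
    # constant k; here k is absorbed into the calibration and set to 1.
    return -math.log(fraction_empty)

print(mu_from_zero_counting(math.exp(-20.0)))  # recovers mu = 20
\end{verbatim}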
With increasing number of pileup collisions, we observed that the
CPU time to reconstruct pixel tracks and pixel vertices increased
nonlinearly. For a few HLT paths, the CPU time usage is largely
dominated by the pixel track and vertex reconstruction time and
it is prohibitive to use the primary-vertex finding algorithm
described above.
A second method, called fast primary vertex finding, was implemented
to reduce the CPU time usage. This method initially finds a coarse
primary vertex and reconstructs only pixel tracks in jets associated
to this vertex. The pixel tracks are then used to find the online
primary vertex using the standard method described above. The coarse
vertex is found as follows: initially,
jets with $\pt > 40\GeV$ are considered. Pixel clusters in the $\phi$
wedges corresponding to the jets are selected and projected to the
beam axis using the jet pseudorapidity. The projections are then
clustered along the $z$ axis. If a vertex exists, the clusters will
group around the $z$ position of the vertex.
Roughly 5\% of the time, the coarse vertex is not found. In these
cases, the standard vertex reconstruction is run. The coarse vertex
has a resolution of 0.4 cm. By using the fast primary vertex finding,
the overall CPU time needed to reconstruct the vertex is reduced by a
factor 4 to 6, depending on the HLT path. The reduced CPU time requirement
allowed some additional paths to use b-tagging techniques than would not
have been possible with the standard algorithm.
The two methods have similar performance in reconstructing the online primary
vertex. The efficiency of the reconstruction relative
to offline is about 92\% within the vertex resolution.
The pixel tracks are also used in other reconstruction steps as
described in the following subsections.
\subsubsection{HLT tracking}
Given the variety of the reconstructed objects and the fast changes in
the machine conditions, it has been impossible to adopt a unique full
silicon track reconstruction for all the paths. Different objects
ended up using slightly different tracking configurations, which had
different timing, efficiencies, and misreconstruction rates. All configurations use
a combinatorial track finder (CTF) algorithm, which consists of four
steps (a minimal toy sketch is given after the list):
\begin{enumerate}
\item The seed generation provides initial track candidates using a
few (two or three) hits and the constraint of the $\Pp\Pp$ interaction point
position. A seed defines the initial estimate of the trajectory,
including its parameters and their uncertainties.
\item The next step is based on a global Kalman
filter~\cite{Fruhwirth:1987fm}. It extrapolates the seed
trajectories along the expected flight path of a charged particle,
searching for additional hits that can be assigned to the track
candidate.
\item The track fitting stage uses another Kalman filter and smoother
to provide the best possible estimate of the parameters of each
trajectory.
\item Finally, the track selection step sets quality flags and
discards tracks that fail minimum quality requirements.
\end{enumerate}
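The following Python fragment is a toy version of these four stages for straight-line tracks in a (layer, $y$) geometry; it is an illustration under stated assumptions, with a nearest-hit search standing in for the Kalman filter and a least-squares refit standing in for the fitting stage.
\begin{verbatim}
def build_tracks(seeds, layers, window=0.5, max_hits=8):
    # Each seed (y0, slope) is extrapolated layer by layer and the
    # nearest hit inside the search window is attached; only one
    # candidate per seed is kept, as in the HLT configuration, and the
    # search stops once max_hits hits have been assigned.
    tracks = []
    for y0, slope in seeds:
        hits = []
        for x, layer_hits in enumerate(layers):
            pred = y0 + slope * x
            near = [y for y in layer_hits if abs(y - pred) < window]
            if near:
                hits.append((x, min(near, key=lambda y: abs(y - pred))))
            if len(hits) >= max_hits:
                break
        if len(hits) >= 3:  # final selection: minimum quality requirement
            # Least-squares straight-line refit (the "track fitting" stage).
            n = len(hits)
            sx = sum(x for x, _ in hits)
            sy = sum(y for _, y in hits)
            sxx = sum(x * x for x, _ in hits)
            sxy = sum(x * y for x, y in hits)
            slope_fit = (n * sxy - sx * sy) / (n * sxx - sx * sx)
            y0_fit = (sy - slope_fit * sx) / n
            tracks.append({"y0": y0_fit, "slope": slope_fit, "hits": hits})
    return tracks

layers = [[0.0, 3.0], [1.1, 5.0], [2.0], [2.9, 7.0]]
print(build_tracks([(0.0, 1.0)], layers))
\end{verbatim}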
Each of these steps is configurable to reduce the time at the cost of
slightly degraded performance. As an example, when building track
candidates from a given seed, the offline track reconstruction retains
at most five partially reconstructed candidates for extrapolation
to the next layer, while at HLT only one is kept. This ensures little
time increase in the presence of large occupancy events and high
pileup conditions. As another example, the algorithm stops once a
specified number of hits have been assigned to a track (typically
eight). As a consequence, the hits in the outermost layers of the
tracker tend not to be used. The different tracking configurations
can be divided into four categories:
\begin{itemize}
\item Pixel-only tracks, \ie, tracks consisting of only three
pixel hits. As stated above, the pixel-based tracking is
considerably faster than the full tracking, but pixel tracks have
much worse resolution. They are mostly used to build the primary
vertex, and are also used in parts of the b- and
$\tau$-identification stages, as well as to build
the seeds for the first iteration of the iterative tracking.
\item Iterative tracking, \ie, a configuration which is as similar as
possible to that used offline. This is used as input to the
PF reconstruction.
\item Lepton isolation, \ie, a regional one-step tracking used in
paths with isolated electrons and muons. On average, higher-\pt tracks are
reconstructed in comparison to the iterative tracking method and as a result
this variant is somewhat more time consuming than the iterative tracking.
\item b tagging, \ie, a regional one-step tracking similar to the one
used for lepton isolation.
\end{itemize}
The iterative tracking approach is designed to reconstruct tracks in decreasing order of complexity. In the early iterations, easy-to-find tracks, which have high \pt and small impact parameters, are reconstructed. After each iteration, hits associated with the found tracks are removed, which reduces the combinatorial complexity and allows a more effective search for lower-\pt or highly displaced tracks. For data
collected in 2012, the tracking consisted of five iterations, similar
(but not identical) to those run offline. The main difference between
each iteration lies in the configuration of the seed generation and
final track selection steps.
The first iteration is seeded with three pixel hits. Each pixel track
becomes a seed. The seeds in this iteration are not required to be
consistent with the primary vertex position. For the other iterations,
only seeds compatible with the primary vertex $z$ position are used.
In the first iteration, we attempt to reconstruct tracks across the
entire detector. For speed reasons, later iterations are seeded
regionally, \ie, only seeds in a given $\eta$-$\phi$ region
of interest are considered. These regions are defined using the
$\eta$-$\phi$ direction of jets from tracks reconstructed in the
previous iterations. Unfortunately, due to hit inefficiency in the
pixel detector and the requirement of hits in each of the three pixel
layers in this step, 10--15\% of isolated tracks may be lost.
This leads to an efficiency loss for one-prong $\tau$ lepton decays, which
is recovered by adding extra regions based on the $\eta$-$\phi$
direction of isolated calorimeter jets. Finally, after the five
iterations, all tracks are grouped together (adding the separately
reconstructed muon tracks), filtered according to quality criteria and
passed to the PF reconstruction.
\begin{figure}[tbph]
\centering
\includegraphics[width=0.6\textwidth]{figures/trackingEfficiency}
\caption{ Tracking efficiency as a function of the momentum of the
reconstructed particle, for the HLT and offline tracking, as
determined from simulated \ttbar events. Above 0.9\GeV,
the online efficiency is above 80\% and plateaus at around 90\%.}
\label{fig:trackingEfficiency}
\end{figure}
Figure~\ref{fig:trackingEfficiency} shows the offline and online track
reconstruction efficiency on simulated top-antitop ($\PQt\PAQt$) events. Online
efficiencies are above 80\% for track \pt above 0.9\GeV.
\begin{figure}[tbph]
\centering
\includegraphics[width=0.6\textwidth]{figures/iterativetrackingtiming}
\caption{The CPU time spent in the tracking reconstruction as a function
of the average pileup, as measured in pp data taken during the
2012 run. The red line shows a fit to data with a second-order polynomial. On average, about 30\% of the total CPU time of the HLT
was devoted to tracking during this run. }
\label{iterativetrackingtiming}
\end{figure}
Figure~\ref{iterativetrackingtiming} shows the time taken by the
iterative track reconstruction as a function of the average pileup.
As already discussed, the time spent in tracking is too high to allow
running the tracking on every L1-accepted event.
To limit the computing time, HLT tracking was run only on a subset of events passing a set of filters,
keeping it to about 30\% of the
total HLT CPU time.
\subsection{Electron and photon triggers}
\label{sec:egamma_overview}
The presence of high-\pt leptons and photons is a strong
indicator for interesting high-$Q^2$ collisions and consequently much
attention has been devoted to an efficient set of triggers for these processes.
Electrons and photons (EG or ``electromagnetic objects'') are reconstructed
primarily using the lead-tungstate electromagnetic calorimeter.
Each electromagnetic object deposits most of its energy in this detector, with little energy in the hadron calorimeter. The transverse shower size is of
the order of one crystal. Electrons and photons are distinguished from
one another by the presence of a track pointing to the cluster for
electrons, and its absence for photons. At L1,
only information from the calorimeter is available and no distinction
can be made between $\Pe$ and $\Pgg$. At the HLT level, tracks are
used to resolve this ambiguity.
\subsubsection{L1 electron / photon identification}
\paragraph{L1 electron / photon trigger performance}
\subparagraph{The L1 electron trigger resolution}
Offline reconstructed electrons are matched to L1 EG candidates by
looking for the RCT region which contains the highest energy trigger tower (TT)
within the electron supercluster (SC)~\cite{cms-e7,cms-e8}. In order to extract the resolution, the supercluster transverse energy reconstructed offline is compared to the
corresponding L1 candidate \ET. Figure~\ref{fig:l1reso} shows the distribution of the L1 EG
trigger resolution, offline reconstructed \ET minus L1 \ET divided by offline reconstructed \ET, in the barrel and endcap regions. The
same observable is displayed as a function of the electron offline
supercluster \ET and $\eta$ in Fig.~\ref{fig:l1resoETeta}. Above 60\GeV, the resolution starts to degrade as the L1
saturation is reached.\footnote{The ECAL trigger primitives saturate at 127.5\GeV
and RCT EG candidates at 63.5\GeV.}
\begin{figure}
\centering
\includegraphics[width=0.467\textwidth]{figures/L1EGresolutionEB.pdf}
\includegraphics[width=0.48\textwidth]{figures/L1EGresolutionEE.pdf}
\caption{The L1 EG resolution, reconstructed offline \ET minus L1 \ET divided by reconstructed offline \ET, in the
barrel (left) and endcap (right) regions. For both distributions, a
fit to a Crystal Ball function is performed. In the right plot, the red solid
line shows the result after applying the transparency corrections (as
discussed in Sec.~\ref{sec:trans}). For EB, the resolution after transparency correction is unchanged.}
\label{fig:l1reso}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{figures/L1EGresolutionVsEta.pdf}
\caption{
The L1 EG resolution for all
electron \pt as a function of pseudorapidity $\eta$. For each $\eta$
bin, a fit to a Crystal Ball function was used to model the data distribution.
The vertical bars on each point represent
the standard deviation of the fitted function, defined as the
half-width of the interval containing 68\% of its area.
The red points show the improved resolution after
applying transparency corrections (as discussed in
Sec.~\ref{sec:trans}).
}
\label{fig:l1resoETeta}
\end{figure}
The resolution of L1 EG candidates (Fig.~\ref{fig:l1reso})
is reasonably well described by a fit to a Crystal Ball function~\cite{Oreglia:1980cs}. An electron supercluster can spread
its energy over a large region of the calorimeter due to the emission
of photons from bremsstrahlung. The L1 EG algorithm only aggregates
energy in 2 trigger towers (Section~\ref{sec:ecaltpg}). For this reason, the probability to
trigger is reduced for electrons propagating across
a significant amount of material. This effect increases with the
pseudorapidity and peaks in the transition region between the EB and
the EE. Figure~\ref{fig:l1resoETeta} illustrates this effect by showing the L1 EG resolution as a function of $\eta$. Additional
effects, such as the change of the ECAL crystal transparency with time,
degrade the resolution further (see
Sec.~\ref{sec:trans}). The resolutions shown in
Figs.~\ref{fig:l1reso} and \ref{fig:l1resoETeta} were obtained after
correcting for this effect.
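For reference, a standard parameterization of the Crystal Ball function, a Gaussian core matched to a power-law low-side tail, is
\begin{linenomath*}
\begin{equation*}
f(x;\alpha,n,\bar{x},\sigma) = N \cdot
\begin{cases}
\exp\left(-\frac{(x-\bar{x})^2}{2\sigma^2}\right), & \text{for } \frac{x-\bar{x}}{\sigma} > -\alpha, \\
A \left( B - \frac{x-\bar{x}}{\sigma} \right)^{-n}, & \text{otherwise},
\end{cases}
\end{equation*}
\end{linenomath*}
with $A = (n/\abs{\alpha})^{n} \exp(-\abs{\alpha}^2/2)$, $B = n/\abs{\alpha} - \abs{\alpha}$, and $N$ a normalization factor; the exact convention adopted in the fits presented here may differ.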
\subparagraph{L1 electron trigger efficiency}
The electron trigger efficiency was measured with electrons from
$\Z\to\Pe\Pe$ events, using a tag-and-probe method~\cite{cms-wz}.
The data collected in 2011 and 2012 were used. Both the tag and
the probe are required to pass tight identification requirements in order to
reduce significantly the background contamination. The tag electron
must also trigger the event at L1, while the probe electron is used for
the efficiency studies. The invariant mass of the tag-and-probe system
should be consistent with the \Z boson mass ($60 < M_{\Pe\Pe} <
120\GeV$), resulting in a pure unbiased electron
data sample. The trigger efficiency is given by the fraction of probes
above a given EG threshold, as a function of the probe \ET. In order
to trigger, the location of the highest energy TT within the electron
supercluster must match a corresponding region of an L1 candidate in
the RCT.
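The counting underlying this measurement can be sketched as follows; the event interface used below (\texttt{invariant\_mass}, \texttt{fires\_l1}, \texttt{matched\_l1\_et}) is hypothetical and is shown only to make the procedure concrete.
\begin{verbatim}
# Schematic tag-and-probe counting; the event interface
# (invariant_mass, tag.fires_l1, probe.matched_l1_et) is hypothetical.
def tnp_efficiency(pairs, eg_threshold):
    n_pass = n_all = 0
    for tag, probe in pairs:
        if not (60.0 < invariant_mass(tag, probe) < 120.0):
            continue                     # keep Z->ee candidates only
        if not tag.fires_l1():           # tag must trigger the event
            continue
        n_all += 1
        if probe.matched_l1_et() >= eg_threshold:
            n_pass += 1
    return n_pass / n_all if n_all else 0.0
\end{verbatim}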
The trigger efficiency curves are shown in Fig.~\ref{fig:l1eg15perf}
for an EG threshold of 15\GeV. The \ET on the $x$ axis is obtained
from the fully reconstructed offline energy. In the EE this includes
the pre-shower energy that is not available at L1. As a consequence, the
trigger efficiency turn-on point for the EE is shifted to the right
with respect to the EB. For both EB and EE, corrections for crystal
transparency changes were not included at L1 in 2011, which further
affects the turn-on curve (Sec.~\ref{sec:trans}). The width of the
turn-on curves is partly determined by the coarse trigger granularity,
since only pairs of TTs are available for the formation of L1
candidates, which leads to lower energy resolution at L1. An unbinned
likelihood fit was used to derive the efficiency
curves. Parameters of the turn-on curves are given in
Table~\ref{tab:fitbox}. Table~\ref{tab:fitboxtrans} compares the
parameters of the EE turn-on curve before and after transparency
corrections are applied (Fig.~\ref{fig:transl1}).
\begin{figure}[h]
\centering
\begin{minipage}{0.47\textwidth}
\includegraphics[width=\textwidth]{figures/L1EG15Efficiency2011.pdf}
\caption{\label{fig:l1eg15perf} The electron trigger efficiency at L1 as a
function of offline reconstructed \ET for electrons in the EB (black dots) and EE (red
dots), with an EG threshold: $\ET=15\GeV$. The curves show unbinned
likelihood fits.}
\end{minipage}\hfill
\begin{minipage}{0.47\textwidth}
\includegraphics[width=\textwidth]{figures/L1EG15EfficiencyTransparency2011.pdf}
\caption{\label{fig:transl1}
The EE L1 electron trigger efficiency
as a function of offline reconstructed \ET
before (red) and after (green) transparency corrections are
applied at the ECAL TP level. The curves show unbinned likelihood fits.}
\end{minipage}
\end{figure}
\begin{table}[h!]
\centering
\begin{minipage}[t]{0.47\textwidth}
\topcaption{\label{tab:fitbox} The L1 electron trigger turn-on curve
parameters. This table gives the electron \ET thresholds for which
an efficiency of 50\%, 95\% and 99\% are reached for EB and EE
separately. The last entry corresponds to the efficiency obtained at
the plateau of each curve shown in Figure~\ref{fig:l1eg15perf}.}
\vfill
{\renewcommand{\arraystretch}{1.2}
\resizebox{\textwidth}{!}{
\begin{tabular}{|lll|}
\hline
\multicolumn{1}{c}{EG15} & \multicolumn{1}{c}{EB} & \multicolumn{1}{c}{EE} \\
\hline
50\% & $16.06^{+0.01}_{-0.01}$\GeV & $19.11^{+0.03}_{-0.06}$\GeV \\
95\% & $22.46^{+0.04}_{-0.05}$\GeV & $27.05^{+0.01}_{-0.01}$\GeV \\
99\% & $28.04^{+0.07}_{-0.10}$\GeV & $34.36^{+0.01}_{-0.01}$\GeV \\
\hline
100\GeV & $99.95^{+0.01}_{-0.88}$ \% & $99.84^{+0.06}_{-0.60}$ \% \\
\hline
\end{tabular}
}}
\end{minipage}\hfill
\begin{minipage}[t]{0.47\textwidth}
\topcaption{\label{tab:fitboxtrans} The EE L1 electron trigger turn-on curve
parameters. This table gives the electron \ET thresholds for which
an efficiency of 50\%, 95\% and 99\% are reached before and after
transparency corrections are applied. The last entry corresponds to the
efficiency obtained at the plateau of each curve shown in
Figure~\ref{fig:transl1}.}
{\renewcommand{\arraystretch}{1.2}
\resizebox{\textwidth}{!}{
\begin{tabular}{|lll|}
\hline
\multicolumn{1}{c}{EG15} & \multicolumn{1}{c}{EE} & \multicolumn{1}{c}{EE (corr)}\\
\hline
50\% & $19.11^{+0.03}_{-0.06}$\GeV & $17.79^{+0.03}_{-0.06}$\GeV \\
95\% & $27.05^{+0.01}_{-0.01}$\GeV & $24.46^{+0.10}_{-0.23}$\GeV \\
99\% & $34.36^{+0.01}_{-0.01}$\GeV & $30.78^{+0.21}_{-0.48}$\GeV \\\hline
100\GeV & $99.84^{+0.06}_{-0.60}$ \% & $99.89^{+0.01}_{-0.67}$ \% \\
\hline
\end{tabular}
}}
\end{minipage}
\end{table}
In the EE, the material in front of the detector causes more
bremsstrahlung, which together with the more complex TT geometry,
causes the turn-on curve to be wider than that for the EB. Some masked
or faulty regions (0.2\% in EB and 1.3\% in EE) result in the plateaus
being slightly lower than 100\% (99.95\% in EB and 99.84\% in EE) as
shown in Table~\ref{tab:fitbox}. The effect on efficiency of the L1
spike removal~\cite{cms-spike}, described in Sec.~\ref{sec:spikes},
is negligible, but will require further optimization as the number of
collisions per bunch crossing increases in the future. Turn-on curves
for various EG thresholds are shown in Fig.~\ref{fig:turnoncurves}, and
Table~\ref{tab:turnoncurves} gives their turn-on points, \ie, the \ET value where the curve attains 50\% efficiency.
\begin{figure}
\centering
\resizebox{0.48\textwidth}{!}{\includegraphics{figures/L1EGefficiencyAllEE.pdf}}
\resizebox{0.48\textwidth}{!}{\includegraphics{figures/L1EGefficiencyAllEB.pdf}}
\caption{The L1 electron triggering efficiency as a function of the
reconstructed offline electron \ET for barrel (left) and endcap
(right). The efficiency is shown for the EG12, EG15, EG20 and EG30
L1 trigger algorithms. The curves show unbinned likelihood fits.}
\label{fig:turnoncurves}
\end{figure}
\begin{table}
\centering
\topcaption{Turn-on points for the EG12, EG15, EG20, and EG30 L1
trigger algorithms shown in Fig.~\ref{fig:turnoncurves}.}
\begin{tabular}{|lcccc|}
\hline
EG Threshold (\GeVns{}) & 12 & 15 & 20 & 30 \\
\hline
EB turn-on \ET (\GeVns{}) & 12 & 16.1 & 20.7 & 29.9 \\
EE turn-on \ET (\GeVns{}) & 13 & 19.1 & 24.6 & 33.7 \\
\hline
\end{tabular}
\label{tab:turnoncurves}
\end{table}
Figures~\ref{fig:finalEGeff2011} and \ref{fig:finalEGeff2012} show the
comparison of the EG20 algorithm performance obtained in 2011 and
2012. In the latter, the turn-on curve in the EE is closer to that in the EB. The
optimization of the ECAL trigger primitive generation (spike-killing
procedure and ECAL crystal transparency corrections) and of the RCT
calibration allowed the lowest possible unprescaled trigger threshold to be retained during physics runs.
\begin{figure}[h]
\begin{minipage}[t]{0.49\textwidth}
\includegraphics[width=\textwidth]{figures/L1EG20Efficiency2011.pdf}
\caption{\label{fig:finalEGeff2011} Electron trigger efficiency at L1, as a function of offline reconstructed \ET for
electrons in the EB (black dots) and EE (red squares) using the 2011
data set (EG threshold: $\ET=20\GeV$). The curves show unbinned
likelihood fits.}
\end{minipage}\hfill
\begin{minipage}[t]{0.47\textwidth}
\includegraphics[width=.995\textwidth]{figures/L1EG20Efficiency2012.pdf}
\caption{\label{fig:finalEGeff2012} Electron trigger efficiency at L1
as a function of offline reconstructed \ET for
electrons in the EB (black dots) and EE (red squares) using the 2012
data set (EG threshold: $\ET=20\GeV$). The curves show unbinned
likelihood fits.}
\end{minipage}
\end{figure}
\paragraph{L1 EG trigger rates}
The EG trigger rates were obtained from the analysis of a dedicated data stream,
containing only L1 trigger information, that was collected at high rate on the
basis of L1 decision only.
For the study, events were selected using BPTX\_AND trigger
coincidences. This selection provides unbiased information about the
L1 EG trigger response. In this fashion, it was possible to apply
requirements related to the presence of L1 EG candidates with a given
\ET threshold and pseudorapidity acceptance region within the
analysis.
Rates of isolated and nonisolated single-EG
triggers are presented in Fig.~\ref{figure:L1SingleEGRates}. During
the 2012 run, isolated EG trigger algorithms were
restricted to $\abs{\eta}<2.172$ at the GT level.
Rates were calculated using
data collected with luminosities between $4.5$ and $5.5 \times
10^{33}\percms$ (for an average luminosity of $4.94 \times
10^{33}\percms$), and rescaled to a target instantaneous luminosity
of $5 \times 10^{33}\percms$. Uncertainties stemming from this
small approximation
are well within the fluctuations caused by data acquisition deadtime
variations.
\begin{figure}[tbph]
\centering
\includegraphics[width=0.6\textwidth]{figures/L1EGRateVsPtCut}
\caption{Rates of the isolated and nonisolated versions of the
single-EG trigger versus the transverse energy threshold rescaled
to an instantaneous luminosity of $5 \times 10^{33}\percms$.
Isolated EG rates are computed within a pseudorapidity range of
$\abs{\eta}<2.172$ to reflect the configuration of the L1 isolated EG
algorithms used in 2012. }
\label{figure:L1SingleEGRates}
\end{figure}
\subsection{Online anomalous signals and their suppression}
\label{sec:spikes}
Anomalous signals were observed in the EB shortly after collisions
began in the LHC: these were identified as being due to direct
ionization within the APDs, thus producing spurious isolated signals
with high apparent energy. These {\it spikes} can induce large trigger
rates at both L1 and HLT if not removed from the trigger decision. On
average, one spike with $\ET >$ 3\GeV is observed per 370 minimum bias
triggers in CMS at $\sqrt{s}$ = 7\TeV. If untreated, as many as 60\% of
trigger objects containing only ECAL energy above a threshold of
12\GeV would be caused by spikes. At high luminosity these would be
the dominant component of the 100\unit{kHz} CMS L1 trigger rate
bandwidth~\cite{spike-petyt}. Spike identification and removal
strategies were developed, based on specific features of these
anomalous signals. In the ECAL the energy of an electromagnetic (EM)
shower is distributed over several crystals, with up to 80\% of the
energy in a central crystal (where the electron/photon is incident)
and most of the remaining energy in the four adjacent crystals. This
lateral distribution can be used to discriminate spikes from EM
signals. A topological variable $s=1-E_4/E_1$ ($E_1$: \ET of the
central crystal; $E_4$: summed \ET of the four adjacent crystals)
named ``Swiss-cross'' was implemented offline to serve this purpose. A
similar topological variable was also developed for the on-detector
electronics, a strip fine grain veto bit (sFGVB). Every
TP has an associated sFGVB that is set to 1 (signifying a true EM
energy deposit) if any of its 5 constituent strips has at least two
crystals with \ET above a programmable trigger sFGVB threshold,
of the order of a few hundred \MeVns{}. If the sFGVB is set to zero, and
the trigger tower \ET is greater than a trigger killing
threshold, the energy deposition is considered spike-like. The
trigger tower energy is set to zero and the tower will not contribute
to the triggering of CMS for the corresponding event.
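Both discriminants can be expressed compactly. The sketch below is illustrative only: the thresholds are the example values quoted in the text, and the names do not correspond to the actual firmware or software.
\begin{verbatim}
# Sketch of the offline Swiss-cross variable and the online
# sFGVB-based killing decision; inputs in GeV, names illustrative.
def swiss_cross(e1, e4_neighbors):
    # s = 1 - E4/E1: close to 1 for an isolated spike-like deposit,
    # small for a genuine EM shower spread over several crystals
    return 1.0 - sum(e4_neighbors) / e1

def keep_trigger_tower(tt_et, sfgvb, killing_threshold=8.0):
    # sFGVB = 0 (no strip with >= 2 crystals above threshold) together
    # with a large tower ET flags the deposit as spike-like
    return not (sfgvb == 0 and tt_et > killing_threshold)
\end{verbatim}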
As the sFGVB threshold is a single value, the electron or photon
efficiency depends upon the particle energy: the higher the threshold,
the more low-energy genuine EM deposits would be flagged as
spikes. However, such deposits may not exceed the killing
threshold, in which case they would still be accepted. With a very low
sFGVB threshold, spikes may escape rejection because noise in
neighboring crystals can set the veto bit. A detailed emulation of the full L1 chain was developed
in order to optimize the two thresholds to remove as large a fraction
of the anomalous signals as possible whilst maintaining excellent
efficiency for real electron/photon signals. In order to determine the
removal efficiency, data were taken in 2010 without the killing
thresholds active. Using the Swiss-cross method, spike signals were
identified offline. Those signals were then matched to L1
candidates in the corresponding RCT region and the emulator used to
evaluate the fraction of L1 candidates that would have been
eliminated. In a similar fashion the efficiency for triggering on
genuine electrons or photons could be estimated.
Three killing thresholds were emulated ($\ET = 8$, 12, and 18\GeV), combined with six sFGVB thresholds (152, 258, 289, 350, 456, 608\MeV). Figure~\ref{fig:spikeperf} shows the electron efficiency
(fraction of electrons triggered after spike removal) versus the L1
spike rejection fraction, for all sFGVB thresholds mentioned above
(one point for each threshold value) and a killing threshold of 8\GeV. The optimum configuration was chosen to be an sFGVB threshold of
258\MeV and a killing threshold of 8\GeV. This corresponds to a
rejection of 96\% of the spikes, whilst maintaining a trigger
efficiency for electrons above 98\%. With these thresholds the
efficiency for higher energy electrons is even larger: 99.6\% for
electrons with $\ET >20$\GeV.
\begin{figure}[h]
\centering
\includegraphics[width=0.48\textwidth]{figures/spikekillingopt.pdf}
\caption{\label{fig:spikeperf}Electron trigger efficiency as a
function of the spike rejection at L1. Each point corresponds to a
different spike removal trigger sFGVB threshold. The trigger killing
threshold is set to 8\GeV. The data were taken in 2010.}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.48\textwidth]{figures/spikeEGcontamination.pdf}
\caption{\label{fig:spikepileup}Fraction of spike-induced EG triggers
as a function of the number of reconstructed vertices.
The red points represent the spike removal working point used in 2011, and
the green points the optimized working point for 2012.
The squares (triangles) correspond to higher (lower) pileup data.}
\end{figure}
Table~\ref{tab:spikerate} summarizes the rate reduction factors
obtained for L1 EG algorithms considering the working point discussed
above. This optimized configuration was tested online at the beginning
of 2011. It gave a rate reduction factor of about 3 (for an EG
threshold of 12\GeV), and up to a factor of 10 for \ET sum triggers
(which calculate the total EM energy in the whole calorimeter system).
\begin{table}[h]
\caption{\label{tab:spikerate}Rate reduction factors obtained for
L1 EG algorithms (considering a 258\MeV sFGVB threshold
and an 8\GeV killing threshold on the ECAL Trigger Primitives) for various EG thresholds.}
\centering
\begin{tabular}{|lllll|}
\hline
EG Threshold (\GeVns{}) & 12 & 15 & 20 & 30\\
\hline
Rate reduction factors & 3.4 & 4.3 & 6.0 & 9.6 \\
\hline
\end{tabular}
\end{table}
At the end of 2011 the average pileup had peaked at 16.15, and in 2012
the highest average pileup was 34.55. Efficient identification of EM
showers at trigger level became more and more challenging. As pileup
events act as noise in the calorimeter, they degraded trigger object
resolution and reduced the probability of observing isolated
spikes. The fraction of spike-induced EG triggers, measured as a
function of the number of vertices (roughly equivalent to the number
of pileup events), is shown in Fig.~\ref{fig:spikepileup}. The fraction of
spike-induced EG triggers reaches 10\% for collisions including more
than 20 pileup events (red points). Using the L1 trigger emulator, a
more efficient working point (sFGVB threshold~=~350\MeV, killing
threshold = 12\GeV) for the spike removal algorithm reduces this
fraction to 6\% (green points), while preserving the same high
trigger efficiency for genuine electrons and photons.
\subsubsection{HLT electron and photon identification}
\label{sec:egammaHLT}
The HLT electron and photon identifications begin with a regional
reconstruction
of the energy deposited in the ECAL crystals around the L1 EM
candidates. This is followed by the building of the supercluster using
offline
reconstruction algorithms~\cite{cms-e8}.
Electron and photon candidates are initially selected based on the
\ET of the supercluster and on criteria based on properties of the
energy deposits in the ECAL and HCAL subdetectors. Selection
requirements include a cluster shape variable
$\sigma_{\mathrm{i}\eta\mathrm{i}\eta}$ (the root-mean-square of the
width in $\eta$ of the shower)~\cite{cms-e8} and an isolation requirement
that limits the additional energy deposits in the ECAL in a cone
around the EM candidate with outer cone size of
$\DR\equiv\sqrt{\smash[b]{{\Delta\phi}^2 + {\Delta\eta}^2}}=0.3$, and inner
cone radius corresponding to the size of three ECAL crystals
($\DR=0.05$ in the barrel region). Energy deposits in a strip along
$\phi$, centered on the ECAL position of the EM candidate and with an
$\eta$-width of three crystals, are also
excluded. Candidates are then required to satisfy selection
criteria based on the ratio of the HCAL energy in a cone of size
$\DR=0.3$ centered on the SC, to the SC energy.
These requirements typically reduce the trigger rate by a factor of 3--4, reaching
10 for the tightest selection used in 2012. The thresholds are
such that, after this set of calorimetric criteria, the rates of
electron candidates are about 1\unit{kHz}.
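A minimal sketch of this annular isolation sum is given below, assuming simple deposit objects with \texttt{eta}, \texttt{phi}, and \texttt{et} fields; the strip half-width of 0.026 (approximately three crystals in $\eta$) is an assumption made for illustration.
\begin{verbatim}
# Illustrative ECAL isolation around an EM candidate: annulus
# 0.05 < DR < 0.3, minus a 3-crystal-wide eta strip along phi.
import math

def ecal_isolation(cand, deposits, outer=0.3, inner=0.05,
                   strip_half_width=0.026):
    iso = 0.0
    for d in deposits:
        deta = d.eta - cand.eta
        dphi = math.atan2(math.sin(d.phi - cand.phi),
                          math.cos(d.phi - cand.phi))
        dr = math.hypot(deta, dphi)
        if not (inner <= dr < outer):
            continue                 # keep the annulus only
        if abs(deta) < strip_half_width:
            continue                 # phi-strip veto (bremsstrahlung)
        iso += d.et
    return iso
\end{verbatim}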
The previously described steps are common to electron and photon selection.
Photon candidate selection imposes an additional
isolation requirement based on tracks reconstructed in a cone around
the photon candidate. In some trigger paths extra requirements are
needed to keep the rate at an acceptable level. The
$\RNINE \equiv E_{3{\times}3}/E_{SC}$ variable, where $E_{3{\times}3}$
denotes the energy deposited in a small window of $3{\times}3$ crystals
around the most energetic crystal in the SC, is very effective in
selecting good unconverted photons
even in the presence of large pileup. Finally, to
distinguish electrons from photons, a nearby track is required, as
described later in this section.
An improvement deployed in the $\Pe/\Pgg$ triggers in 2012 was the use
of corrections for radiation-induced changes in the transparency of
the crystals in the endcap ECAL~\cite{Chatrchyan:2013dga}.
A new set of
corrections was deployed weekly.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.5\textwidth]{figures/laser_corrections}
\caption{Efficiency of the online \ET selection as a function of the
offline electron \ET, in the barrel and endcap regions, before
and after the deployment of online transparency corrections. The
results refer to a double-electron trigger requiring
$\pt>33\GeV$ for both legs, and show that applying the corrections
significantly improves the online turn-on curve.}
\label{fig:laser_corrections}
\end{figure}
Figure~\ref{fig:laser_corrections} shows that the introduction of
these corrections in the trigger significantly improved the
performance of the electron trigger in the endcap. The turn-on curve
refers to a double-electron trigger requiring a 33\GeV threshold
for both legs.
\paragraph{Double-photon trigger efficiency.}
The tag-and-probe method with $\Z \to\Pe\Pe$ events is used to
measure trigger efficiencies from the data. For photon triggers, the
probe electron is treated as a photon and the electron SC is required
to pass photon selection requirements. Events are selected from the
double-electron data set with the loosest prescaled tag-and-probe
trigger path. Since this path requires only one electron passing the
tight HLT selection for the leading leg of the trigger, the other electron,
which is only required to pass a very loose filter on its SC
transverse energy, is sufficiently unbiased such that it is suitable
for our measurement. We then require at least one offline electron to
match the HLT electron leg, and at least two offline photons to match
the HLT electron and the HLT SC leg, respectively. The two offline
photons are required to have an invariant mass compatible with the \Z
boson (between 70\GeV and 110\GeV), and to pass offline \pt threshold of
30\GeV and 22.5\GeV, respectively. Finally the event is required to
pass offline photon and event selections, \eg, for the \HGG\xspace
measurement.
The photon matched to the HLT electron leg is also required to match
to an L1 $\Pe/\Pgg$ isolated object with $\ET >22\GeV$. This photon is
considered to be the tag, while the other one is the probe. Each trigger step
is measured separately and, to account for the fact that electrons and
photons have different \RNINE distributions, each electron pair used
for the trigger efficiency measurement is weighted so that the \RNINE
distribution of the associated SCs matches that of simulated
photons. The net effect is an increase of the measured efficiency due
to the migration of the events towards higher \RNINE values.
\begin{figure}[tbh]
\centering
\includegraphics[height=150pt,trim={0 .2cm 0 0},clip]{figures/HLT_OR_pt_26.pdf}
\includegraphics[height=151pt]{figures/HLT_OR_eta_26.pdf}
\caption{Efficiencies of the leading leg for the double-photon
trigger as a function of the photon transverse energy (left) and
pseudorapidity (right), as described in the text. The red symbols
show the efficiency of the isolation plus calorimeter
identification requirement, and the blue symbols show the
efficiency of the \RNINE selection criteria. The black symbols
show the combined efficiency.}
\label{fig:hlt_26_pt_eta}
\end{figure}
\begin{figure}[tbh]
\centering
\includegraphics[width=0.5\textwidth]{figures/HLT_OR_nvtx_26.pdf}
\caption{Efficiencies of the leading leg of the double-photon
trigger described in the text as a function of the number of
offline reconstructed vertices. The red symbols show the
efficiency of the isolation plus calorimeter identification
requirement, and the blue symbols show the efficiency of the
\RNINE selection. The black symbols show the combined
efficiency.}
\label{fig:hlt_26_nvtx}
\end{figure}
Figures~\ref{fig:hlt_26_pt_eta} to~\ref{fig:hlt_26_nvtx} show the
efficiency of the leading leg selection as a function of the photon
transverse energy, pseudorapidity, and number of offline reconstructed
vertices (N$_\mathrm{vtx}$).
The double-photon trigger is characterized by a steep turn-on
curve. The loss of efficiency shown in Fig.~\ref{fig:hlt_26_pt_eta}
(right) for the \RNINE selection follows the increase of the tracker
material in the region around $\abs{\eta}{\approx}1.2$, where converted
photons with smaller \RNINE values are more likely to be found. The flat
efficiency versus N$_\text{vtx}$ curve demonstrates that the path is quite
insensitive to the amount of pileup, although some small
dependence is noticeable for N$_\text{vtx} > 30$.
\paragraph{Electron selection.}
In order to distinguish between electron and photon candidates, the
presence of a reconstructed track compatible with the SC is
required. Hence, after the common selection described above, the
selection of online electron candidates proceeds with requirements involving
the tracker. The first step is the so-called ``pixel-matching'', which
uses the energy and position of the SC to propagate hypothetical trajectories through the magnetic field under each charge hypothesis, searching for compatible hits in the pixel detector. Full silicon
tracks are then reconstructed from the resulting pixel seeds. Timing
constraints prohibit the usage of the offline tracking algorithms and
a simple Kalman filter technique is used. Nevertheless, since 2012, it
has been complemented by the Gaussian-sum filter (GSF) algorithm, which better
parametrizes the highly non-Gaussian electron energy
loss. Due to the large CPU time requirements of the
algorithm, it was used only in paths where it is possible to achieve a
large reduction of the rate before the electron tracking (\eg,
in the path selecting two high-\ET electrons, where the transverse
energy requirement is 33\GeV on each electron). The electron tracks
are required to have a measured momentum compatible with the SC
energy. Their direction at the last tracker layer should match the SC
position in $\eta$ and $\phi$. These selection criteria reduce the
rate of misidentified electrons by a factor of 10. Finally, isolation
requirements with respect to the tracks reconstructed around the
electron candidate are applied, if required for rate reasons. The
lowest-threshold inclusive single isolated electron path at the end of
the 2012 running (corresponding to instantaneous luminosities of
$7\times10^{33}\percms$) had a threshold of $\ET>27$\GeV, with a rate
of less than 50\unit{Hz}.
\begin{figure}[tbph]
\centering
\includegraphics[width=0.7\textwidth]{figures/single_ele}
\caption{Performance of the internal stages of the lowest-\ET unprescaled single-electron trigger. The rate is shown as the
black histogram (left scale); the red symbols show the efficiency
for electron selection (right scale).}
\label{fig:single_ele}
\end{figure}
Figure~\ref{fig:single_ele} shows how the rate is gradually reduced by
the filtering steps of this trigger (black histogram), along with the
efficiency of electrons (red points).
\begin{figure}[tbph]
\centering
\includegraphics[width=0.49\textwidth]{figures/DoubleEl_Leading_Barrel.pdf}
\includegraphics[width=0.49\textwidth]{figures/DoubleEl_Leading_Endcap.pdf}
\caption{Efficiencies of the leading leg for the double-electron
trigger described in the text as a function of the offline
electron momentum. The trigger applies the same selection criteria
to both legs; only the \ET thresholds differ. Efficiencies are shown
for different running periods
(red May, green June, blue August, and yellow November of 2012)
and separately for electrons reconstructed in the barrel (left) and
endcap (right).} \label{fig:double_ele_pt}
\end{figure}
\begin{figure}[tbph]
\centering
\includegraphics[width=0.49\textwidth]{figures/DoubleElLead_Barrel_NVTX.pdf}
\includegraphics[width=0.49\textwidth]{figures/DoubleElLead_Endcap_NVTX.pdf}
\caption{Efficiencies of the leading leg for the double-electron
trigger described in the text as a function of the number of
reconstructed vertices. The trigger applies the same selection
criteria to both legs; only the \ET thresholds differ. Efficiencies
are shown for different running periods
(red May, green June, blue August, and yellow November of 2012)
and separately for electrons reconstructed in the barrel (left) and
endcap (right).}
\label{fig:double_ele_nvtx}
\end{figure}
\paragraph{Double-electron trigger efficiency.}
\label{sec:hlt_egamma_doublee_eff}
Figures~\ref{fig:double_ele_pt} and~\ref{fig:double_ele_nvtx} show the
performance of the double-electron trigger. Efficiencies were measured
using a tag-and-probe technique similar to that described for the
photon path measurements and are computed with respect to a standard
offline selection.
The results are reported for various running periods; the different
results reflect the different pileup conditions.
Figure~\ref{fig:double_ele_nvtx} shows that the efficiency is only
loosely dependent on the pileup conditions.
\subsection{Muon triggers}
\label{sec:objid_muons}
\subsubsection{The L1 muon trigger performance}
The following sections report the performance of the L1 muon trigger
system described in Sec.~\ref{sec:l1muon}. Results concerning
efficiency, $\pt$ assignment resolution, rates, and timing are
presented. At the GT level, different GMT quality criteria are
applied for single- and multi-muon algorithms. Therefore,
the performance for both the single- and multi-muon objects is
documented.
For most of the studies, offline reconstructed muons are used as a reference to measure
the response of the L1 trigger. Muon identification criteria similar
to the ones used in CMS offline analyses are applied. These are documented
in Ref.~\cite{Chatrchyan:2012xi}.
\paragraph{The L1 muon trigger efficiency}
The efficiency of the muon trigger was calculated by use of the tag-and-probe
method described in Ref.~\cite{Chatrchyan:2012xi}.
Events with two reconstructed muons having an invariant mass compatible with the one
of the \Z boson or of the \JPsi resonance were selected out of a sample of events
collected on the basis of single muon triggers.
Reconstructed tag muons were required to meet ``tight'' identification requirements
and to be matched to SingleMu HLT objects. This allowed the removal of trigger selection
biases.
Reconstructed probe muons had to be identified by either the ``tight'' or ``loose''
identification criteria. The former selection matches the one used in most of the physics
analyses with single muons and was used to compute the efficiency for single L1 muon triggers
(Figs.~\ref{figure:L1MuonSingleMuEfficiencyFromZ} and
\ref{figure:L1MuonsEtaSubdetectors}), whereas the second is the muon identification
baseline for many analyses with multiple muons and it was used to compute efficiencies
for L1 double-muon triggers (Fig.~\ref{figure:L1MuonsJPsi0-2p4}).
The L1 muon trigger efficiency was calculated on the basis of probe muons geometrically
matched with L1 muon trigger candidates.
The L1 trigger candidates were matched to probes if the distance between the two was
found to be smaller than $\Delta\phi=0.15$ and $\Delta\eta=0.2$. If two L1 trigger
candidates were matched to a single probe the closest in $\phi$ was chosen.
Tag-and-probe muons were also required to be separated by
$\DR>0.5$
to exclude interference of the
two in the muon chambers.
The performance for different L1 $\pt$ requirements is presented using a sample of dimuons with an invariant mass close to the \Z boson mass. Figure~\ref{figure:L1MuonSingleMuEfficiencyFromZ} shows the efficiency for
single L1 muon trigger GMT quality selections as a function of the reconstructed muon
$\pt$ for the $\abs{\eta}<2.4$ and $\abs{\eta}<2.1$ acceptance regions, respectively.
Figure~\ref{figure:L1MuonsEtaSubdetectors} shows trigger efficiency
as a function of the reconstructed muon $\eta$. In this case an L1 requirement of $\pt >16\GeV$ is applied and probe muons are required to have a reconstructed
$\pt$ larger than 24\GeV.
\begin{figure}[tbph]
\centering
\includegraphics[width=0.45\linewidth]{figures/L1MuonsZDataFitGMT0-2p4}
\includegraphics[width=0.45\linewidth]{figures/L1MuonsZDataFitGMT0-2p1}
\caption{The efficiency of the single-muon trigger versus the
reconstructed transverse momentum of the muon for different
thresholds applied on the trigger candidate $\pt$ for the full
pseudorapidity range $\abs{\eta} < 2.4$ (left), and limited to the
range $\abs{\eta}< 2.1$ (right). The quality requirement used in
the single-muon trigger algorithms (see text) was applied.
Results are computed using the tag-and-probe method applied on a
\Z boson enriched sample.}
\label{figure:L1MuonSingleMuEfficiencyFromZ}
\end{figure}
\begin{figure}[tbp]
\centering
\includegraphics[width=0.45\linewidth]{figures/L1MuonsEtaSubdetectors}
\caption{The efficiency of the single-muon trigger as a function of
$\eta$ for the threshold of 16\GeV (black) for muons with
reconstructed $\pt > 24\GeV$.
The contribution of the muon trigger subsystems to this efficiency is also
presented: the red/green/blue points show the fraction of the GMT events based on
the RPC/DTTF/CSCTF candidates, respectively. Results are computed using the
tag-and-probe method applied to a \Z boson enriched sample.}
\label{figure:L1MuonsEtaSubdetectors}
\end{figure}
The number of unbiased events recorded by CMS is not sufficient for a direct
and precise estimation of the overall L1 double-muon trigger efficiency.
In this case efficiency is
obtained using the tag-and-probe method on the \JPsi resonance.
Results imposing muon quality cuts as well as L1 $\pt$ requirements
from double-muon algorithms
are shown in Fig.~\ref{figure:L1MuonsJPsi0-2p4}.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.45\linewidth]{figures/L1MuonsJPsiLooseL1-0-2p4}
\caption{The efficiency of the double-muon trigger versus the reconstructed
transverse momentum of the muon for different thresholds applied on the
trigger candidate $\pt$. Results are computed using the tag-and-probe
method applied to a \JPsi enriched sample.}
\label{figure:L1MuonsJPsi0-2p4}
\end{figure}
The ability of CMS to trigger efficiently on dimuons at low \pt
allowed the CMS experiment to observe the rare $B^0_s\to\mu^+\mu^-$
decay at $4.3\sigma$ significance~\cite{Chatrchyan:2013bka}, where a
dimuon trigger with a \pt threshold of 4\GeV on each muon was applied
at the HLT. The decay was established definitively at $6.2\sigma$
significance with the combination of data from both the CMS and LHCb
experiments~\cite{bsmumu}.
\paragraph{The L1 muon trigger rates}
Muon trigger rate plots were obtained from the analysis of a dedicated data
stream, containing L1 trigger information alone, that was collected at high
rate on the basis of L1 decision only.
This stream provides unbiased information about the L1 trigger response,
which is ideal for L1 trigger rate studies.
For this analysis, events were selected on the basis of the loosest
possible L1 muon trigger algorithm. The latter implies no quality or
$\pt$ requirements on the L1 muon GMT candidates, therefore any further
selection (\eg, the $\pt$ threshold or quality requirements
corresponding to single- or double-muon triggers) was applied offline.
Results on the rates of single- and double-muon triggers are presented
in Figs.~\ref{figure:L1MuonSingleMuRates} and
\ref{figure:L1MuonDoubleMuRate}, respectively. The single-muon trigger
rate was calculated with data recorded at instantaneous luminosities
up to $7.2 \times 10^{33}\percms$ and then rescaled to an
instantaneous luminosity of $5 \times 10^{33}\percms$. This
extrapolation was possible as the single-muon rate per instantaneous
luminosity (\ie, the trigger cross section)
is not a strong function of instantaneous luminosity.\footnote{See
Sec.~\ref{sec:muon_perf} and Fig.~\ref{fig:xsections}
specifically. The variation is at the per-mille level.}
The left plot of
Fig.~\ref{figure:L1MuonSingleMuRates} shows a flattening of the slope
of the rate curve for single-muon triggers at high L1 $\pt$ threshold
values. The effect can be explained by studying the resolution of the
$\pt$ estimation of the L1 muon trigger computed with respect to
offline reconstructed ``tight muons''. The results of such a comparison
are presented in Fig.~\ref{figure:L1MuonsPtL1vsPtRec} and show that the muon trigger sometimes assigns very high \pt to muons with very low momentum. These candidates with overestimated transverse
momentum contribute significantly to the L1 muon trigger rate,
especially at high L1 $\pt$ thresholds.
\begin{figure}[tbph]
\centering
\includegraphics[height=200pt,trim={0 .3cm 0 0},clip]{figures/L1MuonsRateVsPtCut}
\includegraphics[height=201pt]{figures/L1MuonsRateVsEta}
\caption{Left: rate of the single-muon trigger versus the transverse momentum
threshold for the full pseudorapidity range $\abs{\eta}<2.4$ and for pseudorapidity
limited to $\abs{\eta}<2.1$. Additionally the curves for pure endcap and barrel
regions are presented.
Right: the rate of the single-muon trigger GMT candidates as a function of
$\eta$ for the $\pt$ threshold of 16\GeV (blue histogram). The contribution
of the muon trigger subsystems to this rate is also presented: the green
and blue histograms show how often the above GMT candidates were built
using RPC or DTTF/CSCTF candidates, respectively.
On both plots the rates are rescaled to an instantaneous luminosity of
$5 \times 10^{33}\percms$. The quality requirement used for single-muon trigger
algorithms (see text) was applied.}
\label{figure:L1MuonSingleMuRates}
\end{figure}
In the case of the double-muon triggers, the rate increases with
luminosity. The rates were calculated using data collected
with the luminosities in the range $4$--$6 \times 10^{33}\percms$ (for an
average luminosity of $4.9 \times 10^{33}\percms$), and rescaled to
a target instantaneous luminosity of $5 \times 10^{33}\percms$.
Errors from this small approximation are well within the fluctuations caused
by data acquisition deadtime variations ($\mathcal{O}(1\%)$).
\begin{figure}[tbph]
\centering
\includegraphics[scale=0.5]{figures/L1MuonsDoubleMuRate}
\caption{The rate of the double-muon trigger versus the threshold applied
to the first and second muon. The rates are rescaled to the instantaneous
luminosity $5 \times 10^{33}\percms$.}
\label{figure:L1MuonDoubleMuRate}
\end{figure}
\begin{figure}[tbph]
\centering
\includegraphics[width=0.45\linewidth]{figures/L1MuonsPtL1vsPtRec}
\caption{The distribution of the momentum of the L1 muon candidates versus the
momentum of the corresponding reconstructed muon (``tight'' identification
criteria). Events with both \Z boson and \JPsi resonances contribute.
Offline
muons in the full acceptance region ($\abs{\eta}<2.4$) are used.}
\label{figure:L1MuonsPtL1vsPtRec}
\end{figure}
\paragraph{The L1 muon trigger timing}
The muon trigger timing is a product of the timing performance of muon
trigger primitive generators and muon regional track-finders (DT, CSC,
RPC). The GMT algorithm is executed independently for each BX. Thus
no further timing corrections on candidates generated by track finders
are performed at this stage. Nevertheless, the GMT algorithm,
optimized for best momentum resolution and rejection of
misreconstructed double-muon candidates, can discard low-quality
tracks, which are more prone to mistiming, thereby affecting the overall
L1 muon trigger timing response as well. This may result in the GMT accepting events
either in the earlier or later bunch crossing (pre- or post-firing).
Such errors do not currently cause incorrect L1 decisions since
triggers appearing in wrong LHC bunch crossings are suppressed at the
GT level by a BPTX veto.
Ideally, the trigger timing logic assigns a muon trigger candidate to
the BX in which the actual muon was produced and reconstructed. In this
case the difference between the trigger candidate LHC BX number and the LHC BX
number of the event in which the muon is reconstructed is 0, meaning that the
candidate arrives at (relative) $\mathrm{BX}=0$. To quantify the trigger timing
performance, the fraction of triggers appearing in a
given BX with respect to those with ideal timing is computed. This procedure
depends on the event selection used for muon reconstruction and on the
underlying triggers. A typical distribution of L1 muon trigger
timing is shown in Fig.~\ref{figure:L1MuonTimingL1BXDist}.
\begin{figure}[tbph]
\centering
\includegraphics[width=0.5\linewidth]{figures/cTimingL1BXDist_L1PerformancePaper.pdf}
\caption{The overall timing distribution of L1 muon triggers. The
distribution of GMT candidates is shown as a shaded histogram.
The contributions from regional muon triggers (DT, CSC, RPC) are
given. In addition, the GMT and RPC distributions for heavy stable
charged particle trigger configurations are labeled separately.}
\label{figure:L1MuonTimingL1BXDist}
\end{figure}
The data of Fig.~\ref{figure:L1MuonTimingL1BXDist} come from a stream dedicated to the express monitoring of
muon reconstruction. The event selection requires the
presence of a reconstructed muon with selections similar to the ones
used by the ``tight'' identification criteria. To ensure a
correspondence between L1 muon trigger candidates and reconstructed
muons, their positions are required to match within $\DR < 0.3$ of
each other. No other reconstructed muons in the proximity of the one
matched with the trigger are allowed. Since the most interesting
candidates are the ones that may affect the GT decision,
only events with $\pt$, $\abs{\eta}$, and quality requirements matching
the ones used for unprescaled L1 single-muon triggers in 2012 are
considered.
An L1 trigger is specifically implemented for heavy, stable
charged particles (HSCP) (Sec.~\ref{sec:HSCP}), which relies on
the time extension of RPC signals in the RPC trigger logic. The typical
response to a prompt muon thus extends over two BXs.
It is therefore important that the presence of early or late signals
in the RPC and DT/CSC are not correlated.
Cases where both subsystem candidates respond in $\text{BX}=-1$
$(+1)$, therefore not providing a GMT candidate in $\text{BX}=0$, are
rare. It is therefore typical that events with GMT candidates in $\text{BX}=+1$
also contribute a candidate in $\text{BX}=0$.
A more detailed picture, derived from the same data set and illustrating
early and late GMT decisions, is given in
Fig.~\ref{figure:L1MuonL1FirePrePost}.
\begin{figure}[tbph]
\centering
\includegraphics[width=0.5\linewidth]{figures/cTimingL1FirePrePost_L1PerformancePaper.pdf}
\caption{The fractions of GMT candidates in early and late bunch crossings as a function of L1 muon candidate transverse
momentum.
}
\label{figure:L1MuonL1FirePrePost}
\end{figure}
Here the fraction of events in $\text{BX}= +1$ and $-1$ is presented
as a function of GMT candidate transverse momentum. The low-$\pt$
behavior of the pre-firing curve follows the relative contribution of
DT and CSC candidates. Event selection and trigger
rules affect trigger timing distributions. In particular, a
trigger issued in an event suppresses possible triggers in the two
consecutive BXs. The above feature does not affect $\text{BX}=-1$
because triggers issued in non-colliding BXs are vetoed, but has an
impact on events triggered at $\text{BX}=+1$. Therefore, in order to
extract the post-firing, only events with the first GMT candidate
appearing in $\text{BX}=+1$ are used. To properly normalize
the distribution, only events with an additional non-muon trigger
were selected.
\subsubsection{HLT muon identification}
\label{sec:muHLT}
The muon high-level triggers combine information from both the muon
and the tracker subdetectors to identify muon candidates and determine
their transverse momenta, $\pt$. The algorithm is composed
of two main steps: level-2 (L2), which
uses information from the muon system only, and level-3 (L3),
which combines measurements from both tracker and muon subdetectors.
\paragraph{Level-2.}
The reconstruction of a track in the muon spectrometer starts from an
initial state, called the \emph{seed}, built from patterns of DT and
CSC segments. The transverse momentum of the seed is parametrized as
$\pt = f(1/\Delta\phi)$, where $\Delta\phi$ is the difference in azimuthal
angle between the two segments and $f$ is a first-order polynomial
function whose coefficients are determined using simulated CMS
data. Only seeds confirmed by the L1 decision are used.
Each seed is used to start the reconstruction of a track using
measurements (hits and segments) from all the muon detectors. Tracks
are built with the Kalman filter technique~\cite{Fruhwirth:1987fm}, a
recursive algorithm that performs pattern recognition and track
fitting. After all tracks are reconstructed, possible duplicates
of the same muon candidate are removed by checking that the tracks do not share any
hits. The interaction point position is used to constrain the track parameters
to improve the transverse momentum resolution.
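The seed $\pt$ parameterization amounts to a one-line function. In the sketch below the coefficients are arbitrary placeholders for the simulation-derived values.
\begin{verbatim}
# pT = f(1/dphi) with f a first-order polynomial; a and b are
# placeholder coefficients (the real values come from simulation).
def l2_seed_pt(dphi, a=0.1, b=0.9):
    # small dphi means a stiff (high-pT) track; guard against dphi ~ 0
    return a + b / max(abs(dphi), 1e-6)   # GeV
\end{verbatim}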
If one or more L2 muons are successfully reconstructed, their number
and parameters are used to filter the event. The main selection is
based on the L2 muon $\pt$. The number of muon chambers and
measurements used in the
track fit can also be used to suppress misreconstructed
muons.
\paragraph{Level-3.}
The L3 muon reconstruction exploits the excellent momentum and vertexing
resolution of the inner silicon tracker, and the larger lever arm of
the muon detector, to improve the momentum resolution at high
$\pt$ (greater than ${\approx}$200\GeV).
The L3 muon trigger algorithm consists of three main steps:
seeding of tracker reconstruction starting from L2 information,
track reconstruction in the tracker, and combined fit in the tracker
and muon systems.
Due to HLT timing and CPU constraints, the full tracker reconstruction
is not performed. Instead, tracks are seeded by L2 muon
candidates. Three different seeding algorithms are available:
\begin{enumerate}
\item
the initial state (position, momentum) for track
reconstruction is the L2-track state extrapolated to the outer
surface of the tracker;
\item
the initial state is the L2-track state extrapolated to
the outer surface of the tracker, and updated with measurements
found on the outermost layers of the silicon-strip detector; and
\item
the initial state is defined by pairs of hits on adjacent
layers of the silicon-pixel subdetector, in small rectangular
$\eta$--$\phi$ regions around the L2 muon track.
\end{enumerate}
All these algorithms perform differently in different parts of
the detector. To optimize efficiency and timing, they are run in
order of increasing CPU time: slower algorithms are
called only if the faster ones fail to reconstruct an L3 muon, as
sketched below.
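The fall-back logic can be summarized as follows; the strategy functions are placeholders for the three algorithms enumerated above, and the reconstruction function stands in for the tracker-track building described next.
\begin{verbatim}
# Cascade of L3 seeding strategies, fastest first; the names below
# are placeholders for the three algorithms listed above.
def build_l3_track(l2_muon, strategies, reconstruct):
    for strategy in strategies:        # ordered by increasing CPU time
        seeds = strategy(l2_muon)
        tracks = reconstruct(seeds)
        if tracks:                     # stop at the first success
            return tracks
    return []                          # no L3 muon from this L2 seed
\end{verbatim}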
Starting from the initial seeds, tracks are reconstructed in the
silicon tracker using a Kalman filter. These tracks and the L2 muons
are propagated to a common surface (\eg, the innermost layer of
the muon system) and their compatibility is evaluated using several
criteria, such as their separation, directions, or relative
goodness-of-fit $\chi^2$. If a pair of compatible L2-tracker tracks is found, a final
refit of all the tracker and muon system measurements is performed.
If one or more L3 muons are successfully reconstructed, their number
and parameters are used to filter the event. The main selection is
based on the muon \pt. Other track parameters, such as $\chi^2$ and
impact parameter, can be used to suppress misreconstructed muons.
\paragraph{Isolation.}
The isolation of L3 muons is evaluated combining information from the
silicon tracker, ECAL, and HCAL. Tracks are reconstructed in the
silicon tracker in a geometrical cone of size $\DR = 0.3$ around the
L3 muon. In the same cone, ECAL and HCAL deposits are summed. To
reduce the dependence of the isolation variable on the pileup of pp
collisions, the calorimeter deposits are corrected for the average
energy density in the event $\rho$~\cite{fastjetmanual}. A relative
isolation variable is defined as
\begin{linenomath*}
\begin{equation*}
I_\text{rel} = \frac{1}{\pt^{\mu}}
\biggl(\sum_i{p_{\mathrm{T,trk}}^{i}} +
\max\Bigl[ 0, \textstyle{\sum_j}{E_{\mathrm{T,ECAL}}^{j}} +
\textstyle{\sum_k}{E_{\mathrm{T,HCAL}}^{k}} -
\pi (\DR)^2 \, \rho \Bigr] \biggr).
\end{equation*}
\end{linenomath*}
The standard selection is $I_\text{rel}<0.15$.
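The definition translates directly into code; the sketch below assumes plain lists of per-object transverse momenta and energies (in \GeVns{}).
\begin{verbatim}
# Direct transcription of the I_rel definition above (illustrative).
import math

def relative_isolation(mu_pt, trk_pts, ecal_ets, hcal_ets,
                       rho, dr=0.3):
    # calorimeter sum, corrected for the average pileup density rho
    calo = sum(ecal_ets) + sum(hcal_ets) - math.pi * dr**2 * rho
    return (sum(trk_pts) + max(0.0, calo)) / mu_pt

# standard selection: relative_isolation(...) < 0.15
\end{verbatim}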
\paragraph{Double-muon triggers.}
Double-muon triggers either require the presence of two L3 muons, as
described above, or one L3 muon and one
``tracker-muon''~\cite{Chatrchyan:2012xi}, \ie, a track in the silicon
tracker compatible with one or more segments in the muon detectors. The
latter class of triggers recovers possible inefficiencies of the L2
muon reconstruction (\eg, due to the muon detector
acceptance). Moreover, dropping the requirement of a fitted track in
the muon system allows reduction of the effective kinematic threshold,
making these triggers particularly suitable for quarkonia and B physics topologies.
The two legs of double-muon triggers are generally required to
originate from the same vertex to reduce the rate of misreconstructed dimuon
events. In specific quarkonia triggers, additional filtering is
applied to reduce the low-\pt background rate. This includes, for
example, mass requirements on the dimuon system and requirements on the
angle between the two muon candidates (Sec.~\ref{sec:bphbpag}).
\paragraph{Performance of muon triggers.}
\label{sec:muon_perf}
This section describes the performance of the single- and double-muon
triggers during the 2012 data taking at 8\TeV. The triggers are:
\begin{itemize}
\item a single-muon trigger seeded by an L1 requirement of $\pt>16$\GeV,
and requiring a L2 track of $\pt > 16$\GeV and a L3 track of
$\pt > 40$\GeV;
\item a single-muon trigger seeded by an L1 trigger of
$\pt > 16$\GeV, and requiring a L2 track of $\pt > 16$\GeV and a
L3 track of $\pt > 24$\GeV; the L3 track must also be isolated;
\item a double-muon trigger seeded by an L1 trigger
requiring two muon candidates of $\pt>10$ and 3.5\GeV,
respectively; the L2 requirement is two tracks of $\pt > 10$ and
3.5\GeV, and the L3 requirement is two tracks of $\pt > 17$ and
8\GeV; the muons are required to originate from the same vertex,
by imposing a maximum distance of 0.2~\cm between the points of
closest approach of the two tracks to the beam line; and
\item a double-muon trigger seeded by an L1 trigger
requiring two muon candidates of $\pt>10$ and 3.5\GeV,
respectively; the L2 requires a track of $\pt > 10$\GeV, and the
L3 a track of $\pt > 17$\GeV; in addition, a tracker muon of $\pt
> 8$\GeV is required; the muons are required to come from the same
vertex, by imposing a maximum distance of 0.2~\cm between the points
of closest approach of the two tracks to the beam line.
\end{itemize}
Trigger efficiencies are measured with the tag-and-probe method,
using \Z bosons decaying to muon pairs. The tag must be identified as a
``tight muon''~\cite{Chatrchyan:2012xi} and triggered by the single-isolated-muon
path. The probe is selected either as a ``tight muon'' or a
``loose muon''~\cite{Chatrchyan:2012xi}, respectively,
for single- and double-muon efficiency studies. When measuring the efficiency of isolated
triggers, the probe is also required to be isolated.
The efficiency is obtained by fitting simultaneously the \Z resonance
mass for probes passing and failing the trigger in question.
Figure~\ref{fig:singlemuons} shows the efficiencies of single-muon triggers with and without
isolation, as functions of $\eta$ and $\pt$ (for $\abs{\eta} < 0.9$), in 2012 data and in
simulation. The ratio between data and simulation is also shown.
Agreement at the level of 1--2\% is observed.
\begin{figure}[tbh]
\centering
\includegraphics[width=0.477\textwidth]{figures/Mu40_eta2p1_Tight_eta_2p1_pt45-500_2012D.pdf}
\includegraphics[width=0.477\textwidth]{figures/Mu40_eta2p1_Tight_pt_40_absetaLt0p9_2012D.pdf}\\
\includegraphics[width=0.477\textwidth]{figures/IsoMu24_eta2p1_TightIso_eta_2p1_pt25-500_2012D.pdf}
\includegraphics[width=0.477\textwidth]{figures/IsoMu24_eta2p1_TightIso_pt_absetaLt0p9_2012D.pdf}
\caption{Efficiency of single-muon triggers without isolation
(top) and with isolation (bottom) in 2012
data collected at 8\TeV, as functions of $\eta$ (left) and
$\pt$, for $\abs{\eta} < 0.9$ (right).}
\label{fig:singlemuons}
\end{figure}
\begin{figure}[tbh]
\centering
\includegraphics[width=0.48\textwidth]{figures/POG_DoubleMu17Mu8_MuonTrg_2D_PF_DATA_HighPtOrdered_Final.pdf}
\includegraphics[width=0.48\textwidth]{figures/POG_DoubleMu17TkMu8_MuonTrg_2D_PF_DATA_HighPtOrdered_Final.pdf}
\caption{Efficiencies of double-muon triggers without
(left) and with (right) the tracker muon requirement in 2012
data collected
at 8\TeV as functions of the pseudorapidities $\abs{\eta}$ of the two muons,
for loose muons with $\pt > 20$\GeV.}
\label{fig:doublemuons}
\end{figure}
Figure~\ref{fig:doublemuons} shows the efficiencies
for the double-muon triggers with and without the tracker muon requirement
for loose muons with $\pt > 20$\GeV, as functions of $\eta$ of the two
muons. The total efficiency includes contributions from the efficiency
of each muon leg and from the dimuon vertex constraint.
\begin{figure}[tbh]
\centering
\includegraphics[width=0.6\textwidth]{figures/xs_vs_lumi_allpaths_2012D_3036-3045-3047-3071.pdf}
\caption{Cross sections of the four main single- and double-muon
triggers used in 2012 data taking, described in the text,
as a function of the LHC instantaneous luminosity. Mild pileup
dependencies are visible. }
\label{fig:xsections}
\end{figure}
Figure~\ref{fig:xsections} shows the trigger cross sections of the four main
muon triggers in 2012 data taking, as functions of the LHC
instantaneous luminosity. As shown in the figure, during the 2012
run, a mild pileup-dependent inefficiency was observed for paths using
L3 reconstruction. This effect caused a drop in the cross section of
the isolated muon trigger at high
luminosity. Figure~\ref{fig:xsections} shows that this effect is not
visible in nonisolated triggers (such as the single-muon path with a
$\pt>40$\GeV requirement) as in those cases it is masked by a slight
luminosity-dependent cross section increase.
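For orientation: the trigger cross section shown here is the trigger rate
divided by the instantaneous luminosity, $\sigma_\text{trig} =
R_\text{trig}/\lumi$, so a cross section that is constant in $\lumi$
corresponds to a rate scaling linearly with luminosity, and the pileup
effects discussed above appear as deviations from a flat line.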
\subsection{Jets and global energy sums}
\label{sec:JetMET}
Triggers based on jets and missing transverse energy (\MET)
play an important role in searches for new physics.
Single-jet triggers are primarily
designed to study quantum chromodynamics (QCD), but can also be used for many analyses, such
as searches for new physics using initial-state radiation (ISR)
jets.
The dijet triggers are designed primarily for jet energy scale
studies. The \MET triggers are designed to search for new physics with
invisible particles, such as neutralinos in supersymmetric models.
\subsubsection{The L1 jet trigger}
\label{sec:l1jet}
The L1 jet trigger uses transverse energy sums computed using both HCAL and ECAL in the central region ($\abs{\eta} < 3.0$) or HF in the forward region ($\abs{\eta} > 3.0$). Each central region is composed of a $4 \times 4$ matrix of trigger towers (Fig.~\ref{fig:l1jetalgo}), each tower spanning a region of $\Delta\eta \times \Delta\phi = 0.087 \times 0.087$ up to $\abs{\eta}{\approx}2.0$; for higher pseudorapidities the $\Delta\phi$ granularity is preserved, while the $\Delta\eta$ granularity becomes coarser. In the forward region, each region consists of 4 or 6 HF trigger towers and has the same $\Delta\phi$ granularity of 0.384 as in the central region, with a $\Delta\eta$ granularity of 0.5. The jet trigger uses a ``sliding window'' technique~\cite{Trig-TDR} based on a window of $3 \times 3$ regions (\ie, 144 trigger towers in the central region and up to 54 trigger towers in the forward region), spanning the full $(\eta,\phi)$ coverage of the CMS calorimeter. An L1 jet candidate is found if the energy deposits in the $3 \times 3$ window meet the following conditions: the central region of the $3 \times 3$ matrix must have an \ET higher than that of any of its eight neighbors, and this \ET must exceed a specific threshold, used to suppress calorimeter noise. The L1 jets are assigned a transverse energy \ET equal to the sum of the transverse energies in the $3 \times 3$ regions of the sliding window centered on the jet, and are labeled by the $(\eta,\phi)$ of the central region.
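As an illustration of the sliding-window logic, the following Python
sketch finds jet candidates in a toy 2D array of region \ET values. It is
a simplified model of the algorithm described above: the $\eta$ edges,
the HF geometry, tie-breaking between equal neighbors, and the $\tau$
veto bits are all ignored, and the seed threshold value is illustrative.
\begin{verbatim}
def find_l1_jets(region_et, seed_threshold=5.0):
    """Return (ieta, iphi, jet_et) for each 3x3 window whose central
    region exceeds the seed threshold and all eight neighbors.
    `region_et[ieta][iphi]` holds the ET of one calorimeter region;
    phi wraps around, eta does not."""
    n_eta, n_phi = len(region_et), len(region_et[0])
    jets = []
    for ie in range(1, n_eta - 1):        # skip eta edges for brevity
        for ip in range(n_phi):
            center = region_et[ie][ip]
            if center < seed_threshold:   # noise suppression
                continue
            neighbors = [region_et[ie + de][(ip + dp) % n_phi]
                         for de in (-1, 0, 1) for dp in (-1, 0, 1)
                         if (de, dp) != (0, 0)]
            if center <= max(neighbors):  # must be a local maximum
                continue
            jets.append((ie, ip, center + sum(neighbors)))  # 3x3 ET sum
    # the real system keeps the four highest-ET jets per category
    return sorted(jets, key=lambda j: -j[2])[:4]
\end{verbatim}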
\begin{figure}[tbph]
\centering
\includegraphics{figures/L1JetAlgorithm}
\caption{Illustration of the available tower granularity for the L1
jet finding algorithm in the central region, $\abs{\eta} < 3$
(left). The jet
trigger uses a $3{\times}3$ calorimeter region sliding window
technique which spans the full $(\eta, \phi)$ coverage of the
calorimeter. The active tower patterns allowed for L1 $\tau$ jet candidates are shown on the right.}
\label{fig:l1jetalgo}
\end{figure}
Jets with $\abs{\eta}>3.0$ are classified as forward jets, whereas those
with $\abs{\eta}<3.0$ are classified as central or $\tau$ jets, depending on the OR of the nine $\tau$ veto bits
associated with the nine regions in the $3{\times}3$ window. To improve the detection efficiency for genuine L1 $\tau$ jets, a geometrical tower pattern is used for L1 $\tau$ jet candidates (Fig.~\ref{fig:l1jetalgo}).
The four highest-energy central, forward, or central $\tau$ jets in the calorimeter are selected. After jets are found,
LUTs are used to apply a programmable $\eta$-dependent jet energy
scale correction.
The performance of the L1 jets is evaluated with respect to offline
jets, which are formed using either the standard CaloJet reconstruction
or the PF jet reconstruction.
Jets are reconstructed using the anti-\kt algorithm and calibrated
for the nonlinearity of the calorimeter response and pileup effects
using a combination of studies based on simulation and collision data,
as detailed in Ref.~\cite{Chatrchyan:2012xx}.
A moderate level of noise rejection is applied to the offline jets by
selecting jets passing ``loose''~\cite{Chatrchyan:2012xx} identification criteria.
\paragraph{L1 jet trigger efficiency}
The L1 jet trigger efficiency was measured with a data sample from
the single-muon data set requiring an isolated muon with $\pt>24$\GeV
(HLT\_IsoMu24). Events from the muon paths are unbiased with respect
to the jet trigger paths.
The L1 jet efficiency is calculated relative to the offline
reconstructed jets. The efficiency is defined as the number of
leading offline jets matched to an L1 central,
forward, or central $\tau$ jet above a given trigger threshold, divided by the
number of leading offline jets matched to an L1
central, forward, or central $\tau$ jet above any threshold. This quantity is
then plotted as a function of the offline jet \pt, $\eta$, and
$\phi$. The efficiency is determined by matching the L1 and
reconstructed offline jets spatially in the $\eta$-$\phi$ plane. This is
done by calculating the minimum separation, \DR, between the
highest-\ET reconstructed jet (with $\pt>10$\GeV and $\abs{\eta}<3$)
and any L1 jet above a given \ET threshold, and requiring it to
be less than 0.5. Should there be more than one L1 jet satisfying this selection,
the closest one (in \DR) is taken as the matched jet.
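A minimal Python sketch of this $\eta$-$\phi$ matching is given below;
the attribute names (\texttt{eta}, \texttt{phi}, \texttt{et}) are
illustrative placeholders for whatever jet representation is used.
\begin{verbatim}
import math

def delta_r(eta1, phi1, eta2, phi2):
    # wrap the phi difference into [-pi, pi] before combining
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def match_l1_jet(offline_jet, l1_jets, et_threshold, max_dr=0.5):
    """Closest L1 jet above `et_threshold` within `max_dr` of the
    leading offline jet, or None if no L1 jet qualifies."""
    in_cone = [(delta_r(offline_jet.eta, offline_jet.phi, j.eta, j.phi), j)
               for j in l1_jets if j.et > et_threshold]
    in_cone = [c for c in in_cone if c[0] < max_dr]
    if not in_cone:
        return None
    return min(in_cone, key=lambda c: c[0])[1]
\end{verbatim}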
We evaluated the efficiency turn-on curves for various L1 jet
thresholds ($\ET>16$, 36, and 92\GeV) as a function of the
offline jet \pt. The efficiency is calculated with respect to offline
PF and CaloJet transverse energies (Fig.~\ref{fig:l1jets_eff}). Each
curve is fitted with the cumulative distribution
function of an exponentially modified Gaussian (EMG) distribution. In
this functional form, the parameter $\mu$ determines the point of 50\%
efficiency, and $\sigma$ represents the resolution.
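The exact parameterization is not spelled out here; one common choice is
the EMG cumulative distribution as implemented in SciPy, where the
location parameter plays the role of $\mu$ and the scale that of
$\sigma$ (up to the exponential tail). The sketch below, with toy
inputs, shows how such a fit could be set up.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import exponnorm

def emg_turnon(pt, mu, sigma, k, plateau):
    """Plateau-scaled CDF of an exponentially modified Gaussian;
    k is SciPy's shape parameter, k = 1/(lambda*sigma)."""
    return plateau * exponnorm.cdf(pt, k, loc=mu, scale=sigma)

# Toy inputs: bin centers of offline jet pT and measured efficiencies
# (in practice these come from the matched-jet counts described above).
pt_centers = np.linspace(10.0, 200.0, 39)
eff = emg_turnon(pt_centers, 40.0, 8.0, 1.2, 0.99)

popt, pcov = curve_fit(emg_turnon, pt_centers, eff,
                       p0=[35.0, 10.0, 1.0, 1.0])
mu_fit, sigma_fit = popt[0], popt[1]  # approx. 50% point and resolution
\end{verbatim}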
\begin{figure}[tbp]
\centering
\includegraphics[width=0.48\textwidth]{figures/l1jets_Calo_l1paper}
\includegraphics[width=0.48\textwidth]{figures/l1jets_PF_l1paper}
\caption[l1jets_eff]{Left: The L1 jet trigger efficiency as a
function of the offline CaloJet transverse momentum. Right: The L1
jet trigger efficiencies as a function of the PF jet transverse
momentum. In both cases, three L1 thresholds ($\ET>16, 36, 92$\GeV)
are shown.}
\label{fig:l1jets_eff}
\end{figure}
\paragraph{Pileup dependence}
\label{pileup_dependence}
To evaluate the performance of the L1 jet trigger under
different pileup scenarios, the L1 jet efficiency is also benchmarked
as a function of pileup. The pileup per event is quantified by the number
of `good' reconstructed primary vertices in the event, each
vertex satisfying the following requirements (a selection sketch is
given after the list):
\begin{itemize}
\item $N_\text{dof} > 4$;
\item vertex position along the beam direction of $\vert z_\text{vtx} \vert < 24$\unit{cm};
\item vertex position perpendicular to the beam of $\rho < 2$\unit{cm}.
\end{itemize}
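These requirements translate directly into a counting function, as
sketched below; the assignment of events exactly at the bin edges of 10
and 20 vertices is our choice, since the text does not specify it.
\begin{verbatim}
def is_good_vertex(vtx):
    """Vertex quality requirements listed above (z and rho in cm)."""
    return vtx.ndof > 4 and abs(vtx.z) < 24.0 and vtx.rho < 2.0

def pileup_bin(vertices):
    """Low/medium/high pileup classification used for the turn-on
    curves; edge assignment at 10 and 20 vertices is illustrative."""
    n_vtx = sum(1 for v in vertices if is_good_vertex(v))
    if n_vtx <= 10:
        return "low"
    return "medium" if n_vtx <= 20 else "high"
\end{verbatim}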
Three pileup bins of 0--10, 10--20, and ${>}20$ vertices are
defined, reflecting the low-, medium-, and high-pileup running conditions
in 2012. The corresponding turn-on curves for CaloJets and PF jets are shown in Fig.~\ref{fig:l1jet-pu}.
\begin{figure}[tbp]
\centering
\includegraphics[scale=0.35]{figures/jetpt_RunC_calopu.pdf}
\includegraphics[scale=0.35]{figures/jetpt_RunC_pfpu.pdf}
\caption{The L1 jet efficiency turn-on curves as a function of the leading
offline CaloJet \ET (left) and as a function of the leading
offline PF jet \ET (right), for low-, medium-, and high-pileup
scenarios for three different thresholds: $\ET>16,36$, and $92\GeV$.}
\label{fig:l1jet-pu}
\end{figure}
No significant change of the jet trigger efficiency is observed in the
presence of a high number of primary vertices. The increase in
hadronic activity in high-pileup events, combined with the absence of
pileup subtraction within L1 jets, leads to the expected
decrease in the $\mu$ value of the jet turn-on curves as a function of
pileup, while the widths ($\sigma$) of the turn-on curves gradually
increase with increasing pileup.
\subsubsection{The L1 energy sums}
The GCT calculates the total scalar sum of \ET over the calorimeter
regions, as well as the missing transverse energy $\ETmiss$ from the
individual regions. In addition, it calculates the total scalar sum of
L1 jet transverse energies ($H_\mathrm{T}$) and the corresponding
missing transverse energy
$H_\mathrm{T}^\text{miss}$ based on L1 jet candidates.
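The four sums reduce to two operations, a scalar sum and the magnitude
of a vector sum, applied either to calorimeter regions or to L1 jets. A
schematic Python version is given below; the attribute names are
illustrative and the region geometry is ignored.
\begin{verbatim}
import math

def scalar_sum_et(objects):
    """Scalar ET sum: total ET over regions, or HT over L1 jets."""
    return sum(o.et for o in objects)

def missing_et(objects):
    """Magnitude of the (negative) vector ET sum: ETmiss from the
    regions, or HTmiss from the L1 jet candidates."""
    ex = sum(o.et * math.cos(o.phi) for o in objects)
    ey = sum(o.et * math.sin(o.phi) for o in objects)
    return math.hypot(ex, ey)

# total_et = scalar_sum_et(regions);  etmiss = missing_et(regions)
# ht       = scalar_sum_et(l1_jets);  htmiss = missing_et(l1_jets)
\end{verbatim}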
\paragraph{Energy sum trigger efficiencies}
The performance of the various L1 energy sum trigger quantities is
evaluated by comparison with the corresponding offline quantities. The
latter are defined at the analysis level according to the most common
physics analysis usage. The following offline quantities are defined:
\begin{itemize}
\item Missing transverse energy, \MET, which is the standard
(uncorrected) calorimeter-based \MET.
\item Total transverse jet energy, \HT (see Section 1).
\end{itemize}
Figure~\ref{fig:esumcurves} shows the L1 \HT efficiency turn-on curves
for three L1 \HT thresholds of 75, 100, and 150\GeV as a function
of the offline CaloJet \HT (left) and PF \HT
(right). Figure~\ref{fig:metcurves} shows the
L1 \MET efficiency curves for three L1
\MET thresholds of 30, 40, and 50\GeV. The turn-on
points in all the efficiency curves are shifted towards values larger
than the corresponding L1 trigger thresholds, because the quantities
are defined in different ways at the trigger and offline levels: the
trigger uses an object definition based on the standard calorimeter
reconstruction, whereas the offline analysis uses the PF object
definition. The same reasoning explains the slow turn-on curves
observed in the performance of the energy sum triggers versus the PF
quantities, with the resolution appearing to worsen when compared to
the performance obtained using the standard calorimeter
reconstruction. In both cases, the L1 \HT and L1 \MET efficiencies
plateau at 100\%.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.48\textwidth]{figures/l1HT_Calo_l1paper.png}
\centering
\includegraphics[width=0.48\textwidth]{figures/l1HT_PF_l1paper.png}
\caption[esumcurves]{The L1 \HT efficiency turn-on curves as a function
of the offline CaloJet (left) and PF (right) \HT, for three thresholds
($\HT>75, 100, 150$\GeV).}
\label{fig:esumcurves}
\end{figure}
\begin{figure}[tbp]
\centering
\includegraphics[width=0.48\textwidth]{figures/l1MET_Calo_l1paper.png}
\caption{The L1 \MET efficiency turn-on curve as a function of the
offline calorimeter \MET, for three thresholds ($\MET>30, 40, 50$\GeV).}
\label{fig:metcurves}
\end{figure}
\subsubsection{L1 jet and energy sum rates}
The L1 single-jet trigger rates as a function of the L1 jet threshold were
also evaluated, using a strategy similar to that described in the muon
identification section. We used data recorded in a special data set in
which only the essential information about the events was
stored, with events selected without
any bias from the trigger selection (\ie, zero-bias triggered
events), corresponding to an instantaneous luminosity of
$5 \times 10^{33}\percms$.
Figure~\ref{fig:l1jetrate} shows the L1
single-jet trigger rate as a function of the L1 jet threshold. Similarly, the
rates of the L1 energy sum triggers (the L1\_HTT and L1\_ETM triggers)
are shown in Fig.~\ref{fig:esumrates}.
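Schematically, a rate-versus-threshold curve of this kind can be derived
from a zero-bias sample as sketched below; the linear rescaling to a
target luminosity is an approximation that ignores the pileup
dependencies discussed earlier, and all names are illustrative.
\begin{verbatim}
def rate_vs_threshold(leading_jet_ets, thresholds,
                      collision_rate_hz, lumi_sample, lumi_target):
    """Trigger rate per threshold, estimated from a zero-bias sample.
    `leading_jet_ets` holds the leading L1 jet ET of each zero-bias
    event (0 if no jet was found)."""
    n_events = len(leading_jet_ets)
    scale = collision_rate_hz * lumi_target / lumi_sample
    return {thr: scale * sum(et > thr for et in leading_jet_ets) / n_events
            for thr in thresholds}
\end{verbatim}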
\begin{figure}[tbp]
\centering
\includegraphics[width=0.48\textwidth]{figures/L1SingleJetRate}
\caption{The rate of the L1 single-jet trigger as a function of the
\ET threshold. The rates are rescaled to the
instantaneous luminosity $5\times10^{33}\percms$. }
\label{fig:l1jetrate}
\end{figure}
\begin{figure}[tbp]
\centering
\includegraphics[width=0.45\linewidth]{figures/L1HTTrate}
\includegraphics[width=0.45\linewidth]{figures/L1ETMrate}
\caption{Left: Rate of the L1\_HTT trigger versus the
L1\_HTT threshold. Right: Rate of the L1\_ETM missing
transverse energy trigger as a function of the
L1\_ETM threshold. On both plots, the rates are rescaled to the
instantaneous luminosity $5\times10^{33}\percms$. }
\label{fig:esumrates}
\end{figure}
\subsubsection{The HLT jet triggers}
\label{sec:JetHLT}
At the HLT, jets are reconstructed using the anti-\kt clustering
algorithm with cone size $R = 0.5$~\cite{fastjetmanual,Cacciari:2008gp}. The inputs
for the jet algorithm are either calorimeter towers (resulting in
so-called ``CaloJet'' objects), or the reconstructed particle flow
objects (resulting in ``PFJet'' objects). In 2012, most of the jet
trigger paths used PFJets as their inputs. As the PF
algorithm uses significant CPU resources, the PFJet trigger paths have a
pre-selection based on CaloJets. Matching between CaloJets and
PFJets is then required in single-PFJet paths.
\paragraph{Single-jet paths}
The L1 thresholds for the single-jet paths were chosen such that the
L1 efficiency is at least 95\% at the corresponding HLT threshold. The
jet energy scale corrections (JEC) were applied to the single-jet
paths. The lowest-threshold path was an L1 pass-through path that
simply required an L1 jet in the event with $\pt > 16$\GeV. The single-PFJet
trigger paths for $\lumi=7\times10^{33}\percms$ (pileup
${\approx}32$), along with their L1 seeds, prescales, and approximate rates, are
listed in Table~\ref{tab:JetTrigger}. The trigger turn-on curves for
selected single-PFJet paths as a function of the transverse momentum
of the offline jet are
shown in Fig.~\ref{fig:JetEff_RunComparison}. The trigger efficiency
was calculated from an independent data sample collected using a
single isolated-muon trigger with a $\pt>24$\GeV
threshold. As in the L1 case (Section~\ref{sec:l1jet}), the
efficiency is evaluated in comparison to offline jets, in this case
PF jets.
\begin{table}[tbp]
\centering
\topcaption{Single-jet triggers used for $\lumi=7\times10^{33}\percms$
(pileup ${\approx}$32),
their prescales, and trigger rates at that instantaneous luminosity.}
\begin{tabular}{ | l c c c c | }
\hline
Path name & L1 seed & L1 prescale & HLT prescale & Approx. Rate (Hz) \\
\hline \hline
\texttt{HLT\_L1SingleJet16} & \texttt{L1\_SingleJet16} & 200,000 & 55 & 0.9\\
\texttt{HLT\_L1SingleJet36} & \texttt{L1\_SingleJet36} & 6,000 & 200 & 1.8\\
\hline
\texttt{HLT\_PFJet40} & \texttt{L1\_SingleJet16} & 200,000 & 5 & 0.2\\
\texttt{HLT\_PFJet80} & \texttt{L1\_SingleJet36} & 6,000 & 2 & 1.0\\
\texttt{HLT\_PFJet140} & \texttt{L1\_SingleJet68} & 300 & 2 & 1.5 \\
\texttt{HLT\_PFJet200} & \texttt{L1\_SingleJet92} & 60 & 2 & 1.2 \\
\texttt{HLT\_PFJet260} & \texttt{L1\_SingleJet128} & 1 & 30 & 1.3\\
\texttt{HLT\_PFJet320} & \texttt{L1\_SingleJet128} & 1 & 1 & 12.7\\
\texttt{HLT\_PFJet400} & \texttt{L1\_SingleJet128} & 1 & 1 & 3.7\\
\hline
\texttt{HLT\_Jet370\_NoJetID} & \texttt{L1\_SingleJet128} & 1 & 1 & 6.7\\
\hline
\end{tabular}
\label{tab:JetTrigger}
\end{table}
\begin{figure}[tbp]
\centering
\includegraphics[width=0.48\textwidth]{figures/L1_Jet}
\includegraphics[width=0.48\textwidth]{figures/PFJet}
\caption[JetEff_RunComparison]{Left: Efficiency of the L1
single-jet trigger
with an \ET threshold of 128\GeV as a function of
the offline jet transverse momentum.
Right:
The HLT efficiencies as a function of transverse momentum
for a calorimeter jet trigger with a 370\GeV threshold and no jet
identification requirements~\cite{CMS-PAS-JME-09-008}, and two PF jet triggers
with 320 and 400\GeV thresholds.}
\label{fig:JetEff_RunComparison}
\end{figure}
\paragraph{Dijet paths}
The dijet trigger is primarily used to collect data for
$\eta$-dependent energy corrections using a \pt-balance technique~\cite{Chatrchyan:2012xx}. This
correction removes any variation in the calorimeter response to a
fixed jet \pt as a function of jet $\eta$.
The dijet triggers require two HLT jets with an average transverse
energy greater than a given threshold. The lowest threshold path
requires two HLT jets with an average transverse energy greater than
40\GeV. The DiPFJet trigger paths for $\lumi=7\times10^{33}\percms$
(pileup ${\approx}$32), along with the L1 and HLT prescales and rates
are listed in Table~\ref{tab:DiJetTrigger}. The lowest transverse energy
unscaled path
has a threshold of 400\GeV.
\begin{table}[tbph]
\centering
\caption{Dijet triggers used at $\lumi=7\times10^{33}\percms$
(pileup ${\approx}32$), their
prescales, and trigger rates. The main purpose of these triggers is the
$\eta$-dependent calibration of the calorimeter.}
\begin{tabular}{ | l c c c c | }
\hline
Path name & L1 seed & L1 prescale & HLT prescale & Rate (Hz) \\
\hline
\texttt{HLT\_DiPFJetAve40} & \texttt{L1\_SingleJet16} & 200,000 & 1 & 0.51\\
\texttt{HLT\_DiPFJetAve80} & \texttt{L1\_SingleJet36} & 6,000 & 1 & 0.71\\
\texttt{HLT\_DiPFJetAve140} & \texttt{L1\_SingleJet68} & 300 & 1 & 1.51 \\
\texttt{HLT\_DiPFJetAve200} & \texttt{L1\_SingleJet92} & 60 & 1 & 1.36 \\
\texttt{HLT\_DiPFJetAve260} & \texttt{L1\_SingleJet128} & 1 & 15 & 1.41\\
\texttt{HLT\_DiPFJetAve320} & \texttt{L1\_SingleJet128} & 1 & 5 & 1.19\\
\texttt{HLT\_DiPFJetAve400} & \texttt{L1\_SingleJet128} & 1 & 1 & 1.44\\
\hline
\end{tabular}
\label{tab:DiJetTrigger}
\end{table}
\subsubsection{The HLT \texorpdfstring{\MET}{Missing Transverse Energy} triggers}
\label{sec:METHLT}
In this section, triggers that exclusively place requirements on
missing transverse energy are
described. Unscaled \MET triggers are of particular interest for
searches for new physics processes beyond the standard
model. Hypothetical particles, such as the lightest
supersymmetric particle (LSP), gravitons, or dark matter, would interact
only weakly in the CMS detector before escaping. Their presence can be
inferred from a measured imbalance in the energy or momentum of the observed particles in the event.
\paragraph{The \texorpdfstring {\MET}{Missing Transverse Energy} algorithms}
The \MET at the HLT is calculated using the same algorithms as in the offline
analysis. Two algorithms are used to reconstruct the \MET in the
HLT. The first algorithm, called CaloMET, calculates the \MET by
summing over all calorimeter towers,
\begin{equation}
\label{equ:CaloMET}
\MET = \sqrt{\Bigl( \sum_\text{towers} E_x\Bigr)^2 + \Bigl(\sum_\text{towers} E_y\Bigr)^2}.
\end{equation}
Another algorithm (PFMET) uses the negative of the vector sum
over transverse momenta of reconstructed anti-\kt PF
jets,
\begin{equation}
\label{equ:PFMET}
\mbox{PF }\MET = \sqrt{\Bigl( \sum_\text{PFJet} P_x\Bigr)^2 + \Bigl(\sum_\text{PFJet} P_y\Bigr)^2}.
\end{equation}
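The two definitions above translate directly into code; a schematic
Python version follows, with illustrative attribute names (\texttt{et},
\texttt{pt}, \texttt{phi}). The sign of the vector sum drops out when
taking the magnitude.
\begin{verbatim}
import math

def calo_met(towers):
    """Eq. (CaloMET): magnitude of the vector ET sum over towers."""
    ex = sum(t.et * math.cos(t.phi) for t in towers)
    ey = sum(t.et * math.sin(t.phi) for t in towers)
    return math.hypot(ex, ey)

def pf_met(pf_jets):
    """Eq. (PFMET): magnitude of the vector sum of PF jet momenta."""
    px = sum(j.pt * math.cos(j.phi) for j in pf_jets)
    py = sum(j.pt * math.sin(j.phi) for j in pf_jets)
    return math.hypot(px, py)
\end{verbatim}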
No minimum threshold requirement on the jet $\pt$ is applied in this
algorithm at the HLT. As with the PFJet trigger paths, a pre-selection
based on the CaloMET is applied before the PFMET is calculated, to
reduce the required CPU time of the PF
algorithm. Table~\ref{tab:METTrigger} shows the \MET triggers used for
$\lumi=7\times10^{33}\percms$ in 2012, together with the
prescale factors at L1 and HLT, and the rates estimated using a dedicated 2012 data sample.
\begin{table}
\centering
\topcaption{The \MET triggers used for $\lumi=7\times10^{33}\percms$ (pileup
${\approx}$32), their prescales, and rates at that luminosity. Note
that the L1 $\MET>36\GeV$ trigger (\texttt{L1\_ETM36}) was highly
prescaled starting at this luminosity and hence the need to use an
OR with the L1 $\MET>40\GeV$ trigger (\texttt{L1\_ETM40}). The
parked HLT $\MET>80\GeV$ trigger (\texttt{HLT\_MET80\_Parked}) was
also anticipated to be highly prescaled starting from
$\lumi=8\times10^{33}\percms$. The \MET parking triggers were
available at the end of 2012. ``Cleaned'' refers to application of
dedicated algorithms to remove noise events.
}
\begin{tabular}{ | l c c c | }
\hline
Path name & L1 seed & HLT prescale & Rate (Hz) \\
\hline \hline
\multicolumn{4}{|c|}{Prompt triggers} \\ \hline
\texttt{HLT\_MET80} & \texttt{L1\_ETM36 OR L1\_ETM40} & 100 & 0.48\\
\texttt{HLT\_MET120} & \texttt{L1\_ETM36 OR L1\_ETM40} & 8 & 0.71\\
\texttt{HLT\_MET120\_HBHENoiseCleaned} & \texttt{L1\_ETM36 OR L1\_ETM40} & 1 & 3.92 \\
\texttt{HLT\_MET200} & \texttt{L1\_ETM70} & 1 & 1.46 \\
\texttt{HLT\_MET200\_HBHENoiseCleaned} & \texttt{L1\_ETM70} & 1 & 0.63\\
\texttt{HLT\_MET300} & \texttt{L1\_ETM100} & 1 & 0.47\\
\texttt{HLT\_MET300\_HBHENoiseCleaned} & \texttt{L1\_ETM100} & 1 & 0.15\\
\texttt{HLT\_MET400} & \texttt{L1\_ETM100} & 1 &0.19\\
\texttt{HLT\_MET400\_HBHENoiseCleaned} & \texttt{L1\_ETM100} & 1 & 0.05\\
\texttt{HLT\_PFMET150} & \texttt{L1\_ETM36 OR L1\_ETM40} & 1 & 3.05\\
\texttt{HLT\_PFMET180} & \texttt{L1\_ETM36 OR L1\_ETM40} & 1 & 1.92\\
\hline
\multicolumn{4}{|c|}{Parked triggers} \\ \hline
\texttt{HLT\_MET80\_Parked} & \texttt{L1\_ETM36 OR L1\_ETM40} & 1 & 47.54 \\
\texttt{HLT\_MET100\_HBHENoiseCleaned} & \texttt{L1\_ETM36 OR L1\_ETM40} & 1 & 9.09\\
\hline
\end{tabular}
\label{tab:METTrigger}
\end{table}
\paragraph{Efficiency of $\ETmiss$ triggers}
The trigger turn-on curves as a function of \MET are shown in
Figs.~\ref{fig:metcurves} and \ref{fig:METEff_RunComparison}. The
trigger efficiency is calculated from an independent data sample
collected using the lowest-\pt unscaled isolated single muon
trigger path, with $\pt>24$\GeV.
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{figures/MET_Eff}
\caption[METEff_RunComparison]{
The HLT efficiencies as a function of the offline PF\MET
for different \MET thresholds ($\ETmiss = 80$--$400\GeV$). }
\label{fig:METEff_RunComparison}
\end{figure}
\subsection{\texorpdfstring{$\tau$}{Tau} lepton triggers}
\label{sec:tau}
The $\tau$-jet triggers are important for a wide variety of physics analyses
that use $\tau$ leptons decaying hadronically. In many models of new
physics, third-generation particles play a special role in
elucidating the mechanism for spontaneous symmetry breaking and
naturalness.
The $\tau$ leptons, as the charged leptons of the third generation,
constitute important signatures for $\mathrm{h} \to \tau\tau$
searches and certain new physics scenarios.
The $\tau$ triggers are designed to collect events with $\tau$ leptons
decaying hadronically. Hadronic decays make up more than 60\% of the $\tau$
branching fraction, mostly via final states with either one or three
charged hadrons in a tightly collimated jet with little additional
activity around the central cone. Leptonic $\tau$ decays are
automatically collected by the electron and muon triggers.
In what follows, we refer to
$\tau$ leptons that decay hadronically as $\tau_\mathrm{h}$ and $\tau$ leptons that decay
to electrons (muons) as $\tau_{\Pe}$ ($\tau_{\mu}$).
\subsubsection[L1 tau identification]{The L1 $\tau$ lepton identification}
\label{sec:l1tau}
A common approach to separating $\tau$ leptons decaying to hadrons ($\tau_\mathrm{h}$)
from quark and gluon jets is the use of isolation criteria. This is a
challenging task to perform at the L1 trigger because of the coarse
granularity of the L1 calorimeter readout (Fig.~\ref{fig:l1jetalgo}).
The L1 $\tau$ objects are mandatory, however, for analyses such as
$\mathrm{h} \to \tau\tau$ with both $\tau$
leptons decaying hadronically.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.75\linewidth]{figures/L1TauVeto}
\caption{Examples of trigger regions, where trigger towers with
energy deposits $\ET^\mathrm{ECAL}>4\GeV$ or $\ET^\mathrm{HCAL}>4\GeV$,
are shown as shaded squares. The L1 $\tau$ veto bit is not set if
the energy is contained in a square of $2{\times}2$ trigger towers
(a). Otherwise, the $\tau$ veto bit is set (b). }
\label{figure:L1TauVeto}
\end{figure}
The L1 $\tau_\mathrm{h}$ identification starts from previously identified L1 jet
objects (Section~\ref{sec:l1jet}), which are further examined
using an isolation variable and a $\tau$ veto bit. We require that seven
out of the eight noncentral trigger regions contain only small
energy deposits ($\ET < 2\GeV$); this acts as an isolation
requirement. In addition, for each trigger region a $\tau$ veto bit is
set if the energy deposit is spread over more than $2\times2$ trigger
towers (Fig.~\ref{figure:L1TauVeto}). The L1 $\tau$ objects are
required to have no $\tau$ veto bit set in any of the nine trigger regions,
further constraining the energy spread within the two most energetic
trigger regions.
If either the isolation or the $\tau$ veto bit requirement
fails, the object is regarded as an L1 central jet.
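The decision logic for a single $3\times3$ window can be summarized by
the following sketch; the window is assumed to be already extracted, and
the 2\GeV isolation threshold is the one quoted above.
\begin{verbatim}
def passes_l1_tau(window_et, veto_bits, iso_threshold=2.0):
    """L1 tau decision for one 3x3 window of trigger regions.
    window_et[1][1] is the central region; veto_bits[i][j] is True
    if the regional deposit spreads beyond a 2x2 tower square."""
    neighbors = [window_et[i][j] for i in range(3) for j in range(3)
                 if (i, j) != (1, 1)]
    # isolation: at least seven of the eight neighbors must be quiet
    if sum(et < iso_threshold for et in neighbors) < 7:
        return False
    # no tau veto bit may be set in any of the nine regions
    return not any(veto_bits[i][j] for i in range(3) for j in range(3))
\end{verbatim}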
The $\mathrm{h} \to \tau_h\tau_h$
search~\cite{Chatrchyan:2014nva} uses
an L1 seed requiring two L1 $\tau$ objects with $\pt>44\GeV$ and
$\abs{\eta} < 2.17$. For large $\tau$ energies, the isolation
criteria introduce an inefficiency for genuine $\tau$ leptons. This is
recovered by also allowing events with two L1 jets (central or
$\tau$) with $\pt>64\GeV$ and $\abs{\eta} < 3.0$ to be
selected. Figure~\ref{figure:L1TauRate} shows the rate of these
L1 seeds as a function of the applied $\pt$ threshold on the two
objects.
The measured efficiency of this L1 seed reaches a plateau of 100\% at
$\pt \approx 70\GeV$, as shown in
Fig.~\ref{figure:L1TauEfficiency}. The efficiency as a function of the
pseudorapidity is obtained using $\tau$ leptons with $\pt > 45\GeV$;
this requirement emulates the \pt requirement used in the
$\mathrm{h} \to \tau_\mathrm{h}\tau_\mathrm{h}$ search.
\begin{figure}[tbph]
\centering
\includegraphics[width=0.6\linewidth]{figures/L1TauRate}
\caption{Rate of L1 double-$\tau$ and double-jet seeds as a function of the
$\pt$ threshold on the two objects. The double-$\tau$ objects are restricted to
$\abs{\eta}<2.17$, while the double-jet requires two seed objects (either $\tau$ or jet)
within $\abs{\eta}<3.0$. The given rates are scaled to an instantaneous luminosity of
$5 \times 10^{33}\percms$.}
\label{figure:L1TauRate}
\end{figure}
\begin{figure}[tbph]
\centering
\includegraphics[width=0.45\linewidth]{figures/L1TauPtEfficiency}
\includegraphics[width=0.452\linewidth]{figures/L1TauEtaEfficiency}
\caption{Efficiency of the double-$\tau_\mathrm{h}$ L1 trigger with a
threshold of 44 and 64\GeV on the L1 $\tau$ and jet objects,
respectively. Presented is the efficiency of one $\tau$ lepton candidate as
a function of transverse momentum (left) and pseudorapidity
(right).}
\label{figure:L1TauEfficiency}
\end{figure}
\subsubsection[HLT tau lepton identification]{The HLT $\tau$ lepton identification}
\label{sec:tauHLT}
The $\tau$-jet triggers identify and select events with hadronic
decays of the $\tau$ leptons; leptonic decays are
selected as prompt electrons and muons.
There are three levels of the $\tau$ HLT; each is designed to reduce
the rate before running the more complex subsequent step. The first
step, which we call the level-2 (L2) $\tau$ trigger, is built from
CaloJets. The second step is referred to as level-2.5 (L2.5); this
step requires isolation based on tracks reconstructed in the pixel
detector and matched to the $\tau$ candidate. The last step, called level-3 (L3), uses the PF
algorithm to build $\tau$ lepton candidates using information from all major
subdetectors. Offline $\tau$ reconstruction with
CMS is described in more detail elsewhere~\cite{Chatrchyan:2012zz}. The HLT $\tau$
paths come in two distinct varieties. The first is for $\tau_\mathrm{h}$
candidates triggered with the L1 trigger. These $\tau$ lepton triggers
have an L2 and L2.5 step to reduce the rate before running the more
advanced L3 $\tau$ reconstruction. The second type of $\tau$ trigger
path is triggered at L1 by a lepton or another event quantity
such as \MET. These triggers have HLT electron, muon, or missing
energy selections to reduce the rate before running the L3 $\tau$
algorithm.
The L2 $\tau$-jet trigger reconstruction is based entirely on calorimeter
information. The CaloJets are built with a cone of radius 0.2,
seeded by L1 $\tau$ jets (Section~\ref{sec:l1tau}) or L1 central
jets. The only selection applied is a threshold on the jet transverse
energy.
The L2.5 step consists of a track-based isolation applied to the L2
$\tau$ candidates that are above the \pt threshold. The isolation starts
by reconstructing the pixel tracks and selecting those coming from the
primary vertex and matched to the L2 $\tau$ candidate. An L2 $\tau$ is
considered isolated if there is no pixel track from the same vertex
with transverse momentum greater than 1.2\GeV in an isolation annulus
of $0.2 <\DR< 0.4$ around the $\tau$ candidate.
Finally, the L3 $\tau$ reconstruction uses the PF
algorithm. The online reconstruction uses a so-called \emph{fixed cone} $\tau$
algorithm with a signal cone of $\DR = 0.18$, which contains the
$\tau$ decay products, and an isolation annulus of
$0.18<\DR< 0.45$. The trigger uses tracker-only isolation, built using
tracks from a vertex compatible with the primary vertex of the
$\tau$, to minimize the pileup dependence. There are two isolation
working points: loose and tight. A loose $\tau$ is considered
isolated if no tracks with $\pt>1.5\GeV$
are found in the isolation annulus. A $\tau$ candidate is considered to be ``tight'' if it has no
tracks with $\pt>1.0\GeV$ in the annulus, with the signal/isolation
cone boundary moved to $\DR = 0.15$.
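Since the two working points differ only in the track \pt cut and in the
inner edge of the annulus, the decision admits a compact implementation
such as the sketch below; the attribute names and the vertex-compatibility
flag are illustrative, and the outer annulus edge of the tight point is
assumed unchanged.
\begin{verbatim}
import math

def _delta_r(eta1, phi1, eta2, phi2):
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def tau_track_isolated(tau, tracks, working_point="loose"):
    """Track isolation for an HLT tau candidate, per the two
    working points described above."""
    if working_point == "loose":
        pt_cut, inner, outer = 1.5, 0.18, 0.45
    else:   # tight: harder pt cut, boundary moved to 0.15
        pt_cut, inner, outer = 1.0, 0.15, 0.45
    for trk in tracks:
        if not trk.from_tau_vertex:       # suppress pileup tracks
            continue
        dr = _delta_r(trk.eta, trk.phi, tau.eta, tau.phi)
        if inner < dr < outer and trk.pt > pt_cut:
            return False                  # annulus is not empty
    return True
\end{verbatim}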
Trigger efficiencies are measured individually in each step. For the
double-$\tau$ trigger a per-leg efficiency is measured. A sample of
$\Z\to\tau\tau$ events selected by a single-muon trigger is
used for the measurement, with one $\tau$ decaying hadronically and
the other to muon and neutrinos. The $\tau_\mathrm{h}\tau_\mu$ candidates are
selected and discriminated against multijet and W boson backgrounds
using muon isolation, charge requirements, and low transverse mass
$M_\mathrm{T}$ to
achieve a $\tau_\mathrm{h}$ purity of approximately 50\%.
The efficiency for the L2/L2.5 stages of the $\tau$ trigger with a
transverse momentum threshold of 35\GeV is shown in
Fig.~\ref{figure:L2TauEff}. The efficiency reaches a plateau of
93.2\% at 55\GeV.
\begin{figure}[tbph]
\centering
\includegraphics[width=0.45\textwidth]{figures/L2TauPt}
\includegraphics[width=0.45\textwidth]{figures/L2TauEta}
\caption{Efficiency of the L2 and L2.5 $\tau$ trigger with a
35\GeV threshold as a function of the offline reconstructed
$\tau$ transverse momentum (left) and pseudorapidity (right). }
\label{figure:L2TauEff}
\end{figure}
For the L3 efficiency measurement, a slightly different event
selection is applied: $\Z\to\tau_\mathrm{h}\tau_\ell$ events
(with $\ell = \Pe$ or $\mu$) are selected with a
muon-plus-$\ETmiss$ or a single-electron trigger.
Tight isolation on the
electron/muon and a transverse mass $M_\mathrm{T}<20\GeV$, measured between the
electron/muon and the missing energy, are also required. The purities
after this selection are 78\% and 65\% for
$\abs{\eta_{\tau_\mathrm{h}}}<1.5$ and $1.5<\abs{\eta_{\tau_\mathrm{h}}}<2.3$, respectively.
The event samples used to calculate the efficiencies in the simulation
are mixed with simulated \PW+jets events to produce a compatible purity.
The efficiency for the L3 $\tau$ trigger with a 20\GeV threshold is
shown in Fig.~\ref{figure:L3TauEff}. The efficiency reaches a plateau
of 90\% quickly, at about 22\GeV. The $\tau_\mathrm{h}\tau_\mathrm{h}$
triggers use the tight working point, since this event topology is
dominated by the multijet background; the tighter working point
substantially reduces the rate and provides an efficiency of 80\% on
the plateau. In offline analyses, the efficiency of the simulation is
corrected as a function of the transverse momentum to match the
efficiency measured in data events.
\begin{figure}[tbph]
\centering
\includegraphics[width=0.32\textwidth]{figures/TauPtEff}
\includegraphics[width=0.32\textwidth]{figures/TauEtaEff}
\includegraphics[width=0.32\textwidth]{figures/TauPVSEff}
\caption{Efficiency of the loose L3 $\tau$ algorithm from the
$\tau_h\tau_\mu$ events plotted as a function of
offline $\tau_h$ transverse momentum (left),
pseudorapidity (center), and number of vertices (right). }
\label{figure:L3TauEff}
\end{figure}
In summary, the $\tau$ HLT is used in a variety of important
physics analyses, including standard model Higgs boson searches. These
analyses combine the $\tau_\mathrm{h}$ trigger algorithms described
above with other HLT objects, such as electrons, muons, and
missing transverse energy. These analyses have efficiencies as high
as 90\% while maintaining a manageable HLT rate.
\subsection{b-quark jet tagging}
\label{sec:BTag}
Many important processes studied at the LHC contain jets
originating from b quarks. The precise identification of b jets
is crucial to reduce the large backgrounds. In CMS, this background
can be suppressed at the HLT by using b tagging algorithms,
giving an acceptable trigger rate with large signal
efficiency.
The b tagging algorithms exploit the fact that B hadrons typically have
longer decay lifetimes than
the hadrons made of light or charm quarks.
As a consequence, their decay product tracks and vertices
are significantly displaced from the
primary vertex. Similarly, B hadrons decay more frequently to final
states with leptons than their light-flavor counterparts.
The track counting (TC) and combined secondary vertex (CSV) algorithms
used for offline b tagging~\cite{CMS-PAS-BTV-12-001} are adapted for
use at the HLT to trigger on events containing jets originating from b
quarks. The TC algorithm uses the impact parameter
significance of the tracks in the jets to discriminate jets
originating from b quarks from those of other flavors.
The CSV algorithm combines information on the impact parameter
significance of the tracks and the properties of reconstructed secondary
vertices in the jets into a multivariate discriminant.
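As an illustration of the TC idea, the sketch below computes the
discriminant as the impact parameter significance of the $n$-th highest
track; the choice $n=2$ and the cut value are illustrative, not the
online settings.
\begin{verbatim}
def tc_discriminant(tracks, n=2):
    """Track counting sketch: the discriminant is the impact
    parameter significance (IP/sigma_IP) of the n-th highest track
    associated with the jet."""
    sig = sorted((t.ip / t.ip_err for t in tracks), reverse=True)
    return sig[n - 1] if len(sig) >= n else float("-inf")

def tc_tagged(jet_tracks, cut=3.0, n=2):
    """Jet is b tagged if the discriminant exceeds `cut`; online
    thresholds at L2.5 are typically loose (see the tracking
    discussion below)."""
    return tc_discriminant(jet_tracks, n) > cut
\end{verbatim}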
The choice of which b tagging algorithm is used in a particular HLT path
depends on timing requirements. A compromise has to be found that keeps
the CPU usage and trigger rates at low levels while keeping the trigger
efficiency as high as possible. Therefore, the online b tagging techniques
were designed to be very flexible, allowing the use not only of
different algorithms, but also of different input objects, namely primary
vertices and tracks, reconstructed with different methods. The b tagging
algorithms depend on the primary vertices found with the fast primary
vertex algorithm described in Section~\ref{sec:primvtx}.
\subsubsection{Tracking for b tagging}
\label{subsec:BTagTrack}
Three tracking reconstruction methods are available at the HLT
(Section~\ref{sec:HLTTrack}) and are used for b tagging: pixel,
regional, and iterative tracking.
The reconstruction of pixel tracks is very fast; however, their
performance is limited. Thus, pixel tracks are essentially used only
for online b tagging with the TC algorithm, applied to jets reconstructed
from energy deposits in the calorimeter at an intermediate step
(L2.5) of the trigger paths. At L2.5 the b tagging discriminant
thresholds are typically loose, with the sole aim of reducing the
input rates to the slower, but better performing, reconstruction of
regional tracks. The regional tracks are used as input to b tagging
at a later step, called L3, of the event triggering. Paths using online
PF jets have tracks reconstructed with the high-performance
iterative tracking, which can be used by both online algorithms.
\subsubsection{Performance of online b tagging}
\label{subsec:BTagPerf}
\begin{figure}[tbhp]
\centering
\includegraphics[width=0.6\textwidth]{figures/Btag_Online_TCxCSV}
\caption{The efficiency to tag b quark jets versus the mistag rate,
obtained from Monte Carlo simulations, for the track counting (TC)
and for the combined secondary vertex (CSV) algorithms. As expected from
offline studies, the CSV algorithm performs better than the
TC algorithm.}
\label{figure:btag_performance}
\end{figure}
\begin{figure}[tbp]
\centering
\includegraphics[width=0.6\textwidth]{figures/Btag_Online_CSV_efficicency}
\caption{The efficiency of the online CSV trigger as a function of
the offline CSV tagger discriminant, obtained from the
data and from Monte Carlo simulations. Good agreement between
the two is observed.}
\label{figure:btag_efficiency}
\end{figure}
The performance of the online b tagging at the HLT is illustrated
in Figs.~\ref{figure:btag_performance} and
\ref{figure:btag_efficiency}. Figure~\ref{figure:btag_performance} shows
the efficiency to tag b quark jets versus the mistag rate, obtained from
Monte Carlo simulations, for both algorithms. As expected
from studies of the performance of the algorithms used offline, the
CSV algorithm performs better than the TC algorithm.
The efficiency of the online CSV trigger as a function of the offline
CSV tagger discriminant, obtained from the data, is shown in
Fig.~\ref{figure:btag_efficiency} for a trigger path with selections
on central PF jets with $\ET>30\GeV$ and $\MET>80\GeV$ relative to
an identical (prescaled) trigger path without the b tagging part.
The data sample is a \ttbar-enriched control region (requiring at least
three jets and at least one isolated lepton); this defines the
denominator of the efficiency ratio.
The numerator additionally requires the online b tagging discriminant
to pass a requirement corresponding to $\epsilon_\text{CSV}>70\%$. For the
simulation studies, a sample of \ttbar events is used with the same
selection. The choice to use $-\ln(1-\text{CSV})$ on the $x$ axis
is motivated by the fact that the distribution of the CSV discriminant
is limited to the range between zero and unity, and peaks at unity; this
choice makes it possible to visualize the turn-on behavior. A typical
requirement of $\text{CSV}>0.9$ corresponds to 2.3 on the $x$ axis.
\subsection{Heavy ion triggers}
\label{sec:hinobj}
The running conditions for PbPb collisions are significantly different
from the pp case. The instantaneous luminosity delivered by the LHC in
the 2010 (2011) PbPb running periods was $3\times 10^{25}$
($5\times 10^{26}$)\percms resulting in maximum interaction rates of
250\unit{Hz} (4\unit{kHz}), much lower than in pp running, with a negligible pileup
contribution and an inter-bunch spacing of 500~ns (200~ns). During the
pPb run in 2013 an instantaneous luminosity of
$10^{29}\percms$ was achieved, corresponding to an interaction
rate of 200\unit{kHz}, again with a very low pileup contribution.
In PbPb collisions, the number of produced particles depends strongly
on the geometrical overlap of the Pb ions at the time of the
collision. The number of charged particles produced per unit of
pseudorapidity, $\rd{}N_\mathrm{ch}/\rd\eta$, varies from event to event, from
${\approx}10$ for glancing collisions to ${\approx}1600$ for head-on
collisions. The large particle multiplicity of head-on collisions
leads to very high detector occupancies in the inner layers of the
silicon tracker. For such high occupancies the hardware-based
zero-suppression algorithm implemented in the front-end drivers (FEDs)
of the tracker does not function reliably. As a consequence, the
tracker had to be read out without hardware zero suppression, and the
zero suppression was performed offline in 2010 and in the HLT in
2011. Table~\ref{tab:ionrunning} shows a summary of the conditions in
the various heavy ion running periods.
A consequence of reading out the tracker without zero suppression is the
limited data throughput from the detector
due to the large event size. This limits the readout rate of the
detector to 3\unit{kHz} in PbPb collisions. The limit has to be taken into
account when setting up the
trigger menu for HI collisions.
\begin{table}[tbp]
\topcaption{Summary of the heavy ion running conditions in various
data-taking periods.}
\label{tab:ionrunning}
\centering
\begin{tabular*}{0.75\textwidth}{@{\extracolsep{\fill}}|cccc|}
\cline{1-4}
Run period & Ion species ($\sqrt{s_\mathrm{NN}}$) & Max. collision rate & Zero suppression \\
\cline{1-4}
Winter 2010 & PbPb (2.76\TeV) & 200\unit{Hz} & Offline \\
Winter 2011 & PbPb (2.76\TeV) & 4500\unit{Hz} & HLT \\
Winter 2013 & pPb (5.02\TeV) & 200\unit{kHz} & FED \\
\cline{1-4}
\end{tabular*}
\end{table}
The HI object reconstruction is based on the pp HLT reconstruction
algorithms described in the previous sections. The physics objects or event
selection criteria used in the trigger menu are the following:
\begin{itemize}
\item Hadronic interactions (minimum bias);
\item Jets;
\item Photons;
\item Muons;
\item High-multiplicity events.
\end{itemize}
In the following, we discuss the differences between the algorithms
used in HI running and their pp counterparts, and the performance of
these algorithms in the PbPb case.
\textbf{Hadronic interactions.} Since the interaction probability per
bunch crossing during HI data taking is only $\approx10^{-3}$,
it is necessary to deploy a dedicated trigger to select hadronic
interactions. This selection is based on coincidences between the
trigger signals from the $+z$ and $-z$ sides of either the beam
scintillation counters (BSCs) or the
HF, which cover a pseudorapidity range of $2.9<\abs{\eta}<5.2$. This
trigger has a selection efficiency of more than 97\% for hadronic
inelastic collisions and is thus also referred to as a ``minimum
bias'' trigger. The selection efficiency of this trigger was
determined using a MC simulation of HI events using the
{\textsc{hydjet}} event generator \cite{Lokhtin:2005px} and was
cross-checked with a control data sample
selected using the BPTX signal to identify crossing beam bunches. The
event sample selected this way is referred to as ``zero bias'' sample.
From the zero-bias sample, inelastic events can be selected by
requiring a charged-particle track consistent with originating from
the beam crossing region. The fraction of the zero bias sample selected using the minimum bias trigger is consistent with
the selection efficiency determined from simulated events.
\textbf{Jets.} The jet reconstruction algorithm used for HI data taking
closely follows the corresponding pp algorithm which reconstructs
calorimeter-based jets as described in Sections~\ref{sec:l1jet} and
\ref{sec:JetHLT}, with the addition of a step subtracting the
high-multiplicity underlying event using the iterative pileup
subtraction technique \cite{Kodolova:2007hd}. During the 2010 and
2011 HI data-taking periods an iterative-cone type algorithm was used
for jet clustering.
The efficiency of the jet triggers deployed for the 2010 PbPb
run is illustrated in Fig.~\ref{fig:Jet50U} by the efficiency turn-on
curve of the Jet50U trigger, which selected events
based on uncorrected jet energies. The efficiencies are given as a
function of the leading-jet transverse momentum, for offline-corrected (left)
and for uncorrected (right) jets. They were determined
using offline jets, reconstructed with the iterative cone algorithm with pileup
subtraction, in a data sample collected with a minimum bias trigger.
The efficiency is defined as the fraction of the minimum bias sample containing a leading jet of a given \pt that is selected by the jet trigger.
During the 2011 PbPb run, the jet triggers had energy corrections applied
at the HLT, leading to sharper turn-on curves and thereby to more
efficient data taking. Figure~\ref{fig:Jet80} illustrates the
improvement by showing the efficiency of the Jet80 trigger as a function
of the leading-jet transverse momentum in the $\abs{\eta}<2$ region.
The efficiencies are evaluated from a minimum bias sample, as in the 2010 case,
with the jets reconstructed using the anti-\kt algorithm based on PF objects,
again subtracting the underlying event with the iterative pileup subtraction
technique. The efficiencies are given for various cone radii.
\begin{figure}[tph]
\centering
\includegraphics[width=0.49\textwidth]{figures/HIN/TrigEffCorrected151058and151025}
\includegraphics[width=0.49\textwidth]{figures/HIN/TrigEffUnCorrected151058and151025}
\caption{Efficiency curves for the Jet50U trigger in PbPb at $\sqrt{s_\mathrm{NN}}=2.76\TeV$, as a function of
the corrected (left) and uncorrected (right) leading jet transverse
momentum.}
\label{fig:Jet50U}
\end{figure}
\begin{figure}[tph]
\centering
\includegraphics[width=0.49\textwidth]{figures/HIN/triggerEfficiency_Jet80}
\caption{Efficiency curves for the Jet80 trigger in PbPb at
$\sqrt{s_\mathrm{NN}}=2.76\TeV$, as a function of the leading jet
transverse momentum in the $\abs{\eta}<2$ region evaluated from
minimum bias sample. The red, black, and blue points correspond
to anti-\kt jets with cone size of 0.2, 0.3, and 0.4,
respectively. }
\label{fig:Jet80}
\end{figure}
\textbf{Photons.} During the 2010 PbPb run, the photon triggers employed at the HLT were based on the energy
deposits in the ECAL, reconstructed using the island clustering algorithm~\cite{cms-e7}. This is the same algorithm as
used for offline analyses of the 2010 data, but without the energy corrections
applied at the HLT. The trigger efficiency of the uncorrected Photon15 trigger for minimum bias events is
shown in the left panel of Fig.~\ref{fig:PhotonTriggers}.
For the 2011 data taking, energy corrections were applied already at the HLT. The performance
of such corrected HLT photon paths is illustrated in the right panel of Fig.~\ref{fig:PhotonTriggers},
which shows the efficiency turn-on curve for the Photon40 trigger,
again determined with respect to minimum bias events.
\begin{figure}[tbph]
\centering
\includegraphics[width=0.407\textwidth]{figures/HIN/trigger_turnon_curve_photon15_mb}
\includegraphics[width=0.584\textwidth,trim={0 .3cm 0 0},clip]{figures/HIN/plot_triggerEfficiencyPhoton40}
\caption{Trigger efficiency of the uncorrected Photon15 (left) and the corrected Photon40 (right)
triggers as a function of corrected offline photon transverse
momentum, in PbPb collisions at $\sqrt{s_\mathrm{NN}} = 2.76\TeV$.}
\label{fig:PhotonTriggers}
\end{figure}
\textbf{Muons.} Efficient triggering on high-\pt muons
is of primary importance for the HI physics program in CMS.
During data-taking both single- and double-muon triggers were
deployed to allow for maximal flexibility in event selection.
The per-muon trigger efficiency of the double-muon trigger (which
requires two muons with $\pt> 3\GeV$) in the 2011 PbPb data
determined by a tag-and-probe method is shown in
Fig.~\ref{fig:DoubleMu3}. The three panels show the efficiency as a
function of transverse momentum, pseudorapidity, and the overlap
between the two colliding nuclei, expressed by the ``number of
participants.'' Data are shown in red, and simulated \Z bosons
embedded in a background simulated with \textsc{hydjet} are shown in blue. On average, the
trigger efficiency is very good, reaching 98.2\% as obtained from
the tag-and-probe method with simulated data.
The single-muon trigger efficiencies for the daughters of $\JPsi$
mesons with $\pt >6.5$\GeV in the 2011 PbPb data as a function of
transverse momentum, pseudorapidity, and the number of participants
are shown in the various panels of Fig.~\ref{fig:DoubleMuOpen}. The
\pt- and $\eta$-integrated trigger efficiency is $(86.0\pm0.2)\%$ in MC,
and $(91.5\pm0.4)\%$ in data. The trigger efficiency shows no
significant dependence on the number of participants, as expected, in
either data or simulation (Fig.~\ref{fig:DoubleMuOpen}, right).
\begin{figure}[tbph]
\centering
\includegraphics[width=0.32\textwidth]{figures/HIN/TnP_SingleMu_triggerEff_pt}
\includegraphics[width=0.32\textwidth]{figures/HIN/TnP_SingleMu_triggerEff_eta}
\includegraphics[width=0.32\textwidth]{figures/HIN/TnP_SingleMu_triggerEff_Npart}
\caption{Per-muon triggering efficiency of the HLT HI double-muon
trigger as a function of \pt (left), $\eta$ (center), and average
number of participant nucleons (right). \cPZ\xspace bosons in
data (red) are compared to simulated \cPZ\xspace bosons embedded in HI
background simulated with \textsc{hydjet} (blue).}
\label{fig:DoubleMu3}
\end{figure}
\begin{figure}[tbph]
\centering
\includegraphics[width=0.32\textwidth]{figures/HIN/HIN-12-014-Trg_Comp_pt}
\includegraphics[width=0.32\textwidth]{figures/HIN/HIN-12-014-Trg_Comp_eta}
\includegraphics[width=0.32\textwidth]{figures/HIN/HIN-12-014-Trg_Comp_cent}
\caption{Single-muon trigger efficiencies as functions of probe
muon transverse momentum, pseudorapidity, and number of participants in the 2011 PbPb data.
Red full circles are simulation and blue full squares are data. The numbers quoted in the legends of the figures are the integrated efficiencies.}
\label{fig:DoubleMuOpen}
\end{figure}
\begin{figure}[tbph]
\centering
\includegraphics[width=0.49\textwidth]{figures/HIN/hin-13-002-multtrigeff}
\includegraphics[width=0.49\textwidth]{figures/HIN/hin-13-002-multspectra-norm}
\caption{Left: Trigger efficiency as a function of the offline track multiplicity, for the three most
selective high-multiplicity triggers. Right: The spectrum of the offline tracks for minimum bias and
for all the different track-based high-multiplicity triggers in the 2013 pPb data.}
\label{fig:highMult}
\end{figure}
\textbf{High-multiplicity events.} In order to trigger on
high-multiplicity events, several trigger paths were deployed during
the HI data-taking periods. Triggers based on energy deposits
in the calorimeter systems, on signals in the BSC
detectors, and on track multiplicities were
employed in supplementary roles. The efficiency of the
high-multiplicity track triggers used during the 2013 pPb run is shown
in the left panel of Fig.~\ref{fig:highMult}. The histograms
correspond to different thresholds of the same type of track-based
trigger, and the efficiencies are shown as a function of the offline track
multiplicity. The efficiencies are determined using either minimum bias
events or a lower-threshold high-multiplicity trigger as a
reference. The efficiency is defined as the fraction of events in the
reference sample passing a given trigger threshold, and is shown as a
function of the number of offline reconstructed tracks. The gain in the
number of high-multiplicity events is demonstrated in the right
panel of Fig.~\ref{fig:highMult}.
\section{Physics performance of the trigger}
\label{sec:hpa}
In the previous sections, we described the performance of the CMS
trigger system for single- and multi-object triggers. However, most
physics analyses published using the data taken in the first years of
the LHC were performed using more complicated triggers. These triggers
either take advantage of different categories of objects, such as a
mixture of jets and leptons, or are topological triggers, which
consider the event as a whole and compute quantities such as the scalar
sum of jet transverse energies, \HT, or the missing transverse energy.
In this section, to illustrate the performance of the trigger system,
we give specific examples of some high-priority analyses that CMS
carried out with the data taken in 2012, at a center-of-mass energy of
$\sqrt{s}=8\TeV$.
\subsection{Higgs boson physics triggers}
\label{sec:higpag}
The observation of the Higgs boson~\cite{Chatrchyan:2012ufa,cms-higgs-long-paper} is the most important CMS
result in the first LHC run. In this section, we discuss the
CMS trigger performance for Higgs boson physics. Single-object
triggers were discussed in Section~\ref{sec:objid}. In
this section, more complex triggers are described. The strategy of
combining different trigger paths to maximize the signal acceptance
for the Higgs boson measurements is also presented.
\subsubsection[Triggers for Higgs boson diphoton analysis]{$\mathrm{h} \to\gamma\gamma$}
As already discussed in Section~\ref{sec:egammaHLT}, diphoton triggers
have been designed to efficiently collect \HGG\xspace events. To be
as inclusive as possible, any photon that passes the general
identification requirements described in Section~\ref{sec:egammaHLT}, and either
the isolation and calorimeter identification or the \RNINE requirement, is
accepted in the diphoton path. Asymmetric thresholds of 26\GeV on
the leading photon and 18\GeV on the subleading photon have been
applied, together with a minimum invariant mass requirement of
60\GeV on the diphoton system. Late in the 2012 data-taking period, a similar path
with more asymmetric \ET requirements was added to the HLT menu to enhance the
discriminating power for the nonstandard Higgs boson spin-0 and
spin-2 scenarios. The performance of the trigger was shown in
Figs.~\ref{fig:hlt_26_pt_eta} to~\ref{fig:hlt_26_nvtx}.
\subsubsection[Triggers for multi-lepton Higgs boson analyses]{$\PH \to \Z\Z \to 4\ell$}
The four-lepton channel provides the cleanest experimental signature
for the Higgs boson search: four isolated leptons originating from a common
vertex. As the number of expected events is very low, it is necessary
to preserve the highest possible signal efficiency. The
analysis performance therefore relies heavily on the lepton
reconstruction and identification efficiency and, because of the low branching
fraction of the Higgs boson into $\Z\Z$, on a robust trigger strategy that avoids any signal loss. The
events are selected by requiring four leptons (electrons or muons)
satisfying identification, isolation, and impact parameter requirements
(Sections~\ref{sec:egammaHLT} and~\ref{sec:muHLT}).
The triggers described in this section were instrumental in the Higgs
boson discovery and in the studies of its
properties~\cite{Chatrchyan:2012ufa,Chatrchyan:2013mxa}.
In the following, we will describe the main triggers that are used to
collect most of the data, as well as a set of utility
triggers used to measure the online selection efficiencies.
The main trigger selects $\PH\to \Z\Z\to 4\ell$
events with an efficiency larger than 95\% for
$m_{\PH} = 125$\GeV, at a rate of less than 10\unit{Hz} at an instantaneous luminosity of
$5 \times 10^{33}\percms$. This trigger has loose isolation and identification requirements applied, which are
critical for a proper background estimation. To improve the absolute
trigger efficiency, a combination of single-electron and dielectron
triggers was used; this combination achieved a 98\% overall trigger
efficiency.
For the $\PH\to \Z\Z\to 4\ell$ analysis, a basic set
of double-lepton triggers is complemented by triple-electron paths
in the 4e channel, providing an efficiency gain of 3.3\% for signal
events with $m_{\PH} = 125$\GeV. The minimum transverse momenta of the first and
second leptons are 17 and 8\GeV, respectively, for the double-lepton
triggers, while they are 15, 8, and 5\GeV for the triple-electron
trigger. The trigger paths used in 2012 are listed in
Table~\ref{tab:anatriggers}, where ``CaloTrk'' stands for calorimeter-
and tracker-based identification and isolation requirements applied
with very loose criteria, while the ``CaloTrkVT'' name denotes
triggers that make use of the same objects as discriminators, but with more stringent requirements placed on them.
\begin{table}[tbh]
\centering
\topcaption{
Triggers used in the $\PH\to4\ell$ event selection (2012 data and
simulation). No prescaling is applied to these triggers.
}
\label{tab:anatriggers}
\resizebox{\textwidth}{!}{
\begin{tabular}{|lll|}
\hline
Channel & \multicolumn{1}{c}{HLT path} & \multicolumn{1}{c}{L1 seed} \\ \hline
4e & \texttt{ HLT\_Ele17\_CaloTrk\_Ele8\_CaloTrk } & \texttt{ L1\_DoubleEG\_13\_7 } \\
& \texttt{ OR HLT\_Ele15\_Ele8\_Ele5\_CaloIdL\_TrkIdVL } & \texttt{ L1\_TripleEG\_12\_7\_5 } \\
4$\mu$ & \texttt{ HLT\_Mu17\_Mu8 } & \texttt{ L1\_Mu10\_MuOpen } \\
& \texttt{ OR HLT\_Mu17\_TkMu8 } & \texttt{ L1\_Mu10\_MuOpen } \\
2e2$\mu$ & \texttt{ HLT\_Ele17\_CaloTrk\_Ele8\_CaloTrk } & \texttt{ L1\_DoubleEG\_13\_7 } \\
& \texttt{ OR HLT\_Mu17\_Mu8 } & \texttt{ L1\_Mu10\_MuOpen } \\
& \texttt{ OR HLT\_Mu17\_TkMu8 } & \texttt{ L1\_Mu10\_MuOpen } \\
& \texttt{ OR HLT\_Mu8\_Ele17\_CaloTrk } & \texttt{ L1\_MuOpen\_EG12 } \\
& \texttt{ OR HLT\_Mu17\_Ele8\_CaloTrk } & \texttt{ L1\_Mu12\_EG6 } \\
\hline
\end{tabular}}
\end{table}
Figure~\ref{fig:triggerplots4l} shows the efficiency of the trigger
paths described above as a function of the Higgs boson mass, as
determined from simulation, for signal events with four generated
leptons in the pseudorapidity acceptance and for those that pass
the analysis selection. With these trigger paths, the trigger
efficiency within the acceptance of this analysis is greater than 99\%
for a Higgs boson signal with $m_{\PH} > 120\GeV$.
\begin{figure}[tbph]
\centering
\includegraphics[width=0.45\linewidth]{figures/4ltriggerEff_genacc.pdf}
\includegraphics[width=0.45\linewidth]{figures/4ltriggerEff_fullsel.pdf}\\
\caption{Trigger efficiency for simulated signal events with four
generated leptons in the pseudorapidity acceptance (left), and
for simulated signal events that have passed the full
$\PH\to4\ell$ analysis
selection (right).
}
\label{fig:triggerplots4l}
\end{figure}
The tag-and-probe method is used to measure the per-lepton
efficiency for double-lepton triggers, as described in
Section~\ref{sec:egammaHLT} for electrons, and in Section~\ref{sec:muHLT} for muons.
The performance in data and simulation of the per-leg efficiencies of
the double-lepton triggers
are shown in those sections.
The position and the steepness of the turn-on curve of the trigger
efficiency as a function of the lepton \pt, as measured in data, are in good
agreement with the expectations from simulation for all the triggers
considered.
reveals generally lower efficiency in data, compared to simulation,
by about 1--2\%. The effect of this inefficiency is mitigated,
however, by the fact that multiple leptons in the event can pass the trigger requirements, and so no correction factor is applied. A systematic
uncertainty of 1.5\% in the expected signal yields is included to
allow for this difference in trigger performance between data and
simulation.
In Table~\ref{tab:effitriggers}, the trigger paths used to select the
tag-and-probe pairs for the efficiency measurements are listed. In
the case of muons, the prescaled double-muon triggers in the \JPsi mass
window are used to select a low-\pt muon probe to measure the
identification and isolation efficiency for muons with $\pt <
15\GeV$.
\begin{table}
\centering
\topcaption{
Triggers used for tag-and-probe (T\&P) efficiency measurements of
four-lepton events in 2012 data and
simulation: CaloTrk = CaloIdT\_CaloIsoVL\_TrkIdVL\_TrkIsoVL,
CaloTrkVT = CaloIdVT\_CaloIsoVT\_TrkIdT\_TrkIsoVT
}
\resizebox{\textwidth}{!}{
\begin{tabular}{|llllc|}
\hline
Channel & Purpose & HLT path & L1 seed & prescale \\ \hline
4e and 2e2$\mu$ & \Z T\&P & \texttt{ HLT\_Ele17\_CaloTrkVT\_Ele8\_Mass50 } & \texttt{ L1\_DoubleEG\_13\_7 } & 5 \\
4e and 2e2$\mu$ & \Z T\&P low \pt & \texttt{ HLT\_Ele20\_CaloTrkVT\_SC4\_Mass50\_v1 } & \texttt{ L1\_SingleIsoEG18er } & 10 \\
4$\mu$ and 2e2$\mu$ & \Z T\&P & \texttt{ HLT\_IsoMu24\_eta2p1} & \texttt{ L1\_SingleMu16er } & \\
4$\mu$ and 2e2$\mu$ & \JPsi T\&P & \texttt{ HLT\_Mu7\_Track7\_Jpsi } & & \\
& & \texttt{ HLT\_Mu5\_Track3p5\_Jpsi } & & \\
& & \texttt{ HLT\_Mu5\_Track2\_Jpsi } & & \\
\hline
\end{tabular}}
\label{tab:effitriggers}
\end{table}
\subsubsection[Triggers for the di-tau Higgs boson analysis]{$\PH\to\tau\tau$}
\label{sec:HTauTau}
The triggers used for the Higgs boson $\PH\to\tau\tau$ analysis in the
$\tau_{\mu}\tau_\mathrm{h}$ and $\tau_{\Pe}\tau_\mathrm{h}$ channels require both an
electron or muon and a hadronic tau object.
The electron or muon is required to be isolated; the energy in the isolation cone is corrected for the effects of pileup~\cite{fastjetmanual}.
The tracks for the $\tau_\mathrm{h}$ candidate and the tracks used to compute the isolation are required to come from a
vertex compatible with the electron/muon origin. The efficiencies are
measured using \Z$\to\tau\tau$ events with a muon-plus-\MET or
a single-electron trigger. The events are selected by requiring the electron/muon to pass the tight isolation criteria, and also to have a transverse mass $M_\mathrm{T}<20\GeV$ measured
between the electron/muon and the missing transverse momentum vector. The purities after
this selection are 78\% and 65\% for
$\abs{\eta (\tau_\mathrm{h})} < 1.5$ and
$1.5 < \abs{\eta (\tau_\mathrm{h})} < 2.3$,
respectively. The event samples used to calculate the efficiencies
are mixed with \PW+jets simulated events to produce a
compatible purity. The $\tau$-leg trigger efficiencies are discussed
in detail in Section~\ref{sec:tauHLT}.
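For reference, the transverse mass used in the selection above is built from the lepton \pt and \MET in the usual way (we spell it out here for clarity; this is the standard convention, not a new definition):
\begin{equation*}
M_\mathrm{T} = \sqrt{2\,\pt^{\ell}\,\MET\,\left(1-\cos\Delta\phi\right)},
\end{equation*}
where $\Delta\phi$ is the azimuthal angle between the lepton and the missing transverse momentum vector. The $M_\mathrm{T}<20\GeV$ requirement suppresses \PW+jets events, for which $M_\mathrm{T}$ peaks near the \PW~boson mass.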
\subsubsection[Triggers for ZH to 2 neutrinos + b jets
analysis]{$\Z(\PGn\PAGn)\PH(\bbbar)$ }
The production of the Higgs boson in association with vector bosons is
the most effective way to observe the Higgs boson in the $\PH\to\bbbar$
decay mode~\cite{cms_zhbb}.
In this section, we report on the trigger performance for the 2012
data taking period.
\begin{table}[tbp]
\topcaption{List of L1 seeds and HLT paths used in 2012 data for
the $\Z(\PGn\PAGn)\PH(\bbbar)$ channel. We use PF \MET. All triggers are combined
to maximize acceptance. In all cases, an OR of the L1 $\MET>36\GeV$
and $\MET>40\GeV$ requirements is used as the L1 seed.}
\label{tab:trgs2012ZnnHbb}
\centering
\begin{tabular}{|lc|} \hline
HLT & Run Period \\ \hline
$\MET > 150\GeV$ & 2012 \\
$\MET > 80\GeV$ and 2 central jets with $\pt > 30\GeV$ & early 2012 \\
$\MET > 100\GeV$ and 2 central jets and $\Delta \phi$ requirement & late 2012\\
$\MET > 100\GeV$ and 2 central jets with $\pt > 30\GeV$ and at least one b tag &late 2012 \\
\hline
\end{tabular}
\end{table}
Table~\ref{tab:trgs2012ZnnHbb} summarizes these triggers. The main trigger requires $\MET>150\GeV$ and was active during
the entire year.
\begin{figure}[tbph]
\centering
\includegraphics[width=0.32\textwidth]{figures/2012AB_pfmet150_fit.pdf}
\includegraphics[width=0.32\textwidth]{figures/2012A_dijetmet_fit.pdf}
\includegraphics[width=0.32\textwidth]{figures/2012B_dijetmet_fit.pdf}
\caption{Trigger efficiencies for the $\Z(\PGn\PAGn)\PH(\bbbar)$ analysis,
as a function of offline PF \MET for the pure $\MET>150\GeV$
trigger (left) using late 2012 data, dijet and \MET trigger
(center) using early 2012 data, and dijet, \MET and
$\Delta \phi$ requirement trigger (right) using 2012 late data, as
described in the text. }
\label{fig:ZnunuHbbTrigEff2012}
\end{figure}
This trigger, however, reaches an efficiency of 95\% only at
$\MET{\approx}190\GeV$, as shown in Fig.~\ref{fig:ZnunuHbbTrigEff2012} (left).
To accept events with lower $\MET$, we introduce a trigger that
requires two central PF jets with $\pt>30\GeV$ and
$\MET>80\GeV$, for early 2012 data. This trigger recovers events at
lower $\MET$. The efficiency curve, shown in
Fig.~\ref{fig:ZnunuHbbTrigEff2012} (center) reaches a plateau of
95\% at $\MET{\approx}150\GeV$.
For late 2012 running, jets due to pileup caused an increase in
trigger rates, and a more complicated trigger,
requiring at least two central PF jets with $\pt>60 (25)\GeV$
for the leading (subleading) jet, was introduced. At least
one calorimeter dijet pair with $|\sum_i\vec{p}_{\mathrm{T}_i}|>100\GeV$ is
required. The minimum $\Delta\phi$ between the $\MET$ and the closest
calorimeter jet with $\pt>40\GeV$ is required to be greater than 0.5.
Finally, we require PF $\MET>100\GeV$. The obtained turn-on curve
for this trigger is shown in Fig.~\ref{fig:ZnunuHbbTrigEff2012} (right).
The trigger achieves 90\% efficiency at
$\MET\approx170\GeV$, with roughly 80\% efficiency for $\MET$ in the
range of 130--170\GeV.
To accept events with even lower $\MET$ (down to 100\GeV) we exploit
triggers with a b-tag online requirement (Section~\ref{sec:BTag}): two jets
with $\pt > 20\,(30)\GeV$ and $\MET>80\GeV$ for early (late) data.
These triggers by themselves achieve an efficiency of roughly 50\% at
$\MET= 100\GeV$ and 60\% efficiency for \MET between 100 and $130\GeV$ when
requiring at least one PF jet with a high value of the b-tagging
discriminator (tight CSV $>0.898$) offline.
The trigger strategy for the full 2012 period used the
combination of all the aforementioned triggers to collect events
with $\MET>100\GeV$.
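The need to model the combination, rather than each path in isolation, can be seen from elementary inclusion-exclusion: for two triggers $A$ and $B$,
\begin{equation*}
\epsilon_{A\,\text{OR}\,B} = \epsilon_{A} + \epsilon_{B} - \epsilon_{A\,\text{AND}\,B},
\end{equation*}
so assuming independence ($\epsilon_{A\,\text{AND}\,B}=\epsilon_{A}\epsilon_{B}$) would overestimate the combined efficiency here, since all of the paths select large \MET and are therefore positively correlated.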
Rather than measuring the efficiency curves directly in data and
applying them to the simulation, the efficiencies of the simulated
triggers are parametrized and corrected as a function of $\MET$ and
the CSV b tagging discriminator to match the efficiencies measured
in data (described below). This approach takes into account the
non-negligible correlations among the various trigger paths. It also
characterizes the online b tagging efficiency and its dependence on
jet \pt and $\eta$, as the geometry and trigger algorithm are
simulated in a way that are as close as possible to the actual trigger
environment.
Studies show that data and simulation agree to within 5\%,
except for the b tag trigger, where the agreement is within approximately
10--20\%.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.49\textwidth]{figures/TriggerCurvesEff_Znn_new.pdf}
\caption{Efficiency as function of
$\MET$ for the $\Z(\PGn\PAGn)\PH(\bbbar)$ signal events. An efficiency greater than 99\% is obtained for
$\MET>170\GeV$.
}
\label{fig:ZnunuHbbTrigEff2012OR}
\end{figure}
In Fig.~\ref{fig:ZnunuHbbTrigEff2012OR}, we show the total trigger
efficiency as a function of $\MET$ for signal
$\Z(\PGn\PAGn)\PH(\bbbar)$ events. The cumulative efficiency is
99\% for $\MET> 170\GeV$, 98\% for events with $130<\MET<170\GeV$, and
88\% for events with $100<\MET<130\GeV$.
The total systematic uncertainty in the trigger efficiency is of the
order of a few percent in the high-\MET search region ($\MET >
170\GeV$), not more than 7\% in the intermediate region ($130 < \MET <
170\GeV$), and 10\% in the
low-\MET region ($100 < \MET < 130\GeV$).
\subsection{Top quark triggers}
\label{sec:toppag}
Measurements of the properties of the top quark are among the most
important standard model measurements in CMS. The LHC is a top factory, and the large number of top quark pairs created allows detailed studies
of the top quark's properties. One of the most fundamental measurements is the top
quark pair production cross section. The most accurate measurement of this cross section can
be made in the so-called `lepton + jets' decay mode, where one of the
W bosons from the top quark decays to a lepton and a neutrino, and
the other W decays hadronically, leading to a final state with a
well-isolated lepton, large missing transverse energy, and four hadronic jets (two
of which are b jets)~\cite{cms_ljets2010,Chatrchyan:2013faa}. In
Run~1, \ttbar production studies used several trigger paths for the
semileptonic top quark decay channels, to ensure that \ttbar signal events
were recorded as efficiently as possible. To maximize the acceptance given the transverse energy (momentum) thresholds applied to
leptons, measurements used trigger paths requiring one online
reconstructed lepton ($\Pe$ or $\mu$) as well as at least three online
reconstructed jets.
\begin{table}[tbp]
\topcaption{Unscaled cross-triggers used for the \ttbar (lepton plus jets)
cross section measurement in 2012. All leptons use tight or very tight
identification, and lepton and calorimeter isolation
requirements. All jets are PF jets and restricted to the
central region. At L1, single electrons or muons are required with
the denoted thresholds and the L1 muons are required to be central
($\abs{\eta}<2.1$). When two thresholds are listed at L1, they include
a lower (possibly prescaled) threshold and a higher unscaled threshold.
}
\centering
\begin{tabular}{|ccccc|cc|}
\hline
\multicolumn{5}{|c|}{HLT} & \multicolumn{2}{c|}{L1} \\ \hline
$\Pe/\mu$ & Threshold & $n_\text{jet}$ & Jet & Jet threshold & L1 Seed & Threshold\\
& (\GeVns{}) & & corrections & & & (\GeVns{}) \\
\hline
\multirow{4}{*}{$\Pe $} & 25 & 3 & & 30 & EG & 20, 22 \\
& 25 & 3 & pileup subtracted & 30 & EG & 20, 22 \\
& 25 & 3 & pileup subtracted & $30,30,20$ & EG & 20, 22 \\
& 25 & 3 & pileup subtracted & $45,35,25$ & EG & 20, 22 \\
\hline
\multirow{6}{*}{$\mu$} & 20 & 3 & & 30 & MU & 14, 16 \\
& 20 & 3 & pileup subtracted & 30 & MU & 16 \\
& 17 & 3 & pileup subtracted & 30 & MU & 14 \\
& 17 & 3 & pileup subtracted & $30,30,20$ & MU & 14 \\
& 17 & 3 & pileup subtracted & $45,35,25$ & MU & 14 \\
\hline
\end{tabular}
\label{tab:TopLepJetTrigger}
\end{table}
Table~\ref{tab:TopLepJetTrigger} summarizes the main paths used for
the triggers deployed to accommodate the high instantaneous luminosity
and pileup of the 2012 run. All leptons triggers had tight or very
tight lepton identification and calo\-ri\-meter isolation
requirements, comparable to those used offline. Jets in PF jet
triggers were restricted to the central region. At L1, single
electrons or muons are required with the denoted thresholds. The L1
muons are central ($\abs{\eta}<2.1$). Charged-hadron
subtraction~\cite{CMS-PAS-JME-14-001} (labeled `pileup subtracted' in
the Table) was implemented for pileup mitigation. Additionally, the
introduction of jet energy calibrations online in the second half of
2012 resulted in higher \ET thresholds in the three-jet paths;
however, the data from that period were not used in the cross section
measurements due to systematic uncertainties associated with the large
pileup.
Simulated events are used to estimate the top quark acceptance, and are corrected
for the trigger efficiency measured in data. To estimate the trigger
efficiency, simulated Drell--Yan and \ttbar samples were used to
compare with data collected with single lepton triggers. The overall
efficiency for the lepton plus jets paths is parametrized as a
product of two independent efficiencies for the leptonic and hadronic
legs of the trigger, $\epsilon_{\rm lep}~\times~\epsilon_{\rm had}$. A
cleaning requirement based on the \DR\xspace distance between the leptons and jets
motivates this approach.
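Schematically (our notation, given here to make the procedure explicit), the per-event weight applied to simulation is then
\begin{equation*}
w_\text{trig} = \frac{\epsilon_{\rm lep}^\text{data}}{\epsilon_{\rm lep}^\text{MC}} \times \frac{\epsilon_{\rm had}^\text{data}}{\epsilon_{\rm had}^\text{MC}},
\end{equation*}
with the leptonic factor parametrized in lepton \pt and $\eta$, and the hadronic factor in the jet kinematics.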
The leptonic leg efficiency is measured using a tag-and-probe method
with $\Z/\Pgg^*$ events, as described in Sections~\ref{sec:egammaHLT}
(\Pe) and \ref{sec:muHLT} ($\mu$).
While the lepton trigger was not changed during the 2012
data-taking period, the jet trigger changed as shown in
Table~\ref{tab:TopLepJetTrigger}. Similar to the measurement for the
lepton leg, the efficiency of the jet leg of the associated cross-trigger is measured in an unbiased data sample. The
reference sample is required to pass a single-lepton trigger, to ensure
a data set that is independent of the hadronic leg and that simultaneously
fulfills the lepton leg of the cross-trigger.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.45\textwidth]{figures/topEleJetEffvsPt.pdf}
\includegraphics[width=0.45\textwidth]{figures/topEleJetEffvsNPV.pdf}
\caption{Top quark triggers: Efficiency of the hadronic leg for the
electron plus jets paths in 2012 as a function of the \pt of the 4th jet
(left) and of the number of reconstructed vertices
(right).}
\label{figure:topEff}
\end{figure}
As an example, Fig.~\ref{figure:topEff} shows the efficiency turn-on
curve of the hadronic leg (transverse momentum of the 4th jet) for the
electron plus jets paths in 2012, and its dependence on
the number of reconstructed vertices, for PF jets
both without and with charged-hadron
subtraction. The transverse
momentum requirements on the offline jets were devised to ensure a
plateau behavior of the scale factors, meaning no variation of the
scale factor with respect to the MC sample or jet energy
calibrations. From the variation of the scale factors it was concluded
that a systematic uncertainty of 2\% (1.5\%) in electron (muon)
scale factors covered the variations around their value of 0.995
(0.987).
\subsection{Triggers for supersymmetry searches}
\label{sec:susypag}
Supersymmetry (SUSY) is one of the most appealing extensions to the
standard model, as it solves the mass hierarchy problem, offers a
path towards grand unification, and can provide candidate dark matter
particles. During the years 2010--2012, many SUSY searches
were performed with CMS data. Exclusion limits were set in the context
of the mSUGRA model of SUSY breaking and also on the masses of the
particles involved in specific cascade decays (simplified
models~\cite{sms}).
For the allowed parameter space, SUSY signatures~\cite{glennis} are characterized by
the presence and decay of heavy particles. If R-parity is conserved,
stable, invisible particles are expected. Most of the final states
contain significant hadronic activity and \MET. At
CMS, SUSY searches were divided into leptonic, hadronic, and photonic
categories, depending on the event content.
In addition, some supersymmetric models predict the existence of heavy
stable charged particles, \eg, gluinos, top squarks, or $\tau$ sleptons.
Their mass is expected to be of the order of a few hundred\GeV,
therefore their velocity would be significantly smaller than the speed
of light. The signature of heavy stable charged particles would look
like a non-relativistic ionizing particle, with hits in the chambers being
delayed by about one bunch crossing, either in all the layers or
in the outermost one(s), with respect to an ordinary ``prompt''
minimum ionizing particle.
In this section we discuss the performance of the CMS triggers used to
collect events for supersymmetry searches. Most leptonic searches in CMS
were performed using the same triggers as the Higgs boson leptonic searches
and therefore are not documented here. For hadronic and photonic
searches, we have selected three representative triggers: the $\alpT$
trigger, the ``Razor'' trigger, and the photon trigger. The $\alpT$ and
photon analyses were performed using a data sample corresponding to
an integrated luminosity of 4~\fbinv, while the Razor
analysis used an integrated luminosity of 20~\fbinv, all collected at CMS during
2012 at a center-of-mass energy of $8\TeV$.
\subsubsection{Triggers for all-hadronic events with
\texorpdfstring{$\alpT$}{alphaT}}
We present a typical example of a purely hadronic search, where events
with leptons are vetoed and events with a high jet multiplicity, large
\MET, and large \HT are selected~\cite{cmsalphaT}. Multijet events are
the most important background in this region of the phase space. To
suppress these events, the analysis uses a kinematical variable called
$\alpT$. For events with exactly two jets, $\alpT$ is defined as the
transverse energy of the subleading jet divided by the transverse mass
of the dijet system. For events with more than two jets, two pseudo-jets
are formed by combining the jets and selecting the combination
that minimizes the transverse-energy difference between the two pseudo-jets. The value of
$\alpT$ is equal to 0.5 in balanced multijet events and less
than 0.5 in multijet events with jet energy mismeasurement. For SUSY
signal events with genuine \MET, $\alpT$ tends to values
$> 0.5$, thus providing a good discrimination between signal and
background.
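For concreteness, this verbal definition corresponds, in the dijet case, to
\begin{equation*}
\alpT = \frac{E_\mathrm{T}^{j_2}}{M_\mathrm{T}},
\qquad
M_\mathrm{T} = \sqrt{\bigg(\sum_{i=1,2} E_\mathrm{T}^{j_i}\bigg)^{2} - \bigg(\sum_{i=1,2} p_x^{j_i}\bigg)^{2} - \bigg(\sum_{i=1,2} p_y^{j_i}\bigg)^{2}},
\end{equation*}
where $j_2$ denotes the subleading jet (for multijet events, the sums run over the two pseudo-jets). For a perfectly balanced back-to-back dijet event, $E_\mathrm{T}^{j_1}=E_\mathrm{T}^{j_2}$ and $M_\mathrm{T}=2E_\mathrm{T}^{j_2}$, so $\alpT=0.5$; mismeasurement of a jet lowers $\alpT$, while genuine \MET raises it.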
To estimate the remaining significant backgrounds (\PW+jets,
top quark pair, single top quark, and $\Z \to \PGn\PAGn$), data control
regions are used.
A cross-trigger based on the quantities \HT and $\alpT$
is used to record the candidate event sample. A
prescaled \HT trigger, labeled henceforth as \HT, is used with
various thresholds to record events for the control region. The \HT
thresholds of the \HT and $\HT$--$\alpT$ cross-triggers are chosen to
match where possible, and are 250, 300, 350, 400, and
450\GeV. The $\alpT$ thresholds of the $\HT$--$\alpT$ trigger are tuned
according to the threshold on the \HT leg in order to suppress
QCD multijet events (whilst simultaneously satisfying other criteria,
such as sensitivity to trigger rates).
To ensure that the \HT leg of the $\HT$--$\alpT$ cross-trigger and the prescaled \HT trigger
are efficient for the final event selection, the lower bounds of the offline
\HT bins are offset by 25\GeV with respect to the online thresholds. Figure~\ref{fig:alphaT} shows
the turn-on curves of the \HT and $\alpT$ legs of the trigger with respect to the offline selection.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.48\textwidth]{figures/alphat-275-250-eff.pdf}
\includegraphics[width=0.48\textwidth]{figures/alphat-325-300-eff.pdf}
\includegraphics[width=0.48\textwidth]{figures/alphat-375-350-eff.pdf}
\includegraphics[width=0.48\textwidth]{figures/alphat-475-400-eff.pdf}
\caption{Efficiency turn-on curves for the $\alpha_T$ triggers
used to collect events for four different \HT regions:
$275<\HT<325\GeV$ (upper left),
$325<\HT<375\GeV$ (upper right), $375<\HT<475\GeV$ (lower left), and $\HT>475\GeV$ (lower right).}
\label{fig:alphaT}
\end{figure}
Efficiencies for the $\HT$--$\alpT$ triggers were calculated using an
orthogonal data set based on single muons, obtained by requiring a match to an isolated
single-muon trigger. Exactly one isolated muon that is well
separated from all jets is required to ``tag'' the event. This muon is
not considered in the calculations of \HT, \MET-like quantities,
and $\alpT$,
so that the ignored muon emulates genuine \MET. The
assumption for the \HT triggers is that their efficiency is not
sensitive to whether there is genuine \MET in the event or not. The
results (efficiencies with respect to offline selection) are shown in
Table~\ref{tab:alphaT}.
\begin{table}[tbp]
\topcaption{Measured efficiencies of the \HT and \HT-\alpT triggers, as a
function of \alpT and \HT, as measured with respect to the offline selection used in the $\alpha_\mathrm{T}$ analysis.}
\label{tab:alphaT}
\centering
\begin{tabular}{|ccc|} \hline
$\alpT$ lower threshold & \HT range (\GeVns{}) & Efficiency (\%) \\
\hline
$0.55 $ & 275--325 & $89.6^{+0.5}_{-0.6}$ \\
$0.55 $ & 325--375 & $98.5^{+0.3}_{-0.5}$ \\
$0.55 $ & 375--475 & $99.0^{+0.5}_{-0.6}$ \\
$0.55 $ & 475--$\infty$ & $99.4^{+0.5}_{-1.2}$ \\
\hline
\end{tabular}
\end{table}
\subsubsection{Triggers for inclusive search with Razor variables}
The Razor variables $R^2$ and $M_R$ were introduced in CMS to
complement other variables that can be used to probe SUSY production at the
LHC~\cite{RazorPRD,RazorPRL}.
The analyses are designed to kinematically discriminate the pair production of heavy particles from
SM backgrounds, without making strong assumptions about the
$\MET$
spectrum or details of
the decay chains of these particles. The baseline selection requires two or more reconstructed
objects, which can be calorimetric jets, isolated electrons or isolated muons.
The Razor kinematic construction
exploits the transverse momentum imbalance of SUSY events more
efficiently than the traditional $\MET$-based variables, retaining events
with $\MET$ as low as ${\approx}50\GeV$ while reducing the background from
QCD multijet events to a negligible level. Details of the definition
of $R^2$ and $M_R$ can be found in the above references.
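For orientation (the precise conventions are those of Refs.~\cite{RazorPRD,RazorPRL}), for two mega-jets $j_1$ and $j_2$ the variables take the form
\begin{equation*}
M_R = \sqrt{\left(E_{j_1}+E_{j_2}\right)^{2} - \left(p_{z}^{j_1}+p_{z}^{j_2}\right)^{2}},
\qquad
M_\mathrm{T}^{R} = \sqrt{\frac{\MET\left(\pt^{j_1}+\pt^{j_2}\right) - \vec{E}_\mathrm{T}^\mathrm{miss}\cdot\left(\vec{p}_\mathrm{T}^{\;j_1}+\vec{p}_\mathrm{T}^{\;j_2}\right)}{2}},
\end{equation*}
with $R = M_\mathrm{T}^{R}/M_R$: $M_R$ estimates the mass scale of the pair-produced particles, while $R^2$ acts as a \MET-like discriminator against QCD multijet events.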
The use of $\MET$ and $\HT$ triggers alone would not be practical
for a Razor-based search, as it would result in a nontrivial dependence of the
trigger efficiency on $R$ and $M_R$. Instead, a set of dedicated triggers was
developed, both for the fully hadronic and the leptonic final
states considered in the analysis.
The Razor triggers are based on events with two central jets with
$\pt>64\GeV$, selected at L1. The calorimetric towers in the
event are clustered using the anti-\kt algorithm with a distance
parameter of $0.5$. The two highest $\pt$ jets are required to have
$\pt>65\GeV$, which is fully efficient for PF
jets with $\pt>80\GeV$. If an event has more than seven jets with
$\pt>40\GeV$, it is accepted by the trigger. Otherwise, we consider
all the possible ways to divide the reconstructed jets into two
groups. We then form a \emph{mega-jet} summing the four-momenta of the
jets in one group. The mega-jet pair with the smallest sum of
invariant masses is used to compute the values of $R$ and $M_R$. A
selection on $R$ and $M_R$ is applied to define the inclusive
Razor trigger. A looser version of this selection is used for the
leptonic Razor triggers, in association with one isolated muon or
electron with $\pt>12\GeV$. Electrons are selected with a loose calorimeter
identification requirement and a very loose isolation requirement.
The kinematic selection includes requirements on both $R$ and $M_R$: $R^2>0.09$ and
$M_R>150\GeV$ (inclusive trigger); $R^2>0.04$ and $M_R>200\GeV$
(leptonic triggers).
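The mega-jet construction described above amounts to a brute-force scan over all ways of splitting the jets into two groups. The following minimal sketch (plain Python with jets as $(E,p_x,p_y,p_z)$ tuples; illustrative only, not the actual HLT code) shows the logic:
\begin{verbatim}
import math

def inv_mass(p):
    # Invariant mass of a four-vector (E, px, py, pz).
    e, px, py, pz = p
    return math.sqrt(max(e*e - px*px - py*py - pz*pz, 0.0))

def add4(p, q):
    return tuple(a + b for a, b in zip(p, q))

def mega_jets(jets):
    # Scan all partitions of the jets into two non-empty groups and
    # return the pair of summed four-vectors that minimizes the sum
    # of the two invariant masses. Jet 0 is fixed in group A so that
    # mirrored partitions are not counted twice.
    n = len(jets)
    best = None
    for mask in range(2**(n - 1) - 1):  # all-ones would leave B empty
        a, b = jets[0], (0.0, 0.0, 0.0, 0.0)
        for i in range(1, n):
            if (mask >> (i - 1)) & 1:
                a = add4(a, jets[i])
            else:
                b = add4(b, jets[i])
        score = inv_mass(a) + inv_mass(b)
        if best is None or score < best[0]:
            best = (score, a, b)
    return best[1], best[2]
\end{verbatim}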
A ``parked'' version (as described in Section~\ref{sec:HLTDAQ}) of the inclusive Razor trigger was
also implemented, requiring $R^2>0.04$.
Events selected by the single-electron (single-muon) triggers are used
to measure the efficiency of the inclusive and electron (muon) Razor
paths. The baseline sample for the efficiency measurement is defined
requiring two jets of $\pt>80\GeV$, passing the reference trigger, and
not rejected by the event cleanup requirements (designed to remove the
noisy calorimeter events from the offline analysis). The numerator of
the efficiency is defined from this sample, with the requirement that the
relevant Razor trigger condition is satisfied. Figure~\ref{fig:razorturnon} shows
the efficiency versus $M_R$ and $R^2$ for the inclusive Razor trigger,
also requiring $M_R>400\GeV$ ($R^2>0.25$) in order for the $R^2$ ($M_R$)
plot to match the selection applied in the analysis. The efficiency is
found to be flat within the statistical precision, which is limited in the
tail of $R^2$ after the $M_R$ requirement is applied. The analysis uses $(95 \pm 5)\%$ as an estimate of the efficiency.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.4\textwidth]{figures/effTurnON_PFJetsHad_RSQ0_25}
\includegraphics[width=0.4\textwidth]{figures/effTurnON_PFJetsHad_MR400}
\caption{\label{fig:razorturnon} Turn-on curve for $M_R$ (left) and $R^2$
(right) for the inclusive Razor trigger, after requiring $R^2>0.25$
(left) and $M_R>400\GeV$ (right). Events passing the
single-electron triggers are selected to define the denominator of
the efficiency, together with the dijet requirement. The
requirement of satisfying the Razor trigger defines the
numerator.}
\end{figure}
\subsubsection{Triggers for photons and missing energy}
We present the triggers used in a search for supersymmetry in events with at
least one isolated photon, jets, and \MET. Dominant standard model
background processes are direct photon production and QCD
multijet events where
a jet is misreconstructed as a photon. Multijet events have small
intrinsic \MET, but the finite resolution of the jet energy measurement
together with the large cross section leads to a significant
contribution in the tail of the \MET. Other backgrounds arise from
electroweak electron production, \eg, $\PW\to \Pe\PGn$, where
an electron is misreconstructed as a photon. Additional contributions
are expected from initial- or final-state photon radiation in various
QCD and electroweak processes. Single-photon trigger
thresholds are too high for the efficient selection of
many SUSY benchmark points, so that for this analysis a cross-trigger
based on a single photon and \HT is
used. The main backgrounds are modeled using data control samples.
To trigger on the signal as well as to collect the control samples used for
estimation of the QCD multijet and electroweak backgrounds, a cross-trigger is used, requiring at least one photon with $\pt > 70\GeV$ and
$\HT>400\GeV$.
The control region is defined by events containing at least one
isolated photon with $\pt > 80\GeV$ and $\abs{\eta}<1.4$, two or more jets
with $\pt > 30\GeV$ and $\abs{\eta}<2.6$, and $\HT>450\GeV$. The signal
region includes an additional $\MET>100\GeV$ requirement.
The trigger efficiency was measured in data for the photon and \HT
legs, using a single-photon baseline trigger, which requires a single
photon with $\pt > 50\GeV$ and is expected to be fully efficient in
the kinematic region of interest. As the statistical
power of the data sample is limited by the large prescale of the
baseline trigger (prescale of 900), a cross-check is performed using a
less prescaled single-photon trigger with $\pt > 75\GeV$ (prescale of
150). In this case, it is not possible to observe the \pt turn-on of the
photon leg efficiency, as the baseline selection is more restrictive than the online
selection used by the analysis; however, this is a valid check
of the \HT leg.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.48\textwidth]{figures/triggerHT50}
\includegraphics[width=0.48\textwidth]{figures/triggerPhoton50}
\includegraphics[width=0.48\textwidth]{figures/triggerHT75}
\includegraphics[width=0.48\textwidth]{figures/triggerPhoton75}
\caption{Supersymmetry search in the $\gamma$ + \MET channel: trigger
efficiency of the \HT leg (left column), and the photon leg (right column), using as
a reference the single-photon trigger with $\pt > 50\GeV$ (top row) and
$\pt > 75\GeV$ (bottom row). The red lines indicate offline
requirements.}\label{fig:PhotonHT}
\end{figure}
Figure~\ref{fig:PhotonHT} shows the turn-on curves for the
\HT and photon \pt legs, for both single-photon reference triggers. Only for the \HT
leg measured with the single-photon trigger with the $\pt > 75\GeV$
requirement, a higher photon threshold of $\pt > 85\GeV$ is
used to avoid regions with inefficiencies due to the cross-trigger. After applying the offline analysis requirements on the
photon momentum of $\pt> 80\GeV$ and on $\HT > 450\GeV$, indicated
in the figure, the trigger
is fully efficient within an uncertainty of 4\%. The uncertainty is
due to the low statistical power of the data set.
\subsubsection{Triggers for heavy stable charged particles}
\label{sec:HSCP}
\begin{figure}[tbph]
\centering
\includegraphics[width=\textwidth]{figures/RPC-HSCP-5.pdf}
\caption{The principle of operation of the RPC HSCP trigger
for an ordinary muon (case 1), and a slow minimum ionizing
particle, which produces hits across two consecutive bunch crossings
(cases 2, 3) or in the next BX (case 4). Hits that would be
seen in the standard PAC configuration are effectively those
shown in pale orange; additionally observed hits in the HSCP
configuration are those shown in dark orange. In case 1 the
output of both configurations is identical, in case 2 the
HSCP configuration uses the full detector information, in
case 3 only the HSCP configuration can issue a trigger, and in
case 4 the HSCP configuration brings back the event to the
correct BX.}
\label{fig:rpc-hscp}
\end{figure}
The CMS experiment has a specific RPC muon trigger configuration to increase the
efficiency for triggering on heavy stable charged particles (HSCP)
using the excellent time resolution of detected muon
candidates. Double-gap RPCs operating in avalanche mode have an
intrinsic time resolution of around 2\unit{ns}. This, folded with the
uncertainty coming from the time propagation along the strip, which
contributes about 2\unit{ns}, and the additional jitter that comes from
small channel-by-channel differences in the electronics and cable
lengths, again of the order of 1--2\unit{ns}, gives an overall time resolution
of about 3\unit{ns}, much smaller than the 25\unit{ns} timing window of the RPC
data acquisition system.
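To put these numbers in perspective, a back-of-the-envelope estimate (assuming a flight path of roughly $L = 7.5\unit{m}$ from the interaction point to the outer RPC layers) gives, for a particle of velocity $\beta$, an arrival delay with respect to a prompt muon of
\begin{equation*}
\Delta t = \frac{L}{c}\left(\frac{1}{\beta}-1\right) \approx 25\unit{ns}\times\left(\frac{1}{\beta}-1\right),
\end{equation*}
so a particle with $\beta = 0.5$ arrives about $25\unit{ns}$, \ie, one full bunch crossing, late.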
If hits are not in coincidence within one BX, the RPC PAC algorithm is
likely to fail because the minimum plane requirements would not be
met, or if the algorithm does succeed, a lower quality value and
possibly a different \pt will be assigned to the trigger particle. In addition, if the muon trigger is one BX
late with respect to the trigger clock cycle, the pixel hits will not
be recorded and the muon chamber calibration constant will be
suboptimal, resulting in a poor offline reconstruction of late
``muon-like'' candidates. The functionality to extend the RPC hits to
two (or more) consecutive BXs, plus the excellent intrinsic timing
capabilities of the RPCs, allow the construction of a dedicated physics
trigger for such ``late muons''. In the PAC logic the RPC hits are
extended in time to 2 BXs, hence the plane requirements are met for at
least one BX and triggers can be issued. On the GMT input, the RPC
candidates are advanced by one BX with respect to DT and CSC candidates,
so hits of a ``late muon'' generate a trigger in the proper BX.
Ordinary ``prompt'' muons will produce two trigger candidates: one in
the proper BX and one in the previous BX. The spurious early candidates can, however, be
suppressed at the GT level by a veto operated on the basis of BPTX coincidences
(Section~\ref{sec:bptx}). Figure~\ref{fig:rpc-hscp} shows the
principle of
operation of the RPC-based HSCP trigger. Studies with simulated data
indicate that the HSCP trigger configuration significantly increases
the CMS capability to detect a slow HSCP; for example, for an 800\GeV
long-lived gluino, the overall improvement in trigger efficiency ranges from 0.24
to 0.32. The gain is the largest within the range
$200 < \pt < 600\GeV$ and for gluino velocities $0.4 < \beta <
0.7$.
The HSCP trigger configuration was the main RPC operation mode during
data-taking in most of the 2011 and the entire 2012 run.
\subsection{Exotic new physics scenarios}
Models of physics beyond the standard model that are not
supersymmetric are called ``exotic'' in CMS. In this section we describe
three exotic physics scenarios and the triggers used in
searches for these signals.
\subsubsection{Triggers for dijet resonance searches}
During the 7\TeV run, the search for heavy resonances decaying to jet
pairs was performed on events triggered by the single-jet
trigger. With increasing peak luminosity, the tighter threshold
applied on the jet $\pt$ became a major problem for the analysis. At
the same time, the analysis was improved by introducing the so-called
\emph{wide jets} to take into account the presence of additional jets
from final-state radiation.
Wide jets are formed around a given set of \emph{seed} jets, taking as
input the other jets in the event. The four-momentum of each seed
jet is summed with the four-momenta of other jets within $\DR <1.1$ of the seed
jet and with $\pt>40$\GeV. A jet close to more than one seed jet is
associated with the closest seed.
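The recombination step can be summarized by the following sketch (illustrative only, with each jet represented as a dictionary carrying its $\eta$, $\phi$, \pt, and four-momentum components; not the actual trigger code):
\begin{verbatim}
import math

def delta_r(j1, j2):
    # Angular distance, with the phi difference wrapped into [0, pi].
    deta = j1["eta"] - j2["eta"]
    dphi = abs(j1["phi"] - j2["phi"])
    if dphi > math.pi:
        dphi = 2.0*math.pi - dphi
    return math.hypot(deta, dphi)

def build_wide_jets(seeds, others, dr_max=1.1, pt_min=40.0):
    # Start each wide jet from a seed, then add every other jet with
    # pt > pt_min to its closest seed, provided it lies within dr_max.
    wide = [dict(s) for s in seeds]
    for jet in others:
        if jet["pt"] < pt_min:
            continue
        i = min(range(len(seeds)), key=lambda k: delta_r(jet, seeds[k]))
        if delta_r(jet, seeds[i]) < dr_max:
            for c in ("e", "px", "py", "pz"):
                wide[i][c] += jet[c]
    return wide
\end{verbatim}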
With this new approach, a trigger based on $\HT=\sum_\text{jet} \pt$
is more efficient. A further improvement in the analysis was
obtained by implementing a dedicated topology-based trigger, applying
a looser version of the analysis reconstruction and selection
requirements at the HLT:
\begin{itemize}
\item Wide jets were built by looking for jets with $\pt>40$\GeV in a
cone of size $\DR=1.1$ around the two highest $\pt$ jets;
\item Multijet events were removed by requiring that the two wide
jets fulfill $\Delta \eta<1.5$.
\end{itemize}
During the 8\TeV run, events were kept if the wide jets built around
the two highest $\pt$ jets had an invariant mass larger than
$750$\GeV (\emph{Fat750}). While this trigger alone would have
performed similarly to the \HT trigger already in use, the
combination of the two triggers in a logical OR allowed us to recover
the inefficiency for mass values close to the applied threshold,
making the overall efficiency turn-on curve sharper.
The loosest \HT-based L1 path (L1\_HTT150) was used as a seed for all
triggers.
The trigger efficiency was measured in data, taking the events
triggered by the prescaled $\HT>550\GeV$ trigger as a baseline. These
events were filtered by applying the analysis selection (particularly,
the $\Delta \eta$ requirement on the two wide jets) to define the
denominator of the efficiency curve. The subset of these events also
satisfying the analysis requirements defines the numerator of the efficiency.
\begin{figure}[tb]
\centering
\includegraphics[width=0.7\textwidth]{figures/dijetbump_TriggerEff}
\caption{ Dijet resonance search triggers.
The HLT efficiency of $\HT>650\GeV$, $\HT>750\GeV$, and
\emph{Fat750} triggers individually, and their
logical OR as a function of the offline dijet mass. The efficiency
is measured with the data sample collected with a trigger
path that requires $\HT>550\GeV$.
The horizontal dashed line marks the trigger efficiency ${\geq}
99\%$. }
\label{fig:dijet}
\end{figure}
Figure~\ref{fig:dijet} shows the trigger efficiency as a function of the offline
dijet mass for individual triggers and for their logical OR.
While the combination of the \HT and Fat750 triggers already
represents a sizable improvement with respect to the individual
triggers, a further increase in the efficiency was obtained with the
introduction of the PF-based \HT trigger. The combination of the three
triggers made the analysis $\geq 99\%$ efficient for invariant masses
above 890\GeV. As a result of the trigger improvements, the threshold
for the dijet resonance search for the 8\TeV run was 100\GeV lower
than would have been possible if the 7\TeV strategy had been used.
\subsubsection{Triggers for black hole search}
\label{sec:blackholehpa}
If the scale of quantum gravity is as low as a few \TeVns, it is possible
for the LHC to produce microscopic black holes or their quantum
precursors (``string balls'') at a significant
rate~\cite{Dimopoulos:2001hw,Giddings:2001bu,Dimopoulos:2001qe}. Black holes decay
democratically, \ie, with identical couplings to all standard
model degrees of freedom.
Roughly 75\% of the black hole decay products are jets. The
average number of particles in the final state varies from roughly two
(in case of quantum black holes) to half a dozen (semiclassical black
holes and string balls). The microscopic black holes are massive
objects, thus at least a few hundred\GeV of visible energy in the
detector is expected.
Since \textit{a priori} we do not know the precise final state, we
trigger on the total jet activity in an event. The common notation of
such triggers is \texttt{HLT\_HT}$x$, \texttt{HLT\_PFHT}$x$, and
\texttt{HLT\_PFNoPUHT}$x$, where $x$ denotes the \HT threshold in \GeV.
All energies of HLT jets are fully corrected, and in the case of the
\texttt{HLT\_PFNoPUHT}$x$ paths, pileup corrections are also applied
to the HLT PF jets. The pileup subtraction is performed by
first removing all of the jet's charged hadrons not associated to the
primary vertex, then calculating an energy offset based on the jet
energy density distribution to remove the remaining pileup
contribution. More details of the jet reconstruction at L1 and
HLT are given in Section~\ref{sec:JetMET}.
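Schematically, the second step follows the usual jet-area subtraction~\cite{fastjetmanual}: after charged hadrons from pileup vertices have been removed, the remaining (mostly neutral) pileup contribution is estimated from the median transverse-energy density $\rho$ of the event and the catchment area $A$ of the jet, and subtracted as
\begin{equation*}
\pt^\text{corr} = \pt - \rho A .
\end{equation*}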
After the jets are selected at both the L1 and the HLT, an \HT
variable is calculated.
In Ref.~\cite{bh_2012_legacy}, the jet \ET threshold at L1 is 10\GeV and the \HT thresholds are 150,
175, and 200\GeV (Section~\ref{sec:JetHLT}). These L1 triggers are
used as seeds to the HLT algorithms. At the HLT, the jet \ET threshold is 40\GeV and the \HT thresholds have a range of
650--750\GeV. The unprescaled HLT paths and their L1 triggers are
summarized in Table~\ref{tab:BHTrigger}. The L1 triggers for some of
the ``total jet activity'' paths were updated in the middle of 2012 to
account for higher instantaneous luminosity of the LHC. For
simplicity, we refer to the data taking periods before (after) that
change as ``early'' (``late''). In the previous iterations of the
analysis~\cite{bh_full2010_plb,bh_full2011_jhep}, the \HT thresholds
at the HLT were as low as 100--350\GeV.
\begin{table}[tbp]
\centering
\topcaption{Black hole trigger: Unprescaled total jet activity HLT paths
and their respective L1 seeds. The L1 seeds for a number of the HLT paths were
revised during the data taking to account for higher instantaneous
luminosity.}
\begin{tabular}{ | l l l | }
\hline
Path name & \multicolumn{1}{c}{L1 seed} & \multicolumn{1}{c}{Data-taking period}\\
\hline
\texttt{HLT\_HT750} & \texttt{L1\_HTT150 OR L1\_HTT175} & Early \\
\texttt{HLT\_HT750} & \texttt{L1\_HTT150 OR L1\_HTT175 OR L1\_HTT200} & Late \\
\hline
\texttt{HLT\_PFHT650} & \texttt{L1\_HTT150 OR L1\_HTT175} & Early \\
\texttt{HLT\_PFHT650} & \texttt{L1\_HTT150 OR L1\_HTT175 OR L1\_HTT200} & Late \\
\texttt{HLT\_PFHT700} & \texttt{L1\_HTT150 OR L1\_HTT175} & Early \\
\texttt{HLT\_PFHT700} & \texttt{L1\_HTT150 OR L1\_HTT175 OR L1\_HTT200} & Late \\
\texttt{HLT\_PFHT750} & \texttt{L1\_HTT150 OR L1\_HTT175} & Early \\
\texttt{HLT\_PFHT750} & \texttt{L1\_HTT150 OR L1\_HTT175 OR L1\_HTT200} & Late \\
\hline
\texttt{HLT\_PFNoPUHT650} & \texttt{L1\_HTT150 OR L1\_HTT175} & \\
\texttt{HLT\_PFNoPUHT700} & \texttt{L1\_HTT150 OR L1\_HTT175} & \\
\texttt{HLT\_PFNoPUHT750} & \texttt{L1\_HTT150 OR L1\_HTT175} & \\
\hline
\end{tabular}
\label{tab:BHTrigger}
\end{table}
As the majority of the final-state objects are jets, we use
jet-enriched collision data to search for black holes. These data are
recorded using a logical OR of the following trigger groups, whose
triggers only differ by a threshold: i) total jet activity
triggers, ii) paths that select high-mass dijet events, iii) triggers
that require the presence of significant \MET and a jet with \pt above a few
hundred\GeV. The main offline quantities that describe the black hole
are the multiplicity of the final-state objects, $N$, and a scalar sum of
transverse momenta of all objects (jets, leptons, and photons) and the
\MET reconstructed in the event, $\ST = \sum \pt^{\text{jets}} + \sum
\pt^{\text{leptons}} + \sum \pt^{\text{photons}} + \MET$. We apply a
50\GeV requirement on the \pt of all final-state objects and on \MET, and select events
with a multiplicity greater than one. Note that \MET is not counted
towards the multiplicity. The relative efficiency of unprescaled HLT paths
that are used in the analysis as a function of $\ST$ is shown in
Fig.~\ref{fig:ST}~(left). The efficiencies are calculated using the
same jet-enriched data set with respect to a prescaled total jet
activity path with an \HT threshold of 450\GeV. The paths with the
\HT threshold of 650 (750)\GeV are fully efficient starting from $\ST
= 1000~(1200)$\GeV, respectively, which is significantly below the
low-\ST boundary of 1500\GeV that is used in the search. To
check the pileup dependence of the trigger turn-on, we plot the
efficiency of the selection path \texttt{HLT\_PFNoPUHT650} as a function of
\ST in three bins of the number of reconstructed primary vertices, $N_\mathrm{PV}$: i)
$N_\mathrm{PV} \le 10$, ii) $10 < N_\mathrm{PV} < 25$, and iii) $N_\mathrm{PV}
\ge 25$ (Fig.~\ref{fig:ST}~(right)). Although the trigger turn-on curves
become less sharp when $N_\mathrm{PV}$ increases, this does not affect
the point when the trigger becomes fully efficient. Thus, the pileup
dependence of total jet activity triggers can be neglected in the
black holes analysis.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.4\textwidth]{figures/blackholes_trig_eff_data_ht450.pdf}
\includegraphics[width=0.4\textwidth]{figures/blackholes_trig_eff_data_pileup_ht450.pdf}
\caption{Left: Efficiency of unscaled total jet activity HLT paths as a function of \ST. Right: Efficiency of \texttt{HLT\_PFNoPUHT650} as a function of \ST in three bins of number of primary vertices, $N_\mathrm{PV}$: (i) $N_\mathrm{PV} \le 10$, (ii) $10 < N_\mathrm{PV} < 25$, and (iii) $N_\mathrm{PV} \ge 25$. All efficiencies are calculated with respect to a prescaled total activity path with $\HT = 450$\GeV threshold.}
\label{fig:ST}
\end{figure}
\subsection{B physics and quarkonia triggers}
\label{sec:bphbpag}
\begin{figure}[tbp]
\centering
\includegraphics[width=0.8\linewidth]{figures/BPH-2011Run-triggers-DimuonMass1fb.pdf}
\caption{\label{fig:BPH:DimuonMass} Dimuon mass distributions
collected with the inclusive double-muon trigger used during early data taking in 2011. The colored areas
correspond to triggers requiring dimuons in specific mass windows,
while the dark gray area represents a trigger only operated during
the first 0.2\fbinv of the 2011 run. }
\end{figure}
The CMS analyses in the fields of B physics and quarkonium production
are mostly based on data samples collected with double-muon
triggers. In the 2010 run, the LHC instantaneous luminosity was
sufficiently low that relatively loose triggers could be used.
Essentially all the analyses made in the B physics group were
based on one inclusive trigger, which requires two high-quality
muons. The resulting dimuon mass distribution covers dimuon mass
values from threshold all the way to 200\GeV, displaying ``needles''
caused by the dimuon decays of resonances on top of a smooth
underlying continuum.
The significantly higher collision rates of the 2011 LHC run, and the
ceiling of around 25--30\unit{Hz} for the total trigger bandwidth allocated
for B physics, required the development of several specific HLT
paths, each devoted to a more or less exclusive set of physics
analyses. Figure~\ref{fig:BPH:DimuonMass} illustrates the
corresponding dimuon mass distributions, stacked on each other. The
high-rate ``low-\pt double muon'' path was in operation only during
the first few weeks of the run; the others had their rates reduced
through suitable selection requirements on the dimuon mass and on the
single-muon and/or dimuon \pt.
The quarkonia trigger paths (\JPsi, $\psi^\prime$, and $\PgU$) had explicit requirements on the \pt of the dimuon system but not on that of the
single muons: first, because the analyses are made as a function of the
dimuon \pt; and second, because the single-muon \pt requirements induce a
significant restriction of the covered phase space in terms of the
angular decay variable $\cos\theta$, and this is crucial for the measurements of
quarkonium polarization. To further reduce the rate, the two muons
were required to bend away from each other because the ones bending
towards each other have lower
efficiencies.
The dimuon was required to have a central rapidity,
$\abs{y} < 1.25$. This is particularly useful to distinguish the
\PgUb\xspace and \PgUc\xspace resonances, as well as for analyses of
P-wave quarkonia production, which require the measurement of the
photon emitted in the radiative decays (\eg, $\chi_\mathrm{c} \to \JPsi \Pgg$). In fact, to resolve the \Pcgci and \Pcgcii\xspace peaks
(or, even more challenging, the \Pbgci\xspace
and \Pbgcii\xspace
peaks), it is very important to have a high-resolution measurement of
the photon energy, made possible by reconstructing photon
conversions into $\Pep\Pem$ pairs in the barrel section of the silicon
tracker.
In addition to the quarkonia resonances, Fig.~\ref{fig:BPH:DimuonMass}
shows a prominent ``peak'' labeled $\mathrm{B}_\mathrm{s}$, which represents the
data collected to search for the elusive $\mathrm{B}_\mathrm{s} \to \Pgm\Pgm$ and
$\mathrm{B}_\mathrm{d} \to \Pgm\Pgm$ decays. These triggers had no restrictions
on the dimuon rapidity or relative curvature and kept \pt requirements much looser
than those applied in the offline
analysis.
The total rate of the $\mathrm{B}_\mathrm{s}$ trigger paths remained
relatively small, of the order of 5\unit{Hz}, even when the LHC
instantaneous luminosity exceeded $7 \times 10^{33}$\percms, at the
end of the 2012 run.
The other prominent trigger path illustrated in
Fig.~\ref{fig:BPH:DimuonMass}, the ``low-mass displaced
dimuons'', selected events with a pair of opposite-sign muons with a
dimuon vertex pointing back to and displaced from the interaction
point by more than three standard deviations. These events were
collected to study decays of B mesons into final states containing a
pair of muons plus one or more kaon and/or pion, as well as to measure
the $\Lambda_\mathrm{b}$ cross section, lifetime, and polarization. This is the
most challenging trigger path because of its very high rate, which
cannot be reduced through the increase of muon \pt requirements without
a significant loss of signal efficiency.
The main difference between the 2011 and 2012 runs, from the
perspective of B physics, was the availability of the
so-called ``parked data'' (Section~\ref{sec:HLTDAQ}). The resulting
increase in available HLT bandwidth meant that most trigger paths
could have looser requirements in 2012 than in 2011. Additionally, several new
triggers were added, including a like-sign dimuon trigger to study the
``anomalous dimuon charge asymmetry'' observed at the
Tevatron~\cite{Abazov:2010hj}.
Two special calibration triggers were developed to study the single-muon detection efficiencies in an unbiased way. One is a single muon
trigger that requires the presence of an extra track such that the
invariant mass of the muon-track pair is in the \JPsi mass region; the
existence of a \JPsi peak in this event sample ensures that the track
is likely to be a muon that can be used to provide an unbiased
assessment of the muon-related efficiencies (offline reconstruction in
the muon detectors, as well as L1 and L2 trigger efficiencies as described in Section~\ref{sec:muHLT}). The
other is a dimuon trigger for those low-mass dimuons in which the
muons are reconstructed without using any information from the
silicon tracker hits, thereby allowing the study of the offline
tracking and track quality selection efficiencies, as well
as the L3 trigger efficiency (Section~\ref{sec:muHLT}). These efficiency measurements are made
using a tag-and-probe methodology.
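As a minimal illustration of the tag-and-probe counting (schematic only: the actual measurements fit the dimuon mass spectrum to subtract the background under the resonance peak, which this sketch omits, and the mass window below is a hypothetical choice):
\begin{verbatim}
def tnp_efficiency(pairs, mass_lo=3.0, mass_hi=3.2):
    # pairs: list of (mass, probe_passed) tag-probe candidates, with
    # probe_passed a boolean for the selection under study.
    in_window = [ok for mass, ok in pairs if mass_lo < mass < mass_hi]
    n_tot = len(in_window)
    if n_tot == 0:
        return 0.0, 0.0
    eff = sum(in_window) / n_tot
    err = (eff*(1.0 - eff)/n_tot)**0.5  # binomial uncertainty
    return eff, err
\end{verbatim}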
\begin{figure}[tbp]
\centering
\resizebox{0.5\linewidth}{!}{%
\includegraphics{figures/parametrizedEff-BPH-muons.pdf}}
\caption{\label{fig:BPH:2011MuonEff} Single-muon detection
efficiencies (convolving trigger, reconstruction, and selection
requirements) as a function of \pt, as obtained from the data, using the
tag-and-probe method. Data points are shown for the pseudorapidity
range $\abs{\eta} < 0.2$, while the curves (depicting a parametrization
of the measured efficiencies) correspond to the three ranges
indicated in the legend.}
\end{figure}
As an illustration, Fig.~\ref{fig:BPH:2011MuonEff} shows the single-muon
detection efficiency as a function of \pt for three muon
pseudorapidity ranges.
The rate of events with single muons is very large and it might happen
that a muon is mistakenly identified as two close-by muons. To prevent
such events from increasing the rate of dimuon triggers, the
trigger logic at L1 and L2 discards dimuon signals if the two muon
trajectories are too close to each other. The drawback is that this
significantly reduces the efficiency of the dimuon trigger for signal
dimuons where the two muons are close to each other, which happens
quite often for low-mass dimuons of high \pt.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.65\textwidth]{figures/MC_DeltaPhi_vs_DeltaEta_trig_over_reco.pdf}
\caption{Dimuon trigger efficiencies in the $\Delta\phi$ versus
$\Delta\eta$ plane for \JPsi events generated in the kinematic
region $\pt > 50$\GeV and $\abs{y}<1.2$, illustrating the efficiency
drop when the two muons are too close to each other.}
\label{fig:BPH:rho-factor}
\end{figure}
This drop in the dimuon trigger efficiency, shown in
Fig.~\ref{fig:BPH:rho-factor}, is induced by a correlation between the
two muons and, hence, is not accounted for by the
simple product of the two single-muon efficiencies. The
corresponding correction can be evaluated by MC simulation
and validated by studying distributions of measured events as a
function of the distance between the two muon tracks. In the 2012
run, a new trigger was developed, in which a high-\pt single muon
selected at L1 and L2 is associated with a tracker muon at L3 before a
dimuon mass range is imposed. In such events, only a single
muon is required at the L1 and L2 steps, so that the event is not
rejected if a second muon is very close by. This trigger
path is ideally suited to study charmonium production at very high
\pt.
\section{Trigger menus}
\label{sec:triggermenus}
A trigger menu is defined as the sum of all object definitions and
algorithms that define a particular configuration of the CMS trigger
system. The menu consists of definitions of L1 objects and the algorithms
that are used to render the L1 decision, as well as the configuration of
the software modules that are used in the HLT. Sets of prescale columns
for different instantaneous luminosities are also included. By means of
such a prescale set, the data-archiving rate of the readout chain could
be adjusted and maximized during an LHC fill as the instantaneous
luminosity, and with it the trigger rate, dropped.
In this section, we describe the L1 and HLT menus and how they have
evolved in response to the physics goals and significant performance
improvements of the LHC machine during the first run.
\subsection{L1 menus}
From 2010 to 2012, several L1 menus (and corresponding prescale
columns) were developed to meet the experiment's physics goals and to
cope with the evolution of the LHC operational conditions, \ie,
the change of the center-of-mass energy between 2011 and 2012,
the varying number of colliding bunches for LHC fills, and the growth
of luminosity per bunch.
While designing new L1 menus, improved algorithms and thresholds were
utilized to continuously maintain the L1 trigger output rate within
the 100\unit{kHz} bandwidth limit. When the luminosity ramp-up phase
stabilized in 2011 and 2012, the strategy focused on reducing the
number of L1 menus being developed to a few per year, and adapting for
different machine operational conditions by using multiple prescale
columns rather than different L1 menus.
At the end of 2012, during a twelve-hour-long fill, the instantaneous
luminosity delivered by the LHC varied significantly, spanning from
${\approx}7\times10^{33}\percms$ to
${\approx}2.5\times10^{33}\percms$. The average number of pileup
interactions per bunch crossing ranged from ${\approx}$30 at the beginning to
${\approx}$12 at the end of the fill.
To aid the L1 menu development using data, a special reduced-content
event data format (containing only GCT, GMT and GT readout payloads)
was defined and used to record events in a special data set. These
events were collected on the basis of the BPTX signals and the L1 GT decision
only.
Hence, with such recorded zero bias and L1 bias data sets, it was
possible to properly account for rate overlaps of the algorithms
operated in parallel in the GT~(Section~\ref{sec:global_trigger_desc})
while designing new menus. Additionally, since the event size was
significantly smaller than the standard event sizes~\cite{DAQ-TDR}, it
was possible to collect these events at a much higher trigger rate
than with the standard event-data payload, enabling frequent offline
analysis and cross-checks of the L1 trigger decision.
\subsubsection{Menu development}
The L1 menu development for the first LHC run was to a large extent based on data, recorded both during standard collision runs and from
special LHC setups, including high-pileup runs. To better understand the features of the LHC machine, different magnet and collimator settings were used. In addition, some data were taken with very few proton bunches but a large number of protons per bunch, leading to significantly more collisions per bunch crossing and thus high-pileup events. These events were used to project trigger rates at improved LHC performance. Simulated data samples were also used to evaluate the
impact of the 7\TeV to 8\TeV LHC energy increase in 2012.
For the L1 menu development, as well as for the development of the L1 trigger algorithms,
we followed these principles and strategy:
\begin{itemize}
\item use single-object triggers as baseline algorithms and adjust
thresholds to be sensitive to the electroweak physics as well as new
physics, \eg, heavy particles, multi-object final states,
events with large missing transverse energy;
\item in case the thresholds of the single-object triggers are too
high with respect to the given physics goals (or if the acceptance for a
given signal can be significantly increased), use multi-object triggers,
\eg, two muons or one muon plus two jets;
\item prefer algorithms which are insensitive to changing LHC run
conditions, \eg, prefer algorithms that are less sensitive to
pileup events; and
\item the algorithms and thresholds in a new L1 menu, developed, \eg,
for a different instantaneous luminosity, should preserve, if
possible, the sharing of rates among the trigger types:
\ie, the muon, $\Pe/\Pgg$, and jet/sum
triggers should take a similar fraction of the total rate as
in the existing L1 menu.
\end{itemize}
\begin{table}[tbp]\centering
\topcaption{Machine operational conditions, target instantaneous luminosity used for
rate estimation, and approximate overall L1 rate for three sample L1
menus, representative of the end of the year data-taking conditions
for 2010, 2011, and 2012.}
\begin{tabular}{ | l c c c c |}
\hline
Year & $\sqrt{s}$ [\TeVns{}] & Ref. \lumi [\!\percms] & $\langle$pileup$\rangle$ & $\langle$L1 rate$\rangle$ [kHz] \\
\hline
2010 & 7 & $0.15 \times 10^{33}$ & ${\approx}$2.5 & 56.9 \\
2011 & 7 & $3.00 \times 10^{33}$ & ${\approx}$14 & 80.9 \\
2012 & 8 & $5.00 \times 10^{33}$ & ${\approx}$23 & 56.5 \\
\hline
\end{tabular}
\label{tab:l1_trigger_menus}
\end{table}
\begin{figure}[tbp]
\includegraphics[width=0.48\linewidth]{figures/rate_evolution_eg.pdf}
\includegraphics[width=0.48\linewidth]{figures/cross_section_evolution_eg.pdf}
\caption{Rates (left) and cross sections (right) for a significant
sample of L1 $\Pe/\Pgg$ triggers
from 2010, 2011, and 2012 sample menus.}
\label{fig:rate_evolution_eg}
\end{figure}
\begin{figure}[tbp]
\includegraphics[width=0.48\linewidth]{figures/rate_evolution_mu.pdf}
\includegraphics[width=0.48\linewidth]{figures/cross_section_evolution_mu.pdf}
\caption{Rates (left) and cross sections (right) for a significant sample of L1 muon triggers
from 2010, 2011, and 2012 sample menus.}
\label{fig:rate_evolution_mu}
\end{figure}
\begin{figure}[tbp]
\centering
\includegraphics[width=0.9\linewidth]{figures/rate_evolution_jet_sums.pdf}
\includegraphics[width=0.9\linewidth]{figures/cross_section_evolution_jet_sums.pdf}
\caption{Rates (top) and cross sections (bottom) for a significant sample of L1 jet triggers
from 2010, 2011, and 2012 sample menus.}
\label{fig:rate_evolution_jet_sums}
\end{figure}
Table~\ref{tab:l1_trigger_menus} gives an overview of typical output
rates of the L1 trigger system in 2010, 2011, and 2012, and
Table~\ref{tab:menudetails} shows details for a typical 2012 menu. The
examples are chosen for LHC run periods where the measured
instantaneous luminosities were close to the ones the different menus
were designed for. The overall L1 trigger output rate was
significantly higher than 50\unit{kHz} and well below the 100\unit{kHz} limit, as
intended. The differences between observed and predicted total trigger
rates largely depended on how the L1 trigger was operated: if a
prescale column was applied at an instantaneous luminosity different
from the one a specific L1
menu was designed for, the total trigger output rate could change
significantly ($\mathcal{O}(10\unit{kHz})$).
The average L1 total trigger output rate varied from year to year due
to adaptations to the changing LHC conditions.
Figures~\ref{fig:rate_evolution_eg}, \ref{fig:rate_evolution_mu}, and
\ref{fig:rate_evolution_jet_sums} show trigger rates and
cross sections of the various lowest threshold, unscaled, single-object,
and multi-object triggers defined for the first LHC run. For almost
all triggers in a specific menu, the rates and cross sections had to be kept similar to or lower than those in the previously used trigger menu. This was required to maintain the overall L1 trigger
output rate below the 100\unit{kHz} limit while taking into account the increasing LHC performance. To achieve this goal for
higher instantaneous luminosities, \ie, later in 2011 and 2012,
multi-object trigger algorithms as well as higher object thresholds
were used.
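The numbers in Table~\ref{tab:menudetails} follow from the simple proportionality between rate and instantaneous luminosity: for an unprescaled algorithm, rate $=\sigma\mathcal{L}$. The sketch below illustrates this bookkeeping; it is not CMS software, and the prescale value shown is an assumption for an unprescaled seed.
\begin{verbatim}
# Illustrative sketch: expected L1 rates from per-algorithm cross
# sections, rate = sigma * L / prescale, summed against the 100 kHz budget.
TARGET_LUMI = 5.0e33            # cm^-2 s^-1 (2012 reference luminosity)
MICROBARN = 1.0e-30             # cm^2

# (cross section [ub], prescale); sigma for L1_SingleEG20 is taken from
# the 2012 menu table, the prescale of 1 is assumed for an unprescaled seed.
triggers = {"L1_SingleEG20": (2.14, 1)}

total_rate_hz = 0.0
for name, (sigma_ub, prescale) in triggers.items():
    rate_hz = sigma_ub * MICROBARN * TARGET_LUMI / prescale
    total_rate_hz += rate_hz
    print(name, round(rate_hz / 1e3, 2), "kHz")  # ~10.7 kHz, close to the table

assert total_rate_hz < 100e3    # the menu must stay below the 100 kHz limit
\end{verbatim}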
\begin{table}[tbp]\centering \topcaption{Rates from a significant set of unscaled algorithms
in a typical L1 menu used during 2012 data-taking.
Rates and cross sections ($\sigma$) are computed for a target luminosity
of $5\times 10^{33}\percms$. The overall menu rate (including
calibration and monitoring triggers) is 56.5\unit{kHz}. The corresponding
average pileup is approximately 23 interactions per bunch crossing. }
\begin{tabular}{ | l c c | }
\hline
Seed name & rate @ $5 \times 10^{33}\percms$ & $\sigma$ \\
& [kHz] & [$\mu$b] \\
\hline
\texttt{L1\_SingleIsoEG18er} & 7.69 & 1.55 \\
\texttt{L1\_SingleEG20} & 10.5 & 2.14 \\
\texttt{L1\_SingleMu12er} & 8.11 & 1.64 \\
\texttt{L1\_SingleMu16} & 7.49 & 1.51 \\
\texttt{L1\_SingleJet128} & 1.15 & 0.232 \\
\hline
\texttt{L1\_SingleMu6\_NotBptxOR} & 0.03 & 0.007 \\
\texttt{L1\_SingleJetC32\_NotBptxOR} & 0.13 & 0.026 \\
\hline
\texttt{L1\_ETM36} & 4.35 & 0.881 \\
\texttt{L1\_HTT150} & 1.10 & 0.223 \\
\texttt{L1\_ETT300} & 0.21 & 0.043 \\
\hline
\texttt{L1\_DoubleEG\_13\_7} & 6.58 & 1.33 \\
\texttt{L1\_DoubleMu\_10\_Open} & 4.36 & 0.882 \\
\texttt{L1\_DoubleMu0er\_HighQ} & 5.77 & 1.16 \\
\texttt{L1\_DoubleJetC56} & 7.59 & 1.53 \\
\texttt{L1\_DoubleTauJet44er} & 1.88 & 0.381 \\
\hline
\texttt{L1\_TripleMu0} & 0.81 & 0.165 \\
\texttt{L1\_TripleEG\_12\_7\_5} & 2.19 & 0.444 \\
\texttt{L1\_TripleEG7} & 1.35 & 0.273 \\
\texttt{L1\_TripleJet\_64\_48\_28\_VBF} & 2.28 & 0.462 \\
\hline
\texttt{L1\_QuadJetC36} & 0.74 & 0.150 \\
\hline
\texttt{L1\_Mu3p5\_EG12} & 2.34 & 0.474 \\
\texttt{L1\_Mu12\_EG7} & 1.03 & 0.208 \\
\texttt{L1\_Mu0\_HTT100} & 0.46 & 0.094 \\
\texttt{L1\_Mu7er\_ETM20} & 1.19 & 0.241 \\
\texttt{L1\_IsoEG12er\_ETM30} & 1.54 & 0.311 \\
\texttt{L1\_EG22\_ForJet24} & 2.42 & 0.489 \\
\hline
\texttt{L1\_DoubleMu5\_EG5} & 0.54 & 0.109 \\
\texttt{L1\_Mu5\_DoubleEG6} & 0.96 & 0.194 \\
\texttt{L1\_DoubleEG6\_HTT100} & 1.32 & 0.266 \\
\texttt{L1\_DoubleJetC36\_ETM30} & 3.40 & 0.688 \\
\texttt{L1\_Mu10er\_JetC12\_WdEtaPhi1\_DoubleJetC\_20\_12} & 1.02 & 0.207 \\
\hline
\end{tabular}
\label{tab:menudetails}
\end{table}
\subsection{HLT menus}
The configuration of all the HLT paths that are run online at one time is called the HLT menu of CMS. This menu was initially prepared, based on simulated data, before the first data were taken in 2010, and it has continuously evolved since then. This evolution is driven mainly by changes in the machine conditions, namely $\sqrt{s}$, luminosity, bunch spacing, and pileup conditions.
Moreover, timing improvements in the software-based algorithms and analysis techniques allowed the online algorithms to be brought much closer to the ones adopted offline, leading to better performance, as well as closer correspondence between the online and the offline selections. In addition to the trigger paths designed to preselect the events to be used in the physics analyses, calibration and monitoring paths for the different CMS subdetectors are also necessary and were included in all menus.
The first menus in 2010 consisted of fewer than 60 separate trigger
paths. The low instantaneous luminosity supplied by the LHC at that
time allowed the use of several ``pass-through'' paths, in which the
events accepted by the L1 trigger are also accepted by
the HLT without further requirements or restrictions. In addition to the pass-through paths, single ``physics object'' triggers started to
be developed, meant to trigger on inclusive isolated or non-isolated
electrons, photons, muons, and jets. As the instantaneous luminosity
increased, the strategies used to control the trigger rate consisted
of raising \pt thresholds; adding isolation and quality conditions in
the identification of jets, leptons, and photons; using prescales;
introducing cross-triggers (triggers that require several physics
objects of different types); and defining dedicated $\tau$-like jets.
Moreover, a few other paths were included in the menu to study
possible implementations for future data-taking periods at higher
rate. During 2010, 12 different trigger menus were developed, covering
the wide range of instantaneous luminosity scenarios provided by the
LHC (from $1\times10^{27}$ to $2\times10^{32}$\percms). In
addition, prescale values were designed according to the LHC luminosity.
In 2011, with the LHC still operating at 7\TeV center-of-mass energy,
six different HLT and L1 menus were designed, aimed at instantaneous
luminosities ranging from $5\times10^{32}$ to
$5\times10^{33}$\percms. Tighter selections were therefore needed, and
the refinement of the trigger requirements was achieved by gradually
introducing analysis-like selection criteria at the trigger
level. Besides the usual ``physics object''-oriented paths, the
presence of cross-triggers and more complex trigger paths, based on
algorithms similar to those applied in the offline analyses, became
more and more relevant in the menu. A few refined techniques used
offline could therefore be brought to the HLT, after adapting them to
reduce the CPU time needed, with very little loss in performance and without
greatly compromising their final response. Among those techniques,
particle flow (PF) reconstruction~\cite{CMS-pf1} was used from the beginning to characterize the hadronically decaying
$\tau$ leptons at the HLT. Towards the end of the 2011 data-taking
period, strategies designed to mitigate the effect of pileup were also
included in several trigger paths, with the intent of studying their
performance in view of the 2012 data taking when the pileup effect was
expected to become more relevant. In this respect, the so-called
\FASTJET corrections~\cite{fastjetmanual}, offset corrections which
take into account the average energy density in the event and the area
of each jet in order to correct its energy on a jet-by-jet basis,
proved very successful.
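Schematically (the notation here is ours, not a quotation of the CMS implementation), the area-based offset correction subtracts the median pileup transverse-momentum density $\rho$ multiplied by the jet area,
\[
p_{\mathrm{T}}^{\mathrm{corr}} = p_{\mathrm{T}}^{\mathrm{raw}} - \rho\, A_{\mathrm{jet}},
\]
so that each jet is corrected according to its own catchment area, independently of the others.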
The number of paths deployed in the 2011 menus rose from about 310 at
the beginning of the data taking to approximately 430 towards the end
of the year. A few paths were included specifically to monitor and
calibrate CMS subdetector components. For example, the response of the
electromagnetic calorimeter, which is fundamental for the selection
and analysis of the Higgs boson decaying into two photons, is
continuously monitored by some dedicated paths, in order to provide
updates to the calibrations in a timely manner.
When the 8\TeV run began in 2012, because of the higher instantaneous luminosity achieved by the LHC, pileup effects became much more important, and therefore improvements in the design of most of the paths included in the menu were required. Ideally, the acceptance rate of a trigger should be proportional to the instantaneous luminosity; however, due to pileup it may increase nonlinearly. This effect, together with the higher LHC luminosity, would give rise to unacceptably high trigger rates.
The rate increases can only be mitigated by either raising the
acceptance thresholds in the paths themselves (with the unwanted effect
of reducing the physics reach of the events selected), or by improving
the performance of the selections, with sharper turn-on curves at the
thresholds and less sensitivity to pileup. The main handle used to
achieve this goal, without affecting the CMS physics potential, was
the extension of the implementation of particle flow reconstruction to
most jet- and \MET-based triggers. The replacement of
calorimeter-based jet triggers with PF-based ones was introduced gradually during the year. An additional advantage of using PF in the trigger selection is that the selection algorithms are mostly the same as those used offline for the final analysis; however, reconstruction algorithms based on PF methods generally need more CPU time than the more ``classical'' one-object-per-detector-type algorithms. One idea used in the HLT to reduce the overall CPU time consumption was to run the PF reconstruction only after all other possible selections based on classical quantities, which are faster to calculate. Among the other technical improvements to the HLT algorithms that allowed the rate to be kept low and the CPU time manageable, the following were particularly relevant: the optimization of lepton identification and isolation; the use of a filter to select leptons coming from the same vertex in several dilepton paths; weekly updates of the ECAL transparency corrections, which efficiently compensate for the transparency loss in the endcap region affecting the electromagnetic energy scale; and the dedicated $\tau$ lepton reconstruction for the double-$\tau$ and lepton+$\tau$ triggers.
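The benefit of deferring the PF step can be seen from a schematic cost estimate (illustrative, not a measured CMS number): if the fast, classical selections take an average time $t_{\text{fast}}$ and pass a fraction $f$ of events on to the PF reconstruction of cost $t_{\mathrm{PF}}$, the mean processing time per event is
\[
\langle t \rangle \approx t_{\text{fast}} + f\, t_{\mathrm{PF}},
\]
which for $f \ll 1$ is far smaller than the cost $t_{\text{fast}} + t_{\mathrm{PF}}$ of running PF on every event.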
Different menus were used in 2012 for four different LHC peak luminosities, ranging from $5\times10^{33}$ to $8\times10^{33}$\percms. The number of different HLT paths during 2012 was approximately 400 at the beginning, increasing to about 450 by the end of the year.
In addition to the proton-proton triggers, dedicated menus were created for the heavy ion (lead-lead) collisions in 2010 and 2011 and for the proton-lead collisions in the first months of 2013. The different running conditions and physics requirements led to different menus for the ion-ion and the proton-ion runs. The final heavy ion menus in 2010 and 2011 consisted of 58 and 77 HLT paths, respectively, while the proton-ion menu of 2013 contained about 150 paths.
\section{Trigger system operation and evolution}
\label{sec:operations}
\subsection{Trigger monitoring and operations}
During data taking the angular distributions of objects satisfying the
trigger and the trigger rates were monitored. As these two kinds of
information are produced using two different software tools, they
provide complementary information about the behavior of the trigger system that
is useful in diagnosing problems.
We use the central CMS data quality monitoring (DQM) tools~\cite{DQM} to
monitor the angular distributions. The DQM tools process a small subset of
events selected by the HLT and produce plots of $\eta$ and $\phi$ of the
trigger objects. The distributions are monitored for regions with
abnormally high or low numbers of events.
The rates of each HLT path are monitored in each node of the HLT computing
cluster, where the CMS data acquisition software records how many times each
trigger path was successful. The path information is summed over all nodes to
give the total rate of each path. The summation is performed at fixed intervals of integrated luminosity,
and the results are written into a database. The HLT group has developed
customized software to extract the rates from the database and compare them to
expectations. The expected rate behavior is defined by fitting the trigger
rates as a function of instantaneous luminosity using previously recorded data certified as
good. Uncertainties from the fit provide an envelope of expected rate variations
which sets the threshold for displaying warnings in the control room. A
selected set of approximately 20 representative triggers, out of the roughly 400
in the HLT menu, is used for regular online monitoring. The selected HLT paths
have either a large rate or an important physics signature. For instance, some
of the closely monitored triggers include single-muon, single-electron, and
diphoton triggers, where rate variations are identified with a 5--10\%
sensitivity.
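A minimal sketch of this rate-monitoring logic is given below. It is not the CMS monitoring software: the reference points and the linear fit form are illustrative assumptions (the actual fits and thresholds are tuned per path).
\begin{verbatim}
# Illustrative sketch: fit the rate of one HLT path vs. instantaneous
# luminosity on certified data, then flag new points outside the envelope.
import numpy as np

lumi = np.array([2.0, 3.0, 4.0, 5.0, 6.0])        # 10^33 cm^-2 s^-1
rate = np.array([40.0, 61.0, 82.0, 99.0, 121.0])  # Hz, made-up reference data

coef, cov = np.polyfit(lumi, rate, deg=1, cov=True)
predict = np.poly1d(coef)

def check(lumi_now, rate_now, n_sigma=3.0):
    """Warn if the observed rate deviates from the fitted expectation."""
    grad = np.array([lumi_now, 1.0])     # d(prediction)/d(fit parameters)
    sigma = np.sqrt(grad @ cov @ grad)   # propagated fit uncertainty
    if abs(rate_now - predict(lumi_now)) > n_sigma * sigma:
        print("WARNING: rate", rate_now, "Hz vs expected",
              round(predict(lumi_now), 1), "+/-", round(sigma, 1), "Hz")

check(5.5, 140.0)  # a deviation like this would raise a control-room warning
\end{verbatim}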
\subsection{Technical performance}
\subsubsection{The L1 trigger deadtime, downtime, and reliability}
``Deadtime during active beam'' is defined as the percentage of time
during normal data taking (data acquisition system in ``run" mode)
when collisions occur but CMS is not ready to record triggers. There
are several contributions to this deadtime:
\begin{itemize}
\item Partition controller deadtime: It arises when any CMS subsystem
(such as a subdetector or trigger subsystem) asserts ``not ready''
because of a transient problem (\eg, ``out of sync'', requiring
a ``resync'' command) or because the instantaneous trigger rate is
too high.
\item Trigger rules deadtime: A set of
trigger rules of the type ``not more than $m$ triggers within $n$
bunch crossings'' limits the instantaneous trigger rate (see the
sketch after this list).
\item Calibration deadtime: At a rate of 100\unit{Hz}, calibration
triggers are sent (required mainly by the electromagnetic
calorimeter) and a small part of the orbit is blocked for this
purpose.
\end{itemize}
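The effect of a single trigger rule can be estimated with a toy simulation such as the sketch below. The rule parameters, the accept probability, and the treatment of the orbit as 3564 consecutive filled crossings are illustrative assumptions; this is not the GT firmware logic.
\begin{verbatim}
# Illustrative sketch: deadtime induced by a trigger rule of the form
# "no more than m triggers within n bunch crossings".
import random
from collections import deque

def rule_blocked(accepted_bx, bx, m=2, n=25):
    """True if a trigger at bunch crossing `bx` would violate the rule."""
    while accepted_bx and bx - accepted_bx[0] >= n:
        accepted_bx.popleft()            # drop crossings outside the window
    return len(accepted_bx) >= m

random.seed(1)
accepted, blocked, n_acc = deque(), 0, 0
for bx in range(1000 * 3564):            # ~1000 LHC orbits
    if random.random() < 2.5e-3:         # toy ~100 kHz accept probability
        if rule_blocked(accepted, bx):
            blocked += 1                 # trigger lost to the rule: deadtime
        else:
            accepted.append(bx)
            n_acc += 1
print("fraction lost to the rule:", blocked / (blocked + n_acc))
\end{verbatim}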
Usually, deadtime was kept to approximately 1\%, only a small fraction
of which was due to trigger subsystems.
``Downtime'' is the percentage of time when the data acquisition system
cannot be put into run mode during active beams because of a
malfunctioning subsystem. During regular running, the downtime due to
trigger subsystems was well below one percent. In most cases, the
trigger downtime was caused by hardware or software crashes, which could
be fixed by restarting the electronics subsystem or the software
process, respectively. To take care of the rare cases where an
electronic module is faulty, spare modules for all systems are kept in
the electronics cavern. For the GT, a fully equipped spare crate is
kept running and ready to take over at any time in case of a hardware
fault. Empirically, we observe that the L1 trigger system contributes only a
small fraction to the total experiment downtime.
\subsubsection{The HLT resources and optimization}
As described in Section~\ref{sec:HLTDAQ}, the HLT runs on an EVF consisting of three different types of machines. Two complementary methods are used to monitor the usage of this farm by the HLT menu. The first method directly measures the time taken by the HLT selection and reconstruction steps for each event during data taking. The second method rapidly samples every CPU in the farm to determine its state, and the time per event is calculated based on the frequency of finding the CPU in a non-idle state. The two methods give consistent results. Using the second method, the total busy fraction of the EVF can also be determined.
To estimate the CPU usage of an HLT menu at a future (higher) instantaneous luminosity value, the average busy fraction over the course of an LHC fill is measured, and a fit is performed as a function of instantaneous luminosity, as shown in Fig.~\ref{fig:HLTCPUFraction}.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.6\textwidth]{figures/HLTCPUFraction.pdf}
\caption{The average CPU busy fraction as a function of instantaneous luminosity for one LHC fill. Luminosity sections with data-taking deadtime $>$40\% are removed.}
\label{fig:HLTCPUFraction}
\end{figure}
An exponential function is found to give a good description of the data over a wide range of instantaneous luminosity and allows extrapolation to higher luminosities. In addition, we also measure the time per event for each type of machine used in the filter farm as shown in Fig.~\ref{fig:HLTMachineProcessingTimes}.
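A sketch of this extrapolation is shown below; the exponential form matches the description above, but the data points and coefficients are invented for illustration.
\begin{verbatim}
# Illustrative sketch: fit the measured EVF busy fraction vs. luminosity
# with an exponential and extrapolate to the saturation point.
import numpy as np
from scipy.optimize import curve_fit

def busy(L, a, b):                 # L in units of 10^33 cm^-2 s^-1
    return a * np.exp(b * L)

L_meas = np.array([3.0, 4.0, 5.0, 6.0])
f_meas = np.array([0.25, 0.35, 0.48, 0.66])    # made-up busy fractions

(a, b), _ = curve_fit(busy, L_meas, f_meas, p0=(0.1, 0.3))
L_sat = np.log(1.0 / a) / b        # luminosity where the busy fraction -> 1
print("farm saturates near L =", round(L_sat, 1), "x 10^33 cm^-2 s^-1")
\end{verbatim}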
\begin{figure}[tbp]
\centering
\includegraphics[width=0.6\textwidth]{figures/HLTMachineProcessingTimes.pdf}
\caption{The HLT processing time per event as a function of instantaneous luminosity
for the three different machine types used in the filter farm.}
\label{fig:HLTMachineProcessingTimes}
\end{figure}
The time per event is observed to be approximately linear as a function of luminosity on the Intel Xeon E5430 CPUs. The other two types of CPUs employ Intel's hyper-threading to run twice as many concurrent processes as there are physical cores by using parts of the CPU that would otherwise be idle. As a result, the time per event for these hyper-threaded CPUs increases faster than linearly as the CPU is saturated with increasing luminosity and input rate. Using this information, it is possible to calculate the maximum time per event of the HLT menu for a given L1 input rate, and also the instantaneous luminosity at which this limit would be reached. The figure of merit used is the time per event for an Intel Xeon E5430 CPU. The filter farm configuration used during 2012 data taking was able to sustain an average processing time per event of approximately 200~ms for an L1 input rate of 100\unit{kHz}.
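The quoted capacity can be read as a simple budget. If $N_{\mathrm{eff}}$ is the number of effective E5430-equivalent cores in the farm and $R_{\mathrm{L1}}$ the L1 input rate, the sustainable average time per event is
\[
\langle t \rangle_{\max} \simeq \frac{N_{\mathrm{eff}}}{R_{\mathrm{L1}}},
\]
so 200\unit{ms} at $R_{\mathrm{L1}}=100\unit{kHz}$ corresponds to $N_{\mathrm{eff}} \approx 2\times 10^{4}$ core-equivalents; this number is implied by the figures above rather than stated directly.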
In addition to the online monitoring of the HLT menus, each menu is validated in an offline environment before being used for online data taking. Each new version of the menu is compared to a previous version on a single machine to ensure that the CPU consumption does not exceed expectations. The menus are tested by running the HLT once with each menu over the same sample of previously collected events. The measurement is done using a machine with similar core architecture to the Intel Xeon E5430 CPU, and is performed using the direct timing measurement described above. New instantaneous luminosity and L1 input rate limits can then be determined by using the relative performance of the new menu and the measured performance of the older menu. An example of an offline comparison of the times per event for two different HLT menus is shown in Fig.~\ref{fig:HLTOfflineComparison}. When testing a new menu, the time per event for each HLT path is also checked to determine which paths are the most CPU intensive. The algorithms for CPU-intensive paths are then optimized to ensure that the total processing time does not exceed the limitations of the system.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.6\textwidth]{figures/HLTMenuOfflineComparison.pdf}
\caption{Comparison of the time per event measured for two different HLT menus using a validation
machine outside of the event filter farm.}
\label{fig:HLTOfflineComparison}
\end{figure}
\subsubsection{The HLT operations}
Following offline validation, HLT menus are validated in an online environment using the HLT online (``HiLTOn'') test stand. In order to be as close as possible to the online environment, the HiLTOn is operated using the same run control interface as the CMS DAQ system. The HiLTOn hardware consists of 30 Dell PE 1950 machines with dual quad-core 2.66~GHz CPUs and 16~GB of RAM. The test stand system is subdivided into three groups of ten machines. The first group is always kept identical to the PE 1950 machines used for data taking operations and thus can be used to validate menus on the current online software environment. The second group of machines may additionally be used to validate the performance of software updates, and HLT menus that depend on the updates, in an online environment. Finally, the third subdivision of the HiLTOn is used to evaluate changes made to the HiLTOn itself. Two machines of each group are dedicated to the building of new online software releases. A third machine is always used to collect the output of the HLT and save events to disk. The remaining seven machines in each group are able to process events via seven instances of the HLT per machine, although only four machines are used for typical menu validation.
The HLT validation is designed to maximize the performance and stability of HLT algorithms. As every event satisfying L1 trigger requirements is examined by the HLT, and several HLT decisions are based on analysis-quality physics objects reconstructed using information from all CMS subdetectors, HLT reliability is critical to the success of the experiment. On rare occasions, typically below a few events per month during data taking, one or more HLT algorithms will experience a processing error while examining a collision event. These events are stored locally for later analysis and are used to improve the reliability of the HLT software.
Roughly 0.5\% of the downtime during collision data taking operations from 2009 until 2012 (including all proton-proton and heavy ion collision operations at any center-of-mass energy) was due to problems with the HLT; 95\% of this downtime was due to a single incident when corrupt detector input resulted in the HLT failure for every incoming collision event. Prior and subsequent data taking using the same HLT menu resulted in no loss of data. Ignoring this incident, the HLT was responsible for a negligible loss of collision data.
\section{Summary}
\label{sec:summary}
The CMS trigger system consists of two levels: an L1 custom
hardware trigger, and an HLT system with custom
\textsc{c++} software routines running on commodity CPUs.
The L1 trigger takes input from the calorimeters and the muon system
to select the events of physics interest. To do this, it uses
identified leptons, photons, and jet candidates, as well as
event-level information such as missing transverse energy. Trigger
primitives are generated on the front-ends of the subdetectors and then
processed in several steps before a final decision is rendered in the
global trigger.
The L1 calorimeter trigger comprises two stages, a regional
calorimeter trigger (RCT) and a global calorimeter trigger (GCT). The
RCT processes the regional information in parallel and sends as output
$\Pe/\Pgg$ candidates and regional \ET sums. The GCT sorts the
$\Pe/\Pgg$ candidates further, finds jets (classified as central,
forward, and tau) using the \ET sums, and calculates global quantities
such as \MET.
Each of the three muon detector systems in CMS participates in the L1
muon trigger to ensure good coverage and redundancy. For the DT and
CSC systems, the front-end trigger electronics identifies track
segments from the hit information registered in multiple detector
planes. Track finders apply pattern recognition algorithms which
identify muon candidates, and measure their momenta from the amount
of their bending in the magnetic field of the return yoke between measurement
locations. The RPC hits are directly sent from the
front-end electronics to pattern-comparator logic boards which
identify muon candidates. The global muon trigger merges muon
candidates, applies quality criteria, and sends the muon candidates to
the global trigger.
The global trigger implements the menu of selection requirements
applied to all objects. A maximum of 128 separate selections can be
implemented simultaneously. Overall, the L1 decision is rendered within
$4\mus$ after the collision. At most 100\unit{kHz} of events are sent
to the HLT for processing.
The HLT is implemented in software, and further refines the purity of
the physics objects. Events are selected for offline storage at an
average rate of 400\unit{Hz}. The HLT event selection is performed in a
similar way to that used in the offline processing. For each event,
objects such as leptons, photons, and jets are reconstructed and
identification criteria are applied in order to select only those
events which are of possible interest for data analysis. The HLT
hardware consists of a processor farm using commodity PCs running
Scientific Linux. The subunits are called builder and filter units. In
the builder units, event fragments are assembled to complete
events. Filter units then unpack the raw data and perform event
reconstruction and trigger filtering. Both the L1 triggers and HLT
include prescaling of events passing defined selection criteria.
The performance of the CMS trigger system has been evaluated in two
stages. First, the performance of the L1 and HLT systems has been
evaluated for individual trigger objects such as electrons, muons,
photons, or jets, using tag-and-probe techniques. Most of the
measurements considered come from the 2012 CMS data set, where data
have been collected at $\sqrt{s}=8$\TeV. Performance has been
evaluated in terms of efficiency with respect to offline quantities
and to the appropriate trigger rate. Both L1 and HLT performance have
been studied, showing the high selection efficiency of the CMS
trigger system. Second, the performance of the trigger system has
been demonstrated by considering key examples across different physics
analyses. In CMS, the HLT decisions are often derived from complex
correlated combinations of single objects such as electrons, muons, or
$\tau$ leptons. The broad range of capabilities of the trigger system
has been shown through examples in Higgs boson, top-quark, and
B physics, as well as in searches for new physics.
The trigger system has been instrumental in the successful collection
of data for physics analyses in Run~1 of the CMS experiment at the
LHC. Efficiencies were measured in data and compared to simulation,
and shown to be high and well-understood. Many physics signals were
collected with high efficiency and flexibility under rapidly-changing
conditions, enabling a diverse and rich physics program, which has led
to hundreds of publications based on the Run~1 data samples.
\begin{acknowledgments}
\hyphenation{Bundes-ministerium Forschungs-gemeinschaft Forschungs-zentren}
\hyphenation{Rachada-pisek}
We congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC and thank the technical and administrative staffs at CERN and at other CMS institutes for their contributions to the success of the CMS effort. In addition, we gratefully acknowledge the computing centers and personnel of the Worldwide LHC Computing Grid for delivering so effectively the computing infrastructure essential to our analyses. Finally, we acknowledge the enduring support for the construction and operation of the LHC and the CMS detector provided by the following funding agencies: the Austrian Federal Ministry of Science, Research and Economy and the Austrian Science Fund; the Belgian Fonds de la Recherche Scientifique, and Fonds voor Wetenschappelijk Onderzoek; the Brazilian Funding Agencies (CNPq, CAPES, FAPERJ, and FAPESP); the Bulgarian Ministry of Education and Science; CERN; the Chinese Academy of Sciences, Ministry of Science and Technology, and National Natural Science Foundation of China; the Colombian Funding Agency (COLCIENCIAS); the Croatian Ministry of Science, Education and Sport, and the Croatian Science Foundation; the Research Promotion Foundation, Cyprus; the Secretariat for Higher Education, Science, Technology and Innovation, Ecuador; the Ministry of Education and Research, Estonian Research Council via IUT23-4 and IUT23-6 and European Regional Development Fund, Estonia; the Academy of Finland, Finnish Ministry of Education and Culture, and Helsinki Institute of Physics; the Institut National de Physique Nucl\'eaire et de Physique des Particules~/~CNRS, and Commissariat \`a l'\'Energie Atomique et aux \'Energies Alternatives~/~CEA, France; the Bundesministerium f\"ur Bildung und Forschung, Deutsche Forschungsgemeinschaft, and Helmholtz-Gemeinschaft Deutscher Forschungszentren, Germany; the General Secretariat for Research and Technology, Greece; the National Scientific Research Foundation, and National Innovation Office, Hungary; the Department of Atomic Energy and the Department of Science and Technology, India; the Institute for Studies in Theoretical Physics and Mathematics, Iran; the Science Foundation, Ireland; the Istituto Nazionale di Fisica Nucleare, Italy; the Ministry of Science, ICT and Future Planning, and National Research Foundation (NRF), Republic of Korea; the Lithuanian Academy of Sciences; the Ministry of Education, and University of Malaya (Malaysia); the Mexican Funding Agencies (BUAP, CINVESTAV, CONACYT, LNS, SEP, and UASLP-FAI); the Ministry of Business, Innovation and Employment, New Zealand; the Pakistan Atomic Energy Commission; the Ministry of Science and Higher Education and the National Science Centre, Poland; the Funda\c{c}\~ao para a Ci\^encia e a Tecnologia, Portugal; JINR, Dubna; the Ministry of Education and Science of the Russian Federation, the Federal Agency of Atomic Energy of the Russian Federation, Russian Academy of Sciences, and the Russian Foundation for Basic Research; the Ministry of Education, Science and Technological Development of Serbia; the Secretar\'{\i}a de Estado de Investigaci\'on, Desarrollo e Innovaci\'on and Programa Consolider-Ingenio 2010, Spain; the Swiss Funding Agencies (ETH Board, ETH Zurich, PSI, SNF, UniZH, Canton Zurich, and SER); the Ministry of Science and Technology, Taipei; the Thailand Center of Excellence in Physics, the Institute for the Promotion of Teaching Science and Technology of Thailand, Special Task Force for Activating Research and the National Science and Technology 
Development Agency of Thailand; the Scientific and Technical Research Council of Turkey, and Turkish Atomic Energy Authority; the National Academy of Sciences of Ukraine, and State Fund for Fundamental Researches, Ukraine; the Science and Technology Facilities Council, UK; the US Department of Energy, and the US National Science Foundation.
Individuals have received support from the Marie-Curie program and the European Research Council and EPLANET (European Union); the Leventis Foundation; the A. P. Sloan Foundation; the Alexander von Humboldt Foundation; the Belgian Federal Science Policy Office; the Fonds pour la Formation \`a la Recherche dans l'Industrie et dans l'Agriculture (FRIA-Belgium); the Agentschap voor Innovatie door Wetenschap en Technologie (IWT-Belgium); the Ministry of Education, Youth and Sports (MEYS) of the Czech Republic; the Council of Science and Industrial Research, India; the HOMING PLUS program of the Foundation for Polish Science, cofinanced from European Union, Regional Development Fund, the Mobility Plus program of the Ministry of Science and Higher Education, the National Science Center (Poland), contracts Harmonia 2014/14/M/ST2/00428, Opus 2013/11/B/ST2/04202, 2014/13/B/ST2/02543 and 2014/15/B/ST2/03998, Sonata-bis 2012/07/E/ST2/01406; the Thalis and Aristeia programs cofinanced by EU-ESF and the Greek NSRF; the National Priorities Research Program by Qatar National Research Fund; the Programa Clar\'in-COFUND del Principado de Asturias; the Rachadapisek Sompot Fund for Postdoctoral Fellowship, Chulalongkorn University and the Chulalongkorn Academic into Its 2nd Century Project Advancement Project (Thailand); and the Welch Foundation, contract C-1845.
\end{acknowledgments}
\section{Introduction}
Gamma-ray bursts (GRBs) are short flashes of $\gamma$-rays from deep space. Based on the duration distribution of the prompt emission, GRBs can be divided into two sub-groups \citep{1993ApJ...413L.101K}. One group has a typical duration of $\sim 20~{\rm s}$. The other has a much shorter duration, $<2$ s, centered at $\sim 0.1$ s. The long GRBs are usually related to the death of massive stars, and one smoking-gun signature of such a scenario is the bright supernova emission identified in most nearby long GRBs \citep{1993ApJ...405..273W,1999ApJ...524..262M,2003Natur.423..847H}. The short GRBs are rarer by a factor of about four according to BATSE, or about ten according to {\it Swift} observations. The rate depends strongly on the energy bands and on the sensitivities of the instruments (e.g., Qin et al. 2013, Zhang et al. 2012). The understanding of such a group of events had not been revolutionized until 2005, when {\it Swift} and {\it HETE-II} localized such events and the long-lasting multi-wavelength afterglow emission was detected \citep{2005Natur.437..851G,2005Natur.437..845F}.
Although not as abundant as those of long GRBs, the afterglow data of short GRBs are valuable for revealing the physical processes taking place in the central engine. For example, the peculiar X-ray emission, such as the X-ray flares and the X-ray plateaus followed by abrupt quick declines \citep{2005Natur.437..855V,2005Natur.438..994B,2010MNRAS.409..531R,2011MNRAS.417.2144M}, was found to be inconsistent with the so-called standard forward-shock afterglow model. This emission instead implies prolonged activity of the central engines \citep{2005ApJ...635L.129F,2006Sci...311.1127D,2006ApJ...639..363P,2006ChJAA...6..513G,2006MNRAS.370L..61P,2008MNRAS.385.1455M,
2013MNRAS.430.1061R}, which is possibly associated with non-negligible gravitational wave radiation \citep{Fan2013PRD,2013ApJ...763L..22Z}. If these X-ray plateaus are indeed powered by the internal energy dissipation of the supra-massive neutron star (SMNS) wind and the sharp declines mark the collapse of the SMNSs \citep{2006ChJAA...6..513G,2013MNRAS.430.1061R}, then the maximum gravitational mass of a non-rotating neutron star can be estimated to be $\sim 2.3~M_{\odot}$ \citep{Fan2013PRD,LiX2014}. Moreover, the detection of a weak infrared bump in GRB 130603B strongly favors a physical origin in the merger of a compact object binary \citep{2013Natur.500..547T,2013ApJ...774L..23B,2013ApJ...775L..19J}. To account for the shallowly decaying X-ray emission, a millisecond magnetar central engine, still active at $\sim 10^{3}$ s after the short burst, is needed, and the progenitor stars are likely to be double neutron stars \citep{2013ApJ...779L..25F,2014A&A...563A..62D,2014ApJ...785...74L,2015arXiv150102589L}.
Motivated by this progress, in this work we examine the physical origin of the X-ray and optical emission of short GRB 130912A, which is characterized by a long-lasting optical plateau. In section 2 we introduce the observations of GRB 130912A. In section 3 we interpret the data. We summarize our results with some discussion in section 4.
\section{Observations}
\subsection{Swift observations}
The {\it Swift} Burst Alert Telescope (BAT) detected GRB 130912A at 08:34:57 UT on September 12, 2013 \citep{2013GCN..15212...1D}.
The $T_{90}$ duration is $0.28\pm0.03$ seconds. The light curve shows two overlapping peaks.
The time-averaged spectrum of the first 0.32 seconds is best fitted by a simple power law, with photon index $\Gamma_{\gamma} = 1.20 \pm 0.20$.
The total fluence is $(1.7 \pm 0.2) \times 10^{-7}$ erg cm$^{-2}$, and the peak photon flux is $2.2 \pm 0.3$ ph cm$^{-2}$ s$^{-1}$ (Krimm et al. 2013).
All values are in the 15--150 keV energy band.
There is no evidence of extended emission detected in the BAT energy range \citep{2010ApJ...717..411N}, which makes it an unambiguous short GRB.
The Swift X-ray Telescope (XRT) began to observe the field at 08:36:31.7 UT, 93.9 seconds after the BAT trigger. Kennea et al. (2013)
analyzed the initial XRT data and reported that the light curve can be modelled with a power-law decay with a decay index of $\alpha=
1.20\pm 0.04$. The spectrum formed from the PC mode data can be fitted with an absorbed power law with a photon
spectral index of $\Gamma_{X} = 1.57^{+0.20}_{-0.16}$ and $N_{\rm H}=1.49^{+0.69}_{-0.25} \times
10^{21}$ cm$^{-2}$, consistent with the Galactic value of $N_{\rm H}=1.2 \times 10^{21}$ cm$^{-2}$ (Kalberla et al. 2005).
The {\it Swift} UVOT took a 150-second finding chart exposure starting 98 seconds after the BAT trigger \citep{2013GCN..15212...1D}.
No optical afterglow within the enhanced XRT position \citep{2013GCN..15217...1B} was detected in this initial or the subsequent exposures \citep{2013GCN..15229...1C}.
\subsection{Ground-based optical observations}
The field of GRB 130912A was observed at early times by several ground-based optical telescopes, but only two detected the afterglow. GROND started observing at 08:50 UT on September 12, 2013 (16 min after the GRB trigger) and found the afterglow at coordinates RA (J2000) = 03:10:22.23 and Dec. (J2000) = 13:59:48.7, within the Swift XRT error box. Over the first hour-long observation in the $r'$ filter, the afterglow seemed to be constant, with a magnitude (here and throughout this paper, magnitudes are in the AB system) of $r'=22.0\pm0.2$ \citep{2013GCN..15214...1T}.
Another telescope, P60, which started observing at 08:48 UT on September 12, 2013 (13 min after the Swift trigger), also in the $r'$ band, confirmed the GROND observation that the source was almost unchanged in the first hour. This led to the suspicion that the detected source was the host galaxy of GRB 130912A \citep{2013GCN..15222...1C};
however, when RATIR observed the field again, starting on 2013 September 13.30, the source had faded below the limiting magnitude of 23.89 ($3\sigma$) in the SDSS $r$ band, so the previously detected source must indeed have been the afterglow \citep{2013GCN..15226...1B}. According to the RATIR observation, the host galaxy is fainter than 23.89, 23.79, 22.86, 22.38, 22.30, and 21.78 magnitudes in the $r$, $i$, $Z$, $Y$, $J$, and $H$ bands, respectively \citep{2013GCN..15226...1B}. None of these ground-based observations is publicly available in the form of raw data. We collected the public GCN data as shown in Table 1, and loose limits were discarded. The magnitudes used in the following analysis were corrected for Galactic extinction: assuming $E(B-V)=0.28$ \citep{1998ApJ...500..525S} and a ratio of total-to-selective extinction $R_{V}=3.1$, the Galactic extinction in the $r'$ band is $A_{r'}=0.78$.
\begin{table*}
\begin{center}
\caption{Optical observations of the field of GRB 130912A}
\label{table:1}
\centering
\begin{tabular}{c c c c c c}
\hline\hline
Telescope & Data start & Observation time after trigger & Filter$^{\rm a}$ & Magnitude$^{\rm b}$ & Flux \\
& (UT) & (s) & & & (erg~cm$^{-2}$s$^{-1}$Hz$^{-1}$) \\
\hline
GROND & 08:50/12/09/2013 & 960 & GROND $r'$ & 22.0 $\pm$ 0.20$^{(1)}$ & $ 5.75\times10^{-29} $ \\
P60 & 08:58/12/09/2013 & 1380 & GROND $r'$ & 21.77 $\pm$ 0.20$^{(2)}$ & $ 7.11\times10^{-29} $ \\
P60 & 09:28/12/09/2013 & 3180 & GROND $r'$ & 22.09 $\pm$ 0.25$^{(2)}$ & $ 5.29\times10^{-29} $ \\
RATIR & 07:33/13/09/2013 & $\sim8.1\times10^{4}$ & SDSS $r$ & $>$ 23.89$^{(3)}$ & $<1.01\times10^{-29} $\\
\hline
\end{tabular}
\end{center}
a. The difference between GROND $r'$ and SDSS $r$ is less than 0.04 mag, assuming a power-law spectrum with an index $\beta$ between 1 and $-2$. Here the flux of the afterglow is expressed as $F_{\nu}\propto t^{\alpha}{\nu}^{\beta}$.
b. The flux is reported with the $1\sigma$ statistical error, and the upper limit is at the $3\sigma$ confidence level.
References:
$^{(1)}$ Tanga et al. (2013),
$^{(2)}$ Cenko et al. (2013),
$^{(3)}$ Butler et al. (2013).
\end{table*}
\section{Interpreting the optical and X-ray afterglow}
In the {\it Swift-}era, long-lasting plateau-like X-ray emission (i.e., the so-called shallow decay phase) was detected in a good fraction of GRB afterglows (e.g., Zhang et al. 2006; Nousek et al. 2006). The leading interpretation of the long-lasting plateau-like X-ray emission is the energy injection model, which is valid if the central engine works continually or, alternatively, if the bulk Lorentz factor of the outflow material has a wide distribution (e.g., Dai \& Lu 1998; Zhang \& M\'esz\'aros 2001; Fan \& Xu 2006). A general prediction of the energy injection model is that the temporal behaviours of the multi-wavelength afterglow emission will be shaped simultaneously (Fan \& Piran 2006). However, the X-ray and optical observations of GRB afterglows usually do not track each other. The X-ray afterglows usually display an early shallow decline, which is often not observed in the optical (e.g., Fan \& Piran 2006; Panaitescu et al. 2006). The physical reason for such a puzzle is still not clear; for example, the widely adopted energy injection model for the X-ray decline is found to be unable to account for the optical data.
Recently, another unusual situation has emerged. As shown in Fig.~1, the X-ray afterglow emission of GRB 130912A can be fitted by a single power law (Beardmore et al. 2013), as found for most GRB X-ray emission. However, the optical emission is plateau-like on a very long timescale of $\sim 3.2\times 10^{3}$ s, a behaviour that is rarely observed in optical afterglows. A long-lasting optical plateau was also detected in GRB 090510 (De Pasquale et al. 2010; Gao et al. 2009). The duration of the optical plateau of GRB 090510, however, is just about half that of GRB 130912A. The optical plateau of GRB 130912A is thus very likely the {\it longest} one detected so far in short GRB afterglows. Indeed, the lack of optical variability in the first hour after the trigger of GRB 130912A motivated the idea that this emission was from the host galaxy (Tanga et al. 2013; Cenko et al. 2013). The afterglow nature of the optical emission was not established until significant fading was identified about one day after the burst (Butler et al. 2013). The main purpose of this work is to interpret the ``unusual'' optical plateau and the X-ray emission self-consistently.
\begin{figure*}[t]
\includegraphics[width=10.0cm,angle=0]{f1.eps}
\caption{X-ray (square) and optical (circle) light curves of GRB 130912A and the theoretical
model curves. The X-ray data are from the UK Swift Science Data Center (Evans et al. 2009) and transformed to 1.732 keV. In order to reduce the influence of the error of the spectral index on the flux calculation, we used the geometric mean of the lower and upper boundaries of the corresponding X-ray energy band $0.3-10$ keV. Proper corrections for extinction in the Milky Way Galaxy have been made. The solid and dashed curves are the theoretical optical and X-ray afterglow prediction with a forward shock.}
\label{fig:Num}
\end{figure*}
Because nothing is unusual in the X-ray band of GRB 130912A, we conclude that the energy injection model does not work for the current data, as discussed above. It is widely known that the forward-shock emission is governed by a few physical parameters and can be parameterized as (e.g., Sari et al. 1998; Yost et al. 2003; Fan \& Piran 2006)
\begin{equation}
F_{\nu,{\rm max}}=6.6~{\rm mJy}~\Big({1+z\over 2}\Big) D_{L,28.34}^{-2}
\epsilon_{B,-2}^{1/2}E_{k,53}n_0^{1/2}, \label{eq:F_nu,max}
\end{equation}
\begin{equation}
\nu_m =2.4\times 10^{16}~{\rm Hz}~E_{\rm k,53}^{1/2}\epsilon_{\rm
B,-2}^{1/2}\epsilon_{e,-1}^2 C_p^2 \Big({1+z \over 2}\Big)^{1/2}
t_{d,-3}^{-3/2},\label{eq:nu_m}
\end{equation}
\begin{equation}
\nu_c = 4.4\times 10^{16}~{\rm Hz}~E_{\rm k,
53}^{-1/2}\epsilon_{B,-2}^{-3/2}n_0^{-1}
\Big({1+z \over 2}\Big)^{-1/2}t_{d,-3}^{-1/2}{1\over (1+Y)^2},
\label{eq:nu_c}
\end{equation}
where $C_p \equiv 13(p-2)/[3(p-1)]$, $\epsilon_{\rm e}$ ($\epsilon_{\rm B}$) is the fraction of shock energy given to the electrons (magnetic field), $t_{d}$ is the time in days, the Compton parameter $Y\sim
(-1+\sqrt{1+4\eta \epsilon_e/\epsilon_B})/2$, $\eta \sim \min\{1,
(\nu_m/\bar{\nu}_c)^{(p-2)/2} \}$, and $\bar{\nu}_c=(1+Y)^2 \nu_c$. Here and throughout this text, the convention $Q_{\rm x}=Q/10^{\rm x}$ has been adopted.
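As an illustration, the following sketch evaluates eqs.~(\ref{eq:F_nu,max})--(\ref{eq:nu_c}) for a given parameter set, treating the implicit dependence of $Y$ on $\nu_m/\bar{\nu}_c$ with a fixed-point iteration. The parameter values echo the numerical fit quoted later in this section, while the luminosity distance is only an order-of-magnitude assumption in the adopted units.
\begin{verbatim}
# Illustrative sketch: characteristic synchrotron quantities of
# eqs. (1)-(3), with the Compton parameter Y solved by iteration.
import math

def shock_spectrum(E53, n0, eps_e, eps_B, p, z, DL2834, t_days):
    Cp = 13.0 * (p - 2.0) / (3.0 * (p - 1.0))
    td3 = t_days / 1e-3            # t in units of 10^-3 days
    ee1, eB2 = eps_e / 0.1, eps_B / 0.01
    zf = (1.0 + z) / 2.0
    nu_m = 2.4e16 * math.sqrt(E53 * eB2) * ee1**2 * Cp**2 \
           * math.sqrt(zf) * td3**-1.5                         # eq. (2)
    Y = 0.0
    for _ in range(50):            # iterate Y -> nu_c -> eta -> Y
        nu_c = 4.4e16 / (math.sqrt(E53) * eB2**1.5 * n0
                         * math.sqrt(zf * td3) * (1.0 + Y)**2)  # eq. (3)
        eta = min(1.0, (nu_m / ((1.0 + Y)**2 * nu_c))**((p - 2.0) / 2.0))
        Y = 0.5 * (-1.0 + math.sqrt(1.0 + 4.0 * eta * eps_e / eps_B))
    F_max = 6.6 * zf * math.sqrt(eB2) * E53 * math.sqrt(n0) / DL2834**2
    return nu_m, nu_c, F_max, Y    # Hz, Hz, mJy, dimensionless

# DL2834 ~ 0.6 is an assumed luminosity distance for z = 0.72
print(shock_spectrum(E53=0.017, n0=0.002, eps_e=0.37, eps_B=0.16,
                     p=2.3, z=0.72, DL2834=0.6, t_days=3.2e3 / 86400.0))
\end{verbatim}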
At $t\sim 3.2\times 10^{3}~{\rm s}$, if we have $\min\{\nu_{\rm c},\nu_{\rm m}\}=\nu_{\rm c}\gtrsim 5\times 10^{14}~{\rm Hz}$, the temporal behaviour of the optical emission would be $F_{\nu_{\rm opt}}\propto F_{\nu,{\rm max}}\nu_{\rm c}^{-1/3} \propto t^{1/6}$, and the X-ray emission light curve should be $F_{\nu_{\rm x}}\propto F_{\nu,{\rm max}}\nu_{\rm c}^{1/2} \propto t^{-1/4}$ in the case of $\nu_{\rm c}<\nu_{\rm x}<\nu_{\rm m}$ or, alternatively, $F_{\nu_{\rm x}}\propto F_{\nu,{\rm max}}\nu_{\rm c}^{1/2}\nu_{\rm m}^{(p-1)/2} \propto t^{-(3p-2)/4}$ in the case of $\nu_{\rm c}<\nu_{\rm m}<\nu_{\rm x}$. While the temporal behaviour of optical emission agrees nicely with the data, the X-ray emission does not. The case of $\nu_{\rm c}<\nu_{\rm x}<\nu_{\rm m}$ is clearly at odds with the data. The case of $\nu_{\rm c}<\nu_{\rm m}<\nu_{\rm x}$ on the timescale of $100~{\rm s}<t<3.2\times 10^{3}~{\rm s}$ is also inconsistent with the X-ray spectrum $F_\nu \propto \nu^{-0.50\pm 0.16}$ (http://www.swift.ac.uk/xrt$_{-}$spectra/).
If we, instead, have $\min\{\nu_{\rm c},\nu_{\rm m}\}=\nu_{\rm m}\gtrsim 5\times 10^{14}~{\rm Hz}$ at $t\sim 3.2\times 10^{3}~{\rm s}$, the temporal behaviour of the optical emission would be $F_{\nu_{\rm opt}}\propto F_{\nu,{\rm max}}\nu_{\rm m}^{-1/3} \propto t^{1/2}$, and the X-ray emission light curve should be $F_{\nu_{\rm x}}\propto F_{\nu,{\rm max}}\nu_{\rm m}^{(p-1)/2} \propto t^{-3(p-1)/4}$ in the case of $\nu_{\rm m}<\nu_{\rm x}<\nu_{\rm c}$ or, alternatively, $F_{\nu_{\rm x}}\propto F_{\nu,{\rm max}}\nu_{\rm c}^{1/2}\nu_{\rm m}^{(p-1)/2} \propto t^{-(3p-2)/4}$ in the case of $\nu_{\rm m}<\nu_{\rm c}<\nu_{\rm x}$. Now the temporal behaviours of both optical and X-ray emission are consistent with the data for $p\sim 2.3$, as are the spectral behaviours. Since an optical plateau lasting a few thousand seconds is rare in the short GRB afterglow data, it is highly necessary to examine whether the required forward-shock physical parameters are reasonable or not.
To interpret the optical and X-ray data self-consistently, $\nu_{\rm c}(t\sim 1.0\times 10^{4}~{\rm s})\approx 7.254\times 10^{16}$ Hz, $\nu_{\rm m}(t\sim 3.2 \times 10^{3}~{\rm s})\approx 5\times 10^{14}$ Hz, and $F_{\nu,\rm max}\sim 0.02$ mJy are needed. Following Zhang et al. (2015), we have
\begin{equation}
\epsilon_{\rm B,-2}^{1/2}E_{\rm k,53}n_0^{1/2}\approx a,
\label{eq:4}
\end{equation}
\begin{equation}
E_{\rm k,53}^{1/2}\epsilon_{\rm B,-2}^{1/2}\epsilon_{\rm e,-1}^2 \approx b,
\label{eq:5}
\end{equation}
\begin{equation}
E_{\rm k,53}^{-1/2}\epsilon_{\rm B,-2}^{-3/2}n_0^{-1}{(1+Y)^{-2}}\approx c,
\label{eq:6}
\end{equation}
where
$a=\frac{1}{6.6}F_{\nu,{\rm max}} D_{L,28.34}^{2}\Big({1+z\over 2}\Big)^{-1}$,
$b=\frac{1}{2.4}\times 10^{-16}\nu_m C_p^{-2} \Big({1+z \over 2}\Big)^{-1/2} t_{d,-3}^{3/2}$, and
$c=\frac{1}{4.4}\times 10^{-16}\nu_c\Big({1+z \over 2}\Big)^{1/2}t_{d,-3}^{1/2}$.
Now we have three relations but four free parameters (i.e., $E_{\rm k},~\epsilon_{\rm e},~\epsilon_{\rm B},~n_0$), which implies that these parameters cannot be uniquely determined. However, $(E_{\rm k},~\epsilon_{\rm e},~\epsilon_{\rm B})$ can be expressed as the functions of $n_0$, and it is possible to reasonably constrain the range of $n_0$, as found in Zhang et al. (2015).
In the case of $Y\leq 1$ (i.e., the synchrotron-self Compton cooling is unimportant), $(1+Y)^{-2}$ can be ignored, and we have (see also Zhang et al. 2015)
\begin{equation}
\epsilon_{B,-2}=a^{-\frac{2}{5}}c^{-\frac{4}{5}} n_{0}^{-\frac{3}{5}},
\end{equation}
\begin{equation}
\epsilon_{e,-1}=a^{-\frac{1}{5}}b^{\frac{1}{2}}c^{\frac{1}{10}} n_{0}^{\frac{1}{5}},
\end{equation}
\begin{equation}
E_{k,53}=a^{\frac{6}{5}}c^{\frac{2}{5}} n_{0}^{-\frac{1}{5}}.
\end{equation}
These three variables weakly depend on $n_0$.
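A compact sketch of this solution is given below; the coefficients $a$, $b$, and $c$ are placeholders for the values derived from the observed $F_{\nu,{\rm max}}$, $\nu_m$, and $\nu_c$, and the scan simply locates the largest $n_0$ compatible with $\epsilon_{\rm e}\leq 0.4$.
\begin{verbatim}
# Illustrative sketch: the Y <= 1 solutions, eqs. (7)-(9), scanned over n0.
import numpy as np

def solve(a, b, c, n0):
    eps_B2 = a**-0.4 * c**-0.8 * n0**-0.6           # eq. (7), units of 10^-2
    eps_e1 = a**-0.2 * b**0.5 * c**0.1 * n0**0.2    # eq. (8), units of 10^-1
    E_k53 = a**1.2 * c**0.4 * n0**-0.2              # eq. (9)
    return eps_B2, eps_e1, E_k53

a, b, c = 0.03, 0.5, 2.0       # placeholder coefficients, not the paper's values
for n0 in np.logspace(-5, 0, 26):
    eps_B2, eps_e1, E_k53 = solve(a, b, c, n0)
    if 0.1 * eps_e1 > 0.4:     # eps_e above the equipartition-motivated cap
        print("n0 >", n0, "cm^-3 is unphysical: eps_e =", 0.1 * eps_e1)
        break
\end{verbatim}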
In the case of $Y\geq 1$ (i.e., the synchrotron-self Compton radiation is important), we have $(1+Y)^{-2}\approx Y^{-2}$ and then
\begin{equation}
\epsilon_{B,-2}=(10cd)^{-\frac{8}{5p-9}}a^{-\frac{2(p-1)}{5p-9}}b^{-\frac{4(p-1)}{5p-9}} n_{0}^{-\frac{3p+1}{5p-9}},
\end{equation}
\begin{equation}
\epsilon_{e,-1}=(10cd)^{\frac{1}{5p-9}}a^{-\frac{p-2}{5p-9}}b^{\frac{3p-5}{5p-9}} n_{0}^{\frac{p-1}{5p-9}},
\end{equation}
\begin{equation}
E_{k,53}=(10cd)^{\frac{4}{5p-9}}a^{\frac{6p-10}{5p-9}}b^{-\frac{2(p-1)}{5p-9}} n_{0}^{-\frac{p-5}{5p-9}},
\end{equation}
where $d= \big (\frac{2.4}{4.4} C_{p}^{2}(\frac{1+z}{2})t_{d,-3}^{-1} \big )^{\frac{p-2}{2}}$.
Compared with the case of $Y\leq 1$, $\epsilon_{\rm B}$ and $E_{k}$ depend strongly on $n_0$.
\begin{figure*}[t]
\includegraphics[width=10.0cm,angle=0]{f2.eps}
\caption{$(E_{\rm k},~\epsilon_{\rm e},~\epsilon_{\rm B})$ as functions of $n$ obtained in solving eqs.(\ref{eq:4}-\ref{eq:6}). The solid, dotted, and dashed lines represent $\epsilon_{\rm B}$, $\epsilon_{\rm e}$, and $ E_{\rm k,53}$, respectively. The thin black line (i.e., ${\rm Value}=0.4$) is the {\it reasonable} upper limit of $\epsilon_{B}$ and $\epsilon_{e}$, above which the solution is unphysical.}
\label{fig:Ana}
\end{figure*}
The redshift $z$ is unknown, and we assume that it lies in the reasonable range $0.1\leq z\leq 1.4$. For $z<0.4$, however, in solving eqs.~(\ref{eq:4}-\ref{eq:6}) we do not find any reasonable values of the parameters for any given $n_0$, while solutions are obtainable for larger $z$. In Fig.~\ref{fig:Ana} we present the physical parameters $(E_{\rm k},~\epsilon_{\rm e},~\epsilon_{\rm B})$ as functions of $n_0$ for $z=(0.4,~0.6,~0.8,~1.0,~1.2,~1.4)$. As already shown in the analytical approach, both $E_{\rm k}$ and $\epsilon_{\rm B}$ evolve quickly with $n_0$, while the dependence of $\epsilon_{\rm e}$ on $n_0$ is rather weak. One can also find from the figure the very interesting constraint that $n_0\leq 3\times 10^{-3}~{\rm cm^{-3}}$ for a {\it reasonable} $\epsilon_{\rm e}\leq 0.4$ (i.e., not much larger than the equipartition value $\sim 1/3$), even though the afterglow physical parameters cannot be uniquely determined. GRB 130912A was therefore born in a very low-density medium, consistent with the compact-object merger model.
To better show whether the afterglow model can indeed reasonably account for the data, in Fig.~\ref{fig:Num} we numerically fit the optical and X-ray data of GRB 130912A. The numerical calculation code was developed by Fan \& Piran (2006) and \citet{2006ApJ...642..354Z}. In it, (i) the dynamical evolution of the outflow follows the formulation of \citet{2000ApJ...543...90H}, which is valid
in both the relativistic and non-relativistic phases; (ii) the energy distribution of the shock-accelerated electrons is calculated
by solving the continuity equation with the power-law source function $Q\propto \gamma_{\rm e}^{-p}$, normalized by a local injection rate
\citep{2000ApJ...529..151M}; and (iii) the cooling of the electrons due to both synchrotron and synchrotron-self Compton radiation is taken into account.
Assuming $z=0.72$ (following \citet{2013MNRAS.430.1061R}, we adopt the average redshift of short GRBs), the fit parameters are $(E_{\rm k},~n_0,~\epsilon_{\rm e},~\epsilon_{\rm B},~p,~\theta_{\rm j})=(1.7\times 10^{51}~{\rm erg},~0.002,~0.37,~0.16,~2.3,~0.03)$, where $\theta_{\rm j}$ is the half-opening angle of the GRB ejecta. An isotropic fireball is found to be unable to reproduce the data. The inferred $\epsilon_{\rm e}$ and $\epsilon_{\rm B}$ are at the high end of the distribution of the shock parameters of short GRB afterglows (e.g., Soderberg et al. 2006; De Pasquale et al. 2010), which is not unexpected since the optical afterglow plateau of GRB 130912A has the longest duration ever detected in short events. We would like to point out that the above fit parameters are for $A_{r'}=0.78$, which is very high. If $A_{r'}$ is intrinsically smaller, $F_{\nu_{\rm max}}$ and hence $a$ are lowered accordingly. As shown in eqs.~(7-9), or alternatively eqs.~(10-12), $\epsilon_{\rm B}$ and $\epsilon_{\rm e}$ would increase, while $E_{\rm k}$ would decrease. The contrary holds in the case of a larger $A_{r'}$.
\section{Summary and conclusions}
The most remarkable feature of the short burst GRB 130912A is an optical plateau lasting about $4000$ s, the longest seen in current short GRB observations and about twice as long as that of GRB 090510. In this work we examined whether any ``unusual'' information can be extracted from the afterglow data of GRB 130912A. Though the energy injection model has been widely adopted to interpret the shallowly decaying afterglow emission of long and short GRBs (see Zhang et al. 2006; Nousek et al. 2006, and the references therein), it was found to be unable to account for the X-ray and optical data of GRB 130912A self-consistently. Instead, the canonical afterglow emission of ejecta with an opening angle $\theta_{\rm j}\sim 0.03$ can reasonably reproduce the data. The circum-burst medium is found to be ISM-like with a very low density $\sim 10^{-3}~{\rm cm^{-3}}$, consistent with the model of a merger of binary compact objects (either double neutron stars or a neutron star$-$black hole binary). Significant fractions of the energy of the forward shock were used to accelerate the non-thermal electrons and to amplify the magnetic fields (i.e., $\epsilon_{\rm e}\sim 0.37$ and $\epsilon_{\rm B}\sim 0.16$, respectively), much larger than those inferred in most short burst afterglow modelling, which can explain why long-lasting optical afterglow plateaus are rare in short GRBs.
{\it Acknowledgements.} We thank the anonymous referee for helpful comments and Dr. Y. Z. Fan for stimulating discussion. This work made use of data supplied by the UK Swift Science Data Centre at the University of Leicester, and is supported in part by 973 Program of China under grant 2014CB845800, National Natural Science Foundation of China under grants 11163003, 11273063, 11103084, 11303098, U1331101, and 11361140349, and the Chinese Academy of Sciences via the Strategic Priority Research Programme (Grant No. XDB09000000). F.-W.Z. also acknowledges the support by the Guangxi Natural Science Foundation (No. 2013GXNSFAA019002), the Open Research Programme of Key Laboratory for the Structure and Evolution of Celestial Objects (OP201207) and the project of outstanding young teachers' training in higher education institutions of Guangxi.
\section{Introduction}
Regionalization is a family of constrained clustering algorithms that define spatially contiguous and homogeneous groups, or regions, in data. Regionalization algorithms have been used in a wide variety of tasks but are especially applicable to grouping geographic communities together. Grouping geographic data is often advantageous due to phenomena such as Tobler's first law of geography: ``everything is related to everything else, but near things are more related than distant things'' \cite{tobler1970computer}. One such geographically grounded data source is a set of social determinants of health (SDOH). According to Healthy People 2030, SDOH are the conditions in the environments where people are born, live, learn, work, play, worship, and age that affect a wide range of health, functioning, and quality-of-life outcomes and risks. For this paper, we look at community risks in five primary SDOH domains: Economic Climate, Food Landscape, Housing Environment, Health Literacy, and Transportation Network. SDOH have long been considered a good indicator of overall health in communities and, in some cases, a good predictor of disease or hospital admissions \cite{healthLitHosptial}. This paper aims to quantitatively compare regionalization methods on large-scale real-world SDOH data and to analyze the advantages and disadvantages of each method.
SDOH can be defined at a community level, and while they are unalterable in the short term, mitigation techniques can be used to improve the SDOH risk of a given community, especially in areas of high risk. Healthcare companies and governments alike stand to benefit from identifying more precisely delineated areas of SDOH risk. For example, healthcare companies can use such information to help prevent diabetes by buying food vouchers for areas of high food-related social risk. Governments could use this information to provide COVID-19 tests to areas with higher financial, housing, or transportation risk. Currently, resources are commonly distributed to communities by zones that were not built to separate communities by common SDOH risk, such as block groups, census tracts, counties, or states. Viewing SDOH through the lens of health-agnostic zones can break apart or drown out areas of high SDOH risk, preventing effective intervention strategies. Ideally, a fast and simple clustering algorithm such as K-Means would be used to construct new zones derived directly from SDOH data. However, K-Means is unconstrained and thus may produce geographically muddled zones in which no real-world intervention could be taken. Providing intervention and relief to high-SDOH-risk regions is most effective if those regions are compact and geographically contiguous, because many mitigation techniques have a geographic radius of impact. Compactness measures the density of a given region or shape. Contiguity pertains to the connectedness of a region; in the geographic sense, a contiguous region is one in which every element shares a border with another element of the same region. In this paper we use regionalization algorithms to automate the creation of geographically contiguous regions of SDOH risk.
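As a concrete illustration of the contiguity constraint, the sketch below runs contiguity-constrained agglomerative clustering on a toy grid; the grid, the adjacency construction, and the choice of library are illustrative assumptions, not the benchmark code used in our experiments.
\begin{verbatim}
# Illustrative sketch: contiguity-constrained clustering, where a sparse
# adjacency (rook-neighbor) matrix restricts merges to bordering units.
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.cluster import AgglomerativeClustering

n = 9                                            # toy 3x3 grid of communities
sdoh = np.random.default_rng(0).random((n, 5))   # 5 SDOH risk domains

# rook adjacency: units sharing an edge on the 3x3 grid are neighbors
edges = [(i, j) for i in range(n) for j in range(n)
         if (abs(i - j) == 1 and i // 3 == j // 3) or abs(i - j) == 3]
rows, cols = zip(*edges)
W = csr_matrix((np.ones(len(edges)), (rows, cols)), shape=(n, n))

model = AgglomerativeClustering(n_clusters=3, connectivity=W, linkage="ward")
labels = model.fit_predict(sdoh)   # regions are contiguous by construction
print(labels)
\end{verbatim}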
Many regionalization methods have been proposed, the most popular of which are Agglomerative Clustering, REDCAP \cite{REDCAP}, AZP \cite{AZP}, SKATER \cite{SKATER}, max-p-regions \cite{maxp} \cite{maxpHeuristic}, and many others \cite{wan2018dasscan} \cite{geoSOM}. Many of these regionalization methods have been compared to one another \cite{quantCompare} \cite{dao2018detecting} and individually stress tested; however, this has only been done on small scale or simulated data. Additionally, no geographic comparisons between the regions generated by different algorithms have been performed. Instead, research in evaluating regionalization methods has focused on the effectiveness of clustering on small artificial data sets, which results in a lack of testing on large scale real world data. For example, many individual papers evaluating a single regionalization approach use at most 1,000 spatial units. A quantitative comparison was run between some popular regionalization methods in \cite{quantCompare}, running tests up to 3,200 spatial units. By far the largest test that has been performed on any regionalization method is REDCAP run on roughly 1 million data points \cite{nlpRegions}. Those experiments, however, did not compare regionalization methods to one another, nor did they provide any timing or scaling information about REDCAP. This paper compares regionalization methods beginning at 8,000 data points and scales over 10 intervals up to 1 million data points.
We use regionalization to identify high risk regions in communities of real data at scales orders of magnitude higher than has been previously tested. Additionally, our experiments use real data from across the United States that can be further matched with health trends to do real prediction in the future. Our study uses time metrics, which have only been used to compare regionalization methods once at much smaller scales \cite{quantCompare}. Additionally, we include memory comparisons and popular geographic metrics taken from gerrymandering studies, both of which have never been used to compare regionalization methods.
\section{Related Work}
Using regionalization algorithms to create spatially contiguous homogeneous neighborhoods is a long-studied problem that has taken the form of many different approaches. Some comparison studies exist under the topic of regionalization but only focus on performance within a single algorithm \cite{justREDCAP}. To our knowledge, there exist three previous papers comparing the performance of a large number of regionalization algorithms \cite{quantCompare,dao2018detecting,folchCompare}. All three deal mostly with simulated data sets and focus on the quality of the regionalization algorithms at a relatively small scale. Our paper, on the other hand, stress tests regionalization algorithms on real data across increasing orders of magnitude while monitoring the quality of the results. Our paper is the first of these comparative papers to monitor memory use as well as geographic compactness. The most relevant of the comparative papers to our study is \cite{quantCompare} due to the types of algorithms and metrics used.
Regionalization methods have previously been tested on case study data \cite{quantCompare, REDCAP, maxpHeuristic, geoSocialData, nlpRegions, guo2018detectingMovement, segUSA, geoSOMClustering}. Most papers, however, only run these case studies on a fixed number of data points \cite{geoSOMClustering, quantCompare}. Furthermore, some papers \cite{segUSA, REDCAP} alternate the number of regions but not the number of data points. Others \cite{nlpRegions} evaluate over both changing numbers of regions and data points but only focus on the performance of a single algorithm. Our paper shows how multiple regionalization algorithms perform against each other when dealing with larger and larger amounts of data which is more relevant to real world applications. Notably, our paper is the only one that compares the performance of a large number of regionalization algorithms over real data.
Regionalization algorithms can be split into two broad camps, spatially implicit and spatially explicit, based on how they treat contiguity within the model \cite{regionDef}. Spatially implicit models treat contiguity within a region more as a strong suggestion than a requirement; a good example would be manually weighting a geographic element higher in K-Means to encourage compact clusters. Spatially explicit models, on the other hand, rigorously enforce spatial constraints; a good example would be agglomerative clustering with strict restrictions that only allow merging clusters that are spatially contiguous. Many studies \cite{dao2018detecting, geoSOMClustering, geoSocialData, geoSOM, kolak2020quantification} use spatially implicit models in their study of regionalization or neighborhood building; however, due to the geographic constraints of SDOH clustering and the aim of creating actionable compact zones, we take a similar approach to \cite{quantCompare} and restrict our regionalization methods to spatially explicit ones. Building off of \cite{quantCompare}, we only use regionalization algorithms that have previously been shown to be time efficient (REDCAP, SKATER, AZP), adding a lightweight baseline (Agglomerative Clustering) and a more mathematically rigorous model (Max-P Heuristic).
There are very few studies on the regionalization of SDOH \cite{kolak2020quantification}. To the best of our knowledge all other studies using regionalization on SDOH have only used spatially implicit regionalization methods without exploring or comparing to any other algorithms. These studies do not acknowledge the downstream benefits of having contiguous and compact regions nor the scalability of the algorithm in use. To the best of our knowledge the performance of spatially explicit regionalization algorithms have never been tested on SDOH data. This paper aims to find the optimal spatially explicit regionalization algorithm to create homogeneous neighborhoods of SDOH across a large variety of scales.
\section{Methods}
\begin{figure}
\centering
\includegraphics[scale=.5]{analysisGraphs/rookvqueen.PNG}
\caption{Comparing rook and queen contiguity. Each square is representative of geographic shapes and their centroids represented as black dots. Graphs are generated by treating centroids as vertices and creating edges between vertices based on the existence of borders between the geographic shapes.}
\label{fig:rookqueen}
\end{figure}
Consider a geographic area tessellated by $n$ hexagons. We represent these hexagons as an undirected graph $G=(V, E)$, where $V=\{v_1, \ldots, v_n\}$ is the set of hexagon centroids treated as vertices and $E$ is the set of edges connecting spatially adjacent hexagons. Each vertex $v_i$ has an associated SDOH data vector $u_i = \langle d_{1}, \ldots, d_{m}\rangle$ containing the $m$ SDOH data factors of interest. Each edge between two vertices is associated with a weight that measures the distance between their data vectors. In this paper we use the Euclidean distance in Equation (1) to calculate the distance between two data points. To formalize this notion, let
\begin{equation}
d(u, v) = \left(\sum_{k=1}^m (u_k-v_k)^2\right)^{\frac{1}{2}},
\label{eq:data_distance}
\end{equation}
where $u, v \in \mathbb{R}^m$ are the SDOH data vectors of factors of interest associated with two vertices $v_i, v_j$.
Spatial adjacency can be defined in two main ways, rook and queen contiguity, as shown in Figure \ref{fig:rookqueen}. A rook contiguity graph creates an edge between two vertices $v_i, v_j \in V$ if the corresponding shapes share a geographic border of nonzero length. Queen contiguity creates an edge between two vertices $v_i, v_j \in V$ if the corresponding shapes share a geographic border of nonzero length or a corner point. For this paper we used queen contiguity to define the edges in $G$.
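For illustration, both notions of contiguity are available in the \texttt{libpysal} Python library, as in the minimal sketch below; the shapefile name is a placeholder, and we assume a recent \texttt{libpysal} release in which \texttt{Queen.from\_dataframe} accepts a GeoDataFrame.
\begin{verbatim}
import geopandas as gpd
from libpysal.weights import Queen, Rook

# Placeholder file of hexagon polygons covering the study area.
gdf = gpd.read_file("hexagons.shp")

w_queen = Queen.from_dataframe(gdf)  # shared border or shared corner
w_rook = Rook.from_dataframe(gdf)    # shared border of nonzero length only
\end{verbatim}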
In this paper we extend the contiguity of real world SDOH data to span areas of missing data. SDOH data is initially represented geographically as fixed-size tessellated hexagons; however, many gaps exist in non-residential areas such as forests or bodies of water. To extend contiguity over these gaps we generate Voronoi diagrams from the geographic centroids of the hexagons. A connected graph using queen contiguity is then generated over the resulting polygons, which we then partition for regionalization.
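A minimal sketch of this gap-bridging step is given below, assuming the hexagon centroids are available as an $(n,2)$ NumPy array and that the installed \texttt{libpysal} version provides \texttt{weights.Voronoi}; the random centroids stand in for the real data.
\begin{verbatim}
import numpy as np
from libpysal import weights

# Placeholder centroids; gaps (water, forests) appear as missing points.
rng = np.random.default_rng(0)
centroids = rng.uniform(0, 100, size=(500, 2))

# Voronoi cells tile the plane without gaps, so adjacency between cells
# extends contiguity across non-residential areas.
w = weights.Voronoi(centroids)
print(w.n, "vertices,", w.n_components, "connected component(s)")
\end{verbatim}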
A partitioning of $G$ into $n$ regions can be defined as $G_1, G_2, \ldots, G_n$ such that $G_1 \cup G_2 \cup \ldots \cup G_n = G$ and $G_i \cap G_j = \emptyset$ for all $i \neq j$. Let each of the partitions $G_k$, $k\in \{1,\ldots,n\}$, be a connected subgraph of $G$, so that for any pair of nodes $v_i, v_j \in G_k$ there exists a path between them within $G_k$. $G_k$ can also be referred to as a cluster or, in our case, a region. This partitioning of $G$ is the formal output of a regionalization algorithm. Regionalization can then be defined as minimizing some objective function over $n$ partitions of $G$. For this paper we minimize the within-region variability defined in Equation (2). Let $\mu (G_k)$ represent the average SDOH data point value in the partition $G_k$ and $x_i$ an SDOH data point associated with some $v_i \in G_k$. Let
\begin{equation}
W(G_1, \ldots, G_n) = \sum_{q=1}^n \sum_{i\in G_{q}} d(x_i, \mu (G_q)).
\label{eq:within_variability}
\end{equation}
By minimizing the objective function $W(G_1, \ldots, G_n)$ in Equation (2) we find regions that group together similar SDOH data points while still enforcing contiguity. It is also worth noting that the between-cluster variation $B(G_1, \ldots, G_n)$ defined in Equation (3) measures the distance between $\mu (G_q)$, the average SDOH data point value in the partition $G_q$, and $\mu (G)$, the average SDOH data point value over the entire graph. Let
\begin{equation}
B(G_1, \ldots, G_n) = \sum_{q=1}^n \sum_{i\in G_{q}} d(\mu (G_q), \mu (G)).
\label{eq:between_variability}
\end{equation}
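For concreteness, Equations (2) and (3) translate directly into a few lines of NumPy; this minimal sketch assumes \texttt{X} holds the SDOH vectors row by row and \texttt{labels} assigns each row to a region.
\begin{verbatim}
import numpy as np

def within_between(X, labels):
    """Within-region variability W (Eq. 2) and between-region
    variability B (Eq. 3) for a given partition."""
    mu_all = X.mean(axis=0)
    W, B = 0.0, 0.0
    for q in np.unique(labels):
        members = X[labels == q]
        mu_q = members.mean(axis=0)
        W += np.linalg.norm(members - mu_q, axis=1).sum()
        B += len(members) * np.linalg.norm(mu_q - mu_all)
    return W, B
\end{verbatim}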
A regionalization algorithm, then, is a way to minimize an objective function such as the one in Equation (2) while adhering to spatial constraints. The regionalization algorithms used in this study are defined in the following sections.
\subsection{Agglomerative Clustering}
Hierarchical clustering is a family of clustering algorithms that cluster data using either a top-down or a bottom-up approach. Agglomerative clustering is the subset of hierarchical clustering that uses the bottom-up approach. Top-down approaches begin with the entire data set in one cluster and recursively split it into two groups according to some optimization function until each data point is in its own cluster. Conversely, bottom-up approaches begin with each data point in its own cluster and successively merge clusters according to some optimization function until only one overarching cluster remains. The process is commonly visualized as a tree called a dendrogram, with the top or root node representing the entire data set in one cluster and the leaves representing each data point as an individual cluster. Travelling from the leaves to the root of the dendrogram, each level represents the merging of two clusters, so the number of clusters corresponds to the level in the tree. A given number of clusters can be obtained by cutting the dendrogram at the level with that many clusters. Agglomerative clustering begins at the leaves, with each data point in its own cluster; the clusters with the "shortest distance apart" are successively merged step by step until the desired number of clusters is reached.
There are four metrics in agglomerative clustering that define the "shortest distance apart" for regions and are used to determine which regions should be merged next. Equations (4-7) respectively define the ward linkage, complete linkage, average linkage, and single linkage distance metrics, where $m_i$ represents the centroid of a region $i$ and $n_i$ the number of data points in region $i$. Ward linkage minimizes the sum of squared differences within regions, essentially minimizing intra-cluster variance, Equation (4). Complete linkage minimizes the maximum distance between regions by only observing the two points between regions which are farthest apart, Equation (5). Average linkage minimizes the average distance over all possible pairs of data points between two regions, Equation (6). Single linkage, somewhat the opposite of complete linkage, minimizes the closest distance between regions by only observing the two closest points between regions, Equation (7). This paper uses ward linkage to achieve the goal of forming regions of communities that have similar levels of SDOH. Let
\begin{equation}
d_{W}(G_i, G_j) =
\sum_{q \in G_i \cup G_j} \lvert\lvert x_q - m_{ G_i \cup G_j}\rvert\rvert ^2 -
\sum_{q \in G_i} \lvert\lvert x_q - m_{ G_i}\rvert\rvert ^2 -
\sum_{q \in G_j} \lvert\lvert x_q - m_{G_j}\rvert\rvert ^2,
\end{equation}
\begin{equation}
d_{CL}(G_i, G_j) = \max_{u \in G_i, v \in G_j} d(u,v),
\end{equation}
\begin{equation}
d_{AL}(G_i, G_j) = \frac{1}{n_i}\cdot \frac{1}{n_j} \sum_{k=1}^{n_i}\sum_{l=1}^{n_j} d(u_k, v_l),
\label{eq:dist_avg_linkage}
\end{equation}
\begin{equation}
d_{SL}(G_i, G_j) = \min_{u \in G_i, v \in G_j} d(u,v).
\end{equation}
In general, performing agglomerative clustering can become very expensive in time and memory because it has to check all possible combinations of regions at every step. When contiguity constraints are enforced, however, the possible combinations of regions shrink dramatically because only connected regions can be merged. This subtle perk of agglomerative clustering has a large impact, because the potential speedup scales with the number of data points used in the clustering.
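In practice, the contiguity-constrained variant corresponds to scikit-learn's \texttt{AgglomerativeClustering} with a sparse \texttt{connectivity} matrix; the sketch below uses placeholder data and a random symmetric adjacency where the real pipeline would pass the queen-contiguity matrix built earlier.
\begin{verbatim}
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(1)
X = rng.random((500, 5))  # 500 units, 5 SDOH factors

# Placeholder symmetric adjacency; in practice use the contiguity weights.
# (sklearn warns and augments the graph if it is disconnected.)
A = sparse_random(500, 500, density=0.01, format="csr", random_state=1)
connectivity = ((A + A.T) > 0).astype(int)

model = AgglomerativeClustering(
    n_clusters=5,
    linkage="ward",              # Eq. (4)
    connectivity=connectivity,   # restricts merges to contiguous clusters
)
labels = model.fit_predict(X)
\end{verbatim}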
\subsection{SKATER}
SKATER, which stands for Spatial 'K'luster Analysis by Tree Edge Removal, was proposed by \cite{SKATER} and uses the power of minimum spanning trees (MST) to create homogeneous contiguous regions. SKATER fundamentally changes the problem of regionalization into one of optimal graph partitioning. Spanning trees are defined as follows: given our graph $G=(V,E)$, a spanning tree is a subset of the edges $E_0 \subseteq E$ such that all the vertices $v\in V$ are connected together without any cycles. An MST is the spanning tree in which the sum of edge weights is minimum amongst all other possible spanning trees. While the underlying optimal partitioning problem is NP-hard, an MST itself can be computed efficiently, and algorithms such as Prim's or Kruskal's are used in SKATER.
SKATER begins with a weighted connectivity graph $G=(V,E)$ where $V$ is our set of data points and $E$ is the set of edges. The weight of each edge in $E$ is inversely proportional to the similarity of SDOH data points between the regions it connects. A MST is then formed over $G$ and edges in order of most dissimilar are pruned iteratively until the desired number of regions or some other stopping rule is reached. It is worth noting that removing any edge in an MST defines a further partition of the graph $G$, thus pruning an edge automatically makes a new region.
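The mechanics of SKATER can be sketched with SciPy as follows. Note that this toy version simply cuts the heaviest remaining MST edges, whereas SKATER proper evaluates each candidate cut against a homogeneity objective before pruning.
\begin{verbatim}
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

def mst_partition(adjacency, X, n_regions):
    """Toy SKATER-like partitioner: weight edges by data dissimilarity,
    build the MST, then cut its (n_regions - 1) heaviest edges."""
    rows, cols = adjacency.nonzero()
    # Epsilon avoids zero weights, which sparse graphs treat as no edge.
    dists = np.linalg.norm(X[rows] - X[cols], axis=1) + 1e-12
    graph = csr_matrix((dists, (rows, cols)), shape=adjacency.shape)
    mst = minimum_spanning_tree(graph).tocoo()
    keep = np.ones(len(mst.data), dtype=bool)
    keep[np.argsort(mst.data)[::-1][: n_regions - 1]] = False
    pruned = csr_matrix((mst.data[keep], (mst.row[keep], mst.col[keep])),
                        shape=adjacency.shape)
    return connected_components(pruned, directed=False)[1]
\end{verbatim}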
\subsection{REDCAP}
Improving upon SKATER, REDCAP \cite{REDCAP} defines a family of 6 algorithms that attempt to improve spanning tree formation by building the tree with agglomerative clustering and then partitioning it to obtain regions. Agglomerative clustering is used to iteratively merge regions as discussed in Section 3.1 until only one region exists, which by definition is a spanning tree. The resulting spanning tree is then partitioned similarly to SKATER in Section 3.2 until the desired number of regions or some other stopping rule is reached. The 6 algorithms in the REDCAP family use either single linkage, complete linkage, or average linkage agglomerative clustering as described in Section 3.1, in combination with either first-order or full-order constraints, to build the spanning tree. All 6 members of the REDCAP family use the same partitioning function on the spanning tree. Note the algorithms in REDCAP are different from pure agglomerative clustering, as there is a further step of creating a spanning tree and pruning it based on an objective function such as the one described in Equation (2).
Order constraints are rules that define which edges are used when performing linkage calculations. An edge can be defined as first-order if it connects two spatial regions according to a predefined contiguity matrix. Using a first-order constraint with agglomerative clustering prunes edges as the clustering runs, only keeping edges that directly connect two regions, affecting the inputs to the linkage calculations. Full-order constraint, however, uses all existing edges between two regions when computing linkage.
For this paper, we used REDCAP with full order complete linkage Equation (5) following previous studies \cite{quantCompare, REDCAP} as well as REDCAP with full order ward linkage Equation (4) to compare against pure agglomerative clustering with ward linkage.
\subsection{AZP}
AZP (Automatic Zoning Procedure) was originally proposed by \cite{AZP} to supplement the creation of zones using census data via a local optimization approach. AZP begins with $n$ data points and aims to turn them into $m$ regions. The $n$ data points are randomly assigned to $m$ regions while adhering to contiguity constraints, and a list of the $m$ regions is formed. A random region $k$ is removed from the list and all data points bordering region $k$ but not in region $k$ are put into a set $s$. Data points are then randomly pulled out of $s$ iteratively, and if swapping a given data point into region $k$ improves the objective function, the point is assigned to region $k$. This process repeats until the set is exhausted for region $k$ and there are no more possible improvements. Another region is then randomly selected, and the process repeats until no further improvements can be made.
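A single simplified pass of this swap loop might look as follows; the sketch uses hypothetical names, assumes integer node identifiers matching the rows of \texttt{X}, relies on NetworkX for the connectivity check, and omits the random initial partition and the outer loop until convergence.
\begin{verbatim}
import numpy as np
import networkx as nx

def objective(X, labels):
    """Within-region variability, Eq. (2)."""
    return sum(np.linalg.norm(X[labels == q] - X[labels == q].mean(axis=0),
                              axis=1).sum() for q in np.unique(labels))

def azp_pass(G, X, labels, rng):
    """One AZP pass: move border units into a region when this improves
    Eq. (2) without fragmenting or emptying the donor region."""
    for k in rng.permutation(np.unique(labels)):
        border = [v for u in G if labels[u] == k
                  for v in G[u] if labels[v] != k]
        for v in rng.permutation(border):
            donor = labels[v]
            rest = [u for u in G if labels[u] == donor and u != v]
            if not rest or not nx.is_connected(G.subgraph(rest)):
                continue  # move would fragment or empty the donor region
            before = objective(X, labels)
            labels[v] = k
            if objective(X, labels) >= before:
                labels[v] = donor  # revert non-improving move
    return labels
\end{verbatim}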
There are two popular variations of AZP proposed by \cite{newAZP}, namely AZP-Simulated Annealing and AZP-Tabu. Due to the findings of \cite{quantCompare} revealing the poor timing and performance of these two variations, we used the original version of the AZP algorithm for this paper.
\subsection{Max-P-Regions}
The max-p-regions problem, introduced by \cite{maxp} and further improved by \cite{maxpHeuristic}, is unique amongst the other methods under comparison as it strictly uses a threshold value to determine the number of regions rather than a set number of clusters. Using a set threshold value allows max-p-regions to depend on the data to aggregate into regions rather than an arbitrary number of clusters. Max-p-regions is broadly composed of three main stages: region growth, enclave assignment, and local search. The first stage, region growth, randomly selects an unassigned seed unit and then adds to its cluster neighboring data points that have not yet been added to a cluster until the minimum threshold is reached or no unassigned neighboring data points can be found. If the region cannot meet the minimum threshold, it is called an enclave and added to the enclave set. Once the data set is broken into regions and enclaves, every enclave is assigned to a neighboring region with the smallest dissimilarity. Lastly, similar to AZP, in local search data points are traded between regions while ensuring the contiguity and threshold constraints are still met until no further improving trades can be made.
Max-p-regions suffers from computational bottlenecks both in memory and run time for large problems due to the algorithm being NP-hard \cite{maxp}. However, due to the popularity and accuracy of this method, we felt it was appropriate to include max-p-regions in this paper for comparison purposes.
\section{Data and Metrics}
\begin{figure}
\centering
\resizebox*{.5\textwidth}{!}{\includegraphics{analysisGraphs/levelSizes.png}}
\caption{The number of data points in each data set "level". These levels incrementally add the data points contained in counties, expanding out in concentric circles of counties. The levels begin with Washington DC and grow until they consist of all of Virginia and Washington DC.} \label{fig:1}
\end{figure}
\subsection{Data}
Geographically, our SDOH data is represented by centroids of hexagons tessellated across the United States, with each hexagon having an associated 5-dimensional SDOH feature vector. The hexagons have a diameter of 200 meters in urban areas and 400 meters in rural areas. Hexagons designated as non-residential zones (streets, lakes, etc.) are removed from the data set. The feature vector associated with each hexagon consists of five SDOH metrics: Economic Climate, Food Landscape, Housing Environment, Health Literacy, and Transportation Network. The hexagon data used in this paper were broken up into 11 geographic "levels" by gradually increasing the number of counties included from Virginia and Washington DC. The geographic levels begin with just Washington DC and expand outward in concentric circles of counties into Virginia at each subsequent level, until the final level, Level 10, consists of all of Virginia and Washington DC. The counties added at each level roughly double the number of hexagons relative to the previous level, as shown in Figure \ref{fig:1}.
A number of different metrics are used in the analysis of the six regionalization methods under comparison. The metrics in use can broadly be split into two categories, unsupervised and geographic. Note that, because this paper uses real world data and the underlying ground truth regions are not known, we are not able to use supervised metrics.
\subsection{Unsupervised Metrics}
We use unsupervised metrics to measure the two data driven goals of regionalization, homogeneity within regions and heterogeneity between regions. That is, we want each region to represent similar SDOH, but different regions to represent different SDOH. We use multiple metrics to evaluate different aspects of this problem, as shown in the list below.
\begin{itemize}
\item Calinski-Harabasz index, Equation \eqref{eq:cal_har}: The CHI proposed by \cite{calinski1974dendrite} defines a ratio between within-cluster homogeneity $W(k)$ as defined in Equation \eqref{eq:within_variability} and between-cluster heterogeneity $B(k)$ Equation \eqref{eq:between_variability}. This index can be interpreted as a compactness score of the regionalization clustering when ignoring geography entirely. The CHI will be high for the regionalization we hope to achieve, one that groups highly similar data points in the same region and has large differences in data points between regions. Otherwise, a low CHI is indicative of clusters with dissimilar data points that are not distinct from one another.
\item Silhouette Coefficient, Equation \eqref{eq:sc}: The silhouette coefficient (SC) is defined as a ratio involving the average intra-cluster distance $a$ and the average nearest-cluster distance $b$. Put another way, $a$ is the average distance between a point and the other points in its cluster, and $b$ is the average distance between a point and the points in the nearest cluster it does not belong to. Since we measure the distance between data vectors as in \eqref{eq:data_distance}, SC helps us interpret how well our regions classify the data. The SC ranges from -1 to 1, where an SC near 1 means data points are well matched to their own cluster and poorly matched to other clusters, while an SC near -1 means the clusters do not match their data points well.
\item Average number of high risk domains (AHR): We define the number of high risk domains in a single SDOH data point as the number of SDOH data factors of interest that are in the top 30\% nationally amongst the other SDOH data factors of interest of its kind. The top 30\% can be determined by binning each SDOH data factor of interest into bins 1 through 5 nationally and treating the 4 and 5 bins as high risk. AHR simply averages the number of high risk domains per data point in the region. A high AHR represents a higher risk region while a low AHR represents a low risk region, providing insight if clusters are separating high and low risk areas well.
\item Sum-Squared Errors, Equation \eqref{eq:sse}: Measures within-region variability by taking the sum of the squared differences between all pairs of points in a region. The higher the SSE, the more variability (more error) there is between SDOH data points in a region. We normalize SSE in this paper by dividing by the number of points per region to compare regions of different sizes and across different data levels.
\end{itemize}
\begin{equation}
CHI(k)=\frac{n-k}{k-1}\cdot \frac{B(k)}{W(k)}
\label{eq:cal_har}
\end{equation}
\begin{equation}
SC = \frac{b-a}{\max(a, b)}
\label{eq:sc}
\end{equation}
\begin{equation}
SSE(G_k) = \sum_{i \in G_k} \sum_{j \in G_k} (x_i-x_j)^2
\label{eq:sse}
\end{equation}
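Both CHI and SC have off-the-shelf implementations in scikit-learn, and AHR reduces to a quintile binning; the sketch below uses synthetic placeholders for the SDOH matrix and the region labels.
\begin{verbatim}
import numpy as np
from sklearn.metrics import calinski_harabasz_score, silhouette_score

rng = np.random.default_rng(0)
X = rng.random((1000, 5))               # 5 SDOH factors per data point
labels = rng.integers(0, 5, size=1000)  # placeholder region labels

chi = calinski_harabasz_score(X, labels)
sc = silhouette_score(X, labels)

# AHR: bin each factor into quintiles (0..4 here) and count the top two
# bins as high risk domains.
edges = np.quantile(X, [0.2, 0.4, 0.6, 0.8], axis=0)
bins = np.stack([np.digitize(X[:, j], edges[:, j])
                 for j in range(X.shape[1])], axis=1)
high = (bins >= 3).sum(axis=1)
ahr = {int(q): high[labels == q].mean() for q in np.unique(labels)}
\end{verbatim}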
\subsection{Geographic Metrics}
Geographic metrics are applied to evaluate how geographically compact and actionable the generated regions are. As discussed previously, geographically compact and contiguous regions are important to regionalizing SDOH, however, this does not guarantee our regions will be compact and uniform. Taking inspiration from gerrymandering studies \cite{sun2021developing} multiple different types of metrics were used to measure the quality of each regionalization methods geographic compactness. The metrics used are listed below.
\begin{itemize}
\item Percent Overlap, Equation \eqref{eq:percent_overlap}: In this metric we fit a concave hull $\alpha$, or alpha shape, to each region $r$ to get a smoothed outline of each generated region. We then calculate the pairwise percentage overlap between all regions and average these percentages. This metric serves as a proxy for how well a regionalization method partitions the given data in a non-overlapping manner with well defined borders.
\item Convex Hull, Equation \eqref{eq:convex_hull}: The convex hull (CH) metric is the ratio between the area of a region $A_R$ and the area of the minimum convex polygon that can enclose the region $A_P$, also known as its convex hull. This ratio is between 0 and 1 where 1 signifies a more compact region.
\item Polsby-Popper, Equation \eqref{eq:pp}: The Polsby-Popper (PP) metric proposed by \cite{polsby1991third} is calculated by taking the ratio between the area of a region $A_R$ and the area of a circle whose circumference is equal to the perimeter of the region, $A_C$. The PP score is between 0 and 1 where a score of 1 represents a more compact district.
\end{itemize}
\begin{equation}
PO = \frac{1}{n^2-n}\cdot \left(\sum_{i=1}^n \sum_{j=1}^n \frac{area(\alpha (r_i) \cap \alpha (r_j))}{area(\alpha (r_i))} - \sum_{i=1}^n \frac{area(\alpha (r_i) \cap \alpha (r_i))}{area(\alpha (r_i))}\right)
\label{eq:percent_overlap}
\end{equation}
\begin{equation}
CH = \frac{A_R}{A_P}
\label{eq:convex_hull}
\end{equation}
\begin{equation}
PP = \frac{A_R}{A_C}
\label{eq:pp}
\end{equation}
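Given a \texttt{shapely} polygon per region, the two compactness scores reduce to one-liners: since a circle with circumference $P$ has area $P^2/4\pi$, the ratio $A_R/A_C$ equals $4\pi A_R/P^2$. The names below are illustrative.
\begin{verbatim}
import math
from shapely.geometry import Polygon

def polsby_popper(region):
    """PP = A_R / A_C = 4*pi*A / P**2 (1 for a circle)."""
    return 4 * math.pi * region.area / region.length ** 2

def convex_hull_ratio(region):
    """CH = region area over the area of its convex hull."""
    return region.area / region.convex_hull.area

square = Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])
print(polsby_popper(square))      # ~0.785
print(convex_hull_ratio(square))  # 1.0
\end{verbatim}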
During all our experiments we set the number of clusters to 5, as previous exploratory data analysis showed this was a good choice for our data at multiple geographic levels. The SDOH feature vectors are normalized via min-max scaling for each level before regionalization is performed. Queen contiguity is used throughout all experiments to generate connections between data points. Generated regions are labeled 1 through 5 based on their AHR metric with 1 being the lowest AHR (risk) region and 5 being the highest AHR (risk) region. A time limit of 1 hour was put on regionalization methods when running.
\section{Results}
We begin by comparing the regionalization algorithms under Unsupervised metrics and then Geographic metrics in Section 5.1 and 5.2 respectively. Section 5.3 explores Washington DC as a case study on the county level while Section 5.4 runs agglomerative clustering on the largest level in our dataset, Level 10 consisting of Virginia and Washington DC. Lastly, Section 5.5 compares unconstrained clustering and spatially explicit regionalization methods through the lens of real health metrics.
All methods under comparison except agglomerative clustering reached a termination point prior to the final Level 10. REDCAP with complete linkage, REDCAP with ward linkage, SKATER, and AZP all exceeded the working memory of the machine used, 32 GB. Max-P-Regions was only able to complete the Level 0 (Washington DC) regionalization within the one-hour time frame, so it is not included in any of the unsupervised or geographic metric analyses.
\subsection{Unsupervised Results}
\begin{figure}[htb]
\centering
\resizebox*{\textwidth}{!}{\includegraphics{analysisGraphs/figure2twobythree.png}}
\caption{Six unsupervised metrics used to compare the ability of each regionalization algorithm to separate data well with efficiency in time and memory.} \label{fig2}
\end{figure}
To compare each regionalization model's ability to create regions that are homogeneous yet heterogeneous from one another, a number of unsupervised metrics were recorded as shown in Figure \ref{fig2}. AZP exceeded the memory limit at Level 3, the REDCAP model family at Level 4, and SKATER at Level 5. The memory peak for each regionalization method also coincides with a large spike in time for clustering as shown in Figure \ref{fig2}. Differing from other algorithms, AZP begins with a large amount of time to cluster and stays that way until it runs out of memory. When the number of data points increases from 14,000 in Level 2 to 67,000 at Level 3, the REDCAP family and SKATER both hit a critical point of increase in time to cluster and memory. After Level 2 the time to cluster for the REDCAP family and SKATER becomes highly nonlinear and quickly exceeds our computational limitations. Agglomerative clustering maintains an almost constant time to cluster over all levels, making it the superior method in terms of time and memory.
The silhouette score in Figure \ref{fig2} shows agglomerative clustering starting off lower than the REDCAP family and AZP. AZP remains superior in this metric until it runs out of memory. The REDCAP family drops off with SKATER by Level 2 well below agglomerative clustering. Agglomerative clustering notably has an inflection point at Level 4, constantly increasing afterwards and eventually achieving a higher silhouette score than all other models. Interestingly a similar trend is seen in the CHI in Figure \ref{fig2}. SKATER again performs the worst while agglomerative clustering sees a large increase after an inflection point at Level 4. These two metrics most notably point out flaws in SKATER's clustering ability and that the quality of agglomerative clustering increases with the size of data after an inflection point.
The SSE metric in Figure \ref{fig2} shows a similar trend in error between all methods except SKATER which had significantly more error. Level 4 is again problematic for agglomerative clustering but acts as an inflection point that leads to much lower levels of error as the levels increase in size.
The intra-cluster variance in Figure \ref{fig2}, which each clustering method aims to minimize, shows increasingly better clustering as the size of the data increases, as well as trends similar to the other unsupervised metrics. SKATER again performs the worst of all the methods, but interestingly still outperforms agglomerative clustering on the smallest data set level. AZP again outperforms all other models before it runs out of memory. Agglomerative clustering levels out as the data set sizes increase, again showing that its performance improves with data size.
The unsupervised metrics tell an interesting story of how the quality of each regionalization method scale over larger data sets. AZP generally generates the best results for small data sets but very quickly runs out of memory once the level sizes get larger. SKATER has the worst performance across all the unsupervised metrics in general. Agglomerative clustering outlasts all other regionalization models, requiring essentially constant memory, almost constant time, and increases the quality of its clustering as the data size increases.
\subsection{Geographic Results}
\begin{figure}[htb]
\centering
\resizebox*{\textwidth}{!}{\includegraphics{analysisGraphs/figure3twobythree.png}}
\caption{Three metrics relating to the geography of generated regions of all algorithms under comparison. The Polsby-Popper and Convex-Hull metrics measure the geometric compactness of generated regions. Percent overlap shows the ability of regionalization algorithms to partition the data set into non-overlapping regions with smooth definite borders.} \label{fig3}
\end{figure}
Taking inspiration from gerrymandering studies \cite{sun2021developing}, we also use multiple geographic metrics to evaluate the compactness and partitioning ability of the regions that are generated by each regionalization method in Figure \ref{fig3}.
The Polsby-Popper (PP) and Convex Hull (CH) metrics in Figure \ref{fig3} tell consistent stories of compactness. Agglomerative clustering, while superior in the unsupervised metrics, shows generally the lowest compactness of all the regionalization methods across the lower levels other than Level 2. Similar to the unsupervised metrics, an inflection point for agglomerative clustering is seen at Level 4 in both the CH and PP metrics. Unlike the unsupervised metrics, however, after agglomerative clustering's Level 4 inflection point the PP and CH metrics show increasingly worse compactness. This could imply that as the amount of data increases, agglomerative clustering trades geographic compactness for better clustering. AZP is the most geographically compact option for smaller data sets until it runs out of memory, just as in the unsupervised metrics. SKATER and complete linkage REDCAP provide the most compact regions for moderately sized data before they too run out of memory. Notably, REDCAP with complete linkage and AZP score higher relative to other models in PP than in CH, implying their clusters are better modeled by circles than by polygons.
The percent overlap between concave hulls of regions helps determine the partitioning ability of each regionalization method as well as its ability to make well defined borders. We would like our regions to divide a given area into clean partitions such that when the borders are fit to concave hulls, essentially outlining the region, the regions have little overlap. Clean partitions prevent long snaking regions that make intervention more difficult. Following trends set in the PP and CH metrics, percent overlap in Figure \ref{fig3} shows SKATER performing the best while agglomerative clustering generally performs the worst. Contrary to the PP and CH metrics, AZP and REDCAP complete linkage have poor percent overlap. Their poor percent overlap performance could be linked to the same reason they both performed worse in the CH metric than in the PP metric: their regions are not modeled well by polygons but rather by circles. REDCAP ward linkage performs fairly well under this metric initially, suggesting it may perform well on smaller data set sizes. Agglomerative clustering again experiences an inflection point around Levels 4 and 5, this time a beneficial one that leads to a decrease in percent overlap. Agglomerative clustering's inflection point may be due to larger data sets having larger borders, so that a fitted concave hull is far less likely to have sharp overlapping edges.
The geographic metrics show that agglomerative clustering generally has lower compactness among the models. We also see that for smaller data, AZP is the superior method for generating compact regions. SKATER performs comparatively better across all metrics, especially as the data sets get larger. The REDCAP ward linkage performs well across all the metrics but reduces as the data set size gets larger, while REDCAP with complete linkage experiences dramatic variability as data sizes increase.
\subsection{Washington DC - Level 0}
\begin{figure}[htb]
\includegraphics[width=\linewidth]{analysisGraphs/figure4threebytwo.png}
\caption{Regionalization of Washington DC (data set Level 0) by all regionalization algorithms under comparison. Each geography graph is paired with the average number of high risk domains per data point in each region.} \label{fig:4}
\end{figure}
Level 0, or just Washington DC, presents a good way to visualize some of the differences outlined in the previous unsupervised and geographic compactness sections, as well as to see some real world results from each of the algorithms under comparison. Washington DC consists of 8,137 data points, with white spaces representing non-residential areas. Linkage is allowed to stretch over non-residential areas up to a given distance. All methods are set to generate 5 regions, and max-p had a minimum threshold of 10 percent of points per region, resulting in 6 regions.
The smoothness of SKATER's partitioning of Washington DC in Figure \ref{fig:4}, with 1 as the lowest risk region and 5 as the highest risk region, makes it easy to see how SKATER achieved such superior scores in the geographic metrics, specifically the percent overlap metric. By using pure MST cuts, the very nature of the SKATER algorithm forces smooth borders with low overlap between regions, unlike algorithms such as AZP or agglomerative clustering that produce less compact borders. On the other hand, the SKATER AHR distribution, or "risk" distribution, between clusters shows very low variance among the bottom three clusters, explaining the poor performance of SKATER in the unsupervised metrics when compared to other methods.
Figure \ref{fig:4} also lets us see why we would not want to use max-p-regions even if it were feasible in time and space. The regions created by max-p-regions are spindly, overlapping, and not compact whatsoever. The max-p-regions algorithm stretches the definition of contiguous to its very limits, sacrificing any sort of geographic compactness to find a more optimal solution.
While each algorithm produces distinct results, there are some overarching consistencies in the regions generated by the algorithms under comparison. For example, each method identifies the south and south-east part of Washington DC as higher risk while picking out the north-west part as low risk. Commonalities between results show that even though different regions are generated, there is a general consensus among methods of roughly where low vs high risk neighborhoods are located geographically and validates the ability to geographically cluster SDOH data.
\subsection{Virginia}
\begin{figure*}
\includegraphics[width=\linewidth]{analysisGraphs/figure5threebytwo.png}
\caption{Agglomerative clustering run on SDOH in the state of Virginia and Washington DC (data set Level 10). Each region is shown individually mapped on Virginia and Washington DC. Regions are ordered in increasing amounts of high risk domains per data point on average as shown on the right.} \label{fig:5}
\end{figure*}
The largest run, Level 10, which consists of Virginia and Washington DC was only able to be run by agglomerative clustering and the results are shown in Figure \ref{fig:5}.
Notice that the neighborhoods of risk pay little to no attention to actual county lines. This is especially visible along the border of the high risk region, Region 5, where many counties are split in half. Additionally, notice that these groups of risk are not driven by the physical geography of Virginia; in fact, Regions 3 and 5 both span across all geographic regions of Virginia while showing large differences in risk.
Agglomerative clustering shows the ability in Figure \ref{fig:5} to pick out unique hot spots of risk even when the region size is relatively small. Washington DC and Northern Virginia, certainly the most wealthy part of Virginia and Washington DC, is singled out in the two lowest risk regions. On the opposite end of the spectrum a condensed part of Virginia Beach is identified as having uniquely high risk for a small area in region 4.
\subsection{Unconstrained Clustering Comparison}
\begin{figure*}
\includegraphics[width=\linewidth]{analysisGraphs/figure6.png}
\caption{Side-by-side comparison between K-Means and agglomerative clustering run on SDOH in the state of Virginia and Washington DC (data set Level 10).} \label{fig:6}
\end{figure*}
This study focuses on how spatially explicit regionalization algorithms compare to one another, but how do the spatial constraints of regionalization affect the original goal of clustering, namely forming homogeneous clusters that are heterogeneous from one another? To gauge the change in cluster quality we compare agglomerative clustering to a completely unconstrained and unaltered K-Means algorithm, both run on SDOH in the state of Virginia and Washington DC, also known as data set Level 10. Agglomerative clustering is used here as a representative of the regionalization algorithms under comparison so that the analysis could be run on large scale data. Additionally, to compare the quality of the regions or clusters formed, a number of health metrics are analyzed for both K-Means and agglomerative clustering. Namely, these metrics are the 2021 Age-Adjusted Suicide Rate, Life Expectancy, and Percentage of Adults with Diabetes in Virginia counties and Washington DC, taken from https://www.countyhealthrankings.org/.
The side by side geographical comparison between agglomerative clustering and K-Means can be seen in Figure \ref{fig:6}. Note that there is very little in common between how K-Means and agglomerative clustering cluster the data, other than the DC area being identified as the lowest risk area in the entire geography. As expected, the K-Means clusters pay little attention to geography, with clusters fairly evenly distributed throughout Virginia. The lack of compactness in the K-Means clusters is a severe disadvantage in the context of health risk because there are no longer actionable intervention zones in the broader context of the whole state. Both K-Means and agglomerative clustering pay little to no attention to county lines in their cluster assignments.
To further compare unconstrained clustering methods such as K-Means and spatially explicit regionalization methods such as agglomerative clustering we explore how well each clustering correlates to health data. We obtained county level health data for Virginia and Washington DC from countyhealthrankings.org. Each data point was assigned the health value corresponding to their county, and the data shown in Figure \ref{fig:7} is the result of averaging each one of these health metrics per region for each algorithm. For all three health metrics the high risk cluster identified by agglomerative clustering, Cluster 5, has slightly worse health than those identified by K-Means. Both algorithms provide significant levels of separation between different health metrics with the low risk clusters settling on almost equivalent values. The average number of high risk domains in Figure \ref{fig:7} show a more gradual increase for agglomerative clustering and eventually arrive at a higher risk factor than K-Means which shows a large jump in risk between Clusters 3 and 4. This shows a slight advantage in correlation to health metrics for agglomerative clustering while also having the advantage of creating actionable intervention regions.
The ability of SDOH to separate communities into high and low risk groups is reflected in the results shown in Figure \ref{fig:7}. Between low and high clusters, life expectancy decreases by roughly 9 percent, the percentage of adults with diabetes increases 92 percent, and the age-adjusted suicide rate increases by 97 percent.
\begin{figure}[htb]
\includegraphics[width=\linewidth]{analysisGraphs/figure7.png}
\caption{Comparing different health metrics between unconstrained K-Means clustering and spatially constrained agglomerative clustering regionalization. Additionally, the average number of high risk domains per cluster is shown.} \label{fig:7}
\end{figure}
\section{Discussion}
Our results show that agglomerative clustering, AZP, and SKATER perform the best out of all the regionalization methods under comparison. Agglomerative clustering proves to be superior in time, memory, and unsupervised clustering metrics in general when your aim is to process large amounts of data quickly. AZP shows that if you have smaller (around 10,000 points) data sets and a little extra time it's capable of performing the best overall clustering when factoring in unsupervised and geographic compactness metrics. Finally, SKATER is the best method if you wish to find the most geographically compact and smooth regions faster and for larger data sets than AZP can handle, if you're willing to trade for worse performance on unsupervised metrics. All around, for the purposes of regionalization of SDOH, agglomerative clustering is the best method to use due to time and memory scaling superiority, superior unsupervised metric performance, and comparable geometric compactness.
The comparative nature of this study allows us to ignore the option of simply scaling up our compute power until each method is able to run all the way to Level 10. Without finding extraordinary computing power we are able to log the inflection points of each algorithm and when each method starts to use memory and time at a rate that is no longer realistic or worth the extra cost of computation. We discovered breaking points in algorithms without having to use an unreasonable amount of computing power.
This paper uses the python library pygeoda for all algorithms except for agglomerative clustering which is from sklearn also in python. Pygeoda is ported over from the popular GeoDa software package written in C++. Sklearn and pygeoda are the best libraries available for each respective method available in python, however it seems pygeoda may be less tailored to run in python as it is ported over from C++. Because of this, the weaknesses of some methods may be accentuated when run on large sets of data. We do not attribute the main differences in cluster time or memory to the difference in python libraries because of the large amounts of expensive processing (MST cutting or region neighbor trading) necessitated by all algorithms other than agglomerative clustering.
The assumption that each regionalization method should generate 5 regions at every granularity may have impacted both geographic and unsupervised metrics. For some levels, 5 regions may have actually been ideal, providing quality separation between data points and achieving good clustering while also being able to pick out the correct number of unique contiguous neighborhoods of risk. However, issues of both geographic and clustering significance may have arisen from assuming 5 regions throughout the entire experiment. Given more regions, identified regions could have been subdivided further into a high and low risk neighborhood, increasing the clustering quality and further identifying areas of higher risk. Increasing the number of regions could also start to break off regions that are far too small and statistically insignificant. Given less regions, smaller unique areas of higher risk could be lost in a larger region.
One exciting repercussion of the slow growth rate of agglomerative clustering's time to cluster is the ability to dynamically do regionalization of any given subset of data points in the entire country. Given a precomputed contiguity matrix, which is feasible to store on the back end of a server, regionalization can be done on roughly 250,000 data points in about a minute and a half, and roughly 1 million data points in about 9 minutes, on a standard 8 vCPU with 32 GB of memory. Because each data point is a hexagon, we assume the number of neighbors per hexagon is fairly similar across the United States, meaning we can treat these timing measurements as a good estimation of what to expect from regionalization in other parts of the country at a similar scale. The speed of agglomerative clustering opens the door for a completely dynamic regionalization experience, with regionalization in under 2 minutes for data set selections under 300,000 data points running on a standard computer.
As shown especially by agglomerative clustering, regionalization is able to separate SDOH effectively even in the multivariate setting. The geographic constraints do not prevent significantly different clusters from forming at vastly different data sizes. This is important not only for finding communities with high SDOH risk but also for any other potentially geographically influenced data field. Downstream health repercussions such as life expectancy, diabetes, and suicide rates also show significant separation across the SDOH clusters. The effectiveness of this separation opens doors for regionalization to be used to identify high risk areas for real world applications such as intervention zones.
\section{Future Works}
Multiple continuations could stem from this study to further explore how regionalization methods perform on SDOH. Different numbers of clusters could be experimented with, to identify how the number of clusters affects regionalization, how the optimal number of clusters scales with data size, and how to find the optimal number of clusters for any given regionalization method and data size. Another direction could be running the given regionalization methods with large amounts of compute to see how the methods compare in unsupervised and geographic metrics at higher data levels. To assist in this effort, new implementations could be written for all of the given methods in python, allowing them full utilization of the language and potentially providing a time and memory reduction. Finally, spatially implicit regionalization methods or ordinary clustering methods could also be included in the comparison to show how they compare to the spatially explicit methods studied in this paper.
Machine learning methods could be applied to the generated regions using them as abstract zones. Using regions as opposed to low level hexagons could reduce noise and boost the results found by machine learning methods. Results from machine learning models would then have actionable zones that can benefit from real world intervention rather than the much smaller hexagons. Intervention studies using the generated regions could be performed as well to show if intervention in a high risk region benefits the overall risk of the regionalization cluster more than anywhere else. More exploratory data analysis could also be performed on the identified regions to study the distributions of specific SDOH (Food Landscape, Economic Climate, ...) within each region and show how they differ across algorithms, number of clusters, and data set sizes.
\section{Conclusion}
This paper compared the quality of 6 different regionalization algorithms using multiple new metrics over real world SDOH data at scales orders of magnitude larger than ever tested previously. Unsupervised metrics, geographic compactness metrics, time, and memory were used to evaluate and compare the quality of regionalization between algorithms. We noted the subtle similarities and differences between regions generated by the different regionalization algorithms when applied to our smallest data set size, Washington DC. Notably, agglomerative clustering was the only regionalization algorithm capable of running all 11 data set sizes and was superior in long term unsupervised metrics performance. AZP and SKATER both proved to be good alternative options for small and medium sized data sets respectively. Ultimately for large scale regionalization tasks, agglomerative clustering works the best for multivariate SDOH at scale. We show that SDOH can be clustered well with geographic constraints opening the door for future machine learning work on dynamic SDOH regions.
\bibliographystyle{tfv}
\subsection{Event and track selection}
\label{evtrkSel}
The data were collected using a minimum-bias trigger requiring at least one hit in both the V0 detectors. In addition, a central and a semicentral trigger were used, also determined by the V0 detectors, selecting collisions in the 0--10\% and 30--50\% centrality intervals, respectively.
Moreover, the timing information of the V0 scintillator arrays is used to reject the events triggered by the interactions of the beam with the residual gas in the LHC vacuum pipe. A further selection using the Zero Degree Calorimeter is applied in order to reject the electromagnetic beam--beam interactions and beam--satellite bunch collisions~\cite{Herr:941319}. These three rejections are done in the offline analysis.
The production yields of primary (anti)deuterons, (anti)tritons and (anti)$^3$He are measured at midrapidity. In order to provide optimal particle identification by reducing the difference between transverse and total momentum, the spectra are provided within a rapidity window of $|y|\,<$ 0.5.
Only tracks in the full tracking acceptance of $|\eta|\,<$ 0.8 are selected.
In order to guarantee good track momentum and d$E$/d$x$ resolution in the relevant $p_{\mathrm{T}}$ ranges, the selected tracks are required to have at least 70 out of 159 possible reconstructed points in the TPC and two points in the ITS (out of which at least one is in the SPD).
The requirement of at least one point in the two innermost layers, the SPD, assures a resolution better than 300 $\mu$m on the distance of closest approach to the primary vertex in the plane perpendicular (DCA$_{xy}$) and parallel (DCA$_z$) to the beam axis for the selected tracks~\cite{Abelev:2014ffa}.
Furthermore, it is required that the $\chi^{2}$ per TPC reconstructed point is less than 2.5 and tracks of weak-decay products are rejected as they cannot originate from the tracks of primary nuclei.
\subsection{Particle identification}
\label{partId}
The TPC allows for a clean identification of (anti)$^3$He in the whole \pt range and of (anti)deuterons up to \pt$\sim 1$ GeV/$c$. For higher transverse momenta, the d$E$/d$x$ information for charged particles is combined with the TOF mass determination in the (anti)deuteron analysis. For the (anti)tritons in the whole \pt range, a combined TPC and TOF analysis is performed.
Figure~\ref{fig:tpcdedx} shows the TPC specific energy loss as a function of rigidity ($p/z$) in Pb--Pb collisions at $\mathbf{\sqrt{{\textit s}_{\rm NN}}} = 5.02$ TeV.
The dashed curves represent parameterisations of the Bethe--Bloch formula for the different particle species.
The (anti)deuteron and (anti)$^3$He identification with the TPC is achieved by requiring that the energy-loss signal of a track lies in a 3$\sigma$ window around the expected value for a given mass hypothesis, where $\sigma$ is the d$E$/d$x$ resolution. For the (anti)tritons, a reduced 2$\sigma$ window is employed in order to further decrease the background.
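Schematically, this selection is an $n\sigma$ cut around the parameterised expectation; the Python sketch below is purely illustrative, with \texttt{expected\_dedx} and \texttt{resolution} standing in for the experiment's actual Bethe--Bloch and resolution parameterisations.
\begin{verbatim}
import numpy as np

def select_species(dedx, rigidity, expected_dedx, resolution, n_sigma=3.0):
    """Keep tracks whose measured specific energy loss lies within
    n_sigma of the Bethe-Bloch expectation for a mass hypothesis."""
    mu = expected_dedx(rigidity)   # parameterised Bethe-Bloch curve
    sigma = resolution(rigidity)   # dE/dx resolution at this rigidity
    return np.abs(dedx - mu) <= n_sigma * sigma
\end{verbatim}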
\begin{figure}[tp]
\centering
\includegraphics[width=0.7\textwidth]{figures/dEdxPerformancePlotPaper.png}
\caption{Specific energy loss of charged tracks in the TPC vs.\,rigidity ($p/z$) for Pb--Pb collisions at $\mathbf{\sqrt{{\textit s}_{\rm NN}}} = 5.02$ TeV. The dashed lines represent parameterisations of the Bethe--Bloch curve. Particles lighter than deuterons have been removed artificially, such that only nuclei are visible.}
\label{fig:tpcdedx}
\end{figure}
In order to extend the \pt reach of the measurement, it is additionally requested that the track is matched to a hit in the TOF detector.
As shown in Fig.~\ref{fig:m2deuteron}, based on the time-of-flight measurement the squared mass of the particle is determined in different \pt intervals and
the distributions are then fitted using a Gaussian function with an exponential tail for the signal.
The background of the (anti)deuterons mainly originates from two components, namely wrong association of a track with a TOF hit and the non-Gaussian tail of lower mass particles. For the (anti)tritons the dominant background originates from the wrong associations of a track with a TOF hit.
For both nuclei, the background is described with the sum of two exponential functions.
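The squared mass itself follows from the momentum $p$, the track length $L$, and the time of flight $t$ as $m^2 = p^2\left[(tc/L)^2 - 1\right]$. A minimal sketch of the fit model described above is given below, with all shape parameters and the binned $m^2$ distribution left as placeholders.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

C_CM_PER_PS = 0.0299792458  # speed of light in cm/ps

def mass_squared(p, length_cm, tof_ps):
    """m^2 = p^2 ((t c / L)^2 - 1), with p in GeV/c."""
    return p**2 * ((tof_ps * C_CM_PER_PS / length_cm) ** 2 - 1.0)

def fit_model(m2, A, mu, sigma, tau, b1, s1, b2, s2):
    """Gaussian with an exponential right-hand tail (signal) plus the
    sum of two exponentials (background)."""
    gaus = np.exp(-0.5 * ((m2 - mu) / sigma) ** 2)
    tail = np.exp(0.5 * tau**2 - tau * (m2 - mu) / sigma)
    signal = A * np.where(m2 < mu + tau * sigma, gaus, tail)
    return signal + b1 * np.exp(-s1 * m2) + b2 * np.exp(-s2 * m2)

# With bin centers x and counts y of the measured m^2 distribution:
# popt, pcov = curve_fit(fit_model, x, y, p0=initial_guess)
\end{verbatim}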
\begin{figure}[tp]
\centering
\includegraphics[width=0.49\textwidth]{figures/rawSignal_d.pdf}
\includegraphics[width=0.49\textwidth]{figures/rawSignal_t.pdf}
\caption{Fit to the measured squared mass to extract the antideuteron signal in 4.4 $<$ \pt $<$ 5.0 GeV/$c$ (left) and the antitriton signal in 2.0 $<$ \pt $<$ 2.4 GeV/$c$ (right). The red dashed line shows the background, the solid blue line the combined fit to the data and the green dashed line the signal only.}
\label{fig:m2deuteron}
\end{figure}
\subsection{Background rejection}
\label{bckrej}
One of the main sources of background in the analyses of the primary deuteron and triton production are nuclei originating from secondary interactions.
These secondary nuclei come mostly from the interactions of other primary particles with the detector material.
In some of these interactions, a light nucleus can be produced by spallation processes, i.e. can be knocked-out from detector or from support material.
The baryon number conservation sets a very high energy threshold for the production of secondary antinuclei with similar processes, thus making the contribution of secondary antinuclei from material negligible, as also confirmed by simulations.
Other processes, such as the decay of (anti)hypernuclei, represent a negligible contamination to the observed (anti)deuterons and (anti)tritons.
As already done in previous analyses~\cite{ALICE:2015rey,Adam:2015vda,ALICE:2017nuf,ALICE:2022zuz,ALICE:2022amd}, in order to subtract the background from secondary deuterons and \Hee the DCA$_{xy}$ is used.
The distribution of primary particles is expected to be peaked at DCA$_{xy}\,=\,0$, whereas secondary particles are expected to exhibit a flat DCA$_{xy}$ distribution to the first order. The typical distributions of DCA$_{xy}$ for nuclei and antinuclei detected in ALICE are shown in Fig.~\ref{fig:fit-dcaxy}.
In second order, the tracks originating from secondary particles may be associated to a wrong hit in the innermost layers of the ITS.
If the latter belongs to a primary particle, the extrapolation of the secondary particle track will wrongly point to the primary vertex, as the track pointing is mostly driven by the hits in the innermost layers of the ITS.
In the deuteron and \Hee analyses presented in this article, a fit to the observed DCA$_{xy}$ distribution is performed to extract the primary fraction of deuterons and \Hee.
The DCA$_{xy}$ distributions of primary and secondary deuterons as well as \Hee in each transverse momentum interval are extracted from Monte Carlo (MC) events and are used as templates to fit the measured DCA$_{xy}$ distribution.
Since the secondary particles have large DCA$_{xy}$, the fits are done in a range of DCA$_{xy}$ wider than the actual track selection criterion to better constrain the secondary particle components.
The contamination from deuterons produced in the interactions with the detector material is only significant below 1.4 \gmom.
In contrast to deuterons, the background from secondary tritons is rather dominant over the low number of primary triton counts. As this background only occurs at low \pt, the triton yield is only measured above 2.4~\gmom (2.0~\gmom in the most peripheral centrality interval).
\begin{figure}
\begin{center}
\includegraphics[width=0.95\textwidth]{figures/DCAxy.pdf}
\caption{\label{fig:fit-dcaxy}$\rm{DCA}_{\it{xy}}$ of deuterons (open red circles) and antideuterons (solid red circles) for the $\pt$ intervals $1.0 \leq \pt < 1.1~\rm{GeV}/\it{c}$ (left) and of \Hee (open blue squares) and \antiHee (solid blue squares) for $1.2 \leq \pt < 1.6$ $\rm{GeV}/\it{c}$ (right).}
\end{center}
\end{figure}
\subsection{Corrections to the spectra}
\label{corrections}
The $p_{\mathrm{T}}$-differential-production spectra of (anti)deuterons, (anti)\Hee and (anti)tritons are obtained by correcting the raw spectra for tracking efficiency and acceptance based on MC generated events.
The MC samples used to compute the efficiency and the acceptance corrections for the Pb--Pb analysis were generated using the HIJING event generator~\cite{Wang:1991hta}.
Since HIJING does not provide light (anti)nuclei, an \emph{ad hoc} generator that injects particles on top of the event generator was used.
The kinematics of the injected nuclei is chosen randomly by picking their transverse momentum from a flat distribution in the range between 0 and 10 \gmom, their azimuthal angle from a flat distribution between 0 and 2$\pi$ radians, and their rapidity from a flat distribution in the range $|y|\,<\,1$.
All particles are transported with GEANT4~\cite{Agostinelli:2002hh} through a full simulation of the ALICE detector.
The GEANT4 version used in the ALICE software framework was modified to take into account the latest (anti)nuclei hadronic interaction measurements~\cite{ALICE:2020zhb,ALICE:2022zuz}.
For the (anti)deuteron, (anti)\Hee and (anti)triton analyses, the efficiency$\times$acceptance was determined for each centrality interval separately.
The input \pt distributions of (anti)nuclei in the simulation are modified according to a Blast-Wave parametrization using \pt-dependent weights. The BW parameters are taken from~\cite{ALICE:2019hno}.
\section{Introduction}
\label{sec:intro}
\input{introduction.tex}
\section{Experimental apparatus and data sample}
\label{sec:detector}
\input{detector.tex}
\section{Data analysis}
\label{sec:analysis}
\input{analysis.tex}
\section{Systematic uncertainties}
\label{sec:Syst}
\input{systematics.tex}
\section{Results}
\label{sec:results}
\input{results.tex}
\section{Summary and conclusion}
\label{sec:summary}
\input{summary.tex}
\bibliographystyle{utphys}
|
2,869,038,154,958 | arxiv | \section{Introduction}
Sliding friction between
solid bodies, among the most basic and pervasive phenomena in physics
and in our everyday experience, can be measured, simulated but --
disappointingly -- not yet formulated theoretically. By that we mean
that even in the purely classical sliding of a body on another there
is no unprejudiced way of identifying and determining a handful of
variables (as opposed to the $\sim{10}^{23}$ atomic coordinates and
velocities) that obey a well defined equation of motion describing
the essence of the frictional process. The burgeoning area of
nanofriction~\cite{vanossi2013}, where realistic simulations are often possible,
has made if anything this theoretical vacuum even more blatant. In
a recent pubblication we proposed~\cite{Pellegrini2016} that Markov
State modeling (MSM) -- a probabilistic approach commonly applied
to characterize the kinetics of systems characterized by an equilibrium
measure~\cite{Noe2009,Schwantes2014,Noe2013,Bowman2014,Schutte2015}
-- can be extended and used for the strongly non-equilibrium, non--linear
problem of sliding friction. The approach was demonstrated in a simple
1D toy model, a 10-atom Frenkel Kontorova model~\cite{Frenkel1938}
where, despite the difficulty represented by a time--growing phase
space, non-equilibrium MSM was shown to describe adequately the forced
dynamics of steady-state sliding friction. The probabilistic analysis
of a long steady-state frictional simulation and the choice of a metric
led to the recognition of Markovian evolution in phase space, to the
identification of a few slow collective variables (``excitations'')
describing the events occurring in the course of sliding, and to the
construction of a transfer--matrix--dictated model of the time evolution
of probabilities. This approach represents in our view a first step
towards a theory of friction, and a methodological advance of very
significant importance.
Here we showcase the first application of the MSM approach to a more
realistic frictional system. We choose for this purpose the sliding
of a two-dimensional (2D) island of more than $1000$ particles,
harmonically interacting at a spacing that is incommensurate with
respect to a periodic 2D substrate potential. We consider different
sliding regimes, including the ``superlubric'' and, at the opposite
limit, the pinned regime. The results are rewarding: firstly, and
most importantly, MSM again identifies an extremely small set of significant
variables, despite the totally generic choice of metric and the much
larger dimensionality of the phase space in which the original model
is defined. This handful of variables in turn describes without any
built-in prejudice the main slow time-dependent frictional events,
including superlubric soliton flow and atomic stick-slip frictional
sliding of the island, in the two extreme and opposite regimes of
weak and strong potential.
\section{Markov State Modeling}
The Markov State Modeling procedure starts from a classical molecular dynamics
simulation of the sliding system, long enough to explore all relevant
configurations in phase space a sufficiently large number of times.
Structurally similar configurations are then grouped in a finite number of
{\em microstates}, which will serve as a basis, through a clustering
(such as k--means~\cite{Perez-Hernandez2013}) or geometric technique.
This partitioning requires a {\em metric} in phase space to
quantify the similarity between configurations: the quality of
the partition will generally depend on this choice, to be made
with utmost physical care.
While in real world applications it is considered mandatory to
define the metric in a relevant subset of the coordinates (for
example, the coordinates of the solute), we will show that, in the
specific system considered in this work, one can carry on the procedure
even using a ``blind'' metric, defined taking into account all the
cooordinates of the system.
In equilibrium settings, the transition probability matrix between
pairs of microstates (in a time $\tau$) would be equivalent
to a hermitian matrix (on account
of detailed balance) and have a unique eigenvalue equal to one,
whose eigenvector represent the equilibrium distribution, and all other
real eigenvalues smaller than one. The eigenvectors of eigenvalues
closest to one are associated with slow modes of the system evolution,
while the smaller eigenvalues correspond to fast motions, expected
to be increasingly irrelevant. A clear gap between high and low
eigenvalues leads to a natural dimensional reduction~\cite{Deuflhard2004,
Weber2005}.
To study non-equilibrium problems such as friction, this procedure has
been modified in several key points: the frictional dynamics does not
reach an equilibrium, but instead reaches a steady state where the
configuration space grows (approximately) linearly with the simulation
time, making sampling and clustering problematic. The solution we
proposed is dividing the evolution in intervals, still long enough to
be deemed equivalent between them, so that results from each interval
can be cumulated on top of one another. Stability of the results against
extension of the time interval indicates the validity of the procedure.
Care should be taken in dealing with the transition probability matrix
from this steady--state evolution under forcing, which is
non-hermitian~\cite{Pellegrini2016}.
Moreover, since the phase space metric contains a large number
of microscopic variables, we do not build microstates by the tessellation
techniques used in standard MSM~\cite{Prinz2011}, but use instead
a recently proposed clustering algorithm~\cite{Rodriguez2014}, which
associates a microstate to each meaningful peak of the probability
distribution in the coordinate space associated with the metric.
The main goal of this contribution is demonstrating that this procedure
works for a much more realistic model than the one considered in
ref.~\cite{Pellegrini2016}: the sliding of a 2D Frenkel Kontorova island
including approximately 1000 atoms on a periodic incommensurate potential.
\section{The 2D Frenkel-Kontorova model}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{collective1.pdf}
\caption{\label{Fig1}
(a) Schematic of the Frenkel-Kontorova island sliding on a generic
2D incommensurate triangular potential.
(b) Average friction force in the steady state regime for simulations
with varying ratio $K/U_0$, highlighting the transition from free
sliding to stick--slip. The arrows indicate the sample parameters chosen
to compare the results of our method, with a color code we will keep
throughout all figures.
(c) Slowest timescales after transition matrix diagonalization for the
different regimes. The full line represents half of the time required
for the external force to move the particles to successive
potential minima.
(d) Sample evolution for the different regimes: we show the deviations
of the center of mass positions from the free sliding corresponding to
an infinitely stiff island. The vertical dashed lines are spaced like the
time lag $\tau=1100$ chosen to build the MSM to represent sampled configurations.}
\end{figure}
Our present study focuses on a two-dimensional FK model~\cite{Braun1998},
Fig.~\ref{Fig1}(a). We consider a hexagonal island of $N=1027$
classical particles, internally arranged as a triangular lattice,
dragged by a force applied on the center of mass, which causes it
to slide over a triangular potential $V(x,y)=U_{0}\left[2\cos\left(\frac{2\pi x}{a_{S}}\right)
\cos\left(\frac{2\pi y}{\sqrt{3}a_{S}}\right)+\cos\left(\frac{4\pi y}{\sqrt{3}a_{S}}\right)\right]$.
Nearest neighbor harmonic springs of stiffness $K$
link the particles of mass $m$ and positions $\mathbf{r}_{i}=(x_{i},y_{i})$
whose equilibrium spacing $a_{H}$ is incommensurate with the periodic
potential: $a_{S}/a_{H}\sim1.07$. Each particle is dragged by a spring
of constant $\kappa$ moving with constant velocity $v_{{\rm ext}}$.
Particle motion obeys an overdamped Langevin dynamics (large damping
$\gamma$), in a bath of inverse temperature $\beta=1/k_{B}T$:
\begin{equation}\begin{array}{rcl}
\mathbf{r}_l^{t+dt}&=&\displaystyle \mathbf{r}_l^t+
\left[\frac{1}{\gamma m}\nabla V(\mathbf{r}_l^t)
+\frac{\kappa}{\gamma m}\left(v_{\rm ext}t-\frac{1}{N}\underset{j}{\sum}x_j^t\right)+\right.\\
&&\displaystyle\left.-\frac{K}{\gamma m}\sum_{j\in\mathrm{NN}}
(\mathbf{r}_l^t-\mathbf{r}_{j}^t)
\right] dt
+\sqrt\frac{2 dt}{\gamma m \beta}\mathbf{f}^t,
\end{array}\end{equation}
where $\mathbf{f}^{t}$
is an uncorrelated Gaussian distribution and $dt$ is the elementary
time step (here $dt=0.1$, $m=1$, $\gamma=1$, $\beta=100$, $\kappa=0.01$, $v_{\rm ext}=0.0001$).
In a temperature and parameter regime where the island does not rotate,
its sliding mechanics retains some similarity to 1D sliding~\cite{Braun1998}.
In the weak potential limit the bulk of the island, characterized
by weak solitons (small deviations of the interparticle distance from
the equilibrium value) which form a moir\'e pattern over the incommensurate
potential, is structurally lubric (``superlubric''). Upon sliding
in this regime the solitons flow unhindered, and the only source of
pinning and static friction is actually provided by the island edge~\cite{varini2016}.
In the opposite strong potential limit the solitons, no longer weak,
are strongly entrenched, and the whole island is pinned, with a bulk
static friction independent of edges. Under the external spring--transmitted
force, the island sliding in this regime will alternate long ``sticking''
periods during which particles are close to their respective potential
minima, to fast slips during which one or more lattice spacings are
gained. This kind of atomic stick--slip motion is well established
for, e.g., the sliding of an Atomic Force Microscope tip on a crystal
surface~\cite{vanossi2013} -- of course involving in that case three-dimensional
displacements of larger complexity. A slip event always involves the
flow of either pre--existing solitons or of newly created ones that
enable the system to slide faster.
Our input for the MSM procedure is a long steady-state trajectory of
the island motion, obtained by integrating these equations for $\sim{10}^{8}$
timesteps for a slow external velocity, but far from the linear
response regime.
Values of $K/U_{0}$ were chosen so as to straddle between
and beyond the weak ($K/U_{0}\geq2$) and strong ($K/U_{0}\leq0.2$)
potential regimes.
The average friction force $F_{\rm mean} = \kappa \left\langle v_{ext} t
- \frac{1}{N}\sum_l x_{l}\right\rangle$
obtained from the simulation as a function of the ratio $K/U_{0}$
can be seen in Fig.~\ref{Fig1}(b), where the crossover
from a superlubric to a pinned regime is clearly reflected. We focus
our study on three different values of the parameters representative
of these different regimes, as can be hinted by looking at the evolution
of the position of the center of mass shown in Fig.~\ref{Fig1}(d).
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{collective2.pdf}
\caption{\label{Fig2}
(a) Steady state probability distribution of the nearest neighbor
distances for the three regimes. Vertical dotted lines represent
the rest interparticle (``harmonic'') distance, while the dashed ones the substrate lattice spacing.
(b) The perturbations $g_i$ (see Eq.~\ref{EqForgi}) estimated for the first three excited states. The observable $O$ in Eq.~\ref{EqForgi} is here
the nearest neighbor distance.
To compute $P(O | \alpha)$ 100 intervals have been chosen.
}
\end{figure}
\section{Implementing the frictional MSM}
The protocol begins by defining a metric, measuring distances between
configurations in phase space. Since we want to remain as unprejudiced
as possible, we adopt the simplest, most generic and bias--free metric,
and define the distance between two configurations $s$ and $t$ as:
\begin{equation}
d_{st}=\left[({\mathbf{r}}^{s}_{\rm CM}-{\mathbf{r}}^{t}_{\rm CM})_{\text{mod }2}\right]^2
+\left[\overset{N}{\underset{l=1}{\sum}}({\mathbf{r}}^{s}_{l}
-{\mathbf{r}}^{t}_{l})_{\text{mod }2}\right]^2. \label{Eq:dist_dB}
\end{equation}
The microstates were built using the Density Peak algorithm~\cite{Rodriguez2014}. This approach
requires only defining a distance between the configurations, here estimated using Eq.~\ref{Eq:dist_dB}.
Based on this definition, the approach automatically finds the peaks in the probability distribution
in the space of the coordinates in which the distance is defined. Here, following our previous work~\cite{Pellegrini2016}, we
identify the microstates used for building the MSM with the density peaks.
We used samples of $N_{\rm conf}\sim{10}^{4}$ configurations (separated
by the lagtime $\tau$) and clustered them using the metric~(\ref{Eq:dist_dB}).
The order of magnitude of the lagtime $\tau$ has been chosen in order
to describe the stick (and slip) events of the system. The optimal
lagtime $\tau=1100$ was determined after some convergence checks
resembling those carried out in the previous work~\cite{Pellegrini2016}:
In particular, we verified that the relevant
timescales stay within the statistical error in Fig.~\ref{Fig1} when doubling or
halving the lagtime. We also verified that by using the Core Set MSM
approach~\cite{Schutte2011} the influence of the time lag on the timescales is further reduced.
This indicates that we are indeed far from the non-Markovian regime.
Given these $\{c_{\alpha},\alpha=1,\dots,n_{c}\}$ microstates, we
can construct a discretized, coarse--grained Transfer
Operator (TO)~\cite{Bowman2014}:
if $\Pi^{\tau}(X\rightarrow X')$ is the probability to go from a
configuration $X$ at time $t$ to $X'$ at time $t+\tau$,
a finite $n_{c}\times n_{c}$ Transfer Matrix (TM) can be built
by estimating the probability to go from $c_{\alpha}$ to $c_{\beta}$
in time $\tau$: $\Pi_{\alpha\beta}^{\tau}=\int_{X\in c_{\alpha}}
\int_{X'\in c_{\beta}}{\rm d}X{\rm d}X'P(X)\Pi^{\tau}(X\rightarrow X')$.
This TM contains less detail than $\Pi^{\tau}$, but
it can be sampled in finite time. In principle, $\Pi_{\alpha\beta}^{\tau}$
depends on the choice of $\tau$, but an optimal value for this parameter
can be chosen.
We call $\{\lambda_{i}\}$ the eigenvalues of the TM and $\{\vec{\chi}_{i}\}$
its left eigenvectors. Since detailed balance does not hold, the TM is
not symmetric and the eigenvalues can be complex, however $\md{\lambda_{i}}\le1$
is still guaranteed; the eigenvalue of largest modulus
is still $1$ and unique if the evolution is ergodic.
The eigenvector $\vec{\chi}_{0}$ represents the
steady state distribution, while the eigenvectors $\vec{\chi}_{i}$ with
$\md{\lambda_{i}}\simeq1$ form the so--called Perron Cluster~\cite{Deuflhard2004}.
They characterize the long--lived perturbations to the steady state, decaying with
characteristic times $\tau_{i}=-\tau/\ln\md{\lambda_{i}}\gg\tau$,
while oscillating with period $\tau/\arctan({\rm Im}\lambda_{i}/{\rm Re}\lambda_{i})$.
To better characterize the eigenmodes $\chi_{i}$ it is useful to consider
a system prepared in the mixed state $P_{\alpha}^{0}$ (probability vector
to be in $c_{\alpha}$) at $t=0$ and the evolution of the probability
distribution $P\left(O,t\right)$ of an observable $O$ as a function of time.
We have:
\begin{equation}\label{eq:Pexp}
P\left(O,t\right)=P^{\rm ss}\left(O\right)+\sum_{i>0}f_{i}g_{i}\left(O\right)e^{-t/\tau_{i}},
\end{equation}
where $f_i=\sum_\alpha\chi_i^\alpha P^0_\alpha/P^{\rm ss}_\alpha$
accounts for the initial condition and
\begin{equation}
g_{i}\left(O\right)=\sum_{\alpha}\chi_i^\alpha P(O | \alpha),
\label{EqForgi}
\end{equation}
where $P(O|\alpha)$ is the
probability distribution of $O$ in microstate $\alpha$, $P^{{\rm ss}}(O)=g_{0}(O)$
the steady state distribution of $O$, and $P_{\alpha}^{{\rm ss}}$
the steady state probability to visit microstate $\alpha$. The $g_{i}(O)$
for $i>1$ represent ``perturbations'' of $P^{{\rm ss}}(O)$, each
decaying within the lifetime $\tau_{i}$.
While the expansion~\eqref{eq:Pexp} is meaningful only for a given
starting configuration, the analysis of the shape of these functions
(regardless of their amplitude) provides a direct insight into the
nature of the slow eigenmodes.
One can therefore turn back to observables deemed relevant in the
original system and estimate their influence in the relevant
dynamical modes of the system in order to gain insight on their
characteristics.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{collective3.pdf}
\caption{\label{Fig3}
(a) Steady state probability distribution of the harmonic energy
for the three regimes. The vertical dashed lines represent
the harmonic energy for an island completely relaxed to the substrate
lattice spacing. In this case we applied a running average to the data,
both for the steady--state distribution and the corrections.
The switch from a single peak of the hard island to a multiplicity of
peaks for the medium to soft island is a direct evidence of the more
elaborate sliding dynamics of the latter.
(b) The perturbations $g_i$ (see Eq.~\ref{EqForgi}) estimated for the
first three excited states. The observable $O$ in Eq.~\ref{EqForgi}
is here the harmonic energy.
To compute $P(O | \alpha)$ 100 intervals have been chosen.
}
\end{figure}
\section{Observables}
We apply the described procedure to three evolutions of our model characterized
by different parameters $K/U_0$ as indicated in Fig.~\ref{Fig1}(b).
The time corresponding to the first largest eigenvalue
(computed from the real part of the eigenvalues,
besides the eigenvalue $0$ associated with the steady--state,
while the imaginary part is much smaller and has been ignored)
can be seen in Fig.~\ref{Fig1}(c): for all cases we find
that the first implied timescale is approximately equal to
$6\times{10}^{4}$, corresponding to roughly half the
time $a_S/v_{\rm ext}$ required on average to move by
one lattice spacing.
This is consistent with the interpretation of the slowest mode as being
related to the movement from one local minimum of the substrate to the next. We notice that
in the more extreme case $K/U_0=10$ the first relaxation time is
faster, since the island is stiff the substrate does not play a role.
The successive timescales are almost an order of magnitude faster,
and further insight is required for their interpretation.
We will presently analyze the $g$ functions of some relevant
physical observables of this frictional
system, in order to characterize these rapidly decaying states.
\subsection{Nearest neighbour distance}
As a first observable we consider the nearest neighbor distance between
all particle pairs. The steady--state distributions (see Fig.~\ref{Fig2}(a))
show the expected trend: while the $K/U_0=2$,case has a peak centered on the
harmonic equilibrium distance
reflecting the hard island's nature during structurally lubric sliding~\cite{vanossi2013},
the opposite case $K/U_0=0.2$ is centered on a distance commensurate with the
substrate, reflecting the soft island's strong adhesion to the external potential.
For $K/U_0=0.5$ the situation is intermediate.
The excited states complete this description (see Fig.~\ref{Fig2}(b)):
the first excited states show little correction to the
steady--state distribution, as the change of a whole lattice spacing has
only a minor influence on the nearest neighbor distribution, while the second
and third excited states display a significant change. Indeed,
the latter correspond to internal relaxations of the island not
associated with the collective sliding.
In the specific case these corrections are related to the
formation/destruction of incommensurate solitons
induced on the island by the external potential~\cite{varini2016}.
This observable lacks the ability to clearly distinguish
between the excited states. We therefore considered a
more extensive observable, able to highlight more differences.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{collective4.jpg}
\caption{\label{Fig4}
(a) Steady state 2D probability distribution of all single particle positions
(excluding edges) for the three regimes. Position along x and y is taken modulus
1 and 2 lattice spacings, respectively.
The zig-zag pattern reflects the motion of the whole which, while pulled along $X$,
moves between neighboring potential minima that are $\pm 60^\circ$ off.
(b) The perturbations $g_i$ (see Eq.~\ref{EqForgi}) estimated for the first
three excited states. The observable $O$ in Eq.~\ref{EqForgi} is here
the single particle positions.
Note the evolution for decreasing $K/U_0$, from smooth flow to sharp hopping betweem minima.
To compute $P(O | \alpha)$ 100 intervals have been chosen for x and 50 for y.
}
\end{figure}
\subsection{Harmonic energy}
We now consider the distribution of the total harmonic energy of the island
$\frac{K}{2}\langle\sum_{\langle i,j\rangle\in\mathrm{NN}}(\mathbf{r}_i^t-\mathbf{r}_{j}^t)^2 \rangle$.
Fig.~\ref{Fig3}(a) shows the steady--state distributions,
clearly highlighting the richer information encoded by this observable.
In the stiff $K/U_0=2$ case the distribution of this observable shows a single peak, while in the softer
cases it acquires a more complex structure, related to the presence of a different number of
solitons in the system. The corrections in Fig.~\ref{Fig3}(b) highlight how the different modes
(besides the first one, as previously noted) are related to different relative weights in these
soliton distributions, representative of the different dynamics of each regime:
while in the stiff case the few defects merely slide through the island during the motion,
leaving their population unchanged, for the softer islands the stick-slip motion is achieved through
the creation of new solitons at the edges and their relatively fast propagation, leading to a complex
time dependence of their population.
\subsection{Single particle positions}
To gain additional insight in the nature of the slow dynamical modes we
now consider the probability distribution of the position of a single
particle (not on the border)
as a function of its position $x$ and $y$, modulus the periodicity of the substrate.
The steady state positional distribution of (Fig.~\ref{Fig4}(a)) again shows
how the increase of $U_0$ leads from a smooth distribution over the continuous
transition path from a minimum to the next, to an increasingly
peaked distribution in the potential minima. Therefore if for $K/U_0=2$
the particle performs a rather smooth
zig-zag path between successive potential minima,
In the intermediate case ($K/U_0=0.5$) these positions
are much more probable, eventually becoming dominant
for $K/U_0=0.2$ where the distribution reduces essentially to sharp peaks.
The excited state effect on the particle position distribution is
shown in Fig.~\ref{Fig4}(b).
While the first excitation is clearly related to the single period shift,
as mentioned earlier, the second excited state
shows that the particle jumps among successive minima in the zig-zag path.
This shorter periodicity was not visible in the previous
observables as it is not shared by all particles. The third excited state,
finally, reflects the particle position
probability perturbation caused by the ``slip'' events, which are
characteristic and strong for the softer island, as in the previous analysis.
\subsection{Work distribution}
As a final observable, relevant to the description of a frictional model, we consider
the instantaneous work done on the system by the external force in a single
timestep $\tau$:
$W_t = \kappa\sum_l (v_{\rm ext}t-x_l^t) (x_l^{t+\tau}-x_l^t)$,
shown in Fig.~\ref{Fig5}(a).
(Notice that this quantity depend on the successive position at timestep
$t$ and $t+\tau$).
The steady-state work distribution $P_{ss}(W)$ is centered on $ \langle W \rangle $,
a value evolving from near zero to larger values as one goes from $K/U_0=2$ to $K/U_0=0.2$.
At the same time $P_{ss}(W)$ develops an increasing asymmetry
with a broader and broader tail around positive values of work.
Both features are related to the increase of dissipation as the substrate
corrugation increases.
As for the previous observables, this steady-state level information is straight
from the simulation and does not need the MSM analysis.
Now however we can examine what the excitations do.
As seen in Fig.~\ref{Fig5}(b) for $K/U_0=2$ the excitations show just noise, which tells
us that the slider moves as a whole, as characteristic of the superlubric sliding in this regime.
The notable exception is however the second excitation, showing a forward jump. This marginal
stick-slip behaviour is actually due to the weak but nonzero pinning caused by the island edge
that hinder the entrance and exit of solitons~\cite{varini2016}, a subtle but real feature
which in this case is efficiently and unbiasedly uncovered through this excitation.
As we move towards smaller and smaller $K/U_0$ and the island softens, all excitations
gradually come into play.
In the final stick--slip regime, modifications in the
soliton structure are strongly related to an increase in the positive tail of the work
distribution, highlighting the mechanism behind the increased friction coefficient.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{collective5.pdf}
\caption{\label{Fig5}
(a) Steady state probability distribution of work
for the three regimes. In this case we applied a running average to the data, both
for the steady--state distribution and the corrections.
(b) The perturbations $g_i$ (see Eq.~\ref{EqForgi}) estimated for the first three
excited states. The observable $O$ in Eq.~\ref{EqForgi} is here the work.
To compute $P(O | \alpha)$ 100 intervals have been chosen.
}
\end{figure}
\section{Conclusions}
The Markov State Model method, so far developed for the equilibrium
evolution of large--scale molecular systems, can be naturally extended
to non-equilibrium dynamics under the action of external forces. Among
non--equilibrium phenomena, the physics of sliding friction is in
bad need of a description, with coarse--grained variables and their
time evolution constructed in the least prejudiced manner. We have
shown here that application of this technique to a realistic model
involving a mesoscopically large sliding system is possible and fruitful.
Three important conclusions that were not a priori granted deserve
being underlined. The first is that no particularly clever or savvy
choice of the metric is necessary: the very naive choice of considering
the distance between all the particles of the sliding island works
very well. Since the metric is so simple, the kinetic model that is
obtained is likely to be accurate. The second, and equally remarkable
result is that despite many thousands of atomistic degrees of freedom,
the procedure allows selecting just very few slow variables, automatically
eliminating all other fast irrelevant variables. The third is that
the slow variables, once examined at the end, are found to make a
lot of sense when confronted with the actual frictional physics of
the system, be it superlubric or stick--slip. These gratifying bottomlines
provide a strong encouragement towards the future use of the MSM for
the theoretical description of sliding friction.
\section*{ACKNOWLEDGMENTS}
Work carried out under ERC Advanced Research Grant N.~320796 -- MODPHYSFRICT. COST Action
MP1303 is also acknowledged. Early discussions with Fran\c{c}ois Landes are gratefully acknowledged.
|
2,869,038,154,959 | arxiv |
\section{\@startsection{section}{1}{\z@}%
{-21dd plus-4pt minus-4pt}{10.5dd plus 4pt
minus4pt}{\normalsize\bfseries\boldmath}}
\def\subsection{\@startsection{subsection}{2}{\z@}%
{-21dd plus-4pt minus-4pt}{10.5dd plus 4pt
minus4pt}{\normalsize\itshape}}
\def\subsubsection{\@startsection{subsubsection}{3}{\z@}%
{-13dd plus-4pt minus-4pt}{-5.5pt}{\normalsize\itshape}}
\def\paragraph{\@startsection{paragraph}{4}{\z@}%
{-13pt plus-4pt minus-4pt}{-5.5pt}{\normalsize\itshape}}
\makeatother
\newcommand{\section}{\section}
\newtheorem{thm}{Theorem}[section]
\newtheorem{prop}[thm]{Proposition}
\newtheorem{lem}[thm]{Lemma}
\newtheorem{cor}[thm]{Corollary}
\theoremstyle{definition}
\newtheorem{defn}[thm]{Definition}
\newtheorem{ex}[thm]{Example}
\newcommand{\institute}[1]{#1}
\newcommand{\email}[1]{\href{mailto:#1}{#1}}
\newenvironment{acknowledgement}{\small \setlength{\parindent}{0pt}\emph{Acknowledgements.}}{}
\newenvironment{myproof}{\begin{proof}}{\end{proof}}
\newenvironment{beweis}{\textbf{Beweis:}}{\hspace*{\fill} $\Box$}
\newenvironment{beweisvon}[1]{\textbf{Beweis von #1:}}{\hspace*{\fill} $\Box$}
\newcommand{\hspace*{\fill} \ensuremath{\Box}}{\hspace*{\fill} \ensuremath{\Box}}
\newenvironment{tablist}[1]
{\begin{list}{}{
\renewcommand{\makelabel}[1]{##1\hfill}
\settowidth{\labelwidth}{#1}
\setlength{\leftmargin}{\labelwidth}
\addtolength{\leftmargin}{\labelsep}}
}{\end{list}}
\newcommand{\commdiag}[2]
{\hspace{8mm}\begin{xy}
\xymatrix#1{#2}\end{xy}}
\newcommand{\glossaryentry}[2]{\item#1 \dotfill\; #2}
\renewcommand{\bar}{\overline}
\newcommand{{\bar{\ensuremath{\mathds{Q}}}}}{{\bar{\ensuremath{\mathds{Q}}}}}
\newcommand{\varepsilon}{\varepsilon}
\newcommand{\ensuremath{\mathds{N}}}{\ensuremath{\mathds{N}}}
\newcommand{\ensuremath{\mathds{R}}}{\ensuremath{\mathds{R}}}
\newcommand{\ensuremath{\mathds{Q}}}{\ensuremath{\mathds{Q}}}
\newcommand{\ensuremath{\mathds{Z}}}{\ensuremath{\mathds{Z}}}
\newcommand{\ensuremath{\mathds{C}}}{\ensuremath{\mathds{C}}}
\newcommand{\ensuremath{\mathds{F}}}{\ensuremath{\mathds{F}}}
\renewcommand{\H}{\ensuremath{\mathds{H}}}
\renewcommand{\P}{\ensuremath{\mathds{P}}}
\renewcommand{\L}{\ensuremath{\mathcal{L}}}
\newcommand{\ensuremath{\mathfrak{M}}}{\ensuremath{\mathfrak{M}}}
\newcommand{\ensuremath{\mathcal{B}}}{\ensuremath{\mathcal{B}}}
\newcommand{\ensuremath{\mathscr{K}}}{\ensuremath{\mathscr{K}}}
\newcommand{\ensuremath{\mathcal{A}}}{\ensuremath{\mathcal{A}}}
\newcommand{\ensuremath{\mathcal{G}}}{\ensuremath{\mathcal{G}}}
\newcommand{\ensuremath{\mathcal{T}}}{\ensuremath{\mathcal{T}}}
\newcommand{\ensuremath{^{\mathrm{nt}}}}{\ensuremath{^{\mathrm{nt}}}}
\newcommand{\ensuremath{^{\mathrm{nc}}}}{\ensuremath{^{\mathrm{nc}}}}
\newcommand{\ensuremath{\mathscr{F}}}{\ensuremath{\mathscr{F}}}
\newcommand{\ensuremath{\mathscr{N}}}{\ensuremath{\mathscr{N}}}
\newcommand{\ensuremath{\mathscr{T}}}{\ensuremath{\mathscr{T}}}
\newcommand{\ensuremath{\mathscr{C}}}{\ensuremath{\mathscr{C}}}
\newcommand{\ensuremath{\mathscr{D}}}{\ensuremath{\mathscr{D}}}
\newcommand{\ensuremath{\mathscr{W}}}{\ensuremath{\mathscr{W}}}
\newcommand{\ensuremath{\mathscr{H}}}{\ensuremath{\mathscr{H}}}
\newcommand{\ensuremath{\mathcal{T}}}{\ensuremath{\mathcal{T}}}
\newcommand{\ensuremath{\mathcal{M}}}{\ensuremath{\mathcal{M}}}
\newcommand{\ensuremath{\mathcal{H}}}{\ensuremath{\mathcal{H}}}
\newcommand{\ensuremath{\mathcal{N}}}{\ensuremath{\mathcal{N}}}
\newcommand{\ensuremath{\mathcal{C}}}{\ensuremath{\mathcal{C}}}
\newcommand{\ensuremath{{\boldsymbol{d}}}}{\ensuremath{{\boldsymbol{d}}}}
\newcommand{\ensuremath{{\boldsymbol{g}}}}{\ensuremath{{\boldsymbol{g}}}}
\newcommand{\ensuremath{{\boldsymbol{n}}}}{\ensuremath{{\boldsymbol{n}}}}
\newcommand{\open}[1]{\overset{\circ}{#1}}
\DeclareMathOperator{\PGL}{PGL}
\DeclareMathOperator{\PSL}{PSL}
\DeclareMathOperator{\GL}{GL}
\DeclareMathOperator{\SL}{SL}
\DeclareMathOperator{\SO}{SO}
\DeclareMathOperator{\ord}{ord}
\DeclareMathOperator{\Aut}{Aut}
\DeclareMathOperator{\Aff}{Aff}
\DeclareMathOperator{\Gal}{Gal}
\DeclareMathOperator{\Bild}{Bild}
\DeclareMathOperator{\Kern}{Kern}
\DeclareMathOperator{\im}{Im}
\DeclareMathOperator{\id}{id}
\DeclareMathOperator{\Tor}{Tor}
\DeclareMathOperator{\Rang}{Rang}
\DeclareMathOperator{\ggT}{ggT}
\DeclareMathOperator{\kgV}{kgV}
\DeclareMathOperator{\cha}{char}
\DeclareMathOperator{\Deck}{Deck}
\DeclareMathOperator{\rad}{rad}
\DeclareMathOperator{\dist}{dist}
\DeclareMathOperator{\grad}{grad}
\DeclareMathOperator{\Hom}{Hom}
\DeclareMathOperator{\Epi}{Epi}
\DeclareMathOperator{\Homeo}{Homeo}
\DeclareMathOperator{\Inn}{Inn}
\DeclareMathOperator{\Out}{Out}
\DeclareMathOperator{\Gen}{Gen}
\DeclareMathOperator{\Stab}{Stab}
\DeclareMathOperator{\Pic}{Pic}
\DeclareMathOperator{\Ind}{ind}
\DeclareMathOperator{\Diffeo}{Diffeo}
\DeclareMathOperator{\der}{der}
\DeclareMathOperator{\re}{Re}
\DeclareMathOperator{\Sym}{Sym}
\newcommand{\text{ab}}{\text{ab}}
\newcommand{\text{aff}}{\text{aff}}
\newcommand{\leftarrow}{\leftarrow}
\newcommand{\Leftarrow}{\Leftarrow}
\newcommand{\rightarrow}{\rightarrow}
\newcommand{\Rightarrow}{\Rightarrow}
\newcommand{\leftrightarrow}{\leftrightarrow}
\newcommand{\Leftrightarrow}{\Leftrightarrow}
\newcommand{\hookrightarrow}{\hookrightarrow}
\newcommand{\twoheadrightarrow}{\twoheadrightarrow}
\newcommand{\cdot}{\cdot}
\newcommand{\restrict}[1]{|_{#1}}
\renewcommand{\OE}{\raisebox{0.1pt}{o}\hspace{-2pt}{\scriptsize E} }
\newcommand{\lightning}{\lightning}
\newcommand{\A^0_{\infty}}{\ensuremath{\mathcal{A}}^0_{\infty}}
\newcommand{\vartriangleleft}{\vartriangleleft}
\newcommand{\set}[1]{\left\{#1\right\}}
\newcommand{\abs}[1]{\left|#1\right|}
\newcommand{\smallabs}[1]{\bigl|#1\bigr|}
\newcommand{\smallset}[1]{\bigl\{#1\bigr\}}
\newcommand{\textabs}[1]{|#1|}
\newcommand{\comm}[1]{\left[#1,#1\right]}
\newcommand{\ind}[1]{\left[#1\right]}
\newcommand{\vcorr}[1]{\parbox[b][#1pt]{0pt}{}}
\newcommand{\defi}[1]{\emph{#1}\index{#1}}
\newcommand{\mathdefi}[2]{#1\index{#2@\protect{#1}}}
\newcommand{\defindex}[2]{\emph{#1}\index{#2}}
\newcommand{\zitat}[2]{\cite{#2}, #1}
\newcommand{\mat}[1]{\begin{pmatrix}#1\end{pmatrix}}
\newcommand{\smat}[1]{\left(\begin{smallmatrix}#1\end{smallmatrix}\right)}
\newcommand{\vect}[2]{\mat{#1 \\ #2}}
\newcommand{\gen}[1]{\left<#1\right>}
\newcommand{\mattwo}[4]{\left(\begin{matrix}#1 & #2 \\ #3 & #4\end{matrix}\right)}
\newcommand{\topfrac}[2]{{#1\!/\!#2}}
\newcommand{\thispagestyle{myheadings}\markright{}}{\thispagestyle{myheadings}\markright{}}
|
2,869,038,154,960 | arxiv | \section*{Supplementary Information}
\section{Experimental details}
We started our experiment with a degenerate two-dimensional gas of rubidium-$87$ atoms,
spin polarized in the hyperfine state $\ket{F,~m_F}=\ket{1,-1}$ and held in a single
confining antinode of an optical lattice along the $z$-direction.
We then switched on a square optical lattice with $\alat=532\,$nm spacing in
this single $x$-$y$-plane, preparing about $170$ atoms in a unity-filling Mott
insulator. For the experimental sequence, we ramped the optical lattices along
the $(x,y,z)$-directions to depths of $(40,40,80)E_r$, where
$E_r=\frac{h^2}{8m\alat^2}$ is the recoil energy for rubidium-$87$ in our
lattice. In this regime, the spatial extent of the Wannier function of
an atom in a lattice site as well as tunnelling between the sites on the
timescale of the experiment can be neglected and the Mott insulator served as a well controlled
starting state for our experiments with a single spin per site.
Employing an addressing scheme based on a digital mirror device
~\cite{Weitenberg2011a,Fukuhara2013}, we selected a single line of $10$
atoms aligned with the $x$-direction in the lattice for our experiment,
optically removing all other atoms from the trap. The filling of $87(3)\%$
of a single line is determined by infidelities in the single atom addressing
scheme and the filling of the initial Mott insulator ($90\%$ and $97\%$ respectively).
In order to introduce long-range interactions, the state $\ket{F=2,~m_F=-2}$ was
optically coupled to the $31P_{1/2}$, $\ket{J=1/2,~m_J=+1/2}$ Rydberg state.
The coupling laser at a wavelength of $298\,$nm was $\sigma_+$-polarized and
propagated in the plane of the atoms at an angle of $45^{\circ}$
with the initially prepared chain of atoms.
We used approximately $77(10)\,$mW of uv-light, focused down to a
waist of $18(3)\,\mu$m to achieve a Rabi frequency of $\Omega/2\pi=3.57(3)\,$MHz
on the Rydberg transition. The Rabi frequency was calibrated by measuring the
AC-Stark shift $\delta=\frac{\Omega^2}{4\Delta}$ of the dressed ground state
$\ket{2,-2}$ due to the Rydberg dressing laser for different detunings
$\Delta$~\cite{Zeiher2016a}.
A magnetic field of strength $B_{xy}=0.405\,$G was applied in the plane of the atoms
along the excitation laser to set a well defined quantization axis for the
dressing scheme and the ground state spin basis.
\begin{figure}
\centering
\includegraphics{FigS1_Gross.pdf}%
\caption{\label{fig:5}
\textbf{Experimental pulse sequence.}
Microwave pulse power and dressing pulse power recorded using a photodiode, normalized to the maximum, are shown in blue and green. The role of the pulses in the experimental sequence was the preparation of all spins in an equal superposition of the states $\ket{1,-1}$ and $\ket{2,-2}$ in an eigenstate of $\hat{S}^y$ ($1$), followed by a first dressing interval of time $t/2$ ($2$). A spin echo pulse ($3$) was used to cancel phases acquired by every spin due to the dressing laser light shift after the second dressing interval of time $t/2$ ($4$). The final spin-readout along the $S^y$-direction was realized by a global spin rotation ($5$) identical to the one used for the preparation. To ensure the same decoherence due to magnetic field noise for measurements with different dressing times $t$, the time between the two $\pi/2$ pulses ($1,5$) was kept constant.
The microwave pulses coupling $\ket{1,-1}$ and $\ket{2,-2}$ with an area $\pi/2$ and $\pi$ lasted $22\,\mu$s and $44\,\mu$s respectively. The dephasing time due to drifts of the global magnetic field was $2.8(4)\,$ms}
\end{figure}
\begin{figure*}
\centering
\includegraphics{FigS2_Gross.pdf}%
\caption{\label{fig:6}
\textbf{Spin-resolved detection of spin chains.}
To detect the number of spins in a chain in the states $\ket{1,-1}$ and $\ket{2,-2}$, a magnetic field gradient was applied in the plane of the atoms along the $y$-direction, orthogonal to the initially prepared chain, to separate the two components due to their different magnetic moments. The main figure shows the density of atoms obtained after this procedure as an average over many experimental shots. The panel on the right shows the average of the density along the $y$-direction for atoms in $\ket{1,-1}$ and $\ket{2,-2}$ (red and blue, respectively). The gray shading marks the region of interest chosen for extracting the number of atoms in the two states. Its width of $10$ sites matches the width of the initially addressed chain. The lower panel equivalently shows the density averaged over the $x$-direction. The gray vertical line marks the cut to distinguish between atoms in $\ket{1,-1}$ (to the right) and $\ket{2,-2}$ (to the left). The position of the cut was obtained by minimizing the probability of false positive detection events for spin polarized initial states. For the chosen cut, we estimate the probability of an atom in $\ket{1,-1}$ to be falsely counted as a $\ket{2,-2}$ atom to be approximately $2\%$. The slight shift to the left of the central minimum is due to atoms in $\ket{1,-1}$ which were not perfectly removed in the preparation sequence of the single line. All results obtained in the main text are insensitive to small shifts of the separating line by $\pm\alat$.}
\end{figure*}
\subsection{Pulse sequence}
The experimental sequence for measuring the collapse and revival dynamics is shown in Fig.~\ref{fig:5}. The cancellation of the dressing laser induced AC-Stark shift was of paramount importance for the successful detection of collapse and revivals in our experiment because it can, if not controlled, lead to a fast dephasing of the spin dynamics. This was accomplished by sandwiching the two dressing phases of duration $t/2$ in a microwave echo sequence, where the single particle shifts acquired during the two dressing intervals before and after the spin-echo pulse are cancelled, provided the two dressing pulse areas are equal. The total time between the two microwave pulses of area $\pi/2$ was kept constant to ensure the same effect of magnetic field decoherence for all dressing times $t$.
For our parameters $\Omega/2\pi=3.57(1)\,$MHz, $\Delta/2\pi=11.00(2)\,$MHz, the dressing laser induced AC-Stark shift amounted to $\delta/2\pi\approx290\,$kHz, exceeding the nearest-neighbor interaction $|U_0|=13.1(5)\,$kHz by a factor of $22$. Hence, for the revival at $t=5/U_0$, a small fluctuation of the dressing pulse areas between the two intervals of $1\%$ already leads to a detrimental phase error of more than $\pi$ in the global spin. The detection of the revival in our experiments shows that the two areas of the dressing laser pulses shown in Fig.~\ref{fig:5} cancelled to a better degree in our experiment.
Nevertheless, to eliminate experimental runs with excessive pulse fluctuations, we monitored the dressing pulses for each experimental run and only used those shots for the analysis, where the relative difference of the two pulse areas was smaller than $1\%$ and the absolute height of the pulse deviated from the mean of all pulse heights by less than $2.5\%$. This eliminated $5\%$ and $7\%$ of the dataset, respectively.
In addition, we used the recorded total pulse area to rescale the dressing time for a run $k$ by the relation $t_k=(h_k/\bar{h})^2\tilde{t}$, where $h_k$ is the intensity measured by the photo diode for shot $k$, $\bar{h}$ is the mean pulse height averaged over all experimental shots ($3880$ in total) and $\tilde{t}$ denotes the set length of the pulses.
\subsection{Spin-resolved detection}
\begin{figure}
\centering
\includegraphics{FigS3_Gross.pdf}%
\caption{\label{fig:7}
\textbf{Lifetime of the dressed spin chain.}
Measured atom number after variable dressing time $t$ in the echo sequence. The lifetime extracted from an exponential fit is $\tau=1.20(4) \,$ms. Error bars denote the s.e.m. and are smaller than the size of the symbols.}
\end{figure}
After the second dressing interval, we applied a microwave $\pi/2$-pulse to rotate the spins to the measurement basis in the $S^z$-direction. Here, the populations of the two eigenstates $\ket{1,-1}$ and $\ket{2,-2}$ can be detected due to their different magnetic moments. To achieve the latter, we allowed tunnelling orthogonal to the initially prepared chain by ramping down the $y$-lattice. At the same time, we adiabatically ramped up a magnetic field gradient in the plane of the atoms. This led to a separation of the two spin states along the $y$-direction, see Fig.~\ref{fig:6}. After the atoms had settled to their respective new equilibrium position, we ramped up the $y$-lattice, followed by a fluorescence image to obtain the site-resolved occupation of each lattice site~\cite{Sherson2010}.
The splitting procedure allowed the detection of the spin state of an atom with approximately $98(1)\%$ and was mainly limited due to residual atoms in the initial two-dimensional Mott insulator which had not been removed in the addressing scheme and whose distribution had a small overlap with the $\ket{2,-2}$ spins after the splitting, see Fig.~\ref{fig:6}.
\begin{figure*}
\centering
\includegraphics{FigS4_Gross.pdf}%
\caption{ \label{fig:8}
\textbf{Simulation of spatially resolved magnetization dynamics for nearest-neighbor Ising interactions.}
\textbf{(A)} Theoretically calculated evolution of the probability (color coded) for a spin at a given lattice site (labelled from $1$ to $10$) to point in the initially prepared $S^y$-direction along $\ket{\leftarrow}$ for each spin versus dressing time $t$ in a defect-free chain of $N=10$ atoms. The out-of-phase time evolution of the edge spin is clearly visible. All bulk spins display periodic revivals at times $t=n/U_0$. \textbf{(B)} shows representative traces as cuts of (A) for a spin at the edge of the sample (red) and a spin in the bulk (blue), illustrating the different dynamics. \textbf{(C)} Probability for a spin to point in the initial $S^y$-direction along $\ket{\leftarrow}$ averaged over the full chain. Due to the different edge spin dynamics, every second revival is suppressed. \textbf{(D)} The frequency differences $\Delta\nu$ of the many-body eigenstates of the model are multiples of the nearest-neighbor interaction strength $U_0=-13.1(5)\,$kHz (gray), but only few are relevant for the dynamics of the transverse magnetization density (blue bars).}
\end{figure*}
\section{Lifetime and coherence of the Rydberg-dressed spin chain}
Due to the admixture of the Rydberg state with lifetime $\tau_R$ to the ground state $\ket{2,-2}$, the latter acquires a finite effective lifetime $1/\gamma=\tau_R/\beta^2$, where $\beta=\Omega/2\Delta$ denotes the admixed amplitude of the Rydberg state to the ground state. For our parameters $\beta^2=2.63(5)\%$ and the literature value $\tau_R=27\,\mu$s~\cite{Beterov2009}, we expect $1/\gamma=1.04(2)\,$ms under ideal conditions.
The decoherence induced by coupling to the Rydberg state results predominantly in loss from the trap to an unobserved state with rate $\gamma_0\approx\gamma$~\cite{Zeiher2016a}.
Experimentally, we directly obtain access to $\gamma_0$ in our system by measuring the time evolution of the total atom number $N$ in the experimental sequence used to study the magnetization dynamics. It is expected to decay exponentially by~\cite{Zeiher2016a}
\begin{equation}
N(t)= N_0~e^{-\gamma_0t/2}\equiv N_0~e^{-t/\tau}.
\end{equation}
The factor of one half appearing in the exponent is a consequence of a reduced $\tilde{\beta}=\beta/\sqrt{2}$ for the states in the $S^y$-$S^x$ (equator) plane of the Bloch sphere. There, each state is an equal superposition of dressed and undressed ground state, only differing by a relative phase.
The measured atom number decay is shown in Fig.~\ref{fig:7}. An exponential fit to the data captures the decay well and yields a time constant of $\tau=1.20(4)\,$ms, reaching $60\%$ of the theoretically expected value $2/\gamma_0$ under ideal conditions.
The time evolution of the expectation value of the parity operator, $\langle\hat{P}\rangle = \langle e^{-i\pi\sum
\limits_{i=1}^{N}{\hat{S}^{\rightarrow}_i}} \rangle$, sheds additional light on decoherence processes present in the system. For short times $t/\tau\ll1$ where the atom number is close to its initial value $N_0$ and where we experimentally probed the parity, it can be shown following the formalism used in~\cite{Foss-Feig2013, Zeiher2016a} that $\langle\hat{P}\rangle$ decays as
\begin{equation}
\langle \hat{P}(t)\rangle= P_0~e^{-\frac{N}{2}(\gamma_0+\gamma_{\downarrow}+\gamma_{\uparrow})t}\equiv P_0~e^{-t/\tau_P}.
\end{equation}
In addition to decoherence by the above discussed atom loss, also spin flips from the dressed ground state $\ket{2,-2}$ to the undressed ground state $\ket{1,-1}$ with rate $\gamma_{\downarrow}$, and dephasing of the dressed state with rate $\gamma_{\uparrow}$ contribute to the decay of the parity.
Interestingly, the number of atoms $N$ in the system appears as a scaling factor in the exponent, making the parity a very sensitive probe for all three decoherence processes.
From the measurement (see Fig.~\ref{fig:4}) we extract $\tau_P/\tau\approx 1/N$, from which we conclude that $\gamma_{\downarrow}$ and $\gamma_{\uparrow}$ only contribute insignificantly to the total decoherence of the system.
This is also consistent with our observation that the magnetization density shows revival dynamics even when a fraction of the atoms has been lost, which indicates that no excessive dephasing is present in the system.
The achieved long atom number lifetime $\tau$ in our experiment shows that collective decay effects limiting the achievable coherence times in two dimensions~\cite{Zeiher2016a} seem to be strongly suppressed in the 1d system. We experimentally checked the decay time $\tau$ for different detunings and verified the absence of collective losses for detunings as small as $\Delta/2\pi=4\,$MHz. This promises far longer coherence times when working closer detuned to resonance, as the interaction-lifetime product increases and the influence of first order AC-Stark shifts decreases relative to the interactions strength with decreasing detuning.
\begin{figure*}
\centering
\includegraphics{FigS5_Gross.pdf}%
\caption{ \label{fig:9}
\textbf{Spatially resolved magnetization dynamics for Rydberg-dressed interactions.}
\textbf{(A)} Theoretically calculated evolution of the probability (color coded) for a spin at a given lattice site (labelled from $1$ to $10$) to point in the initially prepared $S^y$-direction along $\ket{\leftarrow}$ for each spin versus dressing time $t$ in a defect-free chain of $N=10$ atoms. Contrary to the nearest-neighbor interacting Ising chain, the bulk spin-dynamics is modulated by the next-nearest-neighbor interaction $U(2\alat)\approx U_0/5$. Near the edge, spins show different temporal dynamics and now also the spins further in the chain are affected by the edge. Above, a zoom into the initial part of the dynamics, indicated by the gray area, shows the measured site-resolved probability for a spin at a given lattice site to point in the initially prepared $S^y$-direction along $\ket{\leftarrow}$ versus dressing time $t$ (upper panel) and the corresponding theoretical expectation for a defect-free initial chain using the same binning as for the experimental data (lower panel).
\textbf{(B)} illustrates the dynamics with single spin time traces for an edge spin (red), the spin neighboring the edge spin (green) and spins in the bulk (blue), all taken as cuts of (A). \textbf{(C)} The average global spin now displays an interaction-induced collapse after two weak, bulk driven revivals at $t=1/U_0$ and $t=2/U_0$, before reviving at $t=5/U_0$ and later at $t=10/U_0$. \textbf{(D)} The frequency differences $\Delta\nu$ of the many-body eigenstates of the long-range interacting chain (gray bars) and those relevant for the dynamics of the transverse magnetization density (blue bars). Compared to the nearest-neighbor interacting Ising chain, many more frequencies contribute.}
\end{figure*}
\section{Dynamics including the spin echo}
In this section, a link between the time evolution including a spin echo and the evolution in a rotating frame of reference is established. To this end, we first consider the exact model realized in the experiment, where only spins in the optically dressed state $\ket{2,-2}$ interact. It is given as
\begin{equation}
\label{eq:S1}
\hat{H_a}=h\sum_{i\neq j}^{N}\frac{U(d_{ij})}{2}~\hat{\sigma}^e_i \hat{\sigma}^e_j.
\end{equation}
Here, $\hat{\sigma}^e_i = \ket{e}\bra{e}$ measures the occupation of the dressed state $\ket{e}=\ket{2,-2}$ and $U(d_{ij})$ with $d=|i-j|$ denotes the dressed interaction potential between two dressed atoms in state $\ket{e}$ at sites $i$ and $j$ respectively.
This atomic model can be rewritten in terms of an Ising model~\cite{Schachenmayer2010,Schauss2015}, by introducing the spin operators $\hat{S}^z_i=\frac{1}{2}(2\hat{\sigma}^e_i-\mathbb{I})$, yielding
\begin{equation}
\label{eq:S2}
\hat{H}_{\mathrm{rot}}=h\sum_{i\neq j}^{N}\frac{U(d_{ij})}{2}~\hat{S}^z_i \hat{S}^z_j + h\sum_{i}^{N} \Delta^{\mathrm{(coll)}}_i\hat{S}^z_i \equiv \hat{H} + \hat{H}_s.
\end{equation}
The term $\Delta^{\mathrm{(coll)}}_i=\sum_{i\neq j}^{N} \frac{U(d_{ij})}{2}$ is picked up additionally in this transformation and acts as an additional longitudinal magnetic field, captured by the Hamiltonian $\hat{H}_s$.
It is linear in the spin operator $\hat{S}^z_i$ and could hence be eliminated in a rotating frame of reference generated by the unitary operator $\hat{U}=e^{-i 2\pi t\sum_{i}^{N} \Delta^{\mathrm{(coll)}}_i\hat{S}^z_i}=\bigotimes_i^{N} e^{-i2\pi t\Delta^{\mathrm{(coll)}}_i\hat{S}^z_i} = \bigotimes_i^{N} \hat{U}_i$. However, due to the spatial dependence on $i$ in a finite system, local control of the rotation operators $\hat{U}_i$ would be required.
An alternative, experimentally more tractable strategy for finite systems is the introduction of spin echo pulses of the form $\Pi=\bigotimes_j^N e^{-i\pi\hat{S}^x_j}$. These spin echoes can be implemented by a global microwave pulse, applied after half of the evolution time $t$. They effectively invert the roles of $\ket{\uparrow}$ and $\ket{\downarrow}$ and hence eliminate the effects of terms linear in $\hat{S}^z_i$ during the time evolution, leaving the part $\hat{H}$ of the Hamiltonian with the spin-spin interactions as the only drive for the dynamics.
In order to prove that the time evolutions of $\hat{H}_{\mathrm{rot}}$ and $\hat{H}$ are equivalent if a spin echo is applied after an evolution time $t/2$, we evaluate
\begin{align*}
e^{-i t\hat{H}_{\mathrm{rot}}/2\hbar}~\Pi~e^{-i t\hat{H}_{\mathrm{rot}}/2\hbar}
&= e^{-i t(\hat{H}+\hat{H}_s)/2\hbar}~\Pi~e^{-i t(\hat{H}+\hat{H}_s)/2\hbar}\\
&=\Pi~e^{-i t(\hat{H}-\hat{H}_s)/2\hbar} e^{-i t(\hat{H}+\hat{H}_s)/2\hbar}\\
&=\Pi~e^{-i t\hat{H}/\hbar}.
\end{align*}
In the second step we have used the anti-commutation relation for the spin operators at the same site $i$, $\{\hat{S}^z_i,\hat{S}^x_i\}=0$, which implies $\Pi^{-1}\hat{H}_s\Pi=-\hat{H}_s$ while leaving $\hat{H}$ invariant; the final step uses that $\hat{H}$ and $\hat{H}_s$ commute, as both are diagonal in the $\hat{S}^z$ eigenbasis.
This shows that the time evolution of the system under $\hat{H}_{\mathrm{rot}}$ is equivalent to the one under $\hat{H}$, up to the spin echo pulse, which merely amounts to a global phase and a redefinition of the measurement basis and has no influence on the dynamics.
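The operator identity derived above is also easy to verify numerically for small systems. The following minimal sketch (illustrative only, not from the original analysis; it sets $\hbar=1$ so that $h=2\pi$, and uses an arbitrary power-law interaction, since the identity holds for any $U(d)$) checks $e^{-i t\hat{H}_{\mathrm{rot}}/2}\,\Pi\,e^{-i t\hat{H}_{\mathrm{rot}}/2}=\Pi\,e^{-i t\hat{H}}$ for $N=4$ spins:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

N, U0 = 4, 1.0
sz = np.diag([0.5, -0.5])
sx = np.array([[0.0, 0.5], [0.5, 0.0]])

def site_op(op, i):
    # Embed a single-site operator acting on site i into the N-spin space.
    out = np.array([[1.0]])
    for k in range(N):
        out = np.kron(out, op if k == i else np.eye(2))
    return out

Sz = [site_op(sz, i) for i in range(N)]
U = lambda d: U0 / d**6  # illustrative; any U(d) works for this identity

# H: spin-spin part; Hs: collective longitudinal field (hbar = 1, h = 2*pi)
H = sum(2*np.pi*U(j - i) * Sz[i] @ Sz[j]
        for i in range(N) for j in range(i + 1, N))
Hs = sum(2*np.pi*sum(U(abs(i - j))/2 for j in range(N) if j != i) * Sz[i]
         for i in range(N))
Pi = expm(-1j*np.pi*sum(site_op(sx, i) for i in range(N)))  # global pi-pulse

t = 0.73 / U0
lhs = expm(-1j*t*(H + Hs)/2) @ Pi @ expm(-1j*t*(H + Hs)/2)
rhs = Pi @ expm(-1j*t*H)
print(np.max(np.abs(lhs - rhs)))  # ~1e-14: the echo removes H_s exactly
\end{verbatim}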
\section{Spatially resolved dynamics of the revivals}
For the system studied in our experiment, the expected spin dynamics can be calculated exactly. This allows one to obtain expectation values for the mean spin and for spin-spin correlations, as well as the spatial structure of the revivals~\cite{Foss-Feig2013,Richerme2014,Zeiher2016a}. As mentioned in the main text, the expectation value for a spin at a site $j$ can be evaluated to yield
\begin{equation}
\label{eq:S3}
\langle\hat{S}^y_j(t)\rangle=\frac{1}{2}\prod\limits_{i\neq j}^{N}\cos(\pi U(d_{ij})t).
\end{equation}
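Equation~\eqref{eq:S3} is cheap to evaluate directly. The following sketch (purely illustrative, not part of the original analysis) assumes a soft-core dressed potential $U(d)=U_0\,(1+R_c^{-6})/(1+(d/R_c)^6)$ with $R_c=(59/4)^{1/6}\,\alat\approx 1.57\,\alat$, a hypothetical parametrization chosen so that $U(\alat)=U_0$ and $U(2\alat)=U_0/5$ as quoted in the caption of Fig.~\ref{fig:9}, and reproduces the qualitative structure of the site-resolved magnetization and its spatial average:
\begin{verbatim}
import numpy as np

N, U0 = 10, 1.0
Rc = (59/4)**(1/6)  # soft-core radius (lattice units) fixing U(2)/U(1) = 1/5
U = lambda d: U0*(1 + Rc**-6)/(1 + (d/Rc)**6)

def sy(j, t):
    # <S^y_j(t)> from Eq. (S3) for a defect-free chain of N spins
    return 0.5*np.prod([np.cos(np.pi*U(abs(i - j))*t)
                        for i in range(N) if i != j])

times = np.linspace(0.0, 12.0/U0, 1200)
profile = np.array([[sy(j, t) for j in range(N)] for t in times])  # cf. Fig. 9A
mean_sy = 2*profile.mean(axis=1)   # normalized mean magnetization, cf. Fig. 9C
for tr in (1.0, 2.0, 5.0, 10.0):   # weak bulk revivals, then partial revivals
    print(tr, mean_sy[np.argmin(np.abs(times - tr/U0))])
\end{verbatim}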
The dynamics of the spin is governed by the interaction with its neighbors and the resulting beat notes due to frequency mixing. In the following, we will link this analytic result for the local magnetization dynamics to the spectral features of the Hamiltonian
\begin{equation}
\hat{H}=h\sum_{i\neq j}^{N}\frac{U(d_{ij})}{2}~\hat{S}^z_i \hat{S}^z_j.
\end{equation}
To first obtain an intuitive understanding of the dynamics, we consider the time evolution of an Ising model with interactions constrained to nearest neighbors only, $U(d_{ij})=U_0$ for $|i-j|=1$ and zero otherwise. In this case, the mean magnetization evolves with periodic revivals at times $t=n/U_0$, multiples of the interaction time $1/U_0$ (see Fig.~\ref{fig:8}C).
The frequency differences dominating the dynamics are limited to $\Delta\nu = 0, \pm U_0, \pm U_0/2$. This can be understood from the simple structure of the many-body spectrum of $\hat{H}$, whose set of eigenstates $\ket{\lambda}$ comprises all products of the single spin eigenstates $\ket{\uparrow}$ and $\ket{\downarrow}$ of $\hat{S}^z$.
The spectrum of $\hat{H}$ only allows for frequency differences $\Delta\nu=\pm nU_0/2$ with integer $n$ (Fig.~\ref{fig:8}D). The matrix element $\expect{\eta}{\hat{S}^y_j}{\lambda}$ is non-zero for those many-body eigenstates $\ket{\eta}$ and $\ket{\lambda}$ which differ by a single flipped spin from $\ket{\uparrow}$ to $\ket{\downarrow}$ at the same position $j$. The cost of such a flipped spin is measured by $\Delta\nu$ and amounts to $\Delta\nu=0$ for anti-aligned neighbors of site $j$ ($\ket{\dots\downarrow\uparrow\uparrow \dots}\longrightarrow\ket{\dots\downarrow\downarrow\uparrow \dots}$, the central spin is assumed to be at site $j$) or $\Delta\nu=\pm U_0$ if neighbors are aligned ($\ket{\dots\uparrow\uparrow\uparrow \dots}\longrightarrow\ket{\dots\uparrow\downarrow\uparrow \dots}$ or $ \ket{\dots\downarrow\downarrow\downarrow \dots} \longrightarrow\ket{\dots\downarrow\uparrow\downarrow \dots}$).
From this argument, it is also clear that the magnetization of the edge spin will evolve differently due to its different environment. This is directly visible in the spatial structure of the magnetization dynamics shown in Fig.~\ref{fig:8}A. Due to the single missing neighbor, the spin flip energy $\Delta\nu$ is only $\pm U_0/2$ and hence the oscillation is correspondingly slower than that of a bulk spin. As a consequence, the strength of every second revival of the average total magnetization of the chain is reduced, since the edge spins are out of phase.
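In this nearest-neighbor setting, Eq.~\eqref{eq:S3} makes the edge effect explicit: a bulk spin has two neighbors and an edge spin only one, so
\begin{equation*}
\langle\hat{S}^y_{\mathrm{bulk}}(t)\rangle=\frac{1}{2}\cos^2(\pi U_0 t),\qquad
\langle\hat{S}^y_{\mathrm{edge}}(t)\rangle=\frac{1}{2}\cos(\pi U_0 t).
\end{equation*}
At the revival times $t=n/U_0$ every bulk spin has returned to $\langle\hat{S}^y\rangle=1/2$, while the two edge spins take the value $(-1)^n/2$; for odd $n$ they point oppositely to the bulk, which is why every second revival of the chain-averaged magnetization is weakened.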
Similar arguments hold for the long-range interaction potential. However, here the interaction spectrum is much more complicated due to beyond-nearest-neighbor interactions, and the number of relevant frequency differences $\Delta\nu$ increases significantly, acquiring sidebands due to interactions with spins at larger distances (see Fig.~\ref{fig:9}D). Therefore, the revival dynamics is more complex. First, the fast oscillations of the bulk with periodicity $1/U_0$, showing full contrast in the nearest-neighbor Ising case, are modulated by the next-nearest-neighbor interaction. This leads to an initial damping of the magnetization revival amplitude, but allows for the partial revivals later at $t=5/U_0$ and $t=10/U_0$. Second, the edge spin and the spin next to the edge spin show different dynamics compared with the bulk, see Fig.~\ref{fig:9}B, which can again be understood from the frequencies contributing to $\langle\hat{S}^y_j\rangle$. The initial out-of-phase dynamics of the edge spin is also observed experimentally in the spatially resolved magnetization density, see Fig.~\ref{fig:9}A. Summing the time-evolution traces for the mean local magnetization $\langle\hat{S}^y_j\rangle$ yields the total collapse and revival dynamics with the characteristic features observed also in the experiment (see Fig.~\ref{fig:9}C).
As a final remark, it should be noted that the different evolution of the edge spins has dynamical features similar to those of the ``collective field'' $\Delta^{\mathrm{(coll)}}_i$~\cite{Zeiher2016a} resulting from the symmetrization of the spin-spin interaction potential. However, contrary to the collective field, it is not a mere single-particle effect and is therefore not removed by the spin-echo pulse; rather, it is a consequence of the direct spin-spin interaction and the finite system size.
\end{document}
\section{Introduction}
A ubiquitous market assumption in the literature of Stochastic Finance theory is postulating the existence of an Equivalent Local Martingale Measure (ELMM). The latter refers to a probability measure $\mathbb{Q}$, equivalent to the ``real-world'' probability $\mathbb{P}$, with the property that all discounted nonnegative wealth processes are local $\mathbb{Q}$-martingales. In view of the Fundamental Theorem of Asset Pricing (FTAP), it is quite clear why such an assumption is made from the outset: existence of an ELMM is intimately connected to market viability; in fact, it is equivalent to the economically-sound ``No Free Lunch with Vanishing Risk'' (NFLVR) condition --- see for example \cite{MR1304434} and \cite{MR1671792} for a complete treatment on the topic.
\smallskip
Stipulating the existence of an ELMM seems unavoidable in order to maintain market viability. However, in recent publications there has been considerable interest in models where an ELMM might fail to exist. These have appeared, for instance:
\begin{itemize}
\item in the context of stochastic portfolio theory, for which the survey \cite{FerKar07} is a good introduction;
\item from the financial modeling perspective, an example of which is the \textsl{benchmark approach} of \cite{MR2267213};
\item in a financial equilibrium setting, both for infinite-time horizon settings (see \cite{RePEc:ier:iecrev:v:33:y:1992:i:2:p:323-39}),
as well as finite-time horizon models with credit constraints on economic agents (see \cite{MR1774056} and \cite{MR1748373}).
\end{itemize}
The common assumption that the previous approaches share is postulating the existence of an Equivalent Local Martingale Deflator (ELMD), that is, a strictly positive process that makes all discounted nonnegative wealth processes, when multiplied by it, local martingales. (An ELMD was called a \textsl{strict martingale density} in \cite{MR1353193}; we opt here to call it ELMD as it immediately connects with the notion of an ELMM.) An ELMD is a strictly positive local martingale, but not necessarily a martingale; therefore, it cannot always be used as a density process to produce an ELMM.
\smallskip
While models where an ELMM might fail to exist are now being extensively studied, a result that would justify their applicability along the lines the FTAP has not yet appeared in the literature.
In this work, the aforementioned issue is tackled. A precise economic condition of market viability is given using the concept of \textsl{arbitrage of the first kind}, which first appeared under this appellation in \cite{Ing87}; see also \cite{MR1348197} in the context of large financial markets, as well as \cite{MR1774056}, where arbitrage of the first kind is called a \textsl{cheap thrill}. Absence of arbitrage of the first kind in the market, which we shall abbreviate as condition NA$_1$, is close in spirit to, but strictly weaker than, condition NFLVR; in fact, it is exactly equivalent to condition ``No Unbounded Profit with Bounded Risk'' (NUPBR) that appeared in \cite{MR2335830}. The main result of the present paper precisely states that in a semimartingale market model there is equivalence between condition NA$_1$ and the existence of an ELMD.
\smallskip
In the literature concerning discrete-time models, there have appeared two ways of providing a proof of the FTAP. The first one is the approach of \cite{MR1041035} (initiated in \cite{MR540823}), which utilizes convex separation functional-analytic arguments. The alternative, presented in \cite{MR1380761}, uses the economic idea that the marginal utility evaluated at the optimal terminal wealth of an economic agent, when properly scaled, defines the density of an equivalent martingale measure. The former approach has been adapted with extreme success to continuous time models in \cite{MR1304434} and \cite{MR1671792}. The present work can be seen as a counterpart of the latter approach in continuous-time markets --- here, the utility involved is logarithmic (under a suitable change of probability), and makes the reciprocal of the log-optimizer an ELMD. Interestingly enough, in continuous-time models the two approaches do not give rise to the same result; the present approach weakens the equivalent conditions of the classical FTAP in \cite{MR1304434}, both from the mathematical and the economic side. Note also that the main result of this paper can also be seen as an intermediate step in proving the general version of the FTAP as is presented in \cite{MR1304434}. In fact, this task is taken up in \cite{Kar_09_fin_add_ftap}.
\medskip
The structure of the paper is simple. In Section \ref{sec: weak version of FTAP}, the market is introduced, arbitrage of the first kind is defined and the main result is stated; its somewhat lengthy and technical proof is deferred to Section \ref{sec: proof}.
\section{Absence of Arbitrage of the First Kind and Equivalent Local Martingale Deflators} \label{sec: weak version of FTAP}
\subsection{Probabilistic remarks}
All stochastic processes in the sequel are defined on a \textsl{filtered probability space} $\left(\Omega, \, \mathcal{F}, \, (\mathcal{F}(t))_{t \in \mathbb R_+}, \, \mathbb{P}\right)$. Here, $\mathbb{P}$ is a probability on $(\Omega, \mathcal{F})$, $\mathcal{F}$ being a sigma-algebra that will make all random elements measurable. All relationships between random variables are understood in the $\mathbb{P}$-a.s. sense. The filtration $(\mathcal{F}(t))_{t \in \mathbb R_+}$ is right-continuous. We assume the existence of a financial planning horizon $T$, where $T$ is a \emph{finite} stopping time. All processes are assumed to be constant after time $T$, equal to the value they have at $T$. Without affecting the generality of the discussion, it will be assumed throughout that $\mathcal{F}(0)$ is trivial modulo $\mathbb{P}$ and that $\mathcal{F}(T) = \mathcal{F}$.
\subsection{Investment}
Let $S$ be a \emph{semimartingale}, denoting the \emph{discounted}, with respect to some baseline security, price process of a financial security. Starting with capital $x \in \mathbb R$, and investing according to some predictable and $S$-integrable strategy $\vartheta$, an economic agent's discounted wealth process is
\begin{equation} \label{eq: wealth process, all}
X^{x, \vartheta} \, := \, x + \int_0^\cdot \vartheta(t) \mathrm d S(t).
\end{equation}
When modeling frictionless trading, credit constraints have to be imposed on investment in order to avoid \emph{doubling strategies}. Define then $\mathcal{X}$ to be the set of all nonnegative wealth processes, i.e., all $X^{x, \vartheta}$ in the notation of \eqref{eq: wealth process, all} such that $X^{x, \vartheta} \geq 0$.
\subsection{Equivalent local martingale deflators}
An \textsl{equivalent local martingale deflator} (ELMD) is a nonnegative process $Z$ with $Z(0) = 1$ and $Z(T) > 0$, such that $Z X$ is a local martingale for all $X \in \mathcal{X}$. Since $1 \equiv X^{1, 0} \in \mathcal{X}$, an ELMD is in particular a strictly positive local martingale.
\subsection{Arbitrage of the first kind}
An $\mathcal{F}(T)$-measurable random variable $\xi$ will be called an \textsl{arbitrage of the first kind} if $\mathbb{P}[\xi \geq 0] = 1$, $\mathbb{P}[\xi > 0] > 0$, and \emph{for all $x > 0$ there exists $X^{x, \vartheta} \in \mathcal{X}$ (for some $\vartheta$ which may depend on $x$), such that $X^{x, \vartheta} (T) \geq \xi$}. If there exists no arbitrage of the first kind in the market, we shall say that condition NA$_1$ holds.
It is straightforward to see that condition NA$_1$ is weaker than condition NFLVR of \cite{MR1304434}. Actually, using a combination of Lemma A.1 in \cite{MR1304434} and Lemma 2.3 in \cite{MR1768009}, it is shown in \cite[Proposition 1.2]{Kar_09_fin_add_ftap} that condition NA$_1$ is equivalent to the requirement that the set $\{X (T) \, | \, X \in \mathcal{X} \text{ with } X_0 = 1\}$ is bounded in probability. The latter condition has been coined BK in \cite{MR1647282} and NUPBR in \cite{MR2335830}.
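To fix ideas, we recall a classical example of a market satisfying condition NA$_1$ but not condition NFLVR; the example is standard and is included here only for orientation. Let $S = R$, where $R$ is a three-dimensional Bessel process with $R(0) = 1$, i.e. $\mathrm d R(t) = R(t)^{-1} \mathrm d t + \mathrm d W(t)$ for a standard Brownian motion $W$. It\^o's formula gives
\[
\mathrm d \pare{\frac{1}{R(t)}} \ = \ - \frac{1}{R(t)^2} \, \mathrm d R(t) + \frac{1}{R(t)^3} \, \mathrm d t \ = \ - \frac{1}{R(t)^2} \, \mathrm d W(t),
\]
so $Z \, := \, 1/R$ is a strictly positive local martingale, and a straightforward integration-by-parts computation shows that $Z X$ is a local martingale for every $X \in \mathcal{X}$. In other words, $Z$ is an ELMD, and condition NA$_1$ holds by the easy implication of Theorem \ref{thm: main} below. On the other hand, $Z$ coincides with the stochastic exponential $\mathcal E(- \int_0^\cdot R(t)^{-1} \mathrm d W(t))$, which is the only candidate for the density process of an ELMM; since $Z$ is a \emph{strict} local martingale, no ELMM exists, and condition NFLVR fails.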
\subsection{The main result} The next result can be seen as a weak version of the FTAP. Though simple to state, its proof is quite technical and is given in Section \ref{sec: proof}.
\begin{thm} \label{thm: main}
Condition \emph{NA}$_1$ \ is equivalent to the existence of at least one \emph{ELMD}.
\end{thm}
\begin{rem}
In \cite{Kar_09_fin_add_ftap}, which is in a certain sense a sequel to this paper, it is argued that although an ELMD does not generate a probability measure, its local martingale structure allows one to define a \emph{finitely additive} probability that is \emph{locally countably additive} and \emph{weakly equivalent} to $\mathbb{P}$, and further makes discounted asset-price processes behave like ``local martingales''. More precisely, Theorem \ref{thm: main} can be reformulated to state that condition NA$_1$ is valid if and only if there exists $\mathsf{Q} : \mathcal{F} \mapsto [0,1]$ and a sequence $(\tau_n)_{n \in \mathbb N}$ of stopping times with $\lim_{n \to \infty} \mathbb{P} \bra{\tau_n = T} = 1$ such that:
\begin{itemize}
\item $\mathsf{Q}[\emptyset] = 0$, $\mathsf{Q}[\Omega] = 1$, and $\mathsf{Q}$ is (finitely) additive: $\mathsf{Q}[A \cup B] = \mathsf{Q}[A] + \mathsf{Q}[B]$ whenever $A \in \mathcal{F}$ and $B \in \mathcal{F}$ satisfy $A \cap B = \emptyset$;
\item for $A \in \mathcal{F}$, $\mathbb{P}[A] = 0$ implies $\mathsf{Q}[A] = 0$;
\item when restricted on $\mathcal{F}_{\tau_n}$, $\mathsf{Q}$ is countably additive and equivalent to $\mathbb{P}$, for all $n \in \mathbb N$.
\item $\int_{\Omega} X(\tau_n \wedge \tau) \, \mathrm d \mathsf{Q} = X(0)$ holds for all $X \in \mathcal{X}$, $n \in \mathbb N$ and all stopping times $\tau$.
\end{itemize}
Using this reformulation, Theorem \ref{thm: main} bears more resemblance to the FTAP of \cite{MR1304434}. In fact, as already mentioned in the Introduction, in \cite{Kar_09_fin_add_ftap} Theorem \ref{thm: main} is used as an intermediate step in proving the FTAP in \cite{MR1304434}.
\end{rem}
\begin{rem}
Theorem \ref{thm: main} is stated for one-dimensional semimartingales $S$, as even for this ``simple'' case the proof is quite technical and requires taking care of many different issues, as the reader will appreciate in Section \ref{sec: proof} below. There is no doubt that the result is still valid for the multi-dimensional semimartingale case, although its proof is expected to be significantly more involved.
\end{rem}
\section{The Proof of Theorem \ref{thm: main}} \label{sec: proof}
\subsection{Proving Theorem \ref{thm: main} with the help of an auxiliary result}
The proof of one implication of Theorem \ref{thm: main} is easy and somewhat classic, but will be presented anyhow here for completeness. Start by assuming the existence of an ELMD $Z$ and pick any sequence $(X_k)_{k \in \mathbb N}$ of wealth processes in $\mathcal{X}$ such that $\lim_{k \to \infty} X_k(0) = 0$ as well as $X_k(T) \geq \xi$ for some nonnegative random variable $\xi$. Since $Z X_k$ is a nonnegative local martingale, thus a $\mathbb{P}$-supermartingale,
\[
\mathbb{E}[Z(T) \xi] \leq \mathbb{E}[Z(T) X_k(T)] \leq Z(0) X_k(0) = X_k(0)
\]
holds for all $k \in \mathbb N$. Therefore, $\mathbb{E}[Z(T) \xi] \leq 0$. Since $\mathbb{P}[Z(T) > 0, \, \xi \geq 0] = 1$, $\mathbb{E}[Z(T) \xi] \leq 0$ holds only if $\mathbb{P}[\xi = 0] = 1$. Therefore, $\xi$ cannot be an arbitrage of the first kind, and condition NA$_1$ holds.
\smallskip
It remains to prove the other implication, which is considerably harder. Define
\[
\mathcal{X}_{++} \, := \, \set{X \in \mathcal{X} \ | \ X > 0 \text{ and } X_- > 0}.
\]
Since condition NA$_1$ is equivalent to condition NUPBR of \cite{MR2335830}, the general results of the latter paper imply that condition NA$_1$ is equivalent to the existence of $\widehat{X} \in \mathcal{X}_{++}$ with $\widehat{X}(0) = 1$ such that, with $Z \, := \, 1 / \widehat{X}$, $Z X$ is a supermartingale for all $X \in \mathcal{X}_{++}$. (Note that the results of \cite{MR2335830} have been established when $S \in \mathcal{X}_{++}$; however, this condition is unnecessary. At any rate, in the present paper we give a full treatment instead of depending on results from \cite{MR2335830}.) Unfortunately, when jumps are present in $S$, these last supermartingales might fail to be local martingales. In order to achieve our goal, we shall have to slightly alter the original probability using the predictable characteristics of $S$. (The idea of how to perform such a change of probability is already present in \cite{MR1647282} and \cite{MR1804665}.) In \S\ref{subsec: dynamic case} below we shall establish the following result, certainly interesting in its own right. Before stating it, recall that for a signed measure $\mu$ on $(\Omega, \mathcal{F})$, its \textsl{total variation} norm is defined as $\normTV{\mu} \, := \, \sup_{A \in \mathcal{F}} \abs{\mu[A]}$.
\begin{thm} \label{thm: help}
Assume that condition \emph{NA}$_1$ \ holds. Then, for any $\epsilon > 0$, there exists a probability $\widetilde{\mathbb{P}} = \widetilde{\mathbb{P}}(\epsilon)$ with the following properties:
\begin{enumerate}
\item $\widetilde{\mathbb{P}}$ is equivalent to $\mathbb{P}$ on $\mathcal{F}(T)$.
\item $\normTV{\widetilde{\mathbb{P}} - \mathbb{P}} \leq \epsilon$.
\item There exists $\widetilde{X} \in \mathcal{X}_{++}$ with $\widetilde{X}(0) = 1$ such that $X / \widetilde{X}$ is a \emph{local $\widetilde{\mathbb{P}}$-martingale} for all $X \in \mathcal{X}$.
\end{enumerate}
\end{thm}
To see how Theorem \ref{thm: help} completes the proof of Theorem \ref{thm: main}, assume that condition NA$_1$ holds, as well as the statement of Theorem \ref{thm: help}. Define the process $Z$ via $Z(t) \, := \, (1 / \widetilde{X}(t))( \mathrm d \widetilde{\mathbb{P}} / \mathrm d \mathbb{P})|_{\mathcal{F} (t)}$ for $t \in \mathbb R_+$, where $( \mathrm d \widetilde{\mathbb{P}} / \mathrm d \mathbb{P})|_{\mathcal{F} (t)}$ denotes the Radon-Nikod\'ym derivative of $\widetilde{\mathbb{P}}$ with respect to $\mathbb{P}$ when the two probabilities are restricted on the sigma-algebra $\mathcal{F}(t)$. Then, Theorem \ref{thm: help}(1) implies that $Z(0) = 1$ and $Z(T) > 0$, and the fact that $Z X$ is a local martingale for all $X \in \mathcal{X}$ follows by Theorem \ref{thm: help}(3).
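In more detail, write $D(t) \, := \, (\mathrm d \widetilde{\mathbb{P}} / \mathrm d \mathbb{P})|_{\mathcal{F}(t)}$ for the density process, so that $Z = (1 / \widetilde{X}) D$. By the standard Girsanov-type fact that a process $V$ is a local $\widetilde{\mathbb{P}}$-martingale if and only if $D V$ is a local $\mathbb{P}$-martingale, applied with $V = X / \widetilde{X}$, we obtain that
\[
Z X \ = \ D \, \frac{X}{\widetilde{X}}
\]
is a local $\mathbb{P}$-martingale for every $X \in \mathcal{X}$, which is exactly the ELMD property.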
\subsection{The proof of Theorem \ref{thm: help}} \label{subsec: dynamic case}
In the course of the proof, results regarding the general theory of stochastic processes from \cite{MR1943877} are used. There are ideas from \cite{MR2335830} that are utilized throughout the proof; as the latter paper is long and technical, and in an effort to be as self-contained as possible, we are providing full arguments whenever possible. In fact, there is only one result from \cite{MR2335830} whose statement will just be assumed; this happens at the end of \S \ref{subsubsec: growth rates}.
\subsubsection{Predictable characteristics}
In order to prove Theorem \ref{thm: help},
we can assume without loss of generality that $S$ is a special semimartingale under $\mathbb{P}$. Indeed, if this is not the case, we can change the original probability $\mathbb{P}$ into another equivalent $\overline{\mathbb{P}}$ using the Radon-Nikod\'ym density
\[
\frac{\mathrm d \overline{\mathbb{P}}}{\mathrm d \mathbb{P}} \, := \, \frac{1}{\mathbb{E} \bra{\pare{1 + \gamma \sup_{t \in \mathbb R_+} |S(t)|}^{-1} }} \pare{1 + \gamma \sup_{t \in \mathbb R_+} |S(t)|}^{-1},
\]
where $\gamma > 0$ is small enough so that $\normTV{\overline{\mathbb{P}} - \mathbb{P}} \leq \epsilon / 2$. Then, $\overline{\mathbb{E}} \big[ \sup_{t \in \mathbb R_+} |S(t)| \big] < \infty$, where ``$\overline{\mathbb{E}}$'' denotes expectation under $\overline{\mathbb{P}}$; in particular, $S$ is a special semimartingale under $\overline{\mathbb{P}}$. Then, the validity of Theorem \ref{thm: help} can be shown for $\overline{\mathbb{P}}$ and with $\epsilon / 2$ replacing $\epsilon$.
\smallskip
Now, assuming that $S$ is a special semimartingale under $\mathbb{P}$, write its \emph{canonical} decomposition $S = S_0 + A + S^\mathsf{c} + \int_{(0, \cdot] \times \mathbb R} x \pare{\mu [\mathrm d t, \mathrm d x] - \nu [\mathrm d t, \mathrm d x]}$. Here, $A$ is \emph{predictable and of finite variation}, $S^\mathsf{c}$ is a local martingale with \emph{continuous} paths and $\int_{(0, \cdot] \times \mathbb R} x \pare{\mu [\mathrm d t, \mathrm d x] - \nu [\mathrm d t, \mathrm d x]}$ is a
\emph{purely discontinuous} local
martingale. As usual, $\mu$ is the \textsl{jump measure} of $S$ defined via $\mu (D) := \sum_{t \in \mathbb R_+} \mathbb{I}_{D} (t, \Delta S(t)) \mathbb{I}_{\mathbb R \setminus \set{0} } (\Delta S(t))$, for $D \subseteq \mathbb R_+ \times \mathbb R$, and $\nu$ is the \textsl{predictable compensator} of the measure $\mu$. Since $S$ is a special semimartingale, we have $\int_{\mathbb R_+ \times \mathbb R} \pare{|x| \wedge |x|^2} \,\nu[\mathrm d t, \mathrm d x] < \infty$. We introduce the \textsl{quadratic covariation} process $C := [S^\mathsf{c}, S^\mathsf{c}]$ of $S^\mathsf{c}$, and define the predictable nondecreasing scalar process
\[
G \, := \, C + \int_{(0, \cdot]} |\mathrm d A(t)| + \int_{(0, \cdot] \times \mathbb R} \pare{|x| \wedge |x|^2} \, \nu[\mathrm d t, \mathrm d x].
\]
All three processes $A$, $C$, and $\nu$ are absolutely continuous with respect to $G$. Therefore, we can write
\[
A= \int_{(0, \, \cdot]} a(t) \mathrm d G(t), \ C = \int_{(0, \, \cdot]} c(t) \mathrm d G(t), \text{ and } \nu [(0, \cdot] \times E ] = \int_{(0, \, \cdot]} \kappa (t)[E] \mathrm d G(t),
\]
where $a$, $c$ and $\kappa$ are predictable, $a$ is a scalar process, $c$ a nonnegative scalar process, $\kappa$ a process with values in the set of measures on $(\mathbb R, \mathcal{B}(\mathbb R))$, where $\mathcal{B}(\mathbb R)$ is the Borel sigma-algebra on $\mathbb R$, that do not charge $\set{0}$ and integrate the function $\mathbb R \ni x \mapsto |x| \wedge |x|^2$, and $E \in \mathcal{B}(\mathbb R)$.
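As a simple illustration of the above objects (not needed in the sequel), suppose that $S = S_0 + b t + \sigma W + J$, where $W$ is a standard Brownian motion and $J$ is an independent compound Poisson process with jump intensity $\lambda > 0$ and jump law $F$ satisfying $\int_{\mathbb R} |x| F[\mathrm d x] < \infty$. Then $A(t) = \widetilde{b} t$ with $\widetilde{b} \, := \, b + \lambda \int_{\mathbb R} x F[\mathrm d x]$, $C(t) = \sigma^2 t$ and $\nu[\mathrm d t, \mathrm d x] = \lambda F[\mathrm d x] \mathrm d t$; consequently, $G(t) = \gamma t$ with $\gamma \, := \, \sigma^2 + |\widetilde{b}| + \lambda \int_{\mathbb R} \pare{|x| \wedge |x|^2} F[\mathrm d x]$, and
\[
a = \frac{\widetilde{b}}{\gamma}, \qquad c = \frac{\sigma^2}{\gamma}, \qquad \kappa = \frac{\lambda}{\gamma} F,
\]
all of them deterministic and constant in time.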
Condition NA$_1$ enforces some restrictions on the triplet of predictable characteristics of $S$. The next result is a consequence of \cite[Theorem 3.15(2)]{MR2335830}, but we provide the quick argument for completeness.
\begin{lem} \label{lem: consequences of na1}
Assume condition \emph{NA}$_1$ \ in the market. Then, with $\Lambda \, := \, \Lambda_+ \cup \Lambda_-$, where
\begin{align*}
\Lambda_+ &\, := \, \set{\kappa[(- \infty, 0)] = 0, \ c = 0, \ a > \int_{(0, \infty)} x \kappa [\mathrm d x]} \text{ and} \\
\Lambda_- &\, := \, \set{\kappa[(0, \infty)] = 0, \ c = 0, \ a < \int_{(-\infty, 0)} x \kappa [\mathrm d x]},
\end{align*}
the predictable set $\Lambda$ is $(\mathbb{P} \otimes G)$-null. (In particular, $\set{\kappa[\mathbb R] = 0, \ c = 0, \ a \neq 0}$ is $(\mathbb{P} \otimes G)$-null.)
\end{lem}
\begin{proof}
Define $\vartheta \, := \, \mathbb{I}_{\Lambda_+} - \mathbb{I}_{\Lambda_-}$. Then, it is straightforward to see that
\[
X^{0, \vartheta} = \int_{(0, \cdot]} \mathbb{I}_{\Lambda} (t) \abs{a(t) - \int_{\mathbb R} x \kappa(t) [\mathrm d x]} \mathrm d G(t) + \sum_{t \in (0, \cdot]} \mathbb{I}_{\Lambda} (t) |\Delta S(t)|,
\]
where observe that the integral $\int_{\mathbb R} x \kappa [\mathrm d x]$ is always well defined on $\Lambda$. It is clear that $X^{0, \vartheta}$ is non-decreasing, i.e., $X^{0, \vartheta} \in \mathcal{X}$. Furthermore, if $\Lambda$ fails to be $(\mathbb{P} \otimes G)$-null, then $\mathbb{P}[X^{0, \vartheta} (T) > 0] > 0$. Let $\xi \, := \, X^{0, \vartheta} (T)$; since $X^{x, \vartheta} (T) = x + \xi \geq \xi$ for all $x > 0$, $\xi$ is an arbitrage of the first kind. Therefore, under condition NA$_1$, $\Lambda$ has to be $(\mathbb{P} \otimes G)$-null.
\end{proof}
\subsubsection{Changes of probability} \label{subsubsec: change of prob}
In what follows, a \textsl{strictly positive predictable random field} will refer to a function $Y : \Omega \times \mathbb R_+ \times \mathbb R \mapsto (0, \infty)$ that is measurable with respect to the product of the predictable sigma-algebra on $\Omega \times \mathbb R_+$ with the Borel sigma-algebra on $\mathbb R$. For any strictly positive predictable random field $Y$, let $\nu^Y$ be the predictable random measure that has density $Y$ with respect to $\nu$; in other words,
\begin{equation} \label{eq: nuY}
\nu^Y [(0, \cdot] \times E ] = \int_{(0, \, \cdot]} \kappa^Y (t) [E] \mathrm d G (t) = \int_{(0, \, \cdot]} \pare{ \int_E Y(t, x) \kappa (t)[\mathrm d x] } \mathrm d G(t)
\end{equation}
holds for all $E \in \mathcal{B}(\mathbb R)$. For all $t \in \mathbb R_+$, $Y(t, \cdot)$ is the density of $\kappa^Y (t)$ with respect to $\kappa (t)$.
Define the $(0, \infty)$-valued predictable process
\[
\eta \, := \, \frac{\epsilon}{2 \abs{1 + G}^2},
\]
where we shall be assuming without loss of generality that $0 < \epsilon < 1$. In the sequel, we shall only consider strictly positive predictable random fields $Y$ such that the following properties are additionally identically satisfied:
\begin{enumerate}
\item[(Y1)] $\int_\mathbb R \pare{|x| \wedge |x|^2} \, \kappa^Y [\mathrm d x] < \infty$.
\item[(Y2)] $\int_\mathbb R \abs{Y(x) - 1} \, \kappa[\mathrm d x] \leq \eta$.
\item[(Y3)] $\kappa[\mathbb R] = \kappa^Y [\mathbb R]$.
\end{enumerate}
(The dependence of processes on $(\omega, t) \in \Omega \times \mathbb R_+$ is usually suppressed from notation to ease the reading. Whenever appropriate from the context, and for clarification purposes, we shall sometimes write $Y(x)$ or $Y(t,x)$ for $Y$.)
Property (Y2) of $Y$ implies the estimate
\begin{eqnarray}
\nonumber \int_{\mathbb R_+ \times \mathbb R} \abs{ Y(t, x) - 1 } \nu[\mathrm d t, \mathrm d x] &=& \int_{\mathbb R_+} \pare{\int_{\mathbb R} \abs{ Y(t, x) - 1 } \, \kappa(t)[\mathrm d x]} \mathrm d G(t) \\
\label{eq: pre-hellinger} &\leq& \int_{\mathbb R_+} \eta(t) \mathrm d G(t) \ = \ \frac{\epsilon}{2} \int_{\mathbb R_+} \frac{\mathrm d G(t)}{|1 + G(t)|^2} \ \leq \ \frac{\epsilon}{2}.
\end{eqnarray}
It follows that the process $M \, := \, \int_{(0, \cdot] \times \mathbb R} \pare{ Y (t, x) - 1 } \pare{ \mu [\mathrm d t, \mathrm d x] - \nu[\mathrm d t, \mathrm d x]}$ is a well defined local martingale. Observe that for all $t \in \mathbb R_+$, we have
\[
\Delta M (t) = Y(t, \Delta S(t)) - 1 - \pare{\int_{\mathbb R} \pare{Y(t,x) -1 } \kappa [\mathrm d x]} \Delta G(t) = Y(t, \Delta S(t)) - 1 > - 1,
\]
holding in view of the fact that $Y$ is strictly positive and $\int_{\mathbb R} \pare{Y(t,x) -1 } \kappa [\mathrm d x] = \kappa^Y[\mathbb R] - \kappa [\mathbb R] = 0$, which follows from (Y3). With ``$\mathcal E$'' denoting the \textsl{stochastic exponential} operator, define
\[
L \, := \, \mathcal E (M) = \mathcal E \pare{ \int_{(0, \cdot] \times \mathbb R} \pare{ Y (t, x) - 1 } \pare{ \mu [\mathrm d t, \mathrm d x] - \nu[\mathrm d t, \mathrm d x]}}.
\]
Combining \eqref{eq: pre-hellinger} with $\Delta M > -1$, a use of \cite[Theorem 12]{MR515738} gives that $L$ is a uniformly integrable martingale with $\mathbb{P} [L(T) > 0] = 1$. However, because the last paper may be hard to obtain, we provide a quick argument in the present special case. At the same time, we show that the probability defined by $L$ satisfies requirement (2) of Theorem \ref{thm: help}.
\begin{lem} \label{lem: hellinger}
Let $Y$ be a strictly positive predictable random field such that (Y1), (Y2) and (Y3) hold. With the above notation, we have $\mathbb{P} [L(T) > 0] = 1$ and $\mathbb{E} \bra{\sup_{t \in \mathbb R_+} |L(t) - 1|} \leq \epsilon$. In particular, the recipe $\mathrm d \mathbb{P}^Y / \mathrm d \mathbb{P} = L(T)$ defines a probability $\mathbb{P}^Y$ that is \emph{equivalent} to $\mathbb{P}$ on $\mathcal{F}(T)$ such that $\normTV{\mathbb{P}^Y - \mathbb{P}} \leq \epsilon$.
\end{lem}
\begin{proof}
Since $\Delta M > -1$ and $M$ is a local martingale, $\mathbb{P} [L(T) > 0] = 1$ follows.
Let $H \, := \, \int_{(0, \cdot] \times \mathbb R} |Y(t, x) - 1| \, \mu [\mathrm d t, \mathrm d x]$ and $F \, := \, \int_{(0, \cdot] \times \mathbb R} |Y(t, x) - 1| \, \nu [\mathrm d t, \mathrm d x]$. The process $F$ is the predictable compensator of $H$ and we have $\mathbb{P} \bra{F (\infty) \leq \epsilon/2} = 1$ in view of \eqref{eq: pre-hellinger}. In particular, $M$ is a local martingale of finite variation.
Using the fact that $L = 1 + \int_{(0, \cdot]} L (t -) \mathrm d M (t)$, we obtain
\[
\mathbb{E} \bra{\sup_{t \in \mathbb R_+} \abs{L(t) - 1}} \leq \mathbb{E} \bra{\int_{(0, \infty)} L (t-) \mathrm d H (t) + \int_{(0, \infty)} L (t-) \mathrm d F (t)} = 2 \mathbb{E} \bra{\int_{(0, \infty)} L (t-) \mathrm d F (t)}.
\]
Furthermore, with $(\tau_n)_{n \in \mathbb N}$ being a localizing sequence for $L$, we have
\[
\mathbb{E} \bra{\int_{(0, \tau_n]} L (t-) \mathrm d F (t)} = \mathbb{E} \bra{L (\tau_n) F (\tau_n)} - \mathbb{E} \bra{\int_{(0, \tau_n]} F (t) \mathrm d L (t)} \leq \frac{\epsilon}{2} \mathbb{E} \bra{L (\tau_n)} \leq \frac{\epsilon}{2}.
\]
As the previous is valid for all $n \in \mathbb N$, $\mathbb{E} \bra{\sup_{t \in \mathbb R_+} \abs{L (t) - 1}} \leq \epsilon$ follows from a straightforward application of the monotone convergence theorem. In particular, $\mathbb{E} \bra{\sup_{t \in \mathbb R_+} \abs{L (t)}} < \infty$ which implies that $L$ is a uniformly integrable martingale and, therefore, $\mathbb{P}^Y$ is well defined and equivalent to $\mathbb{P}$ on $\mathcal{F}(T)$. Furthermore, $\normTV{\mathbb{P}^Y - \mathbb{P}} = \mathbb{E} \bra{\abs{L(T) - 1}} \leq \epsilon$, which completes the proof.
\end{proof}
Consider the probability $\prob^Y$ of Lemma \ref{lem: hellinger}. According to Girsanov's Theorem (Theorem III.3.24, page 172 of \cite{MR1943877}), under assumptions (Y1), (Y2) and (Y3) on $Y$, $S$ is still a special semimartingale under $\prob^Y$ with canonical decomposition $S = S_0 + A^Y + S^{\mathsf{c}, Y} + \int_{(0, \cdot] \times \mathbb R} x (\mu[\mathrm d t, \mathrm d x] - \nu^Y[\mathrm d t, \mathrm d x])$, where the predictable compensator $\nu^Y$ of $\mu$ under $\prob^Y$ was defined previously in \eqref{eq: nuY}, and where $A^Y = \int_{(0, \, \cdot]} a^Y(t) \mathrm d G(t)$, with $a^Y \, := \, a + \int_\mathbb R x (Y(x) -1 ) \, \kappa [\mathrm d x]$. For the continuous local $\mathbb{P}^Y$-martingale part $S^{\mathsf{c}, Y}$ we have $C^Y := [S^{\mathsf{c}, Y}, S^{\mathsf{c}, Y}] = [S^{\mathsf{c}}, S^{\mathsf{c}}] = C$, i.e., $C^Y = \int_{(0, \, \cdot]} c^Y(t) \mathrm d G(t)$ with $c^Y = c$.
\subsubsection{Relative rate of return} \label{subsubsec: rrr}
Remember that $Y$ always denotes a strictly positive predictable random field satisfying (Y1), (Y2), and (Y3) of \S\ref{subsubsec: change of prob}. We aim at understanding what extra condition must $Y$ satisfy in order for $\widetilde{\mathbb{P}} \equiv \prob^Y$ to satisfy all the requirements of Theorem \ref{thm: help}.
Define a pair of processes $(\ell, r)$ via
\[
\ell \, := \, \inf \set{p \in \mathbb R \, | \, \kappa[\set{x \in \mathbb R \, | \, 1 + p x < 0}] = 0} \text{ and } r \, := \, \sup \set{p \in \mathbb R \, | \, \kappa[\set{x \in \mathbb R \, | \, 1 + p x < 0}] = 0}.
\]
($\ell$ and $r$ are mnemonics for ``left'' and ``right'' respectively.) It is straightforward that $\ell \leq 0 \leq r$, as well as that both $\ell$ and $r$ are predictable: for example, $\set{\ell \leq p} = \Omega \times \mathbb R_+$ if $p \in \mathbb R_+$, while
\[
\set{\ell \leq p} = \bigcap_{n \in \mathbb N} \Big\{ \kappa[\set{x \in \mathbb R \, | \, 1 + (p + 1/n ) x < 0}] = 0 \Big\} \text{ if } p \in \mathbb R \setminus \mathbb R_+;
\]
in both cases, $\set{\ell \leq p}$ is predictable. Of course, nothing changes in the definition of $\ell$ and $r$ if we replace $\kappa$ with $\kappa^Y$. Define $I := [\ell, r] \cap \mathbb R$.
Note that $\mathsf{conv.supp}(\kappa) = [-1/r, -1/ \ell] \cap \mathbb R$, where ``$\mathsf{conv.supp}$'' denotes the convex hull of the support of a measure.
For two $I$-valued predictable processes $p$ and $p'$, define a predictable process
\begin{equation} \label{eq: rel_perf}
\rel^Y(p \, | \, p') \, := \, (p - p') \pare{a^Y - p' c^Y - \int_{\mathbb R} \frac{p' |x|^2}{1 + p'x} \, \kappa^Y [\mathrm d x] }.
\end{equation}
The last expression is closely related to the \emph{relative rate of return} of wealth processes in $\mathcal{X}_{++}$, as the proof of the following result reveals.
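Before proceeding, it is instructive to record the continuous case: if $\kappa \equiv 0$ (so that $\ell = - \infty$, $r = \infty$ and $I = \mathbb R$) and $Y \equiv 1$, then \eqref{eq: rel_perf} reduces to
\[
\rel^Y(p \, | \, p') \ = \ (p - p') \pare{a - p' c},
\]
and the requirement that $\rel^Y(p \, | \, \widetilde{p}) = 0$ holds for all predictable $p$ forces $\widetilde{p} = (a / c) \, \mathbb{I}_{\set{c > 0}}$; recall from (the parenthetical statement of) Lemma \ref{lem: consequences of na1} that $a = 0$ holds on $\set{c = 0, \, \kappa[\mathbb R] = 0}$ under condition NA$_1$. The resulting $\widetilde{X} = \mathcal E(\int_0^\cdot \widetilde{p}(t) \mathrm d S(t))$ is then the familiar log-optimal (num\'eraire) portfolio mentioned in the Introduction.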
\begin{lem} \label{lem: rrr}
Suppose that $Y$ is a strictly positive predictable random field satisfying \emph{(Y1)}, \emph{(Y2)}, and \emph{(Y3)}. Further, suppose that $\widetilde{p}$ is an $I$-valued predictable, $S$-integrable process such that $\rel^Y(p \, | \, \widetilde{p}) = 0$ holds for all other $I$-valued predictable processes $p$. Define $\widetilde{X} \, := \, \mathcal E(\int_0^\cdot \widetilde{p}(t) \mathrm d S(t))$. Then, $\widetilde{X}_0 = 1$, $\widetilde{X} \in \mathcal{X}_{++}$, and $X / \widetilde{X}$ is a local $\prob^Y$-martingale for all $X \in \mathcal{X}$.
\end{lem}
\begin{proof}
Since $\widetilde{p}$ is $S$-integrable, $\widetilde{X}$ is well defined. In view of \eqref{eq: rel_perf}, the fact that $\rel^Y(0 \, | \, \widetilde{p}) = 0$ implies that $\kappa^Y[\set{x \in \mathbb R \, | \, \widetilde{p} x = -1}] = 0$. Therefore, $\widetilde{p} \Delta S > -1$, i.e., $\widetilde{X} > 0$ and $\widetilde{X}_- > 0$ hold. With $\widetilde{\vartheta} \, := \, \widetilde{p} \widetilde{X}_-$, we have $\widetilde{X} = X^{1, \widetilde{\vartheta}}$ in the notation of \eqref{eq: wealth process, all}. Therefore, $\widetilde{X} \in \mathcal{X}_{++}$.
Pick any $X = X^{x, \vartheta} \in \mathcal{X}_{++}$. Let $p \, := \, \vartheta / X_-$; then, $X = x \mathcal E(\int_0^\cdot p(t) \mathrm d S(t))$. We shall show that
\[
\frac{X}{\widetilde{X}} = x \, \frac{\mathcal E(\int_0^\cdot p(t) \mathrm d S(t))}{\mathcal E(\int_0^\cdot \widetilde{p}(t) \mathrm d S(t))}
\]
is a local $\prob^Y$-martingale. Since $X > 0$, $X_- > 0$, $\widetilde{X} > 0$, and $\widetilde{X}_- > 0$ hold, it follows that we can write $X / \widetilde{X} = x \mathcal E(R^{p \, | \, \widetilde{p}})$ for some semimartingale $R^{p \, | \, \widetilde{p}}$ with $\Delta R^{p \, | \, \widetilde{p}} > -1$. In fact,
\[
R^{p \, | \, \widetilde{p}} = \int_0^\cdot \pare{p(t) - \widetilde{p}(t)} \mathrm d S(t) - \int_0^\cdot \pare{p(t) - \widetilde{p}(t)} \widetilde{p}(t) \mathrm d [S^\mathsf{c}, S^\mathsf{c}] (t) - \sum_{t \leq \cdot} \frac{\pare{p(t) - \widetilde{p}(t)} \widetilde{p}(t) |\Delta S(t)|^2}{1 +\widetilde{p}(t) \Delta S(t)};
\]
indeed, using Yor's formula it can be easily checked that
\begin{align*}
\mathcal E \pare{ \int_0^\cdot \widetilde{p}(t) \mathrm d S(t) } \mathcal E \pare{ R^{p \, | \, \widetilde{p}} } &= \mathcal E \pare{ \int_0^\cdot \widetilde{p}(t) \mathrm d S(t) + R^{p \, | \, \widetilde{p}} + \bra{\int_0^\cdot \widetilde{p}(t) \mathrm d S(t), R^{p \, | \, \widetilde{p}}}} \\
&= \ldots = \mathcal E \pare{ \int_0^\cdot p(t) \mathrm d S(t) }.
\end{align*}
By a comparison of \eqref{eq: rel_perf} with the formula for $R^{p \, | \, \widetilde{p}}$ above, $\rel^Y(p \, | \, \widetilde{p}) = 0$ implies that $R^{p \, | \, \widetilde{p}}$ is a sigma $\prob^Y$-martingale. (For information and properties of sigma-martingales, the reader is referred to \cite{MR2013413}.) Since $X / \widetilde{X} = x \mathcal E(R^{p \, | \, \widetilde{p}})$, it follows that $X / \widetilde{X}$ is a sigma $\prob^Y$-martingale. For nonnegative processes, the sigma martingale property is equivalent to the local martingale property; therefore, we conclude that $X / \widetilde{X}$ is a local $\prob^Y$-martingale.
Now, let $X \in \mathcal{X}$. Since $(1 + X) \in \mathcal{X}_{++}$, the discussion of the previous paragraph implies that $(1 + X) / \widetilde{X}$ is a local $\prob^Y$-martingale. Again, by the discussion of the previous paragraph, $1 / \widetilde{X}$ is a local $\prob^Y$-martingale. It follows that $X / \widetilde{X}$ is a local $\prob^Y$-martingale.
\end{proof}
In view of Lemma \ref{lem: rrr}, Theorem \ref{thm: help} will be proved if we can find a strictly positive predictable random field $Y$ satisfying (Y1), (Y2) and (Y3), as well as an $I$-valued predictable, $S$-integrable process $\widetilde{p}^Y$ such that $\rel^Y(p \, | \, \widetilde{p}^Y) = 0$ holds for any other $I$-valued predictable process $p$. In \S \ref{subsubsec: growth rates}, we shall see how $\widetilde{p}^Y$ should be picked, given a strictly positive predictable random field $Y$ satisfying (Y1), (Y2) and (Y3); then, in \S \ref{subsubsec: construction of density}, we shall construct the appropriate strictly positive predictable random field.
\subsubsection{Growth rates} \label{subsubsec: growth rates}
In order to understand how $Y$ has to be picked, we shall use the fact that the relative rate of return is essentially the directional derivative of the growth rate. In more detail, define a predictable random field $\g^Y$ via $\g^Y (\mathrm{p}) \, := \, \mathrm{p} a^Y - (1/2) c^Y |\mathrm{p}|^2 - \int_{\mathbb R} \pare{\mathrm{p} x - \log(1 + \mathrm{p} x)}\, \kappa^Y [\mathrm d x]$ for $\mathrm{p} \in I$, and set $\g^Y (\mathrm{p}) = - \infty$ when $\mathrm{p} \notin I$. The assumption $\int_{\mathbb R} \pare{|x| \wedge |x|^2} \kappa^Y [\mathrm d x] < \infty$ ensures that $\g^Y$ is well-defined and finite in the interior of $I$, though it might be the case that $\g^Y(\ell) = - \infty$ or $\g^Y(r) = - \infty$. It is obvious that for fixed $(\omega, t) \in \Omega \times \mathbb R_+$, $\g^Y(\omega, t, \cdot) : \mathbb R \mapsto \mathbb R \cup \set{- \infty}$ is a concave function. With all set-inclusions involving subsets of $\Omega \times \mathbb R_+$ from now on to be understood in a $(\mathbb{P} \otimes G)$-a.e. sense, an application of Lemma \ref{lem: consequences of na1} (with $a^Y$ and $\kappa^Y$ replacing $a$ and $\kappa$ there, respectively) gives $\set{ r = \infty}= \set{\kappa^Y [(-\infty, 0)] = 0} \subseteq \set{\lim_{\mathrm{p} \to \infty} \g^Y(\mathrm{p}) \leq 0}$. Indeed, $\set{\kappa^Y [(-\infty, 0)] = 0, \ c > 0} \subseteq \set{\lim_{\mathrm{p} \to \infty} \g^Y(\mathrm{p}) = - \infty}$, while $\set{\kappa^Y [(-\infty, 0)] = 0, \ c = 0} \subseteq \set{\lim_{\mathrm{p} \to \infty} \g^Y(\mathrm{p}) / \mathrm{p} = a^Y - \int_{(0, \infty)} x \, \kappa^Y [\mathrm d x]}$. Similarly, one can show that $\set{\ell = - \infty} \subseteq \set{\lim_{\mathrm{p} \to - \infty} \g^Y(\mathrm{p}) \geq 0}$. Since $\g^Y(0) = 0$, it follows that $\g^Y$ always achieves its supremum at some point in $I$.
Define now the ``derivative'' predictable random field $\ngo^Y : \Omega \times \mathbb R_+ \times \mathbb R \mapsto \mathbb R \cup \set{- \infty, \infty}$ via
\begin{equation} \label{eq: growth der}
\ngo^Y (\mathrm{p}) \, := \, a^Y - \mathrm{p} c^Y - \int_{\mathbb R} \frac{\mathrm{p} |x|^2}{1 + \mathrm{p} x} \, \kappa^Y [\mathrm d x] \, = \, \nabla \g(\mathrm{p}) + \int_\mathbb R \frac{x}{1 + \mathrm{p} x} \pare{Y(x) -1} \kappa [\mathrm d x],
\end{equation}
for $\mathrm{p} \in I$ (where $\nabla \g \equiv \nabla \g^1$), $\ngo^Y (\mathrm{p}) = \ngo^Y (\ell)$ for $\mathrm{p} < \ell$, and similarly $\ngo^Y (\mathrm{p}) = \ngo^Y (r)$ for $\mathrm{p} > r$. The concavity of $\g^Y$ and straightforward applications of the dominated convergence theorem imply that, for fixed $(\omega, t) \in \Omega \times \mathbb R_+$, $\ngo^Y$ is nonincreasing and continuous on $I$. Note that on $\set{\ell = 0 = r} = \set{\mathsf{conv.supp}(\kappa) = \mathbb R}$, where $I = \set{0}$, the above recipe breaks down. In this case, we simply force $\ngo^Y(\mathrm{p}) = 0$ for all $\mathrm{p} \in \mathbb R$; we shall see later how such a convention is useful.
Define a process $\widetilde{p}^Y \, := \, \inf \set{\mathrm{p} \in I \, | \, \ngo^Y (\mathrm{p}) \leq 0 }$, where we set $\widetilde{p}^Y = r$ in case the last set is empty and $\widetilde{p}^Y = 0$ on $\set{\ngo^Y (\ell) = 0 = \ngo^Y(r)}$. It is clear that $\widetilde{p}^Y$ is a predictable process. Furthermore, on $\{ \ngo^Y (\ell) \geq 0, \, \ngo^Y (r) \leq 0 \}$, which is a predictable set, we have $\ngo^Y (\widetilde{p}^Y) = 0$, and, therefore, $\rel^Y(p \, | \, \widetilde{p}^Y) = (p - \widetilde{p}^Y) \ngo^Y (\widetilde{p}^Y) = 0$ for all $I$-valued predictable processes $p$.
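As a sanity check of these definitions, consider (with $Y \equiv 1$) the one-jump specification $c = 0$ and $\kappa = \lambda \delta_{x_0}$ for some $x_0 > 0$ and $\lambda > 0$. Then $\ell = -1/x_0$, $r = \infty$, and \eqref{eq: growth der} becomes
\[
\nabla \g (\mathrm{p}) \ = \ a - \frac{\lambda \, \mathrm{p} \, |x_0|^2}{1 + \mathrm{p} x_0},
\]
which decreases continuously from $+\infty$ (as $\mathrm{p} \downarrow \ell$) to $a - \lambda x_0$ (as $\mathrm{p} \to \infty$). When $a < \lambda x_0$, the unique root, and hence the value of $\widetilde{p}$, is
\[
\widetilde{p} \ = \ \frac{a}{x_0 \pare{\lambda x_0 - a}} \, \in \, I.
\]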
The point of the above discussion is the following: Suppose that for some strictly positive predictable random field $Y$ satisfying (Y1), (Y2) and (Y3), both $\ngo^Y (\ell) \geq 0$ and $\ngo^Y (r) \leq 0$ hold for all $(\omega, t) \in \Omega \times \mathbb R_+$, which as usual will be suppressed from notation in the sequel. Then, we can construct a predictable $I$-valued process $\widetilde{p}^Y$ such that $\rel^Y(p \, | \, \widetilde{p}^Y) = (p - \widetilde{p}^Y) \ngo^Y (\widetilde{p}^Y) = 0$ for all $I$-valued predictable processes $p$. (Observe how $\rel^Y(p \, | \, \widetilde{p}^Y) = (p - \widetilde{p}^Y) \ngo^Y (\widetilde{p}^Y) = 0$ trivially also holds on $\set{\ell = 0 = r} = \set{\mathsf{conv.supp}(\kappa) = \mathbb R}$ in view of our convention, as $I = \set{0}$.) In view of Lemma \ref{lem: rrr}, Theorem \ref{thm: help} will follow as soon as we know that $\widetilde{p}^Y$ is $S$-integrable. Luckily, this is \emph{always} the case under condition NA$_1$. The proof of this fact is quite technical, and basically follows the treatment in \cite[Section 8]{MR2335830}, where Proposition 4.16 of the latter paper is proved. We shall, however, provide some details for completeness. In view of \cite[Corollary 3.6.10, page 128]{MR1906715}, failure of $S$-integrability of $\widetilde{p}^Y$ implies that there exists a sequence of $[0,1]$-valued predictable processes $(h_n)_{n \in \mathbb N}$, such that each $h_n \widetilde{p}^Y$, $n \in \mathbb N$, is $S$-integrable and the sequence of terminal values $\pare{\int_0^T \pare{h_n(t) \widetilde{p}^Y (t)} \mathrm d S(t)}_{n \in \mathbb N}$ fails to be bounded in probability. (Note that, a priori, the previous sequence can fail to be bounded in probability either from above or below, or even from both sides.) For each $n \in \mathbb N$, define $X_n \in \mathcal{X}_{++}$ with $X_n(0) = 1$ via
\[
X_n \, := \, \mathcal E \pare{\int_0^\cdot \pare{h_n(t) \widetilde{p}^Y (t)} \mathrm d S(t)}.
\]
Since $h_n$ is $[0,1]$-valued, the definition of $\widetilde{p}^Y$ implies that $\rel^Y(0 \,|\, h_n \widetilde{p}^Y) \leq 0$. (This follows because the predictable function $[0,1] \ni u \mapsto \g^Y(u \widetilde{p}^Y)$ is nondecreasing.) Therefore, $1 / X_n$ is a nonnegative $\prob^Y$-supermartingale for all $n \in \mathbb N$. Then, it follows from \cite[Lemma 8.1]{MR2335830} that failure of boundedness in probability of $\pare{\int_0^T \pare{h_n(t) \widetilde{p}^Y (t)} \mathrm d S(t)}_{n \in \mathbb N}$ also implies failure of boundedness in probability of the sequence $(X_n(T))_{n \in \mathbb N}$; note that, since $\prob^Y$ and $\mathbb{P}$ are equivalent, boundedness in probability under either probability amounts to the same thing. (Although intuitively plausible, passing from failure of boundedness in probability of processes to failure of boundedness in probability of their stochastic exponentials is not always possible, because the stochastic exponential is not a monotone operator. The fact that this can be done in the present case is due to the fact that each process $1 / X_n$ is a nonnegative $\prob^Y$-supermartingale --- see also \cite[Remark 8.2]{MR2335830}.) However, condition NA$_1$ is equivalent to the requirement that the set $\set{X(T) \, | \, X \in \mathcal{X} \text{ with } X(0) = 1}$ is bounded in probability, making it impossible for $(X_n(T))_{n \in \mathbb N}$ to fail to be bounded in probability. We conclude that $\widetilde{p}^Y$ is $S$-integrable under the validity of condition NA$_1$.
\subsubsection{Construction of the appropriate predictable random field} \label{subsubsec: construction of density}
We now move to the most technical part of the proof of Theorem \ref{thm: help}, by constructing a strictly positive predictable random field $Y$ satisfying (Y1), (Y2), and (Y3), as well as the following condition:
\begin{enumerate}
\item[(Y4)] $\nabla \g^Y (\ell) \geq 0$ and $\nabla \g^Y (r) \leq 0$.
\end{enumerate}
(Note that the last condition is always trivially satisfied on $\set{\ell = 0 = r} = \set{\mathsf{conv.supp}(\kappa) = \mathbb R}$.) From the discussion of \S \ref{subsubsec: rrr} and \S \ref{subsubsec: growth rates}, existence of such a strictly positive predictable random field $Y$ will complete the proof of Theorem \ref{thm: help}.
The strictly positive predictable random field $Y$ will actually depend on the predictable processes $(a, \kappa, \eta)$ and will have to be defined differently on each of nine predictable sets $(P_i)_{i=1, \ldots, 9}$ that constitute a partition of $\Omega \times \mathbb R_+$. (By construction, it will be immediately clear that $Y$ is actually a predictable random field.) On each of these predictable sets we shall show that (Y1) to (Y4) are valid. The reader will notice how the one-dimensional structure of the asset-price process is used in a non-trivial way when defining $Y$. The method certainly does not generalize for the case of multiple assets --- it appears a big challenge to provide a proof in a multi-dimensional setting.
Before we delve into the technicalities of the proof, recall that under condition NA$_1$, any strictly positive predictable random field $Y$ satisfying (Y1), (Y2) and (Y3) is such that $\set{\ell = - \infty} \subseteq \set{\ngo^Y (\ell) \geq 0}$ and $\set{r = \infty} \subseteq \set{\ngo^Y (r) \leq 0}$. This is true in view of Lemma \ref{lem: consequences of na1} --- see also the discussion in \S \ref{subsubsec: growth rates}.
\smallskip
\noindent $\bullet$ We start with the set $P_1 \, := \, \set{\ell = 0, \, r = \infty}$. (All the predictable-set inclusions below are understood to hold on $P_1$, until we move to the next case where they will be understood to hold on $P_2$, and so forth.) Here, $\nabla \g(\ell) = \nabla \g(0) = a$. Since, as explained above, $\set{r = \infty} \subseteq \set{\nabla \g^Y (r) \leq 0}$, we only have to carefully define $Y$ on $\set{a < 0}$. Notice that $\set{\ell = 0, r = \infty} = \set{\mathsf{conv.supp}(\kappa) = [0,\infty)}$, and define $Y_1 \, := \, y_1(a, \kappa, \eta)$, where, with
\[
\delta \, := \, 1 + \frac{4}{\kappa[\mathbb R]} + \inf \set{x \in \mathbb R \
\Big| \ \kappa[(0, x]] \geq \frac{\kappa[\mathbb R]}{2} } \text{ and } b \, := \, \abs{\delta - a + \frac{2}{\eta}}^2,
\]
we set
\[
y_1(a, \kappa, \eta; \, x) \, := \, 1 + \pare{\frac{1}{\sqrt{b} \, \kappa \bra{ (b, \infty)}} \mathbb{I}_{( b, \, \infty)} (x)
- \frac{1}{\sqrt{b} \, \kappa \bra{ (0, \delta]}} \mathbb{I}_{(0, \delta]} (x) } \mathbb{I}_{\set{a < 0}} \text{ for } x \in \mathbb R.
\]
(In the definition of $y_1(a, \kappa, \eta)$, the term $1 / ( \sqrt{b} \, \kappa \bra{ (0, \delta]} )$ is understood to be zero on $\set{\kappa[\mathbb R] = \infty}$.) We shall show below that $Y_1$ satisfies (Y1) through (Y4). On $\set{a \geq 0}$ this is trivial, since $Y_1 = 1$. Therefore, focus will be given only on $\set{a < 0}$ below. First of all, it is easy to see that $Y_1 \geq 1/2$. Indeed, on $\set{\kappa[\mathbb R] = \infty }$ we have $Y_1 \geq 1$; also, on $\set{\kappa[\mathbb R] < \infty}$,
\[
\sqrt{b} \, \kappa \bra{ (0, \delta]} > \delta \kappa \bra{ (0, \delta]} > \frac{4}{\kappa[\mathbb R]} \, \frac{\kappa[\mathbb R]}{2} = 2
\]
holds from the definition of $\delta$. Proceeding, the fact that $Y_1$ is bounded from above coupled with $\int_\mathbb R \pare{|x| \wedge |x|^2} \, \kappa[\mathrm d x] < \infty$ implies $\int_\mathbb R \pare{|x| \wedge |x|^2} Y_1 (x) \, \kappa[\mathrm d x] < \infty$. For the estimate of the distance between $\kappa$ and $\kappa^{Y_1}$ observe that
\[
\int_\mathbb R |Y_1(x) - 1| \, \kappa[\mathrm d x] \leq \frac{2}{\sqrt{b}} \leq \frac{2}{ 2 / \eta} = \eta.
\]
Now, on $\set{\kappa[\mathbb R] = \infty}$ we have $Y_1 \geq 1$ and obviously $\kappa^{Y_1} [\mathbb R] = \infty$; on the other hand, on $\set{\kappa[\mathbb R] < \infty}$ the equality $\kappa^{Y_1} [\mathbb R] = \kappa[\mathbb R]$ follows in a straightforward way from the definition of $Y_1$. Finally, since $\nabla \g (0) = a$, use \eqref{eq: growth der} to estimate
\begin{eqnarray*}
\nabla \g^{Y_1} (0) &=& a + \int_{(b, \infty)} \frac{x}{\sqrt{b} \, \kappa \bra{ (b, \infty)}} \kappa[\mathrm d x] - \int_{(0, \delta]} \frac{x}{\sqrt{b} \, \kappa \bra{ (0, \delta]}} \kappa[\mathrm d x] \\
&\geq& a + \sqrt{b} - \frac{\delta}{\sqrt{b}} \ = \ a - a + 2 / \eta + \delta - \frac{\delta}{\delta - a + 2 / \eta} \ \geq \ 0.
\end{eqnarray*}
(The last inequality follows from $\eta > 0$ and $\delta > 1$, which imply also $\delta - a + 2 / \eta > 1$, since $a < 0$.)
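The above chain of estimates is elementary to confirm numerically. The sketch below (purely illustrative; it fixes the finite measure $\kappa[\mathrm d x] = x^{-3} \mathbb{I}_{(1, \infty)}(x) \mathrm d x$, the drift rate $a = -0.3$ and the tolerance $\eta = 0.05$, a configuration lying in $P_1$) computes $\delta$ and $b$ as above and checks $Y_1 \geq 1/2$, (Y2), (Y3) and $\nabla \g^{Y_1}(0) \geq 0$ by quadrature:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Illustrative data on P1 = {l = 0, r = infinity}: kappa[dx] = x^{-3} dx
# on (1, inf) has finite mass and integrates |x| ^ |x|^2.
a, eta = -0.3, 0.05
k = lambda x: x**-3.0
mass = quad(k, 1.0, np.inf)[0]                     # kappa[R] = 1/2

med = brentq(lambda x: quad(k, 1.0, x)[0] - mass/2, 1.0, 1e3)
delta = 1.0 + 4.0/mass + med                       # delta as in the text
b = (delta - a + 2.0/eta)**2

k_tail = quad(k, b, np.inf)[0]                     # kappa[(b, inf)]
k_low = quad(k, 1.0, delta)[0]                     # kappa[(0, delta]]

def Y1(x):
    if x > b:
        return 1.0 + 1.0/(np.sqrt(b)*k_tail)
    if x <= delta:                                 # support starts at 1 > 0
        return 1.0 - 1.0/(np.sqrt(b)*k_low)
    return 1.0

pieces = [(1.0, delta), (delta, b), (b, np.inf)]   # split at discontinuities
print(min(Y1(2.0), Y1(b + 1.0)) >= 0.5)            # spot-check Y1 >= 1/2
print(sum(quad(lambda x: abs(Y1(x) - 1)*k(x), *p)[0]
          for p in pieces) <= eta)                 # (Y2)
print(np.isclose(sum(quad(lambda x: Y1(x)*k(x), *p)[0]
                     for p in pieces), mass))      # (Y3): mass preserved
grad0 = a + sum(quad(lambda x: x*(Y1(x) - 1)*k(x), *p)[0] for p in pieces)
print(grad0 >= 0.0)                                # (Y4) at l = 0
\end{verbatim}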
\smallskip
\noindent $\bullet$ The situation on $P_2 \, := \, \set{\ell = - \infty, \, r = 0}$ is symmetric to the previous one. With
\[
\delta \, := \, 1 + \frac{4}{\kappa[\mathbb R]} - \sup \set{ x \in \mathbb R \ \Big| \ \kappa[[x, 0)] \geq \frac{\kappa[\mathbb R]}{2} } \text{ and } b \, := \, \abs{\delta + a + \frac{2}{\eta}}^2,
\]
define $Y_2 \, := \, y_2(a, \kappa, \eta)$, where
\[
y_2(a, \kappa, \eta; \, x) \, := \, 1 + \pare{\frac{1}{\sqrt{b} \, \kappa \bra{ ( - \infty, \, - b)}} \mathbb{I}_{( - \infty, \, - b)} (x)
- \frac{1}{\sqrt{b} \, \kappa \bra{ [- \delta, 0)}} \mathbb{I}_{[- \delta, 0)} (x)} \mathbb{I}_{\set{a > 0}} \text{ for } x \in \mathbb R.
\]
One can then follow the exact same steps that we carried out on $P_1$.
\smallskip
\noindent $\bullet$ We now move to the set $P_3 \, := \, \set{\ell = - \infty, \, 0 < r < \infty}$, on which $\mathsf{conv.supp}(\kappa) = [-1/r, 0]$. Since $\ell = - \infty$, we have $\nabla \g(\ell) \geq 0$. Also, on $\set{\kappa[\set{-1 / r}] > 0}$ we have $\mathsf{g}(r) = - \infty$, and $\nabla \g(r) = - \infty$ follows easily. Then, define $Y_3 \, := \, y_3(a, \kappa, \eta)$, where, with
\[
\beta \, := \, \frac{1}{r} \min \set{ \frac{1}{2}, \, \exp \pare{- \frac{2 r}{\kappa[\mathbb R]}}, \, \exp \pare{- \frac{2 r}{\eta}}},
\]
$y_3(a, \kappa, \eta; \, x)$ is for all $x \in \mathbb R$ equal to
\[
1 + \pare{\frac{r}{\kappa[\mathbb R] \log (r \beta) }
+ \mathbb{I}_{(-\frac{1}{r}, \, \beta -\frac{1}{r}]} (x) \int_{x}^{\beta -\frac{1}{r}} \frac{|r|^2}{(1 + rw) \, |\log (1 + rw)|^2 \, \kappa \bra{(-\frac{1}{r}, \, w]}} \mathrm d w} \mathbb{I}_{\set{\kappa \bra{\set{-\frac{1}{r}}} = 0}}.
\]
Since $\log(r \beta) \leq - 2 r / \kappa[\mathbb R]$, we easily get $Y_3 \geq 1 / 2 > 0$. On $\set{\kappa[\mathbb R] = \infty}$, $Y_3 \geq 1$ and $\kappa^{Y_3}[\mathbb R] = \infty$ trivially follows; on the other hand, on $\set{\kappa[\mathbb R] < \infty}$, $\kappa^{Y_3} [\mathbb R] = \kappa[\mathbb R]$ follows as long as one notices that the double integral
\[
\int_{(-1 / r, \, \beta - 1/r]} \pare{\int_{x}^{\beta - 1/r} \frac{|r|^2}{(1 + rw) \, |\log (1 + rw)|^2 \, \kappa \bra{(-1 / r, \, w]}} \mathrm d w} \kappa [\mathrm d x]
\]
is, in view of Fubini's theorem, equal to
\begin{equation} \label{eq: helpful estimate}
\int_{-1 / r}^{\beta - 1/r} \frac{|r|^2}{(1 + rw) \, |\log (1 + r w)|^2} \mathrm d w = r \int_{0}^{r \beta} \frac{1}{w \, |\log w|^2} \mathrm d w = - \frac{r}{\log (r \beta)}.
\end{equation}
The above estimate also implies $\int_\mathbb R \pare{|x| \wedge |x|^2} Y_3(x) \, \kappa[\mathrm d x] < \infty$. Indeed, note that
\[
Y_3(x) \leq 1 + r / (\kappa[\mathbb R] \log (r \beta))
\]
for $x \in \mathbb R \setminus (-1 / r, \, \beta - 1/r]$, while, using the fact that $\beta \leq 1 / (2 r)$, we obtain
\[
\int_{(-1 / r, \, \beta - 1/r]} \pare{|x| \wedge |x|^2} Y_3(x) \, \kappa[\mathrm d x] \leq \frac{1}{r \min \set{1, r}} \int_{(-1 / r, \, \beta - 1/r]} Y_3(x) \, \kappa[\mathrm d x] < \infty.
\]
For estimating the distance between $\kappa$ and $\kappa^{Y_3}$, note that
\[
\int_\mathbb R |Y_3(x) - 1| \, \kappa[\mathrm d x] \leq - 2 r / \log (r \beta) \leq \eta,
\]
which follows from the definition of $\beta$ and the calculations that lead to \eqref{eq: helpful estimate}. We shall now show that $\mathsf{g}^{Y_3} (r) = - \infty$, therefore establishing that $\nabla \g^{Y_3}(r) \leq 0$. Start with the observation that, for $x \in (-1 / r, \, \beta - 1/r]$, integration by parts gives
\begin{eqnarray*}
\log(1 + r x) Y_3 (x) &=& \log (r \beta) + \frac{r}{\kappa[\mathbb R]} - \int_{x}^{\beta - 1/r} \frac{r}{1 + r w} Y_3 (w) \mathrm d w + \\
& & \int_{x}^{\beta - 1/r} \frac{|r|^2}{(1 + r w) \, \log (1 + r w) \, \kappa \bra{(-1 / r, \, w]}} \mathrm d w \\
&\leq& \frac{r}{\kappa[\mathbb R]} + \int_{x}^{\beta - 1/r} \frac{|r|^2}{(1 + r w) \, \log (1 + r w) \, \kappa \bra{(-1 / r, \, w]}} \mathrm d w. \\
\end{eqnarray*}
The above estimate and Fubini's theorem imply that $\int_{(-1/r, \, \beta - 1/r]} \log(1 + r x) Y_3 (x) \, \kappa[\mathrm d x]$ is bounded from above by the quantity
\[
\frac{r \kappa[(-1/r, \, \beta - 1/r]]}{\kappa[\mathbb R]} + |r|^2 \int_{-1/r}^{\beta - 1/r} (1 + rw)^{-1} \, \log^{-1} (1 + rw) \mathrm d w = - \infty.
\]
This last fact, together with the definition of $\g^{Y_3}$ and $\int_\mathbb R \pare{|x| \wedge |x|^2} \, \kappa[\mathrm d x] < \infty$, gives $\g^{Y_3} (r) = - \infty$. Of course, $\nabla \g^{Y_3} (\ell) \geq 0$ follows because $\ell = - \infty$.
\smallskip
\noindent $\bullet$ The situation on $P_4 \, := \, \set{- \infty < \ell < 0, \, r = \infty}$ is symmetric to $P_3$ and, therefore, details will be omitted. Just define $Y_4 \, := \, y_4 (a, \kappa, \eta)$, where, with
\[
\beta \, := \, \frac{1}{\ell} \min \set{ \frac{1}{2} , \, \exp \pare{ \frac{2 \ell}{\kappa[\mathbb R]}}, \, \exp \pare{ \frac{2 \ell}{\eta}}},
\]
$y_4 (a, \kappa, \eta; \, x)$ is for all $x \in \mathbb R$ equal to
\[
1 + \pare{\frac{\ell}{\kappa[\mathbb R] \log (\ell \beta) }
+ \mathbb{I}_{(\beta - \frac{1}{\ell}, \, - \frac{1}{\ell}]} (x) \int_{\beta - \frac{1}{\ell}}^{x} \frac{|\ell|^2}{(1 + \ell w) \, |\log (1 + \ell w)|^2 \, \kappa \bra{[w, - \frac{1}{\ell})}} \mathrm d w} \mathbb{I}_{\set{\kappa \bra{\set{-\frac{1}{\ell}}} = 0}}.
\]
\smallskip
\noindent $\bullet$ We now move to $P_5 \, := \, \set{\ell = 0, \, 0 < r < \infty}$. Here, we shall use a combination of the work we carried out for $P_1$ and $P_3$. Remembering the definitions of the deterministic functionals $y_1$ and $y_3$, define
\[
Y_5 \, := \, y_1 \pare{a^{y_3 (a, \kappa, \eta/2)}, \kappa^{y_3 (a, \kappa, \eta/2)}, \, \eta / 2} \, y_3 (a, \kappa, \eta/2).
\]
The definition of $Y_5$ is essentially realized in two steps. First there is a change according to $y_3$. This forces $\mathsf{g}^{y_3 (a, \kappa, \eta/2)} (r) = - \infty$ as on $P_3$. Also, (Y1), (Y2) and (Y3) hold, with $\eta / 2$ replacing $\eta$ in (Y2). In the second step there is a change using $y_1$. Since $y_1(a^{y_3 (a, \kappa, \eta/2)}, \kappa^{y_3 (a, \kappa, \eta/2)}, \, \eta / 2; x) = 1$ for all $x \in (- \infty, 0)$, $\mathsf{g}^{Y_5} (r) = - \infty$ (and, therefore, $\nabla \g^{Y_5}(r) \leq 0$) still holds, while now it is also the case that $\nabla \g^{Y_5} (\ell) \geq 0$, as was the case on $P_1$. It is clear that $Y_5 > 0$ (since both of the predictable random fields appearing in the definition of $Y_5$ are strictly positive), and that (Y1) to (Y4) all hold.
\smallskip
\noindent $\bullet$ On $P_6 \, := \, \set{- \infty < \ell < 0, \, r = 0}$, define
\[
Y_6 \, := \, y_2 \pare{a^{y_4 (a, \kappa, \eta/2)}, \kappa^{y_4 (a, \kappa, \eta/2)}, \, \eta / 2} \, y_4 (a, \kappa, \eta/2).
\]
The situation is symmetric to the one on $P_5$ --- just follow the exact same reasoning.
\smallskip
\noindent $\bullet$ Moving to $P_7 \, := \, \set{- \infty < \ell < 0 < r < \infty}$, we shall use a combination of the treatment on $P_3$ and $P_4$. Define
\[
Y_7 \, := \, y_3 \pare{a^{y_4 (a, \kappa, \eta/2)}, \kappa^{y_4 (a, \kappa, \eta/2)}, \, \eta / 2} \, y_4 (a, \kappa, \eta/2).
\]
The validity of (Y1), (Y2), (Y3) and (Y4) follows by the same reasoning as carried out on the set $P_5$.
\smallskip
\noindent $\bullet$ On $P_8 \, := \, \set{\ell = 0, \, r = 0} \subseteq \set{\nabla \g(0) = 0}$ there is no need to do anything: simply set $Y_8 \, := \, 1$.
\smallskip
\noindent $\bullet$ Finally, on $P_9 \, := \, \set{\ell = - \infty, \, r = \infty} = \set{\mathsf{conv.supp}(\kappa) = \emptyset}$ there is also no need to do anything; set $Y_9 \, := \, 1$. Indeed, we either have $c = 0$, which implies that $a =0$ and, therefore, $\nabla \g(- \infty) = \nabla \g(+\infty) = 0$, or $c > 0$, in which case $\nabla \g(- \infty) = \infty$ and $\nabla \g(+\infty) = -\infty$.
\bibliographystyle{siam}
\section{Introduction}
The quest to discover the conjectured critical point of the QCD phase diagram is a central motivation of modern heavy-ion collision experiments at collider facilities, such as the Large Hadron Collider at CERN and the Relativistic Heavy-Ion Collider (RHIC) at Brookhaven National Laboratory. In the beam energy scan currently executed at RHIC, the phase diagram of QCD is explored over a wide range of temperatures and baryon densities by depositing different amounts of energy in the initial collision volume. As the fireball expands and cools, the efficient exchange of energy and momentum among quarks and gluons leads to local thermalization over time. The question to answer is: if a critical point exists and some of the volume of the fireball evolves close to it, does the dynamical buildup of long range fluctuations leave any discernible mark on the yields of measurable particles?
Understanding the out-of-equilibrium dynamics of heavy-ion collisions thus remains one of the most pressing theory challenges in heavy-ion physics. So far, genuinely nonperturbative \textit{ab-initio} calculations of the equilibration process of the quark-gluon plasma and the dynamics close to the phase transition remain out of reach.
In order to make progress, we therefore set out to shed light onto pertinent aspects of the physics of dynamical thermalization in heavy-ion collisions by deploying a low-energy effective theory of QCD, the two-flavor quark-meson model. This model incorporates the off-shell dynamics of the lowest mass states in QCD, the pseudoscalar pions, the scalar sigma-mode, and the
light up and down quarks. Further degrees of freedom, in particular the gluons, heavier quark flavors, as well as higher-mass hadronic resonances, carry masses $\gtrsim 500$\,MeV and are neglected here. This low-energy effective theory reflects the central and physically relevant feature of low-energy QCD: chiral symmetry breaking in vacuum and its restoration at finite temperature and density. At its critical endpoint, the model is expected to lie in the same universality class as QCD and hence constitutes a viable low-energy effective theory to explore dynamical critical phenomena in QCD at finite temperature and density at scales $\lesssim 500$\,MeV.
In the present work, we consider the real-time dynamics of the two-flavor quark-meson model with small current quark masses in a nonexpanding scenario; for progress on the out-of-equilibrium quark-meson model, see \cite{Berges:2002wr, Berges:2009bx, Berges:2010zv, Berges:2013oba}.
In the presence of such an explicit chiral symmetry breaking, the equilibrium chiral transition at finite temperature is a crossover as confirmed for QCD at vanishing and small density; for recent results, see \cite{Bonati:2018nut, Borsanyi:2018grb, Ding:2019prx}. By the help of different initial conditions defined via the initial occupations of sigma and quark fields, we map out the thermalization dynamics for different regions of the phase diagram.
This allows us, for the first time, to fully study the thermalization dynamics, including that of the order parameters of chiral symmetry. An extension of the present study to the scenario of an expanding fireball should give access to the freeze-out physics of heavy-ion collisions.
The evolution toward thermal equilibrium is viewed through the lens of the one- and two-point functions of the theory, which are computed with the two-particle irreducible (2PI) approach by means of their quantum equations of motion. These correlation functions not only provide complementary order parameters for the study of chiral symmetry restoration but also give direct access to the spectral properties, including the quasiparticle content of the system. Being genuine nonequilibrium quantities they map out the whole time evolution of the system including the physics of the crossover transition in the late-time limit.
This paper is organized as follows. In Section~\ref{sec:model} we briefly review the quark-meson model and give an overview of our nonequilibrium and nonperturbative treatment. The numerical setup for the time evolution starting from free-field initial conditions quenched to a highly nonequilibrium environment is described. In Section~\ref{sec:spectra} we discuss the spectral functions of the bosonic and fermionic degrees of freedom, which provide information about the masses as well as the lifetimes of the dynamical degrees of freedom. We investigate the late-time limit of our simulations, which reveals the dynamical emergence of the fluctuation-dissipation relation and hence allows us to define a thermalization temperature. Finally, Section~\ref{sec:field} covers the results for the sigma field describing the order parameter of the quark-meson model. We further discuss the behavior of different order parameters in equilibrium which lead to a consistent pseudocritical temperature.
In Section~\ref{sec:conclusion} we conclude with a summary. Appendix~\ref{app:eom} provides details about the evolution equations of the model including the relevant expressions for the deployed approximation scheme.
\section{The quark-meson model}
\label{sec:model}
QCD evolves from a theory of dynamical quarks and gluons at large momentum scales, the fundamental degrees of freedom, to a theory of dynamical hadrons at low momentum scales. This transition of the dynamical degrees of freedom is related to the mass gaps of the respective fields. It is by now well understood that the gluon degrees of freedom start to decouple at about 1\,GeV, that is, above the chiral symmetry breaking scale $k_\chi$ of about 400\,MeV.
Most of the hadron resonances are too heavy to take part in the off-shell dynamics, and we are left with the up, down and, to some extent, the strange quarks, as well as the pions and the scalar sigma mode; for details, see \cite{Pawlowski:2014aha, Fu:2019hdw}. Indeed, low-energy effective theories emerge naturally at low momentum scales from first principle QCD, and their systematic embedding leads us to the quark-meson model and its Polyakov loop enhanced version as QCD-assisted low-energy effective theories. While its quantitative validity has been proven for momentum scales $k$ with $k\lesssim 300$\,MeV \cite{Alkofer:2018guy}, it reproduces qualitative QCD features up to $k\lesssim 700$\,MeV. It is this natural QCD embedding as well as its robust QCD-type chiral properties that has triggered a plethora of works with the quark-meson model on the QCD phase structure with functional methods, see e.g.~\cite{Berges:1998sd, Schaefer:2004en, Skokov:2010wb, Herbst:2010rf, Kamikado:2012cp, Fu:2016tey}.
More recently also real-time correlation functions in equilibrium have been investigated in e.g.\ \cite{Floerchinger:2011sc, Tripolt:2013jra, Pawlowski:2015mia, Jung:2016yxl, Yokota:2016tip, Pawlowski:2017gxj, Yokota:2017uzu, Wang:2017vis, Tripolt:2018qvi, Wang:2018osm, Jung:2019nnr}.
(Pre-)Thermalization has been studied in the $O(N=4)$ symmetric scalar model coupled to fermions using a two-loop expansion to next-to-leading order of the 2PI effective action in \cite{Berges:2002wr,Berges:2004ce}. The model was studied extensively in Refs.~\cite{Berges:2009bx, Berges:2010zv} in the context of inflaton dynamics to describe nonequilibrium instabilities with fermion production from inflaton decay. In \cite{Berges:2013oba}, the model was investigated for highly occupied bosonic fields, where the predictions were shown to agree well with lattice simulation results in the classical-statistical regime. Further results
for spectral functions in and out of equilibrium with 2PI effective action techniques can be found in \cite{Shen:2019jhl}, and with classical-statistical simulations in \cite{PineiroOrioli:2018hst,Boguslavski:2019ecc, Schlichting:2019tbr} for scalar theories, and in \cite{Boguslavski:2018beu} for Yang-Mills theory.
In this work, we build on these results and investigate the nonequilibrium evolution of the two-flavor quark-meson model: we consider two light quark flavors with isospin symmetry, up and down quarks with an identical current quark mass $m_{u/d}=m_\psi$, coupled to a scalar mesonic field $\sigma$ and a triplet of pseudoscalar ``pions'' $\pi^\alpha$ ($\alpha=1,2,3$) through a Yukawa coupling $g$. The classical action reads
\begin{widetext}
\begin{align}
S[ \bar{\psi}, \psi, \sigma, \pi]
&= \int \diff ^4 x \,
\Big[
\bar{\psi} \left(i \gamma^\mu \partial_\mu - m_\psi \right) \psi
- \dfrac{g}{N_f} \bar{\psi} \left(\sigma + i \gamma_5 \tau^\alpha \pi^\alpha \right) \psi
+ \frac{1}{2} \left( \partial_\mu \sigma \partial^\mu \sigma + \partial_\mu \pi^\alpha \partial^\mu \pi ^\alpha \right)
\nonumber\\[1em]&\hspace{6cm}
- \dfrac{1}{2} m^2 \left(\sigma^2 + \pi^\alpha \pi^\alpha\right)
- \dfrac{\lambda}{4!N} \left(\sigma^2 + \pi^\alpha \pi^\alpha\right)^2
\Big]\,,
\label{eq:action}
\end{align}
\end{widetext}
with $ \tau^\alpha $ ($ \alpha=1,2,3 $) denoting the Pauli and $ \gamma^\mu $ ($ \mu=0,1,2,3 $) the Dirac matrices, while spinor and flavor indices are suppressed. In \eqref{eq:action}, $m_\psi$ is the current quark mass and $ m^2 $ the mesonic mass parameter. The lowest mass states of the mesonic scalar-pseudoscalar multiplet, $\sigma$ and $\vec\pi$, are given by the $ N=4 $ scalar components of the bosonic field $ \varphi_a (x) = \{ \sigma(x), \pi^1(x), \pi^2(x), \pi^3(x)\} $ interacting via a quartic self-coupling $ \lambda $.
The boson fields $ \varphi_a $ are coupled to the fermion fields $ \psi $ and $ \bar{\psi} = \psi ^ \dagger \gamma^0 $ via the Yukawa interaction $ g $, which we also express in terms of $ h = g/N_f $.
The $ \pi $ mesons play the role of the light Goldstone bosons in the chirally broken phase whereas the $ \sigma $ meson represents the heavy mode. Assigning these roles to the components of the scalar field is achieved by choosing a coordinate system in field space where the field expectation value has a single component which defines the $ \sigma $ direction, i.e. $ \phi_a(x) = \braket{\varphi(x)}=\{ \braket{\sigma(x)}, 0, 0, 0\} $.
The quasiparticle excitation spectrum of the quark-meson model is encoded in the spectral functions of the respective fields. For the bosonic and fermionic fields, the spectral function is defined as the expectation value of the commutator and anticommutator, respectively,
\begin{align}\nonumber
\rho^\phi_{ab}(x,y)
&=
\ i \ \braket{[{\varphi}_a(x), {\varphi}_b(y)]}\,,\\[1ex]
\rho^\psi_{AB}(x,y)
&=
\ i \ \braket{\{{\psi}_A(x), {\bar{\psi}}_B(y)\}}\,,
\label{eq:spectral_functions} \end{align}
where $ a, b = 1, \dots, N$ denote field-space indices and $ A,B = 1, \dots, 4 $ Dirac spinor indices. Fermion flavor indices are omitted and the operator nature of the quantum fields is implied.
We consider systems with spatial isotropy and homogeneity such that the spectral functions depend on times and relative spatial coordinates, i.e. $ \rho(t, t', |\mathbf{x} - \mathbf{y}|) $ or in momentum space $ \rho(t, t', |\mathbf{p}|) $, while the field expectation value only depends on time, i.e. $ \braket{\sigma(t)} $.
Due to the remaining $ O(N-1) $ symmetry of the chirally broken model, the bosonic spectral function can be written as $ \rho^\phi_{ab} = \diag (\rho_\sigma, \rho_\pi, \rho_\pi, \rho_\pi)$ where the components $ \rho_i $ with $ i = \sigma, \pi $ describe the respective mesons.
The fermionic spectral function can be decomposed into Lorentz components according to
\begin{align}
\rho^\psi &=
\rho_S
+ i \gamma_5 \rho_P
+ \gamma_\mu \rho_V^\mu
+ \gamma_\mu \gamma_5 \rho_A^\mu
+ \frac{1}{2} \sigma_{\mu\nu} \rho_T^{\mu \nu}
\end{align}
with $ \sigma_{\mu\nu} = \frac{i}{2} \left[\gamma_\mu, \gamma_\nu \right] $ and $ \gamma_5 = i \gamma^0 \gamma^1\gamma^2\gamma^3 $.
The corresponding \textit{Lorentz components} are given by
\begin{align}\nonumber
\rho_S
&= \frac{1}{4}\Tr \left[ \rho^\psi\right] \,, &
\rho_P
&= \frac{1}{4}\Tr \left[ -i \gamma_5 \rho^\psi\right]
\,, \\[1ex]\nonumber
\rho_V^\mu
&= \frac{1}{4}\Tr \left[ \gamma^\mu \rho^\psi\right]
\,, &
\rho_A^\mu
&= \frac{1}{4}\Tr \left[ \gamma_5 \gamma^\mu \rho^{\psi}\right]
\,, \\[1ex]
\rho_T ^{\mu \nu}
&= \frac{1}{4}\Tr \left[ \sigma^{\mu \nu}\rho^{\psi}\right]\,,
\label{eq:Lorentz_components}\end{align}
where the trace acts in Dirac space.
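These projections follow from the trace orthogonality of the Dirac basis: the unit matrix has trace four, while
\[
\Tr \left[ \gamma_5 \right] = \Tr \left[ \gamma^\mu \right] = \Tr \left[ \gamma^\mu \gamma_5 \right] = \Tr \left[ \sigma^{\mu\nu} \right] = 0\,,
\qquad
\frac{1}{4} \Tr \left[ \gamma^\mu \gamma^\nu \right] = g^{\mu\nu}\,,
\]
so that each trace in \eqref{eq:Lorentz_components} projects onto a single term of the decomposition.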
In spatially homogeneous and isotropic systems with parity and CP invariance, the only nonvanishing components are the scalar, vector and $ 0i $-tensor components. Rotational invariance allows us to write
\begin{align}\nonumber
\rho_S(x^0, y^0, \mathbf{p})
&= \rho_S(x^0, y^0, |\mathbf{p}|)\,, \\[1ex]\nonumber
\rho_V^0 (x^0, y^0, \mathbf{p})
&= \rho_0(x^0, y^0, |\mathbf{p}|)\,, \\[1ex]\nonumber
\rho_V^i (x^0, y^0, \mathbf{p})
&= \dfrac{p^i}{|\mathbf{p}|}\, \rho_V(x^0, y^0, |\mathbf{p}|)\,, \\[1ex]
\rho_T^{0i} (x^0, y^0, \mathbf{p})
&= \dfrac{p^i}{|\mathbf{p}|} \,\rho_T(x^0, y^0, |\mathbf{p}|)\,,
\label{eq:fermion_components}%
\end{align}
where we refer to the two-point functions
$ \rho_S$, $\rho_0$, $ \rho_V$ and $\rho_T $ on the right-hand sides as the scalar, vector, vector-zero and tensor components. The relevant contributions to the quark spectral function are the scalar, vector-zero and vector components, where the vector-zero component represents the quark excitations of the system \cite{Tripolt:2018qvi,Kitazawa:2006zi}. For chiral symmetric theories with $ m_\psi = 0 $ the scalar and tensor components vanish.
The spectral functions also encode the equal-time commutation and anticommutation relations of the quantum theory, implying that
\begin{align}
i \partial_t \rho^\phi(t, t', |\mathbf{p}|)\Big|_{t=t'} = 1\,, &&
\rho_0 (t,t, |\mathbf{p}| ) = i\,,
\label{eq:commutation_relations}
\end{align}
while all other fermion components vanish at equal time.
In addition to the spectral functions, we may consider the so-called \textit{statistical functions}. These are the anticommutator and commutator expectation values,
\begin{align}\nonumber
F^\phi(x,y)
&=
\frac{1}{2} \braket{\{\varphi(x),\varphi(y)\}} - \phi(x) \phi(y)\,,
\\[1ex]
F^\psi(x,y) &=
\frac{1}{2} \braket{[\psi(x),\bar{\psi}(y)]}\,,
\label{eq:statistical_functions}
\end{align}
where field space, Dirac, and flavor indices are suppressed. The statistical functions carry information about the particle density of the system, i.e., the occupation of the available modes in the system.
Together, the spectral and statistical functions fully describe the time-ordered connected two-point correlation function, commonly denoted as $G(x,y)=\braket{T \varphi(x)\varphi(y)}
-\braket{ \varphi(x)}
\braket{ \varphi(y)}$ for the bosonic and $\Delta(x,y)=\braket{T\psi(x)\bar{\psi}(y)}$ for the fermionic sector. Note that in nonequilibrium settings, the time-ordering occurs along the closed time path also known as Schwinger-Keldysh contour.
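In the conventions used here this decomposition takes the standard form (see e.g. \cite{Berges2004})
\[
G(x,y) = F^\phi(x,y) - \frac{i}{2} \, \rho^\phi(x,y) \, \operatorname{sgn}_{\mathcal{C}} (x^0 - y^0)\,,
\]
with the analogous relation between $\Delta$, $F^\psi$ and $\rho^\psi$, where $\operatorname{sgn}_{\mathcal{C}}$ denotes the sign function along the contour.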
\subsection{2PI effective action real-time formalism at NLO}
\begin{figure*}[t]
\centering
\includegraphics[width=.8\textwidth]{Diagrams}
\caption{2PI diagrams at NLO in $ 1/N $ and $ g $. Full lines represent boson propagators, crossed circles macroscopic field insertions and dashed lines fermion propagators.
The first two-loop diagram in the first row corresponds to the leading-order contribution in $ 1/N $. The last diagram in the second row shows the fermion boson loop. The other diagrams in the first and second row depict the infinite series of NLO diagrams in $ 1/N $.
}
\label{fig:Diagrams}
\end{figure*}
One can derive closed and nonsecular evolution equations for the one- and two-point functions of the quark-meson model out of equilibrium. These equations follow from the 2PI effective action $\Gamma[\phi,G,\Delta]$, the quantum counterpart of the classical action $S[ \bar{\psi}, \psi, \sigma, \pi]$, via a variational principle (see e.g. \cite{Berges2004}).
The 2PI effective action of the quark-meson model can be written as
\begin{align}
\Gamma[\phi, G, \Delta]
&=
S[\phi]
+ \dfrac{i}{2} \Tr \ln \left[G^{-1}\right ]
+ \dfrac{i}{2} \Tr \left[G^{-1}_{\text{cl}}(\phi) G\right ]
\nonumber\\[1em]& \quad
-i \Tr \ln \left[\Delta^{-1}\right ]
-i \Tr \left[\Delta^{-1}_{\text{cl}}(\phi) \Delta\right ]
\nonumber\\[1em]& \quad
+ \Gamma_2[\phi, G , \Delta] + \text{const}. \,, \label{eq:2PIeffectiveAction_QMM}
\end{align}
where $ S $ is the classical action given by \eqref{eq:action}, and $ G^{-1}_{\mathrm{cl}} $ and $ \Delta^{-1}_{\mathrm{cl}} $ are the classical meson and quark propagators derived from it. Traces, logarithms and products have to be evaluated in the functional sense.
The term $ \Gamma_2[\phi, G, \Delta] $ contains two-loop and higher order quantum fluctuations that correspond to 2PI diagrams.
The relevant evolution equations for the one- and two-point functions have the form (explicit expressions can be found in Appendix~\ref{app:eom}):
\begin{align}\nonumber
\left[
\square _x + M^2(x)
\right]
\phi(x)
&=
\int_{0}^{x^0}\!\!\!\! \diff z\,
\Sigma_\phi(x,z) \phi(z)
+ J_\phi(x)
\,,\\[1ex]\nonumber
\left[
\square _x + M^2_\phi(x)
\right]
\rho^\phi(x,y)
&=
\int_{y^0}^{x^0} \diff z\,
\Sigma^\phi_\rho(x,z)
\rho^\phi(z,y)\,,\\[1ex]
\left[i \slashed{\partial}_x + M_\psi(x)
\right]
\rho^\psi(x,y)
&=
\int_{y^0}^{x^0} \diff z\,
\Sigma^\psi_\rho(x,z)
\rho^\psi(z,y)\,,
\label{eq:eom_sketch}%
\end{align}
with shorthand notation $\int_{t_1}^{t_2}{\diff z} \equiv \int_{t_1}^{t_2}{\diff z^0 \int{\diff^3 z}}$ and the dependence of the self-energies $ \Sigma_i $ on $\phi,G, \Delta $ is implied. Similar expressions hold for the statistical functions.
On the left-hand side, the Klein-Gordon or Dirac operator acts on the corresponding expectation value; the effective masses thereby take local quantum corrections into account.
On the right-hand side, the effects of quantum fluctuations appear in so-called memory integrals, which encode the generally non-Markovian influence of fluctuations in the past. The source term $ J _\phi$ in the field equation arises in the chirally broken case and describes the fermion backreaction on the field. It pushes the field to nonzero expectation values even if $ \phi$ and $ \partial_t\phi $ vanish at initial time.
In order to carry out explicit computations, the self-energies $\Sigma_i$ need to be approximated. Here we deploy an expansion to next-to-leading order (NLO) in $ 1/N $ for the bosons, where $ N $ is the number of scalar field components, and a NLO expansion in $ g $, the Yukawa coupling. The large $ N $ expansion provides a controlled nonperturbative approximation scheme, which at NLO includes scattering as well as off-shell and memory effects, capable of handling relatively large couplings \cite{Berges:2001fi}.
The loop expansion in $ g $ to NLO contributes with a fermion-boson loop originally discussed in \cite{Berges:2002wr}. The 2PI diagrams contributing in this approximation are sketched in Figure~\ref{fig:Diagrams}.
The explicit equations of motion are presented in Appendix~\ref{app:eom}, where also the self-energy expressions for the given approximation scheme are provided. To study the time evolution of the system, we iteratively solve the equations of motion without further approximations.
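To make the structure of this iterative solution concrete, the following minimal sketch evolves a single momentum mode of a bosonic spectral function according to the second equation in \eqref{eq:eom_sketch}. It is a toy model only: the exponential memory kernel stands in for the actual NLO self-energy, the parameter values are illustrative, the spatial momentum is absorbed into the mass term, and a real normalization replaces the equal-time commutation relation.
\begin{verbatim}
import numpy as np

# Toy parameters (illustrative only): effective mass squared (with p^2
# absorbed), model kernel strength and decay rate, grid settings
M2, lam, gam = 1.0, 0.5, 0.3
dt, nsteps = 0.01, 2000

def sigma_rho(t, z):
    # Model memory kernel standing in for the NLO self-energy
    return -lam**2 * np.exp(-gam * (t - z))

# Evolve rho(t, t') in t at fixed t' = 0, with the equal-time conditions
# rho(t', t') = 0 and d/dt rho|_{t=t'} = 1 (toy normalization)
rho = np.zeros(nsteps)
rho[1] = dt  # first step from the equal-time derivative

for n in range(1, nsteps - 1):
    t = n * dt
    z = np.arange(n + 1) * dt
    # Memory integral over the full time history (trapezoidal rule);
    # its cost grows with n, reflecting the quadratically growing
    # storage requirements discussed below
    mem = np.trapz(sigma_rho(t, z) * rho[: n + 1], z)
    # Leap-frog step for  d^2/dt^2 rho + M^2 rho = memory integral
    rho[n + 1] = 2 * rho[n] - rho[n - 1] + dt**2 * (mem - M2 * rho[n])
\end{verbatim}
A production solver additionally tracks the dependence on the second time argument, the statistical functions and the coupled fermionic equations; cf. Section~\ref{sec:numerics_QMM}.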
\subsection{Initial conditions}
The derivation of the nonequilibrium 2PI effective action and the equations of motion following from it rely on the assumption of a Gaussian initial state. This corresponds to a system initially exhibiting the characteristics of a noninteracting theory. However, higher order correlation functions build up during the subsequent time evolution.
While this appears at first sight to correspond to a very limited choice of initial conditions, it still allows for a wide variety of different configurations through which we can determine for instance the energy density $\varepsilon_{\rm init}$ at the beginning of our computation. In particular, the Gaussian initial state represents a genuine nonequilibrium state in the fully interacting nonequilibrium system, in which the time evolution takes place.
We allow for spontaneous symmetry breaking by using a negative mesonic bare mass squared $ m^2 <0$ in the classical potential of the system. Since the initial state is determined by a free theory with $ m^2 = m^2_\mathrm{init} > 0 $, the sign flip of $ m^2 $ leads to a quench of the classical potential from positive to negative curvature in the first time step. At initial time, the classical potential is minimal at vanishing field expectation value while the minimum at $ t>0 $ becomes nonzero by taking $ m^2 < 0 $.
A Gaussian initial state can be fully specified in terms of the one- and two-point functions. Since the field evolution equation involves second order time derivatives, one has to specify both the sigma field value and its initial time derivative. We choose the latter to vanish and refer to the initial field expectation value as $ \sigma_0 $,
\begin{align}
\sigma(t=0) = \sigma_0\,, && \partial_t \sigma(t) \Big|_{t=0} = 0\,,
\end{align}
where $ \sigma(t) $ now denotes the expectation value of the sigma field.
As pointed out above, due to the presence of a finite bare quark mass $m_\psi$ the field can move away from $\sigma_0=0$ due to the backreaction with the fluctuations of the theory.
We specify the initial conditions for the two-point functions in terms of the spectral and statistical components. The initial conditions for the bosonic (fermionic) spectral functions are fully determined by the equal-time commutation and (anti)commutation relations \eqref{eq:commutation_relations}.
For the remaining statistical functions we employ free-field expressions with a given initial particle number.
The bosonic statistical function then reads
\begin{align}
F_i(t, t', |\mathbf{p}| )
&= \dfrac{n_i(t, |\mathbf{p}|) + \frac{1}{2}}{\omega_i(t, |\mathbf{p}|) }
\cos\left[ \omega_i(t, |\mathbf{p}|)(t-t')\right]\,,
\end{align}
with $ i = \sigma, \pi $ and where at initial time $ t = t' = 0 $ the dispersion is set to $ \omega_i(0, |\mathbf{p}|) = \sqrt{|\mathbf{p}|^2 + m_\mathrm{init}^2}$ with initial mass squared $ m^2_\mathrm{init}>0 $ and the particle distribution given by $ n_i(0, |\mathbf{p}|) = 0 $. For the fermions the free statistical function can be written as
\begin{align}
F^\psi(t, t, |\mathbf{p}|) = \dfrac{- \gamma^i p_i + m_\psi}{\omega_\psi(t,|\mathbf{p}| )}
\left(
\frac{1}{2} - n_\psi(t, |\mathbf{p}|)
\right )\,,
\end{align}
where we choose the initial dispersion to be $ \omega_\psi(0, |\mathbf{p}|) = \sqrt{|\mathbf{p}|^2 + m_\psi^2}$ and the initial particle distribution to be constant, i.e. $ n_\psi(0, |\mathbf{p}|) = n_0 $.
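As an illustration, the free-field initial data above can be tabulated directly. The following minimal sketch fills the equal-time values; the grid, parameter values and array names are ours, and the overall sign of the vector component depends on the metric convention.
\begin{verbatim}
import numpy as np

# Illustrative momentum grid and parameters
p = np.linspace(0.05, 10.0, 200)          # |p| grid
m2_init, m_psi, n0 = 0.0047, 0.15, 0.8

# Bosonic statistical function at t = t' = 0 with n_i(0, p) = 0
omega_phi = np.sqrt(p**2 + m2_init)
F_phi = (0.0 + 0.5) / omega_phi

# Fermionic scalar and vector components of
#   F^psi(0, 0, p) = (-gamma^i p_i + m_psi)/omega_psi * (1/2 - n0)
omega_psi = np.sqrt(p**2 + m_psi**2)
F_S = m_psi / omega_psi * (0.5 - n0)
F_V = p / omega_psi * (0.5 - n0)   # coefficient of p^i/|p| in F_V^i;
                                   # sign is convention dependent
\end{verbatim}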
\begin{figure*}[t]
\centering
\includegraphics[width=0.6\textwidth]{ThermalizationSketch.pdf}
\caption{Sketch of the setup deployed in this study. We consider the real-time evolution from nonequilibrium initial states characterized by an energy density sourced either through a finite $\sigma$ field expectation value (blue circle) or a nonzero occupancy of fermionic modes (orange triangle). Depending on the initial energy contained in the system, one of three discernible final states, the chirally broken phase, the crossover regime, or the (almost) symmetric phase, is approached.
}
\label{fig:Initial_setup}
\end{figure*}
The energy contained in the initial state via $\varepsilon_{\rm init}$ determines the temperature at which the system thermalizes. By preparing different initial conditions, we can study the thermalization process toward different temperatures and hence phases of the model as sketched in Figure~\ref{fig:Initial_setup}.
\subsection{Numerical implementation}
\label{sec:numerics_QMM}
As is customary in the context of the 2PI effective action, we discretize the system on the level of the equations of motion \eqref{eq:eom_sketch}. The explicit form of the equations allows us to deploy a leap-frog scheme, where in particular the fermionic two-point functions are discretized in a temporally staggered fashion. The two-point functions, as the name suggests, carry an explicit dependence on two temporal coordinates. Since the memory integrals contain the full time history, the required memory grows quadratically with the number of time steps. In order to keep the computation manageable we reduce the memory burden by exploiting isotropy and homogeneity, which reduces the effective spatial dimensions to one. A modified Fourier transform based on Hankel functions allows us to evaluate the self-energy contributions in coordinate space and to simplify the convolutions in the memory integrals in momentum space. For this project we extended the code used in Ref.~\cite{Berges:2009bx} to include the additional nonvanishing fermionic two-point functions present in our setup; the source code for this project is available via the Zenodo repository under \cite{shen_linda_2020_3698136}.
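For isotropic functions in three spatial dimensions, the Hankel-type transform reduces to the one-dimensional sine transform $\tilde f(|\mathbf p|) = (4\pi/|\mathbf p|) \int_0^\infty \mathrm d r \, r \sin(|\mathbf p| r) f(r)$. A minimal sketch with a plain trapezoidal quadrature (not the optimized transform of the production code; grids and names are illustrative) reads:
\begin{verbatim}
import numpy as np

def radial_fourier_3d(f_r, r, p):
    # 3D Fourier transform of an isotropic function f(|x|):
    #   f~(p) = (4 pi / p) * int dr r sin(p r) f(r),
    # evaluated with the trapezoidal rule; f_r, r, p are 1D arrays
    integrand = r[None, :] * np.sin(p[:, None] * r[None, :]) * f_r[None, :]
    return 4.0 * np.pi / p * np.trapz(integrand, r, axis=1)

# Check on a Yukawa profile: e^{-m r}/(4 pi r)  ->  1/(p^2 + m^2)
m = 1.0
r = np.linspace(1e-4, 40.0, 4000)
p = np.linspace(0.1, 5.0, 50)
ft = radial_fourier_3d(np.exp(-m * r) / (4.0 * np.pi * r), r, p)
\end{verbatim}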
In the spirit of effective field theories, we choose a UV cutoff at a high enough momentum scale. Below this scale, we consider quantum and statistical fluctuation within the 2PI framework. The ultraviolet parameters of our effective field theory are cutoff dependent and chosen such that physical observables, i.e., mass ratios and the pion decay constant, are reproduced.
The numerical time evolution is computed using a spatial grid with $ N_x = 200$ lattice points and a lattice spacing of $ a_x = 0.2 $. The time step size is chosen to be $ a_t = 0.05\, a_x$ guaranteeing energy conservation at the level of a few percent for the times analyzed. In the following all dimensionful quantities will be given in units of the pseudocritical temperature $ T_{pc} $, which has the value $ T_{pc} = 1.3\, a_x^{-1}$ determined according to the procedure described in Section~\ref{sec:equilibrium}; see Figure~\ref{fig:order_parameters}. Subsequently, we use $ m $ to denote the dimensionless ratio $ m / T_{pc} $ and likewise for all other dimensionful quantities.
Interactions between the macroscopic field, the bosonic and the fermionic propagators lead to an exchange of energy between the different sectors. To observe an efficient energy exchange and equilibration process at computationally accessible times, it is necessary to study large couplings. We choose the quartic self-coupling $ \lambda = 90.0 $, the Yukawa coupling $ g=5.0 $, the bare mass squared $ m^2 = -0.0047 $ and the bare fermion mass $ m_\psi = 0.15 $.
These parameters not only allow us to observe the equilibration of the system on time scales accessible computationally but also lead to reasonable values for the observables when compared to the phenomenological values known at $ T=0 $, where the pion decay constant is $ f_\pi \simeq \SI{93.5}{MeV} $, the meson masses are $ m_\sigma \simeq \SI{400}{MeV} $ and $ m_\pi \simeq \SI{135}{MeV} $, and the constituent quark mass is $ m_q = \SI{350}{MeV} $ \cite{Tanabashi:2018oca}.
The above choices are close to that used in equilibrium computations of the quark-meson model with functional methods and a physical ultraviolet cutoff $\Lambda_{\textrm{UV}}\approx 1$\,GeV, (see e.g. \cite{Cyrol:2017ewj, Fu:2019hdw}). In these computations it can be shown that the self-interaction is of subleading relevance for the fluctuation dynamics, despite the large size of the classical coupling $\lambda$.
In the present 2PI framework, the quantum interactions are obtained through an NLO resummation, and for large occupancies or large classical coupling the resummed interactions can be shown to be small.
The functional equilibrium studies \cite{Cyrol:2017ewj, Fu:2019hdw}, as well as a comparison of the quark-meson model to QCD (see e.g. \cite{Alkofer:2018guy}), reveal that a one-to-one correspondence between the low-energy limits of the two theories, in quantitative approximations to the full dynamics of the quark-meson model, either requires a far smaller UV cutoff for the latter or a systematic improvement of the model towards QCD-assisted low-energy effective theories \cite{Pawlowski:2014aha, Fu:2019hdw}. In the present work, we restrict ourselves to studying the qualitative properties of the nonequilibrium dynamics as a first step.
When identifying the sigma field expectation value with the pion decay constant, we can reproduce $ f_\pi < m_\pi < m_q < m_\sigma $ at low temperatures. At the lowest temperatures considered in this work, we find $ f_\pi / m_\pi \simeq 0.65$, which is very close to the phenomenologically known value of approximately $ 0.69 $; the meson mass ratio $ m_\sigma / m_\pi \simeq 1.75$, smaller than the vacuum value of around $ 2.9$ but expected to increase when going to lower temperatures; and $ m_q / m_\pi = 1.45$, of the same order of magnitude as the zero-temperature value of $ 2.6 $. Hence we expect our findings to qualitatively reproduce the QCD dynamics. Note, however, that the meson mass ratio $ m_\sigma / m_\pi < 2$ leads to a different ordering of the thresholds for scattering processes, and hence to corresponding differences in the spectral functions.
For the bosonic sector, we use vacuum initial conditions, i.e. $ n_\phi(t=0, |\mathbf{p}|) = 0 $. The initial mass is fixed at $ m^2_\mathrm{init} = 0.0047 $. The fermion initial distribution is chosen to be constant $ n_\psi(t=0, |\mathbf{p}|) = n_0 $. We study simulations with fluctuation dominated initial conditions where the fermion number $ n_0 $ is varied between $ 0 $ and $ 1$ while the initial field value is $ \sigma_0 = 0 $.
Furthermore, the field dominated initial conditions with a nonvanishing field value of $ \sigma(t=0) = \sigma_0 $ between 0 and $ 2.0 $ with vanishing fermion number $ n_0 = 0 $ are investigated. Unless otherwise specified, plots are shown for the case $ n_0 = 0$ and $ \sigma_0 = 0 $. For plots showing spectral and statistical functions in frequency space a cubic spline interpolation of the data points is employed.
\section{Spectral functions}
\label{sec:spectra}
In this section, we explore the nonequilibrium evolution of the quark-meson model from the point of view of its quark and meson spectral functions. As these quantities are derived from the two-point correlation functions, they provide information about the (quasi)particle content of the theory as well as the dispersion relations and decay widths of propagating modes, thereby giving insight into the modification of the system by the presence of a (non)equilibrium medium. Our numerical simulations find clear indications for quasiparticles in both the IR and UV, revealing the presence of additional light propagating fermion modes for temperatures above the pseudocritical temperature.
It is convenient to analyze the spectral functions in the \textit{Wigner representation}, where the Fourier-transformed spectral function can be interpreted as a density of states whose structure provides information about the quasiparticle states of the system.
Therefore, the temporal dependence of the unequal-time two-point correlation functions on the two times $ t $ and $ t' $ is rephrased in terms of Wigner coordinates: the central time $ \tau = (t + t')/2 $ and the relative time $ \Delta t = t-t' $. The dynamics in $ \Delta t $ describes microscopic properties of the system while the evolution in $ \tau $ describes macroscopic properties governed by nonequilibrium characteristics of the system.
In order to study the frequency spectrum of the spectral functions, we then apply a \textit{Wigner transformation} to the propagators.
This corresponds to a finite range Fourier transformation of the propagators with respect to the relative time $\Delta t $, which is constrained by $ \pm 2\tau$ in initial value problems where $ t, t' \geq 0 $. As a result, we obtain the frequency space spectral function
\begin{align}
\rho(\tau, \omega, |\mathbf{p}|)
&= \int _{-2\tau}^{2\tau}\mathrm{d} \Delta t \
e^{i\omega \Delta t}
\rho \left( \tau, \Delta t, |\mathbf{p}|\right )\,,
\label{eq:Wigner_transformation}%
\end{align}
with analogous expressions for all statistical functions.
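Numerically, \eqref{eq:Wigner_transformation} is a finite-range Fourier sum over the sampled relative times; a minimal sketch (array names are illustrative) is:
\begin{verbatim}
import numpy as np

def wigner_transform(rho_rel, dt_rel, omega):
    # Finite-range Fourier transform in the relative time Delta t.
    # rho_rel: rho(tau, Delta t, |p|) sampled on dt_rel in [-2 tau, 2 tau]
    # at fixed tau and |p|; returns rho(tau, omega, |p|) on the grid omega
    phase = np.exp(1j * omega[:, None] * dt_rel[None, :])
    return np.trapz(phase * rho_rel[None, :], dt_rel, axis=1)

# The sum rules below provide a consistency check, e.g.
#   np.trapz(omega * rho_w, omega) / (2 * np.pi)  ~  1j
# for the boson spectral function in the conventions of the text.
\end{verbatim}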
For a real and antisymmetric spectral function (as in the bosonic case and for the fermionic scalar, vector and tensor components) as well as for an imaginary and symmetric spectral function (as for the fermionic vector-zero component), the Wigner transform $ \rho(\tau, \omega, |\mathbf{p}|) $ is imaginary. Due to symmetry, it is sufficient to present the Wigner transformed spectral functions for positive frequencies $ \omega $. Since the frequency space spectral functions are imaginary in our definition, we always plot $ -i \rho $ in the subsequent sections, thereby omitting the $ -i $ in the plot labels to ease notation.
The commutation and anticommutation relations \eqref{eq:commutation_relations} can be rephrased in frequency space,
\begin{align}
\int \dfrac{\diff \omega}{2\pi}\ \omega\, \rho^\phi(\tau,\omega, |\mathbf{p}|) =i\,, &&\int \dfrac{\diff \omega}{2\pi}\ \rho_0(\tau, \omega, |\mathbf{p}|) = i\,,\label{eq:sum_rules}
\end{align}
where they are referred to as \textit{sum rules}.
In our numerical computations, the bosonic and fermionic sum rules are satisfied at the level of
$ \mathcal{O}(10^{-2}) $ and $ \mathcal{O}(10^{-6}) $, respectively.
\subsection{Establishing thermal equilibrium at late times}
\label{sec:thermal_equilibrium}
Before embarking on a detailed study of the dynamical approach to thermal equilibrium, we first ascertain that our simulations of the quark-meson model exhibit thermalization at late times. We do so by observing the dynamical emergence of the fluctuation-dissipation theorem. One needs to keep in mind that, as discussed in \cite{Berges:2002wr}, the idealized thermal equilibrium state cannot be reached in principle due to the time reversibility of the evolution equations. The simulation approaches the state more and more closely over time and at some point becomes indistinguishable from it for a given resolution. Hence, we expect the computation to approach a steady state.
The fluctuation-dissipation theorem is reflected in a particular property of the spectral and statistical functions in thermal equilibrium: they are not independent of each other. In four-dimensional Fourier space, it reads
\begin{align}\nonumber
F^\phi_{\text{eq}}(\omega, \mathbf{p})
&=-i \left(
\frac{1}{2} + n_{\text{BE}} (\omega)
\right )\rho^\phi_{\text{eq}}(\omega,\mathbf{p})\,,\\[1ex]
F^\psi_{\text{eq}}(\omega, \mathbf{p})
&=-i \left(
\frac{1}{2} - n_{\text{FD}} (\omega)
\right )\rho^\psi_{\text{eq}}(\omega,\mathbf{p})\,,
\label{eq:FDT}%
\end{align}
with $ n_{\text{BE}} (\omega) = (e^{\beta\omega} - 1)^{-1}$ being the Bose-Einstein and $ n_{\text{FD}} (\omega) = (e^{\beta\omega} + 1)^{-1}$ the Fermi-Dirac distribution. In \eqref{eq:FDT}, the frequency $ \omega $ is the Fourier conjugate to the relative time $ \Delta t = t-t' $ as the time dependence of $ F_\mathrm{eq} $ and $ \rho _\mathrm{eq}$ can be fully described in terms of $ \Delta t$ due to the time-translation invariance of thermal equilibrium.
\begin{figure*}
\centering
\includegraphics[width=1.\textwidth]{Evolution_FDT}
\caption{
We show the time evolution of the effective particle number defined in \eqref{eq:n_eff} for bosonic and fermionic components (rows) and two different momenta (left and right columns).
At late times (red curve), the effective particle number becomes time and momentum independent and approaches the shape of a Bose-Einstein and a Fermi-Dirac distribution, respectively. The shown data are interpolated using a cubic spline.
Dimensionful quantities are given in units of the pseudocritical temperature $ T_{pc} $ (cf. Figure~\ref{fig:order_parameters}).
}
\label{fig:Evolution_FDT}
\end{figure*}
Out of equilibrium, the independence of $ F $ and $ \rho $ manifests itself in the fact that the ratio $ F/\rho $ in general carries a momentum dependence. The equilibrium relation \eqref{eq:FDT} on the other hand allows us to define the generalized particle distribution function \cite{Berges2004}
\begin{align}
n_i (\tau, \omega, |\mathbf{p}|)&=\pm\left(
i\ \dfrac{F_i(\tau, \omega, |\mathbf{p}| )}{\rho_i(\tau, \omega, |\mathbf{p}|)} - \dfrac{1}{2}
\right)
\,,
\label{eq:n_eff}
\end{align}
with a positive (negative) sign for bosonic (fermionic) components and $ i = \sigma, \pi, V $. This kind of distribution function has been studied in the context of nonthermal fixed points in relativistic as well as nonrelativistic scalar field theories \cite{Orioli:2015dxa}.
Considering \eqref{eq:n_eff}, the approach to thermal equilibrium in a general nonequilibrium time evolution setup is characterized by $ n_i (\tau, \omega, |\mathbf{p}|)\rightarrow n_\mathrm{BE/FD}(\omega) $.
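In terms of Wigner-transformed data, \eqref{eq:n_eff} and the thermal reference distributions are directly computable; a minimal sketch (array and function names are ours) is:
\begin{verbatim}
import numpy as np

def n_eff(F_w, rho_w, bosonic=True):
    # Generalized distribution n = +/- (i F / rho - 1/2)
    sign = 1.0 if bosonic else -1.0
    return sign * (1j * F_w / rho_w - 0.5)

def n_thermal(omega, beta, bosonic=True):
    # Bose-Einstein / Fermi-Dirac reference at inverse temperature beta
    return 1.0 / (np.exp(beta * omega) - (1.0 if bosonic else -1.0))
\end{verbatim}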
In Figure~\ref{fig:Evolution_FDT} we show the time evolution of the particle distribution defined in \eqref{eq:n_eff} for low and high momenta (left and right columns). One can see that at late times (red curves) the same shape is approached for small and large momenta, whereas at early times the distribution functions differ from each other. This loss of momentum dependence is required for the thermalization process and reflects the emergence of the fluctuation-dissipation relation in the equilibrium state. From the late-time distributions shown in Figure~\ref{fig:Evolution_FDT} one can already guess that thermal distribution functions are reached.
We also observe that the evolution of the effective particle number is different for fermions and bosons. The bosonic distribution functions $ n_\sigma $ and $ n_\pi $ show strong oscillations along frequencies at low momenta whereas oscillations at high momenta are weak. Since the particle distributions are computed by taking the ratio of the statistical and spectral functions, $ n_i $ plotted against $ \omega $ essentially describes how similar the peak shapes of $ F $ and $ \rho $ are. In the high-momentum range we find that the quasiparticle peaks of the bosonic statistical and spectral functions resemble one another from early times on, while in the low-momentum range more time is required for the peak shapes to become aligned. In contrast, the quarks show the opposite behavior. Their distributions have much stronger frequency oscillations for large momenta than for small momenta, i.e. it takes longer for the high-momentum modes to approach a thermal distribution.
Putting the pieces together, we can see that a redistribution of the occupancies in fermionic and bosonic degrees of freedom occurs during the nonequilibrium time evolution. While the time scales to converge to thermal distribution functions depend on the particle species and the momentum modes we find that the distribution functions all become stationary for times $ \tau \gtrsim 100$, reflecting the time-translation invariant property of thermal equilibrium.
\begin{figure*}[t]
\centering
\includegraphics[width=1.\textwidth]{Thermalization}
\caption{
Left: The generalized Boltzmann exponents defined in \eqref{eq:thermalization_temperature} shown as a function of frequency $ \omega $ at a given momentum $ |\mathbf{p}| $ for bosonic and fermionic components. For better visibility, only every 39th data point is shown. Using a linear fit one can determine the slope $ \beta $ and hence the temperature $ T $ for each component. The temperature $ T $ indicated in the plot is averaged over all momenta and the three components.\\
Right: The relative deviation from the thermalization temperature $ \Delta = (T_i - T) / T $ shown for all three components as a function of momentum. The results for the bosonic and fermionic sectors agree very well. \\
In both plots dimensionful quantities are given in units of the pseudocritical temperature $ T_{pc} $ (cf. Figure~\ref{fig:order_parameters}).
}
\label{fig:Thermalization}
\end{figure*}
Although Figure~\ref{fig:Evolution_FDT} already indicates the approach of thermal distribution functions, we still need to verify that our final state actually fulfills the fluctuation-dissipation theorem.
For a quantitative analysis, we compute the \textit{generalized Boltzmann exponents}
\begin{align}
A_i(\tau, \omega, |\mathbf{p}|) = \ln \big[
n_i^{-1}(\tau, \omega, |\mathbf{p}|) \pm 1
\big]\,,
\label{eq:thermalization_temperature}
\end{align}
with positive (negative) sign for bosonic (fermionic) components and $ i = \sigma, \pi, V $. In thermal equilibrium, the fluctuation-dissipation theorem \eqref{eq:FDT} requires these exponents to satisfy $ A_i(\tau, \omega, |\mathbf{p}|) = \beta \omega $, implying in particular that they become independent of momentum $ |\mathbf{p}| $ and time $ \tau $, where the latter is fulfilled by our late-time states.
A linear fit of our simulation data for the generalized Boltzmann exponents to $ \beta \omega $ yields the thermalization temperature $ T_\mathbf{p} = \beta^{-1}_\mathbf{p}$, which can in general be $ \tau $ dependent. An example for such a fit is presented on the left side of Figure~\ref{fig:Thermalization}. The plot shows that the Boltzmann exponents of all three components $ i = \sigma, \pi, V $ fall nicely on the same line with slope $ \beta $. We compute the temperature averaged over all momenta to obtain $ T_i $ for each component. The system temperature denoted by $ T $ is taken to be the mean over all three components.
For every simulation, we compute the temperatures at each momentum $ |\mathbf{p}| $ and study the momentum dependence of the obtained temperature $ T_\mathbf{p} $.
As pointed out in \cite{Berges:2004ce}, thermodynamic relations can become valid before real thermal equilibrium is attained, a phenomenon known as \textit{prethermalization}. Thermal equilibrium is characterized by $ T_\mathbf{p} $ being equal to some equilibrium temperature for all modes $ |\mathbf{p} |$.
On the right side of Figure~\ref{fig:Thermalization}, the deviations from the mean thermalization temperature $ T $ are plotted. As can be seen, the deviations are very small. Hence, the Boltzmann exponents at late times $ \tau $ become momentum independent and the late-time states are thermal in the sense that they fulfill the fluctuation-dissipation theorem. The thermalization temperatures for all simulations in this work have been determined at time $ \tau = 130$. For the example shown in Figure~\ref{fig:Thermalization} it was checked that the thermalization temperatures found in the time range between $ \tau = 100 $ and $ \tau = 160 $ are constant at the level of $\mathcal{O}(10^{-3}) $. We have checked for all simulations in this work that the temperature has reached a stationary value at time $ \tau = 130 $.
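The fit itself is elementary; a minimal sketch, with np.polyfit standing in for the fitting routine actually used and real-valued input arrays assumed, reads:
\begin{verbatim}
import numpy as np

def fit_temperature(omega, n, bosonic=True):
    # Generalized Boltzmann exponent A = ln(1/n +/- 1), fitted to beta*omega
    A = np.log(1.0 / n + (1.0 if bosonic else -1.0))
    beta, _ = np.polyfit(omega, A, 1)   # slope beta and intercept
    return 1.0 / beta                   # thermalization temperature T
\end{verbatim}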
Having clarified the successful approach to quantum thermal equilibrium in our system, we are now able to study the differences during the out-of-equilibrium evolution leading to the thermal states in detail.
\subsection{Nonequilibrium time evolution of the spectral and statistical functions}
\label{eq:noneq_evol_2PF}
In this section we study the dynamics of the thermalization process, starting from fluctuation or field dominated initial conditions. We investigate the time evolution of the spectral and statistical functions and consider derived quantities such as particle masses and widths. While the initial conditions strongly influence the nonequilibrium dynamics taking place, the final states are universal and characterized solely by the initial energy density ${\varepsilon_{\rm init}}$, which translates into a unique temperature.
The time evolution leads to the emergence of quasiparticle peaks in the spectral functions of both quark and mesons. The value of the particle mass and its decay width are a consequence of the interactions taking place among the microscopic degrees of freedom. While the initial states correspond to free particles, which would have a spectrum given by a $ \delta $ distribution located at the mass parameters of the classical action, the scattering effects included in the nonequilibrium evolution lead to peaks with finite widths in the spectrum.
In Figure~\ref{fig:Spectra_evolution_summary} we present a representative set of fermionic spectral functions from the vector-zero channel, which describes the quark excitation spectrum \cite{tuprints2209, Kitazawa:2006zi}. The three columns correspond to three different field dominated initial conditions of increasing initial energy density, as sketched by the blue dots in Figure~\ref{fig:Initial_setup}. The top row shows the Wigner space spectral function at the lowest available momentum (IR), the bottom row at the highest momentum (UV). We can identify several characteristic properties of these spectral functions from a simple inspection by eye.
\begin{figure*}[t]
\centering
\includegraphics[width=1.\textwidth]{Spectra_evolution_summary}
\caption{A representative selection of spectral functions from the fermion vector-zero channel in the infrared (top row) and the UV (bottom row) in three different regimes labeled by the temperatures of their final state. Each panel contains four curves indicating different snapshots along the thermalization trajectory. All three simulations employ field dominated initial conditions, i.e. $ \sigma_0 >0 $ and $ n_0 = 0 $. Dimensionful quantities are given in units of the pseudocritical temperature $ T_{pc} $ (cf. Figure~\ref{fig:order_parameters}).}
\label{fig:Spectra_evolution_summary}
\end{figure*}
In the UV, a single quasiparticle structure is present at all times and at all energy densities. With increasing energy density in the initial state, corresponding to an increasing final temperature, the position of the peak and its width increase. This is consistent with the expectation that a fermion in an energetic medium will be imbued with an in-medium mass (to lowest order in perturbation theory it would be proportional to the temperature). Higher energy densities go hand in hand with an increased chance of scattering between the fermion and the other medium constituents, which also leads to a larger in-medium width. In the UV, no qualitative difference exists between the broken, crossover, or symmetric phase behavior.
On the other hand, in the IR a clear distinction between the crossover region and all other energy density regimes is visible. While we also find a single quasiparticle structure at low and high initial energy densities, in the crossover region at early times no well-defined peaks are present at all. Instead, as time passes, two structures emerge. One dominant peak is located where one would expect the usual quasiparticle excitation to reside, another peak sits close to the frequency origin, indicating a significantly lighter additional propagating mode.
In general, we find that also for the other fermionic and bosonic spectral functions the approach of the equilibrium state depends on the initial conditions. In the presence of a nonzero initial field value $ \sigma_0 $, the spectral functions evolve differently than in the case where $ \sigma_0 = 0 $ but the fermion occupation is finite, i.e. when the initial state contains more energy in terms of fermion occupations. As pointed out in Figure~\ref{fig:Spectra_evolution_summary}, the most interesting dynamical features can be seen in the low-momentum area, which we therefore focus on during the following analysis.
\subsubsection{High energy densities}
\label{sec:high_energy_densities}
Here, we study the quark-meson model at high enough initial energy densities such that the late-time evolution thermalizes in the high-temperature phase, where chiral symmetry is restored.
For our analysis we compare two simulations starting from different initial conditions characterized by almost indistinguishable energy densities. One is dominated by the field $ \sigma_0 = 1.36 $ and $ n_0 = 0 $, while the other is dominated by fermion fluctuations $ \sigma_0 = 0 $ and $ n_0 = 0.8 $.
The final states feature similar thermalization temperatures of $ T = 3.15 $ and $ T =3.18$, respectively. However, since the initial states are very different from each other, the evolution toward thermal equilibrium takes significantly different paths.
\begin{figure*}[t]
\centering
\includegraphics[width=1.\textwidth]{Spectra_evolution_highT_pion}
\caption{
Time evolution of the pion spectral and statistical functions shown for two different initial conditions at the smallest available momentum $ |\mathbf{p}| = 0.012 $.
The left column shows a simulation deploying field dominated initial conditions with $ \sigma_0 = 1.36$,
the right column fluctuation dominated initial conditions with $ n_0 = 0.8 $.
Both simulations lead to thermal states at temperatures where chiral symmetry is restored.
Dimensionful quantities are given in units of the pseudocritical temperature $ T_{pc} $ (cf. Figure~\ref{fig:order_parameters}).
}
\label{fig:Spectra_evolution_highT_pion}
\end{figure*}
For such high initial energy densities, the differences in the time evolution are most apparent in the bosonic sector. This can be studied by looking at the bosonic spectral and statistical functions. Numerical results are shown in Figure~\ref{fig:Spectra_evolution_highT_pion}, where only the pion spectral and statistical functions are presented since the behavior of the sigma meson is analogous.
The final states of both simulations (red curve) are characterized by the same peak shapes for both spectral and statistical functions. However, the functions at intermediate times exhibit a completely different behavior.
For field dominated initial conditions (left column in Figure~\ref{fig:Spectra_evolution_highT_pion}), the peak position of the spectral function moves toward smaller frequencies with time, which means that the mass of the quasiparticle state decreases during the time evolution. In addition, the nonzero initial field leads to large amplitudes in the pion statistical function at early times (lower left plot in Figure~\ref{fig:Spectra_evolution_highT_pion}), which corresponds to relatively high occupancies in the bosonic sector compared to the final thermal distribution. These occupancies have to be redistributed to other bosonic momentum modes $ |\mathbf{p}| $ and to the fermionic sector for the system to equilibrate.
This behavior can be readily understood from the microscopic evolution equations of the system. The finite-valued initial field drives the fluctuations in the bosonic sector because it contributes to the bosonic self-energy at initial time $ t=0 $. Since the nonequilibrium time evolution takes into account the full time history since $ t=0 $, these initial fluctuations not only play a role at initial time but also at intermediate times.
Only at late times does the system lose memory of the details of the initial state. Since the macroscopic field couples directly only to the bosons but not to the fermions, the energy provided by the initial field is first turned into bosonic fluctuations before being transferred to fermionic modes. As a consequence, the thermalization of an initial state with nonzero initial field value shows rich dynamics in the bosonic spectra.
In contrast, for fluctuation dominated initial conditions (right column in Figure~\ref{fig:Spectra_evolution_highT_pion}), one observes a continuous increase of the amplitudes of both spectral and statistical functions until the maximum is reached in the thermal state. If the initial energy density is provided via fermionic fluctuations, the thermal final state is found to be realized already at intermediate times.
\begin{figure*}[t]
\centering
\includegraphics[width=1.\textwidth]{Evolution_dispersion_width}
\caption{
Time evolution of the dispersion relation and the momentum dependent width of the pion. The inset shows the time evolution of the pion mass obtained from fits of the dispersion relation to $ Z \sqrt{|\mathbf{p}|^2 + m_\pi^2} $ at various times $ \tau $, where $ Z = 1.07 $ is obtained for all times analyzed. The data are shown for field dominated initial conditions with $ \sigma_0 = 1.24 $ and $ n_0 = 0 $.
Dimensionful quantities are given in units of the pseudocritical temperature $ T_{pc} $ (cf. Figure~\ref{fig:order_parameters}).
}
\label{fig:Evolution_dispersion_width}
\end{figure*}
The spectral functions can be used to deduce the dispersion relation and lifetimes of the corresponding quasiparticle species. Following \cite{aarts2001nonequilibrium} we assume for the moment that the spectral function decays exponentially and can be approximated as $ \rho(t, t', |\mathbf{p}|) = e^{-\gamma_\mathbf{p}|t-t'|} \omega^{-1}_\mathbf{p} \sin [\omega_\mathbf{p}(t-t')]$ with a dispersion $ \omega_\mathbf{p}$ and a damping rate $ \gamma_\mathbf{p}$, which are both allowed to be $ \tau $ dependent. The corresponding Wigner transform is given by $ \rho(\tau,\omega, |\mathbf{p}|) = \rho_{\text{BW}}(\tau, \omega, |\mathbf{p}|) + \delta \rho (\tau, \omega, |\mathbf{p}|)$ where $\rho_{\text{BW}} $ denotes the relativistic Breit-Wigner function
\begin{align}
\hspace{-.1cm}
\rho_{\text{BW}}(\tau, \omega, |\mathbf{p}|)
=
\dfrac{2\omega \Gamma(\tau, |\mathbf{p}|)}{\big[\omega^2 - \omega^2(\tau, |\mathbf{p}| )\big]^2 + \omega^2 \Gamma^2(\tau, |\mathbf{p}| )}
\,,
\label{eq:rel_BW}
\end{align}
which describes a peak with width $\Gamma(\tau, |\mathbf{p}|) = 2\gamma_\mathbf{p}(\tau) $
at position $\omega = \omega(\tau, |\mathbf{p}|) $. The term $\delta \rho \sim \exp(-2 \tau \gamma_\mathbf{p})$ describes boundary effects due to the finite integration range in \eqref{eq:Wigner_transformation}. Since $ \delta \rho $ decreases exponentially with $ \tau \gamma_\mathbf{p}$, this term is negligible for sufficiently large damping rates and/or sufficiently late times \cite{aarts2001nonequilibrium}. Otherwise, the frequency space spectral function suffers from severe noise due to boundary effects. For all times shown in this work, we find that boundary effects are irrelevant.
We observe that the peak shapes of the bosonic spectral functions can be well approximated by the Breit-Wigner function \eqref{eq:rel_BW}. At some given time $ \tau $, performing Breit-Wigner fits of the spectral function at all momenta $ |\mathbf{p} |$ yields the dispersion relation $ \omega(\tau, |\mathbf{p}|) $ and the momentum dependent width $ \Gamma(\tau, |\mathbf{p}|) $.
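Such fits can be performed with standard least-squares routines; a minimal sketch using scipy.optimize.curve_fit, with illustrative data arrays and start values, is:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def rho_bw(w, w_p, Gamma):
    # Relativistic Breit-Wigner peak shape
    return 2.0 * w * Gamma / ((w**2 - w_p**2)**2 + w**2 * Gamma**2)

def dispersion(p, Z, m):
    return Z * np.sqrt(p**2 + m**2)

# At fixed tau and |p|: extract peak position and width, e.g.
#   (w_p, Gamma), _ = curve_fit(rho_bw, omega_grid, rho_data, p0=[1.0, 0.2])
# then fit the resulting w_p(|p|) to Z*sqrt(p^2 + m^2) for the mass:
#   (Z, m), _ = curve_fit(dispersion, p_grid, w_p_of_p, p0=[1.0, 0.5])
\end{verbatim}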
For initial states with high energy densities, such as considered in this section, the spectral and statistical functions exhibit quasiparticle peak structures already at early times (see Figure~\ref{fig:Spectra_evolution_highT_pion}).
Consequently, it is possible to fit a Breit-Wigner function to the spectral functions at any stage such that the time evolution of the dispersion relation $ \omega_i(\tau, |\mathbf{p}|) $ and momentum dependent width $\Gamma_i(\tau, |\mathbf{p}|) $ for $ i = \sigma, \pi $ can be mapped out.
In the left plot of Figure~\ref{fig:Evolution_dispersion_width} we show the dispersion relation of the pion at different times $ \tau $ encoded in the color scheme. A fit of $ \omega(\tau, |\mathbf{p}|)$ to the relativistic dispersion relation $ Z \sqrt{|\mathbf{p}|^2 + m^2} $ at various times $ \tau $ yields the quasiparticle masses $ m(\tau)$, which are shown in the inset. In the following, the stationary late-time value is denoted as $ m$.
We note that the mass corresponds to the dispersion relation in the limit of vanishing momentum, i.e. $ m = \omega(\tau, |\mathbf{p}| \rightarrow 0) $. The right plot of Figure~\ref{fig:Evolution_dispersion_width} displays the momentum dependent width of the pion extracted from the Breit-Wigner fits. We find a plateau in the IR and a maximum in the UV.
In analogy to the dispersion, where the quasiparticle mass describes the zero-momentum limit, we can extract the asymptotic value of the width in the limit of vanishing momentum, $ \Gamma = \Gamma(\tau, |\mathbf{p}| \rightarrow 0) $. Since $ \Gamma $ corresponds to the width of the spectral function that is peaked at the quasiparticle mass, it can be viewed as the width of the quasiparticle.
As the right plot in Figure~\ref{fig:Evolution_dispersion_width} indicates, $ \Gamma$ is increasing with time.
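The quasiparticle mass then follows from a fit of the extracted peak positions to the relativistic dispersion. A self-contained sketch of such a fit, with synthetic peak positions standing in for the actual Breit-Wigner fit output, could read:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def rel_dispersion(p, Z, m):
    return Z * np.sqrt(p**2 + m**2)

# synthetic peak positions standing in for actual fit results
p_grid = np.linspace(0.0, 2.0, 32)
omega_of_p = 0.95 * np.sqrt(p_grid**2 + 0.6**2)

(Z, m), _ = curve_fit(rel_dispersion, p_grid, omega_of_p, p0=[1.0, 0.5])
print(Z, m)   # m plays the role of omega(|p| -> 0)
\end{verbatim}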
\begin{figure*}[t]
\centering
\includegraphics[width=1.\textwidth]{Evolution_mass_width}
\caption{
Time evolution of the pion mass and the pion width in the limit $ |\mathbf{p}| \rightarrow 0 $. Results are shown for field dominated initial conditions with $ \sigma_0 = 1.36 $ and $ n_0 = 0 $ (blue dots) as well as fluctuation dominated initial conditions with $ \sigma_0 = 0 $ and $ n_0 = 0.8 $ (orange triangles).
Dimensionful quantities are given in units of the pseudocritical temperature $ T_{pc} $ (cf. Figure~\ref{fig:order_parameters}).
}
\label{fig:Evolution_mass_width}
\end{figure*}
We can now work out the differences observed in Figure~\ref{fig:Spectra_evolution_highT_pion} in a quantitative fashion. There is an apparent difference in the approach to the late-time values of the mass $ m_\pi$ and the width $ \Gamma_\pi $ when comparing the time evolution starting from the two different initial conditions. The results are shown in Figure~\ref{fig:Evolution_mass_width}, where again only the pion data are shown because the sigma meson behaves analogously.
For field dominated initial conditions the effective pion mass decreases during the time evolution, whereas for fluctuation dominated initial conditions it grows, albeit only slightly. This is in accordance with the previous observation of the shifting peak position for field dominated initial conditions.
It is important to note that the mass of the quasiparticles is not contained in the initial state but is generated dynamically during the time evolution, since $ m_\mathrm{init} $ is much smaller than the particle masses of the thermal state. The quasiparticle masses build up from the fluctuations contained in the self-energies. Since the nonzero initial field value leads to large bosonic self-energy contributions in the beginning of the time evolution, at early times the masses are larger than in the case of vanishing initial field.
The time dependence of the spectral width shown in the right plot of Figure~\ref{fig:Evolution_mass_width} can be understood in terms of the sum rule \eqref{eq:sum_rules} according to which the bosonic spectral functions are normalized.
Due to the additional factor of $ \omega $ in the integrand, which arises from the time derivative on one of the fields in the boson commutation relation, a larger mass automatically implies smaller widths. Consequently, the behavior of mass and width in the time evolution must be converse to each other.
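This can be made plausible numerically: for the Breit-Wigner ansatz \eqref{eq:rel_BW}, the $ \omega $-weighted integral remains close to unity for all parameter pairs, while the peak height scales as $ 2/(m\Gamma) $, so at fixed peak amplitude a larger mass must come with a smaller width. The parameter pairs in the following sketch are purely illustrative:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def rho_bw(omega, m, Gamma):
    return 2.0 * omega * Gamma / ((omega**2 - m**2)**2
                                  + omega**2 * Gamma**2)

for m, Gamma in [(0.5, 0.4), (1.0, 0.2), (2.0, 0.1)]:
    # omega-weighted sum-rule integral (finite bounds; integrand even)
    I, _ = quad(lambda w: w * rho_bw(w, m, Gamma) / (2.0 * np.pi),
                -50.0, 50.0, points=[-m, m], limit=200)
    print(m, Gamma, round(I, 3), 2.0 / (m * Gamma))
\end{verbatim}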
After discussing the dynamics of the meson spectral and statistical functions at high initial energy densities, we now turn to the quark sector.
After decomposing the Dirac structure of fermionic two-point functions and imposing symmetries, we are dealing with four components for the quark spectral and statistical functions, the scalar, vector-zero, vector and tensor components as introduced in \eqref{eq:fermion_components}. Of these four components, the vector-zero component contains information about which states can be occupied \cite{tuprints2209, Kitazawa:2006zi}. Since it is normalized to unity according to the sum rule \eqref{eq:sum_rules}, the vector-zero component quark spectral function can be interpreted as the density of states for the quarks.
We note that in a chiral symmetric theory with vanishing fermion bare mass one finds $ \rho_S = \rho_P = \rho_T^{\mu \nu } = 0$ since only components in \eqref{eq:Lorentz_components} that anticommute with $ \gamma_5 $ are allowed. Here, we consider a setup where chiral symmetry restoration takes place. For initial conditions with high energy densities and the corresponding final states in the high-temperature chiral symmetric regime, the quark dynamics can be studied in terms of the vector-zero and vector components.
\begin{figure*}[t]
\centering
\includegraphics[width=1.\textwidth]{Spectra_evolution_highT_V}
\caption{
Time evolution of the vector component quark spectral and statistical functions shown for the same initial conditions as in Figure~\ref{fig:Spectra_evolution_highT_pion} at momentum $ |\mathbf{p}| = 0.016 $.
Dimensionful quantities are given in units of the pseudocritical temperature $ T_{pc} $ (cf. Figure~\ref{fig:order_parameters}).
}
\label{fig:Spectra_evolution_highT_V}
\end{figure*}
As was shown in Figure~\ref{fig:Spectra_evolution_summary}, for high energy densities there is not much dynamics taking place in the excitation spectrum of the quarks. More insight can be gained by looking at the vector component which is presented in Figure~\ref{fig:Spectra_evolution_highT_V} for the same field or fluctuation dominated initial conditions as discussed before for the bosons.
The interesting case is again the evolution starting from field dominated initial conditions. The corresponding vector spectral function (upper left plot) shows that the peak position moves toward smaller frequencies, just as in the bosonic case. It indicates that the energy of both meson and quark quasiparticles decreases during the time evolution. However, it is important to note that---in contrast to the mesons---the amplitude of the fermion statistical function increases during the time evolution. As discussed before, the nonzero initial field leads to strong fluctuations and hence occupancies in the bosonic sector. It takes time for these fluctuations to be transferred to the fermionic sector, which is why we observe that the fermion occupation grows slowly during the time evolution.
For the fluctuation dominated initial conditions, we again observe that the spectral and statistical functions approach their late-time behavior very quickly. We conclude that the available states and their occupation quickly approach their thermal final state if energy is provided in terms of particles rather than the field in the initial state.
\subsubsection{Intermediate energy densities}
\label{sec:intermediate_energy_densities}
From Figure~\ref{fig:Spectra_evolution_summary} we can see that the most interesting dynamics takes place for systems thermalizing in the crossover region. Thus, we aim to study the evolution of the vector-zero quark spectral function for two simulations thermalizing in this region.
Again we compare two simulations employing field or fluctuation dominated initial conditions, respectively, but in this case we are able to probe initial conditions that lead to the same late-time state.
When comparing the late-time field expectation value $ \overline{\sigma} $, the mass ratio $ m_\sigma / m_\pi $, and the temperature $ T $ of the final state of these two simulations, we find that the respective quantities differ by less than $ \SI{0.5}{\percent} $. Also, the shapes of the spectral and statistical functions in frequency space are the same for both bosonic as well as fermionic components. Quantitatively, we find that $ | \rho_1 - \rho_2 | /\max{(\rho_1)} $ is smaller than $ \mathcal{O}(10^{-2}) $ for all frequencies $ \omega $ and momenta $ |\mathbf{p}| $, where the indices 1 and 2 denote the two simulations compared and $ \max{(\rho)} $ the maximal amplitude of $ \rho $. Larger deviations are observed for the vector-zero component statistical function and for the tensor component spectral and statistical functions, where the amplitudes are of order $ \mathcal{O}(10^{-7}) $ such that numerical inaccuracies come into play. In conclusion, we consider the late-time state of the two simulations to be the same thermal state, universal in the sense that the dependence on the initial conditions is lost. It is characterized solely by a temperature of $ T = 1.04 $, a mass ratio of $ m_\sigma / m_\pi = 1.46 $ and a field expectation value of $ \overline{\sigma} = 0.33 $. As we will see later, this corresponds to a state in the crossover region.
\begin{figure*}[t]
\centering
\includegraphics[width=1.\textwidth]{Spectra_evolution_crossover_V0}
\caption{
Time evolution of the vector-zero component quark spectral and statistical functions shown for two different initial conditions at momentum $ |\mathbf{p}| = 0.012 $. The left column shows field dominated initial conditions with $ \sigma_0 = 0.98 $, the right column fluctuation dominated initial conditions with $ n_0 = 0.11$. Both lead to the same late-time state with $ T = 1.04 $.
Dimensionful quantities are given in units of the pseudocritical temperature $ T_{pc} $ (cf. Figure~\ref{fig:order_parameters}).
}
\label{fig:Spectra_evolution_crossover_V0}
\end{figure*}
The regime of intermediate energy densities distinguishes itself from high and low energy density initial conditions by showing a double-peak structure in the quark spectral functions. Our findings in a nonperturbative real-time setting corroborate previous observations of such double peak structures with perturbative computations or spectral reconstructions reported, e.g., in \cite{Kitazawa:2005mp,Kitazawa:2006zi,Kitazawa:2007ep,Karsch:2009tp,Mueller:2010ah,Qin:2010pc,Qin:2013ufa,Fischer:2017kbq}.
First, consider the vector-zero component describing the excitation spectrum of the quarks. In Figure~\ref{fig:Spectra_evolution_crossover_V0} we show the time evolution of both spectral and statistical functions. As before, for fluctuation dominated initial conditions (right column) the system quickly approaches the shape of the late-time two-point functions.
However, in the case of field dominated initial conditions, the double-peak structure of the spectral function only emerges at later times. At early times, the spectral function reveals a single broad structure.
We further point out that the statistical function $ F_0 $ decays to zero during the time evolution, implying that the fermion occupation is not contained in the vector-zero component but in other components. This agrees well with the effective quasiparticle number that has been employed previously \cite{tuprints2209, Berges:2010zv},
\begin{align}
\hspace{-.1cm}
n_\psi(t, |\mathbf{p}|) = \dfrac{1}{2}
-
\dfrac{|\mathbf{p}| F_V(t, t, |\mathbf{p}|) + M_\psi(t)F_S(t,t,|\mathbf{p}|)}{\sqrt{|\mathbf{p}|^2 + M_\psi^2(t)}}\,,
\end{align}
with effective mass $ M_\psi(t) = m_\psi + h \sigma(t) $. This definition of an effective particle number only provides a good description of the quark content in the system if the occupations in the vector-zero and tensor component are negligible. In our computations, we find that $ F_0 $ and $ F_T $ are of the order $ \mathcal{O}(10^{-7}) $ and hence irrelevant for the quark particle number.
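For completeness, the quoted occupation number is straightforward to evaluate from the equal-time data; the sketch below assumes that the statistical components and the field value are available as arrays from the simulation output.
\begin{verbatim}
import numpy as np

def quark_occupation(p, F_V, F_S, sigma_t, m_psi, h):
    # effective quark mass and quasiparticle energy
    M_psi = m_psi + h * sigma_t
    eps_p = np.sqrt(p**2 + M_psi**2)
    return 0.5 - (p * F_V + M_psi * F_S) / eps_p
\end{verbatim}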
In order to study the particle content, we take into account the vector component which is shown in Figure~\ref{fig:Spectra_evolution_crossover_V}. We can see that the double-peak structure observed in the vector-zero component is also visible in the vector component, in both the spectral and statistical functions. From this, we learn that the additional light degrees of freedom, provided by the low-frequency peak of the quark spectral density, are actually occupied in terms of the vector component quark statistical function. Hence, for states thermalizing in the crossover temperature regime, there is an additional light mode with finite occupation in the quark sector available to participate in the dynamics.
We further observe that for fixed momentum $ |\mathbf{p}| $ the energy of the light mode increases with rising temperature. At sufficiently high temperatures this additional mode reaches energies comparable with the main quasiparticle mode such that the two peaks merge into the single peak persistent in the high-temperature regime.
We conclude this section with a comment on the dynamics found for initial states with low energy densities. In contrast to the cases of intermediate and high energy densities, we find well-defined quasiparticle peaks for both quarks and mesons.
The smaller energy density leads to lower thermalization temperatures and a stronger chiral symmetry breaking, reflected by a mass difference between the $ \sigma $ and $ \pi $ mesons. After discussing the nonequilibrium time evolution of the spectral functions, we now turn to the equilibrium properties.
\begin{figure*}[t]
\centering
\includegraphics[width=1.\textwidth]{Spectra_evolution_crossover_V}
\caption{
Time evolution of the vector component quark spectral and statistical functions shown for two different initial conditions at momentum $ |\mathbf{p}| = 0.012 $. The left column shows field dominated initial conditions with $ \sigma_0 = 0.98 $, the right column fluctuation dominated initial conditions with $ n_0 = 0.11$. Both lead to the same late-time state with $ T = 1.04 $.
Dimensionful quantities are given in units of the pseudocritical temperature $ T_{pc} $
}
\label{fig:Spectra_evolution_crossover_V}
\end{figure*}
\subsection{Late-time thermal limit}
\label{sec:equilibrium_spectra}
In this section we discuss the spectral functions of quarks and mesons in the state of quantum thermal equilibrium according to the definition introduced in Section~\ref{sec:thermal_equilibrium}. The properties of spectral functions at different temperatures reflect the crossover transition of the quark-meson model from the chiral broken to a chiral symmetric phase. We find that the shapes of the final states are universal in the sense that they only depend on the temperature and not on the details of the initial state.
\subsubsection{Mesons}
\label{sec:equilibrium_spectra_mesons}
\begin{figure*}[t]
\centering
\includegraphics[width=1.\textwidth]{equilibrium_boson_width_mass}
\caption{
Left: Temperature dependence of the characteristic decay momentum $ Q $ shown for the $ \sigma $ and $ \pi $ mesons. The inset shows examples for the momentum dependent width at high and low temperatures. $ Q $ corresponds to the momentum at which the width $ \Gamma(\tau, |\mathbf{p}|) $ is maximal. \\
Right:
Temperature dependence of quasiparticle masses.
Restoration of chiral symmetry is reflected in identical masses of the $ \sigma $ and $ \pi $ mesons at high temperatures.
The quark $q$ quasiparticle mass is obtained from the dominant peak of the vector-zero component quark spectral function. We also plot the ``plasmino'' branch $p$ obtained from the quark spectral function.
In both plots gray lines show cubic spline interpolations of the data points. Dimensionful quantities are given in units of the pseudocritical temperature $ T_{pc} $ (cf. Figure~\ref{fig:order_parameters}).
}
\label{fig:equilibrium_boson_width_mass}
\end{figure*}
Information about the different phases of the model can be obtained from the temperature dependence of the late-time thermal spectral functions of the mesons.
We find that the shape of the bosonic spectral functions is described by a Breit-Wigner function for all considered temperatures. Thereby, the width and the position of the Breit-Wigner peak only depend on the temperature but not on the initial conditions chosen.
As discussed in Section~\ref{sec:high_energy_densities}, the momentum dependent width and the dispersion relation are obtained by applying Breit-Wigner fits to the spectral functions.
Although the Breit-Wigner function \eqref{eq:rel_BW} has two parameters, the width $ \Gamma(\tau, |\mathbf{p}|) $ and the peak position given by $ \omega(\tau, |\mathbf{p}|) $, there is only one free parameter since the normalization condition given by the sum rule \eqref{eq:sum_rules} must be satisfied.
In the right plot of Figure~\ref{fig:Evolution_dispersion_width} we already saw that there is a characteristic momentum mode $ |\mathbf{p}| $ at which the momentum dependent width becomes maximal. This corresponds to the momentum at which the decay is strongest and can be considered as the \textit{main decay mode}, in the following denoted by $ Q $.
In the left plot of Figure~\ref{fig:equilibrium_boson_width_mass} we show the main decay mode $ Q $ as a function of temperature for both meson species.
At low temperatures, the strongest decays are found in the IR, whereas at high temperatures the strongest decays occur in the UV. There is an abrupt change at some critical temperature, above which $ Q>0 $, meaning that the momentum dependent width has a maximum at a nonzero momentum, as shown by the upper line in the inset. Comparing the momentum dependent width at low $ T $ and high $ T $, we can see that the transition from the chiral broken to the chiral symmetric phase is characterized by new decay modes in the UV. Thereby, the main decay mode is suddenly shifted from the IR to the UV.
Another prominent signature for the crossover transition is provided by the quasiparticle masses of the $ \sigma $ and $ \pi $ mesons.
The two meson species are distinguishable in the chiral broken phase, where they have different masses, while they become identical in the chiral symmetric phase.
When plotting the meson masses as a function of temperature, as shown in the right plot of Figure~\ref{fig:equilibrium_boson_width_mass}, we can nicely visualize the restoration of chiral symmetry, manifest in the quasiparticle masses of $ \sigma $ and $ \pi $ becoming identical (pink and cyan data points).
We observe a softening of the masses at intermediate temperatures, i.e. the quasiparticle masses are minimal in the temperature region where the crossover phase transition occurs.
Decreasing masses indicate growing correlation lengths.
In the limit of a second order phase transition, which is characterized by diverging correlation lengths, the masses would vanish at the transition point.
In the high-temperature range, masses grow with rising temperatures.
This reflects that the quasiparticle masses can be considered as \textit{thermal masses} in the sense that they contain self-energy contributions and are generated by quantum fluctuations, which increase with temperature.
We further note that one could also study the temperature dependence of the width $ \Gamma = \Gamma(\tau, |\mathbf{p}|\rightarrow 0) $ instead of $ m = \omega(\tau, |\mathbf{p}| \rightarrow 0) $. However, the information is equivalent due to the normalization of the spectral functions, as pointed out above. Consequently, the behavior of $ \Gamma $ is converse to the behavior of $ m $ and not presented here explicitly.
The width $ \Gamma $ is small at low temperatures, strongly grows toward intermediate temperatures where it reaches a maximum value in the crossover temperature regime, and then decays slowly when going to higher temperatures.
\subsubsection{Quarks}
\begin{figure*}[t]
\centering
\includegraphics[width=1.\textwidth]{dispersion_V0}
\caption{
The dispersion relation of the vector-zero quark spectral function shown for three different temperatures.
At low temperature a fit to the relativistic dispersion relation $ Z\sqrt{|\mathbf{p}|^2 + m_q^2} $ is shown by the black dashed line. For higher temperatures the behavior at small and large momenta differs as the additional low-frequency peak and the main peak merge into one peak.
We perform separate fits at low and high momenta, shown by the dashed and dotted black lines.
Dimensionful quantities are given in units of the pseudocritical temperature $ T_{pc} $ (cf. Figure~\ref{fig:order_parameters}).}
\label{fig:dispersion_V0}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[width=1.\textwidth]{Spectral_function_V0_3D_scaled_pw}
\caption{
The vector-zero quark spectral function as a function of frequency $ \omega $ shown for a range of spatial momenta $ |\mathbf{p}| $. The three plots correspond to the same three temperatures as in Figure~\ref{fig:dispersion_V0}.
The purple line indicates the peak position of the spectral function in the $|\mathbf{p}|$-$ \omega $ plane and is therefore equivalent to the dispersion relation shown in Figure~\ref{fig:dispersion_V0}.
The spectral function reveals a narrow quasiparticle peak at low temperatures. As the temperature is increased the light mode interferes with the low-momentum spectral function, leading to a broad peak at small momenta. At high momenta, the quasiparticle peak remains narrow.
Dimensionful quantities are given in units of the pseudocritical temperature $ T_{pc} $ (cf. Figure~\ref{fig:order_parameters}). }
\label{fig:Spectral_function_V0_3D}
\end{figure*}
Let us now consider the thermal spectral functions for the quark sector. Several aspects of the different components invite discussion. We begin with a recap of the findings shown in the vector-zero component of the quark spectral function. As presented in Figure~\ref{fig:Spectra_evolution_summary} the spectral density has different shapes at low, intermediate and high temperatures. In particular, the intermediate temperature range of the crossover transition is characterized by a double-peak structure.
The temperature dependence of the fermionic quasiparticle masses is depicted in Figure~\ref{fig:equilibrium_boson_width_mass}. The mass of the low-frequency mode (plasmino branch, denoted by $p$) grows continuously with rising $ T $ until it merges with the main peak (denoted by $q$), forming the wide quasiparticle peak found for initial states with large energy densities. For related studies with perturbative computations or spectral reconstructions see, e.g.,~\cite{Kitazawa:2005mp,Kitazawa:2006zi,Kitazawa:2007ep,Karsch:2009tp,Mueller:2010ah,Qin:2010pc,Qin:2013ufa,Fischer:2017kbq}. Note also that this double-peak structure is only visible in the small momentum regime. It can be studied by considering the dispersion relation obtained from the vector-zero quark spectral function.
For temperatures below some critical temperature in the crossover regime, the vector-zero spectral function takes the shape of a nonrelativistic Breit-Wigner function, also known as the Lorentzian function,
\begin{align}
\rho_{\text{L}}(\tau, \omega, |\mathbf{p}|)
=
\dfrac{A\ \Gamma(\tau, |\mathbf{p}|)}{\big[\omega - \omega(\tau, |\mathbf{p}| )\big]^2 + \Gamma^2(\tau, |\mathbf{p}| )}
\,,
\label{eq:Lorentz}
\end{align}
where $ A $ is a normalization constant, $ \Gamma(\tau, |\mathbf{p}|) $ the width and $\omega(\tau, |\mathbf{p}| ) $ the dispersion. When the temperature is increased, the vector-zero quark spectral function ceases to be described by \eqref{eq:Lorentz} as the low-frequency mode arises and grows in amplitude.
Due to the appearance of the additional peak, it is not possible to perform a Lorentzian fit at all temperatures.
As a consequence, we choose to compute the dispersion relation of the quarks by determining the peak position of the main peak of $ \rho_0 $. The obtained dispersion relation is shown for three temperatures in Figure~\ref{fig:dispersion_V0}.
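In practice, the peak position can be located on the frequency grid and refined by a quadratic interpolation through the neighboring grid points; the array shapes in the following sketch are assumptions about the stored spectral data, not the actual analysis code.
\begin{verbatim}
import numpy as np

def peak_dispersion(omega_grid, rho0):
    # rho0 is assumed to have shape (n_momenta, n_frequencies)
    dw = omega_grid[1] - omega_grid[0]
    disp = np.empty(rho0.shape[0])
    for i, rho in enumerate(rho0):
        k = int(np.argmax(rho))
        if 0 < k < len(omega_grid) - 1:
            y0, y1, y2 = rho[k - 1], rho[k], rho[k + 1]
            # vertex of the parabola through the three points
            disp[i] = (omega_grid[k]
                       + 0.5 * dw * (y0 - y2) / (y0 - 2.0 * y1 + y2))
        else:
            disp[i] = omega_grid[k]
    return disp
\end{verbatim}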
For low temperatures, where no additional peak is present, the quark dispersion is well described by a relativistic dispersion relation; see left plot of Figure~\ref{fig:dispersion_V0}.
When going to intermediate temperatures, the additional light mode leads to a double-peak structure. As long as the two peaks are distinguishable, one can determine the dispersion relation of the main peak, which yields the same shape as in the low-temperature regime.
However, when the main peak and the side peak merge into a single peak, the dispersion relation obtained from the overlap of the two peaks takes the form shown in the middle plot of Figure~\ref{fig:dispersion_V0}. There is a clearly visible dip in the dispersion, showing that for small momenta the peak position is determined by the light mode, while for large momenta the peak position is determined by the main peak.
We can fit the low-momentum and high-momentum areas separately to a relativistic dispersion relation, as shown by the dashed and dotted lines in Figure~\ref{fig:dispersion_V0}.
When considering higher temperatures, the position of the dip moves toward larger frequencies and is not visible by eye anymore. However, we find that the dispersion relation cannot be described by the relativistic dispersion relation $ Z \sqrt{|\mathbf{p}|^2 + m^2} $ over the whole momentum range but still distinguishes between high-momentum and low-momentum regimes.
We conclude that the single peak of $ \rho_0 $ at large temperatures is still the result of an overlap of a small low-frequency peak with the main peak.
More insight is gained by considering the momentum dependence of the corresponding spectral functions, which is shown in Figure~\ref{fig:Spectral_function_V0_3D}. The spectral function $ \rho_0(\tau, \omega, |\mathbf{p}|) $ is shown at some late time $ \tau $ where the system has approached thermal equilibrium. The peak position of the spectral functions corresponds to the dispersion relations shown in Figure~\ref{fig:dispersion_V0}. At low temperatures, we find a single narrow quasiparticle peak. For higher temperatures, however, an additional light mode interferes with the main peak. At intermediate temperatures, where a softening of the mass occurs, the light mode and the main peak have comparable frequencies in the infrared. The superposition of the main peak and the light mode leads to a broad peak at small momenta, whereas the peak remains narrow at high momenta.
As temperature increases, the light mode is only visible at higher momenta. An example is shown by the right plot in Figure~\ref{fig:Spectral_function_V0_3D}, where one can see a small enhancement of the spectral function at low frequencies for intermediate momenta.
This observation indicates that the quark spectral function harbors additional degrees of freedom at high temperatures, as compared to the low-temperature regime.
From the dispersion relation of the vector-zero quark spectral function, we determine the constituent quark mass by taking the asymptotic value at vanishing momentum, i.e. $ m_q = \omega_0(\tau,|\mathbf{p}| \rightarrow0 ) $.
The constituent quark mass behaves analogously to the bosonic masses, i.e., a softening of the mass occurs in the crossover temperature range; see the violet data points in the right plot of Figure~\ref{fig:equilibrium_boson_width_mass}.
At low temperatures, the constituent quark mass lies between the $ \sigma $ and $ \pi $ masses, which is in qualitative agreement with the particle masses known at $ T=0 $.
For temperatures below $ T \simeq 2 $ we find that the pion is the lightest particle in the theory. This supports chiral perturbation theory as an effective theory for QCD in which only pion degrees of freedom are considered.
In contrast, at high temperatures the constituent quark mass is smaller than the meson masses. As light modes are easier to excite, they dominate the dynamics of the system. Hence, our observation is consistent with the picture that the chiral symmetric phase is dominated by quark degrees of freedom, whereas the chiral broken phase is described by hadronic degrees of freedom, in particular by pions.
Finally, we shortly discuss the scalar component of the quark spectral function.
In a chiral symmetric theory with vanishing fermion bare mass, the scalar component of the quark spectral function vanishes, i.e. $ \rho_S = 0 $. Although chiral symmetry is broken explicitly here, we expect the system to restore chiral symmetry at high temperatures, implying that the limit of a vanishing scalar component quark spectral function is approached.
In Figure~\ref{fig:Spectra_thermal_S} we present the numerical results for a range of temperatures. The clear quasiparticle peak existing at low temperatures widens and flattens with rising temperature. The amplitude of the scalar component finally decays to zero, visualizing the predicted restoration of chiral symmetry in the course of the crossover transition.
We further note that the peak position of the scalar component spectral function qualitatively shows the same behavior as the vector-zero component. The peak moves toward small frequencies at intermediate temperatures, corresponding to the softening of a mass, and is shifted toward higher frequencies at low and high temperatures.
\begin{figure*}[t]
\centering
\includegraphics[width=1.\textwidth]{Spectra_thermal_S}
\caption{
The thermalized scalar component of the quark spectral function as a function of frequency shown for different temperatures. Dimensionful quantities are given in units of the pseudocritical temperature $ T_{pc} $ (cf. Figure~\ref{fig:order_parameters}).
}
\label{fig:Spectra_thermal_S}
\end{figure*}
\section{The macroscopic field}
\label{sec:field}
In this section, we study the time evolution of the expectation value of the macroscopic field $\braket{ \sigma(t)} $ for the set of different initial conditions also deployed in the previous section. In addition, we study the model for different fermion bare masses in order to analyze the effects of spontaneous symmetry breaking in the model.
\subsection{Nonequilibrium time evolution of the field}
\begin{figure*}[t]
\centering
\includegraphics[width=1.\textwidth]{field_evolution_IC}
\caption{
Left: The time evolution of the field shown for field dominated initial conditions with different initial field values as indicated in the legend.
Right: The time evolution of the field shown for field dominated initial conditions with $ \sigma_0 = 0.98 $ (blue) and fluctuation dominated initial conditions with $ n_0 = 0.11 $ (orange). The same late-time field value $ \bar{\sigma} = 0.33 $ is approached for both initial states.
The gray line in both plots serves as a guide to the eye for $ \braket{\sigma(t)}= 0 $.
Dimensionful quantities are given in units of the pseudocritical temperature $ T_{pc} $ (cf. Figure~\ref{fig:order_parameters}).
}
\label{fig:field_evolution_IC}
\end{figure*}
The classical potential of the sigma field is given by
\begin{align}
V(\sigma) = \dfrac{1}{2} m^2 \sigma^2 + \dfrac{\lambda}{4!N}\sigma^4\,,
\label{eq:classical_potential}
\end{align}
where the parameter choice of $ m^2 < 0 $ allows for spontaneous symmetry breaking. Thus, the potential has the shape of a double well with minima located at $ \sigma= \pm \sqrt{- 3!N \,m^2 / \lambda } $. For the parameters employed in this work the minimum is located at $ \sigma \approx 0.04 $.
The time evolution of a classical field in this potential is described by the classical equation of motion
\begin{align}
\left[
\partial_t^2 + m^2 + \dfrac{\lambda}{6N} \sigma^2(t) \right] \sigma(t)
= 0\,,
\label{eq:classical_eom_field}
\end{align}
where spatial homogeneity and isotropy are assumed. If the initial field value, or the initial field derivative, is nonzero, the field rolls down a potential hill and oscillates until it equilibrates at the minimum of the potential.
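For orientation, the classical evolution is easily integrated numerically; the parameter values below are illustrative choices for this sketch and not those of the simulations.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

m2, lam, N = -0.1, 10.0, 4          # m^2 < 0: double-well potential
sigma0 = 1.0                        # initial displacement

def eom(t, y):
    sigma, dsigma = y
    return [dsigma, -(m2 + lam / (6.0 * N) * sigma**2) * sigma]

sol = solve_ivp(eom, (0.0, 100.0), [sigma0, 0.0], max_step=0.05)
# Energy is conserved here, so the field oscillates around the minimum
# at sqrt(-6*N*m2/lam) indefinitely; damping toward a stationary value
# requires the quantum corrections discussed below.
\end{verbatim}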
Here, we go beyond the classical theory and compute the nonequilibrium time evolution including additional quantum fluctuations. As discussed above, we employ an approximation that includes quantum corrections at NLO in $ 1/N $ and $ g $. The quantum corrections lead to an effective potential and additional terms in the field equation \eqref{eq:classical_eom_field}.
The full evolution equation at the given approximation can be found in Appendix~\ref{app:eom}.
Depending on the initial conditions the time evolution of the field shows different properties.
Let us first consider field dominated initial conditions, where the initial field is set to a finite value $ \sigma_0 $. The time evolution for the expectation value of the field $ \braket{\sigma(t)} $ is shown for different $ \sigma_0 $ in the left plot of Figure~\ref{fig:field_evolution_IC}. One can see that the field oscillates and eventually reaches a stationary value.
In contrast to the classical theory, where the field always reaches the same equilibrium value given by the position of the potential minimum, the field reaches different late-time values.
The reason is that the field itself generates quantum fluctuations as it rolls down a potential hill. These dynamically emerging fluctuations again influence the effective potential in which the nonequilibrium time evolution takes place. As the initial field value affects the amount of quantum fluctuations in the system and hence the shape of the effective potential, different values of $ \sigma_0 $ lead to different late-time values for $ \braket{\sigma(t)} $. Before we come to a more detailed discussion of the plots in Figure~\ref{fig:field_evolution_IC}, we provide some intuition for the influence of fluctuations on the effective potential.
Quantum fluctuations can be represented as loop corrections of the effective action. The effective potential is obtained when evaluating this effective action at a constant field.
For a nonvanishing fermion bare mass, i.e. $ m_\psi \neq 0 $, the chiral symmetry breaking tilts the effective potential toward negative values. Thereby, larger $ m_\psi $ cause stronger tilts. On the other hand, bosonic fluctuations provide positive contributions, pushing the potential toward a symmetric shape. Together, this leads to a tilted Mexican hat potential with a minimum at some finite field expectation value. The position of the minimum of the effective potential can easily be much larger than that of the minimum of the classical potential.
The influence of these quantum corrections to the effective potential can be visualized by looking at the energy density of the system, which we compute from the energy-momentum tensor. We distinguish classical, bosonic and fermionic contributions to the energy density, with the relevant expressions presented in Appendix~\ref{sec:EMT}. Quantum fluctuations are taken into account for the fermionic and bosonic parts of the energy density. Hence, the energy density reflects the amount of fluctuations in the system.
In order to study the influence of the initial field, we consider the energy density computed at initial time $ \varepsilon_\mathrm{init} $. In Figure~\ref{fig:energy_density} we show the contributions from the field, the bosons and the fermions separately. The blue and pink lines show how the field and the bosonic energy densities exhibit a positive curvature. In contrast, the fermionic contribution shown in violet leads to a tilt toward negative curvature, which is a consequence of the explicit chiral symmetry breaking. Summing the three parts together, one obtains the total energy density that has a minimum at a nonzero field value, as represented by the gray curve.
It is important to note that the energy density $ \varepsilon_\mathrm{init} $ computed at time $ t=0 $ does not include the quantum fluctuations that are generated dynamically by the field. As the field generates further fluctuations, the effective potential is pushed toward a more symmetric form with its minimum moving toward smaller field expectation values. In order to see this, we also look at the energy density computed at late times, where the system is thermalized. The result is shown by the black line in Figure~\ref{fig:energy_density}. It can be seen that the energy density indeed becomes steeper and the minimum moves toward smaller field values. Thus, the energy density provides a useful quantity in order to study the impact of quantum fluctuations on the effective potential, although we emphasize that the energy density and the effective potential are two different quantities.
\begin{figure*}[t]
\centering
\includegraphics[width=.5\textwidth]{energy_density}
\caption{
The energy density at initial time $ \varepsilon_\mathrm{init} = \varepsilon (t=0) $ and at late times $ \varepsilon_\mathrm{th} = \overline{\varepsilon} $ as a function of the initial field value.
We present the classical, bosonic and fermionic contributions to the initial energy density separately. Together, they form a bounded shape with minimum at a nonzero initial field value (gray curve).
At late times, the energy density reaches the constant shape shown by $ \varepsilon_\mathrm{th} $ in black. The minimum of the energy density at late times corresponds to the maximal field values found.
The initial field is given in units of the pseudocritical temperature $ T_{pc} $ (cf. Figure~\ref{fig:order_parameters}).
}
\label{fig:energy_density}
\end{figure*}
Having this qualitative picture of the effective quantum potential in mind, we can understand the behavior of the three curves shown in the left plot of Figure~\ref{fig:field_evolution_IC}.
If the initial field sits close to the minimum of the effective potential, it barely oscillates and hence almost no additional fluctuations are created dynamically. Accordingly, the shape of the potential does not change with the time evolution such that the position of the minimum stays the same. This is shown by the black line.
In contrast, the field can be placed on a point away from the potential minimum. As it starts moving toward the potential minimum, the field dynamically generates fluctuations. These fluctuations change the shape of the potential, thereby altering the position of the minimum. The further away the field is from the potential minimum in the beginning, the more fluctuations are generated and the stronger the potential deforms. As we increase the distance of the initial field from the potential minimum at time $ t=0 $, the minimum of the potential at late times moves toward zero. Examples of this behavior are depicted by the green and red curves in the left plot of Figure~\ref{fig:field_evolution_IC}.
As discussed in the previous section, energy cannot only be provided in terms of a nonzero initial field value (and the fluctuations this field generates), but also in terms of occupancies.
Hence, the same late-time field value can be approached for different initial conditions.
In the right plot of Figure~\ref{fig:field_evolution_IC} the time evolution of $ \braket{\sigma(t)} $ is shown for the two simulations discussed in Section~\ref{sec:intermediate_energy_densities}. The blue line displays the time evolution of the field starting from field dominated initial conditions, while the orange line shows the time evolution starting from fluctuation dominated initial conditions. For both initial conditions the quantum potential has the same minimum, characterized by a late-time stationary field value of $\overline{\sigma} = 0.33 $.
Although we commonly say that initial conditions with the same energy density lead to the same thermal state, there is a caveat.
Two initial states thermalizing to the same late-time state usually do not have the same energy density at time $ t=0 $, because the energy density computed at initial time does not include the dynamically generated fluctuations. What one means is that different initial conditions provide the same amount of fluctuations to the system; the way these fluctuations are provided depends on the initial state, and in part they are generated dynamically.
However, for the quantum thermal equilibrium state that is approached at late times only the amount of fluctuations introduced to the system is relevant.
\subsection{Thermal equilibrium}
\label{sec:equilibrium}
\subsubsection{Field expectation value}
After discussing the time evolution of the field expectation value, we now turn to its late-time properties.
We denote the stationary value of the field at late times by $ \overline{\sigma} $. As discussed in Section~\ref{sec:thermal_equilibrium}, at these times the fluctuation-dissipation theorem is satisfied and the system state is considered to be thermal. Thus, we consider $ \overline{\sigma} $ to be the thermal field expectation value.
The late-time field values $ \overline{\sigma} $ are determined by the average of field values over a time range $ [t^*, t^* + \Delta t] $ with $ t^* $ being a time at which the field is sufficiently stationary. For the results shown in this work, we use $ t^* = 130 $ and $ \Delta t = 130 $, such that the standard deviation of the mean is $ \mathcal{O}(10^{-4}) $ to $ \mathcal{O}(10^{-11})$ depending on the initial conditions used.
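The averaging itself is elementary; assuming arrays \texttt{t} and \texttt{sigma} holding the simulation times and field values, it amounts to:
\begin{verbatim}
import numpy as np

def thermal_field_value(t, sigma, t_star=130.0, dt=130.0):
    window = (t >= t_star) & (t <= t_star + dt)
    mean = sigma[window].mean()
    err = sigma[window].std(ddof=1) / np.sqrt(window.sum())
    return mean, err   # late-time value, standard deviation of mean
\end{verbatim}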
\begin{figure*}[t]
\centering
\includegraphics[width=1.0\textwidth]{field_value_IC}
\caption{
The value of the thermalized one-point function for different initial conditions. On the left, the thermal field value $ \overline{\sigma} $ is shown for initial conditions with different field values $ \sigma_0 $. The gray dashed line indicates $ \overline{\sigma} = \sigma_0 $.
On the right, the thermal field value is shown for different initial fermion occupation numbers $ n_0 $.
In both plots, the black star indicates the value obtained for initial conditions with $ n_0 = 0 $ and $ \sigma_0 = 0 $.
In both plots gray lines show cubic spline interpolations of the data points. Dimensionful quantities are given in units of the pseudocritical temperature $ T_{pc} $ (cf. Figure~\ref{fig:order_parameters}).
}
\label{fig:field_value_IC}
\end{figure*}
First, let us look at the time evolution of the macroscopic field for different initial field values $ \sigma_0 $. Naively, one might expect that larger field values automatically imply increasing energy densities in the initial state, hence a higher thermalization temperature and smaller field value. However, as the discussion above already pointed out, this is not the case. In the left plot of Figure~\ref{fig:field_value_IC} we show how the late-time field value $ \overline{\sigma}$ depends on the initial field $ \sigma_0 $. With increasing $ \sigma_0 $, the thermal field $ \overline{\sigma} $ first grows and then decays to zero.
The maximal value for $ \overline{\sigma} $ is expected when the least amount of fluctuations is generated dynamically, as these fluctuations would push the minimum of the potential and thus $ \overline{\sigma} $ toward zero. We indeed find the largest late-time field values for $ \sigma_0 \approx \overline{\sigma}$, which is indicated by the gray dashed line in the plot.
Second, we consider fluctuation dominated initial conditions where the field value is set to $ \sigma_0 = 0 $ while the initial fermion occupation is taken to be constant, i.e. $ n_\psi(t=0, |\mathbf{p}|) = n_0 $, and varied between zero and one.
In the right plot of Figure~\ref{fig:field_value_IC} we can see that increasing the fermion occupation number $ n_0 $ goes along with smaller thermal field values $ \overline{\sigma} $. Thus, for larger $ n_0 $ higher temperatures are reached, emphasizing again that larger fermion occupation numbers lead to a rise of the fluctuations that make the effective potential more symmetric.
\begin{figure*}[t]
\centering
\includegraphics[width=.5\textwidth]{order_parameters}
\caption{
Order parameters of the quark-meson model as a function of temperature. In the upper plot, the order parameter is given by the macroscopic field $ \overline{\sigma} $ which is the thermalized value of the one-point function.
In the lower plot, the order parameter is given by the ratio of the $ \sigma $ meson and pion masses. The masses are derived from the two-point functions of the corresponding bosonic fields.
The gray lines show cubic spline fits to the data points. The inflection points are indicated by the black vertical lines.
Dimensionful quantities are given in units of the pseudocritical temperature $ T_{pc} $ defined as the inflection point $ T_\mathrm{inflection} $ of the order parameter $ \overline{\sigma} $ shown in the upper plot.
}
\label{fig:order_parameters}
\end{figure*}
\subsubsection{Crossover phase transition}
Our results regarding the thermal state of the system can be summarized in an analysis of the crossover transition between the chiral broken and the chiral symmetric phase of the quark-meson model. When a system becomes thermal, the thermodynamic concept of a phase diagram can be applied. The conjectured phase diagram of the quark-meson model contains important features of the QCD phase diagram. It exhibits a chiral symmetric phase with vanishing field expectation value at high temperature $ T $, as well as a chiral broken phase with a nonzero field expectation value at low $ T $.
In order to study the phase transition and the transition temperature, we employ two different order parameters, one deduced from the one-point function and one from the two-point functions.
The first one is the field expectation value of the thermalized field $ \overline{\sigma} $. It is nonzero in the chirally broken phase and zero in the chirally symmetric phase. Often this field value is identified as the pion decay constant $ f_\pi $.
The second one is the mass ratio $ m_\sigma / m_\pi$, where $m_ \sigma $ and $m_\pi $ are the masses of the $ \sigma $ meson and the pion, respectively. The masses are determined from the bosonic spectral functions as discussed in Section~\ref{sec:equilibrium_spectra_mesons}. In the chiral limit, the mass ratio is expected to go to unity.
Starting from the different initial states analyzed, we find that the system thermalizes at different temperatures. Thereby, the dependence of an order parameter on the temperature provides insight into the nature of the phase transition.
In Figure~\ref{fig:order_parameters} we show our numerical results for the temperature dependence of the two order parameters, $ \overline{\sigma} $ in the upper and $ m_\sigma / m_\pi$ in the lower plot.
Every point in the diagram corresponds to a simulation with a different initial state. As indicated in the legend, we are considering initial states of various fermion occupations, described by $ n_0 $, and initial field values, described by $ \sigma_0 $. It is reassuring to see that the order parameters obtained from field or fluctuation dominated initial conditions align themselves on a single curve, which is characteristic of a smooth crossover transition. This is yet another way of seeing that the thermal states are independent of the details of the initial conditions.
As chiral symmetry is restored with rising temperature, the field value decays to zero while the mass ratio goes down to one.
The field expectation value $ \overline{\sigma} $ is often considered as a first approximation for the pion decay constant $ f_\pi $. As can be seen, in the limit $ T\rightarrow 0 $ some value $ \overline{\sigma} \simeq \mathcal{O}(1)$ is approached. At the lowest temperature considered we find $ \overline{\sigma} / m_\pi \simeq 0.65 $, matching the phenomenological value $ f_\pi / m_\pi \simeq 0.69 $ \cite{Tanabashi:2018oca}.
Further, we can see from the lower plot in Figure~\ref{fig:order_parameters} that the mass ratio is only $ m_\sigma/m_\pi \simeq 1.8 $ at the lowest temperatures available, which is smaller than the expectation from the known values of the masses. However, the mass ratio is expected to further increase with decreasing temperature.
We perform a cubic spline fit to the data points and identify the inflection point of the field $ \overline{\sigma} $ as the pseudocritical temperature of the crossover $ T_{pc} $. We indicate the inflection point of both the field and the mass ratio in the plots of Figure~\ref{fig:order_parameters}. It can be seen that the temperatures deduced from the two different order parameters are comparable with each other. We find that the pseudocritical temperature is of the order of the pion mass. This is in agreement with the expectation of the QCD phase transition being at around $ \SI{150}{MeV}$ for vanishing baryon density.
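The extraction of $ T_{pc} $ can be sketched as follows, with illustrative data values standing in for the measured order parameter:
\begin{verbatim}
import numpy as np
from scipy.interpolate import CubicSpline

T = np.array([0.6, 0.8, 1.0, 1.2, 1.4, 1.6])
sigma_bar = np.array([0.62, 0.55, 0.38, 0.18, 0.08, 0.04])

cs = CubicSpline(T, sigma_bar)
T_fine = np.linspace(T[0], T[-1], 2000)
curv = cs(T_fine, 2)                    # second derivative of spline
flip = np.where(np.sign(curv[:-1]) != np.sign(curv[1:]))[0]
T_pc = T_fine[flip[0]]                  # first sign change: inflection
print(T_pc)
\end{verbatim}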
\subsection{Spontaneous symmetry breaking}
\label{sec:SSB}
\begin{figure*}[t]
\centering
\includegraphics[width=1.\textwidth]{field_SSB}
\caption{
Left: The time evolution of the field with initial value $ \sigma_0 = 0.62 $ shown for three different bare fermion masses $ m_\psi $. The field reaches the stationary value $ \overline{\sigma} $ at late times.
Right: The asymptotic field value $ \overline{\sigma} $ shown for different bare fermion masses $ m_\psi $. The green, red and black data points correspond to the simulations shown in the left plot. The field value decreases with the fermion bare mass and approaches an asymptotic value for $ m_\psi \rightarrow 0 $.
Dimensionful quantities are given in units of the pseudocritical temperature $ T_{pc} $ (cf. Figure~\ref{fig:order_parameters}).
}
\label{fig:field_SSB}
\end{figure*}
We have seen that the explicit chiral symmetry breaking in the system leads to nonzero field expectation values. Here, we analyze the limit of vanishing explicit symmetry breaking, i.e. $ m_\psi \rightarrow 0 $, with spontaneous symmetry breaking still present.
If the fermion bare mass vanishes, i.e., $ m_\psi = 0 $, the action of the quark-meson model \eqref{eq:action} is invariant under chiral $ SU_L(2) \times SU_R (2) \sim O(4)$ transformations, i.e. it is chirally symmetric.
Still, a nonzero field expectation value can break this symmetry spontaneously.
For nonzero fermion bare masses, chiral symmetry is explicitly broken and the minimum of the potential is located at some nonzero field value. If the field expectation value stays nonzero for $ m_\psi \rightarrow 0$, we expect to observe spontaneous symmetry breaking.
We compare simulations with different fermion bare masses $ m_\psi $ while all other parameters of the theory are kept fixed.
The system is studied for initial conditions with $ \sigma_0 = 0.62 $ and vanishing fermion and boson occupations, i.e. $ n_0 = 0 $.
In the left plot of Figure~\ref{fig:field_SSB}, we show the time evolution of the field expectation value $ \braket{\sigma(t)} $, for three examples with different fermion bare masses. As before, the field oscillates before it equilibrates to the thermal late-time value $ \overline{\sigma} $.
Since the fermion bare mass $ m_\psi $ governs the strength of the chiral symmetry breaking and thus the deformation of the potential, increasing fermion bare masses yield larger values for $ \overline{\sigma} $. At the same time $ m_\psi $ determines the fermion backreaction on the field, i.e. how strongly the field is pushed away from its current value. The field only reaches a stationary value if the backreaction from the fermions on the field and the bosonic interactions with the field balance out.
Here, fermion bare masses with values ranging from $ \mathcal{O}(10^{-4}) $ to $ \mathcal{O}(1) $ are considered.
We find that the field approaches the asymptotic value $ \overline{\sigma} = 0.48 $ for $ m_\psi \rightarrow 0 $, which is shown in the right plot of Figure~\ref{fig:field_SSB}.
This analysis shows that our numerical simulations of the quark-meson model reproduce the expected spontaneous symmetry breaking in the limit of vanishing fermion bare mass.
\section{Summary and conclusion}
\label{sec:conclusion}
Motivated by current experimental studies of the QCD phase diagram in heavy-ion collisions, we investigated the dynamical approach of the quark-meson model to thermal equilibrium using a range of different initial conditions dominated by either the sigma field or fermionic fluctuations. The time evolution of one- and two-point functions was computed numerically using closed equations of motion derived from the 2PI effective action at NLO in $1/N$ and the Yukawa coupling.
We show that our simulations correctly capture the approach to thermal equilibrium, which depends only on the energy density of the initial condition. The crossover phase transition from the chiral broken phase at low temperatures to the chiral symmetric phase at high temperatures is reproduced by the late-time equilibrium states. Thermalization in the chiral broken phase is characterized by a finite field expectation value, a mass difference between the sigma meson and the pions as well as narrow quasiparticle peaks in the spectrum. The restoration of chiral symmetry in the high-temperature regime expresses itself in the field expectation value decreasing to zero, the mass ratio of $ \sigma $ and $ \pi $ mesons going to unity and the scalar component of the quark spectral functions decaying to zero.
Our investigation focused in detail on the dynamical thermalization revealing differences in the time evolution depending on the initial state employed.
We not only studied the time evolution of the field expectation value but also probed the dynamical properties of the two-point functions, expressed in terms of the spectral and statistical functions, which carry information about the available quasiparticle states and their occupation in the system, respectively.
For initial states with vanishing initial field but energy supplied by fermion occupation, the spectral and statistical functions of both quarks and mesons approach their late-time thermal shapes already at early times.
In contrast, if the energy density is predominantly provided by the nonzero initial field value, the redistribution of energy from the field first to the bosonic sector and subsequently to the fermionic sector leads to high occupancies of the mesons at intermediate stages.
This is also reflected by the different behavior found in the time evolution of the quasiparticle masses depending on the initial conditions.
The deployed nonequilibrium setup of the quark-meson model captures important features of the low-energy behavior of QCD.
By studying the temperature dependence of the quasiparticle masses, we find that the lightest degrees of freedom are given by the pions at temperatures below and by quarks above the phase transition. This implies that quarks are the relevant degrees of freedom at high temperatures while pions dominate below the critical temperature.
Furthermore, we learn from the width that at high temperatures the more energetic high-momentum decay modes are more pronounced than at low temperatures.
The nonvanishing expectation value of the sigma field describes the order parameter of the chiral phase transition. Its dynamics depends on the initial state. If the initial field value is close to the minimum of the effective potential, the field remains almost constant. Otherwise, the field rolls down a potential hill and starts oscillating, thereby dynamically generating fluctuations.
Having shown that the dynamics of the thermalization process reveal interesting features before approaching the final thermal state, we lay the foundation for future investigations of the quark-meson model with nonzero baryon chemical potential.
In particular, the possibility of probing the dynamical thermalization of systems passing through the critical point of the chiral phase transition is of utmost interest.
\section*{Acknowledgments}
The authors acknowledge support by the state of Baden-W\"urttemberg through bwHPC and the German Research Foundation (DFG) through the Collaborative Research Centre ``SFB 1225 (ISOQUANT)".
A. R. acknowledges funding from the Research Council of Norway under the FRIPRO Young Research Talent Grant No. 286883 and Grant No. 295310. This work has utilized computing resources provided by UNINETT Sigma2 - the National Infrastructure for High Performance Computing and Data Storage in Norway under project NN9578K-QCDrtX ``Real-time dynamics of nuclear matter under extreme conditions".
\section{Introduction}
A Cataclysmic Variable (CV hereinafter) is a semidetached binary star system that is particularly stable (Frank et al. 2002). A CV consists of a white dwarf (WD) primary star and a lower mass main-sequence secondary star, typically an M star, although the spectral class can range from K to L. The condition that defines the distance between the two components is that the main sequence star fills its corresponding Roche lobe and loses matter through the $L_1$ Lagrangian point. The matter accretes onto the WD via an accretion disc, unless the WD has a strong enough magnetic field to prevent the formation of the disc. If the system is disturbed for any reason, it tends to return to equilibrium.
Two mechanisms maintain the balance of a CV: the evolutionary expansion of the secondary, and the decrease of the semi-major axis of the binary due to the loss of angular momentum. The decrease in angular momentum has two possible sources, depending on the orbital period of the binary system. The first one is magnetic braking, which is the dominant effect for systems with orbital periods above three hours (Whyte and Eggleton 1985; Livio and Pringle 1994). The second one is the emission of gravitational waves, for systems with orbital periods below two hours (Faulkner 1976; Chau and Lauterborn 1977). In the 2--3~h period range, neither mechanism is efficient at removing angular momentum. Hence, the number of known systems in that range, called the period gap, is significantly smaller.
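For orientation, the gravitational-wave channel can be quantified with the standard circular-orbit result of Peters (1964), for which the angular-momentum-loss timescale is $ J/|\dot J| = 5c^5 a^4/(32\, G^3 M_1 M_2 M) $ with $ M = M_1 + M_2 $. The masses and period in the following sketch are illustrative choices for a short-period CV, not values fitted to any system discussed here.
\begin{verbatim}
import numpy as np

G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30   # SI units
M1, M2 = 0.8 * M_sun, 0.2 * M_sun            # WD + M-dwarf secondary
P = 1.5 * 3600.0                             # period below the gap [s]

M = M1 + M2
a = (G * M * (P / (2.0 * np.pi))**2) ** (1.0 / 3.0)   # Kepler
tau_J = 5.0 * c**5 * a**4 / (32.0 * G**3 * M1 * M2 * M)
print(tau_J / 3.156e16)   # timescale in Gyr, of order a Gyr
\end{verbatim}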
The material that the secondary loses through the $L_1$ point cannot fall directly onto the primary; instead, it forms an accretion disc (Frank et al. 2002; Ritter 2008). This accretion disc is so luminous that it outshines both stars. The disc brightness is proportional to the mass transfer rate. Therefore, if the mass transfer rate changes, the luminosity of the system changes. In particular, a change in the location of the $L_1$ point will change the mass transfer rate and, as a consequence, the luminosity of the whole system.
CVs are notorious for variability on different time and magnitude scales. In this paper, we consider a specific one: a relatively low amplitude (0.07--0.97~mag) variability with periods exceeding the orbital ones by hundreds to thousands of times. The very long photometric period (VLPP, hereinafter) was first singled out in {\sl FS Aur} (Chavez et al. 2012, 2020), although the object shows many other variabilities. More VLPPs have been identified in other CVs (e.g. Thomas et al. 2010; Kalomeni 2012; Chavez et al. 2012; Yang et al. 2017; Chavez et al. 2020).
Different mechanisms have been proposed to explain VLPPs. Thomas et al. (2010) found a long-term modulation with a period of 4.43 days in the CV {\sl PX And}, using eclipse analysis, and proposed the disc precession period as the origin of the VLPP. Another example is the {\sl DP Leo} system, where Beuermann et al. (2011) found a period of 2.8 yr, using eclipse time variations, and concluded that a third body was the best explanation for the VLPP. Honeycutt, Kafka and Robertson (2014) found a 25--d periodicity in {\sl V794}. Kalomeni~(2012) discovered several magnetic CVs that show long--term variability, on time scales of hundreds of days, and concluded that those VLPPs are likely to originate from the modulation of mass transfer due to magnetic cycles in the companion star.
More recently, Chavez~et~al. (2012, 2020), using dynamical analysis, proposed that a third body can induce a VLPP through secular perturbations on the inner binary. The third body can introduce oscillations of the $L_1$ point of the close binary and, therefore, changes in the mass transfer rate. By means of secular perturbations, this mechanism can induce modulations of the inner binary as long as the VLPPs observed in the above-mentioned CVs.
The VLPP was observed in the long--term light curves of ten CVs by Yang~et~al.~(2017). As a possible mechanism of the VLPP for five out of ten systems, these authors proposed a third body orbiting the close binary, with the system being in a Kozai--Lidov resonance (Kozai 1962; Lidov 1962), which requires a mutual inclination between the plane of the binary and the orbit of the third object larger than 39.2$^{\circ}$. In this way, they were able to estimate the possible orbital period of the third body.
Our main goal in this research is to investigate whether a third body can explain the observed VLPPs of the four CVs, rather than to obtain a precise value for the mass of the third body.
This paper is organized as follows. Section~2 provides information about the CVs considered in this work and their initial parameters. Section~3 gives the properties of the third body that result from our analysis in order to explain the observed VLPPs. Section~4 briefly addresses the potential role of a post-Newtonian correction on the VLPPs. In Section~5, we address the effects of a probable third body on the mass transfer rate and brightness of the four CVs. In Section~6, we present our results and discussion, and we provide final comments in Section~7.
\section{THE CATACLYSMIC VARIABLES STUDIED AND THEIR INITIAL PARAMETERS}
\label{Sec:CVs}
Yang~et~al.~(2017) matched 344 out of 1580 known CVs, and extracted their data from the Palomar Transient Factory (PTF) data repository. These images were combined with the Catalina Real-Time Transit Survey (CRTS) light curves. They found ten systems with previously unreported VLPPs: {\sl BK Lyncis} (2MASS J09201119+3356423), {\sl CT Bootis}, {\sl LU Camelopardalis} (2MASS J05581789+6753459), {\sl QZ Serpentis} (SDSS J155654.47+210719.0), {\sl V825 Herculis} (2MASS J17183699+4115511), {\sl V1007 Herculis} (1RXS J172405.7+411402), {\sl Ursa Majoris 01} (2MASS J09193569+5028261), {\sl Coronae Borealis 06} (2MASS J15321369+3701046), {\sl Herculis 12} (SDSS J155037.27+405440.0) and {\sl VW Coronae Borealis} (USNO-B1.0 1231-00276740).
They analysed each system and, depending on the value of its VLPP, proposed a most likely origin, such as the precession of the accretion disc, a hierarchical three--body system, or magnetic field changes of the companion star. They argue that if the long--term period is less than several tens of days, the disc precession explanation is preferred, whereas a hierarchical three-body system or variations in the magnetic field are favoured for longer periods.
They propose six out of those ten systems to be hierarchical triples: {\sl BK Lyn}, {\sl LU Cam}, {\sl QZ Ser}, {\sl V1007 Her}, {\sl Her 12} and {\sl UMa 01}.
{\sl UMa 01} has a long orbital period of $P_{1}=404.10 \pm 0.30$~min (6.735~h), long compared with the other systems in the sample. According to the orbital period distribution of CVs, the number of systems with such a period or larger is very small, and therefore most of the statistical results cannot be applied to them (Knigge 2006; Knigge, Baraffe and Patterson 2011). {\sl UMa 01} has presumably formed recently and there are no good estimates of the parameters (e.g. mass, radius, temperature) of either component.
Additionally, {\sl Her 12} was identified as a CV by Adelman-McCarthy~et~al.~(2006), but we do not model it since its period is not well constrained, possibly lying in the range $P_{1}=76 $--$174$~min (Yang et al. 2017).
We study each of the four remaining systems (i.e. {\sl LU Cam}, {\sl QZ Ser}, {\sl V1007 Her} and {\sl BK Lyn}) to learn more about their dynamical attributes.
The values reported by Knigge~et~al.~(2011) are used for calculating the mass of each member of the CV. We do so since their article provides all the parameters that we later use in this research, such as the mass, radius and semi-major axis of each component of the binary. In that study they used eclipsing CVs and theoretical restrictions to obtain a semi--empirical donor sequence for CVs with orbital periods $P_{1}<6$~h. They estimate all key physical, photometric and spectral--type parameters of the secondary and primary as a function of the orbital period.
We use the data from their Tables~6 and 8 (Knigge~et~al.~2011) to obtain the parameters of the CVs in our selection. In practice, we use the online version of those tables (which are far more complete) to obtain the adequate values for the CVs studied here. If a system has any peculiar feature we point it out in the text and state the reference used for the corresponding value.
\subsubsection{White Dwarf mass}
First, we briefly describe the mass value adopted by Knigge~et~al.~(2011). In their study they used the mean value $\langle M_{1}\rangle= 0.75 \pm 0.05 \, {\rm M_{\odot}}$, even though by 2011 new data pointed to a mean WD mass in CVs of $\langle M_{1} \rangle= 0.79 \pm 0.05 \, {\rm M_{\odot}}$. They stated that, since they had already begun to assemble the grid of donor sequences and evolution tracks, ``we chose to retain $\langle M_{1}\rangle= 0.75 \pm 0.05 \, {\rm M_{\odot}}$ as a representative of WD mass''. More recently, a review by Zorotovic and Schreiber~(2020) reported that the mean WD mass could be even higher, $\langle M_{1}\rangle= 0.82$--$0.83 \pm 0.05 \, {\rm M_{\odot}}$.
We decided to use the Knigge~et~al.~(2011) values for all parameters of the WD to be self--consistent throughout this article and also because they provide estimates for $M_{1}$ and $R_{1}$ (WD\textquotesingle s mass and radius) corresponding to the orbital period of each CV (both values necessary for the calculations of the next sections).
To understand how this choice affects the calculations, we point out that in Chavez~et~al.~(2012) the calculations were done with $M_{1}=0.7 \, {\rm M_{\odot}}$, which we then updated to $M_{1}=0.75 \,{\rm M_{\odot}}$ (a change of 7\%) in Chavez~et~al.~(2020). The minimum of the curve in the middle panel of Fig.~8 of the 2012 article (semi--major axis versus mass of the third body) has a value of $M_{3}=50\, {\rm M_{J}}$, while when $M_{1}=0.75\, {\rm M_{\odot}}$ is used (Fig.~3 of the 2020 article) the minimum corresponds to $M_{3}=30\, {\rm M_{J}}$. That is a 40\% decrease in the mass of the third body at the minimum.
\subsubsection{LU Camelopardalis}
{\sl LU Cam} is a dwarf nova CV and the first spectrum of this system was obtained by Jiang~et~al.~(2000).
Its orbital period was first reported by Sheets~et~al.~(2007) to be $P_{1}=0.1499686(7)$~days$=3.599246$~hr. There, they point out that the averaged spectrum shows a strong blue continuum. Yang~et~al.~(2017) report a VLPP of 265.76 days and point out the hierarchical triple system explanation as their best candidate to explain it.
Using data from Knigge~et~al.~(2011) we obtain $M_{1}=0.75 \, {\rm M_{\odot}}$, $M_{2}=0.26 \, {\rm M_{\odot}}$. We show all the parameters of the system in Table \ref{tab:initial}.
\subsubsection{QZ Serpentis}
{\sl QZ Ser} is a system that has been classified as a dwarf nova. The system has an orbital period of $P_{1}=119.752(2)$~min $=1.99584$~h according to Thorstensen~et~al.~(2002a).
These authors found that the system is not a usual CV, as it is one of only a few known objects with a short orbital period and a non--standard K-type secondary. This K-type secondary has a much smaller mass than a usual K star because of an episode of unstable thermal-timescale mass transfer. There are other examples of this type of CV. For instance, Thorstensen~et~al.~(2002b) found a K4 in the dwarf nova 1RXS J232953.9+062814, while Ashley~et~al.~(2020) found a K5 around a CV with a period of 4.99~h.
Thorstensen~et~al.~(2002a) used evolutionary models to estimate {\sl QZ Serpentis} parameters such as $M_{2}=0.125 \pm 0.025 \, {\rm M_{\odot}}$, which yielded $R_{2}=0.185 \pm 0.013 \, {\rm R_{\odot}}$, where $R_{2}$ is the secondary\textquotesingle s radius. They also used a typical white dwarf mass value of $M_{1}=0.7 \, {\rm M_{\odot}}$, widely adopted in 2002 (Jiang et al. 2000; Thorstensen et al. 2002a).
Thorstensen~et~al.~(2002a) estimated from observations of the ellipsoidal variations that the inclination (with respect to the sky\textquotesingle s plane) of the system must be $i=33.7^{\circ} \pm 4^{\circ}$. They then used this estimate to constrain the secondary\textquotesingle s mass, checking mass ratios between the primary and the secondary from 0.1 to 0.4 for this system. From the secondary\textquotesingle s velocity amplitude they give a mass function of $f=0.075 (5) \, {\rm M_{\odot}}$. The inclination can be calculated from the masses and the mass function using the following equation:
\begin{equation}
\label{incl}
i = \arcsin \Bigg[ \bigg( {(M_{1}+M_{2})^2 f \over M_{1}^3} \bigg) ^ {1 \over 3} \Bigg].
\end{equation}
Taking $M_{2}=0.125 \, {\rm M_{\odot}}$ and $M_{1}=0.7 \, {\rm M_{\odot}}$, Thorstensen et~al.~(2002a) obtained a value of $i=32^{\circ}$.
If we calculate the statistical values obtained by Knigge~et~al.~(2011), for the parameters of this CV (using the orbital period to do so) we find that $M_{1}=0.75 \, {\rm M_{\odot}}$, $R_{1}=0.0107 \, {\rm R_{\odot}}$, $M_{2}=0.15 \, {\rm M_{\odot}}$ and $R_{2}=0.1923 \, {\rm R_{\odot}}$. Therefore, these $M_{2}$ and $R_{2}$ estimates are both well within the uncertainties of the estimates of
Thorstensen~et~al.~(2002a). As we pointed out earlier, we decided to use the Knigge~et~al.~(2011) values to be self--consistent throughout this article since we need estimates for $M_{1}$ and $R_{1}$; both values will be used in the following sections.
Additionally, using these values in Eq.~\ref{incl} we obtain a value for the inclination $i=31.7^{\circ}$, which is well within the observational inclination uncertainty estimated by Thorstensen~et~al.~(2002a) and very close to the value they provide.
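Eq.~\ref{incl} is straightforward to evaluate numerically; the following minimal Python sketch (an illustration of ours, not code from the cited works) reproduces both the $i=32^{\circ}$ of Thorstensen~et~al.~(2002a) and the $i\approx 31.7^{\circ}$ quoted above:
\begin{verbatim}
import numpy as np

def inclination(M1, M2, f):
    # Eq. (1): sin i = [ (M1+M2)^2 f / M1^3 ]^(1/3), result in degrees
    sin_i = (((M1 + M2) ** 2 * f) / M1 ** 3) ** (1.0 / 3.0)
    return np.degrees(np.arcsin(sin_i))

f = 0.075   # mass function of QZ Ser in Msun (Thorstensen et al. 2002a)
print(inclination(0.70, 0.125, f))  # ~32.0 deg, Thorstensen et al. masses
print(inclination(0.75, 0.150, f))  # ~31.6 deg, Knigge et al. (2011) masses
\end{verbatim}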
The VLPP found by Yang~et~al.~(2017) is 277.72 days, the longest among the four systems studied, and they conclude that a hierarchical triple system is the best scenario to explain this period. Table \ref{tab:initial} shows the parameters used for this system in this work.
\subsubsection{V1007 Herculis}
This CV was discovered by Greiner~et~al.~(1998). They found that it is a polar system with an orbital period of $P_{1}=119.93$~min $=1.9988$~h. Since it is a polar system there is no disc around it, and hence no periods associated with a disc. Greiner~et~al.~(1998) estimated the mass of the secondary from the orbital period and found it to be $M_{2}=0.16 \, {\rm M_{\odot}}$, assuming the mass--radius relationship for main-sequence stars of Patterson~(1984).
Using the parameters of Knigge~et~al.~(2011) for this CV we obtain $M_{1}=0.75 \, M_{\odot}$ and $M_{2}=0.15\, M_{\odot}$, also shown in Table \ref{tab:initial} along with the rest of the parameters. The VLPP observed by Yang~et~al.~(2017) is 170.59 days.
\subsubsection{BK Lyncis}
{\sl BK Lyn} is a nova--like CV which was discovered by Green~et~al.~(1998). Its orbital period, $P_{1}=107.97 \pm 0.07$~min $=1.7995$~h, was found by Ringwald et~al.~(1996). In addition, the secondary was found to be an M5V star using infrared spectroscopy by Dhillon et~al.~(2000). The accretion rate was estimated to be $\dot{M}_{WD} \approx 10^{-9}$--$10^{-8} \,\textrm{M}_{\odot}/\textrm{yr}$, which constrains the mass of the WD only to a wide range of values between 0.4$\,{\rm M}_\odot$ and 1.2$\,{\rm M}_\odot$. Yang~et~al.~(2017) found that the VLPP for this system is 42.05 days (the shortest among all CVs studied here) and ruled out all other possible explanations except a hierarchical triple system.
Using Knigge~et~al.~(2011), as pointed out in the previous subsection, we obtain $M_{1}=0.75\, {\rm M}_{\odot}$ and $M_{2}=0.13 \, {\rm M}_{\odot}$, with all the parameters of the system shown in Table~\ref{tab:initial}.
\begin{table*}
\caption{\label{tab:initial} Initial parameters and magnitudes for all systems are calculated using Knigge~et~al.~(2011). The observed minimum magnitude ($M_{Bmin}$), maximum ($M_{Bmax}$) and overall change ($\Delta M_{B}$) due to VLPP (Yang~et~al.~2017) are shown.}
\begin{tabular}{@{}lcclrrlllllr}
\hline
Name of the CV& Binary Period & $M_{1}$ & $M_{2}$ & $R_1$ & VLPP & $M_{2} / M_{1}$ & $a$ & $M_{Bmax}$ & $M_{Bmin}$ & $\Delta M_{B}$ & $\log (\dot{M}_{2})$ \\
& (hours) & (${{\rm M}_{\odot}}$) & (${{\rm M}_{\odot}}$) & (${\rm R}_{\odot}$) & (days) & & (AU) & & & & $\ \ ({{\rm M}_{\odot}/{\rm yr} })$\\
\hline
\hline
{\sl LU Camelopardalis} & 3.5992 & 0.75 & 0.26 & 0.011 & 265.76 & 0.34 & 0.0055 & 15.55 & 16.10 & 0.55 &-9.02 \\
{\sl QZ Serpentis} & 1.99584 & 0.75 & 0.15 & 0.011 & 277.72 & 0.20 & 0.0036 & 17.43 & 17.50 & 0.07 & -10.09\\
{\sl V1007 Her} & 1.99883 & 0.75 & 0.15 & 0.011 & 170.59 & 0.20 & 0.0036 & 17.83 & 18.80 & 0.97 & -10.09\\
{\sl BK Lyncis} & 1.7995 & 0.75 & 0.13 & 0.011& 42.05 & 0.17 & 0.0033 & 14.40 & 15.08 & 0.68 & -10.14 \\
\hline
\end{tabular} \\
\end{table*}
\section{THREE--BODY CATACLYSMIC VARIABLE}
As pointed out earlier, Yang~et~al.~(2017) proposed the hierarchical triple system hypothesis for the four systems studied here after ruling out other explanations. They explored Lidov--Kozai resonances as a possible explanation for the observed VLPPs and derived the possible semi--major axis of the third body. The mutual inclination between the inner binary orbital plane and the third--body orbital plane should be greater than $39.2^{\circ}$ for this mechanism to perturb the inner binary effectively.
Here we explore a new possibility, namely that the secular perturbation by a low eccentricity and low inclination third object explains the VLPP and also the change of magnitude observed in these four CVs.
\subsection{Third body on a close near--circular planar orbit}
Chavez~et~al.~(2012), while investigating the system {\sl FS Aurigae}, ruled out that the VLPP could correspond directly to the period of a third body, since such an object would be too distant to have an important effect on the inner binary. A series of numerical integrations was performed and showed that the effect is indeed minimal and could not explain the VLPP of the CV {\sl FS Aurigae}.
It was concluded that a third body on a close, near--circular, planar orbit could produce perturbations of the central binary eccentricity, modulated on three different time scales: the period of the binary $P_1$, the period of the perturber $P_2$, and the much longer secular period identified with the VLPP. Secular perturbations have been studied both analytically and numerically by Georgakarakos~(2002, 2003, 2004, 2006, 2009). A third body prevents the complete circularization of the orbit due to tides by producing a long--term eccentricity modulation (e.g. Mazeh~\&~Shaham~1979; Soderhjelm~1982, 1984; Chavez~et~al.~2012, 2020). From Georgakarakos~(2003) it is possible to estimate the amplitude of such an eccentricity using the following equation:
\begin{equation}
\label{deltae}
\Delta e_{1} \propto q_{3} \Big( {P_{1} \over P_{2}} \Big)^{8 / 3} e_{2} \big( 1- e_{2}^2 \big)^{-5 / 2},
\end{equation}
where $P_{2}$ is the period of the third body around the inner binary, $e_{2}$ is the eccentricity of its orbit and $q_{3}=M_{3}/(M_{1}+M_{2}+M_{3})$. Therefore, any change over time in the eccentricity $e_{2}$, such as the modulations studied in Chavez~et~al.~(2012), will affect the eccentricity $e_{1}$ of the CV, modulating the position of the $L_1$ point and hence changing the brightness of the system. The details of the numerical modelling are given in the next section.
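Because only the proportionality is quoted in Eq.~\ref{deltae}, absolute values of $\Delta e_{1}$ require the full prefactor of Georgakarakos~(2003); ratios between configurations, however, can be evaluated directly. A minimal sketch, with hypothetical parameter values chosen only for illustration:
\begin{verbatim}
import numpy as np

def delta_e1_scaling(M1, M2, M3, P1, P2, e2):
    # right-hand side of Eq. (2) without the overall constant:
    # meaningful for ratios between configurations only
    q3 = M3 / (M1 + M2 + M3)
    return q3 * (P1 / P2) ** (8.0 / 3.0) * e2 * (1.0 - e2 ** 2) ** (-2.5)

base = delta_e1_scaling(0.75, 0.26, 0.09, 1.0, 7.1, 0.1)
test = delta_e1_scaling(0.75, 0.26, 0.09, 1.0, 7.1, 0.2)
print(test / base)   # ~2.16: doubling e2 roughly doubles Delta e1
\end{verbatim}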
\subsection{Numerical modelling for the circular case}
We performed dynamical simulations of the CVs with a hypothetical third body. The high--order Runge--Kutta--Nystr\"om RKN 12(10)~17M integrator of Brankin~et~al.~(1989) was used for the equations of motion of the complete three-body problem in the barycentric inertial reference frame. The total energy was conserved to $10^{-5}$ or better in all numerical experiments.
As in Chavez~et~al.~(2012), tidal deformation of the stars in the close binary is not important for CVs in general and the two objects can be considered point masses. Hence, all three bodies are considered point masses in our integrations. The binary is initially on a circular orbit, and the third mass moves initially on its own circular orbit around the inner binary in the same plane. The mass $M_3$ and its orbital period $P_2$ are chosen across an ensemble of numerical experiments.
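For readers who wish to reproduce the qualitative behaviour, a minimal version of such an integration can be set up with standard tools. The sketch below uses SciPy's DOP853 integrator instead of the RKN 12(10)~17M scheme employed for the production runs, with parameters corresponding to {\sl LU Cam} with a putative third body; the tolerances and time span are illustrative:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

G = 2.959122e-4                       # AU^3 Msun^-1 day^-2
MJ = 9.5458e-4                        # Jupiter mass in Msun
M = np.array([0.75, 0.26, 97 * MJ])   # WD, donor, third body
a1, a2 = 0.0055, 0.021                # semi-major axes (AU)

def deriv(t, y):
    # y = positions (9) + velocities (9); pairwise Newtonian gravity
    r = y[:9].reshape(3, 3)
    acc = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            if i != j:
                d = r[j] - r[i]
                acc[i] += G * M[j] * d / np.linalg.norm(d) ** 3
    return np.concatenate([y[9:], acc.ravel()])

# coplanar, initially circular hierarchical configuration
mu12 = M[0] + M[1]
r = np.array([[-a1 * M[1] / mu12, 0, 0],
              [ a1 * M[0] / mu12, 0, 0],
              [ a2, 0, 0]])
vb = np.sqrt(G * mu12 / a1)       # relative circular speed of the binary
v3 = np.sqrt(G * M.sum() / a2)    # third body around the inner binary
v = np.array([[0, -vb * M[1] / mu12, 0],
              [0,  vb * M[0] / mu12, 0],
              [0,  v3, 0]])
r -= np.average(r, axis=0, weights=M)     # move to the barycentre
v -= np.average(v, axis=0, weights=M)

t_eval = np.linspace(0.0, 800.0, 40000)   # days, ~3 secular cycles
sol = solve_ivp(deriv, (0.0, 800.0),
                np.concatenate([r.ravel(), v.ravel()]),
                method="DOP853", rtol=1e-10, atol=1e-12, t_eval=t_eval)

# inner-binary eccentricity e1(t) from the eccentricity vector
rr = (sol.y[3:6] - sol.y[0:3]).T          # donor minus WD
vv = (sol.y[12:15] - sol.y[9:12]).T
h = np.cross(rr, vv)
ev = np.cross(vv, h) / (G * mu12) - rr / np.linalg.norm(rr, axis=1)[:, None]
e1 = np.linalg.norm(ev, axis=1)
\end{verbatim}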
We proceed as follows. We fix the value of the period of the third body, $P_2$, vary its mass $M_3$, perform the numerical integrations, and then calculate the eccentricity $e_1$ as a function of time. We obtain the secular period of each integration from $e_1$ using a Lomb--Scargle periodogram (Lomb~1976; Scargle~1982). This procedure shows the effect that the mass has on the secular period.
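The periodogram step can be sketched in the same spirit (SciPy's implementation expects angular frequencies; \texttt{sol.t} and \texttt{e1} are assumed to come from the integration sketched above):
\begin{verbatim}
import numpy as np
from scipy.signal import lombscargle

trial_P = np.linspace(20.0, 1000.0, 5000)   # trial secular periods (days)
omega = 2.0 * np.pi / trial_P               # angular frequencies
power = lombscargle(sol.t, e1 - e1.mean(), omega)
print("secular period ~ %.1f d" % trial_P[np.argmax(power)])
\end{verbatim}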
In Figs.~\ref{fig:LUCamel}--\ref{fig:BKLyn} we show, as a function of mass, the VLPPs and semi--major axes obtained from our numerical experiments for each of the CVs studied. Each curve represents a given period $P_2$ that remains constant as we change the mass. We joined the points with an interpolated curve (spline method) in each case. A black point that appears, for example, in the middle panel of Figure~\ref{fig:LUCamel} represents a system that can explain the observed VLPP; i.e., any given point represents a combination of semi--major axis and mass that can produce the observed VLPP by secular perturbations.
\subsection{Analytical modelling of the third body on an eccentric and inclined orbit}
Following Chavez~et~al.~(2020), we also investigate the effect that eccentricity and inclination of the third body may have on the resulting VLPP and the expected parameters of mass and semi-major axis of the third body.
We decided to use previously derived analytical results to see the effect of eccentricity and inclination. The orbital evolution of hierarchical triple systems has been studied in a succession of articles (Georgakarakos~2002, 2003, 2004, 2006, 2009, 2013; Georgakarakos, Dobbs-Dixon \& Way~2016).
Part of these studies was focused on the secular evolution of such systems. These analytical results can give us estimates about the inner binary\textquotesingle s frequency and period of motion. Hence, we can determine which mass values and orbital configurations of a potential third body companion can give rise to the secular periods observed in each CV.
We use the results of Georgakarakos~(2009) for a coplanar perturber on a low eccentricity orbit, and for coplanar systems with eccentric perturbers we make use of Georgakarakos~(2003). Finally, for systems with low eccentricity and low mutual inclinations (with $i_m<$39.23$^{\circ}$) the results of Georgakarakos~(2004) are used.
The analytical expressions for the frequencies and periods can be found in the appendix of this article, while details of derivations can be found in the articles mentioned above.
In Figs.~\ref{fig:LUCamel}--\ref{fig:BKLyn} we show the analytical estimates as curves in different colors depending on the third body\textquotesingle s initial eccentricity or inclination.
\section{Effect of Post--Newtonian correction}
Here, we also consider another dynamical effect that may produce the long-term signal we observe in the light curves of the stellar binaries: a first-order post--Newtonian general relativity (GR) correction to the orbit of the stellar binary.
For all stellar pairs under investigation, the small semi-major axis of the orbit makes it worthwhile to include a post-Newtonian correction to describe the system\textquotesingle s motion more accurately. The inclusion of a post-Newtonian correction produces an additional precession of the pericentre at the following rate (e.g. Naoz~et~al.~2013, Georgakarakos~\&~Eggl~2015):
\begin{equation}
\label{eqverpi}
\dot{\varpi}=\frac{3 {G}^{\frac{3}{2}}(M_1+M_2)^{\frac{3}{2}}}{c^2a^{\frac{5}{2}}_1(1-e^2_1)},
\end{equation}
where $G$ is the gravitational constant, $c$ is the speed of light in vacuum, $a_1$ is the semi--major axis of the inner binary and $e_1$ the eccentricity of the inner binary.
Based on the precession rate given in the above equation, the post--Newtonian pericentre circulation period for each system is shown in Table~\ref{tab:GR}. The periods calculated are far too long to explain any of the VLPPs.
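The entries of Table~\ref{tab:GR} follow directly from Eq.~\ref{eqverpi}; the sketch below reproduces them to within the rounding of the input parameters of Table~\ref{tab:initial} (differences at the per-cent level):
\begin{verbatim}
import numpy as np

G, c = 6.674e-11, 2.998e8                   # SI units
Msun, AU, day = 1.989e30, 1.496e11, 86400.0

def gr_period_days(M1, M2, a1, e1=0.0):
    # pericentre circulation period 2*pi / varpi_dot, Eq. (3)
    wdot = (3.0 * (G * (M1 + M2) * Msun) ** 1.5
            / (c ** 2 * (a1 * AU) ** 2.5 * (1.0 - e1 ** 2)))
    return 2.0 * np.pi / wdot / day

for name, M2, a1 in [("LU Cam", 0.26, 0.0055), ("QZ Ser", 0.15, 0.0036),
                     ("V1007 Her", 0.15, 0.0036), ("BK Lyn", 0.13, 0.0033)]:
    print(name, round(gr_period_days(0.75, M2, a1)))   # cf. Table 2
\end{verbatim}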
\section{Effect of the third body on the mass transfer rate and brightness}
\subsection{Non--Magnetic cases}
It is possible to estimate how the modulation of the inner binary, due to the secular perturbation of the third body, affects the mass transfer and the brightness of the system. First we focus our attention on the non-magnetic cases, that is {\sl LU Cam}, {\sl QZ Ser} and {\sl BK Lyn}. This subsection follows Chavez~et~al.~(2020), and a brief review is provided here.
To calculate the mass loss of the secondary it is necessary to make use of the definition of $R_{L}(2)$. Since calculating the volume of the Roche lobe directly is difficult, it is customary to define an equivalent Roche-lobe radius, $R_{L}(2)$, as the radius of a sphere with the same volume as the Roche lobe. Sepinsky~et~al.~(2007) generalized the definition of $R_{L}(2)$ to include eccentric binaries:
\begin{equation}
\label{RL2}
R_L(2)=r_{12}(t) \; {{0.49 q^{2/3}} \over {0.6 q^{2/3} + \ln{(1+q^{1/3})}}},
\end{equation}
where $q=M_{2}/M_{1}$ is the mass ratio and $r_{12}$ is the distance between the two stars at any given time. We obtain $r_{12}$ from our numerical integrations for each system.
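As a minimal illustration (with $q$ and the separation taken from Table~\ref{tab:initial}):
\begin{verbatim}
import numpy as np

def roche_lobe_radius(r12, q):
    # Eq. (4): Eggleton's fit scaled by the instantaneous separation
    # r12, following Sepinsky et al. (2007); q = M2/M1
    q23 = q ** (2.0 / 3.0)
    return r12 * 0.49 * q23 / (0.6 * q23 + np.log(1.0 + q ** (1.0 / 3.0)))

print(roche_lobe_radius(0.0055, 0.34))   # LU Cam: ~0.0016 AU (~0.34 Rsun)
\end{verbatim}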
We then want to know the change in magnitude produced by a particular combination of parameters, so that we can compare it with the observed magnitude change in the light curve. In this way we can find, in each case, the system that best explains the observations according to our calculations.
We proceed as follows to estimate the change in magnitude due to the previous choice of parameters. We calculate the maximum $R_L(2)_{max}$, shown as a blue horizontal line in Fig.~\ref{fig:LUCamelexplain}, and the minimum $R_L(2)_{min}$, shown as a red horizontal line in Fig.~\ref{fig:LUCamelexplain}, for each system directly from our numerical results. From these, we can estimate the mass transfer rate $\dot{M} (2)$ and hence the luminosity of each CV.
Assuming that the secondary is a polytrope of index 3/2 and that the density around $L_1$ is decaying exponentially, it is possible to estimate the mass transfer rate using Eq.~2.12 of Warner~(1995):
\begin{equation}
\label{dotM2}
\dot{M}(2)= - C {M(2) \over P_{1}} \Bigg({\Delta R \over {R(2)} } \Bigg)^{3},
\end{equation}
where $C$ is a dimensionless constant $\approx 10-20$, $R(2)$ is the secondary stellar radius and $\Delta R$ is the amount by which the secondary overfills its Roche Lobe: $\Delta R=R(2)-R_L(2)$; $P_{1}$ is the inner binary period.
The $R(2)$ distance needs to be calculated carefully, since the equation for $\dot{M}(2)$ is very sensitive to the amount of overfill. We adjust $R(2)$ to recover the $\dot{M}(2)$ value that we report in Table \ref{tab:initial}; in Fig. \ref{fig:LUCamelexplain} the value of $R(2)$ is represented by a purple horizontal line. Since $R_{L}(2)$ is a function of time, we use its mean value, $R_{L}(2)_{mean}$, shown as a green line. Hence we adjust the value of $R(2)$ for each integration (in Figure \ref{fig:LUCamelexplain} the system is {\sl LU Cam}) until the difference $\Delta R=R(2)-R_L(2)_{mean}$ is such that $\log \dot{M}(2)$ is as in Table~\ref{tab:initial}.
We can calculate the maximum and minimum of the mass transfer rate by using the values of $R_L(2)_{max}$ and $R_L(2)_{min}$ to obtain $\dot{M}(2)_{max}$ and $\dot{M}(2)_{min}$.
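The calibration described above can be condensed into a few lines; the Roche-lobe extremes below are illustrative numbers (in ${\rm R}_{\odot}$), not our actual integration output, and $C=10$ is assumed:
\begin{verbatim}
import numpy as np

C = 10.0                             # dimensionless constant of Eq. (5)
P1 = 3.5992 / 24.0 / 365.25          # LU Cam binary period (yr)
M2 = 0.26                            # donor mass (Msun)

def mdot(R2, RL2):
    # Eq. (5), in Msun/yr; negative sign = mass loss by the donor
    return -C * (M2 / P1) * ((R2 - RL2) / R2) ** 3

RL_mean, RL_min, RL_max = 0.343600, 0.343598, 0.343602   # illustrative
target = 10.0 ** (-9.02)             # |mdot| from Table 1 for LU Cam
frac = (target * P1 / (C * M2)) ** (1.0 / 3.0)
R2 = RL_mean / (1.0 - frac)          # R(2) tuned so mdot(RL_mean) = target
print(np.log10(-mdot(R2, RL_mean)))  # -9.02 by construction
print(np.log10(-mdot(R2, RL_min)), np.log10(-mdot(R2, RL_max)))
\end{verbatim}
The last line illustrates how sharply the rate responds to small changes in $R_{L}(2)$.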
There are two main sources of a CV\textquotesingle s luminosity: the hot spot and the disc. The luminosity due to the so--called hot spot is produced when the stream of stellar material that crosses the $L_1$ point collides with the disc; its expression (Warner~1995) is given by:
\begin{equation}
L(SP) \approx {G M(1) \dot{M}(2) \over r_{d}},
\end{equation}
where $L(SP)$ is the luminosity due to the hot spot and the radius of the disc is typically $r_{d} \approx 0.40\times a_{1}$, with $a_{1}$ being the semi--major axis of the inner binary (see Table~\ref{tab:initial}).
Applying this equation to our extreme values of $R_L(2)$ we obtain the $L(SP)_{max}$ and $L(SP)_{min}$ values.
The luminosity due to the accretion disc, using Eq.~2.22a of Warner~(1995), is:
\begin{equation}
L(d)\approx {1 \over 2} {G M(1) \dot{M}(2) \over R_{1}}.
\end{equation}
Using this equation we can obtain the extreme values $L(d)_{max}$ and $L(d)_{min}$ for each system. The total luminosity at each extreme is found by adding the estimated luminosity of the hot spot to that of the disc, obtaining $L_{T_{max}}$ and $L_{T_{min}}$ for each system.
Then, it is possible to calculate the bolometric magnitude using $M_{bol}=-2.5 \log (L/L_0)$, with $L_{0}=3.0128 \times 10^{28}$~W used as the standard luminosity for comparison. From the extreme values we obtain $M_{B_{max}}$ and $M_{B_{min}}$, leading to a magnitude difference $\Delta M_{B}$.
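Putting the pieces together, the magnitude swing follows from the luminosity extremes; the rates below are hypothetical values chosen so that the result matches the $\Delta M_{B}=0.55$ of {\sl LU Cam}:
\begin{verbatim}
import numpy as np

G = 6.674e-11                                   # SI
Msun, Rsun, AU, yr = 1.989e30, 6.957e8, 1.496e11, 3.156e7
L0 = 3.0128e28                                  # zero-point luminosity (W)

def M_bol(mdot, M1=0.75, R1=0.011, a1=0.0055):
    # hot spot (Eq. 6) plus disc (Eq. 7) for a given |mdot| in Msun/yr
    md = mdot * Msun / yr
    L_spot = G * M1 * Msun * md / (0.40 * a1 * AU)
    L_disc = 0.5 * G * M1 * Msun * md / (R1 * Rsun)
    return -2.5 * np.log10((L_spot + L_disc) / L0)

mdot_max, mdot_min = 1.6e-9, 9.6e-10            # hypothetical extremes
print(M_bol(mdot_min) - M_bol(mdot_max))        # ~0.55 mag
\end{verbatim}
Since both luminosities scale linearly with $\dot{M}(2)$, the swing reduces to $\Delta M_{B}=2.5\log_{10}(\dot{M}_{max}/\dot{M}_{min})$.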
\subsection{Magnetic case}
{\sl V1007 Her} is the only magnetic system in our selection, which according to Wu~\&~Kiss~(2008) is a polar. The accretion luminosity of an accreting white dwarf is given by:
\begin{equation}
\label{eq:Lpolar}
L_{acc} = - {G M(1) \dot{M}(2) \over R_{1}}.
\end{equation}
\begin{table}
\caption{\label{tab:GR} GR periods for all systems obtained using the first order post--Newtonian correction. }
\begin{tabular}{@{}lrrr}
\hline
Name of the CV& VLPP & GR period & GR period \\
& (days) & (days) & (years) \\
\hline
\hline
{\sl LU Camelopardalis} & 265.76 & 27851.13 & 76.25 \\
{\sl QZ Serpentis} & 277.72 & 11445.08 & 31.33 \\
{\sl V1007 Her} & 170.59 & 11245.42 & 30.79 \\
{\sl BK Lyncis} & 42.05 & 9626.77 & 26.36 \\
\hline
\end{tabular} \\
\end{table}
\begin{table}
\fontsize{9}{10}\selectfont
\caption{\label{tab:brightness} Summary of values used to estimate the integration that best fits the VLPP and the change of magnitude for each system.}
\begin{tabular}{@{}lcccc}
\hline
Variable & {\sl LU Cam} & {\sl QZ Serp} & {\sl V1007 Her} & {\sl BK Lyn} \\
\hline
\hline
$P_{2}/P_{1}$ & 7.1 & 12.5 & 13.0 & 5.9 \\
$M_3$ (M$_J$) & 97 & 0.63 & 148 & 88 \\
$a_2$ (AU) & 0.021 & 0.019 & 0.021 & 0.011 \\
$\Delta M_{B}$ & 0.55 & 0.07 & 0.73 & 0.68 \\
\hline
\end{tabular} \\
\end{table}
For polars in a high state, $L_{acc}$ is much higher than the intrinsic luminosity of the two stars. Thus, we have $L_{bol}\approx L_{acc}$. Polars are Roche--lobe filling systems, with the mass transfer rate given by Eq. \ref{dotM2}, again using Sepinsky~et~al.~(2007) to calculate $R_{L}(2)$ directly from the integration. Therefore, from Eqs.~\ref{dotM2} and \ref{eq:Lpolar}, it is possible to estimate the change in brightness for {\sl V1007 Her} from $L_{acc\_max}$ and $L_{acc\_min}$.
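Because $L_{acc}$ is linear in the transfer rate, the polar's magnitude modulation depends only on the ratio of the extreme rates; a short sketch with hypothetical values tuned to the $\Delta M_{B}=0.73$ quoted for {\sl V1007 Her} in Table~\ref{tab:brightness}:
\begin{verbatim}
import numpy as np

G = 6.674e-11
Msun, Rsun, yr = 1.989e30, 6.957e8, 3.156e7

def L_acc(mdot, M1=0.75, R1=0.011):
    # Eq. (8) with |mdot| in Msun/yr
    return G * M1 * Msun * (mdot * Msun / yr) / (R1 * Rsun)

mdot_max, mdot_min = 1.6e-10, 8.2e-11      # hypothetical extremes
print(2.5 * np.log10(L_acc(mdot_max) / L_acc(mdot_min)))   # ~0.73 mag
\end{verbatim}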
\section{Results and Discussion}
We studied an ensemble of initial conditions for a hypothetical third body in each system and the way it affects both the VLPP and the change of brightness. All the results of the numerical integrations are shown as black points in Figs.~\ref{fig:LUCamel}--\ref{fig:BKLyn}, which correspond to {\sl LU Cam}, {\sl QZ Serp}, {\sl V1007 Her} and {\sl BK Lyn}, respectively.
The upper panel of each figure shows the resulting secular periods of the binary eccentricity as a function of the mass of the perturber. Each curve corresponds to different $P_2/P_{1}$ ratios. The thick horizontal line corresponds to the VLPP value of each system.
For a given $P_2/P_{1}$ ratio (i.e. a given curve), some of our integrations produce secular periods that never reach the VLPP line as we change the mass of the system. We argue that only systems whose curves cross the VLPP line can explain the long--term change in the light curve.
The middle panel is a plot of the perturber's semi-major axis against its mass. The black points denote the results of the numerical integrations, while the solid curves are analytical solutions from Georgakarakos~(2009) ($e_2=0$, blue curve) and Georgakarakos~(2003) (eccentric cases, green and red curves). The straight line denotes the orbital stability limit as given in Holman~\&~Wiegert~(1999), while the dotted line is the stability limit based on the results of Georgakarakos~(2013). In contrast to Holman \& Wiegert~(1999), Georgakarakos~(2013) does not assume that any of the three bodies is a massless particle. Hence, the two branches of the dotted line are due to the dependence of the stability limit on the mass of the perturber.
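For reference, the Holman--Wiegert limit is an empirical polynomial in the binary eccentricity and mass ratio; a sketch of the published fit for circumbinary (P-type) orbits reproduces the limits quoted in the subsections below:
\begin{verbatim}
import numpy as np

def hw99_critical_a(a_bin, e_bin, M1, M2):
    # empirical innermost stable circumbinary semi-major axis,
    # Holman & Wiegert (1999)
    mu = M2 / (M1 + M2)
    f = (1.60 + 5.10 * e_bin - 2.22 * e_bin ** 2 + 4.12 * mu
         - 4.27 * e_bin * mu - 5.09 * mu ** 2 + 4.61 * e_bin ** 2 * mu ** 2)
    return f * a_bin

print(hw99_critical_a(0.0055, 0.0, 0.75, 0.26))   # ~0.013 AU (LU Cam)
print(hw99_critical_a(0.0033, 0.0, 0.75, 0.13))   # ~0.007 AU (BK Lyn)
\end{verbatim}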
The lower panels in Figures \ref{fig:LUCamel}--\ref{fig:BKLyn} are similar to what we presented in the middle panel, but the inclination is varied here. For the coplanar case (blue curve) we use Georgakarakos~(2009), while for the three dimensional cases (green and red curve), we make use of Georgakarakos~(2004).
Table~\ref{tab:brightness} lists, for each system, the ratio between the period of the third body and the period of the inner binary ($P_{2} / P_{1}$), the mass of the third body (in Jupiter masses, ${\rm M}_J$), the semi-major axis of the third body ($a_2$, in AU) and the change of magnitude ($\Delta M_{B}$). The magnitude change for each system can be compared with the observed ones in Table~\ref{tab:initial}.
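As a consistency check, the $(P_{2}, M_{3}, a_{2})$ entries of Table~\ref{tab:brightness} satisfy Kepler's third law for the total mass of each triple; the sketch below recovers the tabulated $a_2$ values:
\begin{verbatim}
import numpy as np

MJ = 9.5458e-4   # Msun

def a2_kepler(P2_days, M_tot):
    # Kepler's third law in solar units: a^3 = M P^2 (AU, yr, Msun)
    return (M_tot * (P2_days / 365.25) ** 2) ** (1.0 / 3.0)

rows = [("LU Cam", 7.1, 3.5992, 97, 1.01),
        ("QZ Ser", 12.5, 1.99584, 0.63, 0.90),
        ("V1007 Her", 13.0, 1.99883, 148, 0.90),
        ("BK Lyn", 5.9, 1.7995, 88, 0.88)]
for name, ratio, P1_h, M3, M12 in rows:
    P2 = ratio * P1_h / 24.0                             # days
    print(name, round(a2_kepler(P2, M12 + M3 * MJ), 3))  # cf. Table 3
\end{verbatim}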
Now we will discuss some details of the results for each CV.
We searched for all the numerical integrations whose secular period matched the observed period of the system, and then performed all the required calculations to estimate the change in magnitude arising from the perturbations of the third body. The search continued until a system was found that also matched the observed change of magnitude. This led, in each case, to a system that simultaneously explains the VLPP and the change in magnitude.
\begin{figure}
\begin{center}
\includegraphics[width=9cm]{LuCamelallplotsfinals.eps}
\vspace{-0.75cm}
\caption{Results for the {\sl LU Camelopardalis} system. The numerical integrations performed are represented by black points. ({\sl Top}) Period of the long--term modulation (secular period) as a function of the third-body mass. Each blue curve joining black points corresponds to a different $P_2 / P_1$ ratio. The black line around 2.4 corresponds to the observed VLPP. ({\sl Middle}) Only numerical integrations that can explain the observed VLPP are shown. The blue curve corresponds to the circular planar analytical solution. The green line represents analytical planar systems with an eccentricity of 0.2 and the red line planar systems with an eccentricity of 0.5. The dotted line represents the inner stability limit calculated by Georgakarakos~(2013) and the grey solid line that of Holman \& Wiegert~(1999). ({\sl Bottom}) Similar quantities as in the middle panel, but for a circular orbit with different inclinations. The third body values consistent with observations obtained here are: $M_3=97\, M_J$ and $P_2=1.06$~days.
}
\label{fig:LUCamel}
\end{center}
\end{figure}
\subsection{\sl LU Camelopardalis}
This CV has an observed VLPP of 265.76~days, with ${M_{2} / M_{1}} = 0.34$, which is the largest ratio among the CVs studied here. Figure~\ref{fig:LUCamel} shows our numerical results for this system.
The stability limits given by Holman~\&~Wiegert~(1999) and Georgakarakos~(2013) are also shown. Holman \& Wiegert rule out any $a< 0.013$~AU (grey horizontal line), while Georgakarakos rules out any $a< 0.015$~AU (black dashed line). Care has to be exercised in the eccentric cases when dealing with small values of the semi-major axis, since the analytical formulae have singularities. This holds for the rest of the systems.
In this particular system, the third body on an initially circular orbit that explains the observations has $P_{2} / P_{1} = 7.1$, that is $P_{2}=25.5$~h $= 1.06$~days, a mass of $M_{3} = 97 \, \textrm{M}_{\textrm{J}}$, and a semi--major axis of $a_{2}=0.021$~AU. These system parameters also match the observed $\Delta M_{B}=0.55$.
By contrast, the period calculated using the first--order GR correction for this system is 27851.13 days (76.25 years), far too long to explain the observed VLPP of 265.76 days.
\begin{figure}
\begin{center}
\includegraphics[width=9cm]{QZSerpallFinals.eps}
\vspace{-0.75cm}
\caption{Results for {\sl QZ Serpentis} system. In this system the third body is found to have a mass $M_{3} = 0.63 \textrm{M}_{\textrm{J}}$ and $P_{2}=1.04$ days.
}
\label{fig:QZSer}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=9cm]{V1007HerallplotsFinals.eps}
\vspace{-0.75cm}
\caption{Results for the {\sl V1007 Herculis} system. Integrations yield for the third body a mass of $M_{3} = 148 \textrm{M}_{\textrm{J}}$ and $P_{2}=1.08$ days. }
\label{fig:V1007}
\end{center}
\end{figure}
\subsection{\sl QZ Serpentis}
The VLPP observed for this system is 277.72~days and the mass ratio is $ {M_{2} / M_{1}} = 0.20$. Figure~\ref{fig:QZSer} shows our numerical and analytical results for the circular and eccentric configurations.
Here, Holman \& Wiegert (1999) rule out any $a< 0.0075$~AU (grey horizontal line). Alternatively, Georgakarakos~(2013) rules out $a< 0.010$~AU (black dashed line).
Recall that some singularities can appear for small values of $a$.
In this particular system, the third body initially on a circular orbit that explains the observations has $P_{2} / P_{1} = 12.5$, that is $P_{2}=24.9$~h $= 1.04$~days, with its mass being $M_{3} = 0.63 \, \textrm{M}_{\textrm{J}}$ (the smallest among the systems), and a semi--major axis of $a_{2}=0.019$~AU. This system matches the observed $\Delta M_{B}=0.07$.
In a similar fashion, for inclined orbits with $i=15^{\circ}$ we observe that the values of {$a$} become higher than in the circular case, but at a faster rate than in the eccentric cases. For $i=30^{\circ}$ the masses increase faster than in the circular case as we decrease the semi--major axis $a$.
The first-order GR correction for this system yields a period of 11445.08 days (31.33 years), far too long to explain the observed period of 277.72 days.
\begin{figure}
\begin{center}
\includegraphics[width=9cm]{BKLyncisALLFinals.eps}
\vspace{-0.75cm}
\caption{Results for the {\sl BK Lyncis} system. Here, the third body has a mass of $M_{3} = 88 \textrm{M}_{\textrm{J}}$ and period of $P_{2}=0.44$ days.
}
\label{fig:BKLyn}
\end{center}
\end{figure}
\begin{figure*}
\begin{center}
\includegraphics[width=10cm,clip=20cm]{RL2PLOT7p1.eps}
\caption{Method used to calculate the change of magnitude due to the third body. The time evolution of $R_{L}(2)$ for the CV {\sl LU Cam} is shown as an example. The blue horizontal line is the maximum value for the $R_{L}(2)$ that the system reaches ($R_L(2)_{max}$), the red one corresponds to the minimum value ($R_L(2)_{min}$), green is the mean value ($R_L(2)_{mean}$) and purple is the $R(2)$ value. See text for more details.}
\label{fig:LUCamelexplain}
\end{center}
\end{figure*}
\subsection{\sl V1007 Herculis}
This CV has an observed VLPP of 170.59~days, and the mass ratio is $M_{2} / M_{1} = 0.202$. Figure~\ref{fig:V1007} shows the results of the numerical integrations performed, including circular (numerical) and eccentric (analytical) orbits, as well as two cases with different inclinations (analytical), as before.
Holman \& Wiegert~(1999) rule out any $a< 0.0075$~AU (grey horizontal line), and Georgakarakos~(2013) rules out any $a< 0.010$~AU (black dashed line).
In this particular system, the third body initially on a circular orbit that gives the closest value to what we observe has $P_{2} / P_{1} = 13.0$ (i.e. $P_{2}=25.98$~h $= 1.08$~days), a mass of $M_{3} = 148 \, \textrm{M}_{\textrm{J}}$, and a semi--major axis of $a_{2}=0.021$~AU. This system does not match the observed $\Delta M_{B}=0.97$, but is the closest to that value with $\Delta M_{B}=0.73$.
Lastly, the first--order post-Newtonian GR correction gives a period of 11245.42 days (30.79 years), which cannot explain the VLPP of 170.59 days.
\subsection{\sl BK Lyncis}
The observed VLPP is 42.05~days and $M_2/M_1=0.1674$, both values being the lowest in our set of CVs. Similarly to the previous figures, Fig.~\ref{fig:BKLyn} shows our results for {\sl BK Lyncis}.
The empirical criterion of Holman \& Wiegert~(1999) rules out any $a< 0.007$~AU (grey horizontal line), while the work of Georgakarakos~(2013) rules out any $a< 0.009$~AU (black dashed line).
In this system, since both stability limits lie above the minimum of the curve for the initially circular--planar case (black dots and blue curve in the middle and lower panels of Fig.~\ref{fig:BKLyn}), the possible solutions for this system are likely to lie above those limits. In fact, we could not find numerically (black dots) stable orbits below 0.0094~AU; this is discussed in the following section.
For {\sl BK Lyncis} the third body on an initially circular--planar orbit that best reproduces what we observed in the light curve of the CV has $P_{2} / P_{1} = 5.9$ ($P_{2}=10.62$~h = 0.44~days), mass $M_{3} = 88 \, \textrm{M}_{\textrm{J}}$, and $a_{2}=0.011$~AU. This system matches the observed $\Delta M_{B}=0.68$.
The first--order GR correction cannot explain the observed VLPP of 42.05 days either, since the predicted period is 9626.77 days (26.36 years), which is far too long.
\section{Final Comments}
In this article we explored the possible origin of the very long photometric periods (VLPPs) observed in four cataclysmic variables, {\sl LU Camelopardalis}, {\sl QZ Serpentis}, {\sl V1007 Herculis} and {\sl BK Lyncis}, all of them first reported by Yang~et~al.~(2017).
We find that the VLPPs of three of the four systems can be explained by the secular perturbations of a third body orbiting each CV. In the case of {\sl V1007 Herculis} we could not find an initially circular planar orbit that could explain the relatively large change in magnitude observed, $\Delta M_{B} \approx 1$.
All of our numerical integrations and modelling are based on the parameter estimates calculated using Knigge~et~al.~(2011), taking the orbital period of each CV as the starting point. From these, we estimated the best dynamical parameters of each system.
{\sl LU Camelopardalis} was explored numerically assuming an initially circular planar orbit, and we found that the configuration that explains both the observed VLPP and the change of magnitude $\Delta M_{B}$ has a period of $P_{2} = 25.5$~h and a mass of $M_{3} = 97 \, \textrm{M}_{\textrm{J}}$, larger than the minimum mass required for nuclear reactions at its centre (83~M$_{J}$).
For {\sl QZ Serpentis}, the configuration that can explain both the observed VLPP and the change of magnitude $\Delta M_{B}$ has $P_{2} = 24.9$~h and $M_{3} = 0.63~\textrm{M}_{\textrm{J}}$. This third-body mass is small compared to the rest of the CVs and is well within the planetary range, most likely because the observed change of magnitude is quite small ($\Delta M_{B}=0.07$).
In {\sl V1007 Herculis} the third body that best fits the observed VLPP and the change of magnitude has $P_{2} = 25.98$~h and $M_{3} = 148\, \textrm{M}_{\textrm{J}}=0.141 \, {\rm M}_\odot$; this system produces $\Delta M_{B}=0.73$. This mass is quite large (that of a red dwarf), and even with this high value it was not possible to reproduce the observed change in magnitude of $\Delta M_{B}=0.97$. We conclude that, if all our estimations are correct, it is only marginally possible that a third body on a close-to-circular and near-planar orbit can explain the VLPP and change of magnitude of this system.
\begin{figure}
\begin{center}
\includegraphics[width=8.5cm]{ResonanceBKLyncis.eps}
\vspace{-0.55cm}
\caption{Search for mean motion resonances in the {\sl BK Lyncis} system. Eccentricity is shown as a function of the pericentre distance $q$ (AU). The time evolution of two systems is displayed: one in resonance (purple) and the other not (orange).
The lines in red, from left to right, represent the 2:1, 3:1, 4:1, 5:1, 6:1, 7:1, 8:1, 9:1, 10:1, 11:1, 12:1, 13:1 and 14:1 mean motion resonances between the inner binary and the third body. See text for discussion.
}
\label{fig:BKLynRes}
\end{center}
\end{figure}
Lastly, we have the system {\sl BK Lyn}, which was the most challenging one to model. First, we found that there are no stable systems with orbits below 0.0094~AU. As a consequence, we could not trace the full curve of the numerical integrations in Fig.~\ref{fig:BKLyn} (middle and bottom plots). The initially circular analytical curve (shown in blue) serves as a reference for how the full curve would look.
It is important to point out that, even though the analytical curves are very helpful for seeing the effect that eccentricity and inclination have on the VLPP, they do not tell us which orbits are stable.
The system that can explain both the observed VLPP and the change of magnitude $\Delta M_{B}$ has $P_{2} = 10.62$~h ($P_{2}/P_{1} = 5.9$) and $M_{3} = 88 \, \textrm{M}_{\textrm{J}}$, which is close to the mass threshold of $83\, \textrm{M}_{\textrm{J}}$ for becoming an M star.
Also in this system we noticed, in the top panel of Fig.~\ref{fig:BKLyn}, that when the curves for low $P_{2} / P_{1}$ (the four curves at the bottom of the plot) approach 90--100$\,\textrm{M}_{\textrm{J}}$, their behaviour changes: they seem to oscillate and show an abrupt increase in the value of $P_{mod}$ (y--axis). After careful consideration, we realized that those sudden changes must originate from mean motion resonances.
Fig.~\ref{fig:BKLynRes} shows the eccentricity as a function of pericentre distance for two systems. The orange dots correspond to $M_{3} = 63 \, \textrm{M}_{\textrm{J}}$, $a=0.0062$~AU and mean eccentricity of $e=0.399$. The purple ones correspond to a system with $M_{3} = 139 \, \textrm{M}_{\textrm{J}}$, $a=0.0067$~AU and $e=0.302$.
These initial conditions were chosen to provide an example of a system that evolves due to secular perturbations (orange) and another that evolves due to resonant perturbations (purple). It is observed from Fig.~\ref{fig:BKLynRes} that the resonant system (purple) is immersed in the 5:1 Mean Motion Resonance (MMR) while the orange system is between the 4:1 and 5:1 MMR.
We also show, in the top plot of Figure~\ref{fig:BKLyn}, where the purple and orange systems are located (as circles of the corresponding color). This indicates that the peaks observed in the lower-right part of the plot are indeed due to resonances.
As can be seen in the top plot of Fig.~\ref{fig:BKLyn}, the curves associated with resonances increase their $P_{mod}$ value very quickly as the system approaches the resonance (crossing the observational VLPP black line). It is therefore possible to find configuration families, similar to those in the middle and bottom plots of Fig.~\ref{fig:BKLyn}, that can explain the observed VLPP using resonant systems instead of the secular families studied here. We leave the study of the resonant families for a future contribution.
We find that the first--order post-Newtonian GR corrections cannot account for the observed VLPP in any of the systems, since the predicted periods are far too long compared to the observed ones.
All the parameters of the pair of stars forming each CV (masses, semi--major axes, radii, etc.) were estimated from the observed orbital period. Since these are based on average statistical values, the parameters we derive for the third body should be regarded as estimates of the possible companion that might explain the observed characteristics, rather than precise values.
In three out of the four systems we found a third body on an initially circular orbit that explains both the observed VLPP and change in magnitude by secularly perturbing the inner binary.
In the case of {\sl V1007 Herculis} it was not possible to find a numerical model that could account for the $\sim 1$~mag change.
We also find that for {\sl BK Lyncis} mean motion resonances are important, hence it is possible that a third body in resonance could also account for the observed VLPP. Further exploration of the role of resonances in these systems is postponed for future works, since here we focused on secular perturbations.
\section*{Acknowledgements}
We thank the referee for the useful comments and corrections.
CEC would like to thank IAChR and JRChR for their helpful discussions and WBRA for her advice and help in the development of this article.
GT acknowledges support from PAPIIT project IN110619.
\section*{Data Availability}
The data presented and discussed in this article will be shared on reasonable request to the corresponding author.
|
2,869,038,154,964 | arxiv | \section{Introduction}
Since the first mathematical approach to the spread of a disease by
Daniel \citet{bernoulli1760}, epidemic models lie at the core of our
understanding of infectious diseases. As experimenting with in-vivo
epidemics is not a viable option, modeling approaches have been the main
resort to compare and test theories, as well as to gauge uncertainties
in intervention strategies. The acclaimed work of \citet{siroriginal},
defining the modern mathematical modeling of infectious diseases, has
evolved over the years into an impressive body of work, whose culmination
is well represented by the monumental summary of \citet{anderson92}. At
the same time, the epidemic modeling metaphor has been introduced to
describe a wide array of different phenomena. The spread of information,
cultural norms and social behavior can be conceptually modeled as a
contagion process. How black-outs spread on a nationwide scale or how
efficiently memes can spread on social networks are all phenomena whose
mathematical description relies on models akin to classic epidemic
models \cite{Vespignani:2012fk}. Although the basic mechanisms of each
phenomenon are different, their effective mathematical description often
defines similar constitutive equations and dynamical behaviors framed in
the general theory of reaction-diffusion processes \cite{vankampen}. It
is not surprising then that epidemic modeling is a research field that
crosses different disciplines and has developed a wide variety of
approaches ranging from simple explanatory models to very elaborate
stochastic methods and rigorous results \cite{Keeling07book}.
In recent years we are witnessing a second golden age in epidemic
modeling. Indeed, the real-world accuracy of the models used in
epidemiology has been considerably improved by the integration of
large-scale datasets and the explicit simulation of entire populations
down to the scale of single individuals
\cite{Eubank2004,Ferguson2005,Longini2005,Halloran2008,Chao2010,Balcan2009,Merler2011}.
Mathematical models have evolved into microsimulation models that can be
computationally implemented by keeping track of billions of
individuals. These models have gained importance in the public-health
domain, especially in infectious disease epidemiology, by providing
quantitative analyses in support of policy-making processes. Many
researchers are advocating the use of these models as real-time,
predictive tools
\cite{Nishiura2011,Tizzoni2013,Nsoesie2013}. Furthermore, these models
offer a number of interesting and unexpected behaviors, whose
theoretical understanding represents a new challenge, and have
stimulated an intense research activity. In particular, modeling
approaches have expanded into schemes that explicitly include spatial
structures, individual heterogeneity and the multiple time scales at
play during the evolution of an epidemic \cite{Riley2007}.
At the core of all data-driven modeling approaches lies the structure of
human interactions, mobility and contacts patterns that finds its best
representation in the form of networks
\cite{Vespgnani_2009,butts:revisiting,Jackson2010,Newman10,Vespignani:2012fk}.
For a long time, detailed data on those networks was simply unavailable.
The new era of the social web and the data deluge is, however, lifting
the limits scientists have been struggling with for a long time. The
pervasive use of mobile and wifi technologies in our daily life is
changing the way we can measure human interactions and mobility network
patterns for millions of individuals at once. Sensors and tags are able
to produce data at the micro-scale of one-to-one interactions. Proxy
data derived from the digital traces that individuals leave in their
daily activities (microblogging messages, recommendation systems,
consumer ratings) allow the measurement of a multitude of social
networks relevant to the spreading of information, opinions, habits,
etc.
Although networks have long been acknowledged as a key ingredient of
epidemic modeling, the recent abundance of data is changing our
understanding of a wide range of phenomena and calls for a detailed
theoretical understanding of the interplay between epidemic processes
and networks. A large body of work has shown that most real-world
networks exhibit dynamic self-organization and are statistically
heterogeneous---typical hallmarks of complex systems
\cite{barabasi02,newman-review,Dorogovtsev:2002,baronchelli13,boccaletti2006cns,caldarelli2007sfn,Newman10,mendesbook,havlinbook,fontourareview}.
Real-world networks of relevance for epidemic spreading are very
different from regular lattices. Networks are hierarchically organized
with a few nodes that may act as hubs and where the vast majority of
nodes have very few interactions. Both social and infrastructure
networks are organized in communities of tightly interconnected
nodes. Although randomness in the connection process of nodes is always
present, organizing principles and correlations in the connectivity
patterns define network structures that are deeply affecting the
evolution and behavior of epidemic and contagion processes. Furthermore,
network's complex features often find their signature in statistical
distributions which are generally heavy-tailed, skewed, and varying over
several orders of magnitude.
The evidence of large-scale fluctuations, clustering and communities
characterizing the connectivity patterns of real-world systems has
prompted the need for mathematical approaches capable of dealing with the
inherent complexity of networks. Unfortunately, the general solution,
handling e.g. the master equation of the system, is hardly achievable
even for very simple dynamical processes. For this reason, an intense
research activity focused on the mathematical and computational modeling
of epidemic and diffusion processes on networks has started across
different disciplines~\cite{dorogovtsev07:_critic_phenom}. The study of
network evolution and the emergence of macro-level collective behavior
in complex systems follows a conceptual route essentially similar to the
statistical physics approach to non-equilibrium phase transitions
\cite{Henkel}. Hence, statistical physics has been leading the way to
the revamped interest in the study of contagion processes, and more
generally dynamical processes in complex networks. In the last ten
years, an impressive amount of methods and approaches ranging from
mean-field theories to rigorous results have provided new quantitative
insights in the dynamics of contagion processes in complex
networks~\cite{danonreview,keeling05:_networ}.
However, as it is often the case in research areas pursued by different
scientific communities, relevant results are scattered across domains
and published in journals and conference proceedings with completely
different readership. In some cases, relevant advances have been derived
independently by using different jargons as well as different
assumptions and methodologies. This fragmented landscape does not
advance the field and is, in many cases, leading to the
compartmentalization and duplication of the research effort. We believe
that a review is timely to contextualize and relate the recent results
on epidemic modeling in complex networks. Although infectious diseases
will be at the center stage of our presentation, social contagion
phenomena and network dynamics itself are discussed, offering a general
mathematical framework for all social and information contagion
processes that can be cast in the epidemic metaphor. The final goal is
to provide a coherent presentation of our understanding of epidemic
processes in populations, that can be modeled as complex networks.
After a review of the fundamental results in classical epidemic modeling
and the characterization of complex networks, we discuss the different
methodologies developed in recent years to understand the dynamic of
contagion processes in the case of heterogeneous connectivity
patterns. In particular, in Section IV we specifically spell out the
assumptions inherent to each methodology and the range of applicability
of each approach. In Section V those theoretical approaches are applied
to classic epidemic models such as the susceptible-infected-susceptible
(SIS) and susceptible-infected-removed (SIR) models. In those Sections
particular care is devoted to shed light on the role of the interplay of
the time-scales of the epidemic process and of the network dynamics and
on the appropriateness of different modeling approximations. In Sections VI
and VII we focus on various approaches to the mitigation and containment
of epidemic processes and on the analysis of several variations of the
basic epidemic models, aiming at a more realistic description of
contagion processes and contact patterns. In Section VIII we provide a
summary of recent results concerning time-varying networks. Although
this is an area that is rapidly advancing due to both theoretical and
data gathering efforts, we report on results that are expected to become
foundational. In Section IX we discuss the generalization of epidemic
processes in complex, multi-species reaction diffusion processes, an
area relevant in the analysis of epidemics in structured populations.
Finally, in Section X, we will review the generalization of epidemic
modeling of social contagion phenomena. The number of specific models
for social contagion is extensive and we therefore confine ourselves to
the most relevant to highlight differences and novel dynamical behaviors
in the evolution of the epidemic process. We conclude with an outlook to
the field and the challenges lying ahead of us.
The upsurge of interest in epidemic modeling in complex networks has led
to an enormous body of work: a query on the Thompson Web of Science
database with the keywords "epidemic" and "networks" returns more than
3600 papers in just the last 15 years. A review of all these papers is
unfortunately hardly feasible. Therefore, we have concentrated our
attention to, what we believe, are the most influential papers. In
providing a unified framework and notation for the various approaches,
we aim at fostering synergies across application domains and at providing a
common knowledge platform for future efforts in this exciting research
area.
\section{The mathematical approach to epidemic spreading}
\subsection{Classical models of epidemic spreading}
\label{sec:class-models-epid}
Over its more than 200-year history, the mathematical modeling of
epidemic spreading has evolved into a research area that cuts across
several fields of mathematical biology as well as other disciplines and
is treated in the classic books
by~\citet{anderson92,epidemics,Keeling07book,brauer2010,Diekmann_Heesterbeek_Britton_boek2012,Andersson2000}.
Here, we merely set the notation and present some of the basic elements
and approximations generally used in the modeling of epidemic phenomena,
in order to provide the necessary conceptual toolbox needed in the
following sections.
Epidemic models generally assume that the population can be divided into
different classes or compartments depending on the stage of the disease
\cite{anderson92,epidemics,Keeling07book}, such as susceptibles (denoted
by $S$, those who can contract the infection), infectious ($I$, those
who contracted the infection and are contagious), and recovered ($R$,
those who recovered from the disease). Additional compartments can be
used to signal other possible states of individuals with respect to the
disease, for instance immune individuals. For diseases propagating
through contact with an external carrier, this framework can be
extended to take into account vectors, such as mosquitoes for
malaria. Epidemic modeling describes the dynamical evolution of the
contagion process within a population. In order to understand the
evolution of the number of infected individuals in the population as a
function of time we have to define the basic individual-level processes
that govern the transition of individuals from one compartment to
another.
The simplest definition of epidemic dynamics considers the total
population in the system as fixed, consisting of $N$ individuals, and
ignores any other demographic process (migrations, births, deaths,
etc.). One of the simplest two-state compartmentalizations is the
susceptible-infected-susceptible (SIS) model with only two possible
transitions: The first one, denoted $S\to I$, occurs when a susceptible
individual interacts with an infectious individual and becomes infected.
The second transition, denoted $I\to S$, occurs when the infectious
individual recovers from the disease and returns to the pool of
susceptible individuals. The SIS model assumes that the disease does not
confer immunity and individuals can be infected over and over again,
undergoing a cycle $S \to I \to S$, which, under some conditions, can be
sustained forever. Another basic model is the classic three-state
susceptible-infected-recovered (SIR) model. In the SIR model, the
transition $I\to S$ of the SIS process is replaced by $I\to R$, which
occurs when an infectious individual recovers from the disease and is
assumed to have acquired a permanent immunity, or is removed (e.g. has
died). Clearly, the SIR process always stops when no more infected
individuals are present.
\begin{figure}[t]
\centering
\includegraphics[clip=true,width=8.5cm]{fig1.pdf}
\caption{Typical profile of the density $i(t)$ of infected individuals
versus time in a given epidemic outbreak. In the first regime $t <
t_1$, the outbreak is subject to strong statistical
fluctuations. In the second regime, $t_1 < t < t_2$ there is an
exponential growth characterized by the details of the epidemic
process. In the final regime ($t > t_2$), the density of infected
individuals either converges to zero, for SIR-like models, or to a
constant, possibly zero, for SIS-like models.}
\label{fig:IIevolution}
\end{figure}
The SIR and SIS models exemplify a basic classification of epidemic
models given in terms of their long time behavior, see
Fig.~\ref{fig:IIevolution}. In the long time regime, the SIS model can
exhibit a stationary state, the \textit{endemic} state, characterized by
a constant (on average) fraction of infected individuals. In the SIR
model, instead, the number of infected individuals always tends to zero.
In the SIS and SIR models, the infection and recovery processes
completely determine the epidemic evolution. The $I\to R$ and $I\to S$
transitions occur spontaneously after a certain time that individuals
spend fighting the disease or undergoing medical treatment; the transition
does not depend on any interactions with other individuals in the
population. The $S\to I$ transition instead occurs only because of the
contact/interaction of the susceptible individual with an infectious
one. In this case the interaction pattern among individuals is a
specific feature of the transition and has to be taken into account.
For many types of disease, the amount of time spent in the infectious
class is distributed around a well-defined mean value. The distribution
of the ``infectious period'' and the transition probability can
generally be estimated from clinical data. However, in a simplistic
modeling scheme, the probability of transition is often assumed
constant. In this way, a discrete-time formulation defines the recovery
probability $\mu$ that an individual recovers at any given time step.
The time an individual will spend on average in the infectious
compartment, the mean infectious period, is then equal to $\mu^{-1}$
time steps. In a continuous-time formulation and assuming a Poisson
process \cite{renewal}, $\mu$ is a rate (probability per unit time) and
the probability that an individual remains infected for a time $\tau$
follows an exponential distribution
$P_\mathrm{inf}(\tau) = \mu e^{-\mu \tau}$, with an average infection
time $\av{\tau} = \mu^{-1}$. The Poisson assumption for the processes of
infection and recovery leads naturally to a Markovian description of
epidemic models \cite{Ross1996}.
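As a simple numerical illustration of the Poisson assumption, the
following Python sketch (our own, assuming NumPy is available) samples
infectious periods from the exponential distribution
$P_\mathrm{inf}(\tau) = \mu e^{-\mu \tau}$ and verifies that the sample
mean approaches $\av{\tau} = \mu^{-1}$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(seed=1)
mu = 0.2                                       # recovery rate (per unit time)
tau = rng.exponential(1.0 / mu, size=100_000)  # sampled infectious periods

# The sample mean should approach the mean infectious period 1/mu = 5.
print(tau.mean())
\end{verbatim}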
The probability of the $S\to I$ transitions is more complicated and it
is dependent on several factors and on the modeling approximations
considered. In the absence of detailed data on human interactions, the
most basic approach considers a homogeneous mixing approximation
\cite{anderson92} which assumes that individuals interact randomly with
each other. Under this assumption, the larger the number of infectious
individuals among an individual's contacts, the higher the probability
of transmission of the infection. This readily translates to the
definition of the {\it force of infection} $\alpha$, which expresses the
probability, also called the risk, that one susceptible individual
contracts the infection in a single time step. In the continuous-time
limit we can define $\alpha$ as a rate and assume that
\begin{equation}
\alpha= \bar{\beta}\frac{N^I}{N},
\end{equation}
where $\bar{\beta}$
depends on the specific disease as well as the contact pattern of the
population, and $N^I$ is the number of infected individuals. Thus,
$\alpha$ is proportional to the fraction $\rho^I=N^I/N$ of infected
individuals in the population. In some cases $\bar{\beta}$ is explicitly
split in two terms as $\beta k$, where $\beta$ is now the rate of
infection per effective contact and $k$ is the number of contacts with
other individuals. This form of the force of infection corresponds to
the mass action law \cite{hethcote2000}, a widely used tool in the basic
mean-field description of many dynamical processes in chemistry and
physics. The force of infection depends only on the density of
infectious individuals and decreases for larger populations, all the other
factors being equal. It is possible however to consider forces of infection of
the type $\alpha=\beta N^I$, where the per capita infection probability
is proportional to the actual number of infected individuals $N^I$, and
assumes that the number of contacts scales proportionally to the size of
the population. Intermediate expressions for the force of infection,
depending on the size of the population as $N^{-a}$, have also been
discussed in the literature \cite{anderson92}.
\begin{figure}[t]
\centering
\includegraphics[width=8.5cm]{fig2.pdf}
\caption{Diagrammatic representation of different epidemic models in
terms of reaction-diffusion processes. Boxes stand for different
compartments, while the arrows represent transitions between
compartments, happening stochastically according to their respective
rates.}
\label{fig:epidiagrams}
\end{figure}
Generalizing the previous approach, an epidemic can be rephrased as a
stochastic \textit{reaction-diffusion process} \cite{vankampen}.
Individuals belonging to the different compartments can be represented
as different kinds of ``particles'' or ``species'', that evolve
according to a given set of mutual interaction rules, representing the
different possible transitions among compartments, and that can be
specified by means of appropriate stoichiometric equations. In the
continuous-time limit each reaction (transition) is defined by an
appropriate \textit{reaction rate}. We can therefore adopt the
reaction-diffusion formalism to describe the basic epidemic models, see
Figure~\ref{fig:epidiagrams}. The SIS model is thus governed by the
reactions
\begin{eqnarray}
S +I &\overset{\beta}{\rightarrow}& 2 I,\label{eq:sis1} \\
I &\overset{\mu}{\rightarrow}& S, \nonumber
\end{eqnarray}
where $\beta$ and $\mu$ are transition rates for infection and recovery,
respectively. In this model infection can be sustained forever for
sufficiently large $\beta$ or small $\mu$. The
Susceptible-Infected-Recovered (SIR) model \cite{siroriginal} is instead
characterized by the three compartments S, I and R, coupled by the
reactions
\begin{eqnarray}\label{eq:sir1}
S +I &\overset{\beta}{\rightarrow}& 2 I, \\
I &\overset{\mu}{\rightarrow}& R. \nonumber
\end{eqnarray}
For any value of $\beta$ and $\mu$, the SIR process always dies out
asymptotically after affecting a given fraction of the population.
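To make the stochastic nature of these reactions concrete, the
following minimal Gillespie-type simulation of the SIS model under
homogeneous mixing (a sketch of our own, assuming NumPy; parameter
values are arbitrary) draws exponential waiting times between events
occurring at the rates of Eq.~(\ref{eq:sis1}):
\begin{verbatim}
import numpy as np

def gillespie_sis(N=1000, I0=10, beta=0.3, mu=0.1, t_max=100.0, seed=0):
    """Gillespie simulation of the SIS model under homogeneous mixing."""
    rng = np.random.default_rng(seed)
    t, I = 0.0, I0
    times, prevalence = [t], [I / N]
    while t < t_max and I > 0:
        rate_inf = beta * I * (N - I) / N   # S + I -> 2I events
        rate_rec = mu * I                   # I -> S events
        total = rate_inf + rate_rec
        t += rng.exponential(1.0 / total)   # waiting time to next event
        I += 1 if rng.random() < rate_inf / total else -1
        times.append(t)
        prevalence.append(I / N)
    return np.array(times), np.array(prevalence)

t, rho = gillespie_sis()
print(rho[-1])   # fluctuates around the endemic value 1 - mu/beta = 2/3
\end{verbatim}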
Many more epidemic models can be defined analogously to the SIS and SIR
models. A useful variant is the SI model, which only considers the
first transition in Eqs.~(\ref{eq:sis1}) and~(\ref{eq:sir1}),
i.e. individuals become infected and never leave this state. While the
SI model is a somewhat strong simplification (valid only in cases where
the time scale of recovery is much larger than the time scale of
infection), it approximates the initial time evolution of both SIS and
SIR dynamics. More realistic models are defined in order to better
accommodate the biological properties of real diseases. For instance,
the SIRS (Susceptible-Infected-Removed-Susceptible) model is an epidemic
model incorporating a temporary immunity. It can be defined from the SIR
model by adding a microscopic transition event
\begin{equation}
\label{eq:waning}
R \overset{\eta}{\rightarrow} S,
\end{equation}
where $\eta$ is the rate at which the immunity of a recovered
individual is lost, rendering him/her susceptible again. The SEIR
model is a variation of the SIR model including the effects of exposed
($E$) individuals, which have been infected by the disease but cannot
yet transmit it. The SEIR model is one of the paradigmatic
models for the spreading of influenza-like illnesses and in the compact
reaction-diffusion notation reads as
\begin{eqnarray}
S +I &\overset{\beta}{\rightarrow}& E + I,\\
E &\overset{\gamma}{\rightarrow}& I, \nonumber \\
I &\overset{\mu}{\rightarrow}& R. \nonumber
\end{eqnarray}
All the above models can be generalized to include demographic effects
(birth and death processes in the population), the age structure of the
population, other relevant compartments (such as asymptomatic infected
individuals), etc. A more complete and detailed review of epidemic
models and their behavior can be found in
\citet{anderson92,Keeling07book,brauer2010}.
\subsection{Basic results from classical epidemiology}
\label{sec:class-results}
Although epidemic spreading is best described as a stochastic
reaction-diffusion process, the classic understanding of epidemic
dynamics is based on taking the continuous-time limit of difference
equations for the evolution of the average number of individuals in each
compartment. This deterministic approach relies on the homogeneous
mixing approximation, which assumes that the individuals in the
population are well mixed and interact with each other completely at
random, in such a way that each member in a compartment is treated
similarly and indistinguishably from the others in that same
compartment. This approximation, which is essentially equivalent to the
mean-field approximation commonly used in statistical physics, for both
equilibrium \cite{stanley} and nonequilibrium \cite{Marrobook} systems,
can be shown to be correct in regular lattices with high dimension, but
it is not exact in low dimensions
\cite{havlin_diffusion_reaction}. Under this approximation, full
information about the state of the epidemics is encoded in the total
number $N^\alpha$ of individuals in the compartment $\alpha$ or,
analogously, in the respective densities $\rho^\alpha = N^\alpha / N$,
where $N$ is the population size. The time evolution of the epidemics is
described by deterministic differential equations, which are constructed
applying the law of mass action, stating that the average change in the
population density of each compartment due to interactions is given by
the product of the force of infection times the average population
density \cite{hethcote2000}.
The deterministic equations for the SIR and SIS processes are obtained by applying the law of mass action and read as
\begin{eqnarray}
\frac{d \rho^I}{dt} &=& \beta \rho^I \rho^S - \mu \rho^I \\
\frac{d \rho^S}{dt} &=& - \beta \rho^I \rho^S + \chi \rho^I ,
\end{eqnarray}
where $\chi = \mu$ for the SIS process and $\chi = 0$ for the SIR model,
and the force of infection is $\alpha=\beta \rho^I$. These equations
are complemented with the normalization condition,
$\rho^R = 1 - \rho^S - \rho^I$ and $\rho^S = 1 - \rho^I$ for the SIR and
SIS model, respectively. If we consider the limit $\rho^I \simeq 0$,
generally valid at the early stage of the epidemic, we can linearize the
above equations obtaining for both the SIS and SIR models the simple
equation
\begin{equation}
  \frac{d \rho^I}{dt} \simeq (\beta - \mu) \rho^I,
\end{equation}
whose solution
\begin{equation}
\rho^I(t) \simeq \rho^I(0) e^{(\beta - \mu)t}
\label{initial_growth_approx}
\end{equation}
represents the early time evolution. Equation~(\ref{initial_growth_approx})
illustrates one of the key concepts in the
classical theoretical analysis of epidemic models. The
number of infectious individuals grows exponentially if
\begin{equation}
\beta - \mu > 0 \quad \Rightarrow \quad R_0 = \frac{\beta}{\mu} >1,
\end{equation}
where we have defined the \textit{basic reproduction number} $R_0$ as
the average number of secondary infections caused by a primary case
introduced in a fully susceptible population \cite{anderson92}. This
result allows us to define the concept of epidemic threshold: only if
$R_0 > 1$ (i.e. if a single infected individual generates on average
more than one secondary infection) an infective agent can cause an
outbreak of a finite relative size (in SIR-like models) or lead to a
steady state with a finite average density of infected individuals,
corresponding to an \textit{endemic} state (in SIS-like models). If
$R_0 < 1$ (i.e. if a single infected individual generates less than one
secondary infection), the relative size of the epidemic is negligibly
small, vanishing in the thermodynamic limit of an infinite
population\footnote{In the present context, since we do not consider
spatial effects, the thermodynamic limit is simply defined as the
limit of an infinitely large number of individuals.} (in SIR-like
models) or leading to a unique steady state with all individuals healthy
(in SIS-like models). This concept is very general and the analysis of
different epidemic models \cite{anderson92} reveals in general the
presence of a \textit{threshold behavior}, with a reproduction number
that can be expressed as a function of the rates of the different
transitions describing the epidemic model.
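The threshold behavior can be checked numerically by integrating the
deterministic equations above; the short sketch below (our own,
assuming SciPy is available) compares the final epidemic size of the
SIR model below and above $R_0 = 1$:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def sir(t, y, beta, mu):
    """Deterministic SIR equations under homogeneous mixing."""
    rho_S, rho_I = y
    return [-beta * rho_I * rho_S, beta * rho_I * rho_S - mu * rho_I]

mu = 0.1
for beta in (0.05, 0.2):                 # R0 = beta/mu = 0.5 and 2.0
    sol = solve_ivp(sir, (0, 400), [0.999, 0.001], args=(beta, mu))
    rho_R = 1 - sol.y[0, -1] - sol.y[1, -1]
    print(f"R0 = {beta/mu:.1f}: final epidemic size = {rho_R:.3f}")
\end{verbatim}
For $R_0 = 0.5$ the outbreak involves a negligible fraction of the
population, while for $R_0 = 2$ a finite fraction is eventually
affected.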
A few remarks are in order here. First, although we have stated that
epidemic processes can be considered as reaction-diffusion systems, the
classic approach completely neglects the diffusion of
individuals. Spatial effects can be introduced by adding continuous
diffusion terms or by considering patch models.
Furthermore, epidemic spreading is governed by an inherently
probabilistic process. Therefore, a correct analysis of epidemic models
should consider explicitly its stochastic nature
\cite{Andersson2000}. Accounting for this stochasticity is particularly
important when dealing with small populations, in which the number of
individuals in each compartment is reduced. For instance, while the
epidemic threshold condition $R_0 > 1$ is a necessary and sufficient
condition for the occurrence of an epidemic outbreak in deterministic
systems, in stochastic systems this is just a necessary
condition. Indeed, even for $R_0 > 1$, stochastic fluctuations can lead to
the epidemic extinction when the number of infectious individuals is
small. Analogously, all the general results derived from deterministic
mean-field equations can be considered representative of real systems
only when the population size is very large (ideally in the
thermodynamic limit) and the fluctuations in the number of individuals
can be considered small. Indeed, most of the classical results of
mathematical epidemiology have been obtained under these assumptions
\cite{anderson92}.
Another point worth stressing is the Poisson assumption. Although we
will mostly focus on Poissonian epidemic processes (see Sections
\ref{sec:real-epid-models} and \ref{sec:epid-proc-temporal-nets} for
some remarks on the non-Poissonian case), a different phenomenology,
both more complex and interesting, can be obtained from
non-exponentially distributed infection or recovery processes.
Finally, the classic deterministic approach assumes \emph{random and
homogeneous mixing}, where each member in a compartment is treated
similarly and indistinguishably from the others in that same
compartment. In reality, however, each individual has his/her own
social contact network over which diseases propagate,
usually differing from that of other members in a group or
compartment. \citet{Diekmann_Heesterbeek_Britton_boek2012} illustrate
the weakness of $R_{0}$ by discussing a line and square lattice topology
and they conclude that network and percolation theory needs to be
consulted to compute the epidemic threshold, leading to a new definition
of the basic reproduction number depending on the topology of the
network. Thus, for example, in the case of a homogeneous contact network
in which every individual is in contact with the same number of
individuals $\langle k \rangle$, the basic reproduction number takes the form
\begin{equation}
R_0 = \langle k \rangle \frac{\beta}{\mu}.
\label{homogeneousR0}
\end{equation}
The impact of heterogeneous connectivity patterns, reflected by an
underlying network topology, on the epidemic behavior is the focus of
the present review.
\subsection{Connections with other statistical physics models}
\label{sec:2.C}
The interest that models for epidemic spreading have attracted within
the statistical physics community stems from the close connection
between these models and more standard nonequilibrium problems in
statistical physics~\cite{Marrobook,Henkel}. In particular, the
epidemic threshold concept is analogous to the concept of phase
transition in non-equilibrium systems. A phase transition is defined as
an abrupt change in the state (\textit{phase}) of a system,
characterized by qualitatively different properties, and that is
experienced varying a given \textit{control parameter} $\lambda$. The
transition is characterized by an \textit{order parameter} $\rho$
\cite{yeomans}, which takes (in a system of infinite size) a non-zero
value in one phase, and a zero value in another (see
Figure~\ref{fig:phasetrans}). The phase transition takes place at a
particular value of the control parameter, the so-called
\textit{transition point} $\lambda_c$, in such a way that for
$\lambda>\lambda_c$ we have $\rho >0$, while for
$\lambda \leq \lambda_c$, $\rho =0$. Apart from the determination of the
transition point, the interest in physics lies in the behavior of the
order parameter around $\lambda_c$, which in \textit{continuous, or
  critical phase transitions}\footnote{In \textit{first order
    transitions} the order parameter takes a discontinuous jump at the
    transition point \cite{stanley}.} takes a power-law form,
$\rho(\lambda) \sim (\lambda- \lambda_c)^{\beta_{crit}}$, defining the
\textit{critical exponent} $\beta_{crit}$ \cite{yeomans}.
\begin{figure}[t]
\includegraphics*[width=8.5cm]{fig3.pdf}
\caption{Phase diagram of a typical non-equilibrium absorbing state
phase transition (SIS-like).
Below the critical point $\lambda_c$, the order
parameter is zero (healthy phase in an epidemics
interpretation). Above the critical point, the order parameter
attains a non-zero average value in the long time regime (endemic
or infected epidemic phase).}
\label{fig:phasetrans}
\end{figure}
The SIS dynamics thus belongs to the wide class of
non-equilibrium statistical models possessing \textit{absorbing states},
i.e. states in which the dynamics becomes trapped with no possibility
to escape. The paradigmatic example of a system with an absorbing state
is the contact process \cite{harris74}, where all nodes of a
lattice or network can be either occupied or empty. Occupied nodes
annihilate at rate $1$; on the other hand, they can reproduce at rate
$\lambda$, generating one offspring that can occupy an empty nearest
neighbor. The contact process experiences an \textit{absorbing-state
  phase transition} \cite{Marrobook,Henkel} at a critical point $\lambda_c$ between an active phase, in
which activity lasts forever in the thermodynamic limit, implying a
finite average density of occupied nodes, and an absorbing phase, in
which activity eventually vanishes, corresponding to an empty
system. In
the case of the SIS model, the active phase is given by the infected
state, and the absorbing phase by the state where no individual is
infected, see Figure~\ref{fig:phasetrans}. The order parameter is
therefore the \textit{prevalence} or density of infected individuals,
and the control parameter is given by the \textit{spreading rate} or {\em effective infection rate}, which
equals $\lambda = \beta/\mu$.
The \textit{epidemic threshold} (critical point) $\lambda_c$ separates
thus the infected from the healthy phase. While this distinction is
strictly true in the thermodynamic limit, for finite systems the
dynamics for any value of $\lambda$ sooner or later visits the
absorbing state and remains trapped there. The absorption event can occur even in
the active phase well above the critical point, because of random
fluctuations, illustrating that the determination of the critical point is a nontrivial task, both for
theoretical approaches and numerical simulations
\cite{Marrobook,Henkel}. It is interesting to note that the dynamics of
the SIS process is essentially identical to that of the contact process
in lattices; indeed, the difference between the SIS and the contact
process lies exclusively in the number of offspring that an active
individual can generate. While in the contact process one particle
always generates on average one offspring per unit time, an infected
individual in the SIS model can infect all of his/her nearest neighbors in the
same time interval. This difference is trivial when the number of
nearest neighbors is fixed, but it can lead to a dramatic difference
when the number of nearest neighbors has large fluctuations (see
Section~\ref{sec:epid-proc-heter}).
The SIR model also exhibits a transition between a phase where the
disease outbreak reaches a finite fraction of the population and a phase
where only a limited number of individuals are affected. This is
strongly reminiscent of the transition occurring in
\textit{percolation}~\cite{stauffer94,Grassberger1983}. In the simplest
possible setting of (bond) percolation in a lattice, the connections
between nearest neighbors of a lattice or network are erased with
probability $1-p$ and kept with complementary probability $p$. A
critical value $p_c$ separates a super-critical percolating phase, where
a macroscopic connected cluster spans the whole lattice, from a
sub-critical phase where only connected clusters of finite size
exist. The order parameter describing the transition is the probability
$P_G(p)$ that a randomly chosen site belongs to the spanning cluster.
In the case of networks, the percolating phase corresponds to the
presence of a largest connected component with a size proportional to
the network size (the \textit{giant component}, see
Section~\ref{sec:general-definitions}), while in the sub-critical phase
it has a relative size that vanishes in the thermodynamic limit. In the
case of networks, the order parameter is proportional to the
relative size of the giant component. The mapping between SIR and
bond percolation is made by identifying the size of connected
components with the size of epidemic outbreaks, with a control parameter
that depends on the spreading rate
$\lambda=\beta/\mu$. This connection will be further developed and
exploited in Sec.~\ref{sec:4.B}.
Finally, it is worth mentioning first-passage
percolation~\cite{hammersley1965first,kesten2003first} as another
classical problem related to epidemics. In this model, a nonnegative
value $\tau_{ij}$ is defined on each edge of a graph and interpreted as
the time needed to cross the edge. Given a topology and the distribution
of the times $\tau$, first passage percolation investigates which points
can be reached in a certain time starting from a fixed origin. The SI
model for epidemics can be seen as the limit of first-passage
percolation with all passage times equal.
\section{Network measures and models}
\label{sec:netw-epid-whorps}
Although very common, the homogeneous assumption used in the previous
Section to derive the constitutive deterministic equations of basic
epidemic processes may be inadequate in several real-world situations
where individuals have large heterogeneity in their contact rates,
specific frozen patterns of interaction, or are in contact with only a
small part of the population. These features may have different relevance
depending on the disease or contagion process considered. However, a
wide range of social and biological contagion processes require
capturing the individuals' contact pattern structure in the mathematical
modeling approaches. This is even more relevant, because most real-world
systems show very complex connectivity patterns dominated by large-scale
heterogeneities described by heavy-tailed statistical distributions.
Network theory \cite{Newman10}
provides a general framework to discuss interactions among individuals
in detail. In this Section, we provide a short summary of the main
definitions and properties of networks, relevant for epidemic spreading,
and a basic introduction to the language of graph theory that is
necessary for a formal analysis of network properties. Network science
is burgeoning at the moment, and for more extensive accounts of this
field we refer to the books
\cite{mendesbook,caldarelli2007sfn,Dorogobook2010,Newman10,havlinbook,networksciencebook}.
\subsection{General definitions}
\label{sec:general-definitions}
Networks are mathematically described as graphs. A graph is a collection
of points, called \textit{vertices} (\textit{nodes} in the physics
literature or \textit{actors} in the social sciences). These points are
joined by a set of connections, called \textit{edges}, \textit{links} or
\textit{ties}, in mathematics, physics and social sciences,
respectively. Each edge denotes the presence of a relation or
interaction between the vertices it joins.
Edges can represent a bidirectional interaction between vertices, or
indicate a precise directionality in the interaction. In the first case
we talk about \textit{undirected} networks, and in the second case,
about \textit{directed} networks or \textit{digraphs}. From an
epidemiological point of view, the directedness of a network is indeed
relevant since it imposes restrictions on the possible paths of
propagation of the contagion. A compact way to specify all connections
present in a graph of size $N$ (i.e. with $N$ vertices) is the
$N \times N$ adjacency matrix $A$, with elements $a_{ij}=1$ if an edge
is connecting nodes $i$ and $j$ and zero otherwise. $A$ is symmetric in
undirected graphs, and asymmetric in directed graphs.
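As a concrete example, the following snippet (our own, assuming NumPy)
builds the adjacency matrix of a small undirected graph and checks its
symmetry:
\begin{verbatim}
import numpy as np

# A triangle (nodes 0, 1, 2) plus a pendant node 3 attached to node 2.
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
N = 4
A = np.zeros((N, N), dtype=int)
for i, j in edges:
    A[i, j] = A[j, i] = 1    # undirected: symmetric adjacency matrix

assert (A == A.T).all()
print(A.sum(axis=1))         # row sums give the degrees: [2 2 3 1]
\end{verbatim}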
A path $\mathcal{P}_{i_0, i_n}$ connecting vertices $i_0$ and $i_n$ is a
sequence of different edges $\{(i_j, i_{j+1})\}$, $j=0,\ldots, n-1$; the
number of edges traversed, $n$, is the hopcount, also called the length,
of the path. A graph is
\textit{connected} if there exists a path connecting any two vertices in
the graph. A \textit{loop} is a closed path with $i_0 \equiv i_n$.
\begin{figure}[t]
\includegraphics*[width=\columnwidth]{fig4.pdf}
\caption{Component structure of a directed graph. Figure adapted from
\citet{dorogodirect01}.}
\label{fig:bowtie}
\end{figure}
A \textit{component} $\mathcal{C}$ of a graph is
\index{component}defined as a connected subgraph.
The \textit{giant component} is the component or subgraph whose size
scales with the number of vertices in the graph. From an epidemiological
perspective, a disease in the giant component may in principle infect a
macroscopic fraction of the graph, while if the disease starts outside
of the giant component, the total number of infected vertices will be
necessarily limited, representing a fraction that decreases with the
network size.
In the case of directed graphs, the structure of the components is more
complex as the presence of a path from the node $i$ to the node $j$ does
not necessarily guarantee the presence of a corresponding path from $j$
to $i$. In general (see Figure~\ref{fig:bowtie}) the component structure
of a directed network can be decomposed into a giant weakly connected
component (GWCC), corresponding to the giant component of the same graph
in which the edges are considered as undirected, plus a set of smaller
disconnected components. The GWCC is itself composed of several parts
because of the directed nature of its edges: (1) the giant strongly
connected component (GSCC), in which there is a directed path joining
any pair of nodes; (2) the giant in-component (GIN), formed by the nodes
from which it is possible to reach the GSCC by means of a directed path;
(3) the giant out-component (GOUT), formed by the nodes that can be
reached from the GSCC by means of a directed path; (4) the tendrils,
that connect nodes that cannot reach the GSCC or be reached from it and
(5) the tubes, that connect the GIN and GOUT, but do not belong to the
GSCC.
\subsection{Network metrics}
A large number of metrics have been defined to characterize
different aspects of the topology of complex networks.
\subsubsection{Shortest path length and network diameter}
\label{sec:shortest-path-length}
In order to characterize the distance among nodes we introduce the
\textit{shortest path length}, sometimes also referred to as the
\textit{chemical} distance or \textit{geodesic} distance. The
shortest path distance $\ell_{ij}$ between two nodes $i$ and $j$ is
defined as the length of the shortest path (not necessarily unique)
joining $i$ and $j$. The \textit{diameter} of a network is the maximum
value of all the pairwise shortest path lengths, and the average
shortest path length $\av{\ell}$ is the average of the value of
$\ell_{ij}$ over all pairs of vertices in the network.
\subsubsection{Degree and degree distribution}
\label{sec:degr-degr-distr}
The degree $k_i$ of vertex $i$ in an undirected network is the number
of edges emanating from $i$, i.e. $k_i = \sum_j a_{ij}$. In the case
of directed networks, we distinguish between in-degree, $k^\textrm{in}_i$, and
out-degree, $k^\textrm{out}_i$, as the number of edges that end in $i$ or start
from $i$, respectively. In undirected networks we
define the \textit{degree distribution} $P(k)$ as the probability that
a randomly chosen vertex has degree $k$, or, in finite networks, as
the fraction of vertices in the graph with degree exactly equal to
$k$. In the case of directed networks, there are instead two different
distributions, the out-degree $P_\mathrm{out}(k^\textrm{out})$ and the in-degree
$P_\mathrm{in}(k^\textrm{in})$ distributions. The
in-degree and out-degree of a given vertex might not be
independent. Correlations are encoded in the joint probability
distribution $P(k^\textrm{in}, k^\textrm{out})$ that a randomly chosen vertex has in-degree
$k^\textrm{in}$ and out-degree $k^\textrm{out}$. It is
useful to consider the moments of the degree distribution, $\av{k^n} =
\sum_k k^n P(k)$. The first moment, the \textit{average degree} $\av{k} = 2L/N$, twice the ratio between the
number $L$ of edges (or links) and the number $N$ of nodes, provides information
about the density of the network. A network is called \textit{sparse}
if its number of edges $L$ grows at most linearly with the network size
$N$; otherwise, it is called \textit{dense}.
In directed networks, since every edge contributes to one node's in-degree
and to another node's out-degree, we have $\av{k^\textrm{in}} = \av{k^\textrm{out}}$.
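As an illustration (a sketch of our own, assuming NumPy and the
\texttt{networkx} library), the empirical degree distribution and its
first two moments can be computed directly from a degree sequence:
\begin{verbatim}
import numpy as np
import networkx as nx

G = nx.barabasi_albert_graph(50_000, m=2, seed=3)  # a heterogeneous graph
deg = np.array([d for _, d in G.degree()])

Pk = np.bincount(deg) / deg.size   # P(k): fraction of nodes of degree k
k = np.arange(Pk.size)
print((Pk * k).sum())              # <k> = 2L/N, close to 2m = 4 here
print((Pk * k**2).sum())           # <k^2> >> <k>^2: strong fluctuations
\end{verbatim}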
\subsubsection{Degree correlations}
\label{sec:degree-correlations}
Two-vertex degree correlations can be conveniently
measured by means of the conditional probability $P(k'|k)$ that an
edge departing from a vertex of degree $k$ is connected to a vertex of
degree $k'$ \cite{alexei}. A network is called \textit{uncorrelated}
if this conditional probability is independent of the originating
vertex $k$. In this case, $P(k'|k)$ can be simply estimated as the
ratio between the number of edges pointing to vertices of degree
$k'$, $k' P(k')N /2$, and the total number of edges, $\av{k} N/2$, to yield
$P^\mathrm{un}(k'|k) = \frac{k'P(k')}{\av{k}}$.
The empirical evaluation of $ P(k'|k)$ turns out to be quite noisy in
real networks, due to finite size effects. A related, simpler,
measure of correlations is the average degree of the
nearest neighbors of vertices of degree $k$, $\bar{k}_{nn}(k)$, which
is formally defined as \cite{alexei}
\begin{equation}
\label{eq:knndef}
\bar{k}_{nn}(k) = \sum_{k'} k' P(k'|k).
\end{equation}
For uncorrelated networks,
$\bar{k}_{nn}^\textrm{un}(k) = \av{k^2}/{\av{k}}$ does not depend on
$k$. Therefore, a varying $\bar{k}_{nn}(k)$ is the signature of degree
correlations. The analysis of empirical networks has suggested a broad
classification of networks in two main classes, according to the nature
of their degree correlations
\cite{assortative}:
\textit{Assortative networks} exhibit an increasing $\bar{k}_{nn}(k)$,
indicative that high degree nodes tend to connect to high degree nodes,
while low degree nodes are preferentially attached to low degree
nodes. \textit{Disassortative networks}, on the other hand, show a
decreasing $\bar{k}_{nn}(k)$ function, suggesting that high degree nodes
connect to low degree nodes, and vice versa. Assortativity by degree can
be characterized by the Pearson correlation coefficient $r$
\cite{assortative}: Uncorrelated networks have
$r=0$, while assortative (disassortative) networks present $r>0$
($r<0$), respectively.
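In practice (a sketch of our own, assuming \texttt{networkx}), both
$\bar{k}_{nn}(k)$ and the Pearson coefficient $r$ can be computed with
standard library routines:
\begin{verbatim}
import networkx as nx

G = nx.barabasi_albert_graph(10_000, m=3, seed=1)

knn = nx.average_degree_connectivity(G)     # dict: k -> kbar_nn(k)
r = nx.degree_assortativity_coefficient(G)  # Pearson coefficient r
print(sorted(knn.items())[:5])
print(round(r, 3))   # close to zero: this model is only weakly correlated
\end{verbatim}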
\subsubsection{Clustering coefficient and clustering spectrum}
\label{sec:clust-coeff-clust}
The concept of clustering refers to network transitivity, i.e.
the relative propensity of two nodes to be connected, provided that
they share a common neighbor. The clustering coefficient $C$ is
defined as the ratio between the number of loops of length three in
the network (i.e. triangles), and the number of connected triples (three nodes
connected by two edges). A local measure $c_i$ of clustering
\cite{watts98} can also be defined as the ratio between the actual
number of edges among the neighbors of a vertex $i$, $e_i$, and its
maximum possible value, measuring thus directly the probability that
two neighbors of vertex $i$ are also neighbors of each other. The mean
clustering of the network $\av{c}$ is defined as the average of $c_i$
over all vertices in the network. The clustering spectrum
$\bar{c}(k)$ is defined as the average clustering coefficient of the
vertices of degree $k$ \cite{alexei02,ravasz_hierarchical_2003},
satisfying $\av{c} =\sum_k P(k) \bar{c}(k)$.
\subsubsection{Centrality and structure in networks}
\label{sec:centrality}
The concept of \textit{centrality} encodes the relative importance of a
node inside a network, a relevant issue in the context of social network
analysis \cite{wass94}. Many different definitions of centrality have
been proposed, based on different indicators of the structural
importance of nodes. The simplest of them is the degree, referred to as
\textit{degree centrality}. The higher its degree, the more the node
can be considered influential/central in the network. Alternative
definitions are based on the shortest paths between vertices. Thus, the
\textit{closeness centrality} $\mathcal{C}_i$ is defined as the inverse
of the average of the shortest path lengths from vertex $i$ to all other
vertices in the network. With this measure, we consider a vertex
central if it is situated on average at a short distance from all other
vertices in the network. A very different perspective on centrality is
provided by the \textit{betweenness centrality} $b_i$ of vertex $i$,
defined as the number of shortest paths between any two vertices in the
network that pass through vertex $i$. More precisely, if $L_{h,j}$ is
the total number of shortest paths from $h$ to $j$, and $L_{h,i,j}$ is
the number of these shortest paths that pass through vertex $i$, then
$b_i = \sum_{h \neq j} L_{h,i,j}/ L_{h,j}$. Betweenness thus measures
centrality from the perspective of the control of information flowing
between different nodes, assuming this information flows following the
shortest path route \cite{freeman77}.
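The following short example (our own, assuming \texttt{networkx}; note
that library routines may differ from the definition above by a
normalization convention counting each unordered pair of endpoints once)
shows that different centrality measures can single out different nodes:
\begin{verbatim}
import networkx as nx

G = nx.krackhardt_kite_graph()   # a classic 10-node test graph
bc = nx.betweenness_centrality(G, normalized=False)
cc = nx.closeness_centrality(G)
# The node maximizing betweenness need not maximize closeness or degree.
print(max(bc, key=bc.get), max(cc, key=cc.get))
\end{verbatim}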
Another way to characterize the centrality of nodes resides in the concept
of $K$-coreness.
The $K$-core of a network is a maximal connected subgraph, such that all
vertices in the subgraph have degree $k \geq K$~\cite{Seidman1983269}.
The $K$-core decomposition is an iterative procedure that
classifies the vertices of the network in nested levels of increasing
connectivity (increasing $K$-core). The algorithm runs as follows: One
starts with the complete network, and removes iteratively all
vertices with degree $k=1$, until only vertices with degree $k \geq 2$
are present. The set of removed nodes represents the
$K=1$-\textit{shell}, while the remaining nodes constitute the
$K=2$-core. In the next iteration of the process, all vertices with
degree $k=2$ are removed (the $K=2$-shell), and we are left with the
$K=3$-core. This iterative process is stopped when we arrive at the
maximum $K_S$-core, where one more application of the algorithm
leaves no vertices. Each node is assigned a centrality measure
equal to its $K$-core index: the deeper the core, the more central the node.
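The pruning procedure just described translates directly into code; the
sketch below (our own, assuming \texttt{networkx} and a graph without
isolated nodes) reproduces the output of the library routine
\texttt{core\_number}:
\begin{verbatim}
import networkx as nx

def k_shell_indices(G):
    """K-shell index of each node via iterative pruning (see text)."""
    G = G.copy()
    shell = {}
    K = 1
    while G.number_of_nodes() > 0:
        while True:
            # Removing a node can push its neighbors below K: iterate.
            low = [v for v, d in G.degree() if d <= K]
            if not low:
                break
            for v in low:
                shell[v] = K          # v belongs to the K-shell
            G.remove_nodes_from(low)
        K += 1
    return shell

G = nx.barabasi_albert_graph(200, m=2, seed=0)
assert k_shell_indices(G) == nx.core_number(G)
\end{verbatim}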
It is worth remarking that real networks can display higher levels
of architecture that are difficult to capture with a single number. Many
networks possess a \textit{community structure}, in which
different sets of nodes, called \textit{communities} or
\textit{modules}, have a relatively high density of internal
connections, while being more loosely connected to each other. The
problem of computing the community structure of a given network has
been a very active topic in network science and a large number of
different approaches have been considered (see
\citet{Fortunato201075} for a specific review).
\subsection{Generalizations of simple graphs}
\label{sec:gener-simple-graphs}
The simple concept of graph considered above can be refined at
different levels, adding more and more complexity and detail in order
to better represent the real system under consideration. A first
extension is that of \textit{bipartite} graphs, in which we have $2$
different kinds of nodes, and edges join only two nodes of a different
kind. A classical example is given by the networks of heterosexual
relationships \cite{liljeros_web_2001}.
Another important generalization consists in the definition of
\textit{weighted networks}, in which a real number $\omega_{ij}$ (the
weight) is associated to the edge between vertices $i$ and
$j$. Weighted networks constitute the natural choice to represent many
systems, including transportation networks (e.g. the airport network),
in which the weight of an edge measures the fraction of people or
goods transported by the edge in a given interval of time, or social
networks, for which weights measure the relative intensity or
frequency of contacts between pairs of vertices. The addition of
weights allows the definition of a completely new set of topological metrics
\cite{Braunstein03,Barrat16032004,onnela05:_inten,PhysRevE.76.016101,PhysRevE.74.055101}.
Among those, the strength of a node $s_i$, defined as the sum of the
weights of all edges incident to it, i.e. $s_i = \sum_j \omega_{ij}$,
generalizes to weighted networks the concept of degree.
\subsection{Network classes and basic network models}
\label{sec:basic-network-models}
The recent abundance of data and measurements of real-world networks
has highlighted the existence of different classes of networks,
characterized by a large variability in basic metrics and statistical
properties. This classification has in turn fueled an intense
theoretical research effort devoted to the study of different network
generation models. The usefulness of these models in the present
context is that
they serve as generators of synthetic networks, with controlled
topological properties, in which the behavior of dynamical processes
such as epidemics can be studied in detail. In the following we will
survey some of the main network classes and models that are used for
exploring the properties of epidemic processes.
\subsubsection{Random homogeneous networks}
The first theoretical model of random networks is the \textit{classical
random graph} model \cite{solomonoff51,gilbert59,erdos59}. In its
simplest formulation, the graph $G_{p}(N)$ is constructed from a set of
$N$ nodes in which each one of the $N(N-1)/2$ possible links is present
with probability $p$. The degree distribution is given by a binomial
form, which, in the limit of constant average degree (i.e.
$p = \av{k}/ N$) and large $N$ can be approximated by a Poisson
distribution $ P(k) = e^{-\av{k}} \frac{\av{k}^k}{k!}$. The clustering
coefficient is simply given by $\av{c}=p$, and the average shortest path
length is $\av{\ell} \simeq \log N / \log \av{k}$ \cite{Dorogobook2010}.
This model is therefore adequate in the case of networks governed only
by stochasticity, although $G_{p}(N)$ tends to a regular graph for large
$N$ and constant $p$. The degree distribution is peaked around the
average value, thus denoting a statistical homogeneity of the nodes.
Interestingly, the model features for $\av{k} > 1$ the small diameter
observed in most real-world networks. However, any other structural
properties, including the generally high clustering coefficient observed
in real world networks, cannot be reproduced by this model.
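A quick numerical check of these statements (a sketch of our own,
assuming NumPy, SciPy and \texttt{networkx}) compares the empirical
degree distribution of $G_p(N)$ with the Poisson approximation and
confirms the vanishing clustering coefficient:
\begin{verbatim}
import numpy as np
import networkx as nx
from scipy.stats import poisson

N, k_avg = 10_000, 6.0
G = nx.gnp_random_graph(N, k_avg / N, seed=2)    # classical random graph
deg = np.array([d for _, d in G.degree()])

P_emp = np.bincount(deg, minlength=13)[:13] / N  # empirical P(k), k = 0..12
P_poi = poisson.pmf(np.arange(13), k_avg)        # Poisson approximation
print(np.abs(P_emp - P_poi).max())               # small deviation
print(nx.average_clustering(G))                  # ~ p = <k>/N, nearly zero
\end{verbatim}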
\subsubsection{Small-world networks}
The \textit{small-world model} of \citet{watts98} represents a first
attempt to obtain a network with small diameter $\av{\ell}$ and large
clustering coefficient. This model considers an ordered lattice,
such as a ring of $N$
vertices, each of which is symmetrically connected to its $2 m$
nearest neighbors. This initial configuration has large clustering
coefficient
and large average shortest path length.
Starting from it, a fraction $p$ of edges in the network are rewired, by
visiting all $m$ clockwise edges of each vertex and reconnecting them,
with probability $p$, to a randomly chosen node. In another version of
the model \cite{Monasson-1999}, a fraction $p$ of edges are added
between randomly chosen pairs of vertices. The overall effect of the
rewiring processes is to add long-range shortcuts that, even for a
small value of $p \sim N^{-1}$, greatly reduce the average shortest path
length, while preserving a large clustering for not very large values of
$p$. This model, although better suited for social networks with high
clustering coefficient, has a degree distribution and centrality
measures decaying exponentially fast away from the average value. The
small-world model thus generates homogeneous networks where the average
of each metric is a typical value shared, with little variations, by all
nodes of the network.
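The small-world effect can be observed directly (a sketch of our own,
assuming \texttt{networkx}) by comparing the ordered ring lattice with
its rewired counterpart:
\begin{verbatim}
import networkx as nx

N, m, p = 1000, 3, 0.05   # ring of N nodes, 2m neighbors each, rewiring p
ring = nx.watts_strogatz_graph(N, 2 * m, 0.0, seed=4)        # p = 0 lattice
sw = nx.connected_watts_strogatz_graph(N, 2 * m, p, seed=4)  # with shortcuts

for label, G in (("lattice", ring), ("small world", sw)):
    print(label,
          round(nx.average_shortest_path_length(G), 1),
          round(nx.average_clustering(G), 2))
# A few shortcuts drastically reduce <l> while clustering stays large.
\end{verbatim}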
\subsubsection{Heavy-tailed networks}
Empirical evidence from different research areas has shown that many
real-world networks exhibit levels of heterogeneity not anticipated
until a few years ago. The statistical distributions characterizing
heterogeneous networks are generally skewed, and varying over several
orders of magnitude. Thus, real-world networks are structured in a
hierarchy of nodes with a few nodes having very large connectivity (the
hubs), while the vast majority of nodes have much smaller degrees. More
precisely, in contrast with regular lattices and homogeneous graphs
characterized by a typical degree $k$ close to the average $\av{k}$,
heterogeneous networks exhibit heavy-tailed degree distributions often
approximated by a power-law behavior of the form $P(k)\sim k^{-\gamma}$,
which implies a non-negligible probability of finding vertices with very
large degree. The degree exponent $\gamma$ of many real-world networks
takes a value between $2$ and $3$. In such cases networks are called
\textit{scale-free}, since the second moment of the degree distribution
diverges in the infinite network size limit ($N\to\infty$). It is
understood that in real-world networks the finite size $N$ and the
presence of biological, cognitive and physical constraints impose an
upper limit to the second degree moment. However, the second moment of
the distribution is in many cases overwhelmingly large, reflecting
enormous connectivity fluctuations. The presence of large-scale
fluctuations associated with heavy-tailed distributions is often true
not only for the degree of nodes but it is also observed for the
intensity carried by the connecting links, transport flows, and other
basic quantities.
Several variations of the classical random graph model have been
proposed in order to generate networks with a power-law degree
distribution. One variation, the so-called \textit{configuration model}
\cite{benderoriginal,molloy95}, considers a random network with a fixed
degree distribution, instead of the fixed average degree of classical
random graphs. Its construction is as follows: To each of the vertices,
we assign a degree $k_i$, given by a random number selected from the
probability distribution $P(k)$, subject to the conditions
$m \leq k_i \leq N$, where $m$ is the desired minimum degree, and such
that $\sum_i k_i$ is an even number. The actual graph is constructed by
randomly connecting the nodes with $\sum_i k_i /2 $ edges, preserving
the degree originally assigned.
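The construction just outlined can be sketched as follows (our own
illustration, assuming NumPy and \texttt{networkx}; the removal of
multiple and self-connections mentioned below is included):
\begin{verbatim}
import numpy as np
import networkx as nx

def power_law_degrees(N, gamma, m, rng):
    """Sample N degrees from P(k) ~ k^-gamma with minimum degree m."""
    k = np.arange(m, N + 1)
    pk = k**(-float(gamma))
    deg = rng.choice(k, size=N, p=pk / pk.sum())
    if deg.sum() % 2:         # the total degree must be an even number
        deg[0] += 1
    return deg

rng = np.random.default_rng(5)
deg = power_law_degrees(10_000, gamma=2.5, m=2, rng=rng)
G = nx.configuration_model(deg.tolist(), seed=5)  # random stub matching
G = nx.Graph(G)                                   # collapse multi-edges...
G.remove_edges_from(nx.selfloop_edges(G))         # ...and drop self-loops
print(G.number_of_nodes(), G.number_of_edges())
\end{verbatim}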
In finite networks, an \textit{average} maximum degree or degree
\textit{cut-off} $k_m$, known as the \textit{natural cut-off} of the
network~\cite{mariancutofss}, is often observed, scaling with the
network size as $k_m(N) \sim N^{1/(\gamma-1)}$~\cite{Cohen00}.
The original \textit{configuration model} leads for power-law
distributions with $\gamma \leq 3$ to the formation of networks with
multiple and self-connections. The additional prescription that
multiple and self-connections are removed leads to the generation of
disassortative correlations~\cite{maslovcorr,PhysRevE.68.026112}.
These correlations are avoided in the \textit{uncorrelated
configuration model}~\cite{Catanzaro05} by imposing a hard
\textit{structural cut-off} $k_m \sim N^{1/2}$.
A different modeling paradigm, namely the class of growing network
models, is based on the empirical observation that many real networks do
not have a constant number of vertices and edges, but are instead
growing entities, in which nodes and links are added over time. The
first undirected model of this kind is the Barab\'asi-Albert (BA) model
\cite{Barabasi:1999}, based on the assumption that newly added edges
will tend in general to be connected to nodes chosen via some
\textit{preferential attachment} rule. The simplest of these
preferential rules is a degree-biased rule, in which the probability to
add a connection to a vertex $i$ is some function $F(k_i)$ of its
degree. The \citet{Barabasi:1999} model, assuming the simplest, linear,
form for the preferential attachment function, is defined as follows:
(\textit{i}) The network starts with a small nucleus of $m_0$ connected
vertices; every time step a new node is added, with $m$ ($m \leq m_0$)
edges which are connected to old vertices in the network. (\textit{ii})
New edges are connected to the $i$-th node in the network with
probability equal to $F(k_i) = k_i / \sum_j k_j$. In the long time
limit, the network thus generated has a degree distribution
$P(k) \sim k^{-3}$ \cite{Barabasi:1999,mendes99}. The original growing
network model has been subject to an impressive number of variations and
extensions towards realistic growing dynamics and to accommodate for
different exponents of the degree distribution and other properties such
as high clustering and tunable degree-degree
correlations~\cite{Newman10}.
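Steps (\textit{i}) and (\textit{ii}) of the BA model admit a compact
implementation (our own sketch, assuming NumPy and $m \geq 2$); keeping
a list in which each node appears once per attached edge end makes
uniform sampling equivalent to linear preferential attachment:
\begin{verbatim}
import numpy as np

def barabasi_albert(N, m, seed=0):
    """Growth with linear preferential attachment, steps (i)-(ii)."""
    rng = np.random.default_rng(seed)
    # Nucleus: m0 = m fully connected vertices (one simple choice).
    edges = [(i, j) for i in range(m) for j in range(i + 1, m)]
    # Each node appears in `ends` once per edge end, so uniform
    # sampling from `ends` realizes F(k_i) = k_i / sum_j k_j.
    ends = [v for e in edges for v in e]
    for new in range(m, N):
        targets = set()
        while len(targets) < m:       # m distinct old vertices
            targets.add(ends[rng.integers(len(ends))])
        for t in targets:
            edges.append((new, t))
            ends.extend((new, t))
    return edges

deg = np.bincount(np.asarray(barabasi_albert(20_000, m=3)).ravel())
print(deg.mean(), deg.max())   # <k> ~ 2m, with hubs k >> <k>
\end{verbatim}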
\subsection{Static versus dynamic networks}
\label{sec:static-vs-dynamic}
So far, we have assumed that the topology defining the network is
\textit{static}: the sets of nodes and links do not change over time.
However, many real
networks are far from static, their links being created, destroyed and
rewired at some intrinsic time scales. In some of these dynamical
networks, such as the Internet \cite{romuvespibook}, the time
scale of the network evolution is quite slow. A static network provides a good approximation when the
properties of dynamical processes evolve at a much faster time
scale than topological changes. The opposite limit defines the so-called \textit{annealed
networks}
\cite{gil05:_optim_disor,stauffer_annealed2005,PhysRevE.76.046111,Boguna09},
which describe the case when the evolution of the network is much
faster than the dynamical processes. In this limit, the
dynamical process unfolds on a network that is rapidly rewiring so that
the dynamics effectively occurs on an average network in which each
connection is possible according to a specific probability that
depends on the degree distribution $P(k)$ and the two-node degree
correlations $P(k'|k)$. An annealed network is thus described by a
mean-field version of the adjacency matrix that will be presented in
Section~\ref{sec:theoreticalmethods}.
The two above limits are relevant in the definition of the
approximations and the limits of applicability of the most commonly used
theoretical approaches to epidemic spreading in networks. There are,
however, several other instances of networks, such as in social systems,
where the connectivity pattern varies over time scales comparable to
those of the dynamical processes on top of it, and it is crucial to take
explicitly into account the concurrent dynamics of the spreading process
and the connectivity pattern. The effect on epidemic spreading of the
dynamical nature of such \textit{temporal} \cite{Holme:2011fk} networks
is discussed in Section~\ref{sec:epid-proc-temporal-nets}.
Finally,
co-evolution of the network and the dynamical process occurs
when the topological structure of a network \textit{reacts}
dynamically to the evolution of a dynamical process taking place on
top of it. Indeed, individual social activity can be altered by the
presence of an epidemic outbreak (e.g. avoiding contacts that amount
to link deletion), thus affecting the topology of the underlying social
network, which in turn feeds back nontrivially on the spreading
dynamics. The coupling of topology with disease evolution in such
\textit{coevolving} networks is discussed in Section~\ref{sec:6.C}.
\section{Theoretical approaches for epidemic modeling on networks}
\label{sec:theoreticalmethods}
A continuous-time epidemic process with constant transition rates
between compartments on any graph can be described by Markov theory.
Let us consider a network
defined by its adjacency matrix $A$ and a general epidemic process with
$q$ compartments. The state of node $i$ at time $t$ is specified by a
random variable $X_{i}\left( t\right) \in\{0,1, \ldots, q-1\}$, where
$X_{i}\left( t\right) = \alpha$ means that node $i$ belongs to
compartment $\alpha$ at time $t$. We assume that all transitions
between compartments are given by independent Poisson processes with
given rates. Under these conditions, the evolution of the epidemic
process can be described in terms of a Markov chain
\cite{vankampen,PVM_PAComplexNetsCUP}. In a network with $N$ nodes, the
total number of states equals $q^{N}$, all possible combinations in
which all $N$ nodes can take a value from $0$ to $q-1$. The elements of
the $q^{N} \times q^{N}$ infinitesimal generator $Q$ of the
continuous-time Markov chain are explicitly computed for $q=2$ in
\citet{PVM_ToN_VirusSpread,PVM_EpsilonSIS_PRE2012,Simon_Taylor_Kiss_MathBiol2011},
while the general case is treated in \citet{PVM_GEMF}. Once the
infinitesimal generator $Q$ and the initial infection probabilities are
known, the state probabilities
$\Pr\left[ X_{1}\left( t\right) =x_{1},\ldots,X_{N}\left( t\right)
=x_{N}\right] $
at time $t$, for each $x_{j}=0, 1, \ldots, q-1$, can be computed using
ordinary matrix operations, from which all desired information can be
deduced in principle.
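For a very small graph the generator can be written down explicitly.
The sketch below (our own, assuming NumPy and SciPy) builds the
$2^N \times 2^N$ matrix $Q$ for the SIS model ($q=2$) on a triangle
and propagates the state probabilities with the matrix exponential:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])  # triangle graph
N, beta, mu = 3, 0.6, 1.0
S = 1 << N                                   # 2^N Markov states
Q = np.zeros((S, S))
for s in range(S):
    x = [(s >> i) & 1 for i in range(N)]     # x[i] = 1: node i infected
    for i in range(N):
        if x[i]:                             # recovery I -> S at rate mu
            Q[s, s ^ (1 << i)] += mu
        else:                                # infection by each sick neighbor
            Q[s, s ^ (1 << i)] += beta * sum(A[i, j] * x[j] for j in range(N))
    Q[s, s] = -Q[s].sum()                    # rows of Q sum to zero

p0 = np.zeros(S); p0[0b001] = 1.0            # node 0 initially infected
pt = p0 @ expm(Q * 5.0)                      # state probabilities at t = 5
print(pt[0])   # probability of the absorbing, overall-healthy state
\end{verbatim}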
Although the Markov approach is exact, its use has been limited to a
few exact results in the case of the SIS model. Indeed, using an
exact Markov approach is impractical for a number of reasons. First,
the linear set of $q^{N}\times q^{N}$ equations to be solved limits
the analysis to very small graphs. Second, the structure of the
infinitesimal generator $Q$ is rather complex, which prevents one from
gaining general insights, although it is possible
\citep{PVM_EpsilonSIS_PRE2012} to deduce a recursion relation between
the $Q$ matrix in a graph with $N$ and $N+1$ nodes. Third, in most
cases, we are interested in the steady-state (or stationary) behavior
or in the final size of the epidemic. The peculiar property of the
exact continuous-time Markov process is the appearance of an absorbing
state, which is equal to the overall-healthy state ($x_{j}=0$ for each
node $j$) in which the activity (virus, information spreading etc.)
has disappeared from the network. Mathematically, an absorbing state
means that the $Q$ matrix has a row of zero elements, the Markov chain
is reducible and the steady-state is equal to this overall-healthy
state for finite $N$. These complications mean that only a
time-dependent analysis, focusing on metastable states, may answer
questions of practical interest.
More generally, few exact results have been derived for epidemic
spreading in networks. For this reason, the derivation of explicit
results on the behavior of epidemic
spreading processes in networks mostly relies on mean-field
theoretical approaches of different kind. In the following we review these
approaches, and discuss the different approximations and assumptions on which they are based.
The detailed applications of these approaches to the paradigmatic cases of the SIS
and SIR models will be presented in
Section~\ref{sec:epid-proc-heter}.
\subsection{Individual-based mean-field approach}
\label{sec:quenched-mean-field}
Individual-based mean-field theory (IBMF) represents a drastic
simplification of the exact description presented above. The basic
idea \cite{Wang03,Chakrabarti_2008,PVM_ToN_VirusSpread,Gomez10} is to
write down evolution equations for the probability $\rho^\alpha_i$
that the node $i$ belongs to the compartment $\alpha$, for any node $i$,
assuming that the dynamic state of every node is statistically independent
of the state of its nearest neighbors.
The mean-field equations can be obtained, under
this assumption, by applying an extended version of the law of mass
action, i.e. assuming that the probability that node $i$ is in state
$\alpha$ and its neighbor node $j$ in state $\alpha'$ is
$\rho_i^\alpha \rho_j^{\alpha'}$. More systematically, they can be
obtained directly from the governing equations derived from the
$q^N$-state Markov chain, assuming that the expected values
of pairs of variables factorize: $E[X_i X_j] = E[X_i] E[X_j]$.
This method is akin to the classic assumption of the
mean-field theory, while keeping the full topological structure of the
network encoded in all the entries of the adjacency matrix $a_{ij}$,
that it is considered to be static or \textit{quenched}, using the
language of mean-field theory in statistical mechanics.
The solutions of IBMF theories depend in general on the spectral
properties of the adjacency matrix, and in particular on the value of
its largest eigenvalue $\Lambda_1$. Their predictions are generally in
agreement with numerical simulation results obtained for static
networks. As is well known from the theory of critical phenomena, the
agreement tends to decrease when the densities $\rho_i^\alpha \to 0$ and the
independence assumption breaks down.
Individual-based mean-field approximations can be extended by using
pair-approximation approaches \cite{PhysRevA.45.8358}, in which the
expectations $E[X_i X_j]$ are
considered as relevant dynamical quantities, for which
evolution equations are written. In order to provide these equations
in closed form, the three-point correlation functions $E[X_i X_j
X_m]$ are factorized as a function of the one- and two-point
correlation functions. By the same token it is possible to derive
exact equations for the correlation functions up to $n$
points~\citep{PVM_upperbound_SIS_epidemic_threshold}. An
approximation is, however, always required to close the set of equations
by expressing $(n+1)$-point correlations as functions of correlations
of lower order. As the order $n$ grows, these approximations are
characterized in general by increasing levels of accuracy.
Although the IBMF method can be generalized to time-dependent adjacency matrices and adaptive models, explicit solutions
have been obtained mainly for the SIS models on static
networks.
\subsection{Degree-based mean-field approach}
\label{sec:heter-mean-field-1}
Degree-based mean field (DBMF) theory was the first theoretical
approach proposed for the analysis of general dynamical processes on
complex networks, and its popularity is due to its applicability to a
wide range of dynamical processes on
networks~\cite{dorogovtsev07:_critic_phenom,barratbook}. The DBMF
approximation for dynamical processes on networks starts with the
assumption that all nodes of degree $k$ are statistically
equivalent. This assumption implies that, instead of working with
quantities $\Phi_i$ specifying the state of vertex $i$ (as in IBMF
theory), the relevant variables are quantities $\Phi_k$ specifying the
state of all vertices with degree $k$, the \textit{degree class}
$k$~\cite{marian1}. The assumption also implies that any given vertex
of degree $k$ is connected with the same probability $P(k'|k)$ to any
node of degree $k'$. The approach is thus a convenient technique that
achieves a drastic reduction in the number of degrees of freedom of the
system.
DBMF theory for epidemic models focuses on
the partial densities of individuals of degree $k$ in the compartment $\alpha$,
$\rho^\alpha_k(t)$, or, in other words, the probability that an
individual in the population with degree $k$ is in the compartment
$\alpha$. These variables are not independent, but fulfill the
condition $\sum_\alpha \rho^\alpha_k(t) =1$. The total fraction of individuals in the compartment $\alpha$ is
$\rho^\alpha(t) = \sum_k P(k) \rho_k^\alpha(t)$. The explicit rate equations
for the quantities $\rho^\alpha_k(t)$ are obtained by using the law of mass action
and assuming the independence of the expectation
values (see Section~\ref{sec:class-results}).
The DBMF theory implicitly contains an approximation that is not
always clearly stated. The statistical equivalence within degree classes
treats the network itself in a mean-field perspective, in which the
detailed structure of the adjacency matrix $a_{ij}$ is discarded,
preserving only the degree of each node and the two-vertex degree correlations.
This is equivalent to replacing the adjacency matrix in the IBMF
theory by its ensemble average $\bar{a}_{ij}$,
expressing the probability that vertices $i$
and $j$ are connected (\textit{annealed network approximation}),
taking the form \cite{dorogovtsev07:_critic_phenom,Boguna09}
\begin{equation}
\bar{a}_{ij} = \frac{k_j P(k_i|k_j)}{N P(k_i)}.
\label{eq:annealedadjacencymatrix}
\end{equation}
In the case of uncorrelated networks, the simple form $\bar{a}_{ij} =
k_i k_j /(N \av{k})$ is obtained.
The solutions obtained from DBMF theories depend in general on the
statistical topological properties of the underlying networks, and in
the case of uncorrelated networks, on the moments of its degree
distribution. Although the DBMF theory is obviously a strong
approximation in the case of dynamical processes occurring on static
networks, it appears to be a suitable approximation to capture the
behavior of epidemics mediated by interaction patterns changing on a
time scale much faster than the timescales of the spreading
process. In this limit, we can consider the epidemic process to spread
on a network that is constantly rewired, while preserving the given
functional form for $P(k)$ and $P(k'|k)$. This process amounts to a
contagion process spreading on an effective mean-field network
specified by the \textit{annealed network approximation}. Furthermore,
the DBMF provides a good description of a wide range of dynamical
processes that include complex compartment transitions, multiple
occupancy of nodes and time-varying connectivity patterns.
\subsection{Generating function approach}
\label{sec:mapping-percolation}
For the SIR model and similar models without a steady state, the long-time
(static) properties of the epidemic outbreak can be mapped into a
suitable bond percolation problem (see Section~\ref{sec:2.C}). In this
framework, the probability $p$ that a link exists is related to the
probability of transmission of the disease from an infected node to a
connected susceptible one.
The problem of percolation in networks
\cite{molloy95,Cohen00,Callaway2000} can be elegantly tackled with
generating functions~\cite{Wilf:2006:GEN:1204575}.
Let us consider the case of bond percolation, in which
edges in a network are removed with probability $1-p$ and kept with
probability $p$ (see Section~\ref{sec:2.C}). Let us define $u$ as the
probability that a randomly chosen edge \textit{does not} lead to a
vertex connected to the (possibly existing) giant component. A randomly
chosen edge does not lead to the giant component if either it has
been removed (probability $1-p$), or if it has been kept (probability
$p$) but leads to a vertex of degree $k$ whose remaining
$k-1$ edges, in turn, do not lead to the giant component,
i.e.:
\begin{equation}
u = 1-p + p \sum_{k} \frac{k P(k)}{\av{k}} u^{k-1}.
\label{eq:perco1}
\end{equation}
This equation is valid for degree uncorrelated networks which have no
loops\footnote{The formalism can be extended to degree correlated
networks, see Section~\ref{sec:effects-degr-corr} and
\citet{PhysRevE.78.051105}.}, in which a randomly chosen edge points
to a vertex of degree $k$ with probability $k P(k)/\av{k}$, see
Section~\ref{sec:degree-correlations}. The probability $1-P_G$ that a
randomly chosen vertex does not belong to the giant component is
obtained by averaging over its degree $k$ the probability that none of
its $k$ edges leads to the giant component, i.e.
\begin{equation}
P_G(p) = 1 - \sum_k P(k)\, u^k.
\label{eq:perco2}
\end{equation}
Eqs~(\ref{eq:perco1}) and~(\ref{eq:perco2}) can be conveniently
written in terms of the degree distribution generating function
\cite{Wilf:2006:GEN:1204575} $G_0(z) = \sum_k P(k) z^k$ and the excess
degree generating function $G_1(z) = \sum_k (k+1)P(k+1) z^k/\av{k}$,
taking the form
\begin{eqnarray}
u & = & 1 - p + p\, G_1(u) \label{eq:perco3}\\
P_G(p) & = & 1 - G_0(u).
\end{eqnarray}
The condition for the existence of a giant component translates into
the condition for the existence of a solution $u<1$ of
Eq.~(\ref{eq:perco3}), which is \cite{Callaway2000}
\begin{equation}
p > p_c = \frac{G_0'(1)}{G_0''(1)}= \frac{\langle k \rangle}{\av{k^2}-\langle k \rangle}.
\label{eq:percothreshold}
\end{equation}
In the vicinity of the critical point, the expansion of the generating
functions around $u=1$ yields the scaling behavior of the
order parameter, $P_G(p) \sim (p-p_c)^{\beta_{perc}}$, with
$\beta_{perc}=1$ in the case of homogeneous networks. In the case of
heterogeneous networks with degree distribution $P(k)\sim k^{-\gamma}$,
one finds the surprising result that the percolation threshold tends to
zero for $\gamma<3$ in the limit of an infinite network size, $N\to\infty$
~\cite{Cohen02}. The critical exponent $\beta_{perc}$ assumes in this
class of networks the following values~\cite{Cohen02}
\begin{equation}
\beta_{perc} = \left\{\begin{array}{ll}
1/(3-\gamma) & \mathrm{for} \; \gamma < 3\\
1/(\gamma-3) & \mathrm{for} \; 3< \gamma \leq 4\\
1 & \mathrm{for} \; \gamma \geq 4\\
\end{array} \right. . \label{eq:SIRbetaexponent}
\end{equation}
For the case $\gamma=3$, a stretched exponential form $P_G(p) \sim
e^{-1/p}$ is expected, based on the mapping to the SIR model, see
Sec.~\ref{sec:heter-mean-field}.
The above expressions are very general, and can be used also to study
immunization strategies and other containment measures in the case of
SIR-like models. See also~\citet{Karrer2014,Hamilton2014} for very recent
further improvements on these results.
\section{Epidemic processes in heterogeneous networks}
\label{sec:epid-proc-heter}
\subsection{Susceptible-Infected-Susceptible model}
\label{sec:susc-infect-susc}
An impressive research effort has been devoted to understanding
the effects of complex network topologies on the SIS model. The SIS
dynamics involves only two-state variables and may reach a stationary
state, making it ideal for the application of several theoretical
approaches. For this reason, there are a large number of results
concerning the SIS model, obtained with approaches ranging from
approximate mean-field theories to exact methods. In the following, we
will follow a historical perspective that starts with the basic and
easily generalizable mean-field approaches and moves then to recent
exact results that put our understanding of the SIS model in complex
networks on firm theoretical ground.
\subsubsection{Degree-based mean-field theory}
\label{sec:heter-mean-field-2}
The first approach to the study of the SIS model in complex networks
\cite{pv01a} used a degree-based mean-field (DBMF) theory (commonly
referred to in the physics literature as the heterogeneous mean-field
approach), whose general methodology can be extended to a wealth of
dynamical processes in networks~\cite{barratbook}. In the DBMF
approach, the SIS model is described in terms of the probability
$\rho^I_k(t)$ that a node of degree $k$ is infected at time $t$,
assuming the statistical equivalence of all nodes of degree $k$. The
SIS dynamical equation for $\rho^I_k(t)$ is derived by applying the law
of mass action,
\begin{equation}
\frac{d \rho^I_k(t) }{dt}
= - \rho^I_k(t)+ \lambda k [1- \rho^I_k(t)]\sum_{k'} P(k'|k)
\rho^I_{k'}(t),
\label{eq:HMFSISequation}
\end{equation}
where, without loss of generality, we have
rescaled time by $\mu^{-1}$, so that the recovery rate is unitary and the infection rate is equivalent to the spreading rate $\lambda=\beta/\mu$.
The first term accounts for the recovery of nodes
of degree $k$, proportional to the probability $\rho^I_k(t)$ that a
node of degree $k$ is infected.
The second term accounts for the infection of new nodes, and is proportional
to the probability that a node of degree $k$ is susceptible,
$1-\rho^I_k(t)$, times the probability $P(k'|k)$ that this node is
connected to a node of degree $k'$,
multiplied by the probability $\rho^I_{k'}(t)$ that this
last node is infected, times the rate of infection $\lambda$. This
factor is summed over all the possible values of $k'$. The extra
factor $k$ takes into account all the possible edges through which the
disease can arrive at a node of degree $k$.
The set of Eqs.~(\ref{eq:HMFSISequation}) for the DBMF approximation to
the SIS model cannot be solved in a closed form for general
degree correlations. The value of the epidemic threshold can however
be obtained by means of a linear stability analysis~\cite{marian1}.
Performing an expansion of
Eq.~(\ref{eq:HMFSISequation}) at first order in $\rho^I_k(t)$ leads to
\begin{equation}
\frac{d \rho^I_k(t) }{dt} \simeq \sum_{k'} J_{k k'} \rho^I_{k'}(t),
\end{equation}
where the Jacobian matrix element is
$J_{kk'} = - \delta_{kk'} + \lambda k P(k'|k)$ and
where $ \delta_{ij}$ is the Kronecker delta symbol.
A null steady state, corresponding to the healthy phase, is stable
when the largest eigenvalue of the Jacobian is negative. The endemic
phase will thus take place when $-1 + \lambda \Lambda_M >0$,
where $ \Lambda_M$ is the largest eigenvalue of the
\textit{connectivity matrix} \cite{marian1}, whose elements are
\begin{equation}
C_{kk'} = k P(k'|k).
\label{eq:SISconnectivitymatrix}
\end{equation}
From the Perron-Frobenius Theorem \cite{Gantmacher}, since $C$ is
non-negative, and assuming that it is irreducible, its largest
eigenvalue is real and positive. Therefore, the endemic state occurs
for
\begin{equation}
\lambda > \lambda_c^\mathrm{DBMF} = \frac{1}{\Lambda_M}.
\label{eq:HMSSISthreshold}
\end{equation}
In the case of uncorrelated networks, in which $ P(k'|k) = k' P(k')/\av{k}$,
it is possible to obtain an explicit solution of the DBMF equations by writing
\begin{equation}
\frac{d \rho^I_k(t) }{dt} = - \rho^I_k(t)+ \lambda k [1-
\rho^I_k(t)] \Theta,
\end{equation}
where
\begin{equation}
\Theta =
\sum_{k'} \frac{k'
P(k')}{\av{k}} \rho^I_{k'}(t).
\label{eq:HMFSISDeftheta}
\end{equation}
The latter expression gives the probability of finding an infected node
by following a randomly chosen edge.
In the steady state, imposing the stationarity
condition $ \frac{d \rho^I_k(t) }{dt} = 0$, we obtain
\begin{equation}
\rho^I_{k} = \frac{ \lambda k \Theta (\lambda)}{1 + \lambda k
\Theta (\lambda)},
\label{eq:HMFSISStationary}
\end{equation}
where $\Theta$ is now a constant that depends on the spreading rate $\lambda$.
The set of Eqs.~(\ref{eq:HMFSISStationary}) shows that the
higher the degree of a node, the higher its infection probability, indicating that strongly
inhomogeneous connectivity patterns impact the epidemic spreading. The factor $\Theta(\lambda)$
can be computed self-consistently, introducing
(\ref{eq:HMFSISStationary}) into the definition
Eq.~(\ref{eq:HMFSISDeftheta}), to obtain
\begin{equation}
\Theta (\lambda) = \frac{1}{\av{k}} \sum_{k} k P(k) \frac{ \lambda
k \Theta (\lambda)}{1 + \lambda k
\Theta (\lambda)}.
\label{eq:DefTheta}
\end{equation}
The self-consistent equation (\ref{eq:DefTheta}) admits a non-zero
solution, corresponding to the endemic state, only when the following
threshold condition for uncorrelated networks is fulfilled~\cite{pv01a}
\begin{equation}
\lambda > \lambda_c^\mathrm{DBMF, unc} = \frac{\av{k}}{\av{k^2}}.
\label{eq:HMSSISthresholdunc}
\end{equation}
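The self-consistent Eq.~(\ref{eq:DefTheta}) is readily solved by direct
iteration. The following Python sketch (our own illustration; the truncated
power-law degree distribution and its cutoffs are arbitrary choices)
computes the threshold of Eq.~(\ref{eq:HMSSISthresholdunc}) and the
stationary prevalence from Eq.~(\ref{eq:HMFSISStationary}).
\begin{verbatim}
# Sketch: stationary DBMF prevalence from the self-consistent Eq. (DefTheta).
# The power-law degree distribution and its cutoffs are arbitrary choices.
import numpy as np

kmin, kmax, gamma = 3, 1000, 2.5
k = np.arange(kmin, kmax + 1, dtype=float)
Pk = k**(-gamma); Pk /= Pk.sum()
k1 = (k * Pk).sum()
lam_c = k1 / (k**2 * Pk).sum()                  # Eq. (HMSSISthresholdunc)

def prevalence(lam, n_iter=20000):
    th = 0.5                                    # iterate Eq. (DefTheta)
    for _ in range(n_iter):
        th = np.sum(k * Pk * lam * k * th / (1 + lam * k * th)) / k1
    return np.sum(Pk * lam * k * th / (1 + lam * k * th))

print("lambda_c =", lam_c)
for lam in [0.5 * lam_c, 2 * lam_c, 5 * lam_c]:
    print("lambda/lambda_c =", lam / lam_c, " rho =", prevalence(lam))
\end{verbatim}
Below threshold the iteration collapses onto $\Theta=0$, while above it a
nonzero prevalence is obtained, illustrating the transition.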
The uncorrelated threshold can also be obtained from the general
expression Eq.~(\ref{eq:HMSSISthreshold}) by noticing that the
elements of the connectivity matrix reduce to $C_{kk'}
= k k'P(k')/\av{k}$, which has a unique non-zero eigenvector with
eigenvalue $\av{k^2}/\av{k}$. For a fully homogeneous (regular) network with
$\av{k^2} = \av{k}^2$, Eq.~\eqref{eq:HMSSISthresholdunc} recovers the
result $\lambda_c^\mathrm{DBMF} = 1 / \av{k}$, as expected from the
simple arguments from Section~\ref{sec:class-results} (see
Eq.~\eqref{homogeneousR0}).
Eq.~(\ref{eq:HMSSISthresholdunc}) implies that, in networks with a
power-law degree distribution with exponent $2<\gamma \leq3$, for
which $\av{k^2} \to \infty$ in the limit of a network of infinite
size, the epidemic threshold tends asymptotically to
\textit{zero}. This was one of the first results pointing out the crucial effect of degree heterogeneities on epidemic
spreading. The critical behavior of the prevalence in the vicinity of
the epidemic threshold can be obtained by solving
Eq.~(\ref{eq:DefTheta}) for $\Theta$ in the continuous degree
approximation and introducing the result into the definition
$\rho^I(\lambda) = \sum_k P(k) \rho^I_k$. From these manipulations,
one obtains \cite{Pastor01b} $\rho^I(\lambda) \sim (\lambda -
\lambda_c^\mathrm{DBMF})^{\beta^\mathrm{DBMF}_\mathrm{SIS}}$, with the
critical exponent
\begin{equation}
\beta^\mathrm{DBMF}_\mathrm{SIS} = \left\{\begin{array}{ll}
1/(3-\gamma) & \mathrm{for} \; \gamma < 3\\
1/(\gamma-3) & \mathrm{for} \; 3< \gamma \leq 4 \\
1 & \mathrm{for} \; \gamma \geq 4\\
\end{array} \right. . \label{eq:DBMFSISbetaexponent}
\end{equation}
For the case $\gamma=3$, a prevalence following a stretched exponential
form is obtained, namely $\rho^I(\lambda) \sim e^{-1/(m \lambda)}$
\cite{pv01a}. Noticeably, these exponents take exactly the same form as
those observed for the percolation problem,
Eq.~(\ref{eq:SIRbetaexponent}). It is interesting to note that for
$2<\gamma \leq3$ the exponent governing the prevalence behavior close to
the threshold is larger than one. As noted in~\citet{pv01a}, this
implies that, while the vanishing threshold makes the spreading of
pathogens easier, the very slow growth of the epidemic activity for
increasing spreading rates makes epidemics in these networks less
threatening.
\subsubsection{Individual-based mean-field theory}
\label{sec:quenched-mean-field-1}
As introduced in Section~\ref{sec:theoreticalmethods}, the state of
the system in the SIS model is fully defined by a set of Bernoulli
random variables $X_{i}\left( t\right) \in\{0,1\}$: $X_{i}\left(
t\right) =0$ for a healthy, susceptible node and $X_{i}\left(
t\right) =1$ for an infected node. It is possible to
construct a $2^N$-state Markov chain
\cite{PVM_ToN_VirusSpread,PVM_EpsilonSIS_PRE2012,Simon_Taylor_Kiss_MathBiol2011},
specifying exactly the time evolution of the SIS model. While exact,
as mentioned above, the Markov chain approach complicates
analytical calculations. A simpler route to derive rigorous
results on the SIS model is to use the property of a Bernoulli random variable $X_{i}$ that the expectation
$E\left[ X_{i}\right]$ is equal to the probability that
node $i$ is infected, i.e. $E\left[ X_{i}\right] =\Pr\left[
X_{i}=1\right] \equiv \rho^I_i(t)$. This allows one to write the
exact equations for the expectation of being infected for each
node $i$ of the SIS model
\citep{PVM_upperbound_SIS_epidemic_threshold,PVM_PAComplexNetsCUP},
\begin{equation}
\frac{dE\left[ X_{i}\left( t\right) \right] }{dt} =E\left[ -\mu
X_{i}\left( t\right) +\left( 1-X_{i}\left( t\right) \right) \beta
\sum_{j=1}^{N}a_{ij}X_{j}\left( t\right) \right]
\label{governing_eq_SIS}
\end{equation}
Eq.~(\ref{governing_eq_SIS}) holds also for asymmetric adjacency
matrices, i.e. for both directed and undirected networks and for
time-varying networks where the adjacency matrix $A(t)$ depends on
time $t$~\citep{Guo2013}. The SIS governing equation
(\ref{governing_eq_SIS}) states that the change over time of the
probability of infection $E\left[ X_{i}\left( t\right) \right]
=\Pr\left[ X_{i}\left( t\right) =1\right] $ of node $i$ equals the
average of two competing random variables: (a) if the node $i$ is
infected ($X_{i} = 1$), then $\frac{dE\left[ X_{i}\right] }{dt}$ decreases
with rate equal to the curing rate $\mu$ and (b) if the node is
healthy ($X_{i} = 0$), it can be infected with infection rate $\beta$
from each infected neighbor. The total number of infected neighbors of
node $i$ is $\sum_{j=1}^{N}a_{ij}X_{j}$.
For a static network, Eq.~(\ref{governing_eq_SIS}) reduces to
\cite{Sharkey2011,PhysRevE.87.042815,PVM_PAComplexNetsCUP}
\begin{eqnarray}
\frac{d \rho^I_i(t) }{dt}&=&-
\rho^I_i(t) +\lambda\sum_{j=1}^{N}a_{ij} \rho^I_j(t)
\nonumber\\
&&-\lambda\sum_{j=1}^{N}a_{ij}E\left[
X_{i}\left(t \right) X_{j}\left(t \right) \right],
\label{dE[X_i]_met_joint_probabilities}
\end{eqnarray}
where $t$ has been rescaled by $1/\mu$ and $\lambda=\beta/\mu$.
The above equations do not lend themselves to an explicit solution because the equation
for $\rho^I_i(t)$ depends on the two-node expectation $E\left[
X_{i}\left( t\right) X_{j}\left( t\right) \right]$. Its exact
computation requires the knowledge of the joint
probability distribution $\Pr\left[ X_{i}=1, X_{j}=1\right]$ for the
state of nodes $i$ and $j$. In order to derive a closed set of $N$
dynamical equations, the
Individual-Based Mean-Field (IBMF) approximation is usually made [also termed Quenched
Mean-Field (QMF) or N-Intertwined Mean-Field Approximation (NIMFA)],
which assumes that
the states of neighboring nodes are statistically~\textit{independent}, i.e.
\begin{equation}
E\left[ X_{i}\left( t\right) X_{j}\left( t\right) \right]
\equiv E\left[ X_{i}\left( t\right) \right] E\left[ X_{j}\left(
t\right) \right] = \rho^I_i(t) \rho^I_j (t)
\label{independence_assumption_NIMFA}
\end{equation}
Under this approximation the dynamical
equations~(\ref{dE[X_i]_met_joint_probabilities}) for the SIS model
become~\cite{hethcote1984gonorrhea,Wang03,Chakrabarti_2008,PVM_ToN_VirusSpread}
\begin{equation}
\frac{d \rho^I_i(t) }{dt}=-
\rho^I_i(t) +\lambda [1-\rho^I_i(t)] \sum_{j=1}^{N}a_{ij}
\rho^I_j(t).
\label{eq:IBMFSISequations}
\end{equation}
The physical interpretation is immediate: the change in the probability
$\rho^I_i$ has a destruction term, equal to the probability that node $i$ is
infected times the rate of recovery $\mu=1$, and a creation term, equal
to the probability that node $i$ is susceptible, times the total probability
that any of its nearest neighbors is infected, times the effective
transmission rate $\lambda=\beta/\mu$. Again, time has been rescaled in
Eq.~(\ref{eq:IBMFSISequations}).
Noticeably, Eq.~(\ref{eq:IBMFSISequations}) can be derived using other
approaches. For example, \citet{Gomez10} propose a discrete time
equation taking additionally into account the possibility of reinfection
in a single time step of length $\Delta t$. The equation thus obtained
leads to Eq.~(\ref{eq:IBMFSISequations}) in the continuous time limit
$\Delta t \to 0$.
To obtain a prediction of the threshold, we can apply a linear
stability analysis on
Eq.~(\ref{eq:IBMFSISequations}). Indeed, linearizing
Eq.~(\ref{eq:IBMFSISequations}) leads to the Jacobian matrix, with
elements $J_{ij} = -\delta_{ij} + \lambda a_{ij}$.
An endemic state occurs when the largest
eigenvalue of $J$ is positive. This condition translates into the epidemic
threshold
\begin{equation}
\lambda \geq \lambda^\mathrm{IBMF}_c, \quad \lambda^\mathrm{IBMF}_c =
\frac{1}{\Lambda_1},
\label{eq:IBMFthreshold}
\end{equation}
where $\Lambda_1$ is the largest eigenvalue of the adjacency matrix
\cite{Wang03,Chakrabarti_2008,PVM_ToN_VirusSpread}.
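A minimal numerical illustration of this prediction follows (our own
sketch, not code from the references): the IBMF
equations~(\ref{eq:IBMFSISequations}) are integrated by forward Euler on
an Erd\H{o}s-R\'enyi graph, and the long-time prevalence is compared with
the threshold $1/\Lambda_1$; graph size, edge probability and integration
parameters are arbitrary choices.
\begin{verbatim}
# Sketch: IBMF equations (eq:IBMFSISequations) integrated by forward Euler
# on an Erdos-Renyi graph; all parameters are arbitrary choices.
import numpy as np

rng = np.random.default_rng(1)
N, p_edge = 200, 0.05
A = np.triu((rng.random((N, N)) < p_edge).astype(float), 1)
A = A + A.T                                     # symmetric, no self-loops
lam_c = 1.0 / np.linalg.eigvalsh(A)[-1]         # IBMF threshold 1/Lambda_1

def stationary_prevalence(lam, T=100.0, dt=0.01):
    rho = np.full(N, 0.1)                       # initial infection probability
    for _ in range(int(T / dt)):
        rho += dt * (-rho + lam * (1.0 - rho) * (A @ rho))
    return rho.mean()

print("1/Lambda_1 =", lam_c)
for lam in [0.5 * lam_c, 1.5 * lam_c, 3.0 * lam_c]:
    print("lambda/lambda_c =", lam / lam_c,
          " rho =", stationary_prevalence(lam))
\end{verbatim}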
In networks with a power-law degree
distribution, $P(k) \sim k^{-\gamma}$, eq. (\ref{eq:IBMFthreshold})
can be combined with $\Lambda_1 \sim
\max\{\sqrt{k_\mathrm{max}}, \av{k^2}/\av{k} \}$ \cite{Chung03}, where
$k_\mathrm{max}$ is the maximum degree in the network, to produce an
expression for the scaling of the epidemic threshold \cite{Castellano2010,Castellano2012}
\begin{equation}
\lambda^\mathrm{IBMF}_c \simeq \left \{
\begin{array}{ll}
1/\sqrt{k_\mathrm{max}} & \mathrm{for} \; \gamma > 5/2 \\
\av{k}/\av{k^2} & \mathrm{for} \; 2< \gamma < 5/2
\end{array} \right. .
\label{together}
\end{equation}
The relevance of this result is the prediction, in the thermodynamic
limit, of a vanishing epidemic threshold for \textit{every} network for
which the maximum degree is a growing function of the network size,
which is essentially the case for all random, non-regular networks.
Although the expression for the epidemic threshold obtained from the
IBMF theory is not exact, (see~\citet{Givan2011} for a detailed
assessment of the independence assumption), it provides a relatively
good accuracy when compared with the results of extensive numerical
simulations, see Section~\ref{sec:numerical-results}.
It is worth bridging the IBMF approach with the DBMF approach presented in the
previous section.
As stated in Section~\ref{sec:theoreticalmethods}, the DBMF approach is
based on the assumption of the statistical equivalence of all nodes
with the same degree $k$, actually defining the spreading process on
an effective mean-field graph, whose adjacency matrix is given by the
annealed form $\bar{a}_{ij} = k_j P(k_i|k_j)/(N P(k_i))$.
This elucidates the connection between the IBMF and DBMF approaches.
The latter can be simply derived by substituting the annealed adjacency
matrix into Eqs.~(\ref{eq:IBMFSISequations}).
By performing a degree-based average
$\rho^I_k = \sum_{i \in k} \rho^I_i/ (NP(k))$,
the
equations~(\ref{eq:HMFSISequation}) are thus recovered from the IBMF
approach. Hence, DBMF is equivalent to IBMF with the additional
approximation that the detailed topological network structure is
replaced by its annealed version.
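This equivalence can be checked numerically in the uncorrelated case: the
annealed matrix $\bar{a}_{ij} = k_i k_j/(N \av{k})$ has rank one, and its
unique nonzero eigenvalue is $\av{k^2}/\av{k}$, so that the IBMF threshold
$1/\Lambda_1$ evaluated on $\bar{a}_{ij}$ reproduces the DBMF result
$\av{k}/\av{k^2}$. A minimal Python check (our own sketch, with an
arbitrary degree sequence):
\begin{verbatim}
# Sketch: the annealed matrix k_i k_j/(N<k>) of an uncorrelated network
# has largest eigenvalue <k^2>/<k>; the degree sequence is arbitrary.
import numpy as np

rng = np.random.default_rng(0)
N = 500
deg = rng.integers(2, 50, size=N).astype(float)
abar = np.outer(deg, deg) / (N * deg.mean())    # annealed adjacency matrix
print("largest eigenvalue of abar:", np.linalg.eigvalsh(abar)[-1])
print("<k^2>/<k>                 :", (deg**2).mean() / deg.mean())
\end{verbatim}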
Within the framework of IBMF theory, it is also possible to derive
the behavior of the prevalence $\rho^I$ in the stationary state
just above the epidemic
threshold~\cite{PVM_epidemic_phase_transition2011,Goltsev12}
\begin{equation}
\rho^I \left( \lambda \right) \simeq \frac{1}{
N}\frac{\sum_{j=1}^{N}\left( x_{1}\right) _{j}}{\sum_{j=1}^{N}\left(
x_{1}\right) _{j}^{3}} \frac{\lambda -\lambda_c}{\lambda_c}
\label{y_infty_general_behavior_just_above_epidemic_threshold}%
\end{equation}
where $\vec{x}_1$ is the principal eigenvector (PEV) corresponding
to the largest eigenvalue of the adjacency matrix.
The complete expansion of the prevalence in the stationary state around the epidemic threshold is derived in \citet{PVM_viral_conductance}.
Based on Eq.~(\ref{y_infty_general_behavior_just_above_epidemic_threshold}),
the validity of the IBMF prediction for the epidemic threshold has been
recently questioned~\cite{Goltsev12} according to the following
argument. For $\lambda_c^\mathrm{IBMF}$ to be the true epidemic
threshold, the stationary state above it must be endemic, with a
finite fraction of the network infected. This requires that for
$N\to\infty$ the prefactor
\begin{equation}
\mathcal{A} = \frac{1}{%
N}\frac{\sum_{j=1}^{N}\left( x_{1}\right) _{j}}{\sum_{j=1}^{N}\left(
x_{1}\right) _{j}^{3}}
\end{equation}
in Eq.~(\ref{y_infty_general_behavior_just_above_epidemic_threshold})
must tend to a constant of $\mathcal{O}(1)$. Whether $\mathcal{A}$ is
constant or not depends on the localization of the PEV,
i.e. whether its weight is evenly distributed (delocalized) on all
nodes of the network, or localized in a few nodes. Goltsev \textit{et
al.} apply this idea to the analysis of power-law distributed
networks, arguing by means of analytical calculations and numerical
experiments (see also \citet{2014arXiv1401.5093M}) that, for $\gamma
\leq 5/2$, the PEV is delocalized, while it is localized for $\gamma >
5/2$. This would imply that, while $\lambda_c^\mathrm{IBMF}$ always
marks a transition to an active state, this one is endemic only for
$\gamma<5/2$, corresponding to a delocalized PEV; for $\gamma>5/2$,
instead, a localized PEV indicates that the transition at
$\lambda_c^\mathrm{IBMF}$ is not to an endemic state, but to a
subendemic state, in which activity is restricted to the neighborhood
of the hubs with largest degree. Support for this argument (which is
mean-field in nature, based on
Eq.~(\ref{y_infty_general_behavior_just_above_epidemic_threshold}))
is provided in~\citet{Lee2013}, who
characterize the sub-endemic state as a Griffiths phase
(see also~\citet{PhysRevLett.111.068701}).
\subsubsection{Extensions of degree-based and individual-based mean-field
approaches}
\label{sec:extens-mean}
Several extensions of the degree-based and individual-based mean-field
theories have been proposed, taking into account the role of dynamical correlations,
which are neglected in both approaches.
A natural way to include the effect of correlations is to consider
additional variables representing the state of pairs, triples etc. of
neighboring nodes. \citet{Eames2002} introduced an extended
degree-based approach where the evolution of the average number
$\av{I^k}$ of nodes of degree $k$ in the infected state depends on the
number $\av{S^k I^l}$ of connections between susceptible nodes of degree $k$
and infected nodes of degree $l$. The dynamics
can be written in terms of the properties of triples, such as
$\av{S^k S^l I^m}$ and so forth. If averages for triples are
approximated with averages for pairs and single nodes, the dynamical
equations are reduced to a set of $O(k_\mathrm{max}^2)$ nonlinear ordinary
differential equations. This procedure can be iterated, but
the increased accuracy is counteracted by a rapid growth in the number of
equations.
Similarly, \citet{Gleeson11}, building on the results
of~\citet{Marceau2010}, proposed a general theory for binary-state
dynamics in networks. This approach takes into account explicitly
the dynamical correlations between adjacent nodes (see
also~\citet{Lindquist2011} for a similar approach). The theory is based
on a set of master equations for the quantities $s_{k,m}(t)$ and
$i_{k,m}(t)$ which, in the context of the SIS model, are defined as the
fraction of nodes of degree $k$ which are susceptible (resp. infected)
at time $t$ and are connected to $m \leq k$ infected neighbors. By means
of combinatorial arguments, these quantities can be related to the
prevalence $\rho^I_k$ of nodes of degree $k$, allowing the determination
of the prevalence and epidemic threshold. This theoretical approach
provides a good description of the time evolution of the prevalence
\cite{PhysRevX.3.021004} and good estimates of the epidemic threshold
for random regular lattices \cite{Gleeson11}. Gleeson's approach
presents again the drawback that the estimation of the threshold in more
complex networks requires the numerical solution of large sets of
coupled equations, which hinders the analysis of large network sizes.
Another degree-based approach, proposed by \citet{PhysRevLett.111.068701},
takes into account long distance
correlations by considering explicitly the possibility of reinfection
between nodes $i$ and $j$, separated by a topological distance
$\ell_{ij}$ possibly larger than one.
For this purpose, the original SIS dynamics is replaced by a modified
description valid over coarse-grained time scales. In such longer
temporal intervals, a given infected node $i$
can propagate the infection to any other node $j$ at distance $\ell_{ij}$
in the network, via a sequence of microscopic infection events of
intermediate, nearest-neighbor nodes.
The infection rate $\beta$ is then replaced by an
effective rate $\bar{\beta}(\ell_{ij}, \beta)$.
On the coarse-grained time scale also the recovery rate $\mu$ of node $i$ is
replaced by an effective rate $\bar{\mu}(k_i, \beta)$.
Both parameters
$\bar{\beta}(\ell_{ij}, \beta)$ and $\bar{\mu}(k_i, \beta)$ can be
estimated from the properties of the network and the SIS
model. Writing down a mean-field theory for such an extension of the SIS
model, upper bounds for the epidemic threshold $\lambda_c$
of the original SIS model are deduced, which are in good agreement with
numerical simulations, see Section~\ref{sec:numerical-results}.
For individual-based approaches, the consideration of dynamical
correlations can be introduced in a systematic way, by the analogue of a
cluster expansion \cite{PhysRevA.45.8358}. The exact SIS
Eqs.~(\ref{dE[X_i]_met_joint_probabilities}) are,
as discussed above, not closed, due to the presence of the term
involving dynamical correlations between pairs of adjacent nodes. One
way to proceed
consists in complementing Eq.~(\ref{dE[X_i]_met_joint_probabilities})
with an equation for the evolution of the pair correlations $E\left[
X_{i}\left( t\right) X_{k}\left( t\right) \right]$. The $\binom{N}{2}$
governing equations for
$\frac{dE\left[ X_{i}X_{j}\right] }{dt}$ for $i\neq j$ take the
form \cite{PVM_secondorder_SISmeanfield_PRE2012}
\begin{eqnarray}
\frac{dE\left[ X_{i}X_{j}\right] }{dt}
& =-2\mu E[X_{i}X_{j}]+\beta\sum_{k=1}^{N}a_{ik}E[X_{j}X_{k}]\nonumber\\
& \hspace{0.5cm}+\beta
\sum_{k=1}^{N}a_{jk}E[X_{i}X_{k}]\nonumber\\
& \hspace{0.5cm}-\beta\sum_{k=1}^{N}(a_{ik}+a_{jk})E[X_{i}X_{j}X_{k}]
\label{second_order_pair_correlation_E[XiXj]}%
\end{eqnarray}
while for $i=j$, obviously Eq.~(\ref{governing_eq_SIS}) holds.
Equations~(\ref{governing_eq_SIS})
and~(\ref{second_order_pair_correlation_E[XiXj]}) are still an exact
description of the dynamics involving now the terms
$E\left[ X_{i} X_{j} X_{k} \right] $, that in turn need to be determined,
via $\binom{N}{3}$ differential equations involving joint fourth order
expectations and so on. In summary,
the approach leads to a set of $\sum_{k=1}^{N}\binom{N} {k}=2^{N}-1$
exact equations describing the evolution of the SIS process
(to be complemented with the conservation of probability)
that form a hierarchy: the equations for the evolution of correlations of
order $n$ depend on those of order $n+1$.
To allow computations in practice, this hierarchy must be limited to some small $n$ by imposing a
closure condition for the set of equations. The simplest closure
condition, $ E[X_{i}X_{j}] = E[X_{i}]E[X_{j}]$, leads to the IBMF
approximation. Higher order closures include dynamical correlations in a
more detailed way, thus providing a more accurate description of the
system dynamics. The assumption of different closure relations leads to
different degrees of tractability of the ensuing equations. Some of
those can be proved to be exact for simple networks
\cite{2013arXiv1307.7737K}. For example, focusing on general closure
forms, \citet{PVM_secondorder_SISmeanfield_PRE2012} propose the
expression $ E[X_{i}X_{j}X_k] = E[X_{i}X_j]E[X_{k}]$. Analogously,
\citet{0295-5075-103-4-48003}, applying standard techniques from pair
approximations in statistical physics, propose the closure
\begin{equation}
E[X_{i}X_{j}X_k] = \frac{ E[X_{i}X_{j}] E[X_{j}X_k]}{E[X_{j}]}.
\label{eq:pairapproxSilvio}
\end{equation}
The particular interest of the closure (\ref{eq:pairapproxSilvio}) is that it allows deriving an explicit
expression for the epidemic threshold in terms of the largest
eigenvalue of the new Jacobian matrix of the dynamical equations
\cite{0295-5075-103-4-48003}:
\begin{equation}
J_{ij} = - \left(1+ \frac{\lambda^2 k_i}{2 \lambda+2}\right)
\delta_{ij} + \frac{\lambda(2+\lambda)}{2 \lambda+2}a_{ij}.
\label{eq:jacobianSilvio}
\end{equation}
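In practice, the threshold can be extracted from
Eq.~(\ref{eq:jacobianSilvio}) by locating the value of $\lambda$ at which
the largest eigenvalue of $J$ crosses zero. A minimal Python sketch (our
own illustration; the graph and the bracketing interval are arbitrary
choices, and we assume the largest eigenvalue crosses zero only once in
that interval):
\begin{verbatim}
# Sketch: pair-approximation threshold from Eq. (eq:jacobianSilvio), via
# bisection on the sign of the largest Jacobian eigenvalue.
import numpy as np

rng = np.random.default_rng(2)
N, p_edge = 300, 0.03
A = np.triu((rng.random((N, N)) < p_edge).astype(float), 1)
A = A + A.T
deg = A.sum(axis=1)

def max_eig_J(lam):
    J = (-np.diag(1.0 + lam**2 * deg / (2 * lam + 2))
         + lam * (2 + lam) / (2 * lam + 2) * A)
    return np.linalg.eigvalsh(J)[-1]

lo, hi = 1e-6, 5.0
for _ in range(60):                             # bisection on lambda
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if max_eig_J(mid) < 0 else (lo, mid)
print("pair-approximation threshold:", 0.5 * (lo + hi))
print("IBMF threshold 1/Lambda_1   :", 1.0 / np.linalg.eigvalsh(A)[-1])
\end{verbatim}
The pair-approximation threshold obtained in this way lies above the IBMF
value $1/\Lambda_1$, reflecting the inclusion of dynamical correlations.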
A completely different approach to determine the epidemic threshold for the
SIS model has been proposed by~\citet{Parshani2010}. The idea is to
map the SIS dynamics with fixed infection time to a percolation process,
mirroring the approach successfully used for the
SIR model (see Sec.~\ref{sec:mapping-percolation-SIR}).
In SIS dynamics, however, the mapping is approximate and one
has to take into account the reinfection
probability $\pi$, i.e. the probability that an infected node reinfects
the node from which it originally received the disease.
By estimating $\pi$ and using it in a modified percolation approach, values
of the epidemic threshold are derived, in good agreement with numerical
simulations, also for heavy-tailed degree distributions.
\subsubsection{Exact results}
\label{sec:exact-results}
Although the above mean-field approaches provide a general theoretical
picture of the behavior of the SIS model in networks,
a few exact results exist that provide rigorous
bounds for the threshold and the dynamical behavior of the model.
A first exact result concerning the lower bound of the epidemic
threshold \cite{van_mieghem_non-markovian_2013} can be achieved by revisiting
Eq.~(\ref{dE[X_i]_met_joint_probabilities}).
Since $0\leq\sum_{k=1}^{N}a_{ki}X_{i}\left( t\right) X_{k}\left(
t\right) $, it is possible to write the inequality:
\begin{equation}
\frac{d \rho^I_i(t) }{dt}\leq- \rho^I_i(t)
+\lambda \sum_{k=1}^{N}a_{ki}
\rho^I_k(t)
\label{eq:sisboundeq}
\end{equation}
Denoting the vector
$W=\left( \rho^I_{1},\rho^I_{2},\cdots,\rho^I_{N}\right) $, the solution of
the inequalities (\ref{eq:sisboundeq}) is
\begin{equation}
W\left(t \right) \leq e^{\left( \lambda A-I\right) t } W\left(0\right).
\label{upper_bound_W(t)}
\end{equation}
The exponential factor is dominated by the fastest growing mode,
which is $\lambda\Lambda_{1}-1$,
where $\Lambda_{1}$ is the largest eigenvalue of the non-negative
matrix $A$, which is real and positive, by the Perron-Frobenius
Theorem~\cite{Gantmacher}.
When $\lambda\Lambda_{1}-1\leq0$, then $W_{i}=\rho^I_i(t)$ decreases
exponentially in $t$ towards zero and the epidemic dies out fast, so that
\begin{equation}
\lambda_{c}\geq\frac{1}{\Lambda_{1}}.
\label{lowerbound_SIS_epidemic_threshold}%
\end{equation}
Interestingly, this lower bound coincides with the IBMF result.
\citet{Ganesh05} have proven that the average time $E\left[
T\right]$ for the SIS Markov process to hit the absorbing state, when
the effective infection rate $\lambda<\frac{1}{\Lambda_{1}}$, obeys
\begin{equation}
E\left[ T\right] \leq\frac{\log N+1}{1- \lambda \Lambda_{1}}
\label{mean_time_to_absorption_below_tau_c}
\end{equation}
from which Eq.~(\ref{lowerbound_SIS_epidemic_threshold}) is deduced.
Above the epidemic threshold instead, the activity must be endemic,
so that the average time to absorption is
$E\left[ T\right] = O(e^{cN})$ for some constant $c>0$.
\citet{Chatterjee_Durret2009} proved that in graphs with power-law degree
distribution $E\left[ T\right]$ grows at least as $e^{bN^{1-\delta}}$ for
any $\delta>0$. This result pointed to a vanishing threshold in the
large $N$ limit, but still left the possibility open for nonendemic
long-lived metastable states, as those predicted by~\citet{Goltsev12,Lee2013}.
This possibility has been recently ruled
out by the work of \citet{Mountford2013},
showing that for any $\lambda>0$ and large $N$, the time to absorption
on a power law graph grows exponentially in $N$, implying
that there is endemic activity for any $\lambda>0$.
For the complete graph, the exact average survival time has been
determined using the Markov theory \cite{XXXXXX}. In
particular, for the complete graph, the average survival time for all
$\lambda$ and $N$ is
\begin{equation}
E[T] = \sum_{j=1}^{N}\sum_{r=0}^{j-1}\frac{(N-j+r)!}{j(N-j)!}\lambda^{r} \label{ET_KN}
\end{equation}
whose asymptotic for large $N$ is
\[
E\left[ T\right] \sim\frac{1}{\mu}\frac{\frac{\lambda}{\lambda_{c}}\sqrt{2\pi}%
}{\left( \frac{\lambda}{\lambda_{c}}-1\right) ^{2}}\frac{\exp\left( N\left\{
\log\frac{\lambda}{\lambda_{c}}+\frac{\lambda_{c}}{\lambda}-1\right\} \right) }{\sqrt
{N}}
\]
for an effective infection rate $\lambda=\frac{\beta}{\mu}$ above the
epidemic threshold $\lambda_{c}$. Since an infection can survive the
longest in the complete graph, the maximum average lifetime (or
survival time) of an SIS epidemic in any network with $N$ nodes is not
larger than (\ref{ET_KN}), or than
$E[T]=O\left( e^{N\ln\frac{\lambda}{\lambda_{c}}}\right)$.
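Eq.~(\ref{ET_KN}) can be evaluated directly with exact integer
arithmetic. The following Python sketch (our own; we assume time is
measured in units of $1/\mu$, consistently with the rescaling used
throughout, and $N$ and $\lambda$ are arbitrary choices):
\begin{verbatim}
# Sketch: direct evaluation of the exact survival time Eq. (ET_KN) for the
# complete graph K_N; time assumed in units of 1/mu.
from math import factorial

def survival_time_KN(N, lam):
    return sum(factorial(N - j + r) / (j * factorial(N - j)) * lam**r
               for j in range(1, N + 1) for r in range(j))

N = 20                                          # lambda_c = 1/Lambda_1 = 1/19
for lam in [0.02, 1 / 19, 0.2]:
    print("lambda =", lam, " E[T] =", survival_time_KN(N, lam))
\end{verbatim}
The rapid growth of $E[T]$ once $\lambda$ exceeds $1/(N-1)$ illustrates
the exponential asymptotics quoted above.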
For power-law graphs, \citet{Chatterjee_Durret2009} provide exact bounds
for the exponent $\beta_{SIS}$ governing the singular behavior
$\rho^I \sim \lambda^{\beta_{SIS}}$ of the activity at the
transition, namely $\gamma-1 \leq \beta_{SIS} \leq 2 \gamma-3$.
This implies that the mean-field value $\beta_{SIS}=1$ does not hold
for any $\gamma>2$, as well as the failure of the DBMF
prediction, Eq.~(\ref{eq:DBMFSISbetaexponent}).
For a few special classes of simple graphs such as the complete graph
and the star, the $2^N$-state Markov chain can be reduced to a much
smaller number of states, enabling an exact solution
\cite{PVM_EpsilonSIS_PRE2012,PVM_MSIS_star_PRE2012,PhysRevE.87.042815,
PVM_decay_SIS2014}. More results can be classified as
\emph{asymptotic} exact results, where the network size
$N\rightarrow\infty$. An overview of asymptotic exact results is given
by \citet{Durrett_PNAS2010}.
\subsubsection{Numerical simulations of the SIS model on networks}
\label{sec:numerical-results}
As presented above, the different approximations of the SIS process on networks yield different results
for the numerical value of the epidemic threshold. This is
particularly important in the case of networks with a heavy-tailed
degree distribution $P(k) \sim k^{-\gamma}$, where the two main
approximations, IBMF and DBMF, lead to the same result for $\gamma <
5/2$, but to noticeable differences for $\gamma > 5/2$, especially in
the case $\gamma > 3$. In this region, while DBMF predicts a finite
threshold, IBMF indicates a vanishing one, albeit decaying at a relatively
small rate with the system size.
Computational efforts have been mostly devoted to the numerical
determination of the epidemic threshold of the SIS model on
power-law distributed networks, in order to assess the validity of the
different theoretical approaches. For a detailed study on graphs of
small size see~\cite{PVM_comparisonSIS_meanfield_PRE2012}.
The standard numerical procedure to study absorbing phase transitions,
such as the epidemic transition of SIS, is based on
the determination of the average of the order parameter (in this case
the density of infected nodes), restricted only to surviving runs
\cite{Marrobook}, i.e., runs which have not reached the absorbing
state up to a given time $t$. Such a technique is not efficient,
because close to the threshold long-time surviving configurations are very
rare and an exceedingly large number of realizations of the
process is needed in order to get substantial statistics. This
problem is particularly severe for large network sizes, for which
very long simulation times are required, due to the presence
of a long initial transient.
These issues made the standard procedure impractical, and no reliable
conclusions could be drawn until recently.
In order to overcome the restrictions of the surviving runs method,
\citet{Ferreira12, 0295-5075-103-4-48003} use the quasi-stationary state
(QS) method~\cite{DeOliveira05,FFCR11}, based on the idea of
constraining the system in an active state. This procedure is
implemented by replacing the absorbing state, every time the system
tries to visit it, with an active configuration randomly taken from the
history of the simulation (see also~\citet{PVM_EpsilonSIS_PRE2012} for
an implementation of the same idea by means of an external field). With
this technique, the threshold is estimated by studying the
susceptibility \cite{Ferreira12}, defined as
\begin{equation}
\label{eq:susceptdef}
\chi = N \frac{\fluc{\rho^I}-\av{\rho^I}^2}{\av{\rho^I}}.
\end{equation}
When plotted as a function of $\lambda$ in a system of size $N$, the
susceptibility $\chi$ exhibits a maximum at a value $\lambda_p(N)$,
corresponding to a transition rounded by finite size effects. In the
thermodynamic limit, the position of the peak tends to the critical
point as
$\lambda_p(N) - \lambda_c(\infty) \sim
N^{-1/\bar{\nu}}$~\cite{Binder2010}.
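To illustrate the logic of the QS method, the following Python sketch
implements a bare-bones version (our own, far from the optimized
implementations of \citet{Ferreira12}): whenever the absorbing state is
hit, the configuration is replaced by one stored from the simulation
history, and $\chi$ is then estimated from Eq.~(\ref{eq:susceptdef}). The
network, rates, storage rule and sampling are rough illustrative choices;
in particular, samples are taken per event rather than weighted by dwell
times, a simplification a production code would avoid.
\begin{verbatim}
# Bare-bones QS method for SIS: on hitting the absorbing state, restore a
# configuration from the stored history. All parameters are rough choices.
import random
import numpy as np
import networkx as nx

def qs_susceptibility(G, lam, t_max=500.0, t_relax=100.0, seed=0):
    rng = random.Random(seed)
    nodes = list(G.nodes())
    infected = set(rng.sample(nodes, len(nodes) // 2))
    history = [frozenset(infected)]
    t, samples = 0.0, []
    while t < t_max:
        si = [(i, j) for i in infected for j in G[i] if j not in infected]
        rate = len(infected) + lam * len(si)     # recovery rate mu = 1
        t += rng.expovariate(rate)
        if rng.random() < len(infected) / rate:
            infected.discard(rng.choice(tuple(infected)))   # recovery
        else:
            infected.add(rng.choice(si)[1])                 # infection
        if not infected:                         # QS reactivation step
            infected = set(rng.choice(history))
        elif rng.random() < 0.02 and len(history) < 100:
            history.append(frozenset(infected))
        if t > t_relax:
            samples.append(len(infected) / len(nodes))
    rho = np.array(samples)
    return len(nodes) * ((rho**2).mean() - rho.mean()**2) / rho.mean()

G = nx.barabasi_albert_graph(200, 3, seed=1)
for lam in [0.08, 0.12, 0.2]:
    print("lambda =", lam, " chi =", qs_susceptibility(G, lam))
\end{verbatim}
Scanning $\lambda$ and locating the maximum of $\chi$ yields the
size-dependent threshold estimate $\lambda_p(N)$ discussed above.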
Large scale simulations performed using the QS method~\cite{Ferreira12,
0295-5075-103-4-48003}, see Figure~\ref{fig:IBMFSis}, show that, for
$\gamma < 5/2$, the IBMF and a pair approximation at the individual level
(PQMF) are almost exact, coinciding asymptotically with the DBMF result
in this range of degree exponents. For $5/2 < \gamma<3$, on the other
hand, the IBMF result provides the correct scaling of the threshold with
network size.
\begin{figure}[t]
\includegraphics*[width=\columnwidth]{fig5.pdf}
\caption{Numerical thresholds for the SIS model as a function of the
network size $N$ in scale-free networks with degree exponent
$\gamma=2.25$, computed using the QS method, compared with different
theoretical predictions. Upper inset shows the behavior of the
susceptibility as a function of the spreading rate for different
values of $N=10^3, 10^4, 10^5, 10^6, 10^7$, from right to left. Lower
inset shows the difference between the different theoretical
thresholds and the peaks of the susceptibility. Figure adapted from
\citet{0295-5075-103-4-48003}.}
\label{fig:IBMFSis}
\end{figure}
For the crucial case $\gamma>3$, where IBMF and DBMF provide radically
different predictions, the results are not as conclusive. A new
numerical approach has been proposed to explore this region
\cite{PhysRevLett.111.068701}, based on the study of the
\textit{lifetime} of individual realizations of the SIS process
starting with a single infected node. Each realization is
characterized by duration $T$ and coverage $C$, where the latter is
the fraction of distinct nodes ever infected during the realization.
In the thermodynamic limit, realizations can be either finite
(i.e. having a finite lifetime and, therefore, vanishing coverage) or
endemic (i.e. having an infinite lifetime and coverage equal to 1).
The average lifetime $E[ T ]$ of finite realizations plays
the role of a susceptibility, exhibiting a peak at the transition,
whose position can then be used to estimate the threshold. The
nontrivial problem of determining whether, in a finite system, a
realization is endemic or not can be overcome by declaring endemic
all realizations for which the coverage reaches a predefined value
(e.g. $C=0.5$). Numerical simulations performed with this method
indicate that the extended DBMF approach by
\citet{PhysRevLett.111.068701} provides a very good fit to the
numerical threshold for $\gamma>3$, see Figure~\ref{fig:LifetimeSis},
with a scaling with network size that is essentially given by the IBMF
expression Eq.~(\ref{together}).
\begin{figure}[t]
\includegraphics*[width=\columnwidth]{fig6.pdf}
\caption{Numerical thresholds for the SIS model as a function of the
network size $N$ in power-law distributed networks with degree
exponent $\gamma=3.5$, computed from the average lifetime method
proposed by \citet{PhysRevLett.111.068701}. Numerical data are
compared with different theoretical approaches as well as with the
upper bound obtained from the DBMF theory with long range dynamical
correlations, developed by \citet{PhysRevLett.111.068701}. Figure
adapted from \citet{PhysRevLett.111.068701}.}
\label{fig:LifetimeSis}
\end{figure}
\subsubsection{Finite size effects and the epidemic threshold}
As we have seen in the previous sections, the connectivity pattern of
the network enters explicitly in the determination of the epidemic
threshold that generally depends on the moments of the degree
distribution and/or the maximum degree of the network. This finding has
particular relevance in networks with heavy-tailed degree distributions,
where the probability of nodes with very large degree is appreciable. In
the limit of infinite size networks, the epidemic threshold may be
vanishing, thus pointing to the disruption of the classical epidemic
framework in which the disease can spread only for an adequate
transmissibility of the pathogen. While mathematically compelling, the
argument of a vanishing threshold was soon recognized as not
realistic in real-world
networks~\cite{may2001infection,pastor2002fs}. Even if the connectivity
pattern of a network is well approximated by a heavy-tailed distribution
in a given range of degree values, any real-world network is composed of
a finite number of nodes $N$. For instance, the finite size of
scale-free networks is generally related to the presence of a natural
maximum degree $k_\mathrm{max}\sim N^{1/(\gamma-1)}$, as reported in
Section~\ref{sec:basic-network-models}, that translates into a finite
effective epidemic threshold. Although the finite size of the network is
often a determining element in the estimation of the epidemic threshold, for
instance in the analysis of numerical simulations (see Section
\ref{sec:numerical-results}), there are many other limitations to the
maximum degree of the network. These limits are often imposed by
spatio-temporal constraints, such as maximum occupancy in spatial
locations and the finite time each individual can interact with other
individuals. Intrinsic cognitive and biological constraints may also
be at work in real-world systems. One example is provided by the
so-called Dunbar's number that limits humans' degree to
between 100 and 200 individuals, a size apparently imposed by the finite
neocortical processing capacity of the
brain~\cite{dunbar1998social}. Interestingly, Dunbar's number has been
observed in a wide range of human activities, including communication on
modern information technologies, making it a relevant limit in the case
of many information diffusion
processes~\cite{Goncalves2011Dunbar,miritello2013time}.
In view of these inherent limitations, it is often convenient to assume
that even in the case of heavy-tailed networks the degree distribution
is characterized by the analytic form
$P(k)\simeq k^{-\gamma}\exp{(-k/k_{c})}$, where $k_c$ is a
characteristic degree size. The exponential cut-off makes it extremely
unlikely to observe nodes with degree much larger than $k_c$,
effectively introducing an intrinsic limit to the connectivity capacity
of nodes~\cite{pastor2002fs}. Within the DBMF approach this leads,
for large $k_c$ and $2 < \gamma < 3$, to
$\lambda_c^{DBMF,unc} \simeq \left( k_c/m \right)^{\gamma-3}$,
where $m$ is the minimum degree of the network,
which can be generalized for other values of $\gamma$ and which shows
the effect of the degree limitations imposed by the intrinsic
biological, social and cognitive constraints in real-world
networks. Similar finite size effects and considerations also apply to
the epidemic threshold obtained with the IBMF theory and other
approaches.
It is important to stress however that the presence of an epidemic
threshold because of finite size effects and other connectivity
limitations should not be considered as an argument to neglect the
network heterogeneity. It is indeed possible to show with simple
calculations~\cite{pastor2002fs} that simple homogeneous approaches can
overestimate the actual epidemic threshold in heterogeneous networks by
one or more orders of magnitude.
\subsection{Susceptible-Infected-Removed model}
\label{sec:4.B}
The SIR model is a cornerstone in infectious disease modeling. It
applies to the wide range of diseases that do provide immunity to the
host and it is also a widely used modeling scheme in knowledge and
information diffusion (see Sec.~\ref{sec:7.A}). Theoretically, the SIR
model represents a different challenge with respect to the SIS model
because it does not allow for a stationary state. The two most used
routes to a general analysis of the SIR model have been initially the
DBMF theory and the mapping of static properties to the percolation
model. Here, we start with a presentation of the
DBMF approach, focusing then on other degree-based, individual-based and
alternative methods which have been completing the understanding of the
SIR dynamics in networks in recent years. We end the subsection with an
overview of the exact results on static properties which can be obtained
by mapping SIR to bond percolation.
\subsubsection{Degree-based mean-field approach}
\label{sec:heter-mean-field}
The DBMF approach can be easily adapted to provide
insight into the dynamical and static properties of the SIR model.
In the DBMF approximation, we can define as a function of time
three different partial densities, namely of infected, susceptible
and recovered nodes of degree $k$, denoted by the variables
$\rho_k^I(t)$, $\rho_k^S(t)$ and $\rho_k^R(t)$, respectively.
The order parameter (prevalence) of the SIR model, defined as the number
of removed individuals at the end of the epidemics, is then given by
$\rho^R_\infty = \lim_{t\to\infty} \sum_k P(k) \rho_k^R(t)$.
In describing the time evolution of these densities, one can follow
the analogy with the SIS model, to obtain the set of
equations~\cite{refId0,lloyd01}
\begin{eqnarray}
\label{eq:SIR_HMF}
\frac{d \rho_k^I(t)}{d t} &=& - \rho_k^I(t) + \lambda k \rho_k^S(t)
\Gamma_k(t), \\ \nonumber
\frac{d \rho_k^R(t)}{d t} &=& \rho_k^I(t),
\end{eqnarray}
complemented with the normalization condition $ \rho_k^S(t) = 1 -
\rho_k^I(t)- \rho_k^R(t)$, where
\begin{equation}
\Gamma_k(t) = \sum_{k'} P(k'|k) \rho_{k'}^I(t).
\end{equation}
The value of the epidemic threshold in the case of general
correlations can be obtained as in the SIS case, by performing a
linear stability analysis. The same result follows,
with the epidemic threshold given by the inverse of the largest
eigenvalue $\Lambda_M$ of the connectivity matrix,
Eq.~(\ref{eq:SISconnectivitymatrix}). As for SIS,
in the case of uncorrelated networks the epidemic threshold is
given by $\lambda_c = \av{k} / \av{k^2}$~\cite{refId0,lloyd01}.
For uncorrelated networks,
within the same DBMF approximation, it is also possible
to integrate the rate equations over time,
starting from a small seed, thus obtaining the full temporal evolution of
the spreading process. The solution depends on a differential equation
for an auxiliary function $\phi(t)$, which cannot be solved
analytically in general. However, in the infinite time limit, it is
possible to determine the dependence of the final prevalence
$\rho^R_\infty$ on $\lambda$
\begin{equation}
\rho_\infty^R = \sum_k P(k) (1- e^{-\lambda k \phi_\infty}),
\end{equation}
where
\begin{equation}
\phi_\infty = 1 - \sum_k
\frac{k P(k)}{\av{k}} e^{-\lambda k \phi_\infty}.
\label{eq4B2:1}
\end{equation}
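Eq.~(\ref{eq4B2:1}) is again amenable to fixed-point iteration. A minimal
Python sketch (our own illustration; the truncated power-law degree
distribution is an arbitrary choice):
\begin{verbatim}
# Sketch: SIR final size from the self-consistent relation Eq. (eq4B2:1);
# the truncated power-law degree distribution is an arbitrary choice.
import numpy as np

kmin, kmax, gamma = 3, 1000, 2.5
k = np.arange(kmin, kmax + 1, dtype=float)
Pk = k**(-gamma); Pk /= Pk.sum()
qk = k * Pk / (k * Pk).sum()
lam_c = (k * Pk).sum() / (k**2 * Pk).sum()

def final_size(lam, n_iter=5000):
    phi = 0.5              # iterate phi = 1 - sum_k qk exp(-lam k phi)
    for _ in range(n_iter):
        phi = 1.0 - np.sum(qk * np.exp(-lam * k * phi))
    return np.sum(Pk * (1.0 - np.exp(-lam * k * phi)))

for lam in [0.5 * lam_c, 2 * lam_c, 5 * lam_c]:
    print("lambda/lambda_c =", lam / lam_c, " rho_R =", final_size(lam))
\end{verbatim}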
The solution of Eq.~(\ref{eq4B2:1}) leads again to the epidemic
threshold $\lambda_c = \av{k} / \av{k^2}$, a result that, for regular
networks, recovers the naive expectation $\lambda_c = 1 / \av{k}$, see
Eq.~\eqref{homogeneousR0}. For a power-law
degree distribution, $P(k) \sim k^{-\gamma}$, a detailed analysis
\cite{refId0} leads to a prevalence, in the vicinity of the epidemic
threshold, of the form $\rho_\infty^R \sim (\lambda -
\lambda_c)^{\beta_\mathrm{SIR}}$, with exponent $\beta_\mathrm{SIR}$
coinciding with the value for bond percolation,
Eq.~(\ref{eq:SIRbetaexponent}). The above results are exact for
annealed networks, when the topology changes (preserving $P(k)$) at a
very fast rate \cite{volz_epidemic_2009}. Instead, when considering
DBMF as an approach to static networks, it can be improved by taking
into account that, in the SIR process, a vertex cannot propagate the
disease to the neighbor who originally infected it, because the latter
is necessarily not susceptible. This effect can be included in the
DBMF equations by discounting, from the number of edges pointing from
infected individuals of degree $k'$ to vertices of degree $k$, the
edge from which the original infection arrived at the vertices of
degree $k'$. In this way, the Eqs.~(\ref{eq:SIR_HMF}) are recovered
but now the $\Gamma_k(t)$ function takes the form \cite{marianproc}
\begin{equation}
\Gamma_k(t) = \sum_{k'} \frac{k' -1}{k'} P(k'|k) \rho_{k'}^I(t).
\end{equation}
The value of the epidemic threshold in this case is given by
$\lambda_c=1 / \tilde{\Lambda}_M$, where $\tilde{\Lambda}_M$ is the
largest eigenvalue of the new connectivity matrix
\begin{equation}
\tilde{C}_{k k'} = \frac{k(k'-1)}{k'} P(k'|k).
\label{sirconnectmatrix}
\end{equation}
In the case of uncorrelated networks,
the largest eigenvalue of the matrix $\tilde{C}_{k k'}$ is
$\tilde{\Lambda}_M = \av{k^2}/ \av{k}-1$
(the corresponding eigenvector has components $\tilde{v}_k = k$)
so that the epidemic threshold is
\begin{equation}
\lambda_c = \frac{\av{k}}{\av{k^2}-\av{k}}.
\label{eq:4B2:thres}
\end{equation}
As shown below, Eq.~(\ref{eq:4B2:thres}) is
an approximation of the exact result~(\ref{Tlambda}).
However, this modified DBMF approach captures the correct
qualitative behavior, discriminating between a vanishing threshold, for
scale-free networks with $\gamma \leq 3$, and a finite threshold, for $\gamma>3$.
The DBMF approach also allows one to tackle the scaling of
the \textit{time evolution} of the epidemic outbreak. This is particularly important in the
context of models like SIR that do not have a stationary state.
For the sake of simplicity let us initially focus on the SI model
\cite{anderson92}, representing a disease in which infected
individuals never recover and keep propagating the disease
forever. The SI model can be considered the limit
of the SIR model in which the recovery rate $\mu$ is set to zero. While
this simplification leads to a trivial asymptotic state in which the whole
population eventually becomes infected, it is nevertheless interesting
due to its simplicity, which allows one to obtain explicit results for
the initial time evolution of epidemic outbreaks.
The DBMF analysis of the SI model proceeds from the analogue of
Eq.~(\ref{eq:SIR_HMF}), valid for generic networks
\cite{sievolution,Barthelemy2005275}
\begin{equation}
\frac{d \rho_k^I(t)}{d t} = \beta k [1-\rho_k^I(t)] \Gamma_k(t),
\label{eq:siequation}
\end{equation}
with
\begin{eqnarray}
\label{eq:sigammafunc}
\Gamma_k(t) &=& \sum_{k'} P(k'|k) \rho_{k'}^I(0) \\ \nonumber
& + & \sum_{k'} \frac{k'-1}{k'} P(k'|k) [\rho_{k'}^I(t) - \rho_{k'}^I(0)].
\end{eqnarray}
The first term in Eq.~(\ref{eq:sigammafunc}) accounts for a very small
initial seed of infected individuals, with initial partial density
$\rho_{k}^I(0)$, which can infect all their neighbors. The second term
represents the contribution of individuals infected during the
outbreak, which can infect all their neighbors, with the exception of
those who transmitted the disease.
Linear stability analysis shows that the time evolution at very short
times (when the partial densities of infected individuals are very
small) follows an exponential growth, $\rho^I(t) \sim e^{t/\tau}$, with
characteristic time $\tau = (\beta \tilde{\Lambda}_M)^{-1}$, where
again $\tilde{\Lambda}_M$ is the largest eigenvalue of the
connectivity matrix in Eq.~(\ref{sirconnectmatrix}).
In the case of uncorrelated networks this implies
\cite{sievolution,Barthelemy2005275}
\begin{equation}
\label{tausi}
\tau = \frac{\av{k}}{\beta[\av{k^2}-\av{k}]}.
\end{equation}
The solution for the SI model can be extended to the case of the general
SIR model by allowing a nonzero healing rate, which leads to the general
time scale of the initial growth~\cite{Barthelemy2005275}
\begin{equation}
\tau = \frac{\av{k}}{\beta \av{k^2}-(\mu+\beta)\av{k}}.
\end{equation}
These results readily indicate that the growth time scale of an
epidemic outbreak is inversely proportional to the second moment of
the degree distribution $\av{k^2}$; when this quantity diverges, as in
the case of scale-free networks, not only does the threshold tend to
vanish, but the time until the establishment of the infection also
becomes very small (vanishing in the thermodynamic limit).
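As a concrete illustration of Eqs.~(\ref{eq:4B2:thres}) and
(\ref{tausi}), the following minimal Python sketch (our own
illustration; the degree exponent, minimum degree, sample size and
infection rate are arbitrary choices) estimates the epidemic threshold
and the SI growth time scale from the empirical moments of a sampled
power-law degree sequence:
\begin{verbatim}
import numpy as np

# Sample a power-law degree sequence P(k) ~ k^{-gamma}, k >= m
# (illustrative parameters).
rng = np.random.default_rng(1)
gamma, m, N = 2.5, 3, 10**5
k = np.floor(m * (1 - rng.random(N)) ** (-1.0 / (gamma - 1.0)))

k1, k2 = k.mean(), (k ** 2).mean()

# DBMF SIR threshold, Eq. (eq:4B2:thres)
lambda_c = k1 / (k2 - k1)

# SI growth time scale, Eq. (tausi), for an assumed infection rate beta
beta = 0.1
tau = k1 / (beta * (k2 - k1))

print(f"<k> = {k1:.2f}, <k^2> = {k2:.2f}")
print(f"lambda_c = {lambda_c:.4f}, tau = {tau:.2f}")
\end{verbatim}
Increasing the sample size $N$ (and hence the effective degree cutoff)
makes $\av{k^2}$ grow, so that both estimates shrink, in line with the
discussion above.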
Computer simulations allow one to obtain a detailed picture of the
mechanism of spreading of a disease in a scale-free network
\cite{sievolution,Barthelemy2005275}: Initially, the infection reaches
the hubs and from them it quickly invades the rest of the network via
a cascade through progressively smaller degree classes. The dynamical
structure of the spreading is therefore characterized by a
hierarchical cascade from hubs to intermediate $k$, and finally to
small $k$ classes.
\subsubsection{Individual and pair-based mean-field approaches}
As in the SIS case, a systematic way to attack the SIR model is based on
the full master equation for the exact evolution of probabilities of
microscopic states, and the derivation, starting from it, of deterministic
evolution equations for dynamical quantities. In this framework,
\citet{Sharkey2008} considers SIR with Poissonian infection and recovery
processes and derives from the master equation the $2N$ equations for
the probabilities for the state of individuals
\begin{eqnarray}
\label{SIR_individual}
\frac{d \rho^S_i(t)}{dt} & = & - \beta \sum_{j} a_{ij} \av{S_i I_j} \\ \nonumber
\frac{d \rho^I_i(t)}{dt} & = & \beta \sum_{j} a_{ij} \av{S_i I_j} - \mu \rho^I_i
\end{eqnarray}
where $S_i$ and $I_j$ are Bernoulli variables equal to $1$ when the
node is susceptible (infected, respectively) and $0$ otherwise,
$\rho^S_i = \av{S_i}$ is the probability that node $i$ is in state
S, $\rho^I_i = \av{I_i}$ is the analogue for state I, and $\av{S_i I_j}$ is
the joint probability of state $S_i I_j$.
In order to close the equations~(\ref{SIR_individual}), the
simplest possibility is to assume that the state of neighbors is
independent (individual-based mean-field approximation).
Alternatively, one can derive from the master equation the evolution
of the probabilities of pairs of neighbors, which
depend in turn on the state of triples of neighboring nodes. The closure
of the hierarchy at this level (pair-based mean-field) requires
the approximation of probabilities for triples with moments of lower order.
There are several possible ways to implement the closure and the best
choice is not a trivial problem.
The validity of the different approximation schemes is investigated
in~\citet{Sharkey2011}, who shows
that replacing $\av{S_i I_j} = \av{S_i} \av{I_j}$ is equivalent to writing down
an equation for the evolution of $\av{S_i I_j}$ containing unphysical terms
(i.e. terms assuming that a node is at the same time susceptible and infected).
The consequences of these unphysical terms are relevant: from the
individual-based mean-field approach one can derive an expression for the
SIR epidemic threshold equal to what is found for the SIS
case~\cite{Youssef2011,Prakash2012}:
$\lambda_c = 1/\Lambda_1$, where $\Lambda_1$ is the largest eigenvalue of
the adjacency matrix. This result, however, is not even qualitatively
correct, as it predicts a vanishing threshold for power-law distributed
networks with $\gamma>3$, at odds with exact results (see below) and numerical
simulations~\cite{Castellano2010}.
The pair-based approach instead, complemented with the closure in
Eq.~(\ref{eq:pairapproxSilvio}), is proved to be an exact description of
the stochastic system for a tree topology~\cite{Sharkey2013}.
In the case of networks with loops
it is possible to find a precise connection between
the detailed loop structure and the closures that leave the description
exact~\cite{2013arXiv1307.7737K}.
From these individual and pair-based approaches, by summing over
all nodes, the equations for the probabilities
of the global quantities $\rho^I$ and $\rho^S$ can be obtained, thus
providing a microscopic foundation of equations obtained at population level
by means of the mass action principle.
Eq.~(\ref{SIR_individual}) and similar pair-based
approaches can be written also for heterogeneous infection and recovery
rates~\cite{Sharkey2008}. Hence, the approaches apply in full generality
also to directed and weighted networks.
\subsubsection{Other approaches}
Due to its great relevance, the time evolution of the SIR dynamics has
been tackled with many other approaches.
The extended degree-based approach of~\citet{Eames2002} (see
Sec.~\ref{sec:susc-infect-susc}) can be applied also to the SIR model,
providing a set of closed ODEs that can be integrated numerically or
used to derive an expression for the basic reproductive ratio $R_0$.
The other extended degree-based approach, by \citet{Lindquist2011}, can also
be applied to SIR, by categorizing each node by its disease state (i.e., S,
I, R), as well as by the number of neighbors in each disease state.
In this way, an excellent agreement with numerical simulations for
both the temporal evolution and the final outbreak size is found.
The threshold condition derived analytically turns out to be equal
to the exact one obtained using percolation theory, Eq.~(\ref{Tlambda})
in Sec.~\ref{sec:mapping-percolation-SIR}.
An alternative approach by~\citet{Volz2008} describes the Poissonian SIR
epidemics at the global population level. Based on the probability
generating function for the degree distribution, it captures the evolution
of the infection using only 3 coupled nonlinear ordinary differential
equations. The solution of these equations is in excellent agreement
with numerical simulations~\cite{Lindquist2011}; it is shown to be
exact in the thermodynamic limit~\cite{Decreusefond2012,Janson2013}
and it allows one to derive the exact expression, Eq.~(\ref{Tlambda}),
for the epidemic threshold, in the case of static uncorrelated networks.
In this case, the approach of~\citet{Volz2008}
can be shown~\cite{House2011} to be a specific case of the extended
degree-based theory of~\citet{Eames2002}.
Volz's approach can be made more physically transparent and
simpler, reducing to a single evolution equation~\cite{Miller2011}.
The basic idea of this improved approach is to focus on the state of a
random partner instead of a random individual.
From this starting point, a fully general theoretical framework
(edge-based compartmental modelling) can be developed, allowing one
to deal with many different scenarios, including static
and dynamic networks, both undirected and
directed~\cite{Miller2012,10.1371/journal.pone.0069162,Valdez2012b}.
For other approaches to SIR dynamics based on
the probability generating function,
see~\citet{marder_dynamics_2007,noel_time_2009,Noel2012}.
A derivation of a condition for the possibility of a
global spreading event starting from a single seed in SIR-like models on
generic networks is presented in \citet{Dodds2011} and generalized
in~\citet{Payne2011}.
The approach is based on the state of ``node-edge''
pairs and relates the possibility of spreading to the condition that the
largest eigenvalue of a ``gain ratio'' matrix (encoding information on
both the topology and the spreading process) is larger than 1.
Finally, a new, substantial step forward in the understanding of the SIR
model is the recent application of the
message-passing approach to SIR dynamics~\cite{Karrer2010}.
This approach provides an exact description of the dynamics on trees,
via a closed set of integro-differential equations,
allowing the calculation of the probabilities for any node to be in
state $S$, $I$, or $R$ at any time.
When loops are present, the method gives instead a rigorous bound on the
size of disease outbreaks.
On generic (possibly directed) trees the approach of~\citet{Karrer2010}
has been shown~\cite{Wilkinson2014} to coincide for Poissonian
infections with the pair-based moment-closure presented
by~\citet{Sharkey2013}.
Remarkably, the message-passing approach allows dealing with
fully generic (non-Poissonian) infection and recovery processes.
\subsubsection{Mapping the SIR model to a percolation process}
\label{sec:mapping-percolation-SIR}
The connection between the static properties of the SIR model
and bond percolation (see Section~\ref{sec:mapping-percolation})
was recognized long
ago~\cite{Ludwig1975,Grassberger1983,Andersson2000}.
In the context of epidemics
on complex networks, the mapping has been studied in detail
by~\citet{newman02}. Considering a SIR model with uniform infection
time $\tau$, i.e. where infected nodes become removed at time $\tau$
after infection\footnote{Notice that this does not coincide exactly
with the definition given in Section~\ref{sec:class-models-epid}},
and infection rate $\beta$, the \textit{transmissibility}
$T$ is defined as the probability that the infection will be transmitted from an
infected node to a connected susceptible neighbor before recovery
takes place. For continuous-time dynamics the transmissibility can be
computed as \cite{newman02}
\begin{equation}
T = 1 - \lim_{\delta t \to 0} (1-\beta \delta t)^{\tau/\delta t} = 1-e^{-\tau \beta}.
\end{equation}
The set of removed nodes generated by an SIR
epidemic outbreak originating from a single node is nothing but
the cluster of the bond percolation problem (with occupation
probability $T$) to which the initial node belongs. The correspondence is
exact: all late-time static properties of the SIR model can be derived
as direct translations of the geometric properties of the percolation
problem. For tree-like networks the exact epidemic threshold is given
by Eq.~(\ref{eq:percothreshold}), so that
\begin{equation}
T_c = \frac{\av{k}}{\av{k^2}-\av{k}} \Rightarrow
\beta_c = \frac{1}{\tau}\ln\frac{\av{k^2}-\av{k}}{\av{k^2}-2\av{k}}.
\label{ExactTc}
\end{equation}
The behavior of the outbreak size close to the epidemic
threshold, ruled by the equivalent percolating giant component, is
given in terms of the exponents in Eq.~(\ref{eq:SIRbetaexponent}).
Expression (\ref{ExactTc}) confirms for the SIR model that the epidemic
threshold has a qualitatively different behavior for scale-free
networks ($\gamma<3$) and for scale-rich ones ($\gamma>3$). In the
former case the second moment of the degree distribution diverges,
so that the threshold vanishes: scale-free networks are extremely
vulnerable to disease spreading.
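The mapping can be verified numerically in a few lines. The sketch
below (our own minimal illustration, assuming the availability of the
\texttt{networkx} package; network size and degree sequence are
arbitrary choices) builds a configuration-model network, occupies each
edge independently with probability $T$, and measures the relative size
of the largest cluster, which corresponds to the relative size of a
major SIR outbreak:
\begin{verbatim}
import networkx as nx
import numpy as np

rng = np.random.default_rng(2)
N = 20000
# Illustrative heavy-tailed degree sequence (the sum must be even).
deg = np.clip(rng.zipf(2.7, size=N), 2, 200)
if deg.sum() % 2:
    deg[0] += 1
G = nx.Graph(nx.configuration_model(deg.tolist(), seed=3))
G.remove_edges_from(nx.selfloop_edges(G))

def outbreak_size(T):
    # Bond percolation: keep each edge with probability T, then
    # return the relative size of the largest connected cluster.
    H = nx.Graph()
    H.add_nodes_from(G)
    H.add_edges_from(e for e in G.edges if rng.random() < T)
    return max(len(c) for c in nx.connected_components(H)) / N

k = np.array([d for _, d in G.degree()], dtype=float)
Tc = k.mean() / ((k ** 2).mean() - k.mean())  # Eq. (ExactTc)
for T in [0.5 * Tc, Tc, 2 * Tc, 4 * Tc]:
    print(f"T = {T:.3f}: largest cluster fraction = {outbreak_size(T):.3f}")
\end{verbatim}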
The above results can be considered
exact only for a tree (completely loopless) structure.
In other networks, the presence of loops and multiple spreading paths
leads in general to correlations, which may invalidate the
results obtained for trees. However, for random networks which are
locally tree-like the presence of long loops (infinitely long in the
thermodynamic limit) is not sufficient to perturb the validity of
the results obtained using the tree
ansatz~\cite{dorogovtsev07:_critic_phenom}. A different conclusion
holds instead in networks with short loops (finite clustering)
as discussed in Sec.~\ref{sec:effects-clust-coeff}.
The derivation of Eq.~(\ref{ExactTc}) is based on a uniform infection time.
More realistically, infection times $\tau_i$ and infection rates
$\beta_{ij}$ vary between individuals. This implies that the
transmissibility $T_{ij}$ depends on the specific edge $(i,j)$.
One possible approach, that reduces to the solution of
the homogeneous case~\citep{newman02}, is to neglect
fluctuations, and replace $T_{ij}$ by its mean value
\begin{equation}
\langle T_{ij} \rangle = 1- \int d\tau \int d\beta e^{-\beta \tau}
Q(\beta) P(\tau),
\label{Taveraged}
\end{equation}
where $Q$ and $P$ are the distributions of $\beta_{ij}$ and $\tau_i$,
respectively. The case of
nondegenerate $\tau_i$ includes the usual definition of the SIR model
with constant recovery rate $\mu$ for which recovery times are
distributed exponentially with average $\langle \tau_i \rangle =
1/\mu$. In such a case, performing the integral in
Eq.~(\ref{Taveraged}) and setting $\beta \langle \tau_i \rangle =
\beta/\mu = \lambda$ yields $\langle T_{ij} \rangle =
\lambda/(1+\lambda)$, implying
\begin{equation}
\lambda_c = \frac{\langle k \rangle}{\av{k^2}-2 \langle k \rangle}.
\label{Tlambda}
\end{equation}
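For completeness, the elementary step leading to this result is the
following: with degenerate rates $\beta_{ij} = \beta$ and exponentially
distributed recovery times, $P(\tau) = \mu e^{-\mu \tau}$,
Eq.~(\ref{Taveraged}) gives
\[
\langle T_{ij} \rangle = 1 - \int_0^\infty \mu e^{-\mu \tau}
e^{-\beta \tau} \, d\tau = 1 - \frac{\mu}{\mu+\beta} =
\frac{\lambda}{1+\lambda},
\]
and imposing $\langle T_{ij} \rangle = T_c$ from Eq.~(\ref{ExactTc})
yields Eq.~(\ref{Tlambda}).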
This approximation leads to the exact
epidemic threshold, the mean outbreak size below it and the final size
above it, but fails in other respects~\citep{Kenah2007} (see
also~\citet{Trapman2007}).
The discrepancy is due to
correlations~\cite{Karrer2010}: ``if an individual recovers quickly,
then the probability of transmission of the disease to any of its
neighbors is small; if it takes a long time to recover the probability
is correspondingly larger.'' Newman's approximation is also not exact
when the $\tau_i$ are degenerate and the $\beta_{ij}$ vary~\cite{Miller2007}.
The correct way to take into account the heterogeneous transmissibility
maps the disease spreading to a bond percolation process,
involving now a semi-directed network (epidemic percolation
network)~\citep{Miller2007,Kenah2007}, see
Section~\ref{sec:general-definitions}.
The mapping works as follows.
For each pair of connected nodes $i$ and $j$ in the contact network,
place a directed edge from $i$ to $j$ with probability $1-e^{-\beta_{ij}\tau_i}$
and a directed edge from $j$ to $i$ with probability $1-e^{-\beta_{ji}\tau_j}$.
Tools from percolation theory on directed
networks~\cite{Boguna2005}, see Section~\ref{sec:directed-networks},
allow one to characterize exactly the long-time features
of the epidemic process.
In particular the epidemic transition is
associated with the formation of a giant strongly connected component
(GSCC) in the directed network.
If such a component exists, then an infection originating in
one of its nodes or in the giant in-component (GIN) will spread to all
nodes in the GSCC and in the giant out-component (GOUT), giving rise
to a macroscopic outbreak. It is crucial to recognize that the GIN and
GOUT components play completely different roles: nodes in GOUT are
necessarily part of macroscopic outbreaks but cannot originate them.
The opposite is true for nodes in GIN. As a consequence the
probability that an epidemic occurs (given by the size of
GIN $\cup$ GSCC) and the size of the epidemic (equal to the
size of GSCC $\cup$ GOUT) do not
coincide~\citep{Meyers2006,Miller2007}.
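The construction, and the different roles of the components, can be
made concrete with a short Python sketch (our own illustration,
assuming \texttt{networkx}; the contact network, recovery times and
infection rates are arbitrary choices):
\begin{verbatim}
import networkx as nx
import numpy as np

rng = np.random.default_rng(4)
G = nx.erdos_renyi_graph(5000, 0.002, seed=5)  # illustrative contacts
tau = rng.exponential(1.0, size=G.number_of_nodes())  # recovery times

D = nx.DiGraph()
D.add_nodes_from(G)
for i, j in G.edges:
    b_ij, b_ji = rng.uniform(0.2, 0.6, size=2)  # assumed edge rates
    if rng.random() < 1.0 - np.exp(-b_ij * tau[i]):
        D.add_edge(i, j)
    if rng.random() < 1.0 - np.exp(-b_ji * tau[j]):
        D.add_edge(j, i)

gscc = max(nx.strongly_connected_components(D), key=len)
s = next(iter(gscc))
# GIN: nodes that can reach the GSCC; GOUT: nodes reachable from it.
gin = nx.ancestors(D, s) | gscc
gout = nx.descendants(D, s) | gscc
print(f"P(outbreak) ~ |GIN u GSCC|/N = {len(gin) / len(D):.3f}")
print(f"outbreak size ~ |GSCC u GOUT|/N = {len(gout) / len(D):.3f}")
\end{verbatim}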
The mapping to percolation on semi-directed networks is valid for any
type of contact network underlying the SIR epidemics. For trees and locally
tree-like networks it is again possible to apply the machinery of
probability generating functions to derive explicit results for the
related percolation properties.
Other discrepancies of the mapping to percolation approach to the SIR
model are reported in~\citet{Lagorio2009755}.
\section{Strategies to prevent or maximize spreading}
\subsection{Efficient immunization protocols}
\label{sec:effic-immun-prot}
The fact that epidemic processes in heavy-tailed networks have a
vanishing threshold in the thermodynamic limit, or a very
small one in large but finite networks (see
Sec.~\ref{sec:epid-proc-heter}), prompted the study of immunization
strategies leveraging the network structure in order to protect the
population from the spread of a disease. Immunization strategies are
defined by specific rules for the identification of the individuals
that shall be made immune, taking into account (local or non-local)
information on the network connectivity pattern. Immunized nodes are
in practice removed from the network, together with all the links
incident to them, and each strategy is assessed by the
effects of immunizing a variable fraction $g$ of nodes in the network.
The application of immunization not only protects directly
immunized individuals, but can also lead, for a sufficiently large
fraction $g$, to an increase of the epidemic threshold up to an
effective value $\lambda_c(g) > \lambda_c(g=0)$, precluding the global
propagation of the disease. This effect is called \textit{herd
immunity}. The main objective in this context is to determine the
new epidemic threshold, as a function of the fraction of immunized
individuals. Indeed, for a sufficiently large value of
$g$, any strategy for selecting immunized nodes will lead to an
increased threshold. We define the \textit{immunization threshold}
$g_c(\lambda)$, for a fixed value of $\lambda$, as the value such that
for $g > g_c(\lambda)$ the average prevalence is zero, while for $g \leq
g_c(\lambda)$ the average prevalence is finite.
The simplest immunization protocol, using essentially no information at
all, is the random immunization, in which a number $g N$ of nodes is
randomly chosen and made immune. While random immunization in the SIS
model (under the DBMF approximation) can depress the prevalence of the
infection, it does so too slowly to increase the epidemic threshold
substantially. Indeed, from Eq.~(\ref{eq:HMFSISequation}), an epidemic
in a randomly immunized network is equivalent to a standard SIS process
in which the spreading rate is rescaled as $\lambda \to \lambda(1-g)$,
i.e. multiplied by the probability that a given node is not immunized,
so that the immunization threshold becomes \cite{PhysRevE.65.036104}
\begin{equation}
g_c(\lambda) = 1- \frac{\av{k}}{\lambda \av{k^2}}.
\end{equation}
For heterogeneous networks, in which $\av{k^2}$ diverges, $g_c(\lambda)$
tends to $1$ in the limit $N\to\infty$ for any value of $\lambda$,
indicating that almost the whole network must be immunized to
suppress the disease.
This example shows that an effective level of protection in
heavy-tailed networks must be achieved by means of \textit{optimized}
immunization strategies \cite{anderson92}, taking into account the
network heterogeneity. Large degree
nodes (the hubs leading to the large degree distribution variance)
are potentially the largest spreaders. Intuitively, an optimized
strategy should target those hubs rather than small
degree vertices. Inspired by this observation, the targeted
immunization protocol proposed by \citet{PhysRevE.65.036104} considers
the immunization of the $g N$ nodes with largest degree. A simple DBMF
analysis leads to an immunization threshold given, for the SIS model, by
the implicit equation \cite{PhysRevE.65.036104}
\begin{equation}
\frac{\av{k^2}_{g_c}}{\av{k}_{g_c}} = \frac{1}{\lambda},
\label{eq:immuno1}
\end{equation}
where $\av{k^n}_{g}$ is the $n$th moment of the degree distribution
$P_g(k)$ of the network resulting after the deletion of the $gN$ nodes
of highest degree, which takes the form \cite{havlin01}
\begin{equation}
P_g(k) = \sum_{k' \geq k}^{k_c} P(k') \binom{k'}{k}(1-g)^k g^{k'-k}.
\end{equation}
Eq.~(\ref{eq:immuno1}) can be readily solved in the case of scale-free
networks. For a degree exponent $\gamma=3$, the immunization
threshold reads $g_c(\lambda) \simeq \exp[-2/(m \lambda)]$, where $m$ is the
minimum degree in the network. This result highlights the
convenience of targeted immunization, with an immunization threshold
that is exponentially small over a large range of the
spreading rate $\lambda$. A similar effect can be obtained with a
\textit{proportional} immunization strategy \cite{PhysRevE.65.036104}
(see also \citet{aidsbar} for a similar approach involving the cure of
infected individuals with a rate proportional to their degree), in
which nodes of degree $k$ are immunized with probability $g_k$, which
is some increasing function of $k$. In this case, the infection is
eradicated when $g_k \geq 1 - 1/(\lambda k)$, leading to an
immunization threshold \cite{PhysRevE.65.036104}
\begin{equation}
g_c(\lambda) = \sum_{k > \lambda^{-1}} \left(1-\frac{1}{k \lambda}
\right) P(k),
\end{equation}
which takes the form $g_c(\lambda) \simeq (m \lambda)^2/3$ for
scale-free networks with $\gamma=3$.
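A simple numerical comparison illustrates the gap between the two
protocols. The Python sketch below is our own illustration: it uses a
truncated power-law degree distribution with arbitrary parameters and,
instead of the full expression for $P_g(k)$ above, a cruder targeted
removal that simply cuts off the upper tail of $P(k)$ with mass $g$,
solving the analogue of Eq.~(\ref{eq:immuno1}) by bisection:
\begin{verbatim}
import numpy as np

m, kc, lam = 3, 1000, 0.2  # min/max degree, spreading rate (assumed)
k = np.arange(m, kc + 1, dtype=float)
P = k ** -3.0
P /= P.sum()
k1, k2 = (P * k).sum(), (P * k ** 2).sum()

g_rand = 1.0 - k1 / (lam * k2)  # random immunization threshold

def ratio_after_targeted(g):
    # Crude targeted removal: drop the upper tail of P(k) with mass g.
    keep = np.cumsum(P[::-1])[::-1] > g
    Pg = P * keep
    Pg /= Pg.sum()
    return (Pg * k ** 2).sum() / (Pg * k).sum()

lo, hi = 0.0, 0.99
for _ in range(50):  # bisection on <k^2>_g / <k>_g = 1 / lam
    mid = 0.5 * (lo + hi)
    if ratio_after_targeted(mid) > 1.0 / lam:
        lo = mid  # still above threshold: immunize more nodes
    else:
        hi = mid
print(f"random g_c ~ {g_rand:.3f}, targeted g_c ~ {0.5*(lo+hi):.4f}")
\end{verbatim}
Even this rough estimate shows the targeted threshold to be far smaller
than the random one.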
Other approaches to immunization stress that not only the behavior
close to the critical point should be taken into account, but also the
entire prevalence curve (the so-called viral conductance)
\cite{Rob_VC_networking2009,Mina_Caterina_VC2011,PVM_viral_conductance}. Additionally,
strategies involving possible different interventions on different
nodes have been analyzed within a game-theoretic formalism
\cite{PVM_heterogeneous_virusspread,PVM_Jasmina_Game_protection_INFOCOM2009,PVM_Gourdin_Networkprotection_DRCN2011}.
The previously discussed immunization protocols are based on a global
knowledge of the network properties (the whole degree sequence must be
known to target selectively the nodes to be immunized). In general, the more
global knowledge of the network is available, the more effective the
immunization strategy can be.
For instance, one of the
most effective targeted immunization strategies is based on the
betweenness centrality (see Sec.~\ref{sec:centrality}), which combines
the bias towards high degree nodes and the inhibition of the
most probable paths for infection transmission
\cite{PhysRevE.65.056109}. This approach can be further improved by
taking into account the order in which nodes are immunized, in a
sequential scheme in which the betweenness centrality is recomputed
after the removal of every single node, and by swapping the order of
immunization in different immunization sequences, seeking to minimize a
properly defined size of the connected component of susceptible
individuals. This approach has proved to
be highly efficient in the case of the SIR model
\cite{PhysRevE.84.061911}. Improved immunization performance in
the SIR model has been found with an ``equal graph
partitioning'' strategy \cite{PhysRevLett.101.058701} which seeks to
fragment the network into connected components of approximately the
same size, a task that can be achieved by a much smaller number of
immunized nodes, compared with a targeted immunization scheme.
The information that makes targeted strategies
very effective, also makes them hardly feasible in real-world situations,
where the network structure is only partially known. In
order to overcome this drawback, several local immunization strategies
have been considered. A most ingenious one is the
\textit{acquaintance} strategy proposed by \citet{Cohen03}, and
applied to the SIR model. In this protocol, a number $gN$ of
individuals is chosen at random and each one is asked to point to one
of his/her nearest neighbors. Those nearest neighbors, instead of the
nodes, are selected for immunization. Given that a randomly chosen
edge points with high probability to a large degree node, this
protocol realizes in practice a preferential immunization of the hubs,
which turns out to be very effective in hampering epidemics.
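The protocol is straightforward to implement, as in the following
Python sketch (our own illustration, assuming \texttt{networkx}; the
network model and the immunized fraction are arbitrary choices), which
also shows that the immunized set has a much larger mean degree than
the network as a whole:
\begin{verbatim}
import networkx as nx
import random

random.seed(6)
G = nx.barabasi_albert_graph(10000, 3, seed=7)  # illustrative network
nodes = list(G.nodes)

g = 0.05
immune = set()
while len(immune) < g * G.number_of_nodes():
    i = random.choice(nodes)               # pick a random individual...
    immune.add(random.choice(list(G[i])))  # ...immunize an acquaintance

kbar = sum(d for _, d in G.degree()) / G.number_of_nodes()
kimm = sum(G.degree(v) for v in immune) / len(immune)
print(f"<k> = {kbar:.1f}, <k> of immunized nodes = {kimm:.1f}")
\end{verbatim}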
An analogous result can be obtained by means of a random
walk immunization strategy
\cite{0295-5075-68-6-908,1009-1963-15-12-003}, in which a random
walker diffuses in the network and immunizes every node that it
visits, until a given degree of immunization is reached. Given that a
random walk visits a node of degree $k_i$ with probability
proportional to $k_i$ \cite{PhysRevLett.92.118701}, this protocol
leads to the same effectiveness as the acquaintance immunization.
The acquaintance immunization protocol can be improved by
incorporating additional information, still at the local level. For
example, if each node has knowledge of the number of connections of its
nearest neighbors, a large efficiency is attained by immunizing the
neighboring nodes with the
largest degree \cite{0295-5075-68-6-908}. As more information is
available, one can consider the immunization of the nodes with highest
degree found within short paths of length $\ell$ starting from a
randomly selected node \cite{gomez2006}. The random walk immunization
strategy, on the other hand, can be improved by allowing a bias
favoring the exploration of high degree nodes during the random walk
process \cite{PhysRevE.74.056105}. Variations of the acquaintance
immunization scheme have also been used for weighted networks.
The acquaintance immunization for weighted networks is outperformed by a
strategy in which the immunized neighbors are selected among those
with large edge weights \cite{Deijfen201157}.
A different approach to immunization, the \textit{high-risk
immunization strategy}, applied by \citet{nian2010} to the SIRS
model, considers a dynamical formulation, in which nodes
in contact with one or more infected individuals are immunized with a
given probability. Again, by immunizing only a small fraction of the
network, a notable reduction of prevalence and increase of the
epidemic threshold can be achieved.
Finally, for the SIR model, the mapping to percolation
suggests which nodes to target in a vaccination campaign, depending on
whether the probability of an outbreak or its size are to be
minimized~\citep{Kenah2011}. A targeted vaccination of
nodes in the GSCC implies both a reduction of the probability of a
major epidemic and of its size.
\subsection{Relevant spreaders and activation mechanisms}
\label{sec:5.A}
Although the problem of immunization is central in the study of
epidemics because of its practical implications, the attention of the
research community has been recently attracted by the somewhat related
theme of discovering which nodes are most influential/effective in the
spreading process. For instance, what node should be chosen as initial
seed in a SIR epidemic, in order to maximize the total number of
nodes eventually reached by the outbreak? This is a very natural
question to be posed~\citep{kitsak2010}, in particular when the
propagation process does not involve a disease to be contained but
rather a positive meme (such as a crucial piece of information, see
Section~\ref{sec:7.A}) whose spreading is instead to be maximized.
The traditional common wisdom, derived from early studies on the
immunization problem~\citep{PhysRevE.65.036104}, was that nodes with
the highest degree play the role of superspreaders in networks. This
view has been challenged by \citet{kitsak2010} who pointed out that
the $K$-core index (see Section~\ref{sec:centrality}) is a much
better predictor of the final outbreak size in the SIR model
spreading on several real
networks where (as opposed to uncorrelated networks) the set of nodes
with large degree does not coincide with the set of nodes of high $K$-core index.
The intuitive reason is that the most densely connected core gets
easily infected by an outbreak initiated by one of its
vertices, finally transmitting the infection to a large portion of
the entire network. High degree nodes which are not part of the core may
spread the activity to a large number of neighbors but the infection
hardly extends further.
These findings have stimulated a flurry of activity aimed at
understanding which of several possible topological centrality
measures (degree, betweenness,
$K$-core index, closeness and many others) are more correlated with
spreading influence in various types of networks and contagion dynamics
~\citep{Chen2012,Li2012,Chung2012,Hou2012,daSilva2012,Bauer2012,Zeng2013,
Liu2013,Chen2013,HebertDufresne2012}. These studies consider
different issues and features of the interplay between the network
and the spreading process, and such a large variability does not allow
firm conclusions to be reached.
Various quantities are used to evaluate the spreading
effectiveness: in some cases only top influential spreaders are
considered, in others complete rankings of all nodes are compared.
Moreover, the consideration of different real networks in different
papers does not help in comparing approaches and in particular to
disentangle the effects of specific topological features such as
degree heterogeneity, clustering, or assortativity. Finally, not all
studies properly take into account the fact that results may differ
substantially depending on which part of the epidemic phase-diagram is considered:
the absorbing phase, the transition regime or the phase where activity
is widespread. As a consequence, a clear picture that uniquely determines
the best centrality measure that identifies superspreaders for
different epidemic models and different networks has yet to emerge.
The $K$-core decomposition is in many cases a good predictor of
spreading efficiency. Nevertheless an interesting
finding~\citep{Klemm2012,HebertDufresne2012} is that the removal of a
node with high $K$-core index has a limited effect as multiple paths exist
among the nodes in the central cores.
Thus in general efficient spreaders are not necessarily also good
targets for immunization protocols. An extension of the $K$-core
decomposition to weighted networks with application to a SIR epidemics
on weighted networks (see Sec.~\ref{sec:weighted-networks}) has also
been proposed~\citep{Garas2012}.
Similar to the problem of finding efficient spreaders is the
identification of nodes which are infected earlier than the others,
thus playing the role of ``sensors'' for epidemic
outbreaks~\citep{Christakis2010,Garcia-Herranz14}. The strategy
of considering friends of randomly chosen nodes allows one to select,
without any knowledge of the global network structure, individuals
with high degree, high betweenness, small clustering and high $K$-core
index, which are actually reached early by epidemic outbreaks.
This effect lies at the basis of the acquaintance immunization strategy
\cite{Cohen03} discussed above.
Another problem, conceptually close to the search for superspreaders,
is the identification of what topological features trigger global
epidemics, i.e. what network subsets determine the position
of the epidemic threshold~\citep{Castellano2012}. For SIS, the
epidemic threshold scales, within the IBMF approximation, as the
inverse of the largest eigenvalue of the adjacency matrix $\Lambda_1$
(see Section~\ref{sec:quenched-mean-field-1}). Applying the scaling
form of $\Lambda_1$ for large uncorrelated scale-free
networks~\citep{Chung03}, the scaling of the threshold with network
size is given by Eq.~\eqref{together}. This result can be interpreted
as follows \cite{Castellano2012}: For $\gamma>5/2$, the node
with the largest degree (hub) together with its direct neighbors
forms a self-sustained nucleus of activity above $\lambda_c$ which
propagates to the rest of the system. For $\gamma<5/2$ instead, the
threshold position is dictated by the set of most densely
interconnected nodes, as identified by the $K$-core of largest
index. Topological correlations may alter the picture. For SIR
dynamics instead, the largest hub is not able to trigger the
transition and the position of the threshold is always dictated by the
max $K$-core.
All investigations described so far attempt to relate dynamical
properties of the spreading process to purely geometric features of
the contact pattern. Taking a more general
perspective,~\citet{Klemm2012} define a ``dynamical influence''
centrality measure, which incorporates not only topological but also
dynamical information. The dynamical influence is the leading left
eigenvector of a characteristic matrix that encodes the interplay
between topology and dynamics. When applied to SIR and SIS epidemic
models, the characteristic matrix coincides with the adjacency matrix.
The ``dynamical influence'' predicts well which nodes are active
around the transition, while it is outperformed by other centrality
measures far from the threshold \cite{Klemm2012}.
A growing activity has also recently been concerned with the inverse
problem of inferring statistically, from the configuration of the
epidemics at a given time, which of the nodes was the initial seed
originating the
outbreak~\citep{Comin2011,Pinto2012,Lokhov2013,Altarelli2013,Brockmann2013}.
Finally, the problem of finding efficient
spreaders is not limited to disease epidemics models; it is possibly
even more important for complex contagion phenomena (such as rumor
spreading or the diffusion of innovations), see Section~\ref{sec:7.A}.
\section{Modeling realistic epidemics}
\subsection{Realistic models}
\label{sec:real-epid-models}
The simple SIS and SIR models considered so far can be generalized to
provide a more realistic description of the disease progression by
introducing additional compartments (see
Sec.~\ref{sec:class-models-epid}) and/or by allowing additional
transitions between the different compartments. These variations, which
can be studied analytically or, most often, numerically, may alter the
basic phenomenology of the epidemic process. In this section, we
survey some of those models and refer the reader to the work of
\citet{Masuda200664} for more complicated models that include
pathogens' competition and game-theoretically inspired
\cite{gametheorywebb} contagion processes.
\subsubsection{Non-Markovian epidemics on networks}
\label{sec:non-mark-react}
The modeling framework presented in the previous sections is mostly
based on the Poisson approximation \cite{tijms2003first} for both the
transmission and recovery processes. The Poisson approximation assumes
that the probabilities per unit time of transmitting the disease through
a given edge, or recovering for a given infected node, are constant, and
equal to $\beta$ and $\mu$, respectively. Equivalently, the total time
$\tau_i$ that a given node $i$ remains infected is a random variable
with an exponential distribution, $P_i(\tau_i) = \mu e^{-\tau_i \mu}$,
and the time $\tau_a$ for an infection to propagate from an
infected to a susceptible node along a given edge (the inter-event time)
is also exponentially distributed,
$P_a(\tau_a) = \beta e^{-\tau_a \beta}$. A notable variation assumes
that all infected nodes remain infective for a fixed time $\tau$. The
SIR model can be analyzed exactly in this setting by means of the
generating function approach (see
Sec.~\ref{sec:mapping-percolation-SIR}).
From a practical point of view, the Poisson assumption leads to an
increased mathematical tractability. Indeed, since the rates of
transmission and recovery are constant, they do not depend on the
previous history of the individual, and thus lead to memoryless,
Markovian processes
\cite{Ross1996,tijms2003first,vankampen,PVM_PAComplexNetsCUP}.
While the Poisson approximation may be justified when only the average
rates are known \cite{burstylambiotte2013}, it is at odds with empirical
evidence for the time duration of the infective period in most diseases
\cite{BLYTHE01011988}, whose distribution usually features a peak
centered on the average value but exhibits strongly non-exponential
tails. Furthermore, the interest in non-exponential transmission
processes has also been fueled by the recent evidence on the patterns of
social and communication contacts between individuals, which have been
observed to be ruled by heavy-tailed distributions of inter-event times
(see Sec.~\ref{sec:epid-proc-temporal-nets}).
The framework of non-Poissonian infection and recovery
processes can be set up as follows, for both the SIS and SIR models
\cite{boguna_simulating_2013}: Infected individuals remain infective
for a period of time $\tau_i$, which follows the (non-exponential)
distribution $P_i(\tau_i)$, after which they recover. For the sake of
simplicity, it is assumed that this distribution is the same for all
nodes. Infection events take place along \textit{active} links,
connecting an infected to a susceptible node. Active links transmit
the disease at times following the inter-event distribution
$P_a(\tau_a)$, i.e. a susceptible individual connected to an infected
node becomes infected at a time $\tau_a$, measured from the instant
the link became active. If a susceptible node is connected to more
than one infected node, it becomes infected at the time of the first
active link transmitting the disease. The complexity of this
non-Markovian process is now evident: the infection of a node depends
not only on the number of its neighbors, but also on the time at which
each connection became active.
Numerical results on non-Poissonian epidemics in networks are relatively scarce. Simple event-driven
approaches rely on a time-ordered sequence of events (tickets) that
represent actions to be taken (recovery or infection) at given fixed
times, which are computed from the inter-event distributions
$P_i(\tau_i)$ and $P_a(\tau_a)$. These approaches are quite demanding,
so only small system sizes can be considered. For example,
\citet{van_mieghem_non-markovian_2013} report results for the SIS
model with Poissonian recovery, with rate $\mu$, while infection
happens with a non-exponential distribution following the
Weibull form, $P_a(\tau_a) \sim (\tau_a/b)^{\alpha-1}
e^{-(\tau_a/b)^{\alpha}}$. In this case, strong variations in the value of the prevalence
and of the epidemic threshold are found when varying the parameter
$\alpha$. A promising approach is provided by the general
simulation framework proposed by \citet{boguna_simulating_2013}, based
on the extension of the Gillespie algorithm for Poissonian
processes \cite{gillespie_exact_1977}. This algorithm allows
the simulation of much larger network sizes.
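To make the ticket-based scheme concrete, the following Python sketch
is our own minimal event-driven SIR illustration (not the algorithm of
the cited references): recovery is Poissonian, activation times follow
a Weibull distribution with arbitrary parameters, and, as a
simplification, only the first activation of each edge is scheduled:
\begin{verbatim}
import heapq
import random
import networkx as nx

random.seed(8)
G = nx.erdos_renyi_graph(2000, 0.005, seed=9)
mu, shape, scale = 1.0, 0.5, 1.0  # recovery rate, Weibull parameters

state = {v: "S" for v in G}
events = [(0.0, "inf", 0)]  # (time, type, node) tickets, time-ordered

while events:
    t, kind, v = heapq.heappop(events)
    if kind == "inf" and state[v] == "S":
        state[v] = "I"
        t_rec = t + random.expovariate(mu)  # Poissonian recovery
        heapq.heappush(events, (t_rec, "rec", v))
        for w in G[v]:
            # Weibull-distributed activation time of the edge (v, w)
            t_inf = t + random.weibullvariate(scale, shape)
            if t_inf < t_rec:  # transmit only while v is infective
                heapq.heappush(events, (t_inf, "inf", w))
    elif kind == "rec":
        state[v] = "R"

print("final outbreak size:", sum(s == "R" for s in state.values()))
\end{verbatim}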
The consideration of non-Poissonian infection or recovery processes
does not lend itself easily to analytical
approaches \cite{burstylambiotte2013}.
Some simple forms for the distribution of
infectious periods, such as the Erlang distribution, which can be
described as the convolution of identical Poisson processes
\cite{renewal}, can be tackled analytically by postulating an extended
epidemic model with different infective phases and Poissonian
transitions among them \cite{Lloyd07052001,Lloyd200159}. However,
general non-Poissonian forms lead to convoluted sets of
integro-differential equations \cite{Keeling03011997}.
As a consequence there are not many analytical results for
non-Poissonian transitions in complex networks.
We can mention the
results of \citet{min_suppression_2013}, who consider the SIR process
on a network in which infection events follow an inter-event
distribution $P_a(\tau_a)$. Assuming that infected nodes remain in that
state for a fixed amount of time $\tau_i$, it is possible to compute
\cite{min_suppression_2013} the disease transmissibility as
\begin{equation}
T(\tau_i) = 1 - \int_{\tau_i}^\infty \psi(\Delta) d \Delta,
\label{transmissibility_T_tau_i}
\end{equation}
where $\psi(\Delta) = \int_{\Delta}^\infty P_a(\tau_a) d\tau_a /
\int_{0}^\infty \tau_a P_a(\tau_a) d\tau_a$ is the probability density of
the waiting time between the infection (assumed uniform in time) and the
next activation event. Eq.~(\ref{transmissibility_T_tau_i}) assumes that the dynamics of
infections follows a stationary renewal process \cite{renewal,PVM_PAComplexNetsCUP}. Applying
the generating function approach (see Sec.~\ref{sec:4.B}), the epidemic
threshold is obtained, as a function of $\tau_i$, from the implicit
equation
\begin{equation}
T({\tau_i}_c) = \frac{\av{k}}{\av{k^2} - \av{k}}.
\end{equation}
For a power-law distribution $P_a(\tau_a) \sim \tau_a^{-\alpha}$,
it is found that ${\tau_i}_c$ diverges as
$\alpha \to 2$, implying that only diseases without recovery are able
to spread through the network \cite{min_suppression_2013}.
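For a pure power law $P_a(\tau_a) = (\alpha-1) t_0^{\alpha-1}
\tau_a^{-\alpha}$ with $\tau_a \geq t_0$, the transmissibility admits a
closed form, and the critical infective period can be located
numerically. The following Python sketch is our own illustration (the
exponent, the cutoff $t_0$, and the network moments are arbitrary
choices):
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

alpha, t0 = 2.5, 1.0  # P_a(tau) ~ tau^{-alpha} for tau >= t0 (assumed)

def T(tau_i):
    # Transmissibility for a stationary renewal process with a pure
    # power-law inter-event distribution (closed form, alpha > 2).
    mean = t0 * (alpha - 1.0) / (alpha - 2.0)
    if tau_i <= t0:
        return tau_i / mean
    tail = t0 ** (alpha - 1.0) * (tau_i ** (2.0 - alpha)
                                  - t0 ** (2.0 - alpha)) / (2.0 - alpha)
    return (t0 + tail) / mean

k1, k2 = 6.0, 100.0  # illustrative degree moments
Tc = k1 / (k2 - k1)  # threshold condition for uncorrelated networks
tau_i_c = brentq(lambda x: T(x) - Tc, 1e-6, 1e9)
print(f"T_c = {Tc:.4f}, critical infective period = {tau_i_c:.3f}")
\end{verbatim}
As $\alpha \to 2$ the mean inter-event time diverges, $T(\tau_i) \to 0$
for any finite $\tau_i$, and ${\tau_i}_c$ diverges, consistently with
the statement above.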
An important step forward in the treatment of generic nonexponentially
distributed recovery and transmission times in the SIR model is
the application of a message-passing method,
as reported by \citet{Karrer2010}. This approach leads to an exact
description in terms of integro-differential equations for trees and
locally tree-like networks, and to exact bounds for non-tree-like
networks, in good agreement with simulations.
Finally, \citet{PVM_nonMarkovianSIS_NIMFA_2013} propose an extension of
the SIS IBMF theory for non-exponential distributions of infection or
healing times. Using renewal theory, they find that the functional
form of the prevalence in the metastable state is the same as in the
Poissonian SIS model, when the spreading rate $\lambda = \beta/\mu$ is
replaced by the average number of infection attempts during a recovery
time. The theory by \citet{PVM_nonMarkovianSIS_NIMFA_2013} also allows
to estimate the epidemic threshold in non-Markovian SIS
epidemics.
\subsubsection{The SIRS model}
\label{sec:sirs-model}
The behavior of the SIRS model on complex networks
has been analytically considered by
\citet{ForestFireSatorras09} at the DBMF level. Within this approximation,
the steady-state solution of the SIRS model can be exactly mapped to that
of the SIS model, via the identification of the densities of infected
individuals
\begin{equation}
\rho_\mathrm{SIRS}(\eta, \lambda) = \frac{\eta}{\eta+1}
\rho_\mathrm{SIS}(\lambda),
\end{equation}
where $\eta$ is the immunity decay rate.
Therefore, within DBMF, all the critical properties of the SIRS model are the same
as the SIS model, the only effect of $\eta$ being
a rescaling of the density of infected individuals.
Numerically, the SIRS model was studied by \citet{abramson01} on
small-world Watts-Strogatz networks (see
Sec.~\ref{sec:basic-network-models}) within a discrete time
deterministic framework, in which infected individuals remain infective
for a fixed time $\tau_I$, after which they recover, while recovered
individuals remain in this state for a fixed time $\tau_R$. For large
values of the Watts-Strogatz model rewiring probability $p$, a periodic
steady state is observed, in which the state of all nodes stays
synchronized~\cite{abramson01}. The level of synchronization increases
with the average degree and also with $p$, above a threshold $p_c$
depending on $\langle k \rangle$ for fixed network size.
The SIRS model can be also interpreted in terms of a disease that
causes death ($I \to R$), leading to an empty node that can be later
occupied by the birth of a new, susceptible individual ($R \to S$).
Within this interpretation, \citet{liu04:_spread} consider a generalized
SIRS model, allowing additionally for simple recovery ($I \to S$ with
rate $\gamma$) and
death of susceptible individuals due to other causes ($S\to R$ with
rate $\alpha$). Applying a DBMF formalism, they recover again a
threshold inversely proportional to the second moment of the degree
distribution, modulated by the diverse parameters in the model, in
agreement with the SIS result.
\subsubsection{The SEIR model}
\label{sec:seir-model}
The SEIR model is generally used to model influenza-like-illness and
other respiratory infections. In the context of networks, this model
has been used by \citet{doi:10.1142/S0218127405012776} to study
numerically the evolution of the Severe Acute Respiratory Syndrome
(SARS) in different social settings, using both deterministic and
stochastic versions of the model, in which different reaction rates
were adjusted using empirical spreading data of the disease.
The edge-based compartmental modelling approach can be adapted
to deal with multiple infectious stages, including SEIR as a
particular case~\cite{10.1371/journal.pone.0069162}.
Exposed individuals can also play a role in more complex epidemiological
models. Thus, for example, the SEIRS model can be used to mimic the
eventual waning of the immunization of recovered individuals, which
implies one additional transition rule, Eq.~(\ref{eq:waning}).
The properties of the SEIRS model in Watts-Strogatz small-worlds
networks (see Sec.~\ref{sec:basic-network-models}) have been described
by \citet{5365379}. A variation of the SEIRS model without the
recovered compartment, or in other words, in the limit of the reaction rate
$\eta \to \infty$ (SEIS), which coincides with a two-stage variation of the
classical contact process \cite{1999}, has been analyzed in
heterogeneous networks by \citet{Masuda200664}. Application of DBMF
theory recovers the mapping to the simple SIS model obtained in the
case of the SIRS epidemics.
\subsection{Realistic static networks}
The analytical and numerical results presented so far for the
paradigmatic SIS and SIR models have focused mainly on random
undirected uncorrelated networks, which are only
characterized by their degree distribution, assuming that the rest of
the properties are essentially random. However, real networks are far
from being completely random. Beyond the degree distribution, a
multitude of other topological properties, such as clustering, degree
correlations, weight structure, etc. (see
Sec.~\ref{sec:general-definitions}), are needed to
characterize them.
\subsubsection{Degree correlations}
\label{sec:effects-degr-corr}
Most theoretical results on epidemic spreading in networks, especially
at the DBMF level, are obtained imposing a lack of correlations at the
degree level, that is, assuming that the probability that a vertex of
degree $k$ is connected to a vertex of degree $k'$ is given by
$P(k'|k) = k' P(k') / \av{k}$ \cite{Dorogovtsev:2002}. However, most
natural networks show different levels of correlations, which can have
an impact on dynamical processes running on top of them.
From a theoretical point of view, the specific effect of degree
correlations, as measured by the different observables detailed in
Sec.~\ref{sec:degree-correlations}, is difficult to assess. However,
some specific results are available. At the level of DBMF theory (see
Sec.~\ref{sec:heter-mean-field-2}) it has been shown that for
scale-free networks with $\gamma<3$, no sort of degree correlations
is able to alter the vanishing of the epidemic threshold in the
thermodynamic limit \cite{marian3,marianproc}. From a numerical point
of view, however, the precise determination of the effects of degree
correlations on the position of the epidemic threshold and the shape
of the prevalence function is problematic.
Indeed, it is generally not possible to
ascertain if the changes in the epidemic process are due to the
presence of correlations or other
topological properties generally related to correlations, such as
local clustering. Initial simulations on network models
\cite{structured,sander} claimed that disassortative degree
correlations could induce a finite threshold in the SIS model in
scale-free networks. However, those claims were based on networks with
an underlying finite-dimensional structure \cite{structurednets}, and
most probably the finite threshold observed was due to this
effect.
For the SIS model, the main IBMF result,
Eq.~(\ref{eq:IBMFthreshold}), stating that the epidemic threshold is the
inverse of the largest eigenvalue of the adjacency matrix $\Lambda_1$,
remains unaltered. The presence of correlations has only the
effect of changing the largest eigenvalue. In this respect,
\citet{PVM_assortativity_EJB2010} showed that increasing the degree
assortativity, by means of an appropriately defined degree preserving
rewiring scheme, increases the largest eigenvalue of the adjacency
matrix, thus reducing the effective IBMF epidemic threshold, in a
network of fixed size $N$. On the other hand, the induction of degree
disassortativity reduces the largest eigenvalue, with a corresponding
increase of the effective IBMF threshold. This observation is
confirmed by \citet{Goltsev12} who estimate, by means
of the power iteration method, the
largest eigenvalue of the adjacency matrix as
\begin{equation}
\Lambda_1 \simeq \frac{\av{k^2}}{\av{k}} + \frac{\av{k} \sigma^2
r}{\av{k^2}},
\end{equation}
where $\sigma$ is a positive function of the moments of the degree
distribution and $r$ is the Pearson correlation coefficient (see
Sec.~\ref{sec:degr-degr-distr}). Thus assortativity with $r>0$
(resp. disassortativity with $r<0$) is associated with an increase
(resp. decrease) of the largest eigenvalue. Other properties of the
largest eigenvalue in general networks with any kind of correlations,
such as the bound
\[
\max\left(\sqrt{\av{k^2}},\sqrt{k_{max}}\right)\leq\Lambda_1\leq k_{max},
\]
are derived in \citet{PVM_graphspectra}.
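These spectral statements are easy to check numerically, as in the
Python sketch below (our own illustration; the network model is an
arbitrary choice), which computes $\Lambda_1$, the corresponding IBMF
threshold, and the bound just quoted:
\begin{verbatim}
import networkx as nx
import numpy as np

G = nx.barabasi_albert_graph(2000, 4, seed=10)  # illustrative network
A = nx.to_numpy_array(G)
Lambda1 = np.linalg.eigvalsh(A)[-1]  # A is symmetric

k = A.sum(axis=1)
k2, kmax = (k ** 2).mean(), k.max()
print(f"Lambda_1 = {Lambda1:.2f}, IBMF threshold = {1 / Lambda1:.4f}")
print(f"max(sqrt(<k^2>), sqrt(k_max)) = "
      f"{max(np.sqrt(k2), np.sqrt(kmax)):.2f}"
      f" <= {Lambda1:.2f} <= k_max = {kmax:.0f}")
\end{verbatim}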
Regarding the SIR model, the mapping to percolation (see
Sec.~\ref{sec:4.B}) allows one to obtain more precise information. Assortative
correlations can induce a vanishing threshold in networks with
finite second moment of the degree distribution~\cite{morenopercolation}.
The more general treatment by \citet{PhysRevE.78.051105}, considering the
\textit{branching matrix} $B_{k, k'} = (k'-1)P(k'|k)$
\cite{marianproc}, allows one to check explicitly the effects of degree
correlations on the epidemic threshold. Indeed disassortative
correlations increase the threshold from its uncorrelated value, while
assortative correlations decrease
it \cite{PhysRevE.78.051105,Miller2009}. These results, valid for the
SIR model, can also be extended to the SEIR model~\cite{Kenah2011}.
While no explicit
expression for the threshold can be obtained, it is possible to work
out upper and lower bounds, in terms of the transmissibility $T$, which
read
\begin{equation}
\frac{1}{\max_k B(k)} \leq T_c \leq \frac{\av{k(k-1)}}{\sum_k
k(k-1)B(k)P(k)},
\end{equation}
where $B(k) = \sum_{k'} B_{k, k'}$~\cite{PhysRevE.78.051105}. With
respect to the behavior of the outbreak size close to the epidemic
threshold, degree correlations are irrelevant, in the sense that the
critical exponents are not changed, when the following conditions are
fulfilled \cite{PhysRevE.78.051105}: (i) The largest eigenvalue of the
branching matrix is finite if $\av{k^2}$ is finite, and infinite if
$\av{k^2} \to \infty$; (ii) the second largest eigenvalue of $B_{k, k'}$
is finite; (iii) the eigenvector associated with the largest eigenvalue
has nonzero components in the limit $k \to \infty$. On the other hand,
if any one of these conditions is not fulfilled (large assortativity
leads to the failure of condition (ii), while strong disassortativity
affects condition (iii)), degree correlations become relevant and they
lead to new critical exponents. At the DBMF level the results of
\citet{marian3} for the SIS model extend to the SIR case, implying again
the inability of degree correlations to alter the vanishing of the
epidemic threshold in the thermodynamic limit for $\gamma<3$. This
result has been confirmed numerically by means of the direct numerical
solution of the DBMF equations of the SIR model on scale-free networks
with weak assortative correlations \cite{PhysRevE.68.035103}. The main
effect of these correlations is to induce a smaller overall prevalence
and a larger average lifetime of epidemic outbreaks.
\subsubsection{Effects of clustering}
\label{sec:effects-clust-coeff}
While a priori entangled with degree correlations and other
topological observables, the effect of clustering on epidemic
spreading has attracted considerable interest, due to the fact
that social networks, the basic substrate for human epidemic
spreading, are generally highly clustered. Initial work in this area
\cite{keeling99}, based on a simple mean-field approximation (and thus
valid in principle for homogeneous networks) already pointed out the
effects of clustering (measured as the clustering coefficient $C$,
see Sec.~\ref{sec:clust-coeff-clust}) on the SIR dynamics.
A noticeable departure from the standard mean-field
results valid in the absence of clustering is observed,
in particular a decrease of the outbreak size when increasing $C$.
In the case of the Watts-Strogatz model (see
Sec.~\ref{sec:basic-network-models}), the paradigm of a network with large
clustering, exact analytical results, confirmed by numerical
simulations, were obtained by \citet{moore00} for any value of the
rewiring probability $p$.
Another analytical approach was proposed by
\citet{PhysRevE.68.026121}, who considered a network model based on a
one-mode projection of a bipartite network (see
Sec.~\ref{sec:gener-simple-graphs}) and applied the usual mapping to
percolation. Apart from confirming the observation by
\citet{keeling99} that epidemic outbreaks are a decreasing function of
$C$, it was interestingly observed that, at odds with the behavior of
networks with no clustering, for large $C$ the outbreak size saturates
to a constant value when increasing the transmissibility even for
moderate values of $T$, suggesting that ``in clustered networks
epidemics will reach most of the people who are reachable even for
transmissibilities that are only slightly above the epidemic
threshold'' \cite{PhysRevE.68.026121}. Along the same line,
\citet{Miller2009}, considering a model of random networks with
assortative correlations and tunable clustering, was able to show
that, for a SIR dynamics with uniform transmissibility $T$, clustering
hinders epidemic spreading by increasing the threshold and reducing
the prevalence of epidemic outbreaks.
A more general approach, valid for any network,
confirms the previous observations~\cite{PhysRevLett.97.088701}. In
this approach, the generating function calculation scheme includes the
concept of edge multiplicity $m_{ij}$, defined as the number of
triangles in which the edge connecting nodes $i$ and $j$ participates.
In the limit of weak clustering, corresponding to constant $m_{ij}= m_0$,
the clustering spectrum (see
Sec.~\ref{sec:clust-coeff-clust}) follows the scaling
$\bar{c}(k) \sim k^{-1}$, which is
essentially decoupled from two-vertex degree correlations. The epidemic threshold depends on $m_0$ and is shifted with
respect to the unclustered result; however, for
scale-free networks, this shift is not able to restore a finite
threshold in the thermodynamic limit. For strong clustering, with a
clustering spectrum decaying more slowly than $k^{-1}$, numerical
simulations in a model with tunable clustering coefficient
\cite{PhysRevE.72.036133} confirm the inability of clustering to
restore a finite threshold in scale-free networks. Other numerical and
analytical works \cite{Miller2009,Miller06122009} have confirmed these
results in different clustered network models.
Within the context of IBMF theory for the SIS model, it is possible to
find bounds for the largest eigenvalue of the adjacency matrix as a
function of the clustering (measured by the number of triangles in the
network), indicating that the SIS epidemic threshold decreases
with increasing clustering coefficient \cite{PVM_graphspectra}.
\subsubsection{Weighted networks}
\label{sec:weighted-networks}
If we want to take into account that not all contacts in a
social network are equally facilitating contagion (e.g. due to
the different relative frequency of physical contacts associated to
different edges), we must consider weighted networks, where
a weight $\omega_{ij} \ge 0$ is assigned to the edge between connected
nodes $i$ and $j$ (see Sec.~\ref{sec:gener-simple-graphs}).
The models for epidemic spreading are generalized assuming the rate of
disease transmission between two vertices equal to some function
of the weight of the link joining them.
The simplest possibility occurs when the probability of infection
transmission along an edge is directly proportional to the edge weight.
The IBMF theory for the SIS model is readily applied,
just replacing in Eq.~(\ref{eq:IBMFSISequations}) the adjacency matrix
$a_{ij}$ by the matrix $\Omega_{ij} = \omega_{ij} a_{ij}$. The IBMF
threshold is just the inverse of the largest eigenvalue of $\Omega$
\cite{4610111}. \citet{Peng2010549} consider a generalized SIS model
defined by the matrix $\beta_{ij}$, whose terms are the probabilities
that node $i$ is infected by node $j$ through an edge joining them.
Defining the \textit{parametrized adjacency matrix}
$M_{ij} = \beta_{ij} + (1-\mu_i) \delta_{ij}$, where $\mu_i$ is the
recovery probability of node $i$, \citet{Peng2010549} (see also
\citet{PVM_heterogeneous_virusspread}) show that endemic
states occur when the largest eigenvalue (in absolute value) of the
parametrized adjacency matrix is larger than one.
The DBMF approach to the SIS process on weighted networks is simplified
by the introduction of additional assumptions, such as a functional
dependence of the weights of edges on the degree of the nodes at their
endpoints \cite{dynam_in_weigh_networ}. \citet{karsai:036116}
consider the SIS process in a network with local spreading rate, at
the DBMF level, $\lambda_{k k'} \sim (k k')^{-\sigma}$, with $\sigma$
in the range $[0,1]$.
The resulting equations are found to
depend on the effective degree exponent $\gamma' =
(\gamma-\sigma)/(1-\sigma)$. For $\gamma'<3$, a null threshold in the
thermodynamic limit is obtained, while for $\gamma'>3$, the threshold
is finite. \citet{karsai:036116} additionally discuss a finite-size
scaling theory, relating the average prevalence to the network size,
which is checked against numerical simulations. The strict correlation
between weights and degrees is relaxed in other works, such as
\citet{PhysRevE.85.056106}, where a purely edge-based mean-field
approach for weighted homogeneous networks for the SIS model is
proposed. By means of this approach, and focusing on bounded and
power-law weight distributions, \citet{PhysRevE.85.056106} show that the more homogeneous
the weight distribution, the higher the epidemic prevalence.
Other approaches to the SIS model include a pair-based mean-field
approach~\cite{2012arXiv1208.6036R} for networks with
random and fixed deterministic weight distributions. The main result
is the observation that a weight distribution leads to the
concentration of infectiousness on fewer target links (or individuals),
which causes an increase in the epidemic threshold in both kinds of
networks considered.
\citet{0256-307X-22-2-068} report numerical results for the behavior
of the SI model on the growing weighted network model proposed by
\citet{barrat04:_weigh}
with a local spreading rate of the form
$\lambda_{ij} \sim (\omega_{ij})^\alpha$. The main
results obtained concern the slowing down of the disease spread in
weighted networks with respect to their unweighted counterparts, which
is stronger for larger weight dispersion. Interestingly, they also
report that the velocity of spread, after a sharp peak, decays with a
slow power-law form, at odds with the exponential form obtained in
unweighted networks \cite{Barthelemy2005275}.
In the case of the SIR model, \citet{Chu2011471} present a DBMF analysis
for the case of weights correlated with the degree. The analysis is
based on a transmission rate $\lambda_{k'k}$ from vertices of degree
$k'$ to vertices of degree $k$, taking the form $\lambda_{k k'} =
\lambda k \omega_{k k'}/s_k$ (where $s_k$ is the strength of a node of
degree $k$) and on an infectivity of nodes $\phi(k)$, denoting the rate at
which a node of degree $k$ transmits the disease. Writing down rate
equations for the usual relevant DBMF quantities for the SIR model, and
assuming $\omega_{k k'} \sim (k k')^\sigma$ and $\phi(k) \sim
k^\alpha$, \citet{Chu2011471} find the threshold
\begin{equation}
\lambda_c = \frac{\av{k^{\sigma+1}}}{\av{k^{\alpha+\sigma+1}}}.
\end{equation}
By means of numerical simulations, \citet{Chu2011471} report additionally
that the size of epidemic outbreaks increases with the exponent $\alpha$,
while it decreases with increasing $\sigma$.
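For concreteness, this threshold can be evaluated numerically for a
given degree distribution. The following minimal Python sketch computes
the two moments for a truncated power law $P(k) \sim k^{-\gamma}$; the
values of $\gamma$, $\sigma$, $\alpha$ and the degree cutoffs are
illustrative assumptions.
\begin{verbatim}
# Minimal sketch: evaluate lambda_c = <k^(sigma+1)> / <k^(alpha+sigma+1)>
# for a truncated power-law degree distribution. All parameters are
# illustrative assumptions.
import numpy as np

gamma, sigma, alpha = 2.5, 0.5, 1.0
k = np.arange(3, 10**4, dtype=float)
Pk = k**(-gamma)
Pk /= Pk.sum()

lambda_c = (Pk * k**(sigma + 1)).sum() / (Pk * k**(alpha + sigma + 1)).sum()
print("DBMF SIR threshold lambda_c =", lambda_c)
\end{verbatim}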
An analysis of the SIR model in terms of pair approximations for IBMF
theory is presented by \citet{2012arXiv1208.6036R}, reaching analogous
results as those obtained for the SIS model within the same formalism.
Also noteworthy is the numerical work of \citet{Eames200970} on the
SIR model in a realistic social network constructed from actual survey
data on social encounters recorded from a peer-group sample of 49
people.
The results of \citet{Eames200970} highlight the strong correlations
between infection risk and node degree and weight, in correspondence
with the observations at the DBMF level. Additional simulations
considering different immunization strategies (see
Sec.~\ref{sec:effic-immun-prot}) indicate that, for this particular
realistic network, targeting for total degree or total weight provides
approximately the same efficiency levels.
Concerning other models,
\citet{brittonweight2011} have discussed an epidemic model in
a weighted network in which the weights attached to nodes of degree
$k$ are random variables with probability
distributions $q(\omega|k)$, in a construction akin to a weighted
configuration model (see Sec.~\ref{sec:basic-network-models}). In this
kind of network, \citet{brittonweight2011} observe, by means of an
analysis based on branching theory, that both the epidemic threshold
and the outbreak probability are affected by the correlations between
the degree of a node and the weights attached to it. This observation
is confirmed by numerical simulations of their weighted
network model fitted to empirical data from different network
examples, showing that the epidemic threshold is different in the
original network with respect to a network with reshuffled weights.
On the other hand, \citet{Deijfen201157} analyzes immunization of
weighted networks with random and degree dependent weights, observing,
in agreement with~\citet{Eames200970}, that targeting the largest weights
outperforms other immunization strategies.
In the framework of epidemic models on weighted networks it is possible
to include also the contact process (CP) on networks. In this model
each infected node may transmit the disease to at most one neighbor
for each time step.
This can be interpreted in continuous time as a SIS-like model with a
spreading rate $\lambda_{k k'} = 1/k$ for any edge departing from a node
of degree $k$. This modification has the effect of reducing the
importance of degree fluctuations in the spreading dynamics: the
threshold is finite for any value of the exponent
$\gamma$~\cite{Castellano2006,Olinky2004}. The same conclusion can
be drawn also for a model where multiple neighbors can be infected
simultaneously, but up to a fixed maximum value of neighbors
(and not for any $k$ as in SIS)~\cite{Joo2004}.
\subsubsection{Directed networks}
\label{sec:directed-networks}
Directed networks are useful to represent specific types of epidemic
transmission in which there is an intrinsic directionality in the
propagation. An example is given by diseases communicated by means of
blood transfusions or needle sharing. The study of epidemic processes in
directed networks is difficult due to the component structure of this
kind of networks (see Sec.~\ref{sec:general-definitions}). Indeed, the
position of a node in a specific network component can restrict or
enhance its spreading capabilities with respect to other
positions. Thus, in order to be able to generate a macroscopic outbreak,
a seed of infection should be located on the GIN or GSCC components;
seeds on the GOUT or the tendrils will in general produce small
outbreaks, irrespective of the spreading rate. In this sense, the
distribution of outbreak sizes starting from a randomly chosen vertex is
proportional to the distribution of outcomponents.
In the case of the SIR model, the mapping to percolation allows to apply
the generating function formalism developed for percolation in random
directed networks
\cite{Newman2001,PhysRevE.66.015104}. For purely directed networks
(i.e. in which all edges have assigned a directionality), computations
depend on the joint probability $P(k^\textrm{in}, k^\textrm{out})$, see
Section~\ref{sec:degr-degr-distr}, that a randomly chosen node has
in-degree $k^\textrm{in}$ and out-degree $k^\textrm{out}$, which in general exhibits
correlations between the two values.
In the absence of correlations among the degrees of
neighbors~\footnote{Notice that these are correlations among two
connected vertices, while correlations between $k^\textrm{in}$ and $k^\textrm{out}$ are
for the {\em same } node.}, under the tree-like assumption, the
critical transmissibility is
\begin{equation}
\label{eq:SIRdirectthres}
T_c = \frac{\av{k^\textrm{in}}}{\av{ k^\textrm{in} k^\textrm{out}}},
\end{equation}
where averages are taken over the distribution $P(k^\textrm{in}, k^\textrm{out})$
\cite{Newman2001}.
The same result can be obtained by means of more intuitive arguments
\cite{PhysRevE.66.015104}. Eq.~(\ref{eq:SIRdirectthres}) highlights
the important role of correlations between the in-degree and
out-degree in directed networks. Its full discussion is, however, not
easy, since one cannot impose arbitrary forms on $P(k^\textrm{in},
k^\textrm{out})$, given the explicit constraint $\av{k^\textrm{in}} =
\av{k^\textrm{out}}$. \citet{PhysRevE.66.015104} discuss the effects of
scale-free degree distributions with exponents $\gamma_\mathrm{in}$
and $\gamma_\mathrm{out}$ for in-degree and out-degree, respectively,
and given correlations $P(k^\textrm{in}, k^\textrm{out})$.
With this distribution, epidemics in the GWCC behave as in an undirected network
with effective degree distribution $P(k) = \sum_{k^\textrm{in}=0}^kP(k^\textrm{in},
k-k^\textrm{in})$, while the $\beta_{SIR}$ exponent characterizing the size of
supercritical outbreaks takes the form of
Eq.~(\ref{eq:SIRbetaexponent}), with an effective $\gamma^* =
\gamma_\mathrm{out} + (\gamma_\mathrm{in} -
\gamma_\mathrm{out})/(\gamma_\mathrm{in}-1)$~\cite{PhysRevE.66.015104}.
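To make the role of in-degree--out-degree correlations in
Eq.~(\ref{eq:SIRdirectthres}) concrete, the threshold can be estimated
directly from a sample of joint degrees, as in the following minimal
Python sketch; the Poisson marginal and the way correlations are
imposed are illustrative assumptions.
\begin{verbatim}
# Minimal sketch: T_c = <k_in> / <k_in k_out> from sampled joint degrees.
# Poisson marginals and the correlation construction are illustrative.
import numpy as np

rng = np.random.default_rng(1)
k_in = rng.poisson(3.0, size=10**5)

k_out = rng.permutation(k_in)   # uncorrelated: <k_in k_out> ~ <k>^2
print("uncorrelated T_c ~", k_in.mean() / (k_in * k_out).mean())

k_out = k_in.copy()             # fully correlated: k_out = k_in
print("correlated   T_c ~", k_in.mean() / (k_in * k_out).mean())
\end{verbatim}
Both constructions preserve the constraint
$\av{k^\textrm{in}} = \av{k^\textrm{out}}$, while the positive
correlations of the second case lower the critical transmissibility.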
More generally, it is possible to
consider semi-directed networks, in which edges may be directed or
undirected \cite{Meyers2006}.
The network specification is then given in terms of the probability $P(k^\textrm{in}, k^\textrm{out},
k)$ that a vertex has $k^\textrm{in}$ incoming edges, $k^\textrm{out}$ outgoing edges and
$k$ bidirectional edges.
The presence of undirected links implies the existence of short loops
of length $2$, and thus the violation of the tree-like assumption.
Considering the possibility of different
transmissibilities $T_u$ and $T_d$ for undirected and directed edges,
respectively, \citet{Meyers2006} find expressions for the
critical values of one of them, keeping the other fixed. The rather
involved expressions simplify when imposing that the in-degree, out-degree and
undirected degree of each vertex are uncorrelated.
In particular, when these quantities obey Poisson distributions,
the epidemic threshold is given by
\cite{Meyers2006}
\begin{equation}
T_{uc} \av{k}_u + T_{dc} \av{k}_d =1,
\end{equation}
where $\av{k}_u$ and $\av{k}_d$ are the undirected and directed average
degrees, respectively. The analysis of these results allows the
identification of the key epidemiological difference between directed
and undirected networks: while in undirected networks the probability of
an outbreak and the expected fraction of the population affected (if
there is one) are equal, they differ in directed networks: depending on
the topology any of the two can be larger~\cite{Meyers2006}.
The generic case of semi-directed networks with arbitrary one-point
and two-point correlations is treated in~\citet{Boguna2005}.
The temporal evolution of epidemic outbreaks is considered
using the edge-based compartmental modelling
in~\citet{10.1371/journal.pone.0069162}.
Epidemic processes on purely directed networks can be tackled by an
extension of the standard DBMF. The key point is the consideration of
new degree classes which are defined in terms of the pair of in-degree
and out-degree values, $(k^\textrm{in}, k^\textrm{out})$. This implies that the dynamical
quantities characterizing the processes also depend on these two
values, $\rho^{\alpha}_{k^\textrm{in}, k^\textrm{out}}$, see
Sec.~\ref{sec:heter-mean-field-1}
and~\ref{sec:heter-mean-field}. Equations for the SIS and SIR models
(Eqs.~(\ref{eq:HMFSISequation}) and Eq.~(\ref{eq:SIR_HMF})) translate
directly with just one caveat: degree-degree two-vertex correlations (see
Sec.~\ref{sec:degree-correlations}) in purely directed networks
translate into the conditional probability $P^\mathrm{out}({k^\textrm{in}}',
{k^\textrm{out}}'|k^\textrm{in}, k^\textrm{out})$ that an outgoing edge from a vertex $(k^\textrm{in}, k^\textrm{out})$ is
connected to a vertex $({k^\textrm{in}}', {k^\textrm{out}}')$. Lack of two-point
degree-degree correlations implies
\begin{equation}
P^\mathrm{out}({k^\textrm{in}}', {k^\textrm{out}}'|k^\textrm{in}, k^\textrm{out}) = \frac{{k^\textrm{in}}'
P({k^\textrm{in}}',{k^\textrm{out}}')}{\av{k^\textrm{out}}}.
\end{equation}
\citet{Boguna2005} developed this DBMF formalism for the SIR model,
finding a threshold that, in the general case, is a function of the
largest eigenvalue of the extended connectivity matrix
${k^\textrm{in}}' P({k^\textrm{in}}', {k^\textrm{out}}'|k^\textrm{in}, k^\textrm{out})$, and that, without degree-degree
correlations, reduces to Eq.~(\ref{eq:SIRdirectthres}).
In the case of the SIS model, the IBMF result is the same as in
undirected networks, since directionality (i.e. the asymmetry of the
adjacency matrix) does not explicitly enter into the theory. See also
the generalization of the IBMF theory presented by \citet{Peng2010549}
(Sec.~\ref{sec:weighted-networks}). The value of the largest eigenvalue
has been numerically studied in synthetic semi-directed networks with
directionality $\xi$, defined as the fraction of directed edges
\cite{PhysRevE.88.062802}. The main result obtained is the increase of
the epidemic threshold lower bound when increasing directionality $\xi$,
implying that directed networks hinder the propagation of epidemic
processes. At the DBMF level, an extension analogous to the one
considered for the SIR model leads to a threshold with the same
functional form, Eq.~(\ref{eq:SIRdirectthres}), in degree-degree
uncorrelated networks~\cite{tanimoto_epidemic_2011}.
\subsubsection{Bipartite networks}
Bipartite networks (see Sec.~\ref{sec:gener-simple-graphs}) represent
the natural substrate to understand the spreading of sexually
transmitted diseases, in which two kinds of individuals (males and
females) are present and the disease can only be transmitted between
individuals of different kinds\footnote{We neglect here homosexual
contacts.}. In other contexts, bipartite networks can be used to
represent vector-borne diseases, such as malaria, in which the
transmission can only take place between the vectors and the hosts
\cite{10.1371/journal.pone.0013796}, or the spreading of diseases in
hospitals, in which the different kinds of nodes account for (isolated)
patients and caregivers \cite{Ancel-Meyers:2003fe}.
Dealing with the SIR dynamics,
\citet{newman02} considers a variation of the mapping to percolation,
for a model on bipartite networks characterized by the partial degree
distributions $P_m(k)$ and $P_f(k)$, finding that the epidemic
threshold takes the form of a hyperbola in the space defined by the
male and female transmissibilities, $T_m$ and $T_f$,
\begin{equation}
T_m T_f = \frac{\av{k}_m\av{k}_f}{\av{k(k-1)}_m \av{k(k-1)}_f},
\label{SIR_bipartite}
\end{equation}
where the moments $\av{k}_\alpha$ and $\av{k(k-1)}_\alpha$ are
computed for the degree distribution $P_\alpha(k)$.
In the case of the SIS model on bipartite networks,
\citet{Gomez-Gardenes05022008}
find analogous results at the DBMF level, with threshold
on the hyperbola defined by the male and female spreading rates,
$\lambda_m$ and $\lambda_f$, of the form
\begin{equation}
\lambda_m \lambda_f = \frac{\av{k}_m\av{k}_f}{\av{k^2}_m
\av{k^2}_f},
\label{SIS_bipartite}
\end{equation}
see \citet{Wen2012967} for further results with the DBMF
formalism. The general behavior of the SIS model on multipartite
networks, allowing for more than two different classes of nodes, is
discussed by \citet{2013arXiv1306.6812S}.
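As a numerical illustration, the right-hand sides of the critical
hyperbolas in Eqs.~(\ref{SIR_bipartite}) and~(\ref{SIS_bipartite}) can
be evaluated from the moments of the partial degree distributions; the
Poisson choices in the Python sketch below are purely illustrative
assumptions.
\begin{verbatim}
# Minimal sketch: right-hand sides of the bipartite SIR and SIS (DBMF)
# critical hyperbolas, for illustrative Poisson partial degree
# distributions P_m(k) and P_f(k).
import numpy as np

rng = np.random.default_rng(2)
km = rng.poisson(4.0, size=10**5)   # male degrees (illustrative)
kf = rng.poisson(6.0, size=10**5)   # female degrees (illustrative)

sir = km.mean() * kf.mean() / ((km * (km - 1)).mean()
                               * (kf * (kf - 1)).mean())
sis = km.mean() * kf.mean() / ((km**2).mean() * (kf**2).mean())
print("SIR: critical T_m * T_f         =", sir)
print("SIS: critical lambda_m*lambda_f =", sis)
\end{verbatim}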
Expressing in Eq.~(\ref{SIR_bipartite})
the transmissibility in terms of the spreading rate,
$T_i = \lambda_i/(\lambda_i+1)$ (see Sec.~\ref{sec:4.B}) and comparing
with Eq.~(\ref{SIS_bipartite}), an interesting observation
emerges~\cite{Hernandez2013}. In the SIR case, when $\lambda_f$ diverges the
threshold value for $\lambda_m$ goes to a finite value. Hence the
possibility of an endemic outbreak is completely ruled out by reducing
the spreading rate of a {\em single} type of nodes. In the SIS case
instead, the asymptotic value is $\lambda_m=0$ and as a consequence reducing only
one spreading rate may not be sufficient to guarantee no endemic spreading.
This last conclusion, however, turns out to be an artifact of the DBMF
approach~\cite{Hernandez2013}: also for SIS dynamics a finite asymptotic threshold
is found in a theoretical approach based on a pair approximation, confirmed
by numerical simulations.
The previous conclusions hold when the topology-dependent factors appearing
on the right-hand sides of Eqs.~(\ref{SIR_bipartite})
and~(\ref{SIS_bipartite}) are finite. However, it is enough that
one of the restricted degree distributions has a diverging second
moment for the epidemic to spread over the whole network, no
matter how small the spreading rates $\lambda_i$ are.
\subsubsection{Effect of other topological features}
Many works have dealt with networks endowed with a modular (community)
structure, i.e., subdivided in groups with a relative high density of
connections within groups and a smaller density of inter-group links,
see Section~\ref{sec:centrality}.
SIS dynamics has been studied by~\citet{Liu2005} on a generalization of
the classical random graph model with probability $p$ ($q$) of
intra-(inter-) community links. The epidemic threshold is found to
decrease with $p/q$; this effect, however, cannot be attributed to the
community structure only, because of the concurrent change of the degree
distribution, which gets broader. Other studies have decoupled the two
effects, by comparing spreading dynamics on modular networks and on
randomized networks with the same $P(k)$, obtained by suitable
reshuffling~\cite{maslov02}. They support instead the opposite view that
the community structure of a network tends to hinder epidemic spreading.
Using IBMF, \citet{PVM_SIS_communityNetworks2014} express the epidemic
threshold explicitly in terms of the sizes and spreading rates in the
clusters.
For the SI dynamics, the modular structure makes the
growth of the infection slower: prevalence at fixed time is reduced in
networks with community structure~\cite{Huang2007}. The
interpretation is that the presence of communities tends to confine
the outbreak around the initial seed and hinders the transmission to
other communities. This effect is further enhanced in weighted social
networks~\cite{Onnela:2007} by the correlation between topology and
weights~\cite{Granovetter1973}: the ties bridging between strongly connected
communities are typically weak and this greatly delays the propagation among
different communities~\cite{Onnela:2007,PhysRevE.83.025102}.
Investigations on the SIRS model with fixed
infection and recovery times have focused on the oscillations of the
number of infected nodes in the stationary
state~\cite{Yan2007,Zhao2007}. Both for topologies with scale-free
and non scale-free degree distributions it turns out that the modular
structure reduces the synchronization.
Also for SIR dynamics
modularity is found to make spreading more difficult: the final value
of $\rho^R$ is smaller for stronger community structure~\cite{Wu2008}.
More convincingly, \citet{Salathe2010} show, both for empirical and
synthetic networks, that community structure has a major hindering effect
on spreading: the final value of $\rho^R$ and the height of the peak of
$\rho^I$ decrease with the modularity.
Moreover, they show that in
networks with strong community structure targeting vaccination
interventions at individuals bridging communities is more effective
than simply targeting highly connected individuals.
It is also worth mentioning the observation that SIS-like processes on
complex networks may give rise to the nontrivial scenario of Griffiths
phases~\cite{Vojta2006}, regions of the phase-space where the only
stationary state is the absorbing one, which is however reached via
anomalously long nonuniversal relaxation~\cite{Munoz2010}. This
behavior arises because of rare-regions effects, which can be due either
to quenched local fluctuations in the spreading rates or to subtle
purely topological heterogeneities~\cite{Juhasz2012,PhysRevE.86.026117}.
Such rare-region effects have been discussed in the case of the SIS
model on loopless (tree) weighted networks
\cite{Buono13,PhysRevE.87.042132,PhysRevE.88.032109}, where they have
been related to the localization properties of the largest eigenvalue of
the adjacency matrix \cite{PhysRevE.88.032109}.
\subsubsection{Epidemics in adaptive networks}
\label{sec:6.C}
Previous sections have focused on the evolution of epidemics on static
networks or on annealed topologies where connections are rewired on a
time scale much smaller than the characteristic time scale of the
infection process.
For real human disease epidemics, however, the assumption
that the structure of contacts does not depend on the progression of
the contagion is often unrealistic: In the presence of infectious spreading,
human behavior tends to change spontaneously, influencing the
spreading process itself in a nontrivial feedback loop.
The modifications induced by this coupling may be distinguished
depending on several features~\cite{Funk2010}: the source of information
about the contagion, the type of information considered and the type
of behavioral change induced.
The source of information about the spreading process may be local
(individuals decide depending on the state of their direct contacts) or
global (info on the state of the whole system is publicly available).
Different types of information may influence the behavioral choice:
in prevalence-based models, decisions are taken based on the observation
of the epidemic state of others; in belief-based models, what matters is
the awareness or risk perception, which may be (at least partially)
independent from the actual disease dynamics and often behaves in
turn as a spreading
process~\cite{Granell2013,Galvani2013,Perra2011,Salathe2008,Bagnoli2007,Funk2009}.
Finally, the behavioral change can be of different types: affecting
the state of the individual (for example via voluntary vaccination)
or the structure of contacts (eliminating existing connections or creating
new ones).
Many models incorporating these features have been investigated in mathematical epidemiology, generally assuming well-mixed
populations~\cite{Funk2010}.
Here we focus on epidemic spreading on adaptive
(or coevolving) contact networks, where the topology of the interaction
pattern changes in response to the contagion.
The coevolution between structure and dynamics is a common theme in many
contexts, from game theory to opinion dynamics~\cite{Gross2008,Nardini2008}.
The first investigation of an adaptive topology for SIS dynamics~\cite{Gross2006} includes
the possibility for individuals to protect themselves by
avoiding contacts with infected people. Infected individuals are allowed at each time step to infect each
of their susceptible contacts with probability $p$ or recover with
probability $r$ (usual SIS dynamics); in addition, susceptibles can
decide (with probability $w$) to sever a link with an infected and
reconnect to a randomly chosen susceptible.
The possibility of rewiring links drastically changes the
phase-diagram of the model.
\begin{figure}[t]
\includegraphics*[width=\columnwidth]{fig7.pdf}
\caption{Density of the infected nodes $i^*$ as a function of the
infection probability $p$ for different values of the rewiring rate
$w$. In each diagram thin lines are computed using a homogeneous
mean-field approach while circles are the results of numerical
simulations. Without rewiring only a single continuous transition
occurs for $p_c \approx 0.0001$ (a). By contrast, rewiring causes a
number of discontinuous transitions, bistability, and hysteresis loops
(indicated by arrows) in (b), (c), (d). Figure adapted
from~\citet{Gross2006}.}
\label{fig:3Gross}
\end{figure}
The threshold $p_c$, below which the system always converges to the
absorbing healthy state, is much larger than in the case of no
coevolution ($w=0$): rewiring hinders the disease propagation. More
interestingly, above this threshold a bistability region appears (see
Fig.~\ref{fig:3Gross}) with associated discontinuous transitions and
hysteresis. In this region both the healthy and the endemic state are
stable and the fate of the system depends on the initial condition. If
$p$ is further increased above a second threshold, bistability ends and
the endemic state is the only attractor of the dynamics. The
coevolution has also strong effects on the topology of the contact
network, leading to the formation of two loosely connected clusters of
infecteds and susceptibles, with a general broadening of the degree
distribution and buildup of assortative correlations. The rich
phase-diagram is recovered by a simple homogeneous mean-field approach
which complements the equation for the prevalence with two additional
equations for the density of links of $I$-$I$ and $S$-$I$ type. A
bifurcation analysis predicts also the existence of a very narrow region
with oscillatory dynamics. A more detailed approach to the same
dynamics~\cite{Marceau2010} takes into account explicitly the degree of
nodes, writing equations for the evolution of the probabilities $S_{kl}$
($I_{kl}$) that nodes in state $S$ ($I$) have degree $k$ and $l$
infected neighbors. The numerical integration of the equations is in
excellent agreement with numerical simulations both with respect to the
transient evolution and to the stationary state. Different initial
topologies (degree-regular, Poisson, power-law distributed) with the
same average connectivity may lead to radically different stationary
states: either full widespread contagion or rapid disease extinction.
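The structure of such homogeneous mean-field descriptions can be
illustrated with a minimal numerical sketch. The Python code below
integrates a pair-approximation scheme of the type discussed above
(prevalence plus per-node densities of $I$-$I$ and $S$-$S$ links, with
$S$-$I$ links fixed by conservation); the closure terms reproduce the
structure described in the text, but the precise coefficients and all
parameter values are illustrative assumptions rather than a literal
transcription of \citet{Gross2006}.
\begin{verbatim}
# Minimal sketch of a homogeneous pair approximation for adaptive SIS:
# rewiring at rate w turns S-I links into S-S links. Closure terms and
# parameter values are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

p, r, w, kmean = 0.006, 0.002, 0.02, 10.0
L = kmean / 2.0                          # total links per node

def rhs(t, y):
    i, l_II, l_SS = y
    l_SI = max(L - l_II - l_SS, 0.0)     # link conservation
    s = max(1.0 - i, 1e-12)              # density of susceptibles
    di = p * l_SI - r * i
    dII = p * l_SI * (l_SI / s + 1.0) - 2.0 * r * l_II
    dSS = (r + w) * l_SI - 2.0 * p * l_SI * l_SS / s
    return [di, dII, dSS]

sol = solve_ivp(rhs, (0.0, 5e4), [0.1, 0.1 * L, 0.8 * L], rtol=1e-8)
print("stationary prevalence i* ~", sol.y[0, -1])
\end{verbatim}
Within such a scheme, sweeping $p$ up and down from different initial
conditions can be used to probe the bistable window discussed above.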
The qualitative picture emerging from the model of \citet{Gross2006}
is found also for the adaptive SIRS model~\cite{Shaw2008} and for the
SIS dynamics where a susceptible individual rewires to any randomly
chosen other vertex (not necessarily susceptible)~\cite{Zanette2008}.
The possibility that also infected individuals decide to rewire their
connections is discussed in~\citet{Risau-Gusman2009}.
In the SIS model, the interplay of the adaptive
topology and vaccination has also been investigated~\cite{Shaw2010}.
It turns out that the vaccination frequency needed to significantly lower
the disease prevalence is much smaller in adaptive networks than in
static ones.
The effect of the very same type of adaptive rewiring introduced for SIS
has been studied also for SIR dynamics~\cite{Lagorio2011}.
In this case the effects of the coevolution are less strong, as the
time needed to reach the stationary (absorbing) state is short
(logarithmic in the system size $N$) and the global topology is only
weakly perturbed in this short interval.
The phase-diagram remains qualitatively the same as in the nonadaptive
case, with a single epidemic
transition separating a healthy state from an endemic one.
The mapping to percolation (see Sec.~\ref{sec:4.B}) is useful also here.
Coevolution leads to an effective transmissibility $T$ which decreases
with the rewiring probability $w$. One can then identify a critical value
$w_c$ above which the adaptive behavior is sufficient to completely
suppress the epidemic.
The assumptions that disconnected links are immediately rewired and
that the target vertices of the reconnection step are randomly
selected in the whole network are highly implausible in real world
situations. Attempts to go beyond these limitations include the
consideration of different rates for breaking and establishing
links~\cite{VanSegbroek2010,Guo2013} and ``intermittent'' social
distancing strategies, such that a link is cut and recreated (between
the same vertices) after a fixed time interval~\cite{Valdez2012} or
with a certain rate after both endpoints have healed~\cite{Tunc2013}.
The latter strategies are intended to mimic what happens with friends
or working partners, with which connections are reestablished after
the disease. The overarching structure of the network
remains static and there is no real coevolution (no new links are
formed). As a consequence the phase-diagram of epidemic models remains
the same found on static networks, with only an increase in the
epidemic threshold due to social distancing.
\subsection{Competing pathogens}
Another generalization of the basic modeling scheme considers the evolution
of multiple epidemic processes in competition in the same network,
a scenario with clear relevance for realistic situations.
The crucial concept here is cross-immunity, i.e. the possibility that
being infected by one pathogen confers partial or total immunity against
the others.
\citet{Newman2005} considers two SIR epidemic processes occurring
one after the other in the same static network, in conditions
of total cross-immunity: The second
pathogen can affect only survivors of the first, i.e. in the ``residual''
network obtained by removing the nodes that recovered when the first
epidemic ends. The mapping of SIR static properties to bond percolation
allows one to understand this case.
If the first pathogen is characterized by a transmissibility above
a certain value (coexistence threshold) the residual network has
no giant component and the second pathogen cannot spread globally,
even if it has a huge transmissibility.
Global spreading of both pathogens can occur only for values of the
transmissibility of the first infection in an interval between the epidemic
and the coexistence thresholds.
A generalization to the case of partial cross-immunity is discussed by \citet{Funk2010b}.
The case of competing SIR infections spreading concurrently is
investigated in~\citet{Newmancompeting2011}, again in the case of complete
cross-immunity: Infection by one pathogen confers immunity for both.
Nontrivial effects occur when both transmissibilities are above the
threshold for single spreading (otherwise one of the pathogens does not
spread globally and there is no real interference).
If one of the pathogens has a transmissibility significantly
larger than the other, it spreads fast and the second spreads afterwards
in the residual network, much as in the case of subsequent infections.
If the growth rates are very similar the final outcome shows strong dependence
on stochastic fluctuations in the early stages of growth, with
very strong finite size effects.
An alternative approach, based on edge-based compartmental modelling,
also allows one to investigate theoretically the dynamics of two competing
infectious diseases~\cite{Miller13}.
\citet{Poletto2013} consider cross-immune pathogens in competition within a
metapopulation framework (see Sec.~\ref{metapop}).
The dominance of the strains depends in this case also on the mobility
of hosts across different subpopulations.
Mutual cross-immunity for two competing SIS dynamics is considered
by~\citet{Trpevski2010} (see also~\citet{Ahn2006}), while the domination
time of two competing SIS viruses is analysed in
\cite{PVM_SIS_competing_virus_PRE2014}.
Depending on the network topology, for some values of the parameters it
is possible to find a steady state where the two processes coexist, each
having a finite prevalence.
Another nontrivial and relevant example of interacting epidemics is the case of
coinfection processes, where the opposite of cross-immunity holds: The
second pathogen can spread only to individuals that have already been
infected by the first.
\citet{Newmaninteracting2013} report a first theoretical and numerical
investigation of this type of dynamics on complex networks.
\section{Epidemic processes in temporal networks}
\label{sec:epid-proc-temporal-nets}
The majority of the results presented so far considered the spreading of
epidemic process in the limit of extreme time scale separation between
the network and the contagion process dynamics (see however
Sec.~\ref{sec:6.C} for a discussion of adaptive networks, whose topology
changes in reaction to a disease). In \textit{static} networks, the
epidemic spreads on a network that is virtually frozen on the time scale
of the contagion process. In the opposite limit, the DBMF theory
considers an effective mean-field network where nodes are effectively
rewired on a time-scale much faster than the contagion process.
However, in the case of many real-world networks those assumptions are
rather simplistic approximations of the real interplay between time
scales. For instance, in social networks, no individual is in contact
with \textit{all} his/her friends \textit{simultaneously} all the
time. On the contrary, contacts are changing in time, often on a time
scale that is comparable with the one of the spreading process. Real
contact networks are thus essentially dynamic, with connections
appearing, disappearing and being rewired with different characteristic
time scales, and are better represented in terms of a \textit{temporal}
or time-varying network \cite{Holme:2011fk,temporalnetworksbook}, see
Fig.~\ref{fig:temporal_net}.
Temporal networks are defined in terms of a \textit{contact sequence},
representing the set of edges present at a given time $t$. By
aggregating the instantaneous contact sequence at all times $t<T$, a
static network projection can be constructed, see
Fig.~\ref{fig:temporal_net}. In this aggregated network, the edge
between nodes $i$ and $j$ is present if it ever appeared at any time
$t<T$. A more informative static representation is a weighted network,
in which the weight associated to each edge is proportional to the total
number of contacts (or the total amount of time the contact was active)
between each pair of individuals. These static network projections,
however, do not account for the nontrivial dynamics of the temporal
network and are thus often inappropriate when considering dynamical
processes unfolding on time-varying connectivity patterns.
\begin{figure}[t]
\includegraphics*[width=\columnwidth]{fig8.pdf}
\caption{A temporal (or time-varying) network can be represented as a
set of nodes that, at every instant of time, are connected by a
different set of edges. An integrated network over a time window $T$ is
constructed by considering that nodes $i$ and $j$ are connected by an
edge if they were ever connected at any time $t \leq T$. Figure
adapted from \citet{2012arXiv1203.5351P}.}
\label{fig:temporal_net}
\end{figure}
Recent technological advances allow gathering large amounts of
data on social temporal networks, such as mobile phone communications
\cite{Onnela:2007} and face-to-face
interactions~\cite{10.1371/journal.pone.0011596}. The analysis
of these datasets shows that social interactions are
characterized by temporally heterogeneous contact patterns. Indeed it
is more the norm than the exception to find that the temporal behavior
of social interactions is characterized by heavy-tail and skewed
statistical distributions. For instance, the probability distributions
of the length of contacts between pairs of individuals, of times between
consecutive interactions involving the same individual, etc., all
follow a heavy tailed form (see Fig.~\ref{fig:Sociobursts})
\cite{Onnela:2007,Hui:2005,PhysRevE.71.046119,Tang:2010,10.1371/journal.pone.0011596,PVM_Twitter_lognormal}.
These properties contrast with the Poissonian behavior expected in
purely random interactions, thus catalyzing the recent interest in the
study of the \textit{burstiness} of human behavior
\cite{Oliveira:2005fk}.
\begin{figure}[t]
\includegraphics*[width=\columnwidth]{fig9.pdf}
\caption{Statistical properties of a temporal face-to-face contact
network \cite{10.1371/journal.pone.0011596}. The probability
distributions of the length of conversations $\Delta t$, total time
spent in conversation between pairs of individuals $\omega$, and the
gap $\tau$ between conversations with different individuals, all show a
long-tailed form, compatible with a power law. Figure adapted from
\citet{PhysRevE.85.056115}. }
\label{fig:Sociobursts}
\end{figure}
The time-varying connectivity pattern of networks affects epidemic
processes in a number of different ways. First, the presence of a
temporal ordering in the connections of the network limits the possible
paths of propagation of the epidemic process. In particular, not all the
edges of the eventually aggregated network projection are available for
the propagation of a disease. Starting from a given node, the disease
may propagate only to the nodes that belong to its \textit{set of
influence} \cite{PhysRevE.71.046119}, defined as the nodes that can be
reached through paths that respect time ordering. Furthermore, the Poissonian
approximation for the transmission rate of infectious individuals is not
correct, because the time between consecutive contacts of a node is
generally power-law distributed. However, this non-Poissonian behavior
is different from the one presented in Sec.~\ref{sec:non-mark-react},
where we considered fixed networks in which a disease takes, to
propagate from an infected individual to a susceptible one along a fixed
link, a time $\tau_a$ that is not exponentially distributed. Here we
have the situation in which the very link that can propagate the disease
appears at instants of time that are separated by an inter-event time
$\tau_l$, that can be distributed non-exponentially. Finally, the
relation between the intrinsic time scales of the temporal network and
those of the dynamics plays a substantial role. Thus, for slow dynamics
with a very large relative time scale, it can be a good approximation to
consider as a substrate the weighted integrated network. If the dynamics
is fast, with a small relative time scale, comparable to that of the
temporal network, then the substrate must be the actual contact sequence
defining the temporal network.
Among the effects that a non-Poissonian temporal network induces on
epidemic spreading, one of the most remarkable is a substantial slowing
down of the spread velocity. This observation was first made by using
an SI model~\cite{PhysRevLett.98.158702} (see also
\citet{min_spreading_2011}) in the context of the spreading of email
worms among email users. Empirical data show that the time between
consecutive email activities is heavy-tailed and well approximated by
the form $P(\tau) \sim \tau^{-1-\beta}$. The generation time $\tau$,
defined as the time between the infection of the primary individual and
the infection of a secondary individual, is given by the residual
waiting time distribution~\cite{renewal}, assuming a stationary process,
$g(\tau) = \int_\tau^\infty P(\tau')d\tau' / \av{\tau} \sim
\tau^{-\beta}$,
where it is assumed that the times at which emails are received are
uniformly random. The average number of new infections at time $t$,
$n(t)$, is estimated as $n(t) = \sum_{d=1}^D Z_d \hat{g}_d(t)$, where
$Z_d$ is the average number of users at a distance $d$ (at $d$ email
steps) from the first infected user, $D$ is the maximum possible value
of $d$, and $\hat{g}_d(t)$ is the convolution of order $d$ of
$g(\tau)$. Assuming that the integrated network of email contacts is
sparse, \citet{min_spreading_2011} find that $n(t) \sim t^{-\beta}$,
independently of the integrated network structure. This result implies
that the disease spreads much more slowly than in a regular static
network, where an exponential increase of infected individuals is
observed. The slowing down in temporal networks has been empirically
measured in different systems
\cite{PhysRevLett.98.158702,PhysRevE.83.025102,dynnetkaski2011,Stehle:2011nx},
and also reported in other dynamical processes, such as diffusion
\cite{PhysRevE.85.056115,perra_random_2012,hoffmann_generalized_2012} or
synchronization \cite{albert2011sync}. The situation is however not
completely clear, since other works suggest instead a dynamic
acceleration \cite{2013arXiv1309.0701J}. These temporal effects are,
moreover, entangled with topological ones, as shown by
\citet{Rocha:2010} analyzing the SI and SIR models in empirical
spatio-temporal networks. Temporal correlations accelerate epidemic
outbreaks, especially in the initial phase of the epidemics, while the
network heterogeneity tends to slow them down.
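The origin of this slow dynamics can be visualized with a direct Monte
Carlo evaluation of $n(t) = \sum_d Z_d \hat{g}_d(t)$. The Python sketch
below samples heavy-tailed generation times and accumulates the $d$-fold
convolutions; the Pareto form of the draws, the tree-like choice
$Z_d = z^d$, and all numerical parameters are illustrative assumptions.
\begin{verbatim}
# Minimal sketch: Monte Carlo estimate of n(t) = sum_d Z_d g_d(t),
# with heavy-tailed generation times and Z_d = z**d (illustrative
# tree-like integrated network). All parameters are assumptions.
import numpy as np

rng = np.random.default_rng(3)
beta, z, D, samples = 1.5, 0.9, 20, 10**5
g = rng.pareto(beta, size=(D, samples)) + 1.0  # heavy-tailed times
arrival = np.cumsum(g, axis=0)                 # d-fold convolutions

t_bins = np.logspace(0, 4, 41)
n_t = np.zeros(len(t_bins) - 1)
for d in range(D):
    hist, _ = np.histogram(arrival[d], bins=t_bins, density=True)
    n_t += z**d * hist                         # weight by Z_d = z**d

print(n_t)  # long-time behavior is power-law-like, not exponential
\end{verbatim}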
The time-varying structure of temporal networks is also able to alter
the value of the epidemic threshold, as analytically shown for the SIS
and SIR processes in \textit{activity driven} network
models~\cite{2012arXiv1203.5351P}. The activity-driven network class of
models \cite{2012arXiv1203.5351P,PhysRevE.87.062807} is based on the
concept of \textit{activity potential}, defined as the probability per
unit time that an individual engages in a social activity. Empirical
evidence shows that the activity potential varies considerably from
individual to individual, and the dynamics of the network is encoded in
the function $F(a)$ that characterizes the probability for a node to
have an activity potential $a$. The activity driven network model
considers $N$ nodes whose activity $a_i$ is assigned randomly according
to the distribution $F(a)$. During each time step the node $i$ is
considered active with probability $a_i$. Active nodes generate $m$
links (engage in $m$ social interactions) that are connected to $m$
individuals chosen uniformly at random. Finally, time is updated
$t \to t+1$. The model output is a sequence of graphs, depending on the
distribution $F(a)$, which is updated at every time step $t$. An
integrated network at time $T$ can be constructed by considering the
union of the sequence of graphs, see Fig.~\ref{fig:temporal_net}. This
integrated network has a degree distribution which depends on the
activity distribution as
$P_T(k) \simeq \frac{1}{T} F\left(\frac{k}{T} - \av{a} \right)$
\cite{PhysRevE.87.062807}, where $\av{a}$ is the average activity and
for simplicity we take $m=1$. The empirically observed power-law
activity distributions, $F(a)$, can thus explain the long tails in the
degree distribution of social networks \cite{2012arXiv1203.5351P}.
\begin{figure}[t]
\includegraphics*[width=\columnwidth]{fig10.pdf}
\caption{Prevalence of the SIS model on the temporal network defined by
the activity driven model, as a function of the basic transmission
probability $\lambda$. The threshold observed for the dynamics on the
temporal network coincides with the theoretical prediction
Eq.~\eqref{eq:SISActivityThreshold}. Simulations on integrated
networks show instead a threshold that becomes smaller when increasing
the integration time $T$. Figure adapted
from~\citet{2012arXiv1203.5351P}.}
\label{fig:sistemporalnets}
\end{figure}
\citet{2012arXiv1203.5351P} consider the behavior of the SIS
model in activity driven networks, writing dynamical mean field
equations for the infected individuals in the class of activity rate
$a$, at time $t$, namely $I_{a}(t)$. The discrete time dynamical
evolution considers concurrently the dynamics of the network and the
epidemic model, yielding:
\begin{eqnarray}
\label{pp1activity}
I^{t+1}_{a} &=& \lambda m (N_{a}-I_{a}^{t})\,a \int d a' \frac{I_{a'}^{t}}{N} \nonumber \\
&+& \lambda m(N_{a}-I_{a}^{t})\int d a' \frac{I_{a'}^{t}a'}{N},
\end{eqnarray}
where $N_a = F(a)N$ is the total number of individuals with
activity $a$ and where the recovery probability $\mu=1$.
In Eq.~(\ref{pp1activity}), the first term on the right-hand
side takes into account the probability that a susceptible of class
$a$ is active and acquires the infection by receiving a connection from
any other infected individual (summing over all activity classes), while
the second term takes into account the probability that a susceptible,
independently of its activity, receives a connection from an infected
active individual. A linear stability analysis of
Eq.~(\ref{pp1activity}) leads to an epidemic threshold
\begin{equation}
\lambda_c = \frac{1}{m(\av{a} + \sqrt{\av{a^2}})},
\label{eq:SISActivityThreshold}
\end{equation}
which is independent of the integration time. The same epidemic threshold is obtained for the SIR model,
applying mean-field approximations \cite{2013arXiv1309.7031L} and a
mapping to percolation \cite{2013arXiv1312.5259S}.
This result highlights the crucial fact that scale-free integrated networks can
lead to a vanishing threshold for epidemics with a very large time
scale, while epidemics with a short time scale, comparable to the one
of the contact sequence, can be associated with a finite,
non-vanishing threshold, see Fig.~\ref{fig:sistemporalnets}. This
observation has been confirmed in studies of other temporal network
models \cite{Rocha:2013}.
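A quick numerical reading of Eq.~\eqref{eq:SISActivityThreshold} is
given by the Python sketch below, which samples activities from a
power-law distribution $F(a) \sim a^{-\gamma}$ and evaluates the
threshold; the exponent, the lower cutoff, and the choice $m=1$ are
illustrative assumptions.
\begin{verbatim}
# Minimal sketch: SIS threshold on activity-driven networks,
# lambda_c = 1 / (m (<a> + sqrt(<a^2>))), for an illustrative
# power-law activity distribution F(a) ~ a^(-gamma).
import numpy as np

rng = np.random.default_rng(4)
m, gamma, eps, N = 1, 2.2, 1e-3, 10**5
u = rng.uniform(size=N)
a = eps * u ** (-1.0 / (gamma - 1.0))  # Pareto sampling, a >= eps
a = np.minimum(a, 1.0)                 # activities are probabilities

lambda_c = 1.0 / (m * (a.mean() + np.sqrt((a**2).mean())))
print("activity-driven SIS threshold lambda_c ~", lambda_c)
\end{verbatim}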
Finally, a very recent avenue of research in this area has been the
identification of effective immunization protocols for temporal
networks \cite{Lee:2010fk}. The idea here is to define a
\textit{training window} $\Delta T$, such that information is gathered
from the contact sequence at times $t< \Delta T$. A set of individuals
to be immunized is chosen, and effectively vaccinated at time $\Delta
T$. The effects of the immunization are then observed
for $t > \Delta T$. \citet{Lee:2010fk}
explore two local strategies, inspired by the acquaintance immunization
protocol for static networks \cite{Cohen03}: In the \textit{Recent}
strategy, a randomly chosen individual is asked at time $\Delta T$ for
its last contact; this last contact is immunized. In the
\textit{Weight} strategy, a randomly chosen individual at time $\Delta T$
is asked for its most frequently contacted peer, up to time $\Delta
T$; this most frequent contact is immunized. By means of numerical
simulations \citet{Lee:2010fk} observe that both protocols offer, for
a limited amount of local information, a reasonable level of
protection against the disease propagation. An interesting issue is the
amount of information (the length $\Delta T$ of the training window)
that is sufficient to achieve an optimal level of immunization for a
fixed fraction of immunized individuals. \citet{2013arXiv1305.2357S} find a
saturation effect in the level of immunization for training windows of
about $20\%$--$40\%$ of the total length of the contact sequence,
for several immunization protocols, indicating that a limited amount
of information is actually enough to optimally immunize a temporal
network. In the case of activity-driven networks, analytical expressions for several immunization
strategies can be obtained \cite{2013arXiv1309.7031L}.
\section{Reaction-diffusion processes and metapopulation models}
\label{metapop}
So far we have reviewed results concerning spreading and contagion
processes in which each node of the network corresponds to a single
individual of the population. A different framework emerges if
we consider nodes as entities where multiple individuals/particles can
be located and eventually wander by moving along the links connecting
the nodes. Examples of such systems are provided by mechanistic
epidemic models where particles represent people moving between
different locations or by the routing of information packets in
technological networks
\cite{Keeling:2002,Sattenspiel:1995,gallos2004absence,Wattsresurgent:2005}.
More generally, models of social behavior and human mobility are often framed
as reaction-diffusion processes where each node $i$ is allowed to host
any nonnegative integer number of particles $\mathcal{N}(i)$, so that
the total particle population of the system is $\mathcal{N}=\sum_i
\mathcal{N}(i)$. This particle-network framework considers that each
particle diffuses along the edges connecting nodes with a diffusion
coefficient that depends on the node degree and/or other node
attributes. Within each node particles may react according to
different schemes characterizing the interaction dynamics of the
system. A simple sketch of the particle network framework is
represented in Figure~\ref{fig:metapop}.
\begin{figure*}[t]
\centering
\includegraphics*[width=\textwidth]{fig11.pdf}
\caption{{\bf a} Schematic illustration of the simplified modeling
framework based on the particle-network scheme. At the macroscopic
level the system is composed of a heterogeneous network of
subpopulations. The contagion process in one subpopulation (marked
in red) can spread to other subpopulations because of particles
diffusing across subpopulations. {\bf b} At the microscopic level,
each subpopulation contains a population of individuals. The
dynamical process, for instance a contagion phenomenon, is
described by a simple compartmentalization (compartments are
indicated by different colored dots in the picture). Within each
subpopulation, individuals can mix homogeneously or according to a
subnetwork and can diffuse with probability $p$ from one subpopulation to
another following the edges of the network. {\bf c} A critical
value $p_c$ of the individuals/particles diffusion identifies a
phase transition between a regime in which the contagion affects a
large fraction of the system and one in which only a small
fraction is affected (see the discussion in the text).}
\label{fig:metapop}
\end{figure*}
In order to have an analytic description of reaction-diffusion systems
in networks one has to allow for the possibility of heterogeneous
connectivity patterns among nodes.
A first analytical approach to these systems considers the extension
of the degree-based mean-field approach to reaction-diffusion systems
in networks with arbitrary degree distribution. For the sake of
simplicity, let us first consider the DBMF approach to the case of a
simple system in which non interacting particles (individuals) diffuse
on a network with arbitrary topology. A convenient representation of
the system is therefore provided by quantities defined in terms of the
degree $k$
\begin{equation}
\mathcal{N}_k=\frac{1}{N_k}\sum_{i \in{\mathcal V}(k)}\mathcal{ N}(i)\,,
\end{equation}
where $N_k = NP(k)$ is the number of nodes with degree $k$ and the
sum runs over the set of nodes ${\mathcal V}(k)$ having degree equal to $k$. The degree
block variable $\mathcal{N}_k$ represents the average number of
particles in nodes with degree $k$. The use of the DBMF approach
amounts to the assumption that nodes with degree $k$, and thus the
particles in those nodes, are statistically equivalent. In this
approximation the dynamics of particles randomly diffusing on the
network is given by a mean-field dynamical equation expressing the
variation in time of the particle subpopulation $\mathcal{N}_k(t)$ in
each degree block $k$. This can be easily written as:
\begin{equation}
\frac{d\mathcal{ N}_k}{d t}= -d_k\mathcal{ N}_k(t) +
k\sum_{k'}P(k'|k)d_{k'k}\mathcal{N}_{k'}(t).
\end{equation}
The first term of the equation considers that only a fraction $d_k$ of
particles moves out of the node per unit time. The second term
instead accounts for the particles diffusing from the neighbors into
the node of degree $k$. This term is proportional to the number of
links $k$ times the average number of particles coming from each
neighbor. This is equal to the average over all possible degrees $k'$ of the
fraction of particles moving on that edge,
$d_{k'k}\mathcal{N}_{k'}(t)$, according to the conditional probability
$P(k'|k)$ that an edge belonging to a node of degree $k$ is pointing
to a node of degree $k'$. Here the term $d_{k'k}$ is the diffusion
rate along the edges connecting nodes of degree $k$ and $k'$. The
rate at which individuals leave a subpopulation with degree $k$ is
then given by $d_k=k\sum_{k'}P(k'|k)d_{kk'}$. In the simplest case of
homogeneous diffusion each particle diffuses with rate $r$ from the
node in which it is and thus the diffusion per link $d_{k'k}=r/k'$.
On uncorrelated networks $P(k'|k)=k'P(k')/{\langle k\rangle}$ and
hence one easily gets, in the stationary state $d \mathcal{N}_k/dt=0$,
the solution~\cite{v.07:_react,PhysRevLett.92.118701}
\begin{equation}
\mathcal{N}_k= \frac{k}{\langle k\rangle}\frac{\mathcal{N}}{N}.
\label{eq:Wk2}
\end{equation}
The above equation explicitly brings the diffusion of particles in the
description of the system and points out the importance of network
topology in reaction-diffusion processes. This expression indicates
that the larger the degree of a node, the larger the probability to be
visited by the diffusing particles.
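Eq.~\eqref{eq:Wk2} is easy to verify by direct simulation of
non-interacting walkers. The Python sketch below (using an
Erd\H{o}s--R\'enyi substrate built with the \texttt{networkx} library;
the graph size, number of walkers, and relaxation time are illustrative
assumptions) measures the average occupation as a function of the degree.
\begin{verbatim}
# Minimal sketch: stationary occupation N_k = (k/<k>) W/N for
# non-interacting random walkers. Substrate and parameters are
# illustrative assumptions.
import numpy as np
import networkx as nx

rng = np.random.default_rng(5)
N, W = 1000, 10000
G = nx.erdos_renyi_graph(N, 10.0 / N, seed=5)
neigh = {v: list(G.neighbors(v)) for v in G}

pos = rng.integers(0, N, size=W)       # initial walker positions
for _ in range(50):                    # homogeneous diffusion steps
    pos = np.array([neigh[v][rng.integers(len(neigh[v]))]
                    if neigh[v] else v for v in pos])

deg = np.array([G.degree(v) for v in range(N)])
occ = np.bincount(pos, minlength=N)
print("measured slope    :", np.polyfit(deg, occ, 1)[0])
print("predicted W/(N<k>):", W / (N * deg.mean()))
\end{verbatim}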
\subsection{SIS model in metapopulation networks}
The above approach can be generalized to reacting particles with
different states by adding a reaction term to the above
equations~\cite{v.07:_react}. We now describe a generalization to this
setting of the standard SIS model in discrete time, with probability
per unit time $\beta$ of infection and probability $\mu$ of recovery.
We consider $\mathcal{N}$ individuals diffusing in a heterogeneous
network with $N$ nodes and degree distribution $P(k)$. Each node $i$
of the network has a number $I(i)$ of infectious and $S(i)$ of
susceptible individuals, respectively. The occupation numbers $I(i)$
and $S(i)$ can have any integer value, including $I(i)=S(i)=0$, that is,
void nodes with no individuals. This modeling scheme describes
spatially structured interacting subpopulations, such as city
locations, urban areas, or defined geographical
regions~\citep{Hanski:2004,grenfell1997meta} and is usually referred
to as \textit{metapopulation approach}. Each node of the network
represents a subpopulation and the compartment dynamics accounts for
the possibility that individuals in the same location may get into
contact and change their state according to the infection
dynamics. The interaction among subpopulations is the result of the
movement of individuals from one subpopulation to the other. We thus
have to associate with each class of individuals a diffusion probability,
$p_I$ and $p_S$, indicating the probability for any individual to
leave its node and move to a neighboring node of the network. In
general the diffusion probabilities are heterogeneous and can be node
dependent; however for the sake of simplicity we assume that
individuals diffuse with probability $p_I=p_S=1$ along any of the
links departing from the node in which they are. This implies that at
each time step an individual sitting on a node with degree $k$ will
diffuse into one of its nearest neighbors with probability $1/k$. In
order to write the dynamical equations of the system we define the
following quantities:
\begin{equation}
I_k=\frac{1}{N_k}\sum_{i \in{\mathcal V}(k)} I(i);
\quad S_k=\frac{1}{N_k}\sum_{i \in{\mathcal V}(k)} S(i),
\end{equation}
where the sums $\sum_{i \in{\mathcal V}(k)}$ are performed over nodes of degree $k$.
These two quantities express the average number of
susceptible and infectious individuals in nodes with degree $k$.
Clearly, $\mathcal{N}_k=I_k+S_k$ is the average number of
individuals in nodes with degree $k$.
These
quantities allow us to write the discrete-time equation describing
the time evolution of $I_k(t)$ for each degree class $k$ as
\begin{equation}
I_k(t+1)=k\sum_{k'}P(k|k')\frac{1}{k'}\left[(1-\mu)I_{k'}(t)+\beta
\Gamma_{k'}(t)\right]\,
\label{eq:sismetapop}
\end{equation}
where $\Gamma_{k'}(t)$ is an interaction kernel, function of $I_{k'}$
and $S_{k'}$. The equation is obtained by considering that at each
time step the particles present on a node of degree $k$ first react
and then diffuse away from the node with probability $1$. The value of
$I_k(t+1)$ is obtained by summing the contribution of all particles
diffusing to nodes of degree $k$ from their neighbors of any degree
$k'$, including the new particles generated by the reaction term
$\Gamma_{k'}$. In the case of uncorrelated networks,
Eq.~\eqref{eq:sismetapop} reduces to
\begin{equation}
I_k(t+1)=\frac{k}{\langle k\rangle} \left[(1-\mu)\bar{I}(t)+
\beta \Gamma \right],
\end{equation}
where $\bar{I}(t)=\sum_k P(k) I_k$ is the average number of infected
individuals per node in the network and $\Gamma=\sum_k P(k)\Gamma_k$.
Analogously the equation describing the dynamics of susceptible
individuals is
\begin{equation}
S_k(t+1)=\frac{k}{\langle k\rangle} \left[ \bar{S}(t) +
\mu\bar{I}(t) - \beta \Gamma \right],
\end{equation}
where $\bar{S}(t)=\sum_k P(k) S_k$.
In order to explicitly solve these equations we have to specify the
type of interaction among individuals. In the usual case of a
mass-action law for the force of infection, we have $\Gamma_k=I_k
S_k/\mathcal{N}_k$. This implies that each particle has a finite
number of contacts with other individuals.
Considering the stationary state $t\to\infty$, and by using some
simple algebra, we can find that an endemic state $\bar{I}>0$ occurs
only if $\beta/\mu > 1$, thus recovering the classic epidemic
threshold in homogeneous systems~\cite{v.07:_react}.
A very different result is obtained if we consider the case in which
each susceptible individual may react with all the infectious
individuals in the same node. In this case $\Gamma_k=I_k S_k$,
i.e. all individuals are in contact with the same probability
(absorbed in the factor $\beta$), independently of the total
population present in each node. This law, referred to as pseudo
mass-action law, is sometimes used to model animal diseases as well as
mobile phone malware. In this case, an active stationary solution
$\bar{I}>0$ occurs if \cite{v.07:_react}
\begin{equation}
\bar{\mathcal{N}}\geq \bar{\mathcal{N}}_c \equiv
\frac{\langle k \rangle^2}{\langle k^2 \rangle} \frac{\mu}{\beta},
\end{equation}
where $\bar{\mathcal{N}}=\sum P(k)\mathcal{N}_k=\mathcal{N}/N$ is the
average number of individuals per node.
This result implies that a stationary state with infectious
individuals is possible only if the average particle density
$\bar{\mathcal{N}}$ is larger than a specific critical threshold. However, the
topological fluctuations of the network affect the critical value. In particular, in
heavy-tailed networks with $\langle k^2 \rangle \to \infty$ we have
that $\bar{\mathcal{N}}_c \to 0$, i.e., topological fluctuations make the threshold vanish in the limit of an infinite
network.
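This vanishing is easily checked numerically. The following sketch
evaluates the critical density of the threshold expression above for
power-law degree distributions with a growing degree cutoff (all
parameter values are illustrative):
\begin{verbatim}
import numpy as np

def critical_density(gamma, kmax, beta, mu, kmin=2):
    # Nc = (<k>^2 / <k^2>) * (mu / beta) for P(k) ~ k^(-gamma)
    k = np.arange(kmin, kmax + 1, dtype=float)
    pk = k ** (-gamma)
    pk /= pk.sum()
    k1 = (pk * k).sum()
    k2 = (pk * k * k).sum()
    return (k1 ** 2 / k2) * (mu / beta)

# the critical density vanishes as the degree cutoff grows
for kmax in (10 ** 2, 10 ** 3, 10 ** 4, 10 ** 5):
    print(kmax, critical_density(2.5, kmax, beta=0.8, mu=0.2))
\end{verbatim}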
The different behavior obtained in the two types of processes can be
understood qualitatively by the following argument~\cite{v.07:_react}.
In a process governed by the mass action law the epidemic activity in
each node is rescaled by the local population $\mathcal{N}_i$ and it is
therefore the same in all nodes. In this case, the generation of
infected individuals is homogeneous across the network and an epidemic
active state depends only on the balance between $\beta$ and $\mu$,
whose values must poise the system above the critical threshold. In
contagion processes determined by the pseudo-mass action law, whatever
the parameters $\beta$ and $\mu$, there exists a local density of
individuals able to sustain the generation of infected individuals to
keep the system in the active state. In this case topological
fluctuations induce density fluctuations in the network as the
diffusion process brings individuals to each node proportionally to
the degree $k$, Eq.~\eqref{eq:Wk2}. Whatever the average number of
individuals per node in the thermodynamic limit, there is always a
node (with a virtually infinite degree) with enough individuals to
keep alive the contagion process, leading to the disappearance of the
phase transition.
Although the above results are obtained by a discrete formulation that
generally suits well simulation schemes in which reactions and
diffusion are executed sequentially, the continuum
formalism of the above models has been derived
in~\citet{Saldana:2008} (see also \citet{baronchelli08:_boson_react}). In the continuum derivation the same
phenomenology is obtained, although the critical value in pseudo
mass-action-like processes scales as the inverse of the maximum degree
in the network: $\bar{\mathcal{N}}_c\sim k_\mathrm{max}^{-1}$.
It is worth stressing that in most contagion processes, the mobility
of individuals is generally extremely heterogeneous and not simply
mimicked by constant diffusion probabilities such as those used in the
previous simple example. The interaction among subpopulations is the
result of the movement of individuals from one subpopulation to the
other. For instance, it is clear that one of the key issues in the
modeling of contagion phenomena in human populations is the accurate
description of the commuting patterns or traveling of people. In many
instances even complicated mechanistic patterns can be accounted for
by effective couplings expressed as a force of infection generated by
the infectious individuals in subpopulation $j$ on the individuals in
subpopulation $i$. More realistic descriptions are provided by
approaches which include explicitly the detailed rate of
traveling/commuting obtained from data or from an empirical fit to
gravity law models~\cite{Viboud:2006}. For analytical studies,
simplified approaches use the Markovian assumption in which at each
time step the movement of individuals is given according to a matrix
$d_{ij}$ that expresses the rate at which an individual in the
subpopulation $i$ is traveling to the subpopulation $j$. This
approach is extensively used in large populations where the
traffic $w_{ij}$ between subpopulations is known, by setting
$d_{ij}\sim w_{ij}/\mathcal{N}_i$. Several modeling approaches to the
large scale spreading of infectious disease~\cite{Baroyan:1969,
Rvachev:1985,Flahault:1991,Grais:2004,
Hufnagel:2004,colizza06:_predic,Colizza:2007, Balcan2009} use this
mobility process based on real data about transportation networks. A
detailed description of different mobility and diffusion schemes can
be found in~\citet{colizza07:_epidem_model_published}.
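As a minimal illustration of the Markovian mobility scheme, the
following sketch applies the travel operator built from synthetic
traffic fluxes $w_{ij}$, keeping the rates fixed at their initial
values for simplicity (all numbers are purely illustrative):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
V = 5                                    # number of subpopulations
w = rng.integers(1, 30, size=(V, V))     # synthetic traffic fluxes
w = (w + w.T) * (1 - np.eye(V, dtype=int))
N = rng.uniform(1000.0, 2000.0, size=V)  # initial populations

d = w / N[:, None]                       # d_ij = w_ij / N_i
stay = 1.0 - d.sum(axis=1)               # probability of not leaving

def travel_step(N):
    # expected populations after one Markovian mobility step
    return stay * N + d.T @ N

for _ in range(50):
    N = travel_step(N)
print(N, N.sum())                        # the total is conserved
\end{verbatim}
Note that each step conserves the total population, as it must.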
\subsection{SIR model in metapopulation networks and the global
invasion threshold}
In the analysis of contagion processes in metapopulation networks, the
diffusion parameters that mimic the mobility rate of
individuals/particles in the system may cause severe changes to the
phase diagram by inducing a novel type of
critical threshold. To see these effects we consider SIR-like models
with no stationary state possible. If we assume a diffusion
probability $p$ for each
individual and that the single population reproductive number of the
SIR model is $R_0>1$, we can easily identify two different limits. If
$p=0$ any epidemic occurring in a given subpopulation will remain
confined; no individual can travel to a different subpopulation and
spread the infection across the system. In the limit $p \to 1$
we have that individuals are constantly wandering from one subpopulation
to the others and the system is in practice equivalent to a well mixed
unique population. In this case, since $R_0>1$, the epidemic will
spread across the entire system. A transition point between these two
regimes is therefore occurring at a threshold value $p_c$ of the
diffusion rate, identifying a global invasion threshold that depends
on the mobility as well as the parameters of the contagion process
(see Fig.~\ref{fig:metapop}). In other words, in a model such as the
SIR model, the epidemic within each subpopulation generates a finite
fraction of infectious individuals in a finite amount of time, and
even if $R_0>1$ the diffusion rate must be large enough to ensure the
diffusion of infected individuals to other subpopulations before the
local epidemic outbreak dies out. It is worth remarking that this does not apply
in models with endemic states such as the SIS model. In this case the disease produces
infectious individuals indefinitely in time and sooner or later the epidemic will be exported
to other subpopulations.
The invasion threshold is encoded in a new quantity $R_*$
characterizing the disease invasion of the metapopulation
system. $R_*$ denotes the number of subpopulations that become
infected from a single initially infected subpopulation; i.e. the
analogue of the reproductive number $R_0$ at the subpopulation level.
It defines the critical values of parameters that allow the contagion
process to spread across a macroscopic fraction of subpopulations.
Interestingly, this effect cannot be captured by a continuous
description that would allow any fraction $p \bar{I}$ of diffusing
infected individuals to inoculate the virus in a subpopulation not yet
infected. In certain conditions this fraction $p \bar{I}$, which is a
mean-field average value, may be a number smaller than 1. This is a
common feature of continuous approximations that allow the infection
to persist and diffuse via ``nano-individuals'', failing to capture
the discrete nature of real systems. The discrete nature of
individuals and the stochastic nature of the diffusion can therefore
have a crucial role in the problem of resurgent epidemics, extinction
and eradication~\citep{Ball:1997,Cross:2005,Wattsresurgent:2005,Vazquez:2007,Cross:2007}.
In order to provide an analytical estimate of the invasion threshold,
we consider a metapopulation network with arbitrary degree
distribution $P(k)$, where each node of degree $k$ has a stationary
population $\mathcal{N}_k$. By using a Levins-type
approach~\cite{colizza07:_invas_thres} it is possible to characterize the invasion
dynamics by looking at the tree-like branching process describing the contagion process
at the subpopulation level~\cite{Levins:1970}. Let us define
$D^0_k$ as the number of \emph{diseased} subpopulations of degree $k$
at generation $0$, i.e. those which are experiencing an outbreak at
the beginning of the process. Each infected subpopulation will
seed---during the course of the outbreak---the infection in
neighboring subpopulations, defining the set $D^1_k$ of infected
subpopulations at generation 1, and so on. This corresponds to a
basic branching process where the number of infected subpopulations
of degree $k$ at the $n$-th generation is denoted as $D^n_k$.
We can write the iterative equation relating $D^n_k$ and $D^{n-1}_k$ as
\begin{eqnarray}
D_k^n & =& \sum_{k'}D_{k'}^{n-1} (k'-1)
P(k|k')
\nonumber\\
&&
~~~~~~ \times \left(1-\frac{D_k^{n-1}}{N_k}\right)\left[1-\left(\frac{1}{R_0}\right)^{\lambda_{k'k}}\right] .
\label{poptree-het}
\end{eqnarray}
In this expression we assume that each infected subpopulation of
degree $k'$ at the $(n-1)$-th generation may seed the infection in its
$(k'-1)$ neighboring subpopulations, where the $-1$ discounts the
subpopulation from which the infection was originally transmitted.
The right-hand side takes into account the probability $P(k|k')$ that each
of the $k'-1$ neighboring populations has degree $k$, the probability
that the seeded population is not infected, and the probability to
observe an outbreak in the seeded population. This last probability
stems from the probability of extinction $P_{ext}=1/R_0$ of an
epidemic seeded with a single infectious
individual~\cite{Bailey_book}, when one considers a seed of size
$\lambda_{kk'}$ given by the number of infected individuals that move
into a connected subpopulation of degree $k'$ during the duration of
the local outbreak in the subpopulation of degree $k$.
The quantity $\lambda_{kk'}$ can be explicitly calculated by
considering that in the case of a macroscopic outbreak in a closed
population, the total number of infected individuals during the
outbreak evolution will be equal to $\bar{\alpha} \mathcal{N}_{k}$ where
$\bar{\alpha}$ depends on the specific disease model and parameter values
used. Each infected individual stays in the infectious state for a
time $\mu^{-1}$ equal to the inverse of the recovery rate, during
which it can travel to the neighboring subpopulation of degree $k'$
with rate $p$. Here, for the sake of simplicity we consider that the
mobility coefficient $p$ is the same for all individuals. Under this
condition the number of infected individuals that may move into a
connected subpopulation of degree $k'$ during the duration of the
local outbreak in the subpopulation of degree $k$ is given by
\begin{equation}
\lambda_{kk'}= p \frac{\bar{\mathcal{N}}\bar{\alpha}\mu^{-1}}{\langle k\rangle},
\label{eq:Nk}
\end{equation}
where we have considered that each individual will diffuse with the
same probability in any of the $k$ available connections and that
$\mathcal{N}_k$ is given by Eq.~\eqref{eq:Wk2}.
In order to provide an explicit solution to the above iterative
equation we consider in the following that $R_0-1\ll 1$, thus
assuming that the system is very close to the epidemic
threshold. In this limit we can approximate the outbreak probability
as $1-R_0^{-\lambda_{k'k}} \simeq \lambda_{k'k}(R_0-1)$. In addition,
we assume that at the early stage of the epidemic $D_k^{n-1}/N_k \ll
1$, and we consider the case of uncorrelated networks, obtaining
\begin{equation}
D_k^n =(R_0-1)\frac{kP(k)}{\langle k\rangle^2}
\frac{p\bar{\mathcal{N}}\bar{\alpha}}{\mu} \sum_{k'}D_{k'}^{n-1} (k'-1).
\end{equation}
By defining $\Theta^n= \sum_{k'}D_{k'}^{n} (k'-1)$, the
last expression can be conveniently written in the iterative form
\begin{equation}
\Theta^n =(R_0-1)\frac{\langle k^2 \rangle-\langle k\rangle}{\langle k \rangle^2}
\frac{p\bar{\mathcal{N}}\bar{\alpha}}{\mu} \Theta^{n-1},
\end{equation}
that allows a growing epidemic only if
\begin{equation}
R_*=(R_0-1)\frac{\langle k^2\rangle-\langle k\rangle}{\langle k\rangle^2}
\frac{p\bar{\mathcal{N}}\bar{\alpha}}{\mu} >1,
\end{equation}
defining the {\em global invasion threshold} of the metapopulation system.
The explicit form of the threshold condition
can be used to find the minimum mobility rate ensuring that on
average each subpopulation can
seed more than one neighboring subpopulation.
The constant $\bar{\alpha}$ is larger than zero for any
$R_0>1$, and in the SIR case for $R_0$ close to 1 it can be approximated
by $\bar{\alpha} \simeq 2(R_0-1)/R_0^2$~\cite{Bailey_book}, yielding
a critical mobility
value $p_c$ below which the epidemic cannot invade the metapopulation
system, defined by the invasion condition
\begin{equation}
p\bar{\mathcal{N}}\geq \frac{\langle k\rangle^2}{\langle k^2\rangle-\langle k\rangle}
\frac{\mu R_0^2}{2(R_0-1)^2}.
\label{eq:glth1}
\end{equation}
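The threshold condition is straightforward to evaluate numerically.
The following sketch computes $R_*$ for a power-law metapopulation
network while scanning the diffusion rate $p$ (all parameter values
are illustrative):
\begin{verbatim}
import numpy as np

def R_star(R0, p, Nbar, mu, k, pk):
    # global invasion parameter, using alpha ~ 2(R0-1)/R0^2
    # (valid for the SIR model with R0 close to 1)
    k1 = (pk * k).sum()
    k2 = (pk * k * k).sum()
    alpha = 2.0 * (R0 - 1.0) / R0 ** 2
    return (R0 - 1.0) * (k2 - k1) / k1 ** 2 * p * Nbar * alpha / mu

k = np.arange(2, 1001, dtype=float)
pk = k ** (-2.1)
pk /= pk.sum()

# scanning the diffusion rate p for a mildly supercritical disease
for p in (1e-4, 1e-3, 1e-2):
    print(p, R_star(R0=1.2, p=p, Nbar=1000.0, mu=0.2, k=k, pk=pk))
\end{verbatim}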
In Fig.~\ref{fig:3Dinv} we show the total number of infected
individuals across all subpopulations, also called the global attack
rate, as a function of both $R_0$ and $p$, as obtained from extensive
Monte Carlo simulations in an uncorrelated metapopulation network with
$P(k)\sim k^{-2.1}$, $N=10^5$, $\bar{\mathcal{N}}=10^3$ and $\mu=0.2$.
The global attack rate surface in the $p$-$R_0$ space shows that the smaller
the value of $R_0$, the higher the mobility $p$ must be in order for
the contagion process to successfully invade a finite fraction of the
subpopulations.
\begin{figure}[t]
\centering
\includegraphics*[width=\columnwidth]{fig12.pdf}
\caption{ Global threshold in a heterogeneous metapopulation
system. The left panel shows a 3D surface representing the value of
the final epidemic size in the metapopulation system as a function
of the local threshold $R_0$ and of the diffusion probability
$p$. If $R_0$ approaches the threshold, larger values of the
diffusion probability $p$ need to be considered in order to observe
a global outbreak in the metapopulation system. Figure adapted from
Colizza \& Vespignani, 2007.}
\label{fig:3Dinv}
\end{figure}
The invasion threshold $R_*>1$ implicitly defines the critical
mobility rate of individuals and is an indicator as important as the
basic reproductive number $R_0>1$ in assessing the behavior of
contagion processes in structured populations. It shifts the attention
from the local outbreak to a global perspective where the
interconnectivity and mobility among subpopulations are extremely
important in possibly hampering the spreading process. The presence of
the factor $\langle k \rangle^2/\langle k^{2}\rangle$ in the explicit
expression of the threshold points out that also at the global level
the heterogeneity of the network plays a very important role. In
other words, the topological fluctuations favor the subpopulation
invasion and suppress the phase transition in the infinite size limit.
While the analysis we have presented here is extremely simplified, in
recent years several studies have provided insight
into metapopulation spreading, fully considering the
stochastic and discrete nature of the process in various realistic
contexts: heterogeneous schemes for the diffusion of
individuals~\cite{colizza07:_epidem_model_published,Ni:2009,Ben-Zion:2010,Bart:2008};
heterogeneous populations~\cite{Apolloni:2013,Poletto:2012};
non-Markovian recurrent mobility patterns mimicking commuting among
geographical regions~\cite{Balcan:2011,Belik:2011,Balcan:2012} and the
introduction of individual behavioral responses to the presence of
disease~\cite{Meloni:2011,Nicolaides:2013}. Indeed one of the
interesting applications of the particle-network framework and the
study of reaction-diffusion processes in metapopulation networks
is to provide analytic rationales for data-driven epidemic
models.
\subsection{Agent Based Models and Network Epidemiology}
In recent years, mathematical and computational approaches to the study
of epidemics have been increasingly relevant in providing quantitative
forecasts and scenario analyses of real infectious disease
outbreaks~\cite{Lofgren2014}. For this reason, epidemic models have
evolved into large-scale microsimulations, data-driven approaches that
can provide information at very detailed spatial resolutions. An example
is provided by agent-based, spatially structured models that consider
the discrete nature of individuals and their mobility, and that
generally include the stochasticity of interactions and movements.
These models are based on the construction of synthetic
populations characterizing each individual in the population and its
mobility pattern, often down to the level of households, schools and
workplaces~\cite{Hufnagel:2004,Eubank2004,Longini2005,Ferguson2005,Colizza:2007,Chao2010}.
The synthetic population construction is a data-hungry process and the
resulting model is in most cases not transparent to analytical
understanding. For this reason, the analysis of these models relies on
computational microsimulations of the epidemic evolution that keep track
of each single individual in the population. The resulting ensemble of
possible epidemic evolutions is then leveraged to provide the usual
quantitative indicators such as median, mean, and reference ranges for
epidemic observables, such as newly generated cases, seeding events,
and time of arrival of the infection. The statistical information generated
by the computational approaches is then exploited with different
visualization techniques that reference the data geographically. At
first sight this modeling approach seems unrelated to network
epidemiology. In reality, most data-driven computational
approaches rely on the construction of synthetic populations and
interaction patterns that are effectively encoded as multiscale networks
of individuals and locations~\cite{Marathe2013}.
\begin{figure*}[t]
\centering
\includegraphics*[width=\textwidth]{fig13.pdf}
\caption{ Schematic illustration of the construction of a synthetic
population and the resulting contact network. {\bf a} At the
macroscopic level, a synthetic population and its movements are
constructed from census and demographic data. {\bf b} A bipartite
network associating individuals to locations, and eventually
weighting the links with the time spent in the location, is derived
from the synthetic population. {\bf c} The unipartite projection of
the bipartite network provides a contact network for the contagion
process. Different transmission rates and weights on the network
depend on the location and type of interactions. }
\label{fig:synthpop}
\end{figure*}
An example of the underlying network structure of data driven epidemic
models is provided by the GLobal Epidemic and Mobility (GLEAM) model
that integrates census and mobility data in a fully stochastic
meta-population network model that allows for the detailed simulation of
the spread of influenza-like illnesses around the
globe~\cite{Broeck2011}. This model uses real demographic and mobility
data. The world population is divided into geographic census areas that
are defined around transportation hubs and connected by mobility
fluxes. Within each subpopulation, the disease spreads between
individuals. Individuals can move from one subpopulation to another
along the mobility network according to high quality transportation
data, thus simulating the global spreading pattern of epidemic
outbreaks.
At the finer scale of urban areas, synthetic population
constructions are even more refined and consider a classification of
locations such as households, schools, offices, etc. The movement and time
spent in each location can be used to generate individual-location
bipartite networks whose unipartite projection defines the
individual-level, synthetic interaction network that governs the
epidemic
spreading~\cite{Eubank2004,Halloran2008,Merler2011,Fumanelli2012}. Also
in this case, although the model underlying the computational approach
is a network model, each individual is annotated with residence,
age, and other demographic information that
can be exploited in the analysis of the epidemic outbreak (see
Fig.~\ref{fig:synthpop}).
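The bipartite construction and its unipartite projection can be
illustrated with a toy synthetic population (the individuals and
locations below are, of course, hypothetical):
\begin{verbatim}
from collections import defaultdict
from itertools import combinations

# toy assignments of individuals to locations (hypothetical data)
visits = {
    "anna":  ["house1", "school", "office"],
    "bob":   ["house1", "office"],
    "carla": ["house2", "school"],
    "dan":   ["house2", "office"],
}

# bipartite network: location -> set of individuals visiting it
by_location = defaultdict(set)
for person, places in visits.items():
    for place in places:
        by_location[place].add(person)

# unipartite projection: edge weight = number of shared locations
contacts = defaultdict(int)
for people in by_location.values():
    for u, v in combinations(sorted(people), 2):
        contacts[(u, v)] += 1

for edge, weight in sorted(contacts.items()):
    print(edge, weight)
\end{verbatim}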
Data driven computational approaches can generate results at
an unprecedented level of detail, and have been used successfully in the
analysis and forecast of real epidemics
\cite{Hufnagel:2004,Balcan2009,Balcan:2009BMC,Merler2011},
and policy making scenario
analysis~\cite{Eubank2004,Longini2005,Ferguson2005,Colizza:2007,Brockmann2013}. Similar
approaches are becoming more and more popular in the simulation of
generalized contagion processes and social
behavior~\cite{Marathe2013}. Although realistic and detailed,
computational approaches often provide non-intuitive results, and the
key mechanisms underlying the epidemic evolution are difficult to
identify because of the amount of detail integrated in the models. In
such cases, the analytic understanding of the basic models presented in
this review can be the key to the systematic investigation of the impact
of the various complex features of real systems on the basic properties
of epidemic outbreaks. For instance, the simple calculation of the
invasion threshold explains why travel restrictions appear to be highly
ineffective in containing epidemics in large-scale data-driven
simulations: the complexity and heterogeneity of the present-day human
mobility network considerably favor the global spreading of infectious
diseases. Only unfeasible mobility restrictions reducing the global
travel fluxes by $90\%$ or more would be
effective~\cite{Cooper:2006,Hollingsworth:2006,colizza07:_epidem_model_published,Bajardi:2011}. The
understanding of the behavior of reaction-diffusion processes in complex
networks is therefore a crucial undertaking if we want to answer many
basic questions about the reliability and predictive power of data
driven computational models.
\section{Generalizing epidemic models as social contagion processes}
\label{sec:7.A}
Infectious diseases certainly represent the central focus of epidemic
modeling because of the role they have played, and continue to
play, in human history. The contagion metaphor, however, applies
in several other domains and in particular in the social context: the
diffusion of information~\citep{Bikhchandani1992}, the propagation of
rumors, the adoption of innovations or
behaviors~\citep{Bass1969,Rogers2010}, are all phenomena for which the
state of an individual is strongly influenced by the interaction with
peers. Mediated by the network of social contacts, these
interactions can give rise to epidemic-like outbreaks: fads, information
cascades, memes going viral online, etc. The term social (or complex)
contagion generally denotes these types of phenomena. New communication
technologies, online social media, the abundance of digital fingerprints
that we, as individuals, disseminate in our daily life, provide an
unprecedented wealth of data about social contagion phenomena, calling
for theoretical approaches to measure, interpret, model and predict
them. Simple models for disease epidemics are the natural paradigm for
this endeavour and have been applied to social spreading
phenomena~\citep{Goffman1964,Goffman1966,Bettencourt2006}. Some
specific features of social contagion, however, are qualitatively
different from pathogen spreading: the transmission of information
involves intentional acts by the sender and the receiver, it is often
beneficial for both participants (as opposed to disease spreading), and
it is influenced by psychological and cognitive factors. This leads to
the introduction of new ingredients in the models, from which the name
{\it complex contagion} derives. In this Section we will discuss recent
developments in this modeling effort, which we divide into two broad
categories depending on whether the spreading process (threshold models)
or the recovery process (rumor spreading models) of the disease epidemic
propagation is changed. In the light of the modeling efforts, a review
of papers analyzing empirical data follows next.
As the topics presented here encompass a vast spectrum of disciplines,
including physics, computer science, mathematics, and social sciences,
the usual caveat about the impossibility of an exhaustive review of
the literature applies with particular force. Our more limited
goal is to outline the most important contributions within a
unitary framework. This endeavor is made even more difficult by the
fact that the propagation of social contagion is also close to other
processes such as failure cascades (in network routing protocols or
mechanical failure~\citep{Motter2002}) or the adoption of strategies
in game-theoretic context~\citep{Easley2010} that are beyond the scope of this review.
\subsection{Threshold models}
For disease epidemics it is customary to assume that a susceptible
individual has a constant probability to receive the infection from a
peer upon every exposure, independently of whether other infected
individuals are simultaneously in contact or other exposures have
occurred in the past. While generally reasonable for the transmission
of pathogens (though exceptions may occur~\citep{Joh2009}) this
hypothesis is clearly unrealistic in most situations where a social
meme is spreading: a piece of information is more credible if arriving
from different sources; the push to adopt a technological innovation
is stronger if neighboring nodes in the social network have already
adopted it. These considerations lead naturally to the introduction
of ``threshold models'' for spreading phenomena, where the effect of
multiple exposures changes from low to high as a function of their
number. Fig.~\ref{fig:Dodds04} displays the probability of infection
(adoption) $P_{inf}$ after $K$ attempts in the different scenarios. In
the case of SIR (left panel) each attempt has a fixed probability $p$
of success and $P_{inf} = 1-(1-p)^K$.
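The three scenarios of Fig.~\ref{fig:Dodds04} can be reproduced with
a few lines of Python (the threshold values and probabilities below
are illustrative):
\begin{verbatim}
import numpy as np

K = np.arange(0, 21)

# (a) independent interactions (SIR-like): fixed success probability
p = 0.1
P_ind = 1.0 - (1.0 - p) ** K

# (c) deterministic threshold: infection once K reaches the threshold
thr = 5
P_det = (K >= thr).astype(float)

# (b) stochastic threshold: thresholds vary across individuals
# (uniform over 1..10 here), which smooths the step of case (c)
thresholds = np.arange(1, 11)
P_sto = np.array([(Ki >= thresholds).mean() for Ki in K])

for row in zip(K, P_ind, P_sto, P_det):
    print(*row)
\end{verbatim}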
\begin{figure}[t]
\includegraphics*[width=\columnwidth]{fig14.pdf}
\caption{Probability $P_{inf}$ of infection for a susceptible individual
after $K$ contacts with infected individuals. (a) Independent
interaction (e.g., SIR-type) model. (b) Stochastic threshold
model. (c) Deterministic threshold model. Adapted
from~\citet{Dodds2004}.}
\label{fig:Dodds04}
\end{figure}
Threshold models have a long tradition in the social and economical
sciences~\citep{Granovetter1978,Morris2000}. In the context of
spreading phenomena on complex networks, a seminal role has been
played by the model introduced by \citet{Watts2002}. Each individual
can be in one of two states ($S$ and $I$) and is endowed with a quenched,
randomly chosen ``threshold'' value $\phi_i$. In an elementary step
an individual agent in state $S$ observes the current state of its
neighbors, and adopts state $I$ if at least a threshold fraction
$\phi_i$ of its neighbors are in state $I$; else it remains in state $S$.
\footnote{This is the definition for {\em relative} threshold models.
In many cases {\em absolute} thresholds are considered
~\citep{Granovetter1978,Kempe2003,Galstyan2007,Centola2007a,
Kimura2009,Karimi2013}. For strongly heterogeneous networks the
different definitions may lead to important changes.}
No transition from $I$ back to $S$ is possible.
Initially all nodes except for a
small fraction are in state $S$. Out of these initiators a {\em
cascade} of transitions to the $I$ state is generated. The nontrivial
question concerns whether the cascade remains local, i.e. restricted
to a finite number of individuals, or it involves a finite fraction of
the whole population. Given an initial seed, the spreading can occur
only if at least one of its neighbors has a threshold such that
$\phi_i \le 1/k_i$. A cascade is possible only if a cluster of these
``vulnerable'' vertices is connected to the initiator. For global
cascades to be possible it is then conjectured that the subnetwork of
vulnerable vertices must percolate throughout the network. The
condition for global cascades can then be derived applying on locally
tree-like networks the machinery of generating functions for
branching processes. In the
simple case of a uniform threshold $\phi$ and an Erd\H{o}s-R\'enyi pattern
of interactions the phase diagram as a function of the threshold
$\phi$ and of the average degree $\av{k}$ is reported in
Fig.~\ref{fig:Watts02}.
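A minimal Monte Carlo sketch of Watts' model, with parameters chosen
inside the cascade window ($1<\av{k}<1/\phi$), reads:
\begin{verbatim}
import networkx as nx
import random

def watts_cascade(n=10000, avg_k=4.0, phi=0.18, seed=0):
    # single run on an Erdos-Renyi graph with uniform threshold phi
    rng = random.Random(seed)
    g = nx.fast_gnp_random_graph(n, avg_k / n, seed=seed)
    state = dict.fromkeys(g, 0)
    start = rng.randrange(n)
    state[start] = 1
    frontier = [start]
    while frontier:
        new = []
        for u in frontier:
            for v in g.neighbors(u):
                if state[v] == 0 and g.degree(v) > 0:
                    infected = sum(state[w] for w in g.neighbors(v))
                    if infected / g.degree(v) >= phi:
                        state[v] = 1
                        new.append(v)
        frontier = new
    return sum(state.values()) / n

# the largest cascade over a few random initial perturbations
print(max(watts_cascade(seed=s) for s in range(20)))
\end{verbatim}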
\begin{figure}[t]
\includegraphics*[width=\columnwidth]{fig15.pdf}
\caption{Phase-diagram of Watts' threshold model.
The dashed line encloses the region of the $(\phi, \av{k})$ plane in which
the condition for the existence of global cascades is satisfied
for a uniform random graph with uniform threshold $\phi$.
The solid circles outline the region in which global cascades occur for
the same parameter settings in the full dynamical model for $N = 10000$
(averaged over $100$ random single-node perturbations).
Adapted from~\citet{Watts2002}.}
\label{fig:Watts02}
\end{figure}
For fixed $\phi$, global cascades occur only for intermediate values of
the mean connectivity $1<\av{k}<1/\phi$. The transition occurring for
small $\av{k}$ is trivial and is not due to the spreading dynamics: the
average cascade size is finite for $\av{k}<1$ because the network itself
is composed of small disconnected components: the transition is
percolative with power-law distributed cascade size. For large
$\av{k}>1/\phi$ instead, the propagation is limited by the local
stability of nodes. As the transition is approached increasing $\av{k}$
the distribution of cascade size is bimodal, with an exponential tail at
small cascade size and global cascades increasingly larger but more
rare, until they disappear altogether, implying a discontinuous (i.e.,
first-order) phase transition in the size of successful cascades.
Heterogeneous thresholds reduce the system stability, increasing the
range of parameters where global cascades occur. Degree heterogeneity
has instead the opposite effect.
The critical value of the threshold $\phi_c=1/\av{k}$, separating global
cascades for $\phi<\phi_c$ from localized spreading for $\phi>\phi_c$
highlights the peculiar features of threshold dynamics~\citep{Centola2007}.
Adding new links to the network makes $\av{k}$ grow, thus reducing
$\phi_c$ and making system-wide spreading more difficult; the
opposite of what occurs for SIR epidemics.
Notice indeed that the dependence of the threshold on the average
degree is the same (for homogeneous networks) in both the threshold
model and in SIR dynamics, but in the latter case the global spreading
occurs {\em above} the threshold
(for $\lambda>1/\av{k}$), while in the former case global cascades
are possible {\em below} the threshold ($\phi<1/\av{k}$).
By the same token,
link rewiring which destroys clustering of a network is seen to
reduce the average cascade size for the threshold model. Instead of the
{\em strength of the weak ties}~\citep{Granovetter1973} here the
{\em weakness of long ties}~\citep{Centola2007a} is at work.
Watts' model can be seen as a particular instance of a more general
model~\citep{Dodds2004}, which includes also independent interaction
models (SIR, SIRS) as particular cases.
The model incorporates individual memory, variable magnitude
of exposure (dose amount) and heterogeneity in the susceptibility
of individuals.
At each contact with an infected neighbor a susceptible receives with
probability $p$ a random dose $d(t)$ (distributed according
to $f(d)$).
A susceptible individual $i$ accumulates the doses $d_i(t)$ over a time $T$
and it becomes infected if at some time the accumulated dose
$D_i(t)=\sum_{t'=t-T+1}^t d_i(t')$ is larger than a threshold $d_i^*$
(random for each node with distribution $g(d^*)$).
Recovery is possible with probability $r$ provided the dose $D_i(t)$
falls below $d_i^*$.
The probability that a susceptible individual who encounters
$K \le T$ infected individuals in $T$ time steps becomes infected is
therefore
\begin{equation}
P_{inf}(K) = \sum_{k=1}^K \binom {K}{k} p^k (1-p)^{K-k} P_k
\end{equation}
where
\begin{equation}
P_k = \int_0^{\infty} dd^* g(d^*) P\left(\sum_{i=1}^k d_i \ge d^*\right)
\end{equation}
is the average fraction of individuals infected after receiving
$k$ positive doses in $T$ time steps.
When all doses $d_i$ are identical, all members of the population have the
same threshold $d^*$, and $p<1$,
then the model reduces to the standard SIR
(see Fig.~\ref{fig:Dodds04}a).
In other cases it is a stochastic or deterministic threshold model,
depending on whether thresholds vary (see Fig.~\ref{fig:Dodds04}b)
or are all identical (see Fig.~\ref{fig:Dodds04}c).
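The dose-response mechanism is easily explored by Monte Carlo
sampling; in the following sketch the dose and threshold
distributions $f(d)$ and $g(d^*)$ are taken to be lognormal purely
for illustration:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def P_inf(K, T=10, p=0.5, runs=20000):
    # Monte Carlo estimate: doses received with probability p at each
    # of K <= T contacts, accumulated over the memory window T, and
    # compared with an individual threshold drawn from g(d*)
    K = min(K, T)
    exposed = rng.random((runs, K)) < p
    doses = rng.lognormal(0.0, 0.5, size=(runs, K))
    D = (exposed * doses).sum(axis=1)      # accumulated dose D_i
    dstar = rng.lognormal(1.0, 0.5, size=runs)
    return (D >= dstar).mean()

for K in range(11):
    print(K, round(P_inf(K), 3))
\end{verbatim}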
Adding a probability $\rho$ that a recovered individual becomes
susceptible again leads to a SIRS-like dynamics. Setting $r=1$ and
$\rho=1$ gives a SIS-like model, for which the stationary fraction of
active nodes as a function of $p$ is the order parameter. Three
qualitatively different shapes of the phase-diagram are found,
depending only on $T$, $P_1$, and $P_2$, the probabilities that an
individual will become infected as a result of one and two exposures,
respectively. If $P_1>P_2/2$ there is a standard epidemic transition
between an absorbing healthy phase and an active infected one. The
phenomenology is the same as that of SIS, indicating that successive exposures
are effectively independent. The two other possible behaviors both
exhibit a discontinuous phase transition for finite $p$, differing
in the sensitivity with respect to the size of the initial seed.
By means of an analytical approach for locally tree-like networks,
\citet{Gleeson2007} extended Watts' approach to consider
a finite fraction of initiators $p^{in}$. It turns out that this change
may have dramatic effects on the location of the transitions as a
function of $\av{k}$ and even make the transition for small $\av{k}$
discontinuous.
\citet{Singh2013} have shown that for any $\phi<1$ there is a critical
value $p^{in}_{c}(\phi)$ such that for $p^{in}>p^{in}_c(\phi)$ the cascades are
global.
Further work along the same lines has generalized
the analytical treatment to modular
networks~\citep{Gleeson2008}, degree-correlated
networks~\citep{Gleeson2008,Dodds2009} and to networks with tunable
clustering~\citep{Hackett2011}. In the latter case, it turns out that
for large and small values of $\av{k}$ clustering reduces the
size of cascades, while the converse occurs for intermediate
values of the average degree.
Watts' threshold model has been extended in many directions,
to take into account other potentially relevant effects that may
influence the spreading process.
Interaction patterns described by layered
networks are found to increase the cascade size~\citep{Brummitt2012}
while the consideration of temporal networks~\citep{Holme:2011fk}
with the associated bursty activity of individuals may either
facilitate~\citep{Takaguchi2012} or hinder~\citep{Karimi2013}
the spreading process.
Watts' model on a basic two-community network is considered
in~\citet{Galstyan2007}.
Finally it is worth mentioning the work of~\citet{Lorenz2009}
which proposes a very general classification of models for cascades,
including, among many others, standard epidemic models and Watts' model
as particular cases.
A large interest in threshold models has also been spurred by the
goal of identifying influential spreaders, i.e.
the starting nodes which maximize the size of cascades, a
topic of interest also for traditional epidemic models
(see Section~\ref{sec:5.A}).
\citet{Kempe2003} show that the problem of finding
the set of initiator nodes such that the total size of the cascade
is maximal~\citep{Domingos2001} is NP-hard, both for linear
threshold models and for an independent cascade model,
which is essentially an inhomogeneous SIR.
Moreover, they provide a greedy hill-climbing algorithm
that provides an efficient approximation
to the NP-hard solution, outperforming random choice as well as
choices based on degree centrality and distance centrality, when
tested on some empirical networks.
The method of \citet{Kempe2003} is computationally costly. An improvement
which makes it much faster is provided by~\citet{Kimura2009}.
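A minimal sketch of the greedy strategy, using an independent-cascade
dynamics on a synthetic network (all parameters are illustrative, with
far fewer Monte Carlo samples than a production implementation would
use), reads:
\begin{verbatim}
import networkx as nx
import random

def cascade_size(g, seeds, p, rng):
    # one independent-cascade realization started from `seeds`
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in g.neighbors(u):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def greedy_seeds(g, k, p=0.05, runs=100, seed=0):
    # hill climbing: repeatedly add the node with the largest
    # Monte Carlo estimate of the expected cascade size
    rng = random.Random(seed)
    chosen = []
    for _ in range(k):
        best, best_val = None, -1.0
        for v in g:
            if v in chosen:
                continue
            val = sum(cascade_size(g, chosen + [v], p, rng)
                      for _ in range(runs)) / runs
            if val > best_val:
                best, best_val = v, val
        chosen.append(best)
    return chosen

g = nx.barabasi_albert_graph(200, 3, seed=1)
print(greedy_seeds(g, k=3))
\end{verbatim}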
\subsection{Rumor spreading}
Models for rumor spreading are variants of the SIR model for disease
epidemics in which the recovery process does not occur spontaneously,
but rather is a consequence of interactions. The basic idea behind this
modification is that it is worth propagating a rumor as long as it is
novel for the recipient: If the spreader finds that the recipient
already knows the rumor he/she might lose interest in spreading it any
further. The formalization of this process is due
to~\citet{Daley1964,Daley1965}; individuals can be in one of three
possible states\footnote{For consistency, we use the same symbols of the
SIR model.}: ignorant (S, equivalent to susceptible in SIR), spreader
(I, equivalent to infected) and stifler (R, equivalent to removed). The
possible events, and the corresponding rates are: \begin{equation} \left\{
\begin{array}{rcl}
S + I & \xrightarrow{\beta} & 2 I \\
R + I & \xrightarrow{\alpha} & 2 R \\
2 I & \xrightarrow{\alpha} & 2 R
\end{array}
\right. .
\label{DK}
\end{equation}
In a slightly distinct version, introduced by~\citet{Maki1973},
the third process is different: when a spreader
contacts another agent and finds it in state I, only the former
turns into a stifler, the latter remaining unchanged, i.e. the third
process is
\begin{equation}
2 I \xrightarrow{\alpha} R + I .
\label{MT}
\end{equation}
As for the SIR model, starting from a single informed individual the
rumor propagates through the network with an increase in the number of
spreaders. Asymptotically all spreaders turn into stiflers and in the
final absorbing state there are only ignorants or stiflers. The
``reliability'', i.e. the fraction $r_{\infty}$ of stiflers in this
asymptotic state, quantifies whether the rumor remains localized
($r_{\infty} \to 0$ for system size $N \to \infty$) or spreads
macroscopically. The solution of both versions of the model on the
complete graph~\citep{Sudbury1985,barratbook} gives the whole temporal
evolution of the reliability, yielding $r_{\infty}$ as the solution of
\begin{equation} r_{\infty} = 1 - e^{-(1+\beta/\alpha) r_{\infty}}
\label{MT_r}
\end{equation}
As a consequence, $r_{\infty}$ is positive for any $\beta/\alpha>0$,
i.e., the rumor spreads macroscopically for any value of the
spreading parameters, at odds with what happens for the SIR dynamics,
which has a finite threshold for homogeneous networks.
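Equation~\eqref{MT_r} is readily solved by fixed-point iteration, as
in the following sketch:
\begin{verbatim}
from math import exp

def reliability(beta_over_alpha, tol=1e-12):
    # fixed-point iteration for r = 1 - exp(-(1 + beta/alpha) r)
    c = 1.0 + beta_over_alpha
    r = 0.5                    # start away from the trivial root r = 0
    while True:
        r_new = 1.0 - exp(-c * r)
        if abs(r_new - r) < tol:
            return r_new
        r = r_new

# a finite fraction of stiflers for any beta/alpha > 0
for x in (0.1, 0.5, 1.0, 2.0):
    print(x, reliability(x))
\end{verbatim}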
Since models for disease epidemics are strongly affected by complex
topologies, it is natural to ask what happens for rumor dynamics.
When the Maki-Thompson model is simulated on scale-free networks it turns
out that heterogeneity hinders the propagation dynamics by reducing
the final reliability $r_{\infty}$, still without introducing a finite
threshold~\citep{Liu2003,Moreno2004a,Moreno2004b}.
Why this happens is easily understood: large hubs are rapidly reached by
the rumor, but then they easily turn into stiflers, thus preventing the
further spreading of the rumor to their many other neighbors. This is
confirmed by the observation that the density of ignorants of degree $k$
at the end of the process decays exponentially with
$k$~\citep{Moreno2004a}. Degree-based mean-field
approaches~\citep{Nekovee2007,Zhou2007} are in good agreement with the
numerical findings. The phenomenology of rumor spreading is markedly
different from the behavior of the SIR model and this is due to the
healing mechanism involving two individuals, present in both
Maki-Thompson and Daley-Kendall dynamics. If spontaneous recovery is
also allowed with rate $\mu$, justified as the effect of forgetting, it
turns out that the model behaves exactly as SIR: macroscopic spreading
occurs only above a threshold inversely proportional to the second
moment $\av{k^2}$, which then vanishes in the large network size limit
for scale-free networks~\citep{Nekovee2007}. Again the interpretation of
this outcome is not difficult: the forgetting term is linear in the
density of spreaders and thus dominates for small densities, since the
healing terms, due to the processes in Eqs.~(\ref{DK}) and~(\ref{MT}),
are quadratic.
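The hub-stifling mechanism can be observed directly in agent-based
simulations. The following sketch runs the Maki-Thompson dynamics on
a scale-free network (sizes and rates are illustrative):
\begin{verbatim}
import networkx as nx
import random

def maki_thompson(g, lam=1.0, alpha=1.0, seed=0):
    # agent-based Maki-Thompson run: a random spreader contacts a
    # random neighbor; an ignorant becomes a spreader with prob. lam,
    # while contacting a spreader/stifler turns the caller into a
    # stifler with prob. alpha; returns the final stifler density
    rng = random.Random(seed)
    state = dict.fromkeys(g, "S")          # S = ignorant, as in the text
    first = rng.choice(list(g))
    state[first] = "I"                     # I = spreader
    spreaders = {first}
    while spreaders:
        u = rng.choice(list(spreaders))
        v = rng.choice(list(g.neighbors(u)))
        if state[v] == "S":
            if rng.random() < lam:
                state[v] = "I"
                spreaders.add(v)
        elif rng.random() < alpha:
            state[u] = "R"                 # R = stifler
            spreaders.discard(u)
    n = g.number_of_nodes()
    return sum(1 for s in state.values() if s == "R") / n

g = nx.barabasi_albert_graph(2000, 3, seed=2)
print(sum(maki_thompson(g, seed=s) for s in range(10)) / 10)
\end{verbatim}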
When the pattern of interactions among individuals is given by the
Watts-Strogatz topology, rumor dynamics gives rise to a nontrivial
phenomenon: a phase transition occurring at a critical value of the
rewiring probability $p$~\citep{Zanette2001}. For large values of $p$
the network is essentially random and the rumor reaches a finite
fraction of the vertices. For small values of $p$ the spreading occurs
only in a finite neighborhood of the initiator, so that the density of
stiflers vanishes with the system size. In other transitions occurring
on the Watts-Strogatz network, the critical point scales to zero with
the system size $N$, a consequence of the fact that the geometric
crossover between a one-dimensional lattice and a small-world structure
scales as $1/N$~\citep{watts98}. Strikingly instead, the threshold
$p_c$ for macroscopic rumor spreading converges to a finite value as the
system size grows. This indicates that the transition cannot be
explained only in geometrical terms; some nontrivial interplay between
topology and dynamics is at work. Interestingly, the transition at
finite $p_c$ persists also when an annealed Watts-Strogatz network is
considered.
Recently, some activity has been devoted to the investigation of the
role of influential spreaders in rumor spreading, in analogy to what has
been done for disease epidemics (Sec.~\ref{sec:5.A}).
\citet{Borge-Holthoefer2012a} have looked for the role of nodes with
large $K$-core index for the Maki-Thompson dynamics on several empirical
networks. It turns out that the final density of stiflers does not
depend on the $K$-core value of the initiator. Nodes with high $K$-core
index are not good spreaders; they are reached quickly by the rumor and
short-circuit its further spreading. An empirical investigation of
cascades on the Twitter social network~\citep{Borge-Holthoefer2012b}
points out instead that privileged spreaders (identified by large degree
$k$ or large $K$) do exist in real world spreading phenomena, in
patent contrast with the predictions of rumor spreading models. To
reconcile theoretical predictions and empirical observations it is
necessary to amend Maki-Thompson dynamics. Two possible modifications
are proposed in~\citet{Borge-Holthoefer2012c}. In one case individuals
are not always active and do not further spread tweets reaching them
while inactive. In the second an ignorant contacted by a spreader turns
into a spreader only with probability $p$, while with probability
$(1-p)$ it turns {\em directly} into a stifler. Both modified rumor
spreading models are able to reproduce qualitatively the empirical
findings, provided (for the first) that the probability to be active is
proportional to the node degree or (for the second) that the probability
$p$ to actually spread is very small (of the order of $10^{-3}$).
\subsection{Empirical studies}
Empirical data for a large number of spreading processes in the
real world have been analyzed in terms of epidemic-like phenomena.
Here we outline some of the most important contributions.
\citet{Leskovec2007a} analyze an instance of viral
marketing, in the form of the email recommendation network for products
of a large retailer. There are large variations depending on the type
of goods recommended, their price, and the community of customers targeted,
but in general recommendations turn out not to be
very effective and cascades of purchases are not very extended.
The key factor, different from disease epidemics, is that
the ``infection probability'' quickly saturates to a low value
with the number of recommendations received. Moreover, as an
individual sends more and more recommendations the success per
recommendation declines (high-degree individuals are not especially influential).
Overall, viral marketing is very different from epidemic-like spreading.
A case where cascades are large and the spreading is a real collective
phenomenon is the propagation of chain letters on the
internet. \citet{Liben-Nowell2008} found tree-like dissemination patterns,
very deep but not wide. A simple epidemic-like model,
with an individual having a probability to forward
the message to a fraction of his/her contacts, gives instead wide and
shallow trees. More realistic propagations are obtained introducing
two additional ingredients, asynchronous response times and
"back-response''~\citep{Liben-Nowell2008}.
Cascading behavior in large blog graphs is actively
investigated~\citep{Gruhl2004,Adar2005}.
\citet{Leskovec2007} found that in this case cascades
tend to be wide, not deep, with a size distribution
following a power law with slope $-2$.
The shape of cascades is often star-like.
A single-parameter generative model (essentially a SIS-like
model in the absorbing phase)
is in good agreement with empirical observations regarding
frequent cascades shapes and size distributions.
The behavior of individuals is also subject to social
influence, thus giving rise to collective spreading.
Obesity, smoking habits and even
happiness~\citep{Christakis2007,Christakis2008,Fowler2008}
have been claimed to spread as epidemics in social networks
(see however~\citet{Shalizi2011} for a criticism of these results).
In a nice empirical investigation \citet{Centola2010}
analyzed an artificially structured online community, devised
to check whether spreading is favored by random unclustered
structures (as in the ``strength of weak ties''
hypothesis~\citep{Granovetter1973}) or by clustered ones with
larger diameter~\citep{Centola2007a}.
The latter structures turn out to favor spreading, the more so for
increasing degree. At the individual level, the presence of 2 or 3
neighbors adopting a behavior leads to an increase in the probability of
doing the same. For 4 or more neighbors the probability remains
instead constant.
For a long time empirical investigations of spreading phenomena
suffered from the drawback that the network mediating the propagation
was unknown and its properties had to be in some way guessed from how
the spreading process itself unfolds. Online social networks, (such
as Facebook and Twitter) are an ideal tool to bypass this problem as
they provide both the topology of existing connections and the actual
path followed by the spreading process on top of the contact
graph~\citep{Lerman10icwsm}. In one such social network (Digg),
\citet{VerSteeg2011} find that while the network of contacts has a
scale-free degree distribution, the size of cascades is lognormally
distributed, with essentially all propagations limited to a fraction
smaller than $1\%$ of the whole network. Within the framework of a
SIR model this would imply that the spreading parameter of each
cascade is fine-tuned around the transition point. Two additional
ingredients help to reconcile the empirical findings with models: on
the one hand the Digg contact network has high clustering and this
feature leads to a reduction of outbreak size; on the other hand, as
in~\citet{Centola2010}, the probability to transmit the spreading
quickly saturates as a function of the number of active
neighbors.
Another empirical investigation of Digg~\citep{Doerr2012} (see also
\citet{PVM_lognormal_Digg_EJPb2011}) finds that links between friends
in the social network contribute surprisingly little to the
propagation of information.
Another critical element of the spreading of memes in modern
online social networks is the competition among a large number
of them. \citet{Weng2013} have analyzed Twitter, finding a very
broad variability of the lifetime and popularity of spreading
memes. A minimalistic model, based on the heterogeneous structure
of Twitter graphs of followers and on ``limited attention'', i.e.
the survival of memes in agents' memory for only a finite time due
to competition with others,
is sufficient to reproduce the empirical findings. Surprisingly,
it is not necessary to assume a variability in the intrinsic appeal
of memes to explain the very heterogeneous persistence and popularity
of individual memes.
Another information spreading experiment was performed by
\citet{Iribarren2009}, in which subscribers to an online newsletter in
11 European countries were offered a reward to recommend it via
email. The recommendations were tracked at every step of the
viral propagation and it was thus possible to reconstruct the
recommendation cascades originated by 7154 initiators. The topology
of the observed cascades is essentially tree-like, in agreement with
the results of \citet{Liben-Nowell2008}, and of very small size,
suggesting again a behavior at or below a possible critical point. The
heterogeneity of the viral spreading process was quantified by looking
at the distribution of time elapsed between receiving an invitation
email and forwarding it to other individuals. This distribution can
be fitted to a long-tailed log-normal form. On the other hand, the
average number of informed individuals forwarding the message at time
$t$ was also found to decay slowly (with a log-normal shape), in
contrast with the exponential decay expected in epidemics below the
threshold. Similar results were reported for the retweet time of
Twitter messages, see \citet{PVM_Twitter_lognormal}.
\section{Outlook}
In the last years the whole field of epidemic modeling in networks has
enormously progressed in the understanding of the interplay between
network properties and contagion processes. We hope to have fairly
portrayed the major advances and achieved clarity of presentation on
the various theoretical and numerical approaches in a field that has grown explosively. However, the results and
understanding achieved so far have opened the door to new questions and
problems, often stimulated by the availability of new data. For this
reason, the research activity has not slowed its pace and there are
still a number of major challenges ahead.
As shown in the previous sections, we are just taking the first
steps toward uncovering the mathematical and statistical laws that
characterize the co-evolution mechanisms between the network evolution
and the dynamical process. This is a key element in most social
networks, where it is almost impossible to disentangle the agents'
cognitive processes shaping the network evolution from their
perception/awareness of the contagion processes.
Indeed, the adaptive
behavior of individuals in response to the dynamical processes they are
involved in represents a serious theoretical challenge dealing with the
feedback among different and competing dynamical processes.
For instance, some activity has already been devoted to coupled
behavior-disease models and to the competition among different
contagion processes in networks, as reviewed in the previous sections,
but much more work is needed to build a comprehensive picture.
The final goal is not only to understand epidemic processes, and
predict their behavior, but also to control their dynamics. The
development of strategies for favoring or hindering contagion
processes is crucial in a wide range of applications that span from
the optimization of disease containment and eradication to viral
marketing. Also in this case, much more work is needed
to investigate how co-evolution and feedback mechanisms between the
network evolution and the spreading dynamics affect our influence and
ability to control epidemic processes.
Networks also show a large number of interdependencies of various
kinds: physical interdependency when energy, material or people flow
from one infrastructure to another; cyber interdependency when
information is transmitted or exchanged; geographic interdependency
signaling the co-location of infrastructural elements; logical
interdependency due to financial, political coordination, etc.
Interdependence is obviously a major issue also in diffusion and spreading processes. One simple example is
provided by the spreading of information in communication networks
that induces an alteration of the physical proximity contact pattern of
individuals or of the flows and traffic of mobility infrastructure. This
has triggered interest in the understanding of contagion processes in
coupled interdependent networks~\cite{interdependent12}. More broadly,
the community is becoming aware that, especially in the area of modern
social networks populating the information technology ecosystem,
epidemic spreading may occur on different interacting networks that
affect each other. This is obviously the case for information
processes, where different types of social communication networks (phone,
real-world, digital) coexist and contribute to the spreading
process. This evidence has led recently to the introduction of
multilayer or multiplex networks \cite{Kivela2013,Boccaletti2014}. Multiplex networks
are defined by a set of $N$ nodes and a set of $L$ layers, which
represent ``dimensions'' or ``aspects'' that characterize a node. A
node can belong to any subset of layers, and edges represent
interactions between nodes belonging to the same layer. We can
consider that a vertex is connected to itself across the different
layers, or allow for inter-layer connections between nodes in
different layers. Every layer is represented thus by a network, and
the whole multiplex by a set of interconnected networks. The
analysis of epidemic processes in these networks shows very
interesting and peculiar behaviors. Several studies have focused on physical-information layered
networks and studied the epidemic dynamics on the different layers as a function of the
inter-layer coupling and the epidemic threshold values on each layer~\cite{Marceau11,Yagan2013,Buono2014}.
For the SIR model it is also observed that, depending on the average degree of
inter-layer connections~\cite{Dickison2012}, a
global endemic state may arise in the interconnected system even if no
epidemics can survive in each network
separately~\cite{Saumell-Mendiola2012,DarabiSahneh_Scogio_interdependentnets2013}. SIS
dynamics on multiple coupled layers is also analyzed
by~\citet{Cozzo2013} and by
\citet{DarabiSahneh_Scogio_interdependentnets2013} in a generalized
mean-field framework. However, epidemic behavior on multiplex networks
is still largely unexplored for more complex models, complex contagion
phenomena and in data-driven settings.
The ever-increasing computational power is also favoring very detailed models that simulate large-scale
population networks, including geographic and demographic
attributes on an individual by individual basis. These models
can generate information at an unprecedented level of detail and guide
researchers in identifying typical non-linear behavior and
critical points that often challenge our intuition. These results
call for a theoretical understanding and a systematic classification
of the models' dynamical behaviors, thus adding transparency to the
numerical results. These results raise new general questions, such as:
What are the fundamental limits in the predictability of epidemics on
networks? How does our understanding depend on the level of data
aggregation and detail? What is the impact of the knowledge on the
state and initial conditions of the network on our understanding of
its dynamical behavior? These are all major conceptual and technical
challenges that require the involvement of a vast research community
and a truly interdisciplinary approach, rooted in the combination of
large-scale data mining techniques, computational methods and
analytical techniques.
The study of epidemic spreading is a vibrant research area that is
finding more and more applications in a wide range of domains. The need
for quantitative and mathematical tools able to provide understanding
in areas ranging from infectious diseases to viral marketing is
fostering intense research activity at the forefront of the
investigation of epidemic
spreading in networks. We hope that the present review will be a
valuable reference for all researchers who engage in this field.
\section*{Acknowledgments}
R.P.-S. acknowledges financial support from the Spanish MINECO, under
projects Nos. FIS2010-21781-C02- 01 and FIS2013-47282-C2-2, EC
FET-Proactive Project MULTIPLEX (Grant No. 317532), and ICREA Academia,
funded by the Generalitat de Catalunya. P.V.M. was partially funded by
the EU CONGAS project (no. 288021). A.V. has been partially funded by
the DTRA-1-0910039, NSF CMMI-1125095, MIDAS-National Institute of
General Medical Sciences U54GM111274 awards. The views and conclusions
contained in this document are those of the authors and should not be
interpreted as representing the official policies, either expressed or
implied, of the funding agencies or the U.S. Government. We thank Nicole
Samay for help with the diagrams and figures.
\section{Introduction}
Convex optimization problems arise in a diverse array
of engineering applications, including signal processing~\cite{luo06},
robotics~\cite{schulman14,mellinger11},
communications~\cite{chiang05},
machine learning~\cite{shalev12},
and many others~\cite{boyd04}. In all of these areas,
problems can become very large as the number
of network members (robots, processors, etc.) becomes
large.
Accordingly, there has arisen interest in solving
large-scale optimization problems.
A common feature of large-scale solvers is that they are parallelized
or distributed among a collection of agents in some way.
As the number of agents grows,
it can be difficult or impossible to ensure synchrony
among distributed computations and communications, and
there has therefore arisen interest in distributed
asynchronous optimization algorithms.
One line of research considers asynchronous optimization
algorithms in which agents' communication topologies
vary in time.
A representative sample of this work
includes~\cite{chen12,zhu12,nedic10,nedic09,ram10,lobel11},
and these algorithms
all rely on an underlying averaging-based update law, i.e.,
different agents update the same decision variables and then
repeatedly average their iterates to mitigate disagreements
that stem from asynchrony.
These approaches (and others in the literature)
require some form of graph connectivity over intervals of a
finite length.
In this paper, we are interested in cases in which delay bounds are
outside agents' control, e.g.,
due to environmental hazards and adversarial jamming for
a team of mobile autonomous agents.
In these settings,
verifying graph connectivity can be difficult for single
agents to do, and it may not be possible to even check that
connectivity assumptions are satisfied over prescribed intervals.
Furthermore, even if such checking is possible, it will
be difficult to reliably attain connectivity over the required intervals with
unreliable and impaired communications. For multi-agent systems with
impaired communications, we are interested in developing an
algorithmic framework that succeeds without requiring
delay bounds.
In this paper, we develop a totally asynchronous quadratic
programming (QP) framework for multi-agent optimization.
Our interest in quadratic programming is motivated
by problems in robotics~\cite{mellinger11} and
data science~\cite{rodriguez10},
where some standard problems can
be formalized as QPs.
The ``totally asynchronous'' label originates
in~\cite{bertsekas1989parallel}, and it
describes a class of algorithms that tolerate arbitrarily
long delays, as ours does. In addition,
our developments will use block-based update laws in
which each agent updates only a small subset
of the decision variables in a problem, which reduces
each agent's computational burden and, as we will show,
reduces its onboard storage requirements as well.
Other work
on distributed quadratic programming
includes~\cite{carli15,teixeira13,lee15,lee2015convergence,kozma2013distributed,todescato2015robust}.
Our work differs from these existing results because we
consider non-separable objective functions, and
because we consider
unstructured update laws (i.e., we do not require communications
and computations to occur in a particular sequence or pattern).
Furthermore, we consider only deterministic problems, and our framework
converges exactly to a problem's solution, whereas some existing
works consider stochastic problems and converge approximately or in an appropriate statistical sense. This work is also related to distributed solutions of systems of linear equations, e.g.,~\cite{wang2019solving}, because the gradient of a quadratic function is linear. However, methods for such problems are not readily applicable here because of the set constraints we consider.
Asynchrony in agents' communications and computations
implies that they will send and receive different information
at different times. As a result, they will disagree about
the values of decision variables in a problem.
Just as it is difficult for agents to agree on this information,
it can also be difficult to agree on a stepsize value in
their algorithms. One could envision a network of agents solving
an agreement problem, e.g.,~\cite{ren05}, to compute a common
stepsize,
though we instead allow agents to independently choose
stepsizes, subject to mild restrictions,
thereby eliminating the need to reach agreement
before optimizing.
It has been shown that
regularizing problems can endow them with an inherent robustness
to asynchrony and improved convergence
properties, e.g.,~\cite{koshal11,hale15,hale2017asynchronous}.
Although regularizing is not required here,
we show, in a precise sense, that regularizing improves
convergence rates of our framework as well.
It is common
for regularization-based approaches to require agents to
use the same regularization parameter, though this is undesirable
for the same reasons as using a common stepsize. Therefore,
we allow agents to independently choose regularization
parameters as well.
To the best of our knowledge,
few works have considered both independent stepsizes
and regularizations. The most relevant is~\cite{koshal11},
which considers primal-dual algorithms for
problems with functional constraints and synchronous primal updates.
This paper is different in that we consider
set-constrained problems with totally asynchronous updates, in addition to unconstrained problems.
Regularizing introduces errors in a solution,
and we bound these errors in terms of agents' regularization
parameters.
A preliminary version of this work appeared in~\cite{ubl2019totally}; however, this version further includes distributed regularization selection rules for convergence rate and error bound satisfaction, along with new error bounds and simulation results.
The rest of the paper is organized as follows.
Section~\ref{sec:background} provides background on QPs
and formal problem statements.
Then, Section~\ref{sec:update} proposes an update law
to solve the problems of interest, and
Section~\ref{sec:convergenceproof} proves its convergence.
Next, Section~\ref{sec:convergencerate} derives a convergence rate, and Section~\ref{sec:regandconvergencerate} then quantifies the effect of regularizations on the convergence rate. Section~\ref{sec:abserrbound} provides an absolute error bound in terms of
agents' regularizations for a set-constrained problem, while Section~\ref{sec:regerrbound} provides a relative error bound for the unconstrained case. Section~\ref{sec:simulation} next illustrates these results in simulation. Finally, Section~\ref{sec:conclusions} concludes the paper.
\section{Background and Problem Statement} \label{sec:background}
In this section, we describe the quadratic optimization
problems to be solved, as well as the assumptions imposed upon these problems
and the agents that solve them. We then describe agents'
stepsizes and regularizations and introduce the need to allow agents
to choose these parameters independently. We next describe the benefits
of independent regularizations, and give two formal problem
statements that will be the focus of the remainder of the paper.
\subsection{Quadratic Programming Background}
We consider a quadratic optimization problem distributed across a
network of $N$ agents, where agents are indexed over $i\in[N]:=\{1,...,N\}$.
Agent $i$ has a decision variable $x^{[i]}\in\mathbb{R}^{n_{i}},n_{i}\in\mathbb{N}$,
which we refer to as its state, and we allow for $n_{i}\neq n_{j}$
if $i\neq j$. The state $x^{[i]}$ is subject to the set constraint
$x^{[i]}\in X_{i}\subset\mathbb{R}^{n_{i}}$, and we make the following
assumption about each $X_{i}$.
\begin{assumption} \label{asm:setconst}
For all $i\in[N]$, the set $X_{i}\subset\mathbb{R}^{n_{i}}$
is non-empty, compact, and convex. $\hfill\triangle$
\end{assumption}
We define the network-level constraint set ${X:=X_{1}\times\cdots\times X_{N}}$,
and Assumption~\ref{asm:setconst} implies that $X$ is non-empty, compact, and convex.
We further define the global state as $x:=\left({x^{[1]}}^{T},...,{x^{[N]}}^{T}\right)^{T}\in X\subset\mathbb{R}^{n}$,
where $n=\sum_{i\in[N]}n_{i}$. We consider quadratic objectives
\begin{equation}
f(x):=\frac{1}{2}x^{T}Qx+r^{T}x,
\end{equation}
where $Q\in\mathbb{R}^{n\times n}$ and $r\in\mathbb{R}^{n}$. We
then make the following assumption about $f$.
\begin{assumption} \label{asm:Qsymmetric}
In $f$, $Q$ is symmetric. $\hfill\triangle$
\end{assumption}
Note that Assumption~\ref{asm:Qsymmetric} holds without loss of generality because a non-symmetric $Q$ will have only its symmetric part contribute to the value of the quadratic form that defines $f$. Because
$f$ is quadratic, it is twice continuously differentiable, which
we indicate by writing that $f$ is $C^{2}$. In addition, ${\nabla f=Qx+r}$,
and~$\nabla f$
is therefore Lipschitz with constant $\|Q\|_{2}$. It is common to assume outright that $Q$ is positive definite, though here we instead derive positive definiteness from an assumption on the block structure of agents' updates.
In this paper,
we divide $n\times n$ matrices into blocks. Given a matrix $B\in\mathbb{R}^{n\times n}$,
where $n=\sum_{i=1}^{N}n_{i}$, the $i^{th}$ block of $B$, denoted
$B^{[i]}$, is the $n_{i}\times n$ matrix formed by rows of $B$
with indices $\sum_{k=1}^{i-1}n_{k}+1$ through $\sum_{k=1}^{i}n_{k}$.
In other words, $B^{[1]}$ is the first $n_{1}$ rows of $B$, $B^{[2]}$
is the next $n_{2}$ rows, etc. Similarly, for a vector $b$, $b^{[1]}$
is the first $n_{1}$ entries of $b$, $b^{[2]}$ is the next $n_{2}$
entries, etc. We further define the notation of a sub-block $B^{[i]}_j$, where $B^{[i]} = \left[B^{[i]}_1 \textnormal{ }B^{[i]}_2 \textnormal{ ... } B^{[i]}_N\right]$. That is, $B^{[i]}_{1}$ is the first $n_{1}$ columns of $B^{[i]}$, $B^{[i]}_{2}$ is the next $n_{2}$ columns, etc. For notational simplicity, $B=\left[B^{[i]}_{j}\right]_{p}$ means the matrix $B$ has been partitioned into blocks according to the partition vector $p := [n_{1}, n_{2}, \dots, n_{N}]^{T}$. That is,
\begin{equation*}
B = \left[B^{[i]}_{j}\right]_{p}
= \begin{bmatrix}
B^{[1]} \\
B^{[2]} \\
\vdots \\
B^{[N]} \\
\end{bmatrix} =
\begin{bmatrix}
B^{[1]}_{1} & B^{[1]}_{2} & \dots & B^{[1]}_{N} \\
B^{[2]}_{1} & B^{[2]}_{2} & \dots & B^{[2]}_{N} \\
\vdots & \vdots & \ddots & \vdots \\
B^{[N]}_{1} & B^{[N]}_{2} & \dots & B^{[N]}_{N} \\
\end{bmatrix},
\end{equation*}
where $B^{[i]}_{j} \in \mathbb{R}^{n_{i} \times n_{j}}$ for all $i,j \in [N]$.
Previous work has shown that totally asynchronous algorithms may diverge if $Q$ is not diagonally dominant~\cite[Example 3.1]{bertsekas1989parallel}. While enforcing this condition is sufficient to ensure a totally asynchronous update scheme will converge, in this paper we will instead require the weaker condition of block diagonal dominance.
\textit{Definition 1:} Let the matrix $B = \left[B^{[i]}_{j}\right]_{p}$, where $p = [n_{1},n_{2},\dots,n_{N}]^{T}$ is given by the dimensions of agents' states above. If the diagonal sub-blocks $B^{[i]}_{i}$ are nonsingular and if
\begin{equation} \label{eqn:blockdiagdom}
\left(\left\|B^{[i]^{-1}}_{i}\right\|_{2}\right)^{-1} \geq \sum^{N}_{\substack{j=1 \\ j\neq i}}\left\|B^{[i]}_{j}\right\|_{2} \textnormal{ for all } i \in [N],
\end{equation}
then $B$ is said to be \textit{block diagonally dominant} relative to the choice of partitioning $p$. If the inequality in Equation~\eqref{eqn:blockdiagdom} holds strictly for all $i \in [N]$, then $B$ is \textit{strictly block diagonally dominant} relative to the choice of partitioning $p$. \hfill$\blacktriangle$
In later analysis, we will use the gap between the left and right hand side of Equation~\eqref{eqn:blockdiagdom}, which we define as
\begin{equation}
\delta_{i}(B) = \left(\left\|B^{[i]^{-1}}_{i}\right\|_{2}\right)^{-1} - \sum^{N}_{\substack{j=1 \\ j\neq i}}\left\|B^{[i]}_{j}\right\|_{2}.
\end{equation}
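As a numerical illustration of this partitioning and of Definition 1, the following Python sketch (our own construction, using NumPy; the function names are hypothetical) extracts the sub-blocks $B^{[i]}_{j}$ from a partition $p$ and computes each gap $\delta_{i}(B)$; $B$ is strictly block diagonally dominant relative to $p$ exactly when every returned gap is positive.
\begin{verbatim}
import numpy as np

def blocks(B, p):
    """Sub-blocks B^[i]_j of B for the partition p = [n_1, ..., n_N]."""
    idx = np.cumsum([0] + list(p))
    return [[B[idx[i]:idx[i+1], idx[j]:idx[j+1]]
             for j in range(len(p))] for i in range(len(p))]

def gaps(B, p):
    """Gaps delta_i(B); all positive iff B is strictly block
    diagonally dominant relative to p (Definition 1)."""
    Bb, N = blocks(B, p), len(p)
    return [1.0 / np.linalg.norm(np.linalg.inv(Bb[i][i]), 2)
            - sum(np.linalg.norm(Bb[i][j], 2) for j in range(N) if j != i)
            for i in range(N)]
\end{verbatim}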
Note that if $p = [1, 1, \dots, 1]^{T}$, Definition 1 reduces to diagonal dominance in the usual sense. We now make the following assumption:
\begin{assumption} \label{asm:Qbdd}
In $f$, $Q=\left[Q^{[i]}_{j}\right]_{p}$ is strictly block diagonally dominant, where $p = [n_{1}, n_{2}, \dots, n_{N}]^{T}$, and $n_{i}$ is the length of $x^{[i]}$ for all $i \in [N]$. $\hfill\triangle$
\end{assumption}
Note also that from Theorem 2 in \cite{feingold1962block}, if Assumptions~\ref{asm:Qsymmetric} and~\ref{asm:Qbdd} hold for a matrix $B$, then $B$ is also positive definite. Therefore Assumptions~\ref{asm:Qsymmetric} and~\ref{asm:Qbdd} imply that $Q \succ 0$, which renders $f$ strongly convex.
\subsection{Problem Statements}
Following our goal of reducing parametric coupling between agents,
we wish to allow agents to select stepsizes independently. In particular, we wish for the stepsize for block $i$, denoted $\gamma_{i}$, to be chosen using only knowledge of $Q^{[i]}$. The selection of $\gamma_{i}$ should not depend on any other block $Q^{[j]}, j \neq i$, or any stepsize choice, $\gamma_{j}$, for any other block.
Allowing independent stepsizes will preclude the need for agents to
agree on a single value before optimizing. The following problem will be one focus of the remainder of
the paper.
\textit{Problem 1:} Design a totally asynchronous distributed optimization
algorithm to solve
\begin{equation}
\underset{x\in X}{\text{minimize}}\quad\frac{1}{2}x^{T}Qx+r^{T}x,
\end{equation}
where only agent $i$ updates $x^{[i]}$, and where agents choose stepsizes
independently. $\hfill\diamond$
While an algorithm that satisfies the conditions stated in Problem
1 is sufficient to find a solution, we wish to allow for regularizations as well.
Regularizations are commonly used for centralized quadratic programs
to improve convergence properties, and we will therefore use
them here. However, in keeping with the independence of agents' parameters,
we wish to allow agents to choose independent regularization parameters. As with stepsizes, we wish for the regularization for block $i$, denoted $\alpha_{i}$, to be chosen using only knowledge of $Q^{[i]}$. The regularized form of $f$, denoted $f_{A}$,
is
\begin{equation} \label{eqn:regform}
f_{A}(x):=f(x)+\frac{1}{2}x^{T}Ax=\frac{1}{2}x^{T}(Q+A)x+r^{T}x,
\end{equation}
where $A=\text{diag}\left(\alpha_{1}I_{n_{1}},...,\alpha_{N}I_{n_{N}}\right)$,
and where $I_{n_{i}}$ is the $n_{i}\times n_{i}$ identity matrix.
Note that $\frac{\partial f_{A}}{\partial x^{[i]}}=Q^{[i]}x+r^{[i]}+\alpha_{i}x^{[i]}$, where
we see that only $\alpha_{i}$
affects the gradient of $f$ with respect to $x^{[i]}$. With the goal of independent regularizations in mind, we now state
the second problem that we will solve.
\textit{Problem 2:} Design a totally asynchronous distributed optimization
algorithm to solve
\begin{equation}
\underset{x\in X}{\text{minimize}}\quad\frac{1}{2}x^{T}(Q+A)x+r^{T}x,
\end{equation}
where only agent $i$ updates $x^{[i]}$, and where agents independently choose their stepsizes
and regularizations. $\hfill\diamond$
Section~\ref{sec:update} specifies the structure of the asynchronous communications
and computations used to solve Problem 1, and we will solve Problem
1 in Section~\ref{sec:convergenceproof}. Afterwards, we will solve Problem 2 in Section~\ref{sec:regandconvergencerate}.
\section{Block-Based Multi-Agent Update Law} \label{sec:update}
To define the update law for each agent's state, we first
describe the information stored onboard each agent and how agents
communicate with each other. Each agent will store a vector containing
its own state and that of every agent it communicates with. Formally,
we will denote agent $i$'s full vector of states by $x_{i}$, and
this is agent $i$'s local copy of the global state. Agent $i$'s
own states in this vector are denoted by $x_{i}^{[i]}$. The current
values stored onboard agent $i$ for agent $j$'s states are denoted
by $x_{i}^{[j]}.$ In the forthcoming update law, agent $i$ will only
compute updates for $x_{i}^{[i]}$, and it will share only $x_{i}^{[i]}$
with other agents when communicating. Agent $i$ will only change
the value of $x_{i}^{[j]}$ when agent $j$ sends its own state to agent
$i$.
At time $k$, agent $i$\textquoteright s full state vector is denoted
$x_{i}(k)$, with its own states denoted $x_{i}^{[i]}(k)$ and those
of agent $j$ denoted $x_{i}^{[j]}(k)$. At any timestep, agent $i$
may or may not update its states due to asynchrony in agents\textquoteright{}
computations. As a result, we will in general have $x_{i}(k)\neq x_{j}(k)$
at all times $k$. We define the set $K^{i}$ to contain all times $k$ at which agent $i$ updates $x_{i}^{[i]}$. In designing
an update law, we must provide robustness to asynchrony while allowing
computations to be performed in a distributed fashion. First-order gradient
descent methods are robust to many disturbances, with the additional benefit of being computationally simple. Using our notation of a matrix block, we define $\nabla^{[i]}f:=\frac{\partial f}{\partial x^{[i]}}$,
and we see that $\nabla^{[i]}f(x)=Q^{[i]}x+r^{[i]}$. We propose the following update law:
\begin{equation}
x_{i}^{[i]}(k+1)=\begin{cases}
\Pi_{X_{i}}\hspace{-0.4em}\left[x_{i}^{[i]}(k)-\hspace{-0.1em}\gamma_{i}\left(Q^{[i]}x_{i}(k)+r^{[i]}\right)\right] & \hspace{-0.5em}k\in K^{i}\\
x_{i}^{[i]}(k) & \hspace{-0.5em}k\notin K^{i}
\end{cases}\hspace{-0.1em},
\end{equation}
where agent $i$ uses some stepsize $\gamma_{i}>0$. The advantage of the block-based
update law can be seen above, as agent $i$ only needs to know $Q^{[i]}$
and $r^{[i]}$. Requiring each agent to store the entirety of $Q$
and $r$ would require $O(n^{2})$ storage space, while $Q^{[i]}$
and $r^{[i]}$ only require $O(n)$. For large quadratic programs,
this block-based update law dramatically reduces each agent's onboard
storage requirements, which promotes scalability.
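As a concrete (hypothetical) instance of this update, the sketch below implements one local iteration in Python, assuming each $X_{i}$ is a box so that the projection $\Pi_{X_{i}}$ reduces to a coordinate-wise clip; all names are ours and not taken from any existing library.
\begin{verbatim}
import numpy as np

def local_update(x_i, Q_blk, r_blk, gamma_i, rows_i, lo_i, hi_i):
    """One projected-gradient update of agent i's own block.

    x_i:    agent i's local copy of the full state (length n)
    Q_blk:  the n_i-by-n block Q^[i];  r_blk: the block r^[i]
    rows_i: indices of agent i's own entries within x_i
    lo_i, hi_i: bounds defining the assumed box constraint X_i
    """
    grad_i = Q_blk @ x_i + r_blk                  # nabla^[i] f at the local copy
    candidate = x_i[rows_i] - gamma_i * grad_i
    x_i[rows_i] = np.clip(candidate, lo_i, hi_i)  # projection onto the box X_i
    return x_i
\end{verbatim}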
In order to account for communication delays, we use $\tau^{j}_{i}(k)$
to denote the time at which the value of $x_{i}^{[j]}(k)$ was originally
computed by agent $j$. For example, if agent $j$ computes a state
update at time $k_{a}$ and immediately transmits it to agent $i$,
then agent $i$ may receive this state update at time $k_{b}>k_{a}$
due to communication delays. Then $\tau^{j}_{i}$ is defined so that
$\tau^{j}_{i}(k_{b})=k_{a}$. For $K^{i}$ and $\tau^{j}_{i}$, we assume
the following.
\begin{assumption} \label{asm:infupdate}
For all $i\in[N]$, the set $K^{i}$ is infinite.
Moreover, for all $i\in[N]$ and $j\in[N]\backslash\{i\}$, if $\left\{ k_{d}\right\} _{d\in\mathbb{N}}$
is a sequence in $K^{i}$ tending to infinity, then
\begin{equation}
\lim_{d\rightarrow\infty}\tau^{j}_{i}(k_{d})=\infty. \tag*{$\triangle$}
\end{equation}
\end{assumption}
Assumption~\ref{asm:infupdate} is simply a formalization of the requirement that no
agent ever permanently stop updating and sharing its own state with
any other agent. For $i\neq j$, the sets $K^{i}$ and $K^{j}$ do
not need to have any relationship because agents' updates are asynchronous.
Our proposed update law for all agents can then be written as follows.
\textit{Algorithm 1:} For all $i\in[N]$ and $j\in[N]\backslash\{i\}$,
execute
\begin{align*}
x_{i}^{[i]}(k+1) & =\begin{cases}
\Pi_{X_{i}}\left[x_{i}^{[i]}(k)-\gamma_{i}\left(Q^{[i]}x_{i}(k)+r^{[i]}\right)\right] & \hspace{-0.5em}k\in K^{i}\\
x_{i}^{[i]}(k) & \hspace{-0.5em}k\notin K^{i}
\end{cases}\\
x_{i}^{[j]}(k+1) & =\begin{cases}
x_{j}^{[j]}\left(\tau^{j}_{i}(k+1)\right) & \hspace{-0.5em}\text{$i$ receives $j$'s state at $k+1$}\\
x_{i}^{[j]}(k) & \hspace{-0.5em}\text{otherwise}\hfill\diamond
\end{cases}
\end{align*}
In Algorithm 1 we see that $x_{i}^{[j]}$ changes only when agent $i$
receives a transmission directly from agent $j$; otherwise it remains
constant. This implies that agent $i$ can update its own state using
an old value of agent $j$\textquoteright s state multiple times and
can reuse different agents\textquoteright{} states different numbers
of times.
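To make this communication model concrete, here is a toy Python simulation of Algorithm 1 (our own construction, not code from any library) with scalar blocks, no set constraints ($X_{i}=\mathbb{R}$), and randomly timed updates and transmissions; the random activation pattern is one admissible instance of Assumption~\ref{asm:infupdate}, not a prescribed protocol.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

N = 5                                          # agents, scalar blocks
Q = rng.standard_normal((N, N)); Q = (Q + Q.T) / 2
Q += np.diag(np.abs(Q).sum(axis=1) + 1.0)      # force strict diagonal dominance
r = rng.standard_normal(N)
gamma = 1.8 / np.abs(Q).sum(axis=1)            # within the bound of Lemma 2
xhat = np.linalg.solve(Q, -r)                  # unconstrained minimizer

X = np.zeros((N, N))                           # row i = agent i's copy x_i
for k in range(20000):
    i = rng.integers(N)                        # only agent i updates: k in K^i
    X[i, i] -= gamma[i] * (Q[i] @ X[i] + r[i])
    j = rng.integers(N)                        # i's new state reaches agent j;
    X[j, i] = X[i, i]                          # all other copies stay stale
print(np.abs(X - xhat).max())                  # near zero: all copies converge
\end{verbatim}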
\section{Convergence of Asynchronous Optimization}\label{sec:convergenceproof}
In this section, we prove convergence of Algorithm 1 using a Lyapunov-like argument based on a nested sequence of sets.
We will derive stepsize bounds from this construction that will be used
to show asymptotic convergence of all agents.
\subsection{Block-Maximum Norms}
The convergence of Algorithm 1 will be measured using a block-maximum
norm as
in~\cite{bertsekas1989convergence},~\cite{bertsekas1989parallel},
and~\cite{hale2017asynchronous}. Below, we define the block-maximum
norm in terms of our partitioning vector $p$.
\textit{Definition 2:} Let $x=\left[x^{[i]}\right]_{p}\in\mathbb{R}^{n}$,
where $p = [n_{1},n_{2},\dots,n_{N}]^{T}$. The norm of the full vector $x$ is defined as the maximum 2-norm of
any single block, i.e.,
\begin{equation}
\|x\|_{2,p}:=\max_{i\in[N]}{\|x^{[i]}\|_{2}}. \tag*{$\blacktriangle$}
\end{equation}
The following lemma allows us to upper-bound the induced block-maximum
matrix norm by the norms of the individual blocks.
\begin{lemma} \label{lem:normsumbound}
For the matrix $B=\left[B^{[i]}_{j}\right]_{p}$ and induced matrix norm $\|B\|_{2,p}$,
\begin{equation}
\|B\|_{2,p} \leq \max_{i \in [N]}\sum^{N}_{j=1}\left\|B^{[i]}_{j}\right\|_{2}.
\end{equation}
\end{lemma}
\textit{Proof:} Proof in Appendix~\ref{app:normsumbound}. $\hfill\blacksquare$
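For concreteness, the block-maximum norm of Definition 2 and the bound of Lemma~\ref{lem:normsumbound} can be evaluated numerically as in the following sketch (ours; the function names are hypothetical):
\begin{verbatim}
import numpy as np

def block_max_norm(x, p):
    """|x|_{2,p} from Definition 2: largest 2-norm of any block of x."""
    idx = np.cumsum([0] + list(p))
    return max(np.linalg.norm(x[idx[i]:idx[i+1]]) for i in range(len(p)))

def row_block_sum(B, p):
    """Upper bound on |B|_{2,p} from Lemma 1: max_i sum_j |B^[i]_j|_2."""
    idx = np.cumsum([0] + list(p))
    N = len(p)
    return max(sum(np.linalg.norm(B[idx[i]:idx[i+1], idx[j]:idx[j+1]], 2)
                   for j in range(N)) for i in range(N))
\end{verbatim}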
\subsection{Convergence Via Lyapunov Sub-Level Sets}
We now analyze the convergence of Algorithm 1. We construct
a sequence of sets, $\{X(s)\}_{s\in\mathbb{N}}$, based on work
in~\cite{bertsekas1989convergence}
and~\cite{bertsekas1989parallel}. These sets behave analogously to sub-level sets
of a Lyapunov function, and they will enable an invariance type argument
in our convergence proof. Below, we use $\hat{x}:=\arg\min_{x\in X}f(x)$
for the minimizer of $f$. We state the following assumption on these
sets, and below we will construct a sequence of sets that satisfies
this assumption.
\begin{assumption} \label{asm:setsexist}
There exists a collection of sets $\{X(s)\}_{s\in\mathbb{N}}$
that satisfies:
1) $\dots\subset X(s+1)\subset X(s)\subset\dots\subset X$
2) $\lim_{s\rightarrow\infty}X(s)=\left\{ \hat{x}\right\} $
3) There exists $X_{i}(s)\subset X_{i}$ for all $i\in[N]$ and $s\in\mathbb{N}$
such that $X(s)=X_{1}(s)\times...\times X_{N}(s)$
4) $\theta_{i}(y)\in X_{i}(s+1)$, where $\theta_{i}(y):=\Pi_{X_{i}}\left[y^{[i]}-\gamma_{i}\nabla^{[i]}f(y)\right]$
for all $y\in X(s)$ and $i\in[N]$. $\hfill\triangle$
\end{assumption}
Assumptions~\ref{asm:setsexist}.1 and~\ref{asm:setsexist}.2 jointly guarantee that the collection $\{X(s)\}_{s\in\mathbb{N}}$
is nested and that the sets converge to a singleton containing $\hat{x}$.
Assumption~\ref{asm:setsexist}.3 allows for the blocks
of~$x$ to be updated independently by
the agents, which allows for decoupled update laws.
Assumption~\ref{asm:setsexist}.4
ensures that state updates make only forward progress toward $\hat{x}$,
which ensures that each set is forward-invariant in time. It is shown
in~\cite{bertsekas1989convergence} and~\cite{bertsekas1989parallel} that the existence of such a sequence of sets
implies asymptotic convergence of the asynchronous update law in Algorithm
1. We therefore use this strategy to show asymptotic convergence
in this paper. We propose to use the construction
\begin{equation}
X(s)=\left\{ y\in X:\left\|y-\hat{x}\right\|_{2,p}\leq q^{s}D_{o}\right\} ,
\end{equation}
where we define~$D_{o}:=\max_{i\in[N]}\left\|x_{i}(0)-\hat{x}\right\|_{2,p}$,
the distance from $\hat{x}$ of the local copy furthest from it onboard any agent at
timestep zero, and where we define the constant
\begin{equation}
q=\max_{i\in[N]}\left(\left\|I-\gamma_{i}Q^{[i]}_{i}\right\|_{2} + \gamma_{i}\sum^{N}_{\substack{j=1 \\ j\neq i}}\left\|Q^{[i]}_{j}\right\|_{2}\right).
\end{equation}
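Numerically, $q$ can be evaluated from the blocks of $Q$ and the chosen stepsizes, as in the following sketch (ours):
\begin{verbatim}
import numpy as np

def contraction_factor(Q, p, gamma):
    """q = max_i ( |I - gamma_i Q^[i]_i|_2
                   + gamma_i * sum_{j != i} |Q^[i]_j|_2 )."""
    idx = np.cumsum([0] + list(p))
    N, qs = len(p), []
    for i in range(N):
        Qii = Q[idx[i]:idx[i+1], idx[i]:idx[i+1]]
        off = sum(np.linalg.norm(Q[idx[i]:idx[i+1], idx[j]:idx[j+1]], 2)
                  for j in range(N) if j != i)
        qs.append(np.linalg.norm(np.eye(p[i]) - gamma[i] * Qii, 2)
                  + gamma[i] * off)
    return max(qs)  # Lemma 2 gives conditions under which this is below 1
\end{verbatim}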
To show convergence, we will use the fact that each update contracts towards $\hat{x}$ by a factor
of $q$, and will state a lemma that establishes bounds on every $\gamma_{i}$
that imply $q\in(0,1)$. Additionally, we will see that a proof of convergence using this method requires a block diagonal dominance condition on $Q$. This result will be used to show
convergence of Algorithm~1 through satisfaction of
Assumption~\ref{asm:setsexist}.
If we wish for $q\in(0,1)$, this condition can be restated as
\begin{equation} \label{eqn:igqsum}
\left\|I-\gamma_{i}Q^{[i]}_{i}\right\|_{2} + \gamma_{i}\sum^{N}_{\substack{j=1 \\ j\neq i}}\left\|Q^{[i]}_{j}\right\|_{2} < 1 \textnormal{ for all } i \in [N].
\end{equation}
Note that because $Q=Q^T \succ 0$ and $Q^{[i]}_{i}$ is a diagonal submatrix of $Q$, we have $Q^{[i]}_{i}=Q^{[i]^{T}}_{i} \succ 0$. From this fact, we see $\left(\left\|Q^{[i]^{-1}}_{i}\right\|_{2}\right)^{-1}=\lambda_{min}\left(Q^{[i]}_{i}\right)$, so Assumption~\ref{asm:Qbdd} can be stated in terms of this eigenvalue. In particular, it gives
\begin{equation}
\lambda_{min}\left(Q^{[i]}_i\right)> \sum^{N}_{\substack{j=1 \\ j\neq i}}\left\|Q^{[i]}_{j}\right\|_{2} \textnormal{ for all } i \in [N].
\end{equation}
The following lemma states an equivalent condition for Equation~\eqref{eqn:igqsum}, which demonstrates the necessity and sufficiency of strict block diagonal dominance.
\begin{lemma} \label{lem:ddnec}
Let $Q=Q^{T}=\left[Q^{[i]}_{j}\right]_{p}$, where $p = [n_{1},n_{2},\dots,n_{N}]^{T}$. Additionally, let the $n \times n$ matrix $\Gamma = \textnormal{ diag}(\gamma_{1}I_{n_{1}},\gamma_{2}I_{n_{2}},...,\gamma_{N}I_{n_{N}})$, where $I_{n_{i}}$ is the identity matrix of size $n_{i}$ and $\gamma_{i}>0$. Then
\begin{equation}
\left\|I-\gamma_{i}Q^{[i]}_{i}\right\|_{2} + \gamma_{i}\sum^{N}_{\substack{j=1 \\ j\neq i}}\left\|Q^{[i]}_{j}\right\|_{2} < 1 \textnormal{ for all } i \in [N]
\end{equation}
if and only if
\begin{equation}
\lambda_{min}\left(Q^{[i]}_i\right)> \sum^{N}_{\substack{j=1 \\ j\neq i}}\left\|Q^{[i]}_{j}\right\|_{2}
\textnormal{ and }
\gamma_{i} < \frac{2}{\sum^{N}_{j=1}\left\|Q^{[i]}_{j}\right\|_{2}}
\end{equation}
for all $i \in [N]$.
\end{lemma}
\textit{Proof:} Proof in Appendix~\ref{app:ddnec}. $\hfill\blacksquare$
Note that $\gamma_{i}$ only depends on $Q^{[i]}$. This lemma implies that $\gamma_{i}$ can be chosen according to the conditions of Problem 1 such that $q\in(0,1)$, given that Assumption~\ref{asm:Qbdd} holds for $Q$. Choosing appropriate stepsizes for all $i\in[N]$ and recalling our construction of sets $\left\{ X(s)\right\} _{s\in\mathbb{N}}$
as
\begin{equation} \label{eqn:setcon}
X(s)=\left\{ y\in X:\|y-\hat{x}\|_{2,p}\leq q^{s}D_{o}\right\} ,
\end{equation}
we next show that Assumption~\ref{asm:setsexist} is satisfied, thereby ensuring
convergence of Algorithm 1.
\begin{theorem} \label{thm:setswork}
If Assumptions 1-4 hold and $\Gamma = \textnormal{ diag}(\gamma_{1}I_{n_{1}},\gamma_{2}I_{n_{2}},...,\gamma_{N}I_{n_{N}})$ satisfies the conditions in Lemma~\ref{lem:ddnec}, then the collection of sets $\left\{ X(s)\right\} _{s\in\mathbb{N}}$
as defined in Equation~\eqref{eqn:setcon} satisfies Assumption~\ref{asm:setsexist}.
\end{theorem}
\textit{Proof:} Proof in Appendix~\ref{app:setswork}. $\hfill\blacksquare$
Regarding Problem 1, we therefore state the following:
\begin{theorem} \label{thm:alg1works}
Algorithm 1 solves Problem 1 and asymptotically
converges to $\hat{x}$.
\end{theorem}
\textit{Proof:} Proof in Appendix~\ref{app:alg1works}. $\hfill\blacksquare$
From these requirements, we see that agent $i$ only needs to be
initialized with $Q^{[i]}$ and $r^{[i]}$. Agents are then
free to choose stepsizes independently, provided
stepsizes obey the bounds established in Lemma~\ref{lem:ddnec}.
\section{Convergence Rate} \label{sec:convergencerate}
Beyond asymptotic convergence, the structure of the sets $\left\{ X(s)\right\} _{s\in\mathbb{N}}$ allows us to determine a convergence rate. To do so, we first define the notion of a \textit{communication cycle}.
\textit{Definition 3:} One \textit{communication cycle} occurs when every agent has calculated a state update and this updated state has been sent to and received by every other agent.\hfill$\blacktriangle$
Once the last updated state has been received by the last agent, a communication cycle ends and another begins. It is only at the conclusion of the first communication cycle that each agent's copy of the ensemble state is moved from $X(0)$ to $X(1)$. Once another cycle is completed, every agent's copy of the ensemble state is moved from $X(1)$ to $X(2)$. This process repeats indefinitely and, coupled with Assumption~\ref{asm:setsexist}, implies that the convergence rate is geometric in the number of cycles completed, which we now show.
\begin{theorem} \label{convergencerate}
Let Assumptions 1-5 hold and let $\gamma_{i} \in \left(0,\frac{2}{\sum^{N}_{j=1}\left\|Q^{[i]}_{j}\right\|_{2}}\right)$ for all $i \in [N]$. At time $k$, if $c(k)$ cycles have been completed, then $\|x_{i}(k)-\hat{x}\|_{2,p} \leq q^{c(k)}D_{o}$
for all $i \in [N]$.
\end{theorem}
\textit{Proof:} Proof in Appendix~\ref{app:convergencerate}. $\hfill\blacksquare$
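As a simple consequence, one can read off how many cycles guarantee a prescribed accuracy: for a tolerance $\epsilon \in (0, D_{o})$, Theorem~\ref{convergencerate} guarantees $\|x_{i}(k)-\hat{x}\|_{2,p} \leq \epsilon$ for all $i \in [N]$ whenever
\begin{equation}
c(k) \geq \frac{\log\left(\epsilon/D_{o}\right)}{\log q},
\end{equation}
since $q \in (0,1)$ implies $\log q < 0$, and thus $q^{c(k)}D_{o} \leq \epsilon$ for all such $c(k)$.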
From the definition of $q$, we may write
$q = \max_{i\in[N]}q_{i}$,
where
\begin{equation} \label{eqn:defqi}
q_{i} = \left\|I-\gamma_{i}Q^{[i]}_{i}\right\|_{2} + \gamma_{i}\sum^{N}_{\substack{j=1 \\ j\neq i}}\left\|Q^{[i]}_{j}\right\|_{2},
\end{equation}
which illustrates the dependence of each $q_{i}$ upon $\gamma_{i}$. As in all forms of gradient descent optimization, the choice of stepsizes has a significant impact on the convergence rate, which can be expressed through its effect on $q$. Therefore, we would like to determine the optimal stepsizes for each block in order to minimize $q$, which will accelerate convergence to a solution. Due to the structure of $q$, minimizing $q_{i}$ for each $i\in[N]$ will minimize $q$. This fact leads to the following theorem:
\begin{theorem} \label{thm:optimalstep}
$q$ is minimized when, for every $i\in[N]$,
\begin{equation} \label{eqn:optstep}
\gamma_{i} = \frac{2}{\lambda_{max}\left(Q^{[i]}_i\right)+\lambda_{min}\left(Q^{[i]}_i\right)}.
\end{equation}
\end{theorem}
\textit{Proof:} Proof in Appendix~\ref{app:optimalstep}. $\hfill\blacksquare$
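Because this optimal stepsize depends only on the diagonal block $Q^{[i]}_{i}$, agent $i$ can compute it locally; a minimal sketch (ours) follows.
\begin{verbatim}
import numpy as np

def optimal_stepsize(Q_ii):
    """Optimal stepsize from the theorem above; needs only Q^[i]_i."""
    lam = np.linalg.eigvalsh(Q_ii)      # eigenvalues in ascending order
    return 2.0 / (lam[-1] + lam[0])     # 2 / (lambda_max + lambda_min)
\end{verbatim}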
\section{Regularization and Convergence Rate} \label{sec:regandconvergencerate}
In centralized optimization, regularization can be used to accelerate convergence by reducing the condition number of $Q$. It is well known that the condition number of $Q$, denoted $k_{Q}$, plays a significant role in the convergence rate, with large condition numbers correlating to slow convergence rates. However, in a decentralized setting it is difficult for agents to independently select regularizations such that $k_Q$ is reduced, and harder still to know the magnitude of the reduction. In~\cite{ubl2019totally} it is shown that if the ratio of the largest to smallest regularization used in the network is less than $k_Q$, then the condition number of the regularized problem is guaranteed to be smaller. However, this requires global knowledge of $k_Q$, requires an upper bound on regularizations to somehow be agreed on, and institutes a lower bound on agents' choice of regularizations, all of which lead to the type of parametric coupling that we wish to avoid.
As stated in
Problem 2, we want to allow agents to choose regularization parameters
independently. Here, we therefore only require that agent $i$ use a positive regularization parameter $\alpha_{i} > 0$. In Algorithm 1, this changes only agent $i$'s updates to $x^{[i]}_{i}$, which now take the form
\begin{equation}
x^{[i]}_{i}(k+1) = \Pi_{X_{i}}\left[x_{i}^{[i]}(k)-\gamma_{i}\left(Q^{[i]}x_{i}(k)+r^{[i]}+\alpha_{i}x^{[i]}_{i}(k)\right)\right].
\end{equation}
Before we analyze the effects of independently chosen
regularizations on convergence, we must first show that an algorithm
that utilizes them will preserve the convergence properties of Algorithm~1. As shown in Equation~\eqref{eqn:regform}, a regularized cost function takes the form
\begin{equation}
f_{A}(x):=\frac{1}{2}x^{T}(Q+A)x+r^{T}x,
\end{equation}
where $Q+A$ is symmetric and positive definite because ${Q=Q^{T}\succ0}$.
We now state the following theorem that confirms that minimizing $f_{A}$
succeeds.
\begin{theorem} \label{thm:prob2solved}
Suppose that $A=\text{diag}\left(\alpha_{1}I_{n_{1}},...,\alpha_{N}I_{n_{N}}\right)\succ0$, where agent $i$ chooses $\alpha_{i}$ independently of all other agents.
Then Algorithm
1 satisfies the conditions stated in Problem 2 when $f_{A}$ is minimized.
\end{theorem}
\textit{Proof:} Replacing $Q$ with $Q+A$, all assumptions and conditions used to prove Theorem~\ref{thm:alg1works} hold, with the only modification being that the network converges to $\hat{x}_{A} := \arg\min_{x\in X} f_{A}(x)$. The remaining steps are identical to those used to prove Theorem~\ref{thm:alg1works} and are therefore omitted. \hfill$\blacksquare$
Theorem~\ref{thm:prob2solved} establishes that regularizing preserves asymptotic convergence, and we next turn to analyzing convergence rates. Because the condition number $k_{Q}$ is a parameter that depends on the entirety of $Q$, and each agent only has access to a portion of $Q$, it is impossible for agents to know how their independent choices of regularizations affect $k_{Q}$. However, we can instead use $q$, which provides our convergence rate and can be directly manipulated by agents' choice of regularizations. Assume the optimal stepsize for block $i$ is chosen as given in Equation~\eqref{eqn:optstep}. We then have
\begin{equation}
q_{i} = \frac{2\sum^{N}_{j\neq i}\left\|Q^{[i]}_{j}\right\|_{2}+ \lambda_{max}\left(Q^{[i]}_i\right)-\lambda_{min}\left(Q^{[i]}_i\right)}{\lambda_{max}\left(Q^{[i]}_i\right)+\lambda_{min}\left(Q^{[i]}_i\right)}.
\end{equation}
When we regularize the problem with $A$, the convergence parameter becomes $q_{A} = \max_{i}q_{\alpha_{i}}$, where
\begin{align*}
q_{\alpha_i}
& = \frac{2\sum^{N}_{j\neq i}\left\|Q^{[i]}_{j}\right\|_{2}+ \lambda_{max}\left(Q^{[i]}_i\right)-\lambda_{min}\left(Q^{[i]}_i\right)}{\lambda_{max}\left(Q^{[i]}_i\right)+\lambda_{min}\left(Q^{[i]}_i\right)+2\alpha_{i}}.
\end{align*}
The only effect regularization has on $q_{i}$ is adding $2\alpha_i$ to the denominator, meaning that \textit{any} choice of positive regularization will result in $q_{\alpha_{i}} < q_i$, and thus all regularizations accelerate convergence. Using this fact, we can tailor parameter selections to attain a desired convergence rate. Assume we have a desired convergence rate for our system, corresponding to $q^*$. If we want to set $q_{A} \leq q^{*}$, we need $q_{\alpha_{i}} \leq q^{*}$ for all $i \in [N]$. Some algebraic manipulation of the above equation shows we therefore need to choose $\alpha_{i}$ such that
\begin{equation}
\alpha_{i} \geq \left(\frac{q_{i}}{q^{*}}-1\right)\left(\frac{\lambda_{max}\left(Q^{[i]}_{i}\right)+\lambda_{min}\left(Q^{[i]}_{i}\right)}{2}\right).
\end{equation}
Note that this term will be negative if $q_{i}<q^{*}$. That is, if the dynamics of block $i$ are such that it will already converge faster than required by $q^{*}$, then there is no need to regularize that block.
We now state the following theorem:
\begin{theorem} \label{thm:qreg}
Given $q^{*} \in (0,1)$, if for all $i\in[N]$ agent $i$ chooses
\begin{equation} \label{eqn:qgammabound}
\gamma_{i} = \frac{2}{\lambda_{max}\left(Q^{[i]}_i\right)+\lambda_{min}\left(Q^{[i]}_i\right)+2\alpha_{i}},
\end{equation}
where
\begin{equation}
\alpha_{i} = \max\left\{\hspace{-0.3em}\left(\frac{q_{i}}{q^{*}}-1\right)\hspace{-0.3em}\left(\frac{\lambda_{max}\left(Q^{[i]}_{i}\right)+\lambda_{min}\left(Q^{[i]}_{i}\right)}{2}\right),0\hspace{-0.1em}\right\},
\end{equation}
then $q_{A}\leq q^{*}$.
\end{theorem}
\textit{Proof:} Substitute Equation~\eqref{eqn:qgammabound} into Equation~\eqref{eqn:defqi}. \hfill$\blacksquare$
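In practice, Theorem~\ref{thm:qreg} translates into a purely local selection rule; the following sketch (ours) computes agent $i$'s pair $(\alpha_{i},\gamma_{i})$ from $Q^{[i]}$ alone, where \texttt{off\_sum} denotes the locally computable quantity $\sum_{j\neq i}\|Q^{[i]}_{j}\|_{2}$.
\begin{verbatim}
import numpy as np

def reg_and_step(Q_ii, off_sum, q_star):
    """(alpha_i, gamma_i) per the theorem above, from local data only."""
    lam = np.linalg.eigvalsh(Q_ii)
    lmin, lmax = lam[0], lam[-1]
    q_i = (2 * off_sum + lmax - lmin) / (lmax + lmin)  # unregularized q_i
    alpha_i = max((q_i / q_star - 1) * (lmax + lmin) / 2, 0.0)
    gamma_i = 2.0 / (lmax + lmin + 2 * alpha_i)
    return alpha_i, gamma_i
\end{verbatim}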
\section{Regularization Absolute Error Bound: Set Constrained Case}
\label{sec:abserrbound}
Regularization inherently results in a suboptimal solution because the system converges to $\Pi_{X}\left[\hat{x}_{A}\right]$ rather than $\Pi_{X}\left[\hat{x}\right]$. We therefore wish to bound this error by a function of the regularization matrix $A$. We define this error in two ways: $\left\|\Pi_{X}\left[\hat{x}\right]-\Pi_{X}\left[\hat{x}_{A}\right]\right\|_{2,p} = \max_{i}\left\|\Pi_{X_{i}}\left[\hat{x}^{[i]}\right]-\Pi_{X_{i}}\left[\hat{x}_{A}^{[i]}\right]\right\|_{2}$, which is the largest error of any one block in the network, and $\left|f\left(\Pi_{X}\left[\hat{x}\right]\right)-f\left(\Pi_{X}\left[\hat{x}_{A}\right]\right)\right|$, which is the difference in cost between the regularized and unregularized cases. Note that in this section we are deriving descriptive error bounds in the sense that a network operator with access to each agent's local information can bound the error for the entire system, but no individual agent is expected to have access to this information.
Looking at the first definition of error, we find
\begin{equation}
\left\|\Pi_{X}\left[\hat{x}\right]-\Pi_{X}\left[\hat{x}_{A}\right]\right\|_{2,p} \leq \left\|\hat{x}-\hat{x}_{A}\right\|_{2,p},
\end{equation}
which follows from the non-expansive property of the projection operator. Because $\hat{x} = -Q^{-1}r$ and ${\hat{x}_{A} = -(Q+A)^{-1}r}$, we see
\begin{equation}
\left\|\hat{x}-\hat{x}_{A}\right\|_{2,p} = \left\|(Q^{-1}-(Q+A)^{-1})r\right\|_{2,p}.
\end{equation}
Through use of the Woodbury matrix identity, one can see $Q^{-1}-(Q+A)^{-1} = (I+A^{-1}Q)^{-1}Q^{-1}$, because $A$ is invertible. This gives
\begin{align} \label{eqn:x-xaform}
\left\|\hat{x}-\hat{x}_{A}\right\|_{2,p}
& \leq \left\|(I+A^{-1}Q)^{-1}\right\|_{2,p}\left\|Q^{-1}\right\|_{2,p}\left\|r\right\|_{2,p}.
\end{align}
Here $\left\|r\right\|_{2,p} = \max_{i}\left\|r^{[i]}\right\|_{2}$ is the largest norm of any individual block of $r$, which a network operator can gather from agents. The two other terms are $2,p$-norms of inverse matrices, which we do not assume the network operator has the ability to calculate. However, these terms can be bounded above using local information from agents according to the following lemma.
\begin{lemma} \label{lem:invinftynorm}
Let $B = \left[B^{[i]}_{j}\right]_{p}$ be a strictly block diagonally dominant matrix, where $p = [n_{1},n_{2},\dots,n_{N}]^{T}$, and define $\beta_{p}(B) = \min_{i}\left(\left\|B^{[i]^{-1}}_{i}\right\|_{2}^{-1}-\sum^{N}_{\substack{j=1 \\ j\neq i}}\left\|B^{[i]}_{j}\right\|_{2}\right)$. Then
\begin{equation}
\left\|B^{-1}\right\|_{2,p} \leq \beta^{-1}_{p}(B).
\end{equation}
\end{lemma}
\textit{Proof:} Theorem 2 in~\cite{varah1975lower} establishes the above result for $\|\cdot\|_{\infty}$, and the proof for $\|\cdot\|_{2,p}$ follows identical steps. $\hfill\blacksquare$
We note also that $I+A^{-1}Q$ is strictly block diagonally dominant, as $(A^{-1}Q)^{[i]} = \alpha_{i}^{-1}Q^{[i]}$. That is, each block row of $Q$ is multiplied by a positive scalar, which preserves the strict block diagonal dominance of that block row, as does the addition of $I$. Therefore, using Lemma~\ref{lem:invinftynorm} and $Q^{[i]}_{i}=Q^{[i]^{T}}_{i} \succ 0$ for all $i \in [N]$, we see $\left\|(I+A^{-1}Q)^{-1}\right\|_{2,p} \leq \beta^{-1}_{p}(I+A^{-1}Q)$
and $\left\|Q^{-1}\right\|_{2,p} \leq \beta^{-1}_{p}(Q)$,
where $\beta_{p}(I+A^{-1}Q) = \min_{i}\left(1+\alpha^{-1}_{i}
\delta_{i}(Q)
\right)$
and
$\beta_{p}(Q) = \min_{i}\delta_{i}(Q).$ Finally,
\begin{equation} \label{eqn:absxbound}
\left\|\Pi_{X}\left[\hat{x}\right]-\Pi_{X}\left[\hat{x}_{A}\right]\right\|_{2,p} \leq \frac{\max_{i}\|r^{[i]}\|_{2}}{\beta_{p}(I+A^{-1}Q)\beta_{p}(Q)}.
\end{equation}
The significance of this error bound is that if a network operator has access to $\|r^{[i]}\|_{2}$, $\alpha_{i}$, and $\delta_{i}(Q)$ for all $i \in [N]$, which are locally known to every agent, the network operator can compute these bounds.
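A sketch (ours) of this operator-side computation follows, where \texttt{r\_norms}, \texttt{alphas}, and \texttt{deltas} collect the locally reported $\|r^{[i]}\|_{2}$, $\alpha_{i}$, and $\delta_{i}(Q)$:
\begin{verbatim}
def state_error_bound(r_norms, alphas, deltas):
    """Right-hand side of Equation (absxbound) from agents' reports."""
    beta_Q = min(deltas)                                       # beta_p(Q)
    beta_IAQ = min(1 + d / a for d, a in zip(deltas, alphas))  # beta_p(I+A^-1 Q)
    return max(r_norms) / (beta_IAQ * beta_Q)
\end{verbatim}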
Defining $\Delta_{X_{A}} = \Pi_{X}\left[\hat{x}\right]-\Pi_{X}\left[\hat{x}_{A}\right]$, we find that $f(\Pi_{X}\left[\hat{x}\right])-f(\Pi_{X}\left[\hat{x}_{A}\right]) = \frac{1}{2}(\Pi_{X}\left[\hat{x}\right]+\Pi_{X}\left[\hat{x}_{A}\right])^{T}Q(\Delta_{X_{A}})+r^{T}(\Delta_{X_{A}})$, which gives
\begin{align}
&|f(\Pi_{X}\left[\hat{x}\right])-f(\Pi_{X}\left[\hat{x}_{A}\right])| \\
& = \Big|\frac{1}{2}(\Pi_{X}\left[\hat{x}\right]+\Pi_{X}\left[\hat{x}_{A}\right])^{T}Q(\Delta_{X_{A}}) +r^{T}(\Delta_{X_{A}})\Big| \\
& \leq \|\frac{1}{2}(\Pi_{X}\left[\hat{x}\right]+\Pi_{X}\left[\hat{x}_{A}\right])^{T}Q+r^{T}\|_{2,p}\|\Delta_{X_{A}}\|_{2,p} \\
& \leq (\|\frac{1}{2}(\Pi_{X}\left[\hat{x}\right]+\Pi_{X}\left[\hat{x}_{A}\right])^{T}Q\|_{2,p}+\|r^{T}\|_{2,p})\|\Delta_{X_{A}}\|_{2,p} \\
& \leq (\|\frac{1}{2}(\Pi_{X}\left[\hat{x}\right]\hspace{-0.1em}+\hspace{-0.1em}\Pi_{X}\left[\hat{x}_{A}\right])^{T}\|_{2,p}\|Q\|_{2,p}\hspace{-0.1em}+\hspace{-0.1em}\|r^{T}\|_{2,p})\|\Delta_{X_{A}}\|_{2,p}.
\end{align}
Note that, by definition of the norm induced by $\|\cdot\|_{2,p}$ on the $1 \times n$ matrix $x^{T}$, we have $\|x^{T}\|_{2,p} = \sum^{N}_{i=1}\|x^{[i]}\|_{2}$, and by Lemma~\ref{lem:normsumbound}, $\|B\|_{2,p} \leq \max_{i}\sum^{N}_{j=1}\left\|B^{[i]}_{j}\right\|_{2}$. Combining this with the non-expansive property of the projection operator gives
\begin{align}
&|f(\Pi_{X}\left[\hat{x}\right])-f(\Pi_{X}\left[\hat{x}_{A}\right])| \nonumber\\
&\leq \bigg(\max_{i}\Big\|\frac{1}{2}\Big(\Pi_{X_{i}}\left[\hat{x}^{[i]}\right]+\Pi_{X_{i}}\left[\hat{x}^{[i]}_{A}\right]\Big)^{T}\Big\|_{2}\max_{i}\sum^{N}_{j=1}\left\|Q^{[i]}_{j}\right\|_{2} \nonumber\\
&\qquad +\sum^{N}_{i=1}\left\|r^{[i]}\right\|_{2}\bigg)\|\Delta_{X_{A}}\|_{2,p}.
\end{align}
From Assumption 1, the set constraint for each block is compact, meaning agents can find the vector $\bar{x}^{[i]} = \textnormal{arg}\max_{x^{[i]} \in X_{i}} \|x^{[i]}\|_{2}$. Setting $M_{X_{i}} = \|\bar{x}^{[i]}\|_{2}$ and combining this with Equation~\eqref{eqn:absxbound} gives
\begin{align}
&|f(\Pi_{X}\left[\hat{x}\right])-f(\Pi_{X}\left[\hat{x}_{A}\right])| \nonumber\\
&\leq \left(\max_{i \in [N]}M_{X_{i}}\max_{i \in [N]}\sum^{N}_{j=1}\left\|Q^{[i]}_{j}\right\|_{2}+\sum^{N}_{i=1}\|r^{[i]}\|_{2}\right)\nonumber\\
&\qquad\times\frac{\max_{i \in [N]}\|r^{[i]}\|_{2}}{\beta_{p}(I+A^{-1}Q)\beta_{p}(Q)}.
\end{align}
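The cost error bound above can be assembled in the same fashion from agent-reportable scalars together with $M_{X_{i}}$ and the block-row sums of $Q$; again, the helper below is an illustrative sketch with names of our choosing.
\begin{verbatim}
def cost_error_bound(r_block_norms, alphas, deltas, M_X, Q_row_sums):
    """M_X[i] = max_{x in X_i} ||x||_2;
    Q_row_sums[i] = sum_j ||Q^{[i]}_j||_2."""
    beta_IAQ = min(1.0 + d / a for a, d in zip(alphas, deltas))
    beta_Q = min(deltas)
    state_err = max(r_block_norms) / (beta_IAQ * beta_Q)
    return (max(M_X) * max(Q_row_sums) + sum(r_block_norms)) * state_err
\end{verbatim}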
\section{Regularization Relative Error Bound: Unconstrained Case} \label{sec:regerrbound}
In the previous section we derived a descriptive bound for the absolute error in both the states of the system and the cost due to regularizing. This bound is descriptive in the sense that, given the agents' regularization choices, one can derive a bound describing the error for the system. However, given a desired error bound, agents cannot use the above rules to independently select regularizations due to the need for global information. Eliminating this dependence upon global information appears to be difficult because of the wide range of possibilities for the set constraints $X_{i}$. However, in the case where our problem does not have set constraints, i.e., Assumption 1 no longer holds and $X = \mathbb{R}^{n}$, we find that we can develop an entirely independent regularization selection rule to bound relative error. In particular, given some $\epsilon > 0$, we wish to bound the relative cost error via
\begin{equation}
\frac{\left| f(\hat{x}) - f(\hat{x}_{A})\right|}{\left| f(\hat{x})\right|} \leq \epsilon.
\end{equation}
If agents independently select regularizations, then $\alpha_{i}$ is selected using only knowledge of $Q^{[i]}$. Because we do not want to require agents to coordinate their regularizations to ensure the error bound is satisfied, we must develop independent regularization selection guidelines that depend only on $Q^{[i]}$.
\textit{Problem 3:} Given the restriction that $\alpha_{i}$ can be chosen using only knowledge of $Q^{[i]}$ and $\epsilon$, where $\epsilon \in (0,1)$, develop independent regularization selection guidelines that guarantee
\begin{equation}
\frac{\left| f(\hat{x}) - f(\hat{x}_{A})\right|}{\left| f(\hat{x})\right|} \leq \epsilon. \tag*{$\triangle$}
\end{equation}
For the unregularized problem, the solution is $\hat{x} = -Q^{-1}r$ and the optimal cost is $f(\hat{x}) = -\frac{1}{2}r^{T}Q^{-1}r$. For the regularized problem, the regularized solution is $\hat{x}_{A} = -P^{-1}r$, where $P = Q+A$, and the suboptimal cost is $f(\hat{x}_{A}) = \frac{1}{2}r^{T}P^{-1} QP^{-1}r - r^{T}P^{-1}r$. Note that $f(\hat{x}) \leq f(\hat{x}_{A}) \leq 0$. That is, the cost can be upper-bounded by zero trivially for both the regularized and unregularized cases using $x=0$. Therefore the optimal cost in both cases will be non-positive, with $f(\hat{x}) \leq f(\hat{x}_{A})$. In particular, we know $f(\hat{x}) - f(\hat{x}_{A}) \leq 0$ and $f(\hat{x}) \leq 0$. Assuming $f(\hat{x}) \neq 0$, we can say
\begin{equation}
\frac{ f(\hat{x}) - f(\hat{x}_{A})}{f(\hat{x})} \geq 0.
\end{equation}
That is,
\begin{equation}
\frac{\left| f(\hat{x}) - f(\hat{x}_{A})\right|}{\left| f(\hat{x})\right|} \leq \epsilon \textnormal{ if and only if } \frac{ f(\hat{x}) - f(\hat{x}_{A})}{f(\hat{x})} \leq \epsilon.
\end{equation}
The solution to Problem 3 will be developed in two parts. First, it will be shown that the block diagonal dominance condition of $Q$ allows $A$ to be chosen under the restrictions of Problem 3 such that a certain eigenvalue condition of the matrix $A^{-1}Q$ is satisfied. Afterward, it will be shown that this condition on $A^{-1}Q$ is sufficient to guarantee the error bound given by $\epsilon$ is satisfied.
\subsection{Block Gershgorin Circle Theorem}
The Gershgorin Circle Theorem tells us that every eigenvalue $\lambda(B)$ of a symmetric $n \times n$ matrix $B$ satisfies ${\lambda(B) \in \bigcup_{k=1}^{n}[b_{k,k} - \sum^{n}_{j \neq k}|b_{k,j}|,\,b_{k,k} + \sum^{n}_{j \neq k}|b_{k,j}|]}$. That is, every eigenvalue of $B$ is contained within a union of intervals dependent on the rows of $B$. This implies that we can lower bound the minimum eigenvalue of $B$ by ${\lambda_{min}(B) \geq \min_{k}(b_{k,k} - \sum^{n}_{j \neq k}|b_{k,j}|)}$. In the event that $B$ is a strictly diagonally dominant matrix in the usual sense, i.e., $n_{i} = 1$ for all $i \in [N]$, this implies that every eigenvalue of $B$ is positive, because ${b_{k,k} - \sum^{n}_{j \neq k}|b_{k,j}| > 0}$ for all ${k = 1,...,n}$. Note further that if we let $C$ be an $n \times n$ positive definite diagonal matrix, then ${\lambda_{min}(CB) \geq \min_{k}c_{k,k}(b_{k,k} - \sum^{n}_{j \neq k}|b_{k,j}|) > 0}$. That is, if $B$ is a strictly diagonally dominant matrix and $C$ is a positive definite diagonal matrix, then $CB$ is strictly diagonally dominant with positive eigenvalues.
Let $B$ and $C$ meet the criteria above, and now let us treat $C$ as a design choice. Suppose we wish for the smallest eigenvalue of $CB$ to be greater than or equal to a particular constant $l$, i.e., we want $\lambda_{min}(CB) \geq l$. From the Gershgorin Circle Theorem, we see this is true if $c_{k,k}(b_{k,k} - \sum^{n}_{j \neq k}|b_{k,j}|) \geq l$ for all ${k = 1,...,n}$. This condition can be restated as
\begin{equation}
\textnormal{if } c_{k,k} \geq \frac{l}{b_{k,k} - \sum^{n}_{j \neq k}|b_{k,j}|} \textnormal{ for all } k = 1,...,n,
\end{equation}
then $\lambda_{min}(CB) \geq l$.
That is, given a strictly diagonally dominant matrix $B$ and a positive constant $l$, the $k^{th}$ diagonal element of $C$ can be chosen using only knowledge of the $k^{th}$ row of $B$ and $l$ such that $\lambda_{min}(CB) \geq l$. This intuition can be extended to a strictly block diagonally dominant matrix $B$ using a block analogue of the Gershgorin Circle Theorem, as described below.
\begin{lemma} \label{lem:blockgersh}
For the matrix $B = \left[B^{[i]}_{j}\right]_{p}$, where $p = [n_{1},n_{2},\dots,n_{N}]^{T}$, each eigenvalue $\lambda(B)$ satisfies
\begin{equation}
\left(\left\|\left(B^{[i]}_{i}-\lambda(B) I\right)^{-1}\right\|_{2}\right)^{-1} \leq \sum^{N}_{\substack{j=1 \\ j\neq i}}\left\|B^{[i]}_{j}\right\|_{2}
\end{equation}
for at least one $i \in [N]$.
\end{lemma}
\textit{Proof:} See Theorem 2 in~\cite{feingold1962block}. $\hfill\blacksquare$
Note that
\begin{equation}
\left(\left\|\left(B^{[i]}_{i}-\lambda_{min}(B) I\right)^{-1}\right\|_{2}\right)^{-1} = \min_{k}\left|\lambda_{min}(B)-\lambda_{k}\left(B^{[i]}_{i}\right)\right|,
\end{equation}
where the minimum is taken over the eigenvalues $\lambda_{k}\left(B^{[i]}_{i}\right)$ of the diagonal block $B^{[i]}_{i}$.
Additionally, let
\begin{equation}
\mu\left(B^{[i]}_{i}\right) = \arg\min_{\lambda_{k}}\left|\lambda_{min}(B)-\lambda_{k}\left(B^{[i]}_{i}\right)\right|,
\end{equation} which is the eigenvalue of $B^{[i]}_{i}$ closest to the minimum eigenvalue of $B$. Then,
\begin{equation}
\left(\left\|\left(B^{[i]}_{i}-\lambda_{min}(B) I\right)^{-1}\right\|_{2}\right)^{-1} = \left|\lambda_{min}(B)-\mu\left(B^{[i]}_{i}\right)\right|.
\end{equation}
From the block Gershgorin Circle Theorem, we then have
\begin{equation}
\lambda_{min}(B) \geq \mu\left(B^{[i]}_{i}\right) - \sum^{N}_{\substack{j=1 \\ j\neq i}}\left\|B^{[i]}_{j}\right\|_{2} \textnormal{ for at least one } i \in [N].
\end{equation}
Because $\mu\left(B^{[i]}_{i}\right) \geq \lambda_{min}\left(B^{[i]}_{i}\right)$, we can say
$\lambda_{min}(B) \geq \delta_{i}(B) \textnormal{ for at least one } i \in [N].$
Just as before, if $B$ is strictly block diagonally dominant, then every eigenvalue of $B$ is positive. Now let $C=\left[C^{[i]}_{j}\right]_{p}$, with $C^{[i]}_{i} = c_{i}I$ for every $i \in [N]$ and $C^{[i]}_{j} = 0$ when $j \neq i$. In the same manner as above, we find
\begin{equation} \label{eqn:CB>l}
\textnormal{if } c_{i} \geq \frac{l}{\delta_{i}(B)} \textnormal{ for all } i \in [N],
\end{equation}
then $\lambda_{min}(CB) \geq l$.
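A quick numerical sanity check of rule~\eqref{eqn:CB>l} is possible (assuming NumPy; the matrix construction below is ours and purely illustrative).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
p = [3, 3, 4]
off = np.cumsum([0] + p)
n = sum(p)

# Strictly block diagonally dominant, symmetric positive definite B.
B = 0.2 * rng.standard_normal((n, n))
B = 0.5 * (B + B.T)
for i in range(len(p)):
    B[off[i]:off[i+1], off[i]:off[i+1]] += 6.0 * np.eye(p[i])

def delta(B, i):   # dominance margin of block row i
    d = np.linalg.eigvalsh(B[off[i]:off[i+1], off[i]:off[i+1]]).min()
    return d - sum(np.linalg.norm(B[off[i]:off[i+1], off[j]:off[j+1]], 2)
                   for j in range(len(p)) if j != i)

l = 4.0
c = [l / delta(B, i) for i in range(len(p))]   # local choice per the rule
C = np.diag(np.concatenate([ci * np.ones(pi) for ci, pi in zip(c, p)]))
lam_min = np.linalg.eigvals(C @ B).real.min()
print(lam_min, ">=", l)                        # guaranteed by the rule
\end{verbatim}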
That is, rule~\eqref{eqn:CB>l} shows that $c_{i}$ can be chosen using only knowledge of $B^{[i]}$ and $l$. This brings us back to the restrictions imposed in Problem 3. For reasons that will be shown in the following subsection, choose $B=Q$, $C = A^{-1}$, and $l = \frac{1-\sqrt{\epsilon}}{\sqrt{\epsilon}}$. Assuming each block uses a scalar regularization, i.e., $c_{i} = \frac{1}{\alpha_{i}}$ where $\alpha_{i} > 0$, we have the following lemma.
\begin{lemma} \label{lem:regeigcond}
Let Assumptions~\ref{asm:Qsymmetric} and~\ref{asm:Qbdd} hold for the matrix $Q$ with respect to the partitioning vector $p = [n_{1},n_{2},\dots,n_{N}]^{T}$. Let $A=\left[A^{[i]}_{j}\right]_{p}$, with $A^{[i]}_{i} = \alpha_{i}I$ for every $i \in [N]$ and $A^{[i]}_{j} = 0$ when $j \neq i$. If we have
$\alpha_{i} \leq \frac{\sqrt{\epsilon}}{1-\sqrt{\epsilon}}\delta_{i}(Q) \textnormal{ for all } i \in [N]$,
then $\lambda_{min}\left(A^{-1}Q\right) \geq \frac{1-\sqrt{\epsilon}}{\sqrt{\epsilon}}$.
\end{lemma}
\textit{Proof:} Use Equation~\eqref{eqn:CB>l} and substitute $C=A^{-1}$, $B=Q$, and $l = \frac{1-\sqrt{\epsilon}}{\sqrt{\epsilon}}$. $\hfill\blacksquare$
We have shown this eigenvalue condition can be satisfied according to the conditions in Problem 3, i.e. $A^{[i]}$ is chosen using only knowledge of $Q^{[i]}$ and $\epsilon$. The following subsection will show this condition is sufficient to satisfy the error bound in Problem 3.
\subsection{Error Bound Satisfaction}
Proof of error bound satisfaction will be done using the following lemma.
\begin{lemma} \label{lem:AQtoError}
Let $f(x) = \frac{1}{2}x^{T}Qx + r^{T}x$, where $Q=Q^{T} \succ 0$, $Q \in \mathbb{R}^{n \times n}$, and $r, x \in \mathbb{R}^{n}$. Let $\hat{x} = \textnormal{arg}\min_{x \in \mathbb{R}^{n}} f(x)$ and $\hat{x}_{A} = \textnormal{arg}\min_{x \in \mathbb{R}^{n}} f(x)+\frac{1}{2}x^{T}Ax$, where $A \succ 0$ is diagonal. Additionally, let $\epsilon \in (0,1)$. If
\begin{equation}
\frac{1-\sqrt{\epsilon}}{\sqrt{\epsilon}} \leq \lambda_{min}(A^{-1}Q)\textnormal{, then }
\frac{\left| f(\hat{x}) - f(\hat{x}_{A})\right|}{\left| f(\hat{x})\right|} \leq \epsilon.
\end{equation}
\end{lemma}
\textit{Proof:} Proof in Appendix~\ref{app:AQtoError}. $\hfill\blacksquare$
With these lemmas, we now present the following theorem.
\begin{theorem} \label{the:errorbound}
Let Assumptions~\ref{asm:Qsymmetric} and~\ref{asm:Qbdd} hold for the matrix $Q$ with respect to the partitioning vector $p = [n_{1},n_{2},\dots,n_{N}]^{T}$. Let $A=\left[A^{[i]}_{j}\right]_{p}$, with $A^{[i]}_{i} = \alpha_{i}I$ for every $i \in [N]$ and $A^{[i]}_{j} = 0$ when $j \neq i$. Let $f(x) = \frac{1}{2}x^{T}Qx + r^{T}x$, where $r, x \in \mathbb{R}^{n}$. Let $\hat{x} = \textnormal{arg}\min_{x \in \mathbb{R}^{n}} f(x) = -Q^{-1}r$ and $\hat{x}_{A} = \textnormal{arg}\min_{x \in \mathbb{R}^{n}}f(x)+\frac{1}{2}x^{T}Ax = -P^{-1}r$, where $P=Q+A$. Additionally, let $\epsilon \in (0,1)$. If
\begin{equation} \label{eqn:ealphabound}
\alpha_{i} \leq \frac{\sqrt{\epsilon}}{1-\sqrt{\epsilon}}\delta_{i}(Q) \textnormal{ for all } i \in [N],
\end{equation}
then,
\begin{align}
\frac{\left| f(\hat{x}) - f(\hat{x}_{A})\right|}{\left| f(\hat{x})\right|} & \leq \epsilon.
\end{align}
\end{theorem}
\textit{Proof:} Lemma~\ref{lem:regeigcond} shows that the regularization selection rules presented above, along with Assumption~\ref{asm:Qbdd}, imply that $\frac{1-\sqrt{\epsilon}}{\sqrt{\epsilon}} \leq \lambda_{min}(A^{-1}Q)$. Lemma~\ref{lem:AQtoError} shows that $\frac{1-\sqrt{\epsilon}}{\sqrt{\epsilon}} \leq \lambda_{min}(A^{-1}Q)$ implies that $\frac{\left| f(\hat{x}) - f(\hat{x}_{A})\right|}{\left| f(\hat{x})\right|} \leq \epsilon$. $\hfill\blacksquare$
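As an end-to-end numerical illustration of Theorem~\ref{the:errorbound} (assuming NumPy; the random QP below is ours and purely illustrative), each block sets $\alpha_{i}$ from $\epsilon$ and its own $\delta_{i}(Q)$ only.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
p = [3, 3, 4]
off = np.cumsum([0] + p)
n = sum(p)

# Illustrative strictly block diagonally dominant Q and linear term r.
Q = 0.2 * rng.standard_normal((n, n))
Q = 0.5 * (Q + Q.T)
for i in range(len(p)):
    Q[off[i]:off[i+1], off[i]:off[i+1]] += 6.0 * np.eye(p[i])
r = rng.standard_normal(n)

def delta(i):      # delta_i(Q), computable from agent i's block row
    d = np.linalg.eigvalsh(Q[off[i]:off[i+1], off[i]:off[i+1]]).min()
    return d - sum(np.linalg.norm(Q[off[i]:off[i+1], off[j]:off[j+1]], 2)
                   for j in range(len(p)) if j != i)

eps = 0.05
se = np.sqrt(eps)
alpha = [se / (1.0 - se) * delta(i) for i in range(len(p))]  # Eq. (ealphabound)
A = np.diag(np.concatenate([a * np.ones(pi) for a, pi in zip(alpha, p)]))

f = lambda x: 0.5 * x @ Q @ x + r @ x
xhat = -np.linalg.solve(Q, r)
xhat_A = -np.linalg.solve(Q + A, r)
rel_err = abs(f(xhat) - f(xhat_A)) / abs(f(xhat))
print(rel_err, "<=", eps)                      # guaranteed by the theorem
\end{verbatim}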
Additionally, we can derive a similar bound for relative error in the solution itself. Defining this error as $\frac{\|\hat{x}-\hat{x}_{A}\|_{2,p}}{\|\hat{x}\|_{2,p}}$ and using Equation~\eqref{eqn:x-xaform} we see
\begin{align}
& \frac{\|\hat{x}-\hat{x}_{A}\|_{2,p}}{\|\hat{x}\|_{2,p}} = \frac{\|(I+A^{-1}Q)^{-1}Q^{-1}r\|_{2,p}}{\|Q^{-1}r\|_{2,p}} \\
& \leq \frac{\|(I+A^{-1}Q)^{-1}\|_{2,p}\|Q^{-1}r\|_{2,p}}{\|Q^{-1}r\|_{2,p}} = \|(I+A^{-1}Q)^{-1}\|_{2,p} \\
& \leq \frac{1}{\min_{i \in [N]}\left[1+\alpha^{-1}_{i}\delta_{i}(Q)\right]}.
\end{align}
If we wish for agents to select regularizations such that the above error is less than a given constant $\eta \in (0,1)$, we see this is accomplished if
\begin{align}
\frac{1}{\eta} & \leq \min_{i \in [N]}\left[1+\alpha^{-1}_{i}\delta_{i}(Q)\right],
\end{align}
which holds if and only if
\begin{align}
\alpha_{i} & \leq \frac{\eta}{1-\eta}\delta_{i}(Q) \textnormal{ for all } i \in [N].
\end{align}
This rule has the same structure as the one in Theorem~\ref{the:errorbound}, with the only difference being there is no square root taken of $\eta$.
Note that throughout this section it was assumed that $A$ is invertible, which is true if $\alpha_{i} > 0$ for all $i \in [N]$. However in scenarios where there is no need for a particular agent to regularize, e.g. $q_{i} < q^{*}$, that agent can choose $\alpha_{i} = 0$ for all practical applications. This is because all of the above analysis holds if $\alpha_{i}$ is chosen to be a small positive value, which can be set arbitrarily close to zero.
\subsection{Trade-Off Analysis}
There is an inherent trade-off between the speed at which we reach a solution and the quality of that solution. Theorem~\ref{thm:qreg} provides a lower bound on $\alpha_{i}$ that allows us to converge at any speed we wish, while Theorem~\ref{the:errorbound} provides an upper bound on $\alpha_{i}$ that allows us to bound the cost error between the solution we find and the optimal solution. However, in general, there is no reason to expect these two bounds to be compatible in the sense that $\alpha_{i}$ can be chosen such that both are satisfied for all $i \in [N]$. Therefore, when implemented, the network operator will likely have to decide whether speed or accuracy is more critical for the specific problem. If speed is mission-critical, then agents may select the smallest regularizations required to match that speed, and if accuracy is mission-critical, agents may select the largest regularizations that obey the specified error bound.
\section{Simulation} \label{sec:simulation}
To visualize the trade-off between speed and error when regularizing, we generate seven QPs, each with 100 diagonally dominant blocks. The QPs are generated to have initial convergence parameters of $q_{initial} =$ 0.99, 0.95, 0.85, 0.70, 0.50, 0.30, and 0.01. For each QP, $A$ is independently chosen according to Theorem~\ref{thm:qreg} such that $q$ is reduced by percentages ranging from 0\% to 100\%, and this percentage reduction is plotted against the corresponding error bound given by Theorem~\ref{the:errorbound} in Figure~\ref{fig:qvse}. For example, the data for the QP with $q_{initial} = 0.85$ is plotted by the yellow dotted line in Figure~\ref{fig:qvse}, and one can see that if this QP is regularized to reduce $q$ by 10\% (i.e., a reduction from 0.85 to 0.765), the relative error in cost can be upper bounded by approximately $\epsilon = 18\%$.
\begin{figure}[!tp]
\centering
\includegraphics[draft = false,width=3.6in]{qvse-eps-converted-to.pdf}
\caption{The percent reduction in $q$ due to regularization plotted vs the relative cost error bound that regularization induces, with different lines plotting this relationship for QPs with different initial values for $q$.
}
\label{fig:qvse}
\end{figure}
There are two main takeaways from Figure~\ref{fig:qvse}. The first is that, as expected, larger regularizations result in a larger relative error bound, which is upper bounded by 1. This is because $q \rightarrow 0$ as $A \rightarrow \infty$, $f(\hat{x}_{A}) \rightarrow 0$ as $A \rightarrow \infty$, and $\epsilon \rightarrow 1$ as $f(\hat{x}_{A}) \rightarrow 0$. The second is that the larger $q_{initial}$ is, the more sensitive the error bound for the QP is to regularizing. That is, if $q_{initial}$ is thought of as a condition number, then ``poorly conditioned" QPs will have larger errors due to regularizing.
A second simulation was run to demonstrate the convergence properties due to regularizing. One QP was generated with 100 blocks and $q_{initial} = 0.85$. Three different regularization matrices were chosen according to Theorem~\ref{thm:qreg}, called $A_{5}$, $A_{15}$, and $A_{45}$, such that $q$ is reduced by 5\%, 15\%, and 45\%, respectively. The blocks are then distributed among 100 agents, who have a 10\% chance of computing an update and a 1\% chance of transmitting a state to each other agent at each timestep. Four simulations were run, one solving the unregularized QP, and three others using each regularization matrix. The 2-norm of the system error to the unregularized solution, $\|x(k)-\hat{x}\|_{2}$, is plotted for each simulation against iteration number in Figure~\ref{fig:comparison}.
\begin{figure}[!tp]
\centering
\includegraphics[draft=false,width=3.6in]{Comparison-eps-converted-to.pdf}
\caption{Network error convergence of Algorithm 1 when unregularized vs regularizing such that $q$ is reduced by 5\%, 15\%, and 45\%.
}
\label{fig:comparison}
\end{figure}
As expected, only the unregularized case converges to the unregularized solution, while the other cases converge to other solutions whose distances to the unregularized solution grow with larger regularizations. However, the cases with larger regularizations initially converge toward $\hat{x}$ faster, up to a point. That is, larger regularizations mean the system will initially move toward $\hat{x}$ faster, but will reach the turn-off point, where the system error grows again, earlier and further away from $\hat{x}$. This behavior suggests that a vanishing regularization scheme, where $A$ shrinks to zero with time, may lead to accelerated convergence to the exact solution $\hat{x}$. Note also that convergence even in the unregularized case is non-monotone, and at times the norm of the system error may even grow due to the asynchronous nature of communications, but Theorem~\ref{thm:alg1works} guarantees these growths are bounded and the error will converge to zero.
\section{Conclusions} \label{sec:conclusions}
We have developed a distributed quadratic programming framework that converges under totally asynchronous conditions. This framework allows agents to select stepsizes and regularizations independently of one another, using only knowledge of their block of the QP, that guarantee a specified global convergence rate and cost error bound. Future work will apply these developments to quadratic programs with functional constraints.
\bibliographystyle{IEEEtran}
\section{Introduction}
The discovery of anomalous frequency and dissipation behavior seen in torsional oscillators (TOs) \cite{Kim04a,Kim04b}
at low temperatures in $^4$He\ has stimulated numerous investigations. These anomalies have been argued to
demonstrate a non-classical rotational inertia (NCRI)
of the long-predicted supersolid quantum state\cite{Andreev69,Chester67,Reatto69,Leggett70,Anderson84}.
Successive TO experiments \cite{Rittner06,Kondo07,Aoki07,Clark07,Penzev07,Hunt09,Pratt11,Gadagkar2012}
confirmed the finding of the reported anomalous behavior.
Hysteresis behavior and long equilibration times have been observed
\cite{Aoki07,Hunt09,Kim09},
which depend strongly on growth history and annealing \cite{Rittner06}. In the same temperature range, experiments including shear modulus \cite{Beamish05,Beamish06}, ultrasonic \cite{Goodkind02,Burns93} and heat propagation \cite{Goodkind02} have also shown various anomalies. The character and the existence of mass flow are still a matter of
intense investigation. Experiments designed to probe for mass flow by squeezing the lattice report no such flow \cite{Beamish05,Beamish06,Greywall77,Paalanen81,Sasaki06,Ray08,Bonfait89,Balibar08}.
However, experiments in which a chemical potential gradient was created via coupling to a superfluid reservoir, suggest mass flow of an unusual type \cite{Ray08,Ray09,Ray10,Ray11,Vekhov2012}. These and many other results have
led to a flurry of activity. Structural measurements \cite{Burns08,Blackburn07} suggest
that solid $^4$He\ may be composed of a dynamic mosaic of crystals, highlight the importance of defects, and report the absence of notable structural change in the vicinity of the putative supersolid transition.
The intricate structure and dynamics of solid $^4$He\ make it a fascinating system.
We note that when a mosaic of $^4$He\ crystals with intervening liquid channels is placed in a metallic container
\cite{Ray08,BalibarComment}, thermal expansion effects
(in particular the larger thermal expansion coefficient of helium vis a vis that of the metallic container) may be at play in blocking a remnant
superleak. Such a thermal compression will effectively shut down any mass flow and close the open channels as the temperature is raised. This scenario remains a viable explanation for the anomalous mass flow and fountain effect reported by Hallock's group \cite{Ray08,Ray09,Ray10,Ray11,Vekhov2012}, until a superleak can be ruled out.
After eight years of intense experimental and theoretical investigations one must ask what are the established facts, what are the current expectations and hypotheses, and what are the future directions of research in solid $^4$He. There are numerous reviews and progress updates available describing the field
\cite{ProkofevAdvances,Balibar2011,Boninsegni2012}
and addressing some of these questions. Most of the literature reviewing the subject treats the notion of supersolidity in $^4$He\ as established and proceeds to discuss the status of current and future experiments aimed at further ``proving" the existence of a supersolid phase transition at very low temperatures.
Amongst the many exciting proposals concerning $^4$He\ as well as supersolids, we briefly mention Andreev's proposal for superglass \cite{Andreev07, Andreev09, Korshunov09}, Anderson's suggestion of vortex proliferation and flow \cite{Anderson2007,Kubota} and supersolid dislocation cores
\cite{dislocation_core1,dislocation_core2,dislocation_core3,Rossi}. As will become evident in later sections, our analysis centers on the dynamical effects of defects and as such may include
vortices, dislocations, or any other defects. Estimating the superfluid fraction from typical dislocation core sizes ($\sim 3$ nm) and densities ($\sim 10^{10}$ m$^{-2}$) shows that a direct NCRI origin from superfluid dislocation cores in $^4$He\ cannot account for the
magnitude of the TO anomaly; the resulting fraction is orders of magnitude too small.
We believe that at this mature stage of research in $^4$He\ a somewhat more general view on possible options is needed, where one asks the question: what are the options for possible states that might form at low temperatures in $^4$He\ and by implication in other solid bosonic matter? By taking a broader view one explicitly allows for states other than pure supersolidity and includes possible coexistence phases to form as well. This review is a contribution in the spirit of broadening the conversation by explicitly allowing for other components or ``active ingredients" to be present in addition to, or perhaps instead of, supersolidity in $^4$He. Thus, if one accepts the presence of defects in a solid, then naturally the question arises about the dynamic signatures of such crystal defects and whether they can dominate the response to an external stimulus.
\begin{figure}
\begin{center}
\parbox[t]{0.4\linewidth}{
\caption{The phase diagram of $^4$He\ with the putative supersolid phase transition below 0.3 K \cite{Kim04b} or an alternate order-disorder dominated crossover region governed by impedance matching between the applied frequency and internal relaxation processes.
}\label{fig_phasediagram}
}
\hfill
\begin{minipage}{0.45\linewidth}
\includegraphics[clip,width=1.0\linewidth,angle=0]{FIG01.eps}
\end{minipage}
\end{center}
\end{figure}
We provide a brief critical analysis of some of the existing data and point out that a significant fraction of the data on mechanical, thermodynamic and dielectric properties of $^4$He\ can be analyzed in terms of the emergence of a dissipative {\em viscous} component that we shall call {\em glassy} component
\cite{Balatsky07,Nussinov07,Graf08,Su10a,Su10b}.
This extra component by itself is sufficient to modify numerous properties of solid $^4$He\ and can be responsible for anomalous thermodynamic, elastic and dielectric
properties in solid $^4$He\ observed in experiments.
The glassy component is not a supersolid in the classical sense \cite{ProkofevAdvances}, yet it can coexist and couple to a supersolid component as some of the proposed superglass phases indicate
\cite{Hunt09, Boninsegni06,wu2008,Biroli08}.
The precise nature of the state of solid $^4$He\ at the lowest temperatures remains a puzzle. In the past, investigation of $^4$He\ has played an important role in the development of basic concepts in modern condensed matter physics like superfluidity, order parameter, topological excitations, and critical exponents. Given such a prominent role it played in the past, it is paramount to resolve the puzzles that are clearly seen in experiments at the lowest temperatures. Yet there is one important difference that might be key to a solution this time. We propose that, precisely because the compound is clean and very well characterized, the enabling components for the anomalies at the lowest temperatures are defects that undergo freeze-out and constitute a glass-like component.
To sharpen this point of the discussion, any supersolid component would imply some sort of two-fluid hydrodynamics that schematically expresses the total mass current of helium atoms as a sum of a normal and a superfluid component, determined by the normal and superfluid mass fractions $\rho_n$ and $\rho_s$ and their respective velocities. In this two-fluid picture the total mass current is given as
\begin{equation}
{\bf j} = \rho_n {\bf v}_n + \rho_s {\bf v}_s .
\end{equation}
In any superfluid or supersolid phase the coefficients $\rho_n(T,P)$ and $\rho_s(T,P)$ are functions of thermodynamic variables like temperature $T$ and pressure $P$.
There is no direct evidence of either dc or ac supersolid mass current at lowest temperatures, where the putative supersolid state sets in
\cite{Rittner06,Beamish05}, except for the experiments by Hallock \cite{Ray08,Ray09,Ray10,Vekhov2012}. However, it remains to be seen
whether Hallock's observations of mass flow and a fountain effect can be ruled out as arising from superleaks, i.e., liquid channels in the solid connecting the superfluid leads.
Therefore, some even proposed that one cannot ``squeeze a superfluid component out of a stone''
\cite{Dorsey06}.
We know that these supersolid expectations
do not apply in the case of solid $^4$He\ at lowest temperatures, because TO experiments clearly indicate that the period and damping exhibit significant hysteresis effects and strong dependence on the history of sample preparations and annealing. All these experimental observations combined would suggest to an impartial observer that there is at the very least, in addition to supersolidity, another physical component at play in $^4$He. Our analysis of various experiments suggests that the crossovers, seen in the specific heat, TO, shear modulus and dielectric function experiments
\cite{Aoki07,LinChan07, Beamish10,Yin11}
are a reflection of the physics whose origin is not due to supersolidity, but a consequence of the dynamics of defects in solid $^4$He\ .
Specifically, we propose the presence of a glassy component in solid $^4$He\ at low temperatures
in order to explain the observed anomalous linear temperature dependence in the specific heat of an otherwise perfect Debye solid
$^4$He\ \cite{Balatsky07, Graf08}.
We used similar ideas when we analyzed the TO, shear modulus, and dielectric properties by assuming the presence of a glassy component at parts-per-million ({\em ppm}) concentrations and asking what the dynamic consequences should be.
With the wealth of data available we do not attempt to provide a complete overview of the field, but give a summary of the work centered on the role of a glassy component in an otherwise nearly perfect crystal.
The exact nature of the glassy component is not known. For example, it may be caused by
two-level systems of pinned dislocation lines, vortex excitations, etc. It is however important to point out that the amplitude of period shift can be changed dramatically and depends on growth history and annealing procedures of the crystal. To explain the puzzling features of solid $^4$He \ we \emph{conjecture} that structural defects like localized dislocation segments or groups of displaced (out-of-equilibrium) atoms effectively form a set of two level systems (TLSs) which are present at low temperatures. These immobile crystalline defects will affect the {\em thermodynamics} of solid $^4$He \cite{Balatsky07,Graf08}, the {\em mechanics} of TOs \cite{Nussinov07} and shear modulus
\cite{Su10a,Su10b},
and dielectric properties \cite{Su11}. Other mechanisms for TLS and glassy dynamics, e.g., due to point defects and grain boundaries are possible as well. We are at a stage where phenomenology allows us to make progress with testable predictions, while a microscopic picture of crystal defects and interactions is still missing.
Early on it was recognized that pinned vibrating dislocation lines can account for a plethora of anelastic damping phenomena in the ultrasound, TO, and shear modulus experiments. Since most experiments are believed to be in the linear or elastic strain-stress regime any plastic deformation due to the motion of gliding dislocation loops has been neglected. However, this is not necessarily so. We proposed \cite{Caizhi2012} that dislocation-induced anomalous softening of solid $^4$He\ is possible due to the classical motion of gliding dislocation lines in slip planes. This picture of dislocation motion is widely accepted in conventional metals. Similar effects are at play in the quantum arena where mobile dislocations (dislocation currents) lead to a screening of applied shear via a Meissner-Higgs type effect
\cite{Zaanen04,Cvetkovic08}. Such unpinned dislocations that screen shear render the system more fluid and may, in line with the framework that we advance in this article,
trigger the $^4$He\ anomalies as the mobile dislocations become quenched at low temperatures \cite{dislocation_core3, Caizhi2012}. Recent experiments provide further impetus for such a picture
\cite{eyal12a,eyal12b}.
The technique of choice for interrogation of solid $^4$He\ has been the TO with varying degrees of complexity. However, the TO does not provide any direct information on the microstructure of samples. More direct structural x-ray and neutron measurements do not have the adequate resolution to detect any changes at the {\em ppm} level in the structure of $^4$He\ at lowest temperatures. Additional challenges arise given how small the volume fraction of the glassy component can be. We estimate it to be in the range of
a few hundred {\em ppm} from the specific heat and pressure contributions. Therefore the precise characterization of the microstructure of solid $^4$He, growth history, annealing, and $^3$He\ dependence remain pressing issues in resolving the hypothesis of the presence of a glassy component and TLS in solid $^4$He.
The notion of importance of the role of disorder in solid $^4$He\ received further support over the years in observations of the strong dependence of TO results on the specific design of sample cells, see work by the group of Chan \cite{DYKim2012}. It is hard to imagine that an intrinsic material property like supersolidity should depend strongly on the stiffness or geometry of the torsional oscillator apparatus, while crystalline disorder can easily be affected by those design properties.
In the analysis of the observed excess specific heat, we used a model of independent TLS, which gives the canonical signature of a linear-in-temperature contribution at lowest temperatures. Building on the presence of TLS we evaluated the mechanical properties of the TO using a model of quenched defects. This approach allows us to make predictions on the viscoelastic properties of $^4$He\ and on the electro-elastic coupling that can be tested in a setup that does not require the TO and hence can be tested over a much broader frequency range. These predictions allow one to directly verify the very presence of quenched defects in $^4$He. We therefore would welcome any direct tests of our ideas.
The remainder of this paper is organized as follows. We present the general discussion on the role of quenched defects and disorder in solid $^4$He\ with particular attention to the consequence of defects on the dynamics in Section \ref{quenched}. This is followed in Section \ref{thermo} by an analysis of the thermodynamic properties of solid $^4$He. In Section \ref{general_formalism}, we present a unified framework to analyze dynamical response function that invokes arbitrarily high order backaction effects of defects onto the solid bulk. We then invoke, in Section \ref{tos} this approach and summarize our analysis of the torsional oscillator. In Section \ref{shears} we discuss the shear modulus analysis and the strain-stress relations using a viscoelastic model designed to capture the anelastic contribution from defects. In Section \ref{diels} we discuss predictions for the dielectric properties that follow from the viscoelastic model with locally frozen-in defect dynamics. Finally we conclude with a discussion and give our view on future perspectives in the field.
\section{Defect Quenching and Its Implications}
\label{quenched}
In this review, we will examine the general consequences of a transition
from mobile defects (or dynamic fluid-like components) at high temperatures to quenched immobile defects at low temperatures.
Microscopically, these defects may be dislocations, grain boundaries, vortices, or others.
It may be posited that in quantum solids such as $^4$He\ , zero point motion leads to larger dynamics than is
common in classical solids. To put the discussion in perspective, we recall a few rudiments concerning annealing and quenching.
When quenched, systems fall out of equilibrium.
En route to non-equilibrium states, relaxation times increase until
the dynamic components essentially ``freeze'' (on pertinent experimental time scales) into an amorphous
state. In materials with (sufficiently large) external disorder,
quenching may lead to ``spin-glass'' characteristics. By contrast, in the absence of imposed external disorder, when
fluid components (either classical or quantum) fall out of equilibrium by sufficient rapid cooling (so-called ``super-cooling'')
to low enough temperatures, the resultant state is termed a ``glass''. As liquids are supercooled, their characteristic relaxation times and viscosity may increase dramatically.
If, instead, the temperature is lowered at a sufficiently slow rate, the system does not quench and instead remains
an ``annealed'' system in thermal equilibrium. Notwithstanding exciting progress, exactly how the dynamic components evolve in $^4$He\
crystals as the temperature is lowered is, currently, an open problem. An initially surprising and undeniable feature that has, by now, been seen in
many experiments is that the solid $^4$He\ anomalies depend dramatically on the growth history of the crystal
and diminish as the system is cooled down slowly and defect quenching is thwarted. Memory effects reminiscent to those in
glasses are further present.
These observations imply that ``there is more to life'' than static NCRI and other annealed supersolid properties on their own:
quenching plays a definitive role in triggering the observed effects.
To understand the experimental observations and build a predictive theoretical framework, we invoke
general physical principles
allowing a computation of response functions
and using as input
known characteristics of quenching.
Physically, as alluded to in the Introduction, the quenching is that of the mobile defects (which may constitute only {\it a tiny fraction} of the system) as they become arrested against {\it a crystalline background}.
Our analysis does not rule out the presence of a small supersolid component. The observed large change in the TO dissipation cannot be solely described
by uniform Bose-Einstein condensation \cite{Nussinov07, Graf08, Huse07}.
It remains to be seen if nonuniform Bose-Einstein condensation alone either along grain
boundaries \cite{Clark06} or along the axis of screw dislocations \cite{Boninsegni06, Pollet07, Boninsegni07}
can explain the dynamic response of TOs. Note that simple estimates of the supersolid fraction of dislocation cores are orders of magnitude too small.
There exists a vast literature on defect quenching in systems that range from vortices in superfluid Helium to cosmic strings \cite{Kibble76,Zurek85} and countless others. Our particular focus is, of course, on defects in a crystalline
system (solid $^4$He). Quenched dynamics of such defects is associated with a change of plasticity and related internal dissipation.
Dielectric (and other) response functions in systems of varying plasticity, such as various glass formers as their temperature is lowered,
indicate, in a nearly universal fashion, the presence of a distribution of local relaxation times. These lead to the canonical Cole-Cole or Cole-Davidson distribution functions and related forms as we briefly elaborate.
In an overdamped dissipative system, an impulse (e.g., an external electric field or an elastic deformation) at time $t=0$ leads
to a response $g(t)$ which at later times scales as
$g_{single} \sim \exp(-t/\tau)$ where $\tau$ is the (single) relaxation time. When Fourier transformed to the complex frequency ($\omega$) plane, this leads
to the Debye relaxor $g_{single}(\omega) = g_{0}/(1-i \omega \tau)$. Now, in systems that exhibit a distribution $P(\tau')$ of local relaxation events, the response functions attain the form
$\int d\tau' P(\tau') \exp(-t/\tau')$. Empirically, in dissipative plastic systems, relaxations scale as $\exp[-(t/\tau)^{c}]$ with a power $0<c<1$ that
leads to a ``stretching'' (slower decay) of the response function as compared to its single overdamped mode form $\exp(-t/\tau)$.
This stretched exponential and other similar forms of the response function capture the quintessence of the distribution of
relaxation times. Two widely used
relaxation time distributions are the Cole-Cole (CC) and Davidson-Cole (DC) functions that describe a
superposition of overdamped oscillators (Debye relaxors) \cite{Phase1, Phase2}.
With $g(\omega) = g_{0} G(\omega)$,
where $g_{0}$ is a material specific constant,
these two forms are given by different choices for the function $G$,
\begin{eqnarray}
\label{CCD}
G_{CC}(\omega) = 1/[ 1- (i \omega \tau)^{\alpha}], \nonumber
\\ G_{DC}(\omega) = 1/[ 1- i \omega \tau]^{\beta}.
\end{eqnarray}
Values of $\alpha$ and $\beta$ that differ from unity qualitatively play the role of the real-time stretching exponent $c$.
In the dc limit the mechanical motion of any mobile component will have ceased and there will be no relative motion and no transients. Therefore the coefficient $g_0$ is generally a function of frequency. In the case of the single TO its value is set by the resonance, $g_0 \approx g_0(\omega_0)$.
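For concreteness, the following minimal NumPy sketch evaluates the Debye relaxor alongside the CC and DC forms of Eq.~(\ref{CCD}); all parameter values are illustrative only.
\begin{verbatim}
import numpy as np

def G_debye(w, tau):          # single overdamped (Debye) relaxor
    return 1.0 / (1.0 - 1j * w * tau)

def G_CC(w, tau, alpha):      # Cole-Cole
    return 1.0 / (1.0 - (1j * w * tau) ** alpha)

def G_DC(w, tau, beta):       # Davidson-Cole
    return 1.0 / (1.0 - 1j * w * tau) ** beta

w = np.logspace(-2, 2, 5)     # frequencies in units of 1/tau
tau = 1.0
for name, G in [("Debye", G_debye(w, tau)),
                ("CC   ", G_CC(w, tau, 0.8)),
                ("DC   ", G_DC(w, tau, 0.8))]:
    # Im G is the loss; alpha, beta < 1 broaden the peak near w*tau ~ 1.
    print(name, np.round(G.imag, 3))
\end{verbatim}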
These relaxation times can be associated
with a distribution of TLSs describing viable low temperature configurations of the defects.
The simple TLS analysis can account for thermodynamic measurements.
Recent work \cite{Vural11}
obtained results beyond the TLS model with fewer parameters for generic non-uniform systems irrespective of specific microscopic
origin. We will, for the sake of simplicity, review our work on the low temperature properties of $^4$He\ assuming TLSs.
We conjectured \cite{Balatsky07} that structural defects, e.g.,
localized dislocation segments, form such a set of TLSs at low temperatures.
These immobile crystal defects affect the {\em thermodynamics} \cite{Balatsky07,Graf08} of bulk $^4$He\
and the {\em mechanics} \cite{Nussinov07} of the TO loaded with $^4$He.
For the analysis of the specific heat, we used independent TLS to obtain the universal signature of a
linear-in-temperature specific heat term at low temperatures.
\section{Thermodynamics}
\label{thermo}
Any true phase transition, including supersolid, is accompanied by a thermodynamic signature. Therefore it was anticipated that thermodynamic measurements would resolve the existing puzzles of supersolidity. The search for such thermodynamic signatures has proved challenging so far, see e.g., measurements of the specific heat
\cite{Swenson62,Frank64,ClarkChan05,LinChan07,LinChan09,WestChan09},
measurements of the pressure dependence of the melting curve \cite{Todoshchenko06,Todoshchenko07},
and pressure-temperature measurements of the solid phase \cite{Grigorev07,Grigorev07b,Lisunov2011}.
The main difficulties lie in measuring small signals at low temperatures in the presence of large backgrounds.
With improving experiments, measurements were conducted down to 20 mK.
While there is still no clear evidence of a phase transition in the melting curve experiments, recent pressure measurements and specific heat measurements have both shown deviations from the expected pure Debye lattice behavior.
Early on we proposed that these deviations might be related to a glass transition and be described by the contributions of two-level systems (TLS) \cite{Balatsky07,Graf08}.
We model the system of noninteracting TLS with a compact distribution of two-level excitation spacings, see Fig.~\ref{fig:DOS}.
Our results show that the low-temperature deviations in the measured specific heat can be explained by contributions from a
glassy fraction and/or TLS of the solid.
\subsection{Two level system model for the specific heat}
\begin{figure}
\begin{center}
\parbox[t]{0.5\linewidth}{
\caption{Density of states (DOS) of the two-level tunneling system. The black-dashed line represents the DOS for the standard glass model \cite{Anderson72,Phillips72,Balatsky07},
while the blue-solid line is the truncated DOS used to describe the TLS with a cutoff energy.
}\label{fig:DOS}
} \hfill
\begin{minipage}{0.45\linewidth}
\includegraphics[clip, width=1.0\linewidth,angle=0,keepaspectratio]{FIG02.eps}
\end{minipage}
\end{center}
\end{figure}
To avoid complications due to the presence of $^3$He\ atoms, we will compare the effect of different growth processes on ultrapure $^4$He\ containing at most (nominally) 1 ppb of $^3$He\ atoms. At such low levels of impurities, we expect to see the intrinsic properties of the solid.
We postulate that at temperatures much below the lattice Debye temperature, the specific heat of solid $^4$He\ is described by
\begin{eqnarray}
C(T) = C_{L}(T) + C_{g}(T) ,
\end{eqnarray}
where the lattice contribution to the molar specific heat is given by $C_L(T) = B_L T^3$, with coefficient
$B_L = 12 \pi^4 R/5 \Theta_D^3$, $R=8.314$ J/(mol K) is the gas constant, and $\Theta_D$ is the Debye temperature.
The second term describes the glass contribution due to the TLS subsystem and is given by
$C_{g} (T) = \frac{R}{k_B} \frac{d}{d T} \int_0^\infty dE \, {\cal D}_{g}(E) \, E \, f(E) ,$
with $k_B$ the Boltzmann constant and $f(E)=1/(e^{E/k_B T}+1)$ the Fermi function.
The density of states (DOS) of the TLS may be modeled by the box distribution function (Fig.~\ref{fig:DOS}):
\begin{eqnarray}
\label{DOE}
{\cal D}_{g} (E) = \frac{1}{2}{\cal D}_0 \left[ 1-\tanh((E-E_c)/W) \right] .
\end{eqnarray}
Here ${\cal D}_0$ is the zero-energy DOS, $E_c$ is a characteristic cutoff energy,
and $W$ is the width of the truncated density of states. For $E_c \to \infty$, one obtains the standard hallmark result of glasses at low temperatures: $C_{g}(T) = B_g T$,
where $B_g=(\pi^2/6)\, k_B R \,{\cal D}_0$. As we will elaborate in the next section, the glass coefficient $B_g$ has an intrinsic finite value at low temperature even for the purest $^4$He samples,
independent of $^3$He\ concentration.
As shown in (\ref{DOE}), our model goes beyond the standard glass model by introducing a cutoff in the DOS of the TLS (Fig.~\ref{fig:DOS}). The cutoff could be due to the finite barrier height of double-well potentials giving rise to the TLS, because in real materials the tunneling barrier has an upper bound set by lattice and dislocation configurations \cite{Jaeckle72}. At high temperatures, the TLS contribution is less important since the thermal energy easily overcomes the barrier and the TLS effectively resembles a single harmonic degree of freedom.
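To make the model concrete, the following NumPy sketch evaluates $C_{g}(T)$ for the truncated DOS of Eq.~(\ref{DOE}) and exhibits the linear-in-$T$ regime for $k_{B}T \ll E_{c}$; the discretization is ours, and the parameter values are merely of the order of those found below for sample BC04.
\begin{verbatim}
import numpy as np

kB = 8.617e-2                     # Boltzmann constant in meV/K
R  = 8.314                        # gas constant in J/(mol K)
D0, Ec, W = 6.5e-4, 3.3e-2, 1e-3  # in 1/meV, meV, meV

E  = np.linspace(1e-6, 0.5, 200001)             # energy grid in meV
dE = E[1] - E[0]
DOS = 0.5 * D0 * (1.0 - np.tanh((E - Ec) / W))  # Eq. (DOE)

def U(T):
    """TLS energy per atom in meV; f(E) in overflow-safe form."""
    f = 0.5 * (1.0 - np.tanh(E / (2.0 * kB * T)))
    return np.sum(DOS * E * f) * dE

for T in [0.02, 0.05, 0.1, 0.3]:                # temperatures in K
    dT = 1e-4
    Cg = (R / kB) * (U(T + dT) - U(T - dT)) / (2.0 * dT)
    print(T, Cg, Cg / T)       # Cg/T roughly constant for kB*T << Ec
\end{verbatim}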
\begin{figure}
\begin{center}
\parbox[t]{0.30\linewidth}{
\caption{$\delta C/T$ for experiments (squares) \cite{LinChan09} and the modified glass model with a cutoff energy
in the TLS DOS (blue lines) for four samples with different structural quality,
where $\delta C = C - C_L$.
Note the large deviation of data points at high temperatures in the highest purity crystal SL34 of solid-liquid coexistence,
which makes the subtraction of the Debye contribution in this sample questionable.
}\label{fig:CovT}
} \hfill
\begin{minipage}{0.65\linewidth}
\includegraphics[clip, width=1.0\linewidth,angle=0,keepaspectratio]{FIG03.eps}
\end{minipage}
\end{center}
\end{figure}
\subsubsection{Specific Heat}
We compare our calculated specific heat with the experimental data by the Penn State group \cite{LinChan09,LinChan07} for four different
growth processes: BC20, BC04, SL34 and SL31. BC20 (BC04) is a sample grown by the blocked capillary (BC) method over 20 (4) hours. SL34 (SL31) denotes a sample in the solid-liquid coexistence state with 34 (31) percent solid fraction. Notice that sample SL34 actually corresponds to their reported 75\% solid-liquid coexistence sample and SL31 corresponds to their constant pressure sample (CP)
\cite{Su10a,LinChan09}.
The data are described with three parameters: ${\cal D}_0$, $E_c$ and the Debye temperature $\Theta_D$.
We first determine $\Theta_D$, or the lattice contribution, from the high-temperature data.
The lattice contribution is then subtracted from $C$ to obtain the difference $\delta C = C - C_L$. We fit $\delta C/T$ with our specific heat formula for TLS.
Next we plot $\delta C/T$ in Fig.~\ref{fig:CovT}.
The TLS model with cutoff describes the data well. In these plots we fixed the width of the cutoff to $W = 1 \, {\rm \mu eV}$ since there is no qualitative difference when varying $W$ within a reasonable range.
Notice that the shape of $\delta C(T)$ depends strongly on the subtraction of the high-temperature lattice contribution.
The TLS behavior is mainly characterized by the zero-energy DOS and the cutoff energy of the TLS, which are both noticeably larger in BC04,
see Table~\ref{para_table}. This may be explained by the rapid growth process of a strained crystal, which gives rise to both a larger TLS concentration, $n_{\rm TLS}$, and a larger cutoff energy, $E_c$, i.e., a larger maximum TLS level splitting. Since the TLS concentrations of these samples range from $3.7$ to $21.5$ {\em ppm}, which are at least 1000 times larger than the nominal $^3$He concentration, we believe that $^3$He has
no effect on the observed intrinsic heat capacity of ultrapure solid $^4$He.
\begin{table}[h!]
\begin{center}
\begin{tabular}{c|ccccccc}
& $P$ & $V_m$ & $\Theta_D $ & ${\cal D}_0 \times 10^4$ & $E_c \times 10^2$
&$n_{\rm TLS}$ & $\Delta S$\\
& (bar) &(cm$^3$/ mol)& (K) & (1/meV) & (meV) & ({\em ppm}) & ($\mu$J/(mol K)) \\
\hline
SL34 & 25 & 21.25 & 24.5 & 2.2 & 1.7 & 3.7 & 21.3\\
SL31 & 25 & 21.25 & 24.8 & 2.9 & 2.2 & 6.4 & 36.9\\
BC20 & 33 & 20.46 & 29.7 & 3.0 & 2.3 & 6.9 & 39.5\\
BC04 & 33 & 20.46 & 28.9 & 6.5 & 3.3 & 21.5& 115.0\\
\end{tabular}
\end{center}
\caption{Physical and model parameters: Debye temperature $\Theta_D$, zero energy TLS DOS
${\cal D}_0$, cutoff energy $E_c$, concentration of TLS $n_{\rm TLS}={\cal D}_0\times E_c$ and excess entropy $\Delta S$. }
\label{para_table}
\end{table}
\subsubsection{Entropy Analysis}
Our analysis of the excess entropy supports the existence of a glassy component or TLS. The excess entropy,
\begin{eqnarray}
\Delta S(T) = \int_0^T dT' \,\delta C(T')/T',
\end{eqnarray}
is associated with an excess specific heat.
We find consistently for specific heat measurements \cite{ClarkChan05,LinChan07,LinChan09} that the
obtained entropy values
$\Delta S \sim 10^{-4}$
J/(K mol) are 5 to 6 orders of magnitude smaller than the theoretical prediction for a homogeneous supersolid
if the entire sample actually underwent Bose-Einstein condensation (BEC). In the limit of non-interacting
bosons with a quadratic dispersion one finds the standard result
$\Delta S_{BEC}=(5/2) (\zeta(5/2)/\zeta(3/2))\, R\, (T/T_c)^{3/2}\sim (5/4)\, R\sim 10.4$ J/(K mol) at $T=T_c$.
This means that if $\Delta S$ is indeed due to supersolidity, then the supersolid volume fraction is at most
11 {\em ppm} or 0.0011\% in the most disordered or quenched sample of the four ultrapure samples studied in this work,
i.e., sample BC04.
Such a supersolid fraction in the specific heat is more than 100 to 1000 times smaller than is usually reported for the
non-classical rotational inertia fraction (NCRIF) in TO experiments.
This enormous discrepancy between supersolid fractions in specific heat and TO experiments was already
noticed in Refs.~\cite{Balatsky07,Graf08}, while Lin et al.\ \cite{LinChan09} keep invoking a hyperscaling mechanism
of unknown origin.
To date, this discrepancy remains a major puzzle that is hard to reconcile within a supersolid scenario.
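The orders of magnitude entering this comparison are easily reproduced; the following back-of-the-envelope sketch (assuming SciPy, with the numbers taken from the estimates above) makes the arithmetic explicit.
\begin{verbatim}
from scipy.special import zeta

R = 8.314                                  # J/(mol K)
dS_BEC = 2.5 * zeta(2.5) / zeta(1.5) * R   # ideal BEC entropy at T = Tc
print(dS_BEC)                              # ~10.7 J/(mol K), i.e. ~(5/4) R

dS_BC04 = 115e-6                           # J/(mol K), sample BC04
print(dS_BC04 / dS_BEC * 1e6, "ppm")       # ~11 ppm bound on the fraction
\end{verbatim}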
The validity of the analysis of the entropy in terms of a non-interacting BEC is repeatedly questioned on grounds of how robust it is in the presence of interactions. We discussed in our original study \cite{Balatsky07} the effects of interactions on the entropy. Here it is
important to realize that the entropy is
a total count of all low-energy states irrespective of a particular model for the specific heat. Hence, we concluded that strong-coupling effects
cannot change the order of the effect. They may only change the magnitude. For example, the
well-known strongly correlated superfluid system $^4$He\ possesses a superfluid entropy of $\sim 0.6 R \sim 4.6 $ J/(K mol) \cite{Ahlers1973},
which is only half the value of the non-interacting BEC. With hindsight this justifies the neglect of strong-coupling effects in the order of magnitude analysis.
Clearly the entropy is a reliable measure of any phase transition in $^4$He. No matter how one looks at this puzzle, the reported excess entropy is either far too small to explain observed NCRIF effects in torsional oscillators or far too large to describe the boiling off of $^3$He\ atoms from dislocation lines, when the nominal concentration of $^3$He\ is less than 1 {\em ppb}. For those reasons, we argued in favor of two-level systems of low-lying states until a better microscopic understanding of solid $^4$He\ emerges.
\subsubsection{Comparison with Pressure Measurements}
\begin{figure}
\begin{center}
\parbox[t]{0.35\linewidth}{
\caption{($P-P_0)/T^2$ vs. $T^2$ for $P \sim 33$ bar. The squares represent the data reported by Grigor'ev et al. \cite{Grigorev07,Grigorev07b} The blue-thick and black-thin lines are the predictions for BC04 and BC20, respectively, derived from our specific heat analysis. The black-dotted line is the BC04 curve shifted vertically by a constant to illustrate the capability of the TLS model to describe Grigor'ev's data. }\label{fig:PovTsq2}
} \hfill
\begin{minipage}{0.64\linewidth}
\includegraphics[width=1.0\linewidth,angle=0,keepaspectratio]{FIG04.eps}
\end{minipage}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\parbox[t]{0.35\linewidth}{
\caption{Low-temperature pressure deviation from lattice contribution of Debye solid (data by Yin et al.\ \cite{Yin11}). The intercept of $\Delta P/T^2$ vs.\ $T^2$ extrapolated from low-temperature data points is in agreement with the TLS contribution of order 100 {\em ppm} ($\Delta P = P-P_0$).
The arrow marks the deviation from the Debye solid behavior. The large scatter in data at lowest $T$ is due to the subtraction of $P_0$. Dashed lines are guides to the eyes.
}\label{fig:PT}
} \hfill
\begin{minipage}{0.64\linewidth}
\includegraphics[width=1.0\linewidth,angle=0,keepaspectratio]{FIG05.eps}
\end{minipage}
\end{center}
\end{figure}
Next we relate the pressure measurements with the specific heat measurement through our model.
The quantities to characterize the pressure measurement in the combined lattice and glass models are $a_L$ and $a_g$ defined by
\begin{eqnarray}
P(T) \equiv P_0+P_L(T)+P_g(T) = P_0 + a_L T^4+ a_g T^2 ,
\end{eqnarray}
where $P(T)$ is the pressure at temperature $T$.
$P_0$, $P_L$, $P_g$ are the pressure contributions of, respectively, the static lattice at zero temperature, the lattice vibrations, and the two-level excitations of the glass. On the other hand, the thermodynamic Maxwell relations between pressure and specific heat give
\begin{eqnarray}\label{maxwell}
\left(\frac{\partial P}{\partial T} \right)_V =
\frac{\gamma_g}{V_m} \, C_{g,V} + \frac{\gamma_L}{V_m} \, C_{L,V} ,
\end{eqnarray}
where $\gamma_i$ are the Gr\"uneisen coefficients of the glass ($g$) and lattice ($L$).
Literature values for the Gr\"uneisen coefficient of phonons in solid hcp $^4$He range between $2.6 < \gamma_L < 3.0$
\cite{Grigorev07b,Driessen86}, while nothing is known about $\gamma_g$ of glassy $^4$He.
For simplicity we assume $\gamma_g \sim \gamma_L =2.6$.
Equation~(\ref{maxwell}) is related to the first Ehrenfest relation involving the compressibility, which was shown to be always satisfied in glasses \cite{Nieuwenhuizen1997}.
In Figs.~\ref{fig:PovTsq2} and \ref{fig:PT} we show the temperature dependence of the $(P-P_0)/T^2$ data reported by Grigor'ev et al.
\cite{Grigorev07,Grigorev07b} and Yin et al.\cite{Yin11}.
The curves for samples BC04 and BC20 are derived from our specific heat analysis of the data by Lin et al.\cite{LinChan09}
The key result is the dependence $\sim T^2$ with finite intercept at $T=0$.
In the TLS model, the finite intercept describes the glassy contribution, whereas the $T^2$ behavior is attributed
to phonons.
In conclusion, the data by Grigor'ev et al.\ and Yin et al.\ are in agreement with predicted $(P-P_0)/T^2$
curves for a system of TLS.
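For orientation, the glassy intercept $a_{g}$ can be estimated directly from the specific heat parameters via Eq.~(\ref{maxwell}), since $C_{g}=B_{g}T$ integrates to $P_{g}=(\gamma_{g}B_{g}/2V_{m})\,T^{2}$; the short sketch below uses illustrative numbers of the order of sample BC04.
\begin{verbatim}
Bg = 7.7e-4              # J/(mol K^2), low-T slope of C_g
gamma_g = 2.6            # assumed Grueneisen coefficient of the glass
Vm = 20.46e-6            # molar volume in m^3/mol
ag = gamma_g * Bg / (2.0 * Vm)             # P_g = ag * T^2, in Pa/K^2
print(ag, "Pa/K^2 =", ag * 1e-5, "bar/K^2")
\end{verbatim}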
\section{General response function formalism and physical origin of dynamical anomalies}
\label{general_formalism}
To rationalize the TO experiments, we developed a general phenomenological formalism in Ref.~\cite{Nussinov07}. We have since extended and applied it to visco-elastic and dielectric properties. With simple modifications, this predictive approach can be used to study all measurable dynamical response functions. Here, we summarize the essence of our approach. In later sections, we will apply it in a self-contained way to the various
quantities that we wish to interrogate. We start with the equation of motion for generalized coordinates $q_{i}$. These coordinates may be an angle, $q=\theta$, in the case of the TO experiments, Cartesian components of atomic displacements, $q_{i}=u_{i}$, with ${\bf u}$ the local atomic displacement (as in our analysis of the visco-elastic and dielectric response functions), or any other generalized coordinate. Associated with these coordinates are conjugate generalized momenta $p_{i}$ (e.g., angular momentum in the TO analysis, linear momentum for Cartesian coordinates) and their associated generalized forces $F_{i}$ (torques in the case of the TO, and rectilinear forces for atomic displacements). Physically, these forces correspond to a sum
of two contributions:
(i) Direct forces ($F_{direct}$). These may originate from either externally applied forces on the solid ($F_{ext}$) or lowest order ``direct'' internal forces $(F_{int}$). By direct forces we allude
to forces on the coordinate $q$ that do {\it not} involve the response of the system on $q$ as a result of its change.
(ii) Indirect ``backaction'' forces ($F_{BA}$). These allude to higher order effects wherein a variation in the coordinate $q$ can lead to displacements in other parts of the medium (e.g., those involving plastic regions or
nearby atoms) which then act back on the original coordinate $q$.
To linear order
\begin{eqnarray}
\label{BAE}
F_{BA}(t) = \int_{-\infty}^t g(t-t') q(t') dt'.
\end{eqnarray}
The backaction function $g(t-t')$ captures how a displacement $q$ at time $t'$ can lead to a perturbation in the solid
which then acts back on the coordinate $q$ at time $t$. With these preliminaries in tow,
we now outline our standard linear response formalism which further takes into account all higher order backaction effects.
$\bullet$ (1) Write down the Newtonian equation(s) of motion for the generalized coordinate(s) $q$ that we wish to study.
With a suitable differential operator $\chi_{0}^{-1}$, these can be cast as
\begin{eqnarray}
\chi_{0}^{-1} q(t) = F_{direct}(t) + F_{BA}(t).
\label{general_EOM}
\end{eqnarray}
This equation might seem a bit formal. To make it concrete, we note that in the simplest case that we will discuss, that of the compact scalar angular coordinate $\theta$ for the TO orientation,
the operator
\begin{eqnarray}
\chi_{0}^{-1} = I_{osc} \frac{d^{2}}{dt^{2}} + \gamma_{osc} \frac{d}{dt} + \alpha_{osc}
\end{eqnarray}
where $I_{osc}, \gamma_{osc},$ and $\alpha_{osc}$ are, respectively, the oscillator moment of inertia,
dissipation, and
torsion rod
stiffness. A more complicated tensorial operator involving the elastic modulus
appears when writing the equations of motion for the Cartesian components of the atomic displacements.
$\bullet$ (2) When Fourier transformed to the complex frequency ($\omega$) plane, $\chi_{0}(\omega)$ corresponds to
the bare susceptibility. We will denote the Fourier transforms of the various quantities (forces, backaction function) by simply making it clear that
the argument of the various quantities is now the frequency $\omega$ and not the time $t$. We trivially recast Eq. (\ref{general_EOM}) as
\begin{eqnarray}
\chi^{-1}(\omega) q(\omega) = F_{direct}(\omega),
\end{eqnarray}
where
\begin{eqnarray}
\label{Dyson}
\chi^{-1}(\omega) = \chi_{0}^{-1}(\omega) - g(\omega).
\end{eqnarray}
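In passing from Eq.~(\ref{general_EOM}) to Eq.~(\ref{Dyson}) we used the Fourier-space form of the backaction of Eq.~(\ref{BAE}). As a brief sketch, with the $e^{-i\omega t}$ convention implicit below, the convolution theorem and causality give
\begin{eqnarray}
F_{BA}(\omega) = g(\omega)\, q(\omega) , \qquad
g(\omega) = \int_{0}^{\infty} ds\; g(s)\, e^{i\omega s} ,
\end{eqnarray}
the one-sided integral guaranteeing that $g(\omega)$ is analytic in the upper half of the complex $\omega$ plane.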
Equation (\ref{Dyson}) will be used in all of our upcoming analysis.
The physical content of Eq. (\ref{Dyson}) can be seen by writing
its inverse
as a geometric (or Dyson) series
\begin{eqnarray}
\label{longc}
\chi(t) = \chi_{0}(t) + \int dt' \chi_{0}(t) g(t-t') \chi_{0}(t') \nonumber
\\ + \int dt' \int dt'' \chi_{0}(t) g(t-t') \chi_{0}(t') g(t'-t'') \chi_{0}(t'') + ... .
\end{eqnarray}
The terms in this series correspond to
(a) the direct contribution ($\chi_{0}$),
(b) a lowest order backaction effect wherein a displacement at an earlier time $t'<t$ leads to a deformation of the solid which then acts back on the coordinate at time $t$ (the term $\int dt' \chi_{0}(t) g(t-t') \chi_{0}(t') $),
(c) a higher order process in which a deformation at time $t''$ leads to a backaction from the solid on the coordinate at time $t'>t''$ which in turn then acts back on its surroundings which then act back on the original coordinate at time $t$ (the term $\int dt' \int dt'' \chi_{0}(t) g(t-t') \chi_{0}(t') g(t'-t'') \chi_{0}(t'') $), and so on ad infinitum.
In Fourier space, the convolution integrals become products and Eq. (\ref{longc})
becomes
\begin{eqnarray}
\label{dyson}
\chi(\omega) = \chi_{0}(\omega) + \chi_{0}(\omega) g(\omega) \chi_{0}(\omega) + \chi_{0}(\omega) g (\omega) \chi_{0}(\omega) g(\omega) \chi_{0}(\omega) + ... .
\end{eqnarray}
The geometric series of Eq. (\ref{dyson}) sums to Eq. (\ref{Dyson}).
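Explicitly, summing the geometric series in Fourier space,
\begin{eqnarray}
\chi(\omega) = \chi_{0}(\omega) \sum_{n=0}^{\infty} \big[ g(\omega)\, \chi_{0}(\omega) \big]^{n}
= \frac{\chi_{0}(\omega)}{1 - g(\omega)\, \chi_{0}(\omega)}
= \big[ \chi_{0}^{-1}(\omega) - g(\omega) \big]^{-1} ,
\end{eqnarray}
convergent whenever $|g\, \chi_{0}| < 1$, though the resummed form holds more generally.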
As simple as it is,
Eq. (\ref{Dyson}), combining standard linear response theory (step 1) with the backaction (Dyson) resummation of step 2,
is a very powerful tool that allows us to investigate numerous systems while accounting for arbitrarily high order backaction effects.
As is well known and we will expand on and employ in later sections, the real and imaginary parts of the poles of the susceptibility $\chi(\omega)$ allow us to
probe typical oscillation times and dissipation. This will allow us to connect with TO and other measurements and make precise statements about
the backaction function $g$ which affords information about the dynamics within the solid. It is important to stress that Eq. (\ref{Dyson}) is general.
In the derivation above no assumptions need to be made concerning the precise physical origin of the backaction function $g$.
The adduced function $g$ captures all effects not present in the direct equations of motion for the normal solid. If supersolid
effects were present, they would appear directly in the function $g$.
Thus far, our sole assumption was that the deformations $q$ are small enough to justify the linear (in $q$) order analysis of
Equations (\ref{BAE}, \ref{general_EOM}) for measurements on the rigid solid. We now invoke additional assumptions (which have been partially vindicated in a growing number of experiments since our original proposal
\cite{Nussinov07}). These assumptions relate to the form of $g(\omega)$ and its dependence on temperature. They are motivated by our view of defect quenching and characteristic relaxation times as the origin of the
$^4$He\ anomalies.
$\bullet$ (3) As we will elaborate on in later sections, data for disparate susceptibilities $\chi$ taken at different frequencies $\omega$ or temperatures $T$ {\it collapse onto one curve}.
This indicates that $g$ is a function of only one dimensionless argument ($\omega \tau(T)$) instead of both $\omega$ and $T$ independently.
That is, there is only one dominant temperature dependent relaxation time scale $\tau(T)$ for the backaction of the quenched solid on the original coordinate $q$.
As is well known and alluded to in Section (\ref{quenched}), exponential damping with a single relaxation time $\tau$
leads to a function $g_{single}(\omega)= g_{0} (1- i \omega \tau)/(1+ \omega^{2} \tau^{2})$ which, when plotted with the real and imaginary parts of
$g$ along the horizontal and vertical axes, describes a semi-circle as a function of the dimensionless quantity $(\omega \tau)>0$. However, when plotted
in this way, the $^4$He\ data for the complex susceptibility measured by TO and other probes collapse onto a curve which resembles a skewed semi-circle,
indicating a distribution of
relaxation times about a characteristic time scale $\tau$. For ease of analysis, we will approximate the complex response
functions by Eqs. (\ref{CCD}).
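The semi-circle statement can be verified directly. As a quick check, writing $x = \omega\tau$, the real and imaginary parts of $g_{single}$ satisfy
\begin{eqnarray}
\left( {\rm Re}\, g_{single} - \frac{g_{0}}{2} \right)^{2} + \left( {\rm Im}\, g_{single} \right)^{2} = \left( \frac{g_{0}}{2} \right)^{2} ,
\end{eqnarray}
so as $x$ runs over $(0,\infty)$ the locus is half of a circle of radius $g_{0}/2$ centered at $(g_{0}/2,\, 0)$. The skewed, flattened arcs seen in the collapsed $^4$He\ data are what motivate the broader Cole-Cole type distributions of Eqs.~(\ref{CCD}).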
The curve collapse provides information about how the characteristic transient relaxation times $\tau(T)$ increase
as $T$ is decreased. We further invoke
the Vogel-Fulcher-Tammann form for glasses \cite{Rault00},
\begin{eqnarray}
\label{VFT_eq}
\tau(T) =
\left\{
\begin{array}{ll}
\tau_0 \,e^{\Delta/(T-T_0)} & \mbox{ for $T>T_0$} , \\
\infty & \mbox{ for $T \le T_0$} .
\end{array}
\right.
\end{eqnarray}
Here, $T_{0}$ is the temperature at which the relaxation times would
truly diverge and $\Delta$ is an energy scale. In fitting the data in this way, negative values $T_{0}<0$ were often found.
That is, the typical dynamics as adduced from the collapsed $\tau(T)$ is faster than
simple activated dynamics (one in which $T_{0}=0$ in Eq. (\ref{VFT_eq})).
This is consistent with the intrinsic quantum character of the solid $^4$He\ crystal
with large zero point motion as compared to classical activated dynamics.
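To quantify this, one may consider the apparent activation energy implied by Eq.~(\ref{VFT_eq}) (a standard diagnostic, denoted here $E_{app}$),
\begin{eqnarray}
E_{app}(T) \equiv \frac{d \ln \tau}{d(1/T)} = \Delta\, \frac{T^{2}}{(T-T_{0})^{2}} ,
\end{eqnarray}
which for $T_{0}<0$ obeys $E_{app}(T) < \Delta$ at all temperatures: upon cooling, the relaxation slows down less steeply than a simple Arrhenius law with the same barrier $\Delta$.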
What is the physical content of this general formalism vis-\`a-vis the putative supersolid transition? Given perturbations of a typical frequency $\omega$, the backaction response $g$ from the plastic components acting on $q$
may be either sufficiently rapid or too slow to respond. Just at the tipping point when $\omega \tau(T_{X}) \simeq 1$
different components will be maximally out of synchrony with each other in their ability to respond to
the perturbation, and the dissipation (governed by the imaginary part of the zero of $\chi^{-1}(\omega)$, i.e., of the pole of $\chi(\omega)$) is maximal.
Similarly, there will be a change in the typical periods of the system between high $T$ (i.e., $T> T_{X}$), where the system contains rapidly
equilibrating plastic components, and low
$T$, where these components
are too slow to respond and thus
the system appears to have undergone ``a transition''.
In the sections that follow, we will apply the considerations outlined above, at length, to the particular set of physical quantities that we wish to investigate.
We start, in Section \ref{tos}, with the investigation of the TO (for which the above formalism was first developed) and then move on to explore other arenas: the viscoelastic (Section \ref{shears})
and dielectric response (Section \ref{diels}) functions
where the above formalism leads to experimentally testable predictions.
\section{General susceptibility and response function of torsional oscillators}
\label{tos}
A formulation of the rotational susceptibility of the TO was given in Ref.~\cite{Nussinov07}. It is now often used as a basis
for
the linear response discussion \cite{Dorsey08,Pratt11,Gadagkar2012}.
In Section (\ref{general_formalism}) we outlined the key points of this formulation when written in its general form.
Since its derivation it has been applied by us and others to study other response functions.
The result of Eq. (\ref{central}) below is none other than Eq. (\ref{Dyson}) when the generalized coordinate $q$ corresponds to the TO angle. The bulk of this section will be devoted
to analyzing the experimental consequences of this relation and its related counterpart for the double TO.
It is important to realize that the TO experiment measures the period and dissipation of the entire apparatus by reporting the relationship between force and displacement (angle). Therefore a model is needed to relate the observable period and dissipation to the moment of inertia, damping, and effective stiffness of the medium.
We start with the general equation of motion
for a harmonic TO defined by an angular coordinate
$\theta$ in the presence of an external and internal torque,\cite{Nussinov07}
\begin{eqnarray}
I_{osc} \ddot{\theta}(t) + \gamma_{osc} \dot{\theta}(t) + \alpha_{osc} \theta(t)
= {M}_{ext}(t) + {M}_{int}(t).
\label{de}
\end{eqnarray}
Here, $I_{osc}$ is the moment of inertia of the (empty) TO apparatus,
$\alpha_{osc}$ is the restoring constant of the torsion rod, and $\gamma_{osc}$ is its
damping coefficient. $M_{ext}(t)$ is the externally imposed torque by the drive.
${M}_{int}(t) = \int {g}(t-t') \theta(t') dt'$
is the internal torque exerted by solid $^4$He\ on the oscillator for a system with time
translation invariance. In general, the backaction $g(t-t')$ is temperature, $T$, dependent.
The external torque, $M_{ext}(t) = \dot{L}(t)$, is the time derivative
of the total angular momentum of a rigid body,
$L(t) = \int d^{3}x ~\rho(\vec{x})\, r^{2} ~ \dot{\theta}(\vec{x})$,
where $r$ is the distance to the axis of rotation, $\rho(\vec{x})$
is the mass density and $\dot{\theta}(\vec{x})$ the local angular
velocity about the axis of rotation.\cite{Anderson08}
The experimentally measured quantity is the angular motion of the TO,
not that of the bulk helium enclosed in it. A
priori, we cannot assume that the medium moves as one rigid body.
If the non-solid subsystem ``freezes'' into a glass, the medium will move with greater
uniformity and speed. This leads to an effect similar to that of the nonclassical
rotational moment of inertia, although its physical origin is completely different.
We argue for an alternate physical picture, namely that of softening of the oscillator's stiffness.
The angular coordinate $\theta(t)$ of the oscillator is a convolution of the applied
external torque ${M}_{ext}(t)$ with the response function ${\chi}(t,t')$.
Causality demands ${\chi}(t,t') = \Theta(t-t') {\chi}(t,t')$, with $\Theta$ the Heaviside step function (not to be confused with the angular coordinate $\theta$). Under Fourier
transformation, this leads to the Kramers-Kronig relations.
In any time translationally invariant system, the Fourier
amplitude of the angular response of the TO is
\begin{eqnarray}
\chi_0^{-1}(\omega)\theta(\omega) = M_{ext}(\omega) + M_{int}(\omega) .
\label{ft}
\end{eqnarray}
Using $M_{int}(\omega) = g(\omega)\,\theta(\omega)$ and defining the total angular susceptibility via
$\chi^{-1} = \chi_0^{-1} - g$,
we write the effective inverse susceptibility as
\begin{eqnarray}
\chi^{-1}(\omega) =
\alpha_{osc} - i \gamma_{osc} \omega - I_{osc} \omega^{2} - g(\omega),
\label{central}
\end{eqnarray}
where $g(\omega)$ is the Fourier transform of the backaction
due to the added solid $^4$He.
In what follows, we will treat the
backaction as a small perturbation to the TO chassis.
We will now apply our formalism to the study of the
single TO, which is described by Eq.~(\ref{central}),
and then turn to the double TO.
Very recently, Beamish \cite{Beamish2012} and Maris \cite{Maris2012} employed the same general linear response formalism
to explain some of the TO results in terms of purely mechanical effects due to either the changing
stiffness of the torsion rod or floppiness
of the sample cell flange (lid).
\subsection{A model for the single torsional oscillator}
In what follows, we analyze the experimental consequences of Eq. (\ref{central}).
\subsubsection{Rotational susceptibility - period and dissipation}
We can now calculate specific consequences of the phenomenological model introduced above.
The effective oscillator parameters are defined as
the sum of parameters describing the apparatus, $\chi_0^{-1}$,
and the added solid $^4$He\ given by
\begin{equation}
g(\omega)=i\gamma_{He}\omega + I_{He}\omega^2 + g_0 G(\omega) .
\label{geq}
\end{equation}
It is convenient to introduce a net moment of inertia $I = I_{osc} + I_{He}$
and net dissipation $\gamma=\gamma_{osc}+\gamma_{He}$.
The transient dynamic response function $G(\omega)$ can be approximated by a distribution of
overdamped oscillators with relaxation time $\tau$ as discussed in Section (\ref{quenched}) [see Eqs. (\ref{CCD}), in particular].
The resonant frequency of the TO with a backaction
is given by the root of
\begin{eqnarray}
\chi^{-1}(\omega) =
\alpha - i \gamma \omega - I \omega^{2}- g_{0} G(\omega) \equiv 0.
\label{central_glass}
\end{eqnarray}
As discussed in Section \ref{general_formalism}, we anticipate that when the relaxation time
is similar to the period of the
underdamped oscillator, the dissipation will be maximal, sometimes referred to as ``$\omega\tau=1$'' physics.
In linear response theory, the homogeneous Eq.~(\ref{central_glass}) is scale invariant.
Thus, we normalize all oscillator quantities by the effective moment of inertia $I$, i.e.,
$\bar{\alpha} = \alpha/I$, $\bar{\gamma} = \gamma/I$, and $\bar{g}_0 = g_0/I$.
As can be seen from Eq.~(\ref{central_glass}), for an ideal dissipationless oscillator ($\bar\gamma=0$),
the resonant frequency
$\omega_{0}= \sqrt{\bar\alpha}$
is the pole of $\chi(\omega)$ in the limit $1/\tau \to 0$. If we expand $\chi^{-1}$ about this root,
$\omega= \omega_{0} + \delta\omega$,
we find to leading order in $\delta\omega$
\begin{eqnarray}
\delta\omega \approx
-\frac{i \bar\gamma \omega_{0} +
\bar{g}_0 G(\omega_0)
}{i \bar\gamma + 2\omega_{0}} .
\end{eqnarray}
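To make the connection with observables explicit, note that the resonance sits at the (complex) pole $\omega_{0} + \delta\omega$: its real part fixes the resonant frequency and its imaginary part the dissipation, with $Q^{-1} \approx -2\,{\rm Im}\,\delta\omega/\omega_{0}$ for a weakly damped mode. As a sketch, expanding the expression above to leading order in $\bar{g}_0$ and through first order in $\bar\gamma/\omega_{0}$,
\begin{eqnarray}
\delta\omega \approx - \frac{i \bar\gamma}{2} - \frac{\bar{g}_0\, G(\omega_0)}{2\omega_{0}}
+ \frac{i \bar\gamma\, \bar{g}_0\, G(\omega_0)}{4 \omega_{0}^{2}} ,
\end{eqnarray}
whose imaginary part yields the temperature independent background $Q^{-1}_{0} \approx \bar\gamma/\omega_{0}$ plus the glassy contribution, and whose real part yields the frequency shift; these reproduce Eqs.~(\ref{Q-1p}) and (\ref{P-1p}) below.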
It follows that the shift in dissipation with respect to high temperatures is
\begin{eqnarray}
\Delta Q^{-1} \equiv Q^{-1} - Q^{-1}_0
\approx \frac{\bar{g}_0}{\omega_{0}^2} {\rm Im\ } G(\omega_0) ,
\label{Q-1p}
\end{eqnarray}
whereas the shift in resonant frequency is
\begin{eqnarray}
\Delta\omega\equiv 2\pi (f_0-f)
&\approx& \frac{\bar{g}_0}{4 \omega_{0}^2}
\Big(
2 \omega_{0} \, {\rm Re\ }G(\omega_0) + \bar\gamma \, {\rm Im\ }G(\omega_0)
\Big),
\label{P-1p}
\end{eqnarray}
which increases monotonically when $T$ is lowered.
Combining Eqns.~(\ref{Q-1p}) and (\ref{P-1p}) for the strongly underdamped oscillator,
we arrive at
\begin{equation}\label{Q-P-ratio}
\frac{\Delta Q^{-1}}{\Delta \omega} =
\frac{ 4 {\rm Im\ }G(\omega_0) }{ 2\omega_0{\rm Re\ }G(\omega_0) + \bar{\gamma} {\rm Im\ }G(\omega_0) } \approx
\frac{2}{\omega_{0}} \frac{ {\rm Im\ }G(\omega_0) }{ {\rm Re\ }G(\omega_0) }.
\end{equation}
It is this general relationship for the response function of the damped oscillator that was successfully applied in the
TO analysis by Pratt et al.
\cite{Pratt11,Gadagkar2012}
to demonstrate the interplay of rotational, relaxation,
and shear dynamics in solid $^4$He.
For a Debye relaxor the ratio of Eq.~(\ref{Q-P-ratio}) reduces to $2\tau$ and provides a direct means to measure the
relaxation time.
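As a quick consistency check, for the Debye form $G(\omega) = 1/(1 - i \omega \tau)$ one has ${\rm Re}\, G = 1/(1+\omega^{2}\tau^{2})$ and ${\rm Im}\, G = \omega\tau/(1+\omega^{2}\tau^{2})$, so that at the resonance Eq.~(\ref{Q-P-ratio}) indeed gives
\begin{eqnarray}
\frac{\Delta Q^{-1}}{\Delta \omega} \approx \frac{2}{\omega_{0}}\, \frac{{\rm Im}\, G(\omega_{0})}{{\rm Re}\, G(\omega_{0})} = \frac{2}{\omega_{0}}\, \omega_{0}\tau = 2\tau .
\end{eqnarray}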
Similar results for the ratio were obtained for other phenomenological models.\cite{Huse07, Dorsey08}
For example, Huse and Khandker \cite{Huse07} assumed a simple phenomenological
two-fluid model, where the supersolid is dissipatively
coupled to a normal solid, resulting in a ratio of
$\omega_0\, {\Delta Q^{-1}}/{\Delta \omega} \approx 1$, while Yoo and Dorsey\cite{Dorsey08}
developed a viscoelastic model, and Korshunov\cite{Korshunov09} derived a two-level system glass model
that captures the results of the general model originally proposed by Nussinov et al.\cite{Nussinov07}
To make further progress we assume that $\tau$ follows the phenomenological Vogel-Fulcher-Tammann (VFT) equation
of Eq. (\ref{VFT_eq}).
\begin{figure}
\begin{center}
\includegraphics[width=0.60\linewidth,angle=0]{FIG06.eps}
\end{center}
\caption{The resonant frequency (black, left axis) and dissipation (red, right axis) vs.\ temperature.
The experimental data
from Hunt et al.\cite{Hunt09} are well described by a Cole-Cole (CC) distribution function.
\cite{Graf10,Graf11}
}\label{fig_Hunt}
\end{figure}
Figure~\ref{fig_Hunt} provides a fit to the measured data by Hunt et al.\cite{Hunt09} assuming a CC distribution of
relaxation times. As shown, an excellent
fit is obtained. For comparison, we also tried a Davidson-Cole (DC) distribution of relaxation times, but found only fair agreement.
It is worth mentioning that unlike in the Debye relaxation analysis by Hunt and coworkers, i.e., a single overdamped mode,
we do not require a supersolid component to {\em simultaneously} account for frequency shift and dissipation peak
\cite{Graf10,Graf11}.
Our model leads to a universal scaling of period change vs.\ dissipation in a Cole-Cole or Davidson-Cole plot as seen in
Refs.~\cite{Pratt11,Gadagkar2012}.
Indeed similar viscoelastic behavior may have already been observed in solid hydrogen \cite{Clark2006}.
\subsection{A model for the double oscillator}
\begin{figure}
\bigskip
\begin{center}
\includegraphics[ width=0.25\linewidth, keepaspectratio, angle=0 ]{FIG07.eps}
\end{center}
\caption{Cartoon of the double torsional oscillator modeled in Eqns.~(\ref{eqns}).
The upper moment of inertia ($I_1$) is the dummy bob, while the lower moment of inertia ($I_2$)
is the cylindrical pressure cell that can be loaded with $^4$He. The stiffness of the torsion rods is given by
$k_1$ and $k_2$ with $k_1 \approx k_2$ by design.}
\label{Fig2}
\end{figure}
The double TO results of Kojima's Rutgers group have proved difficult to explain when
simply extrapolating from the single TO model
\cite{Dorsey08}.
Here we model the coupled double TO, sketched in Fig.~\ref{Fig2},
by the following system of equations:
\begin{eqnarray}\label{eqns}
\left( -I_1 \omega^2 - i \gamma_1 \omega + k_1 + k_2 \right) \Theta_1(\omega) - k_2 \Theta_2(\omega) &=& F(\omega) ,
\nonumber\\
\left( -I_2 \omega^2 - i \gamma_2 \omega + k_2 - g(\omega) \right) \Theta_2(\omega) - k_2 \Theta_1(\omega) &=& 0 ,
\end{eqnarray}
where $\Theta_i$ are torsion angles, $\gamma_i$ are damping coefficients, $k_i$ are torsion rod
stiffnesses, $g(\omega)$ is the glass backaction term, and $F(\omega)$ is the applied external torque.
The subindex ``$1$'' refers to the upper or dummy bob in the experiment,
while ``$2$'' refers to the lower oscillator with the pressure cell that can be loaded with solid $^4$He.
For a strongly underdamped oscillator and a small backaction, it suffices to solve first for the
bare resonant frequencies and later include perturbatively damping and backaction terms, for details see
Ref.~\cite{Graf10}.
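As a sketch of the first step, setting $\gamma_{1} = \gamma_{2} = g = 0$ in Eqs.~(\ref{eqns}) and demanding a nontrivial solution at zero drive ($F=0$) yields the characteristic equation
\begin{eqnarray}
\left( k_1 + k_2 - I_1\, \omega^{2} \right) \left( k_2 - I_2\, \omega^{2} \right) - k_2^{2} = 0 ,
\end{eqnarray}
a quadratic in $\omega^{2}$ whose two roots are the bare in-phase and out-of-phase frequencies $\omega_{1,2}^{0}$; damping and backaction are then reinstated perturbatively around these roots, in complete analogy with the single-TO expansion above.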
More recently this approach has been extended to a triple TO \cite{Mi2011}.
\begin{figure}
\begin{center}
\includegraphics[ width=0.8\linewidth, keepaspectratio, angle=0 ]{FIG08.eps}
\end{center}
\caption{Frequency and dissipation in double TO by Aoki et al.\cite{Aoki07}
(symbols) compared with glass theory (lines).
Panel (a): Temperature dependence of resonant frequency shifts $\Delta f_1$ (black, left axis) and $\Delta f_2$ (blue, right axis).
Panel (b): Temperature dependence of dissipation $Q_1^{-1}$ (black, left axis) and $Q_2^{-1}$ (blue, right axis).
The experimental data were corrected for the significant temperature-dependent background of the empty cell.
}
\label{fig_DO}
\end{figure}
Figure~\ref{fig_DO} shows good agreement between our phenomenological model of the coupled double TO and experiment.
The TO parameters $I_i$ and $k_i$ can be determined from the bare resonant frequencies $f_i^0 = \omega_i^0/2\pi$.
In addition, the damping coefficients $\gamma_i$ can be extracted from the high-temperature dissipation $Q_{i, \infty}^{-1}$.
Finally, the backaction
${g}(\omega)$ accounts through $\tau(T)$
for the temperature dependence of
$\Delta f_i$ and $Q_i^{-1}$.
Our phenomenological theory of the double oscillator explained for the first time both
frequency shift and dissipation peak for in-phase and out-of-phase
torsional response \cite{Aoki07}.
Data for in-phase frequency $f_1=496$ Hz and out-of-phase $f_2=1173$ Hz
are shown in Fig.~\ref{fig_DO}, plotted against the
temperature.
The obtained values for moment of inertia and rod stiffness agree well with
other estimates \cite{Aoki08}.
Remarkably, an anomalous damping coefficient $\gamma_1 \sim -\gamma_2$
is required to explain the behavior of increased
dissipation with increased frequency. Such anomalous damping
is already required to describe the unloaded pressure cell. Thus it is {\em unrelated} to the
properties of solid $^4$He\ and is instead an intrinsic property of the composite TO.
After loading the cell with solid $^4$He\ the dissipation ratio becomes
$Q_{2}^{-1}/Q_{1}^{-1} = 2.5$ at 300 mK with frequency ratio $f_2/f_1 = 2.37$.
Our fit results in a negative parameter $T_0=-32.73$ mK. This effective negative value of $T_{0}$ is in line with the
earlier comment in Section \ref{general_formalism},
concerning the quantum character of solid $^4$He. This value may be indicative of strong zero point quantum fluctuations that thwart a
glass transition.
Finally, the comparison in Fig.~\ref{fig_DO} shows that an explicit
frequency-dependent backaction must be used with
$g_0(\omega) = g_0 \left({\omega}/{\omega_1^0}\right)^p$
and $p = 1.77$ to account for the experimental fact of $\Delta f_1/f_1 \approx \Delta f_2/f_2$,
i.e., the relative frequency shift is unaffected by the changing resonant frequency.
In contrast, various theories describing solid $^4$He\ in torsional oscillators as viscoelastic material \cite{Dorsey08}
or two-level systems moving through a solid matrix \cite{Andreev07, Andreev09, Korshunov09}
predict an exponent of $p=4$ for the backaction term.
\section{Shear and stiffness of a viscoelastic medium}
\label{shears}
Another aspect of the dynamic response of $^4$He crystals was revealed through a series of elasticity studies.\cite{Paalanen81,Goodkind02,Burns93,Beamish07,Beamish09,Beamish10}
In particular, Beamish and coworkers demonstrated
the qualitative similarity between shear modulus and the TO measurements.
In the shear modulus experiment solid helium is grown in between and around two closely spaced sound transducers. When one of the transducers applies an external strain, the other transducer measures the induced stress from which the shear modulus of the sample is deduced.\cite{Beamish07}
In this way, the experiment provides a direct measurement of the elastic response to the applied force within a broad and tunable frequency range. The frequency dependence is especially crucial in determining the nature of the relaxation processes and complements current TO experiments with their limited frequency range.
Similar to the TO analysis in the previous section, we analyzed the shear modulus within the general linear response function framework, where the amplitude of the shear modulus increases (stiffens) upon lowering $T$, because of the freezing out of low-energy excitations. This change is accompanied by a prominent dissipation peak, indicative of {\em anelastic} relaxation processes.
We calculated the complex shear modulus $\mu(\omega; T)$ of a viscoelastic material and predicted: (a) the maximum of the shear modulus change and the height of the dissipation peak are independent of frequency and (b) the inverse crossover temperature $1/T_X$ vs.\ the applied frequency $\omega$ obeys the form $\omega \tau(T_X) =1$ characteristic of dynamic crossover.
\subsection{Model of dynamic shear modulus}
As in our analysis of the TO, we start with the same general linear response function formulation outlined in Section \ref{general_formalism}.
Our final result of Eq. (\ref{mu}) will, once again, reflect the general relation of Eq. (\ref{Dyson}).
Here we replace the angular coordinate of the TO with a displacement coordinate and the restoring force with a stress tensor \cite{Su10b}.
For ease, our notation below will differ slightly from that in Section \ref{general_formalism}.
The equation of motion for displacement $u_i$ in the $i$-th direction of a volume element in the presence of an external driving force is
\begin{eqnarray} \label{EOM}
-\rho \, \omega^2 u_i + \partial_j \,
\sigma_{ij}^{\rm He}
= f_i^{\rm EXT}(\omega) + f_i^{\rm BA}(\omega) ,
\end{eqnarray}
where $\rho$ is the mass density, $f_i^{\rm EXT }$ and $f_i^{\rm BA}$ are the external force density and the backaction force density, and
$\sigma_{ij}^{\rm He}$ is the elastic stress tensor of solid helium.
In general, $\sigma_{ij}^{\rm He} = \lambda_{ijkl} \, u_{kl}$, with the elastic modulus tensor $ \lambda_{ijkl}$ \cite{LL_elastic}.
In the case of a homogeneous solid with shear wave propagation along the $z$ axis and wave polarization in the $x$-$y$ plane, the backaction takes on the form
\begin{eqnarray} \label{fBA}
f_i^{\rm BA}(\omega) = {\overline{G}}(\omega;T) \,\partial_z^2 \, u_i(\omega) ,
\end{eqnarray}
where
$\overline G$
describes the strength of the backaction on the solid (viscoelastic response) and $i=x, y$. Although $f_i^{\rm BA}(\omega)$ is much smaller than the purely elastic restoring force $\partial_j \, \sigma_{ij}^{\rm He}$, it is this term that is responsible for the stiffening of shear modulus with decreasing temperature.
Polycrystalline and amorphous materials are nearly isotropic, hence the elastic modulus tensor becomes
$\lambda_{ijkl} = \lambda_0 \delta_{ij} \delta_{kl} +
\tilde{\mu}_0 (\delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk})$.
Note the stress tensor in Eq.~(\ref{EOM}) is finite only for orientations $j=z$ and either $k$ or $l$
equal to $z$. With $k, l$ being interchangeable, the relevant element will
be $\lambda_{iziz}$, which gives the purely elastic shear modulus
$\tilde{\mu}_0$.
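As a one-line check of the index bookkeeping: for polarization $i=x$ and propagation along $z$, $\lambda_{xzxz} = \lambda_0\, \delta_{xz}\delta_{xz} + \tilde{\mu}_0\, (\delta_{xx}\delta_{zz} + \delta_{xz}\delta_{zx}) = \tilde{\mu}_0$, so only the shear modulus survives in Eq.~(\ref{EOM}) for this geometry.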
Finally the fully dressed shear modulus (dressed by the backaction) relates the displacement to an external force, or
$[-\rho \, \omega^2 + \mu \, \partial_z^2 ] u_i(\omega)= f_i^{\rm EXT}(\omega)$.
Comparing this expression with Eq.~(\ref{EOM}), we obtain for the dynamic shear modulus in a viscoelastic material the general response function
\begin{eqnarray} \label{mu1}
\mu(\omega; T) =
\tilde{\mu}_0(T) - \overline{G} (\omega;T).
\end{eqnarray}
Next we employ for $\overline{G}(\omega; T)$ the Cole-Cole distribution function [specifically, by reference to
Eq.~(\ref{CCD}), we set $\overline{G}(\omega;T) = g_{0}\, \tilde{\mu}_{0}\, G(\omega, T)$]
to obtain the specific form
\begin{eqnarray} \label{mu}
\mu(\omega; T) &=& \tilde{\mu}_0 \left[
1 -\frac{g_{0}}{1-(i \omega \tau)^{\alpha}}
\right] ,
\end{eqnarray}
with the sample dependent parameter
$g_0$. The experimentally measurable quantities are the amplitude of the shear modulus, $|\mu|$, and the phase delay between
the input and read-out signal, $\phi \equiv {\rm arg} \, (\mu)$;
$\phi$ measures the dissipation of the system, which is related to the inverse of the quality factor $Q^{-1} \equiv \tan \phi$.
Several interesting results follow immediately from the general response function in Eq.~(\ref{mu}):
(1) The change in shear modulus $\Delta \mu$ between zero and infinite relaxation time is
$\Delta \mu/\tilde{\mu}_0 = g_{0}$.
It measures the strength of the backaction as well as the concentration of defects.
(2) At fixed $T$, the shear modulus amplitude $|\mu(\omega; T)|$ decreases with increasing $\omega$.
(3) The parameter $\omega \tau$ is the only scaling parameter.
(4) The peak height $\Delta \phi$ of the phase angle is proportional to $g_{0}$. When $\omega \tau \sim1$, then
$\Delta \phi \approx g_{0} \, {\cot(\alpha \pi/4)/(2-g_{0}) } $ for $g_{0} \ll 1$.
In the limit $1 < \alpha \le 2 $, this simplifies further to
\begin{eqnarray}
\Delta \phi \approx (1-\alpha/2)
(\Delta \mu/\tilde{\mu}_0) \ll 1 ,
\end{eqnarray}
where $\Delta \phi$ is in units of radians. Quite remarkably, the peak height $\Delta \phi$ depends only on the phenomenological parameters $\alpha$ and
$g_{0}\propto \Delta\mu$. At fixed temperature the maximum change of both $\Delta \mu$ and $\Delta \phi$ is {\it independent of frequency}.
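Result (4) follows from evaluating Eq.~(\ref{mu}) at $\omega\tau = 1$. As a brief sketch, using the identity $1-(i)^{\alpha} = -2 i\, \sin(\alpha\pi/4)\, e^{i\alpha\pi/4}$ one finds
\begin{eqnarray}
\frac{\mu}{\tilde{\mu}_0}\bigg|_{\omega\tau=1} = 1 - \frac{g_{0}}{2} - \frac{i\, g_{0}}{2}\, \cot\!\left(\frac{\alpha\pi}{4}\right) ,
\qquad
\tan\phi = -\, \frac{g_{0}\, \cot(\alpha\pi/4)}{2 - g_{0}} ,
\end{eqnarray}
whose magnitude reproduces the quoted peak height; expanding $\cot(\alpha\pi/4)$ near $\alpha = 2$ then gives the frequency-independent estimate $\Delta\phi \propto (1-\alpha/2)\, g_{0}$, up to a numerical factor of order unity.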
\subsection{Results}
Let us compare our model calculations with the experimental shear modulus measurements by Beamish and coworkers \cite{Beamish10} for a transducer driven at 2000 Hz, 200 Hz, 20 Hz, 2 Hz, and 0.5 Hz \cite{Comment1}.
Specifically, for the $T$-dependent $\tau(T)$ in Eq.~(\ref{mu})
we consider Vogel-Fulcher-Tammann (VFT) and power-law (PL) relaxation processes.
In our model parameter search, we do not constrain $T_0$ (the temperature at which $\tau$ diverges) to be positive.
Fair agreement between calculations with a single set of parameters and experiments is obtained with $T_0 = -69.3$ mK,
see Fig.~\ref{fig:SM_VFTnew}. We refer to these calculations as ``VFT$_<$''.
Similar to the TO results, a negative $T_0$ means that no real phase transition occurs in solid $^4$He.
The agreement between theory and experiment at highest (2000 Hz) and lowest (0.5 Hz) frequencies is of poorer quality. This discrepancy may be related to very different backgrounds at these frequencies and may not be intrinsic to $^4$He. Another reason may be the presence of additional dissipation mechanisms, not accounted for.
Furthermore, our calculations confirm that the change of amplitude and peak height of phase angle are nearly frequency independent between 2 Hz and 200 Hz.
By defining the crossover temperature $T_X$ as the point where the phase angle peaks, we find as predicted that $T_{X}$ decreases with decreasing $\omega$.
\begin{figure}
\begin{center}
\includegraphics[width=.75\linewidth,angle=0,keepaspectratio]{FIG09.eps}
\end{center}
\vskip-.5cm
\caption{Experimental data and theoretical calculations of the shear modulus vs.\ temperature assuming a VFT relaxation time. The red and blue squares are the experimental data for the amplitude and dissipation of shear modulus. The black-solid lines show the theoretical calculations. We used the set of parameters $\alpha=1.31$, $g_{0}=1.44\times 10^{-1}, \tilde\mu_0=0.47$ pA Hz$^{-1}$, $\tau_0=50.0$ ns, $\Delta=1.92$K, and $T_0=-69.3$ mK. Notice that a negative $T_0$ means that there is no true phase transition occurring at finite temperatures, probably because of strong quantum fluctuations of helium atoms. The shear modulus amplitude and the phase angle were shifted by 0.1 pA Hz$^{-1}$ and 3 degrees with respect to the 2 Hz data.
}\label{fig:SM_VFTnew}
\end{figure}
Finally, when we set $T_0=0$ K the VFT expression (``VFT$_0$'') reduces to an activated Arrhenius form. While it describes reasonably well the position $T_X$,
it gives a much narrower linewidth for $\Delta\phi$ than does VFT$_<$, which is not in accord with the data and thus not shown.
Notice that the VFT relaxation is not the only possible relaxation process that can describe the data; power-law or other types of relaxation can give a similar level of agreement with the experiment \cite{Beamish10}.
Iwasa proposed a relaxation process \cite{Iwasa10} similar to our phenomenological one,
which is
based on the theory of pinned vibrating dislocation lines by Granato and L\"ucke \cite{GranatoLuecke56}.
Clearly further experiments at lower frequencies and lower temperatures are required to determine the exact type of relaxation processes in solid $^4$He.
\begin{figure}[t]
\begin{center}
\parbox[t]{0.4\linewidth}{
\caption{The Cole-Cole plots for experimental data and for the VFT calculation. For a given form of $\tau$, all curves at different frequencies collapse onto one single master curve, reflecting that $\omega \tau$ is the only scaling parameter. The Cole-Cole plots show reflection symmetry about Re[$|\mu-\tilde\mu_0|/\Delta \mu$]=0.5, which is a consequence of the Cole-Cole distribution function.
}\label{fig:CC}
} \hfill
\begin{minipage}{0.58\linewidth}
\includegraphics[clip, width=1.0\linewidth,angle=0,keepaspectratio]{FIG10.eps}
\end{minipage}
\end{center}
\end{figure}
Figure~\ref{fig:CC} shows the Cole-Cole plot for experiments and calculations.
The experimental curves for different frequencies collapse roughly onto one curve, confirming our theoretical assumption
that $\omega \tau(T)$ is a universal scaling parameter. This behavior was also seen in TO experiments \cite{Hunt09}.
In addition, the data are symmetric about
Re$[|\mu-\tilde\mu_0|/\Delta \mu]=0.5$
as expected for the Cole-Cole distribution.
A more detailed data analysis may resolve the remaining discrepancy between theory and experiment. The discrepancy may be due either to the presence of additional relaxation processes at temperatures above $T_X$, i.e., a more complicated form for $\tau(T)$, or to a modified functional form of the defect distribution function.
\begin{figure}
\begin{center}
\parbox[t]{0.4\linewidth}{
\caption{Prediction for the inverse crossover temperature vs.\ applied frequency. The green squares correspond to $T_X$ in Ref.~\cite{Beamish10}.
For the power-law process with phase transition occurring at 40 mK, we used
$\tau=\tau_0 (|T_g|/(T-T_0))^p$ for $T>T_0$ and $\tau = \infty$ for $T \le T_0$.
}\label{fig:SM_omegavsT}
} \hfill
\begin{minipage}{0.58\linewidth}
\includegraphics[clip, width=1.0\linewidth,angle=0,keepaspectratio]{FIG11.eps}
\end{minipage}
\end{center}
\end{figure}
The pertinent question about a true phase transition at zero frequency vs.\ crossover dynamics can be addressed by investigating the asymptotic limit of $\omega \tau(T) = 1$. From this expression we estimate $T_{X}$ as a function of the applied frequency $f=\omega/2\pi$.
Figure~\ref{fig:SM_omegavsT} shows $1/T_X$ vs.\ $f$. The VFT$_<$ and VFT$_0$ calculations give significantly better agreement than the PL calculations with phase transitions occurring at 0 K (PL$_0$) and 40 mK (PL$_{40}$). For positive $T_0$ (see PL$_{40}$), we find a true freeze-out transition, which would indicate arrested dynamics
for $f\to 0$ Hz. For both VFT and PL relaxation times our calculations demonstrate that in the low-frequency limit the existence of a phase transition shows clear signatures of $T_X$ converging toward the ideal glass temperature $T_0$. Therefore the absence of arrested behavior may serve as experimental evidence against a true phase transition.
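For the VFT form this crossover line can be written in closed form. As a sketch, solving $\omega\, \tau(T_X) = 1$ with Eq.~(\ref{VFT_eq}) gives
\begin{eqnarray}
T_X(\omega) = T_0 + \frac{\Delta}{\ln\!\big[ 1/(\omega\tau_0) \big]} ,
\end{eqnarray}
valid for $\omega\tau_0 < 1$. As $\omega \to 0$, $T_X$ approaches the ideal glass temperature $T_0$; a positive $T_0$ therefore signals a true freeze-out, whereas a negative $T_0$ implies that $T_X$ drops through zero at a finite frequency, i.e., no arrested phase survives at any $T>0$.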
\subsection{Viscoelastic model}
The viscoelastic model successfully describes
composite materials with anelastic properties. In fact, Yoo and Dorsey \cite{Dorsey08} applied the viscoelastic model to the TO experiments.
More generally, a distribution of viscous components, embedded in an otherwise elastic solid, may be treated through a generalized Maxwell model
\cite{Su10b,Su11}.
Conceptually one may subdivide the material into many elements and solve the coupled materials equations for stress and strain.
Here we use constitutive materials equations to show that our results for a glass are equivalent to the generalized Maxwell model with parallel connections of an elastic component with an infinite set of Debye relaxors, see Fig.~\ref{fig:Maxwell}.
\begin{figure}
\begin{center}
\parbox[t]{0.4\linewidth}{
\caption{Lump circuit of the generalized Maxwell model. The elastic spring $\mu_0$ represents the purely elastic shear modulus at high temperature. Each Debye relaxor consists of an elastic spring $\mu_{\rm RS}$ in series with a dissipative dash-pot $\eta^{(n)}$.
}\label{fig:Maxwell}
} \hfill
\begin{minipage}{0.58\linewidth}
\includegraphics[clip, width=1.0\linewidth,angle=0,keepaspectratio]{FIG12.eps}
\end{minipage}
\end{center}
\end{figure}
The anomalous stiffening of the shear modulus can be
described within the viscoelastic model, though other defect mechanisms like dislocation glide are possible too \cite{Friedel,Caizhi2012}.
The equivalent lump circuit model is sketched in Fig.~\ref{fig:Maxwell}, where the basic dissipative element is the Debye relaxor. It consists of a serial connection of a rigid solid (RS), characterized by an elastic shear modulus $\mu_{\rm RS}$, and a Newtonian liquid (NL) or dash-pot, characterized by a viscosity $\eta$. The RS component describes the ideal elastic solid helium of this volume element, while the NL represents the anelastic component, which gives rise to viscous damping. The two parts are connected in series, so that both share the same magnitude of stress, while the net strain is additive. The strain rate equation for both constituents of the Debye relaxor is
\begin{eqnarray}
\dot{\epsilon}= {\dot{\sigma}}/{\mu_{\rm RS}}
+ {\sigma}/{\eta} ,
\end{eqnarray}
where $\epsilon$ is the net strain of the Debye relaxor (DR) and $\sigma$ is the magnitude of stress shared by the components RS and NL. In order to obtain the above equation, we used the constitutive materials equations for strain and stress:
$\epsilon_{\rm RS}=\sigma_{\rm RS}/\mu_{\rm RS}$ and $\dot{\epsilon}_{\rm NL}=\sigma_{\rm NL}/\eta$. After performing the Fourier transformation we obtain
for a single Debye relaxor (DR) the shear modulus $\mu_{\rm DR}=\sigma/\epsilon$,
\begin{eqnarray}
\mu_{DR}(\omega) =\frac{\mu_{\rm RS}}{1+\frac{i}{\omega \tau_{\rm DR}}}
= \mu_{\rm RS} \left[ 1 -\frac{1}{ 1- i \omega \tau_{\rm DR} } \right] ,
\end{eqnarray}
with relaxation time $\tau_{\rm DR} \equiv \eta/\mu_{\rm RS}$. For a viscoelastic material with a single relaxation time, the solid behaves like a parallel connection between the elastic part and the Debye relaxor
\begin{eqnarray}
\mu(\omega) &\equiv& \mu_0 + \mu_{\rm DR}(\omega) = \tilde\mu_{0} \left[
1 -\frac{g_{0}}{ 1- i \omega \tau_{\rm DR} }
\right] ,
\end{eqnarray}
with $g_{0} = \mu_{\rm RS}/\tilde\mu_0$,
where $\tilde\mu_0=\mu_0+\mu_{\rm RS}$ is the dressed elastic shear modulus.
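For completeness, here is the intermediate Fourier step behind $\mu_{\rm DR}$, assuming the $e^{-i\omega t}$ convention used throughout: the strain rate equation transforms to
\begin{eqnarray}
- i \omega\, \epsilon(\omega) = \left( \frac{- i \omega}{\mu_{\rm RS}} + \frac{1}{\eta} \right) \sigma(\omega) ,
\end{eqnarray}
so that $\mu_{\rm DR}(\omega) = \sigma/\epsilon = \mu_{\rm RS}\big/\big( 1 + i/(\omega\tau_{\rm DR}) \big)$, with $\tau_{\rm DR} = \eta/\mu_{\rm RS}$ emerging naturally as the ratio of viscosity to stiffness.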
To consider the general case of many components, we introduce Debye relaxors with different relaxation times connected in parallel as shown in Fig.~\ref{fig:Maxwell}.
The total anelastic contribution from $n$ such constituents is given by a weighted sum. The corresponding continuous version of this expression with a distribution $P(s)$ of relaxation times is
\begin{eqnarray} \label{GMG}
\mu_{ae}(\omega) = \mu_{\rm RS} \int_0^{\infty}
ds \ P(s)
\,\left[1- \frac{1}{1- i \omega \tau s} \right]
=\mu_{\rm RS} - \frac{\mu_{\rm RS}}{ 1-(i \omega \, \tau) ^{\alpha} } .
\end{eqnarray}
Here the Cole-Cole distribution,
\begin{eqnarray}
P(s)=\frac{s^{-(1-\alpha)} \sin \alpha \pi }
{1+s^{2 \alpha}+2 s^{\alpha}\cos \alpha \pi} ,
\end{eqnarray}
was used.
Similar to our response functions elsewhere, the net shear modulus of the composite system is given by the sum of two terms: the purely elastic component and the anelastic contribution
\begin{eqnarray} \label{mufinal}
\mu(\omega) &=& \mu_0 + \mu_{ae}(\omega) = \tilde\mu_{0} \left[
1 -\frac{g_{0}}{ 1-(i \omega \tau)^{\alpha} }
\right] .
\end{eqnarray}
Indeed, this expression is identical to the one obtained previously using the general response function formalism with a backaction, Eq.~(\ref{Dyson}).
This is no coincidence, since the backaction term accounts for damping and thus describes the anelastic response of defects to the external stress.
\section{Dielectric properties of a viscoelastic medium}
\label{diels}
The measurements of $\epsilon(\omega, T)$ by Yin et al.\ \cite{Yin11} showed the anomalous increase of the dielectric function of solid $^4$He\
at low temperatures. A similar test experiment in liquid helium showed no such effect. We propose that these results may be explained by an electro-elastic coupling of a quenched solid with frozen-in internal stress.
Such behavior cannot be described by the standard Clausius-Mossotti equation via a change in mass density or polarizability (due to, e.g., dipole-induced dipole interactions).
Neither the measured change of the mean mass density $\delta \rho/\rho \sim 10^{-6}$,
nor the predicted correction in polarizability, which actually leads to a decrease of $\epsilon(\omega, T)$ at low temperatures \cite{Kirkwood36, Chan77},
can account for the reported anomalous change of the dielectric function of order
$\delta \epsilon /\epsilon \sim 10^{-5}$, while a model with frozen-in stress and electro-elastic coupling can explain the data.
\subsection{Model for electro-elastic properties}
The relation of Eq. (\ref{el}) that we derive below constitutes yet another realization of our general relation of Eq. (\ref{Dyson}). We now turn to the specifics of the electro-elastic coupling
describing the interaction between the electromagnetic and the strain fields. The coupling may be obtained by expanding the dipole moment, $\bf p (r_{\it a})$, of a nonpolar atom around its equilibrium value:
\begin{eqnarray}\label{TEdipole}
{\bf p}({\bf r}_a) \approx {\bf p}({\bf R}_a) + ({\bf u}_a \cdot \nabla)\, {\bf p}\, \big|_{{\bf R}_a} ,
\end{eqnarray}
where $\bf r_{\it a}=R_{\it a}+u_{\it a}$ is the position of atom $a$, $\bf R_{\it a}$ is its equilibrium position, and $\bf u_{\it a}$ is its displacement. The polarization is obtained by averaging ${\bf p}$ over a macroscopic volume element $v$,
${\bf P (r)}=({1}/{v}) \sum_{\it a} {\bf p (r_a)}$. In the continuum limit (when the macroscopic length scale is far larger than the atomic length)
$\sum_a {\bf p (r_{\it a})} \approx (1/v)\int_v d {\bf r}' \ {\bf p (r')}$ .
In the presence of a local strain field, to linear order in the displacement, Eq.~(\ref{TEdipole}) reads
\begin{eqnarray}
{\bf P (r)} \approx (1/v^2)\int_v d \,{\bf R \ [ \,p(R)
+ (u \cdot \nabla) \ p (R)\,]}.
\end{eqnarray}
Integration of the first term yields the macroscopic polarization for zero internal strain (a solid in equilibrium), ${\bf P}_0$. It is related to the macroscopic electric field ${\bf E}$ by ${\bf P}_0 \equiv \chi_0 {\bf E}$ where
$\chi_0 = (\epsilon_0-1)/4\pi$ is the zero-strain susceptibility and $\epsilon_0$ is the permittivity. The second term in the integration describes the polarization change $\bf \delta P$ due to atomic displacements.
Neglecting surface contributions the second term modifies the polarization ${\bf P}={\bf P}_0 + \delta {\bf P}$ by
\begin{eqnarray} \label{P-strain}
\delta {\bf P} = - (1/v^2) \int d {\bf R \ ( \nabla \cdot u ) \ p(R)} \approx - e_{ii} \,{\bf P}_0 ,
\end{eqnarray}
with $e_{ii} = (1/v) \int_v d{\bf R} \, (\nabla \cdot \bf u)$ the macroscopic frozen-in dilatory strain (we use the repeated indices summation convention
$e_{ii}\equiv e_{11} + e_{22} + e_{33}$). In obtaining Eq.~(\ref{P-strain}), we assumed that ${\bf P}$ is slowly varying. This long-wavelength approximation holds for microwave wavelengths ($\lambda >10^{-3}$ m) and above. The polarization change alters the dielectric function $\epsilon_{ii}=(\epsilon_0)_{ii} + \delta\epsilon_{ii}$ by
\begin{eqnarray} \label{deltaepsilon}
\delta \epsilon_{ii} \, (\omega, T)
= - 4 \pi \chi_0 \, e_{ii} \, (\omega, T)
\end{eqnarray}
with $\epsilon_{ii} -1 = 4 \pi P_i/ E_i$. Only the diagonal components of the strain tensor play a role in a leading order expansion of the electro-elastic coupling.
In principle, the shear strain can couple to the electric field by considering dipole-induced dipole interactions (van der Waals), which is
a higher order effect. The same coupling mechanism between photon (electric field) and phonons (strain) gives rise to the acousto-optical effect.
Our derivation of Eq.~(\ref{deltaepsilon}) is, to leading order, equivalent to the Pockels coefficient for acousto-optical coupling in
isotropic dielectrics,
$
\delta \epsilon_{ij}= - \epsilon^2 \,[ 2 P_{44} \, e_{ij} + P_{12} \, e_{kk} \delta_{ij}],
$
where $P_{kl}$ are the reduced Pockels coefficients (of order unity) \cite{Werthamer69, Kisliuk91, Landau_Lifshitz}.
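Before proceeding, a rough magnitude estimate of Eq.~(\ref{deltaepsilon}) is instructive. Using $4\pi\chi_0 = \epsilon_0 - 1 \approx 0.065$ for solid $^4$He\ and a frozen-in dilatory strain of order $e_{ii} \sim 10^{-3}$ (the scale extracted below from mass flow experiments), one finds
\begin{eqnarray}
|\delta \epsilon_{ii}| \approx (\epsilon_0 - 1)\, e_{ii} \sim 0.065 \times 10^{-3} \approx 6 \times 10^{-5} ,
\end{eqnarray}
which is precisely the order of magnitude of the reported anomaly, $\delta\epsilon/\epsilon \sim 10^{-5}$.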
We next turn to the locally frozen-in strains and write the equation of motion within the general response function theory.
As before in an isotropic medium the elastic stress tensor
$\sigma_{ij}^{\rm He} = \lambda_{ijkl} \, \partial u_k/\partial x_{l}$
with $\lambda_{ijkl} = \lambda_0 \delta_{ij} \delta_{kl} + \mu_0( \delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk})$.
If the electric field couples to local density fluctuations only through dilatory strain,
then the important matrix element is the Lam\'e parameter $\lambda_{0}$. We write the equation of motion for the displacement in response to an out-of-equilibrium internal (INT) force, in the presence of the backaction, as
\begin{eqnarray} \label{eEOM}
-\rho \omega^2 \,u_i (\omega)+ \lambda_{0} \, \partial_i^2 u_i (\omega)
= f_i^{\rm INT} (\omega)+ f_i^{\rm BA} (\omega),
\end{eqnarray}
where $u_i$ is a displacement in the $i$th direction and
$\rho$ is the mass density.
The backaction force density of nearby atoms is given by
\begin{eqnarray}
f_i^{\rm BA} (\omega) = \overline{G}(\omega;T) \,\partial_i^2 \, u_i(\omega) ,
\end{eqnarray}
where ${\overline{G}}$ is the strength of the pertinent backaction.
$f_i^{\rm INT }$ is the out-of-equilibrium (frozen-in) internal force density in the $i$th direction at the defect.
Integrating Eq.~(\ref{eEOM}) yields
the strain due to the internal dilatory stress $\sigma_{ii}$:
\begin{eqnarray}
{e}_{ii}(\omega, T) = {\sigma}^{}_{ii}
\left(\lambda_{0}-\overline{G} (\omega; T) \right)^{-1}.
\end{eqnarray}
Again, we assume that the backaction can be described by a Cole-Cole distribution of Debye relaxors,
$\overline{G}(\omega; T) = {g_0} \lambda_0/{\big[1-(i \omega \, \tau) ^{\alpha}\big]}$.
The corresponding local dilatory strain reads
\begin{eqnarray} \label{strain_glass}
{e}_{ii} (\omega, T) &=& e_0 \left(1- {g_0}/\big[{1-(i\omega \tau)^{\alpha}}\big] \right)^{-1} ,
\end{eqnarray}
where $e_0 \equiv \sigma_{ii}/\lambda_0$
at $T=0$.
From Eqs.~(\ref{deltaepsilon}) and (\ref{strain_glass}) the change in the dielectric function due to local strain fluctuations is
\begin{eqnarray}
\label{el}
\delta \epsilon_{ii} &=& -4 \pi \chi_0 \, e_0
\left(1- {g_0}/\big[{1-(i\omega \tau)^{\alpha}}\big] \right)^{-1} .
\end{eqnarray}
This result is similar to the one for the TO and shear modulus discussed in the earlier sections. At low temperatures, $\tau \rightarrow \infty$ and $e_{ii} \rightarrow e_0$, hence the strain is minimal and the reduction of the dielectric function due to local strain fluctuations is small.
At high temperatures, $\tau \rightarrow 0$ and $e_{ii} \rightarrow e_0 (1-g_0)^{-1}$ reaches its maximum, resulting in the largest reduction of the dielectric function; this is the regime where solid $^4$He is softest.
The main result is that the dielectric function reflects the arrested dynamics of the glassy components at low temperatures through the acousto-optical (electro-elastic) coupling.
The derivation of Eq.~(\ref{el}) for the dielectric response is extremely general.
We illustrate how a Cole-Cole form for the elastic relaxation implies a similar dielectric response (and vice versa). The same conclusion holds for other forms of the local elastic relaxation dynamics; these will leave a similar imprint on the dielectric response.
Historically (since the 1940s) the Cole-Cole response function \cite{Phase1} was found to be valid
for the dielectric response. In our initial work on $^4$He, in trying to capture the local quenched dynamics, we first assumed this form for the TO and then for the general elastic response.
By virtue of their inter-relation and coupling, the local relaxation dynamics is the same for all of these quantities. Therefore measurements of the dielectric response may inform
about local dynamics and vice versa. Practically, our prediction of Eq.~(\ref{el}) applies to any nonpolarized system with a local distribution of stress relaxations.
In polarized materials, detecting a change in the dielectric function due to atoms sensing different local fields is challenging because the large intrinsic polarizability of the material will overwhelm the contributions derived above.
Among the nonpolarized solids, solid $^4$He is especially favorable for observing this phenomenon because of its softness. Rapid cooling of solid helium allows a large local strain build-up, which is proportional to the size of the effect. Similarly, we expect correspondingly more delicate versions of the effect in solid $^3$He, hydrogen, or xenon.
\subsection{Results}
The results of our electro-elastic predictions for the dielectric function are shown in Fig.~\ref{fig:epsilon}.
We obtain excellent agreement with experiment \cite{Yin11} for an applied alternating voltage at 500 Hz.
Our analysis predicts that, similar to the TO and shear modulus, a dissipation peak appears in the dielectric function.
The phase lag angle
$\phi=\arg(\epsilon)$ records the lag between the real and imaginary part of $\epsilon(\omega; T)$.
Future observation of the dissipation peak will provide an important test of our picture concerning
quenched dynamics in solid $^4$He.
Consistent with earlier sections in this review, we assumed a Vogel-Fulcher-Tammann (VFT) form for the defect relaxation time $\tau(T) = \tau_0 \,e^{\Delta/(T-T_0)}$.
As in the previous sections, we obtain from our fits a negative $T_0=-119$ mK, in accordance with an avoided dynamic arrest of defect motion.
\begin{figure}
\begin{center}
\parbox[t]{0.4\linewidth}{
\caption{Experimental data and calculations of the dielectric function vs.\ temperature.
The red circles are the experimental data of the dielectric function (data by Yin et al.\ \cite{Yin11}).
The black lines are the calculated amplitude and phase lag (dissipation) of $\epsilon(\omega; T)$.
We used parameters $\alpha=1.49$, $e_0=8.88\times 10^{-4}$, $g_0=0.21$, $\tau_0=10.4$ ns, $\Delta=1.92$ K, and $T_0=-119$ mK.
}\label{fig:epsilon}
} \hfill
\begin{minipage}{0.58\linewidth}
\includegraphics[clip,width=1.0\linewidth,angle=0,keepaspectratio]{FIG13.eps}
\end{minipage}
\end{center}
\end{figure}
A rough estimate of the electro-elastic coupling can be obtained from mass flow measurements in bulk solid $^4$He, \cite{Ray10} where a pressure difference of $\Delta P_{L} \sim 0.1$ bar across a centimeter-sized pressure cell was reported. The estimated local strain, with a bulk modulus $B=320$ bar, is accordingly $\Delta P_{L}/B = 3 \times 10^{-4}$. This is consistent with the value we used for the fit in Fig.~\ref{fig:epsilon}, namely $e_0 = 8.88 \times 10^{-4}$.
In addition, the $P(T)$ measurement by Yin clearly deviates from purely Debye lattice behavior at around $T = 0.4$ K, with a large positive intercept corresponding to the order of 100 {\em ppm} of TLS, see Fig.~\ref{fig:PT}. This number is roughly five times larger than that of the most disordered sample in Lin's \cite{Lin09} specific heat experiments on ultrapure $^4$He with less than 1 ppb of $^3$He impurities \cite{Lin09,Su10}.
Thus the crystals grown by Yin are strongly disordered and harbor sufficiently many defects to support
centers of local strain fields.
Thus far, we assumed that both local and global stress are constant at low temperatures.
From Fig.~\ref{fig:PT} we can read off that the global pressure change between 300 mK and 40 mK is less than $\Delta P_T=0.18$ mbar. This is more than three orders of magnitude smaller than the local stress $\sigma_L = 8.88 \times 10^{-4} \times 320 \ {\rm bar} \sim 250$ mbar inferred from the dilatory strain $e_0$ used in the fit, as well as the static pressure difference $\Delta P_L = 0.1$ bar measured at two pressure gauges in mass flow experiments \cite{Ray10}.
Putting all the pieces together, we find that the change of the dielectric function based on global
density changes in the Clausius-Mossotti equation is negligible.
This is because the corresponding density change,
$\Delta\rho/\rho = \Delta P_T/B < 10^{-6}$, leads to a change in the
dielectric function of only
$\delta \epsilon \approx (\epsilon-1) \Delta \rho /\rho = 0.065 \Delta P_T/B < 10^{-7}$, which is more than two orders of magnitude too small to account for the observed effect of order $10^{-5}$.
In contrast, the model of local stress and electro-elastic coupling can account for both the magnitude and the temperature dependence of the dynamic dielectric function.
More recently Yin {\it et al.} \cite{Yin2012} redesigned their dielectric function experiment with a simplified capacitor geometry
and found no measurable anomaly at low temperatures.
Within our theory of disorder such a null result is consistent with a negligible amount of frozen-in stress in the solid.
\section{Conclusions and future directions}
In this review we provided a general overview of the role of defects in solid $^4$He. We suggested that defect dynamics and freeze-out provide a rich framework that accounts for a significant fraction of the data, while at the same time allowing enough flexibility to accommodate the sample dependence, history dependence, and hysteretic behavior ubiquitously seen in experiments. The general response function approach presented covers a wide range of observed effects.
We provide a brief synopsis of these below.
\subsection{Thermodynamics}
We started our discussion with the notion that any true phase transition, including a supersolid one, is accompanied by a thermodynamic signature. Therefore thermodynamic measurements like specific heat can reveal the signature one would naturally associate with a phase transition. However, we found that the excess specific heat and corresponding entropy are consistent with the contribution from noninteracting two-level systems (TLS). The estimated contribution of these TLS to the specific heat is at the level of tens to hundreds of parts per million. Consequently, the corresponding entropy contribution $\Delta S \sim 10^{-4}$ J/(K mol) is inconsistent by
several
orders of magnitude with reported values of the superfluid fraction
(NCRIF)
in the TO and shear modulus experiments. We also point out that some of the most disordered samples demonstrated a ``supersolid'' fraction of up to 20\%, while there is little evidence for any excess entropy on the scale of the gas constant $R$ times the fraction of defects in any measured sample.
\subsection{Single and double torsional oscillators}
We have shown that a phenomenological glass (viscoelastic) model for quenched defects accounts for the experimentally observed change in resonant frequency and the concomitant peak in dissipation.
Our analysis of torsional oscillator (TO) experiments revealed that most are well described by a Cole-Cole
distribution for relaxation times.
In addition, we derived a simple relation for the ratio of change in dissipation
and change in resonant frequency that can explain the large ratios observed
in experiments.
The values for the glass exponents in the distribution function of the backaction required to fit the experimental data
point toward broad distributions of glassy relaxation times, which clearly invalidate any attempt to describe these
experiments by a single overdamped mode, i.e., a single Debye relaxation process.
We also applied these ideas to understand the double oscillator data.
Our studies of the coupled oscillator showed that the observed shifts in resonant frequencies and dissipation are in agreement
with a glassy backaction contribution provided one includes anomalous damping in the dummy bob and an explicit frequency dependence in the backaction term.
As a side comment, it came as a surprise that even the unloaded double TO (no $^4$He) requires a negative
damping coefficient for the dummy bob to accurately describe resonant frequencies and dissipation at 300 mK.
Finally, one should keep in mind that a significant difference between glassy and supersolid dynamics is that a glassy contribution to the TO grows with frequency, while a superfluid component decreases with frequency. This could be another differentiating factor for separating very different relaxation mechanisms.
\subsection{Shear modulus}
We showed that the shear modulus anomaly of solid $^4$He is strongly affected by the dynamics of defects. The freezing out of defects leads to stiffening of the solid concomitant with a peak in dissipation. By studying the glass susceptibility due to the backaction, we found that both the amplitude change and $T$-dependence of the shear modulus are well captured by this model.
An important consequence of the dynamic response analysis was the description of the dissipation or phase angle.
We found that the peak height of the dissipation is independent of the applied frequency and linearly proportional to the Cole-Cole exponent $\alpha$ as well as to the backaction strength $g_{0}$. As $g_{0}$ depends on the concentration of TLSs, we predicted that increasing disorder will result in larger amplitude changes of the shear modulus.
Additionally, we extracted a universal scaling behavior proportional to $\omega \tau(T)$ using the Cole-Cole plot, in which all curves of the shear modulus collapse onto a single curve over a wide range of frequencies.
Furthermore, we have shown that
the glass contribution can be described by a viscoelastic model through the incorporation of anelastic elements
in constitutive equations of stress and strain.
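As an illustration of the response functions used throughout this section, a minimal numerical sketch of a Cole-Cole backaction term is given below (Python; the activated form of $\tau(T)$ and all parameter values are schematic assumptions, not fits to data):
\begin{verbatim}
import numpy as np

def cole_cole_backaction(omega, tau, alpha, g0):
    # Cole-Cole response g(w) = g0 / (1 + (i*w*tau)**alpha);
    # Re[g] shifts the modulus (frequency), -Im[g] gives dissipation.
    g = g0 / (1.0 + (1j * omega * tau)**alpha)
    return g.real, -g.imag

# Freeze-out: the relaxation time grows as T decreases (schematic form).
T = np.linspace(0.02, 0.5, 200)           # temperature [K]
tau = 1e-6 * np.exp(0.5 / T)              # hypothetical tau(T)
shift, dissipation = cole_cole_backaction(2*np.pi*2e3, tau, 0.9, 1e-3)
\end{verbatim}
Plotting the imaginary against the real part for several frequencies traces out the Cole-Cole arc that underlies the scaling collapse mentioned above.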
\subsection{Dielectric function}
We have shown that the arrested glass dynamics causes the low-temperature anomaly in strained solid $^4$He through the acousto-optical (electro-elastic) coupling and proposed that the temperature behavior of the dielectric function is coupled to local strain fields near crystal defects.
It records the glassy dynamics and freeze-out of the hypothesized TLS excitations, which also lead to a stiffening of the solid with decreasing temperature. This effect is not captured by the standard Clausius-Mossotti relation, which attributes dielectric function changes to a change in mass density or polarizability of the nonpolar $^4$He\ atom.
An important consequence of the phenomenological glass susceptibility is the decrease of the dielectric function at high temperatures, accompanied by a broad dissipation peak that can be measured by the imaginary part of the dielectric function. We hypothesized that the cooperative motion of atoms forming the TLS along dislocation segments is the relevant process contributing to the reported anomaly.
In our model, both the change in $\epsilon(\omega; T)$ and the dissipation peak are to leading order independent of the applied frequency.
Since the coefficient $g_0$ of the backaction depends on the concentration of defects, we predicted that the change in the dielectric function will be larger in quench-cooled or shear-stressed samples, while it vanishes in defect-free single crystals. Beyond the specific application to solid $^4$He\ invoked here, our formalism provides a direct link between elastic and dielectric properties which could prove fruitful in many other systems.
\bigskip
In summary, we encourage and welcome experiments that will allow a more precise structural characterization of solid $^4$He. A sharper and more detailed analysis of the structural aspects of solid $^4$He\ is also needed in order to investigate any arrested dynamics of the postulated glassy regions and to separate it from a simple crossover phenomenon.
Finally, we believe that more dynamic studies probing the frequency or time response to a stimulus are necessary to
investigate the differences between small subsystems of glassy, supersolid or superglassy origin in solid $^4$He.
\begin{acknowledgements}
We are grateful to colleagues and collaborators who provided encouragement and constructive criticism of the ideas presented here, A.F. Andreev, P. W. Anderson, I. Beyerlein, J. C. Davis, A. Dorsey, C. Reichardt, B. Hunt, E. Pratt, V. Gadagkar, J. Reppy, M. Chan, N. Prokof'ev, B. Svistunov, D. Schmeltzer, A. Kuklov and E. Rudavsky. This work was supported by the US Dept.\ of Energy at Los Alamos National Laboratory
under contract No.~DE-AC52-06NA25396 and the Basic Energy Sciences Office.
Z. N. was partially supported by the NSF grant Award No.\ DMR-1106293. We also acknowledge steady and generous support by the Aspen Center for Physics where parts of this review were written.
\end{acknowledgements}
\section{Introduction}
Group activity recognition, which refers to discerning the activities of a large number of interacting individuals, has attracted growing interest in the computer vision community \cite{DBLP:conf/cvpr/DengVHM16,DBLP:conf/cvpr/WangNY17,tang2018mining,yan2018participation,DBLP:conf/eccv/QiQLWLG18}.
Unlike conventional video action recognition, which only concentrates on the spatiotemporal dynamics of one or two persons, group activity recognition further requires understanding the group-relevant interactions among many individuals.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.95\linewidth,height=0.89\linewidth]{./Fig1-1.pdf}
\caption{The overview of proposed method. A feature-distilling (FD) agent progressively selects the most informative frames of the low-level spatiotemporal individual features. A relation-gating (RG) agent further progressively refines the high-level semantic relation graph (SRG) to discover group-relevant relations.}
\label{overview}
\end{figure}
In the past few years, a series of approaches combined hand-crafted features with probabilistic graphical models \cite{choi2012unified,DBLP:journals/pami/LanWYRM12,DBLP:conf/cvpr/ShuXRTZ15}.
Recently, LSTMs, structural RNNs and message passing neural networks (MPNNs) have also been applied to model the interactions among persons, subgroups and groups \cite{DBLP:conf/eccv/QiQLWLG18,DBLP:conf/cvpr/WangNY17,DBLP:conf/wacv/BiswasG18}. The interaction relations in these methods are \textit{implicitly} contained in the ordered RNNs or in the passing messages of the MPNN. Moreover, not all existing relations are relevant to the group activity, and the pairwise relations may contain many edges that arise from spurious noise, such as cluttered background, inaccurate human detection, and interactions between outlier persons (\eg, the ``Waiting" person in Fig.~\ref{overview}). Because the relations in previous methods are modeled \textit{implicitly}, one cannot determine whether a specific relation is group-relevant or not.
In addition, although a large number of persons may be involved in a group activity, usually only a few actions or interactions in several key frames essentially define the group activity. Yan \etal~\cite{yan2018participation} heuristically defined the key participants as the ones with ``long motion" and ``flash motion". Qi \etal~\cite{DBLP:conf/eccv/QiQLWLG18} applied a ``self-attention" mechanism to attend to important persons and key frames.
Nevertheless, these methods are limited to the coarse individual (person) level, and have not dug into the fine-grained relation level to consider which relations are vital (\eg, regulating 15 pairwise relations is more fine-grained than attending to 6 persons).
To move beyond such limitations, we propose a progressive relation learning framework to effectively model and distill the group-relevant actions and interactions in group activities. Firstly, we build a graph to explicitly model the semantic relations in group activities. %
Then, as illustrated in Fig.~\ref{overview}, two agents progressively refine the low-level spatiotemporal features and high-level semantic relations of group activities. Specifically, at the feature level, a feature-distilling agent explores a policy to distill the most informative frames of low-level spatiotemporal features. At the relation level, a relation-gating agent further refines the high-level %
relation graph to focus on the group-relevant relations. %
In summary, the contributions of this paper are as follows: (1) A novel progressive relation learning framework is proposed for group activity analysis. (2) Beyond distilling group-relevant information at the coarse individual (person) level, we propose an RG agent to progressively discover group-relevant semantic relations at the fine-grained relation level. (3) An FD agent is proposed to progressively filter the frames of low-level spatiotemporal features that are used for constructing the high-level semantic relation graph.
\section{Related Works}
\textbf{Reinforcement Learning}. Reinforcement learning (RL) has benefited many fields of computer vision, such as image cropping \cite{DBLP:conf/cvpr/LiWZH18} %
and visual semantic navigation \cite{DBLP:journals/corr/abs-1810-06543}. %
Regarding the optimization policy, RL can be categorized into value-based methods, policy-based methods, and their hybrids. The value-based methods (\eg, deep Q-learning \cite{DBLP:journals/corr/MnihKSGAWR13}) are good at solving problems in low-dimensional discrete action spaces, but they fail in high-dimensional continuous spaces. Although the policy-based methods (\eg, policy gradient \cite{DBLP:conf/nips/SuttonMSM99}) are capable of dealing with problems in continuous spaces, they suffer from high variance of the gradient estimation. The hybrid methods, such as Actor-Critic algorithms \cite{DBLP:conf/nips/KondaT99}, combine their advantages and can handle both discrete and continuous action spaces. Moreover, by exploiting asynchronous updating, the Asynchronous Advantage Actor-Critic (A3C) algorithm \cite{DBLP:conf/icml/MnihBMGLHSK16} has largely improved training efficiency. Therefore, we adopt the A3C algorithm to optimize both our RG agent in continuous action space and our FD agent in discrete action space.
\textbf{Graph Neural Network}. Due to the advantages of representing and
reasoning over structured data, the graph neural network (GNN) has attracted increasing attention \cite{DBLP:journals/corr/abs-1810-00826,DBLP:journals/corr/abs-1901-00596,DBLP:journals/icme/gyhu,hu2019joint,DBLP:journals/corr/abs-1806-01261}. %
Graph convolutional network (GCN) generalizes CNN on graph, which therefore can deal with non-Euclidean data \cite{DBLP:journals/spm/BronsteinBLSV17}. It has been widely applied in computer vision, \eg, point cloud classification \cite{DBLP:conf/cvpr/SimonovskyK17}, action recognition \cite{DBLP:conf/aaai/YanXL18}, and traffic forecasting \cite{DBLP:conf/ijcai/YuYZ18}.
Another class of GNN combines graph with RNN, in which each node captures the semantic relation and structured information from its neighbors through multiple iterations of passing and updating, \eg, message-passing neural network \cite{DBLP:conf/icml/GilmerSRVD17}, graph network block \cite{DBLP:conf/icml/Sanchez-Gonzalez18}. %
In the former class (\ie, GCN), each relation is represented by a scalar in the adjacency matrix, which is not adequate for modeling the complex context information in group activities. Therefore, our semantic relation graph is built under the umbrella of the latter class, in which each relation is explicitly represented by a learnable vector.
\begin{figure*}[tbp]
\centering
\includegraphics[width=0.95\linewidth,height=0.45\linewidth]{./Fig2-1.pdf}
\caption{The detailed framework of our method. The low-level spatiotemporal features of persons are extracted by a CNN and an LSTM. The feature-distilling (FD) agent selects the informative frames of features. Then the distilled features are used to build a high-level semantic relation graph (SRG), and a relation-gating (RG) agent further refines the SRG. ``FC" denotes a fully connected layer. Finally, the activity category is predicted according to the sum of the global attributes at all time steps.}
\label{framework}
\end{figure*}
\section{Method}
\subsection{Individual Feature Extraction}
Following \cite{yan2018participation}, the person bounding boxes are firstly obtained through the object tracker %
in the Dlib library \cite{DBLP:journals/jmlr/King09}.
As shown in Fig.~\ref{framework}, %
the visual feature (\eg, appearance and pose) $ x^{vis}_{p_i}$ of each person $ i $ is extracted through a convolutional neural network (called Person-CNN). Then, the spatial visual feature is fed into a long short-term memory network (called Person-LSTM) to model the individual temporal dynamics $ x^{tem}_{p_i} $. Finally, we concatenate the stacked visual features $ \bm{x}^{vis}_{p} $ and temporal dynamics $ \bm{x}^{tem}_{p} $ of all persons as the basic spatiotemporal features, \ie, $ \bm{x}_p =[\bm{x}^{vis}_{p},\bm{x}^{tem}_{p}]$.
These basic representations contain no context information, such as person-to-person, person-to-group, and group-to-group interactions.
Besides, the spatial distance features $ \{ |dx|, |dy|, |dx+dy|, \sqrt{(dx)^2+(dy)^2}\}$ and direction features $\{\arctan(dy/dx), \operatorname{arctan2}(dy,dx)\} $ between each pair of persons are concatenated as the original interaction features $ \bm{x_e} $, where $ dx $ and $ dy $ are the displacements along the horizontal and vertical axes, respectively.
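To make this construction concrete, a minimal sketch of assembling the pairwise features is given below (Python/numpy; using bounding-box centers as person positions and the $\arctan(dy/dx)$ convention are our assumptions):
\begin{verbatim}
import numpy as np

def interaction_features(pos):
    # pos: (N, 2) array of person center coordinates (x, y).
    # Returns an (N, N, 6) array of pairwise interaction features.
    dx = pos[None, :, 0] - pos[:, None, 0]   # horizontal displacements
    dy = pos[None, :, 1] - pos[:, None, 1]   # vertical displacements
    safe_ratio = np.divide(dy, dx, out=np.zeros_like(dy), where=dx != 0)
    return np.stack([np.abs(dx), np.abs(dy), np.abs(dx + dy),
                     np.sqrt(dx**2 + dy**2),   # Euclidean distance
                     np.arctan(safe_ratio),    # direction features
                     np.arctan2(dy, dx)], axis=-1)
\end{verbatim}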
\subsection{Semantic Relation Graph}
Inferring semantic relations over the inherent structure of a scene helps suppress noise, such as inaccurate human detection, mistaken action recognition, and outlier people not involved in a particular group activity. To achieve this, we explicitly model the structured relations through a graph network \cite{DBLP:conf/icml/Sanchez-Gonzalez18}. Let us put aside the two agents in Fig.~\ref{framework} and first explain how to build the baseline semantic relation graph. Consider a graph $\bm{G} = (\bm{u}, \bm{V}, \bm{E})$, where $ \bm{u} $ is the global attribute (\ie, the activity score), and $\bm{V}=\{v_i\}_{i=1}^{N_v} $ and $ \bm{E}=\{e_{ij}\}_{i,j=1}^{N_v}$ are respectively the person nodes and the relation edges among them. The attributes of person nodes $ \bm{H_{v}} $ and of relation edges $ \bm{H_{e}} $ are respectively initialized with embeddings of the low-level spatiotemporal features $ \bm{X_p} $ and the original interaction features $ \bm{X_e} $.
During graph passing, each node $ v_i $ collects the contextual information $ \bm{h_{ve}^{ij}} $ from each of its neighbors $ v_j $ ($j \in \mathcal{N}(v_i) $) via a collecting function $ \phi_{ve} $, and aggregates all collected information via an aggregating function $ \psi_{v} $, \ie,
\begin{equation}
\bm{h_{ve}}^{ij}
= \phi_{ve}(\bm{h_{e_{ij}}},\bm{h_{v_j}})
= \text{NN}_{ve}\left([\bm{h_{e_{ij}}},\bm{h_{v_j}}]\right)
\end{equation}
\begin{equation}
\bm{\overline{h}_{e_{i}}}
= \psi_{v} (\bm{h_{ve}}^{i})=\sum_{j\in\mathcal{N}(v_i)}\bm{h_{ve}}^{ij}
\end{equation}
where the collecting function $ \phi_{ve} $ is %
implemented by a neural network $ \text{NN}_{ve} $, and [$ \cdot $] denotes concatenation.
Then, the aggregated contextual information $ \bm{\overline{h}_{e_{i}}} $ updates the node attributes via a node updating function $ \phi_{v} $ (network $ \text{NN}_v $),
\begin{equation}
\bm{h'_{v_i}}
= \phi_{v}(\bm{\overline{h}_{e_{i}}},\bm{h_{v_{i}}})
= \text{NN}_{v}\left([\bm{\overline{h}_{e_{i}}},\bm{h_{v_{i}}}]\right).
\end{equation}
After that, each edge $ \bm{h_{e_{ij}}} $ enrolls message from the sender $ \bm{h'_{v_i}} $ and receiver $ \bm{h'_{v_j}} $ to update its edge attributes via an edge updating function $ \phi_e $ (network $ \text{NN}_e $),
\begin{equation}
\bm{\hat{h}}_{\bm{e}_{ij}}
= \phi_e(\bm{h_{v'_i}}, \bm{h_{v'_j}}, \bm{h_{e_{ij}}})
= \text{NN}_e\left([\bm{h_{v'_i}}, \bm{h_{v'_j}}, \bm{h_{e_{ij}}}]\right)
\end{equation}
To simplify the problem, we consider the graph to be undirected (\ie,
$ \bm{{h'}_{e_{ij}}}=\bm{{h'}_{e_{ji}}} = (\bm{\hat{h}_{e_{ij}}}+\bm{\hat{h}_{e_{ji}}})/2 $) and without self-connections.
Finally, the global attribute $ \bm{u} $ is updated based on semantic relations in the whole %
relation graph, \ie,
\begin{equation}
\bm{u'}
= \bm{W_u} \left( \sum_{i=1}^{N_v}\sum_{j > i}^{N_v} \bm{h'_{e_{ij}}}\right) + \bm{b_u}
\end{equation}
where $ \bm{W_u} $ is parameter matrix and $\bm{b_u}$ is bias. The $ \text{NN}_e $, $ \text{NN}_{ve} $ and $ \text{NN}_v $ are implemented with LSTM networks.
Since propagating information over the graph once captures at most pairwise relations, we update the graph for $ \bm{m} $ iterations to encode higher-order interactions. After these propagations, the graph automatically learns the high-level semantic relations from the low-level individual features in the scene. Finally, the activity score is obtained by appending a softmax layer to $ \bm{u} $ after the last iteration.
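For illustration, a minimal numpy sketch of one propagation iteration is given below; the LSTM updating networks $\text{NN}_{ve}$, $\text{NN}_v$ and $\text{NN}_e$ are replaced by single affine layers with ReLU purely for readability, and all weight shapes are hypothetical:
\begin{verbatim}
import numpy as np

def srg_iteration(Hv, He, params):
    # Hv: (Nv, Dv) node attributes; He: (Nv, Nv, De) edge attributes.
    Nv = Hv.shape[0]
    relu = lambda x: np.maximum(x, 0)
    # Collect h_ve^{ij} from edge e_ij and neighbor v_j (cf. Eq. 1).
    nbr = np.broadcast_to(Hv[None, :, :], (Nv, Nv, Hv.shape[1]))
    msgs = relu(np.concatenate([He, nbr], axis=-1) @ params['W_ve'])
    # Aggregate over neighbors, excluding self-connections (cf. Eq. 2).
    mask = 1.0 - np.eye(Nv)
    agg = (msgs * mask[:, :, None]).sum(axis=1)
    # Node update (cf. Eq. 3).
    Hv2 = relu(np.concatenate([agg, Hv], axis=-1) @ params['W_v'])
    # Edge update from sender and receiver (cf. Eq. 4), symmetrized.
    snd = np.broadcast_to(Hv2[:, None, :], (Nv, Nv, Hv2.shape[1]))
    rcv = np.broadcast_to(Hv2[None, :, :], (Nv, Nv, Hv2.shape[1]))
    He_hat = relu(np.concatenate([snd, rcv, He], axis=-1) @ params['W_e'])
    He2 = 0.5 * (He_hat + He_hat.transpose(1, 0, 2))
    # Global attribute from the sum over distinct edges (cf. Eq. 5).
    iu = np.triu_indices(Nv, k=1)
    u2 = He2[iu].sum(axis=0) @ params['W_u'] + params['b_u']
    return Hv2, He2, u2
\end{verbatim}
Iterating this function $m$ times and applying a softmax to the final $\bm{u}$ mirrors the graph propagation described above.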
\subsection{Progressively Relation Gating}
Although the above fully-connected semantic graph is capable of explicitly modeling any type of relation, it contains many group-irrelevant relations. Therefore, we introduce a relation-gating agent that explores an adaptive policy to select group-relevant relations.
The decision process is formulated as a Markov Decision Process $ \mathcal{M}=\{S, A, \mathcal{T}, r, \gamma \}$.
\textbf{States.} The state $ S $ consists of three parts, $S=\{S_g,S_l,S_u\}$. $ S_g $ is the whole semantic graph, represented by the stack of all relation triplets (\textit{``sender", ``relation", ``receiver"}), which provides global information about the current scene. $ S_l $ is the concatenation of the relation triplet $(\bm{h}_{\bm{v}_i},\bm{h}_{\bm{e}_{ij}},\bm{h}_{\bm{v}_j}) $ corresponding to the specific relation $ \bm{h}_{\bm{e}_{ij}} $ to be refined, which provides local information for the agent; $ S_l\in \mathbb{R}^{D_v+D_e+D_v} $, where $ D_v$ and $ D_e $ denote the attribute dimensions of the $ N_v $ nodes and $ N_e $ relations, respectively. $ S_u = \bm{u} $ is the global attribute of the relation graph at the current state, \ie, the activity scores.
\textbf{Action.} Inspired by the information gates in LSTMs, we introduce a gate $ g_{ij} $ for each relation edge. The action $ A $ of the agent is to generate the gate $ g_{ij} \in [0,1] $, which is then applied to adjust the corresponding relation at each reinforcement step, \ie, $ \bm{h}_{\bm{e}_{ij}}$ = $g_{ij} \cdot \bm{h}_{\bm{e}_{ij}} $.
Since the semantic relation graph is undirected, we symmetrize the gate values before the gating operation, \ie, $ g_{ij}=g_{ji}=(g_{ij}+g_{ji})/2 $.
\textbf{Reward.} The reward $ r(S, A) $, reflecting the efficacy of action $ A $ w.r.t.\ the state $ S $, consists of three parts. \text{1)} To encourage the relation gates $ \bm{G}=\{g_{ij}\}_{i,j=1}^{N_v} $ to select group-relevant relations, we propose a structured sparsity reward. We define the structured sparsity as the $ L_{2,1} $ norm of $ \bm{G} $, \ie,
\begin{equation}
L_{2,1}(\bm{G})= \sum_{i=1}^{N_v}\|\bm{g}_{i,:}\|_2 = \sum_{i=1}^{N_v}\left(\sqrt{\sum_{j=1}^{N_v} |g_{ij}|^2}\right)
\end{equation}
where $ \bm{g}_{i,:} $ denotes the $i$-th row vector of $ \bm{G} $. As illustrated in Fig.~\ref{RG:a}, unlike the $ L_1$ norm, which tends to make all gating elements uniformly sparse, the $ L_{2,1}$ norm encourages the rows of $ \bm{G} $ to be sparse. The structured sparsity is thus very helpful for attending to a few key participants that have wide influence on the others. The structured sparsity reward at the $ \tau $-th reinforcement step is defined to encourage the agent to gradually attend to a few key participants and relations, \ie,
\begin{equation}
r_{sparse} = - sgn\left( L_{2,1}\left( \bm{G}_{\tau}\right) - L_{2,1}\left(\bm{G}_{\tau-1}\right) \right)
\end{equation}
where $ r_{sparse} \in \{-1,1\}$ and $ sgn $ is the sign function. \text{2)} To encourage the posterior probability to evolve along an ascending trajectory, we introduce an ascending reward with respect to the probability of the groundtruth activity label, \ie,
\begin{equation}
r_{ascend} = sgn\left( \bm{p}^c_{\tau} - \bm{p}^c_{\tau-1} \right)
\end{equation}
where $ \bm{p}^c_{\tau} $ is the predicted probability of the groundtruth label at the $ \tau $-th step. $ r_{ascend} \in \{-1,1\}$ reflects the probability improvement of the groundtruth class.
\text{3)} To ensure that the model tends to predict correct classes, inspired by \cite{DBLP:conf/cvpr/TangTLL018}, a strong stimulation $ \Omega $ is enforced when the predicted class shifts from wrong to correct after a step, and a strong punishment $ - \Omega $ is applied if the shift goes the other way, \ie,
\begin{equation}
r_{shift} =
\begin{cases}
\Omega, \qquad & \text{if stimulation}\\
-\Omega, \qquad & \text{if punishment}\\
0, \qquad & \text{otherwise}\\
\end{cases}
\label{r-EI}
\end{equation}
Finally, the total reward for the RG agent is
\begin{equation}
r=r_{sparse} + r_{ascend} + r_{shift}.
\end{equation}
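A minimal sketch of this composite reward is given below (Python; the tie behavior of the sign function and the default value of $\Omega$ are our simplifications):
\begin{verbatim}
import numpy as np

def rg_reward(G_prev, G_curr, p_prev, p_curr, c, Omega=15.0):
    # G_*: (Nv, Nv) gate matrices; p_*: class probability vectors;
    # c: index of the groundtruth activity class (cf. Eqs. 6-10).
    l21 = lambda G: np.sqrt((G**2).sum(axis=1)).sum()
    r_sparse = -np.sign(l21(G_curr) - l21(G_prev))
    r_ascend = np.sign(p_curr[c] - p_prev[c])
    was_ok, is_ok = p_prev.argmax() == c, p_curr.argmax() == c
    if is_ok and not was_ok:
        r_shift = Omega        # wrong -> correct: stimulation
    elif was_ok and not is_ok:
        r_shift = -Omega       # correct -> wrong: punishment
    else:
        r_shift = 0.0
    return r_sparse + r_ascend + r_shift
\end{verbatim}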
\begin{figure}[tbp]
\begin{subfigure}[b]{0.3\linewidth}
\includegraphics[width=1\linewidth,height=2.9in]{./Fig3_a.pdf}
\caption{Comparison of sparsity}
\label{RG:a}
\end{subfigure}
\hspace{0.05\linewidth}
\begin{subfigure}[b]{0.6\linewidth}
\includegraphics[width=0.95\linewidth,height=3.0in]{./Fig3_b.pdf}
\caption{Structure of the RG agent}
\label{RG:b}
\end{subfigure}
\label{RG}
\caption{(a) Comparison of the $ L_1 $ and $ L_{2,1} $ norms of the gating matrix $ \bm{G} $, where the transparency denotes the value of each gate. $ \|\bm{G}\|_1 $ encourages uniform sparsity while $ \|\bm{G}\|_{2,1} $ encourages structured row sparsity. The implementation of $ \|\bm{G}\|_{2,1} $ is illustrated at the bottom. (b) The RG agent takes in the global information $ S_g $, the local information $ S_l $ for a specific relation, and the global scene attribute $ S_u $. ``FC1", \dots, ``FC7" are fully connected layers, and ``Edge Pooling" denotes average pooling along the edge dimension. Finally, the left branch (\textit{Actor}) and the right branch (\textit{Critic}) output an action and a value for the current state, respectively.
}
\end{figure}
\textbf{Relation-gating Agent.} Since searching a high-dimensional continuous action space is challenging for reinforcement learning, we compromise by letting the agent output one gating value at a time and cycling through all edges within each reinforcement step.
The architecture of the RG agent is shown in Fig.~\ref{RG:b}; it follows an Actor-Critic framework \cite{DBLP:conf/nips/KondaT99}. Inspired by human decision making, in which historical experience assists the current decision, an LSTM block is used to memorize information about past states. The agent maintains both a policy $ \pi(A_{\tau}|S_{\tau}; \theta) $ (the \textit{Actor}) to generate actions (gates) and an estimate of the value function $ V(S_{\tau};\theta_v) $ (the \textit{Critic}) to assess the value of the corresponding states. Specifically, the \textit{Actor} outputs the mean $ \mu_{ij} $ and standard deviation $ \sigma_{ij} $ of an action distribution $ \mathcal{N}(\mu_{ij}, \sigma_{ij}) $. The action $ g_{ij} $ is sampled from the Gaussian distribution $ \mathcal{N}(\mu_{ij}, \sigma_{ij}) $ during training, and is set to $ \mu_{ij} $ directly during testing.
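Schematically, the action selection reads as follows (clipping the sampled gate to $[0,1]$ is our assumption; the paper only requires $g_{ij}\in[0,1]$):
\begin{verbatim}
import numpy as np

def select_gate(mu, sigma, training=True, rng=np.random):
    # Sample from the actor's Gaussian policy during training;
    # use the mean directly at test time.
    g = rng.normal(mu, sigma) if training else mu
    return float(np.clip(g, 0.0, 1.0))
\end{verbatim}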
\textbf{Optimization.} The agent is optimized with the classical A3C algorithm \cite{DBLP:conf/icml/MnihBMGLHSK16} for reinforcement learning. The policy and the value function of the agent are updated after every $ \tau_{max} $ steps (the updating interval) or when a terminal state is reached. The accumulated reward at step $ \tau $ is { $ R_{\tau}=\sum_{i=0}^{k-1}\gamma^i r_{\tau+i}+\gamma^kV(S_{\tau+k};\theta_v) $}, where $ \gamma $ is the discount factor, $ r_{\tau} $ is the reward at the $ \tau $-th step, and $ k $ varies from 0 to $ \tau_{max} $. The advantage function is given by { $ R_{\tau}-V(S_{\tau};\theta_v)$}, and the entropy of the policy $ \pi $ is $H(\pi(S_{\tau}; \theta)) $. Eventually, the gradients are accumulated via Eq.~\ref{eq_critic} and Eq.~\ref{eq_actor} to update the value function and the policy of the agent, respectively \cite{DBLP:conf/icml/MnihBMGLHSK16}.
{
\begin{equation}
d\theta_v \leftarrow d\theta_v + \nabla_{\theta_v}\left(R_{\tau}-V(S_{\tau};\theta_v)\right)^2/2
\label{eq_critic}
\end{equation}
\begin{equation}
\begin{split}
d\theta \leftarrow d\theta &+ \nabla_{\theta}log\pi(A_\tau|S_\tau;\theta)\left(R_{\tau}-V(S_{\tau};\theta_v)\right)\\
&+\beta \nabla_{\theta} H(\pi(S_{\tau}; \theta))
\end{split}
\label{eq_actor}
\end{equation}
}where $ \beta $ controls the strength of entropy regularization.
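A minimal sketch of the resulting $n$-step targets over one updating interval is given below (Python; variable names are hypothetical):
\begin{verbatim}
import numpy as np

def a3c_targets(rewards, values, V_boot, gamma=0.99):
    # rewards, values: per-step lists for the last tau_max steps;
    # V_boot: V(S_{tau+k}; theta_v) bootstrapping the final return.
    R, returns = V_boot, []
    for r in reversed(rewards):
        R = r + gamma * R
        returns.append(R)
    returns = np.array(returns[::-1])
    advantages = returns - np.array(values)     # R_tau - V(S_tau)
    critic_loss = 0.5 * (advantages**2).mean()  # (R - V)^2 / 2
    return returns, advantages, critic_loss
\end{verbatim}
The actor gradient then weights $\nabla_\theta \log\pi(A_\tau|S_\tau;\theta)$ by these advantages and adds the entropy term of Eq.~\ref{eq_actor}.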
\begin{figure}[tbp]
\centering
\begin{subfigure}[t]{0.95\linewidth}
\includegraphics[width=\linewidth,height=0.95\linewidth]{./Fig4-a.pdf}
\caption{Illustration of the feature-distilling process}
\label{FA:a}
\end{subfigure}
\begin{subfigure}[t]{\linewidth}
\includegraphics[width=\linewidth,height=0.45\linewidth]{./Fig4-b.pdf}
\caption{Structure of the feature-distilling agent}
\label{FA:b}
\end{subfigure}
\caption{(a) The FD agent has two discrete actions, \ie, ``\textit{stay distilled}" (red icon) and ``\textit{shift to alternate}" (green icon). ``Queue" is a queue containing the alternate feature frames, and $ \mathcal{T}' $ is the deterministic state transition function.
(b) The convolutional layers Conv1 and Conv3 (with 1x1 kernels) are used for channel squeezing, and Conv2 and Conv4 (with 3x3 kernels) are used for feature extraction. ``FC" denotes a fully connected layer.
}
\label{FA}
\end{figure}
\subsection{Progressively Feature Distilling}
To further refine the low-level spatiotemporal features used for constructing the graph, we introduce another agent, the feature-distilling agent. It is aimed at distilling the most informative frames of features, and is also formulated as a Markov Decision Process {$ \mathcal{M}'=\{S',A', \mathcal{T}', r', \gamma' \}$}.
\textbf{State.} The state of the FD agent consists of three components, $S'=\{S'_F,S'_{F_d}, S'_M\}$. The whole feature tensor of an activity, $ S'_F \in \mathbb{R}^{N\times T\times D_F} $, provides global information about the activity clip, where $ N $, $ T $ and $ D_F $ are respectively the number of persons, the number of frames, and the feature dimension of the feature tensor. The local feature $ S'_{F_d} \in \mathbb{R}^{N\times T_d\times D_F} $ carries the \textit{implicit} information of the distilled frames, where $ T_d $ is the number of frames to be kept. In order to be \textit{explicitly} aware of the distilled frames, the state of the FD agent also contains the binary mask $ S'_M $ of the distilled frames.
\textbf{Action.} As shown in Fig.~\ref{FA:a}, the FD agent outputs one of two discrete actions for each selected frame: ``\textit{stay distilled}", indicating that the frame is informative and the agent keeps it, and ``\textit{shift to alternate}", indicating that the agent discards the frame and takes in an alternate. The shifting may be frequent at the beginning but gradually becomes stable after some exploration (Fig.~\ref{FA:a}). In order to give all alternates an equal chance to be enrolled, the latest discarded frames are appended to the end of a queue and have the lowest priority to be enrolled again.
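The frame bookkeeping can be sketched as follows (Python; representing frames by integer indices is our simplification):
\begin{verbatim}
from collections import deque

def apply_fd_actions(kept, queue, actions):
    # kept: list of T_d currently distilled frame indices;
    # queue: alternates, lowest enrollment priority at the end;
    # actions[i] == 0 keeps frame i, == 1 swaps in an alternate.
    queue = deque(queue)
    for i, a in enumerate(actions):
        if a == 1 and queue:
            discarded = kept[i]
            kept[i] = queue.popleft()  # enroll the next alternate
            queue.append(discarded)    # discarded frame goes last
    return kept, queue
\end{verbatim}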
\textbf{Feature-distilling Agent.} %
The FD agent in Fig.~\ref{FA:b} is also constructed under the Actor-Critic \cite{DBLP:conf/nips/KondaT99} framework. The agent takes in the global knowledge from the whole feature tensor {$ S'_F $}, the \textit{implicit} local knowledge from the distilled features {$ S'_{F_d} $}, and the \textit{explicit} local knowledge from the binary frame mask {$ S'_M $}. Finally, the agent outputs an action vector for the $ T_d $ distilled feature frames and a value for the current state. The action vector is sampled from the policy distribution during training, and is set directly to the action type with the maximal probability during testing.
\textbf{Optimization and Rewards.} The optimization algorithm (A3C) and the objective function are the same as for the RG agent. The reward only contains the ascending and class-shifting components introduced above, \ie,
\begin{equation}
r'= r_{ascend} + r_{shift}.
\end{equation}
\subsection{Training Procedure}
In the proposed approach, the agents and the graph need to be updated on CPU (to exploit numerous CPU cores/threads for the asynchronous updating workers of the A3C algorithm \cite{DBLP:conf/icml/MnihBMGLHSK16}) and on GPU, respectively. In addition, the graph is updated after each video batch, while the agents are updated many times within each video, whenever the number of reinforcement steps reaches the updating interval $ \tau_{max} $ or a terminal state is reached. Thus, the graph and the agents are updated on different devices with different updating periods, and they cannot be optimized with conventional end-to-end training. Therefore, we adopt alternate training. More details on the standard flowchart of the A3C algorithm can be found in the \textit{Supplementary Material}.
\textbf{Individual Feature Preparation.} Following \cite{tang2018mining}, we finetune the \text{Person-CNN} (VGG16 \cite{DBLP:journals/corr/SimonyanZ14a}) pretrained on ImageNet \cite{DBLP:journals/ijcv/RussakovskyDSKS15} with individual action labels to extract visual features, and then train the \text{Person-LSTM} with individual action labels to extract temporal features. To lower the computational burden, the extracted individual features are saved to disk and only need to be reloaded after this procedure.
\textbf{Alternate Training.} There are 9 separate training stages in total. At each stage, only one of the three components is trained (the SRG for 15 epochs; the FD or RG agent for 2 hours) while the remaining two are frozen (or absent). In the first stage, the SRG (without agents) is trained on the extracted features to capture the context information within activities. In the second stage, the SRG is frozen, and the FD agent is introduced and trained with the rewards provided by the frozen SRG. In the third stage, the SRG and the FD agent are frozen, and the RG agent is introduced and trained with the rewards provided by the frozen SRG and FD agent. After that, in each of the following 6 stages, one of the SRG, FD agent and RG agent is trained in turn with the remaining two frozen.
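Schematically, the schedule can be written as follows (assuming the same SRG $\rightarrow$ FD $\rightarrow$ RG order repeats in the last 6 stages, which the text implies but does not state explicitly; in the first two stages the not-yet-introduced agents are simply absent):
\begin{verbatim}
def training_schedule():
    # 9 stages: the SRG is trained for 15 epochs per stage,
    # each agent for about 2 hours, with the other two frozen.
    for k, active in enumerate(['SRG', 'FD', 'RG'] * 3, start=1):
        frozen = [m for m in ('SRG', 'FD', 'RG') if m != active]
        yield k, active, frozen

for k, active, frozen in training_schedule():
    print(f"stage {k}: train {active}, freeze {frozen}")
\end{verbatim}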
\section{Experiments}
\subsection{Datasets}
\textbf{Volleyball Dataset} \cite{DBLP:conf/cvpr/IbrahimMDVM16}. The Volleyball dataset is currently the largest dataset for group activity recognition. It contains 4830 clips from 55 volleyball videos.
Each clip is annotated with one of 8 group activity categories (\ie, right set, right spike, right pass, right winpoint, left winpoint, left pass, left spike and left set), and its middle frame is annotated with 9 individual action labels (\ie, waiting, setting, digging, falling, spiking, blocking, jumping, moving and standing).
Following \cite{yan2018participation}, we employ the metrics of Multi-class Classification Accuracy (MCA) and Mean Per Class Accuracy (MPCA) to evaluate performance.
\textbf{Collective Activity Dataset (CAD)} \cite{choi2009they}. The CAD contains 2481 activity clips from 44 videos.
The middle frame of each clip is annotated with 6 individual action classes (\ie, NA, crossing, walking, waiting, talking and queueing), and the group activity label is assigned as the majority action label of the individuals in the scene.
Following \cite{DBLP:conf/cvpr/WangNY17}, we merge the classes ``walking" and ``crossing" into ``moving" and report the MPCA to evaluate performance.
Since the existing datasets lack sufficient background diversity \cite{DBLP:conf/cvpr/WangNY17}, it is too difficult to distinguish useful objects (\eg, the volleyball) from the noisy background without any annotation. Following \cite{DBLP:conf/cvpr/IbrahimMDVM16,DBLP:conf/iccv/LiC17,yan2018participation,DBLP:conf/cvpr/ShuTZ17,DBLP:conf/eccv/QiQLWLG18,tang2018mining}, we ignore the background and focus only on the interactions among persons.
\subsection{Implementation Details}
For fair comparison with previous methods \cite{DBLP:conf/eccv/QiQLWLG18,tang2018mining}, we use the same backbone network (Person-CNN), VGG16 \cite{DBLP:journals/corr/SimonyanZ14a}. It outputs 4096-d features, and the Person-LSTM, equipped with 3000 hidden neurons, takes in all the features over $T$ ($T=10$) time steps. In the SRG, the embedding sizes of nodes and edges are 1000 and 100, respectively, and the graph is propagated for 3 iterations at each time. Thus, the numbers of hidden neurons in the updating functions $ \text{NN}_{ve} $, $ \text{NN}_v $, and $ \text{NN}_e $ are 1000, 1000 and 100, respectively. In the RG agent, the fully connected layers FC1, FC2, \dots, FC7 contain 512, 256, 512, 256, 256, 64 and 256 neurons, respectively, and its LSTM network contains 128 hidden nodes. In the FD agent, the number of feature frames to be kept, $ T_d $, is set to 5 in practice. In Fig.~\ref{FA:b}, the neuron numbers of the two FC layers from left to right are 64 and 256, the channels of Conv1, Conv2, Conv3 and Conv4 are 1024, 1024, 256 and 256, respectively, and the LSTM network contains 128 neurons.
During training, we use the RMSprop/Adam (SRG/agents) optimizer with an initial learning rate of 0.00001/0.0001 (SRG/agents) and a weight decay of 0.0001. The batch size for SRG training is 8/16 (CAD/Volleyball). The discount factor $ \gamma $, the entropy factor $ \beta $ and the number of asynchronous workers in A3C are set to 0.99, 0.01 and 16, respectively, for both agents. In practice, the updating interval $ \tau_{max} $ and $ \Omega $ (in Eq.~\ref{r-EI}) are set to 5/5 and 15/20 (RG/FD agent), respectively. On the Volleyball dataset, following \cite{DBLP:conf/cvpr/IbrahimMDVM16}, the 12 players are split into two subgroups (\ie, the left team and the right team) according to their positions; the RG agent is shared by the two subgroups in our framework, and the outputs of the two subgroups are finally averaged. On the CAD dataset, since the number of individuals varies from 1 to 12, we select 5 effective persons for each frame and zero-pad the frames containing fewer than 5 persons, following \cite{yan2018participation}.
\subsection{Baseline and Variants for Ablation Studies}
To examine the effectiveness of each component of the proposed method, we conduct ablation studies with the following baseline and variants. \textit{stagNet w/o Atten.}~\cite{DBLP:conf/eccv/QiQLWLG18}: this baseline constructs a message-passing graph network on low-level features similar to those of our SRG; it represents the interactions implicitly by the passing messages, while our SRG explicitly models relations in a full graph network. \textit{Ours-SRG}: this variant contains only the SRG of the proposed method. \textit{Ours-SRG+T.~A.}: this variant contains our SRG and a temporal attention over feature frames. \textit{Ours-SRG+R.~A.}: this variant contains our SRG and a relation attention that directly learns the relation gates. \textit{Ours-SRG+FD}: this variant contains both the SRG and the FD agent, trained alternately to boost each other. \textit{Ours-SRG+RG}: this variant contains both the SRG and the RG agent, trained alternately. \textit{Ours-SRG+FD+RG (PRL)}: our progressive reinforcement learning framework, which contains all three proposed components: the SRG, the FD agent, and the RG agent.
\begin{table}[tbp]
\caption{Comparisons of recognition accuracy (\%) on Volleyball dataset. ``OF" denotes additional optical flow input.}
\label{tab-volleybal}
\begin{tabular}{lcccc}
\toprule
Methods&Backbone&OF&MCA &MPCA\\
\midrule
HDTM~\cite{DBLP:conf/cvpr/IbrahimMDVM16}&AlexNet&N&81.9&82.9\\
SBGAR~\cite{DBLP:conf/iccv/LiC17}&{\small Inception-v3}&Y&66.9&67.6\\
CERN-2~\cite{DBLP:conf/cvpr/ShuTZ17}&VGG16&N&83.3&83.6\\
SSU~\cite{DBLP:conf/cvpr/BagautdinovAFFS17}&{\small Inception-v3}&N&89.9&-\\
SRNN~\cite{DBLP:conf/wacv/BiswasG18}&AlexNet&N&83.5&-\\
PC-TDM~\cite{yan2018participation}&AlexNet&Y&87.7&88.1\\
stagNet~\cite{DBLP:conf/eccv/QiQLWLG18}&VGG16&N&89.3&-\\
{\small SPA+KD}~\cite{tang2018mining}&VGG16&N&89.3&89.0\\
{\small SPA+KD+OF}~\cite{tang2018mining}&VGG16&Y&90.7&90.0\\
ARG~\cite{wu2019learning}&VGG16&N&\textbf{91.9}&-\\
CRM~\cite{azar2019convolutional}&I3D&Y&\textbf{93.0}&-\\
\midrule
Baseline \cite{DBLP:conf/eccv/QiQLWLG18}&VGG16&N&87.9&-\\
Ours-SRG&VGG16&N&88.3&88.5\\
Ours-SRG+T. A.&VGG16 &N&88.6&88.7\\
Ours-SRG+R. A.&VGG16 &N&88.7&89.0\\
Ours-SRG+FD&VGG16&N&89.5&89.2\\
Ours-SRG+RG&VGG16&N&89.8&91.1\\
Ours-PRL&VGG16&N&\textbf{91.4}&\textbf{91.8}\\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}[tbp]
\centering
\includegraphics[width=0.94\linewidth,height=0.89\linewidth]{./Fig_CM_Volleyball_1.pdf}
\caption{Confusion matrix on the Volleyball dataset.}
\label{CM_Volleyball}
\end{figure}
\subsection{Results on the Volleyball Dataset}
To examine the effectiveness of each component, we compare the proposed PRL against the above baseline and variants. As Table~\ref{tab-volleybal} shows, although built on similar low-level features, our semantic relation graph is superior to the baseline (\text{stagNet w/o Atten.} \cite{DBLP:conf/eccv/QiQLWLG18})
because our semantic relations are explicitly modeled, while the baseline only contains them implicitly in the passing messages. Our SRG+FD boosts the SRG by over 1.2\% (MCA) and 0.7\% (MPCA) by applying the FD agent to filter out ambiguous frames of features, and our SRG+RG improves the performance of the SRG by over 1.5\% (MCA) and 2.6\% (MPCA) by exploiting the RG agent to refine the relations. Our PRL achieves even better performance by combining the advantages of the two agents. Note that the PRL eventually improves by 3.1\% (MCA) over the original SRG, which is larger than the sum of the increments from the two agents, 2.7\% (MCA), indicating that the two agents boost each other through the alternate training procedure. Besides, the agent-equipped variants SRG+FD and SRG+RG perform better than the corresponding attention-equipped variants \text{SRG+T.~A.} and \text{SRG+R.~A.} by 0.9\% and 1.1\% (MCA), respectively. The superiority of the agents is probably due to two reasons: 1) the attention variants can only learn from the annotated activity labels, while our RL-based agents can also learn from historical experience during the policy exploration process; 2) the attention variants are only updated once per video batch, while our agents are updated many times within each single video (cf.~the training flowchart), which enables more fine-grained and video-specific adjustments.
Then, we compare the proposed PRL with other state-of-the-art methods. As shown in Table~\ref{tab-volleybal}, our PRL is on par with the state-of-the-art method without extra optical flow input (ARG \cite{wu2019learning}). Our PRL even outperforms most of the methods that exploit optical flow input (including SBGAR \cite{DBLP:conf/iccv/LiC17}, PC-TDM \cite{yan2018participation}, and SPA+KD+OF \cite{tang2018mining}). Although CRM \cite{azar2019convolutional} performs somewhat better than our PRL, the comparison is not a fair one, because CRM not only exploits extra optical flow input but also utilizes a much larger backbone (I3D~\cite{carreira2017quo}) than ours (VGG16 \cite{DBLP:journals/corr/SimonyanZ14a}).
\begin{figure*}[tbp]
\centering
\includegraphics[width=1.0\linewidth,height=0.39\linewidth]{./Fig6-1.pdf}
\caption{Visualization of the refined SRGs. The first row contains the obtained tracklets and the groundtruth labels of the activity and person actions. The second row contains the refined SRGs and the predicted activity labels. The color of each person represents his or her importance degree. To facilitate visualization, only the relations with the top-5/top-3 (Volleyball/CAD) gate values are shown (white lines). The samples in (a,b) and (c,d) are from the Volleyball and CAD datasets, respectively.}
\label{visualization}
\end{figure*}
In addition, the confusion matrix of the proposed PRL is shown in Fig.~\ref{CM_Volleyball}. As we can see, our PRL achieves promising recognition accuracies ($ \geq 90\%$) on most of the activities. The main failure cases are between ``set" and ``pass" within the left and right subgroups, which is probably due to the very similar actions and positions of the key participants. We also visualize several refined semantic relation graphs in Fig.~\ref{visualization}, where the relations with the top-5 gate values are shown and the importance degree of each person is computed indirectly by summing the connected relation gates (normalized over all persons). In Fig.~\ref{visualization}{\color{red}a}, benefiting from the structured sparsity reward, our RG agent successfully discovers that the subset of relations related to the ``digging" person is the key to determining the activity ``left pass". In Fig.~\ref{visualization}{\color{red}b}, the model predicts ``right winpoint" mainly based on two relation clusters: the cluster characterized by the two ``falling" persons in the left team and the cheering cluster in the right team.
\begin{table}
\caption{Comparisons of recognition accuracy (\%) on CAD dataset. ``OF" denotes additional optical flow input.}
\label{tab-cad}
\centering
\begin{threeparttable}[htb]
\begin{tabular}{lccc}
\toprule
Methods&Backbone&OF&MPCA(\%)\\
\midrule
HDTM \cite{DBLP:conf/cvpr/IbrahimMDVM16}&AlexNet&N&89.6\\
CERN-2 \cite{DBLP:conf/cvpr/ShuTZ17}&VGG16&N&88.3\\
SBGAR \cite{DBLP:conf/iccv/LiC17}&Inception-v3&Y&89.9\\
PC-TDM \cite{yan2018participation}&AlexNet&Y&92.2\\
SPA+KD \cite{tang2018mining}&VGG16&N&92.5\\
SPA+KD+OF \cite{tang2018mining}&VGG16&Y&\textbf{95.7}\\
CRM \cite{azar2019convolutional}&I3D&Y&94.2\\
\midrule
Baseline \cite{DBLP:conf/eccv/QiQLWLG18}&VGG16&N&87.7\tnote{\text{*}}\\
Ours-SRG &VGG16&N&89.4\\
Ours-SRG+R.~A. &VGG16&N&90.0\\
Ours-SRG+T.~A. &VGG16&N&90.1\\
Ours-SRG+FD&VGG16&N&91.1\\
Ours-SRG+RG&VGG16&N&91.4\\
Ours-PRL &VGG16&N&\textbf{93.8}\\
\bottomrule
\end{tabular}
\begin{tablenotes}
\item[\text{*}] MPCA is unavailable, MCA is listed instead.
\end{tablenotes}
\end{threeparttable}
\end{table}
\subsection{Results on the Collective Activity Dataset}
Table~\ref{tab-cad} shows the comparison with different methods on the CAD dataset. Following \cite{yan2018participation,tang2018mining}, the MPCA results of several methods are calculated from the confusion matrices reported in \cite{DBLP:conf/cvpr/HajimirsadeghiY15,DBLP:conf/cvpr/IbrahimMDVM16,DBLP:conf/cvpr/ShuTZ17,DBLP:conf/iccv/LiC17}.
Our PRL outperforms the state-of-the-art method without extra optical flow input (SPA+KD \cite{tang2018mining}) by a margin of 1.3\%. Although SPA+KD+OF \cite{tang2018mining} performs better than our PRL, its main improvement (3.2\%) is due to the extra optical flow information (cf.~Table~\ref{tab-cad}). The backbone of CRM~\cite{azar2019convolutional} (I3D) is much larger than ours (VGG16), making the comparison less meaningful. The detailed confusion matrix of our PRL on the CAD dataset can be found in the \textit{Supplementary Material}.
Furthermore, we analyze the results by visualizing the final SRGs. For the ``Moving" activity in Fig.~\ref{visualization}{\color{red}c}, our method concentrates on the relations among the three moving persons and suppresses the noisy relations caused by the ``Waiting" person. Similarly, in Fig.~\ref{visualization}{\color{red}d}, our method successfully attends to the relations connected to the ``Talking" person and weakens the relations among the three audience members.
\section{Conclusion}
In this work, we propose a novel progressive relation learning method to model and distill the group-relevant actions and interactions in group activities. A graph built on the spatiotemporal features and interactions of individuals is used to explicitly model the semantic relations in group activities. A feature-distilling agent progressively distills the most informative frames of the low-level features, and a relation-gating agent refines the high-level relations in the semantic relation graph. Eventually, our PRL achieves promising results on two widely used benchmarks for group activity recognition.
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
In recent years there has been huge progress in exactly solving
the ${\cal N}=4$ Super-Yang-Mills theory in the planar limit \cite{Bena}-\cite{ArutFrol2}. This is particularly
remarkable as ${\cal N}=4$ SYM is an interacting four-dimensional gauge theory
with highly nontrivial dynamics. All previous examples of exact solvability
in Quantum Field Theories were restricted either to two dimensional theories
or, in the case of higher dimensional theories, to some supersymmetric subsector
or to theories with much simpler dynamics like topological field theories.
The solvability of ${\cal N}=4$ SYM arises due to the AdS/CFT correspondence \cite{Mal, GubKlebPol, Wit},
according to which the gauge theory is equivalent to string theory in
$AdS_5 \times S^5$. Therefore ${\cal N}=4$ SYM avoids the no-go theorems for
integrable field theories in more than two dimensions by translating
its dynamics into properties of the two-dimensional worldsheet QFT of
the superstring in $AdS_5 \times S^5$ which is an integrable QFT.
Since ${\cal N}=4$ SYM is a conformal field theory, all correlation functions of
local operators are, in principle, determined by a much smaller set of data:
the set of conformal dimensions (equivalently anomalous dimensions of
gauge theory operators) and the OPE coefficients. These data can be extracted
from the knowledge of 2- and 3-point correlation functions.
In the AdS/CFT case, however, the anomalous dimensions are extracted not from
2-point correlation functions but directly as eigenvalues of the dilatation
operator, which translates to energies of string states in $AdS_5 \times S^5$.
Therefore the problem of finding all anomalous dimensions reduces to finding
the energy levels of the two-dimensional worldsheet QFT on a cylinder.
Currently we have a very complete understanding of the spectrum of conformal
dimensions which is described by a set of Thermodynamic Bethe Ansatz equations \cite{ArutFrol1}-\cite{ArutFrol2}.
The methods used are similar to the ones employed for relativistic integrable
field theories, although their generalization to the AdS/CFT case is far from
trivial due to many unique features of the worldsheet QFT.
For the case of OPE coefficients, however, there seems to be no alternative
to a direct computation of 3-point correlation functions. It is convenient to
classify the operators into three main groups, depending on their behaviour
at strong coupling: `light', `medium' and `heavy' operators. The `light' operators
are BPS and dual to supergravity fields. Consequently their anomalous dimensions
do not depend on the coupling. The next class of operators are the lightest
massive string states whose dimensions scale like $\lambda^{\f{1}{4}}$. A classical
example of these `medium' operators is the Konishi operator. Finally, the `heavy'
operators have large charges (of the order of $\lambda^{\f{1}{2}}$) and are dual to
classical string states with anomalous dimensions scaling like $\lambda^{\f{1}{2}}$ \cite{H_GubKlebPol}-\cite{H_Tseyt}.
Although very useful, this is only a rough and nonexhaustive classification.
BPS operators with large
charges may for all practical purposes behave like `heavy' operators.
There may be operators with dimensions scaling like $\lambda^{\f{1}{2}}$ which are nevertheless
very quantum and lack a classical string description (like BPS operators
with two large charges).
The techniques for computing 3-point correlation functions
are very well developed for the case of `light' operators, i.e. BPS
operators dual to supergravity fields \cite{Freed}-\cite{ArutFrolBPS}. Unfortunately these OPE coefficients are
protected and do not depend on gauge theory coupling.
For unprotected operators, the techniques for computing even 2-point correlation
functions have been developed only recently \cite{US} (but see also \cite{Tsuji}
and \cite{TseytlinB}). These results
have been used to compute OPE coefficients between two `heavy' and one `light'
operator using the known classical solution corresponding to a 2-point function
and integrating a supergravity propagator over the classical string worldsheet \cite{Zarembo}, \cite{HHL_Costa}, \cite{HHL_RoibTsey}-\cite{HHL_Ahn}. This has been further extended to correlators involving two
Wilson loops and a `light' operator \cite{AldTseyt1},\cite{AldTseyt2}.
An intermediate case recently considered in the literature involved a geodesic
approximation for the three operators \cite{TristanKlose}. Such an approximation may be very
relevant to the case of three `medium' operators which are not sufficiently heavy
to generate an extended, non-pointlike, surface.
The goal of this paper is to compute (the AdS part of) the 3-point correlation
functions of three `heavy' operators. We assume that these operators do not
have any spin in $AdS_5$.
The main difficulty lies in the fact that a novel type of
a classical solution has to be found. Moreover, in contrast to the spectral
problem, there is no analog of this problem for conventional relativistic
integrable field theories, therefore we do not have any guide in this respect.
The computation of the OPE coefficients for three `heavy' operators is
especially interesting in view of the fundamental importance that
the integrable classification of finite-gap spinning string solutions and their
comparison with 1-loop Bethe Ansatz results had in
arriving at the all-loop interpolation. We hope that a similar comparison with
weak coupling data \cite{pert1,pert2} will be very illuminating also in the case
of OPE coefficients.
The plan of this paper is as follows. In section~2, we briefly review the case
of 2-point correlation functions, and in section~3 the general features of the
problem of finding 3-point correlation functions. In section~4, we give an
overview of our approach to this problem, in order for the reader to
not get lost in the technicalities. In section~5, we review Pohlmeyer reduction,
and give our main technical results necessary for later computations ---
we solve functional equations for the products between the solutions
of the linear system on the 3-punctured sphere and give formulas for
reconstructing the classical solution in $AdS_2$ from the Pohlmeyer data.
We then proceed, using these results, to evaluate in the next 3 sections the
two main parts of the AdS contribution to the correlation function. We give our
final result in section~9 and discuss the limits of small and large anomalous
dimensions and the link of the latter to the Painlev{\'e} III transcendent.
We close the paper with a discussion and several appendices with some technical
details.
\section{Two-point correlation functions}
In this section we will briefly review the computation of 2-point correlation functions
for the class of operators that we are considering in this paper, namely
operators corresponding to classical string solutions with no charges in $AdS$ \cite{H_GubKlebPol}-\cite{H_Tseyt}.
The approach introduced in \cite{US} amounts to computing a cylinder amplitude
(with additional wavefunctions included in order to project on the specific
string state that we are interested in)
with the boundary conditions such that the string worldsheet approaches two given
points on the boundary regularized by a cut-off $z=\mathcal E$. This is done by
a classical computation, where the corresponding solution is just a geodesic in
the $AdS$ part and in the $S^5$ part coincides with the unmodified $S^5$
spinning string solution used in
the conventional calculation of the anomalous dimension. Then one performs a saddle
point evaluation of the integral over the modular parameter. The outcome
is\footnote{See specific examples in \cite{US}.} that the saddle point value of
the modular parameter is \emph{purely imaginary} thus effectively making the
worldsheet Euclidean.
This generic pattern indicates that we could have started directly at the saddle
point, with the worldsheet having the topology of a 2-punctured sphere (again
with small disks corresponding to $z=\mathcal E$ cut out), and the \emph{Euclidean}
solution satisfying Virasoro constraints.
The Euclidean solution for the operators in question has the following simple
structure.
The $AdS_5$ part reduces just to a geodesic in the $AdS_2$ subspace which contains
the two gauge theory operator insertion points on the boundary.
Explicitly we have
\begin{equation}
\label{e.geodesic}
x = \f{R}{2}\, \tanh \kappa \tau +x_0 \quad\quad\quad\quad
z = \f{R}{2}\, \f{1}{\cosh \kappa \tau}
\end{equation}
where the distance between the operator insertion points is $x_1-x_2=R$. Imposing
the target space cut-off $z=\mathcal E$ translates into a worldsheet cut-off which limits
the range of $\tau$ to
\begin{equation}
\Delta \tau= \f{2}{\kappa} \log \f{R}{\mathcal E}
\end{equation}
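This follows directly from imposing the cut-off on the geodesic: assuming $\mathcal E \ll R$,
\begin{equation}
\f{R}{2\cosh \kappa\tau}=\mathcal E \quad\Longrightarrow\quad e^{\kappa|\tau|}\simeq \f{R}{\mathcal E}
\quad\Longrightarrow\quad |\tau| \leq \f{1}{\kappa}\log\f{R}{\mathcal E}
\end{equation}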
The $S^5$ part is just the Wick rotated spinning string solution, a simple example
being a circular string with equal angular momenta given in terms of the standard
angular coordinates on $S^3 \subset S^5$ by
\begin{equation}
\psi=\sigma \quad\quad\quad\quad \phi_1=\phi_2=i \omega\tau
\end{equation}
One should note that due to the $i$, the solution is inherently complex and in fact
does not look like any kind of spinning string. Virasoro constraints link $\kappa$
and $\omega$ through $\kappa =\sqrt{1+\omega^2}$. In general we have $\kappa=\Delta$, where
$\Delta$ is the dimension of the operator.
The 2-point correlation function is now obtained by i) evaluating the $AdS_2$
action on the $AdS$ geodesic part and ii) evaluating the $S^5$ action together
with wavefunction contributions, which transforms the action integral into an
\emph{Euclidean energy} integral.
Explicitly we get
\begin{equation}
\exp\underbrace{\left\{ -\sqrt{\lambda} \sqrt{1+\omega^2} \log \f{R}{\mathcal E}
\right\}}_{\text{$AdS$ action}}
\cdot
\exp\underbrace{\left\{ -\sqrt{\lambda} \sqrt{1+\omega^2} \log \f{R}{\mathcal E}
\right\}}_{\text{$S^5$ energy}}
\end{equation}
which reproduces the 2-point correlation function with the correct value
of the anomalous dimension
\begin{equation}
\label{e.twopointR}
\cor{O(x_1) O(x_2)} = \left( \f{\mathcal E}{R} \right)^{2\sqrt{\lambda} \sqrt{1+\omega^2}}
\end{equation}
The wavefunction factors in the $S^5$ part are crucial for the correct answer.
The Virasoro constraint
sets the only free parameter $\kappa$ in the $AdS_2$ geodesic solution
(\ref{e.geodesic}) to be equal to $\Delta$, the (anomalous) dimension of the corresponding
operator\footnote{More precisely the dimension $\mathbf{\Delta}$ is $\mathbf{\Delta}=\sqrt{\lambda}\Delta$.}. Therefore,
the $AdS$ part of the solution is completely determined just by the dimension of the
operator in question and not by any further details of the specific operator.
We will see that the same
property will hold also for 3-point correlation functions. In the following
we will use complex coordinates which, in the present case, read
\begin{equation}
w=e^{\tau+i\sigma} \quad\quad\quad\quad {\overline{w}}=e^{\tau-i\sigma}
\end{equation}
putting the two punctures at $w=0$ and $w=\infty$.
Finally, let us note that
the target space cut-off $\mathcal E$ enters (\ref{e.twopointR}) essentially as
the normalization of our operators.
This is important as in the case of 3-point correlation functions we have to retain
exactly the same normalization of operators as for 2-point functions in order to
extract unambiguously the OPE coefficients. This leads to severe difficulties,
such as
linking the worldsheet cut-offs around the 3 punctures to the single target space
cut-off $z=\mathcal E$. This is highly nontrivial due to the lack of an explicit classical
$AdS$ solution in the case of 3-point correlation functions. If one were to adopt
a different approach\footnote{which should of course be in the end equivalent} of using
vertex operators \cite{TseytlinV1,Buchb,TseytlinB}, then the difficulties
remain but appear in different places.
In the vertex operator approach, one computes the worldsheet integral over the \emph{whole}
punctured sphere with the vertex operator contributions sitting directly at the
punctures. For 2-point functions one then just neglects infinities. For 3-point
functions it is not clear how to control possible finite renormalizations. It would
be very interesting to understand quantitatively the precise dictionary
between the two approaches.
\section{Three-point correlation functions -- general features}
In order to compute the 3-point correlation function of heavy operators, we
have to find a classical solution of string equations of motion in Euclidean
signature with the topology of a sphere with 3 punctures, with the property that
the solution close to each puncture associated with a given gauge theory
operator $O_k$ looks asymptotically like a solution corresponding to a 2-point
correlation function of the operator $O_k$.
The classical bosonic equations of motion in $AdS_5 \times S^5$ reduce
to two independent sets of equations, one on $S^5$, the other on $AdS_5$, which are
coupled together only through the Virasoro constraint
\begin{equation}
T_{AdS_5}(w)+T_{S^5}(w)=0
\end{equation}
where $w$ is the holomorphic worldsheet coordinate, and $T_{AdS_5}(w)$
(resp. $T_{S^5}$) is the classical
energy-momentum tensor of the $AdS_5$ (resp. $S^5$) part of the $\sigma$-model.
For operators which do not have any spins, the $AdS_5$ part of the problem
greatly simplifies. Then, without loss of generality, we put the gauge theory
operator insertion points of all three operators on a single line. Consequently, the
$AdS_5$ part of the string solution is contained in an (Euclidean) $AdS_2$ subspace. The problem,
however, does not trivialize as we are not looking for a minimal surface
but have a prescribed nonzero energy-momentum tensor $T_{AdS_2}(w)$.
Fortunately, the $AdS_2$ energy-momentum tensor can be explicitly expressed just in
terms of the anomalous dimensions of the three operators entering the 3-point
correlation function. From now on we will denote $T_{AdS_2}(w)$ by $T(w)$.
In order to find the explicit form of $T(w)$, recall that the classical solution
should approach, at the punctures, 2-point solutions which are explicitly known.
In particular $T(w)$ for the 2-point solutions is given by
\begin{equation}
T_{2-point}(w)=\f{\Delta^2/4}{w^2}
\end{equation}
Therefore at the punctures $T(w)$ should have at most poles of $2^{nd}$ order with
the leading coefficients $\Delta_k^2/4$ determined by the dimension of the operator
inserted at $w=w_k$. Since $T(w)$ is holomorphic and transforms under inversion
like a $(2,0)$ tensor
\begin{equation}
T(w) \to \f{1}{u^4} T\left( \f{1}{u} \right)
\end{equation}
its form is uniquely determined. Without loss of generality we may put the punctures
at $w=\pm 1$ and $w=\infty$. Then, for the case of equal conformal weights $\Delta$
at $w=\pm 1$ and $\Delta_\infty$ at $w=\infty$, $T(w)$ is given by
\begin{equation}
T(w)= \f{\Delta_\infty^2}{4} \f{w^2+a^2}{(1-w^2)^2} \quad\quad\quad\quad
\text{where}\quad\quad
a^2=\f{4\Delta^2}{\Delta_\infty^2}-1
\end{equation}
In the present paper for simplicity we will predominantly consider the above
symmetric case.
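As a simple consistency check (purely illustrative and not part of the derivation), the pole structure of $T(w)$ can be verified symbolically, e.g. with \texttt{sympy}:
\begin{verbatim}
# Check of T(w): double poles with coefficient Delta^2/4 at w = +-1
# and, after inversion, Delta_inf^2/4 at w = infinity (sketch only).
import sympy as sp

w, u, D, Di = sp.symbols('w u Delta Delta_inf', positive=True)
a2 = 4*D**2/Di**2 - 1
T = Di**2/4 * (w**2 + a2) / (1 - w**2)**2

# leading coefficient of the double pole at w = 1
c1 = sp.limit((w - 1)**2 * T, w, 1)
assert sp.simplify(c1 - D**2/4) == 0

# u^{-4} T(1/u) should have a double pole at u = 0 with
# coefficient Delta_inf^2/4
Tinf = sp.simplify(T.subs(w, 1/u) / u**4)
assert sp.simplify(sp.limit(u**2 * Tinf, u, 0) - Di**2/4) == 0
\end{verbatim}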
To summarize, we thus have to evaluate the action
\begin{equation}
\exp\left(-\f{\sqrt{\lambda}}{\pi} \int_\Sigma {\cal L}_{AdS_2}^{Polyakov} d^2w \right)
\equiv \exp\left(-\f{\sqrt{\lambda}}{\pi} \int_\Sigma \f{\partial z \bar{\partial} z+ \partial x \bar{\partial} x}{z^2} d^2w \right)
\end{equation}
for a classical solution approaching the operator insertion points $x_k$ at $w=-1,1,\infty$
subject to the constraint
\begin{equation}
\f{(\partial z)^2 + (\partial x)^2}{z^2}=T(w)
\end{equation}
\section{Our strategy}
As described in the previous section, the 3-point correlation functions
for three heavy operators with no spins in $AdS_5$ factorize into a product of
an $S^5$ and an $AdS_2$ contribution evaluated for a worldsheet with
the topology of a 3-punctured sphere. Similarly as for 2-point functions
we regularize the worldsheet by cutting out small disks of radii $\varepsilon_i$
around the punctures which are defined by the condition that on their boundaries
\begin{equation}
z=\mathcal E
\end{equation}
where $z$ is the $AdS$ radial coordinate in the Poincare patch ($z=0$ is the $AdS$ boundary). $\mathcal E$ is the target space cut-off which is taken to be very small.
It is necessary to ensure that $z=\mathcal E$ around each puncture in order to have
the same normalization of operators in 2- and 3-point correlation functions so as
to extract the OPE coefficients unambiguously.
For the $AdS_2$ part, we have to evaluate the action of the classical solution, while
for the $S^5$ part we have to include, in addition, contributions from the classical
wavefunctions of the external states. Therefore, the 3-point correlation function
is schematically given by
\begin{equation}
\label{e.decomp}
e^{-\f{\sqrt{\lambda}}{\pi} \int_{\Sigma \setminus \{\varepsilon_i\}} {\cal L}_{AdS_2}^{Polyakov}} \cdot
\underbrace{\Psi_1 \Psi_2 \Psi_3^* e^{-\f{\sqrt{\lambda}}{\pi} \int_{\Sigma \setminus \{\varepsilon_i\}}
{\cal L}_{S^5}^{Polyakov}}}_{
e^{-\f{\sqrt{\lambda}}{\pi} \int_{\Sigma \setminus \{\varepsilon_i\}} \text{$S^5$ contribution}}
}
\end{equation}
Since both exponents have logarithmic divergences around the punctures, it is
convenient to subtract and add $\sqrt{T(w)\overline{T}({\overline{w}})}$ regularizing the integrals.
This yields
\begin{equation}
\label{e.ads}
e^{-\f{\sqrt{\lambda}}{\pi} \int_\Sigma \left( {\cal L}_{AdS_2}^{Polyakov} -\sqrt{T\overline{T}} \right)
-\f{\sqrt{\lambda}}{\pi} \int_{\Sigma \setminus \{\varepsilon_i\}} \sqrt{T\overline{T}}}
\end{equation}
for the $AdS_2$ part and
\begin{equation}
\label{e.sv}
e^{-\f{\sqrt{\lambda}}{\pi} \int_\Sigma \left( \text{$S^5$ contribution} -\sqrt{T\overline{T}} \right)
-\f{\sqrt{\lambda}}{\pi} \int_{\Sigma \setminus \{\varepsilon_i\}} \sqrt{T\overline{T}}}
\end{equation}
for the $S^5$ part. The first terms in the above expressions are now finite and can
be integrated over the whole punctured sphere, while the explicit dependence on the
worldsheet cut-offs appears only in the second integral with a known integrand.
In this paper we will compute the contribution (\ref{e.ads}) together with the
\emph{second} term in (\ref{e.sv}), leaving the remaining factor
\begin{equation}
\label{e.svii}
e^{-\f{\sqrt{\lambda}}{\pi} \int_\Sigma \left( \text{$S^5$ contribution} -\sqrt{T\overline{T}} \right)}
\end{equation}
for further investigation.
In order to compute
\begin{equation}
\label{e.polregi}
e^{-\f{\sqrt{\lambda}}{\pi} \int_\Sigma \left( {\cal L}_{AdS_2}^{Polyakov} -\sqrt{T\overline{T}} \right)}
\end{equation}
we will use Pohlmeyer reduction \cite{Pohl}, \cite{deVega} and adapt the methods of \cite{AldMal} to evaluate this expression.
Firstly, one transforms the above integral into an integral of the wedge product
of two closed 1-forms on a double cover of $\Sigma$. Secondly, one uses Riemann reciprocity
(Riemann bilinear identity) to express the integral in terms of products of integrals
of the 1-forms on certain open cycles. Thirdly, one links the above 1-form integrals
to the asymptotics in the spectral parameter ($\xi \to 0$) of appropriate skew products between
specific solutions (associated with each puncture) of the Pohlmeyer linear system.
Thus the evaluation of the integral (\ref{e.polregi}) is reduced to the knowledge
of appropriate skew products as a function of the spectral parameter.
The remaining integral
\begin{equation}
\label{e.divint}
e^{-\f{\sqrt{\lambda}}{\pi} \int_{\Sigma \setminus \{\varepsilon_i\}} \sqrt{T\overline{T}}}
\end{equation}
can be evaluated analytically in the small $\varepsilon_i$ limit. The main difficulty lies
in linking the worldsheet cut-offs $\{\varepsilon_i\}$ to the target space cut-off $z=\mathcal E$,
without an explicit knowledge of the classical solution. To do that we need formulas
for reconstructing the classical solution from Pohlmeyer data. Fortunately, since
the classical solution should approach the known solutions for 2-point functions
close to the punctures, we can get explicit formulas relating
the positions of the
gauge theory operator insertion points $x_k$ and the target space cut-off $\mathcal E$
to the worldsheet cut-offs $\{\varepsilon_i\}$ in terms of the skew products mentioned
above, but this time evaluated at $\xi=1$. Using this knowledge, the (two copies of the)
integral (\ref{e.divint}) yield the standard space-time dependent part of the
3-point CFT correlation function, as well as a finite contribution to the OPE
coefficient expressed in terms of the skew products at $\xi=1$.
The skew products between the specific solutions (of the Pohlmeyer linear system)
associated to each puncture
are therefore a key ingredient in the evaluation
of the 3-point correlation function. We will often refer to these
chosen solutions as `elementary solutions'.
In the following section we introduce the main
features of Pohlmeyer reduction, the elementary solutions associated with each puncture
and define the skew products. Then we derive and solve functional equations for
the skew products of the elementary solutions as a function of the spectral parameter $\xi$.
Finally we state the reconstruction formulas which link the operator insertion points
and the target space cut-off to appropriate skew products.
After this preparatory part, we proceed to evaluate the integral (\ref{e.polregi})
in section~7 and the divergent contribution (\ref{e.divint}) in section~8.
Then we put together the obtained formulas into the final AdS contribution
to the OPE coefficient and analyze the limits of large and small anomalous
dimensions as well as the extremal limit.
Before we end this overview, let us remark that the same decomposition (\ref{e.decomp})
could also be performed for a 2-punctured sphere corresponding to a 2-point
correlation function (of course in this case only two wavefunctions would appear). Then it turns out that both the `nontrivial' parts (\ref{e.polregi}) and (\ref{e.svii})
are identically zero. However it is interesting to note that they vanish for quite
different reasons. The AdS part (\ref{e.polregi}) vanishes because it is evaluated
on a trivial classical solution -- a point-like string moving along a geodesic.
The corresponding Pohlmeyer function is just identically zero and consequently
(\ref{e.polregi}) vanishes. On the $S^5$, however, we deal with arbitrarily complicated
finite-gap solutions of arbitrary genus, which would have a highly nontrivial Pohlmeyer
description. Yet, the wavefunction contributions transform the classical action
into an integral of the energy density (in an appropriate coordinate system)
and the resulting $S^5$ contribution exactly cancels the integral of $\sqrt{T\overline{T}}$. It is
tempting to speculate that a similar simplification may occur for the case of
3-point functions.
\section{Pohlmeyer reduction}
Contrary to the well known case of Pohlmeyer reduction for minimal surfaces in
$AdS_3$ \cite{Pohl}, \cite{deVega}, \cite{AldMal}, we need to perform Pohlmeyer reduction for classical solutions in $AdS_2$ but
with a prescribed nonzero energy-momentum tensor. Thus the classical solutions in $AdS_2$ are of course
\emph{not} minimal surfaces. On the other hand, the full string solution, which takes into account both $AdS_2$ and $S^5$ contributions, is a minimal surface.
The Pohlmeyer reduction for this case amounts to defining
$\tilde{\gm}(w,{\overline{w}})$ through
\begin{equation}
\label{e.pohl1}
\f{\partial x \bar{\partial} x+ \partial z \bar{\partial} z}{z^2}= \sqrt{T\overline{T}} \cosh \tilde{\gm}
\end{equation}
where $T$ is the energy-momentum tensor $T(w)$.
Then $\tilde{\gm}(w,{\overline{w}})$ satisfies a modified form of the sinh-Gordon equation
\begin{equation}
\label{e.pohl1eom}
\partial \bar{\partial} \tilde{\gm}= \sqrt{T\overline{T}} \sinh \tilde{\gm}
\end{equation}
The solution corresponding to a 2-point function is just $\tilde{\gm}(w,{\overline{w}})\equiv 0$.
Consequently, the boundary conditions close to each puncture are
\begin{equation}
\tilde{\gm} \to 0
\end{equation}
For the case relevant to 3-point correlation functions, $T(w)$ has two zeroes, and
thus the form of Pohlmeyer reduction given by (\ref{e.pohl1}) is inconvenient as
it would imply that all first derivatives vanish
\begin{equation}
\partial z=\bar{\partial} z=\partial x=\bar{\partial} x=0
\end{equation}
at the zeroes of $T(w)$. This would be a very nongeneric situation as each
such single equation gives a codimension one subspace. Their intersection
is generically empty. This is even the case for pointlike strings appearing
in 2-point functions. Consequently we will assume,
as is the case for polygonal Wilson loops, that the right hand side of (\ref{e.pohl1})
is everywhere nonzero. This implies that $\tilde{\gm}$ has to have logarithmic
singularities at the zeros of $T(w)$.
To avoid this drawback, it is convenient to redefine $\tilde{\gm}$ through
\begin{equation}
\label{e.gmt}
\tilde{\gm} =2\gamma-\f{1}{2} \log T\overline{T}
\end{equation}
Now (\ref{e.pohl1}) takes the form
\begin{equation}
\f{\partial x \bar{\partial} x+ \partial z \bar{\partial} z}{z^2}= \f{1}{2} \left( e^{2\gamma}+ T\overline{T} e^{-2\gamma} \right)
\end{equation}
which does not lead to any problem at the zeroes of $T(w)$.
The equation of motion becomes
\begin{equation}
\label{e.mshgeom}
\partial \bar{\partial} \gamma=\f{1}{4} \left( e^{2\gamma}- T\overline{T} e^{-2\gamma} \right)
\end{equation}
This is virtually the same as the setup for polygonal Wilson loops \cite{AldMal}, \cite{AMSV}, but with the polynomial
defining the polygonal Wilson loop substituted by $T(w)$.
We will discuss the similarities and differences in more detail at the end
of the present section.
It is well known that the modified sinh-Gordon model is integrable.
This is easiest to verify by making a holomorphic change of worldsheet
coordinates which maps this model into the ordinary sinh-Gordon model.
However, due to the rather complicated analytical structure
of the resulting domain we will not use this mapping in the sequel.
Below we review the main features of the integrability of sinh-Gordon model which
will be important for us later.
There exists a family of flat connections parametrized by the spectral parameter
$\xi$. We will also use the parametrization
\begin{equation}
\xi = e^\th
\end{equation}
The flat connection $J=J_w\, dw+J_{\overline{w}}\, d{\overline{w}}$ has the following components
\begin{equation}
\label{e.flatcon}
J_w= \f{1}{2} \arr{\partial \gamma}{-\f{1}{\xi} e^\gamma \;}{-\f{1}{\xi} e^{-\gamma} T}{-\partial \gamma}
\quad\quad\quad\quad
J_{{\overline{w}}}=\f{1}{2} \arr{-\bar{\partial} \gamma}{-\xi e^{-\gamma} \overline{T} \;}{-\xi e^\gamma}{\bar{\partial} \gamma}
\end{equation}
Flatness is equivalent to the compatibility of the associated linear system
\begin{equation}
\label{e.linear}
\partial \Psi+J_w \Psi =0 \quad\quad\quad\quad \bar{\partial} \Psi+J_{{\overline{w}}} \Psi =0
\end{equation}
which in turn is equivalent to the equation of motion (\ref{e.mshgeom}).
Another useful decomposition of the flat connection is
\begin{equation}
J=\f{1}{\xi}\, \Phi_w\, dw +A+ \xi\, \Phi_{{\overline{w}}}\, d{\overline{w}}
\end{equation}
using which we may write the string action as
\begin{equation}
\f{\partial x \bar{\partial} x+ \partial z \bar{\partial} z}{z^2}= 2\, \mbox{\rm tr}\, \Phi_w \Phi_{{\overline{w}}}
\end{equation}
Certain specific solutions of the linear system (\ref{e.linear}) associated with
each puncture will be of key importance in the following. Since close to the punctures
\begin{equation}
T(w) \sim \f{\Delta^2/4}{w^2}
\end{equation}
and
\begin{equation}
\gamma \sim \f{1}{4} \log T(w) \overline{T}({\overline{w}})
\end{equation}
we get two solutions, which close to the puncture behave like
\begin{equation}
\tilde{\Psi}_1= w^{\f{\Delta}{4 \xi}} {\overline{w}}^{\f{\Delta}{4} \xi}
\vc{w^{\f{1}{4}} {\overline{w}}^{-\f{1}{4}}}{w^{-\f{1}{4}} {\overline{w}}^{\f{1}{4}}}
\quad\quad\quad\quad
\tilde{\Psi}_2= w^{-\f{\Delta}{4 \xi}} {\overline{w}}^{-\f{\Delta}{4} \xi}
\vc{w^{\f{1}{4}} {\overline{w}}^{-\f{1}{4}}}{-w^{-\f{1}{4}} {\overline{w}}^{\f{1}{4}}}
\end{equation}
It is clear that these solutions have nontrivial monodromies $e^{\pm i \tilde{p}(\xi)}$
around the puncture $w=0$ with
\begin{equation}
\tilde{p}(\xi)=\Delta \f{\pi}{2} \left( \xi -\f{1}{\xi} \right)+\pi
\end{equation}
It is in fact convenient to get rid of the $\pi$ by a gauge transformation
$\Psi=V \tilde{\Psi}$ with
\begin{equation}
\label{e.vgauge}
V=\arr{\left(\f{(w-w_1)(w-w_2)(w-w_3)}{({\overline{w}}-{\overline{w}}_1)({\overline{w}}-{\overline{w}}_2)({\overline{w}}-{\overline{w}}_3)}\right)^{-\f{1}{4}} }{0}{0}{
\left(\f{(w-w_1)(w-w_2)(w-w_3)}{({\overline{w}}-{\overline{w}}_1)({\overline{w}}-{\overline{w}}_2)({\overline{w}}-{\overline{w}}_3)}\right)^{\f{1}{4}} }
\end{equation}
Then our final basis of solutions associated to the puncture at $w=w_1$ is
\begin{eqnarray}
\label{e.assol1}
\Psi_1 &=& \f{i}{\sqrt{2}} (w-w_1)^{\f{\Delta}{4 \xi}} ({\overline{w}}-{\overline{w}}_1)^{\f{\Delta}{4} \xi}
\vc{u_1}{u_1^{-1}} \\
\label{e.assol2}
\Psi_{\bar{1}} &=& \f{i}{\sqrt{2}} (w-w_1)^{-\f{\Delta}{4 \xi}}
({\overline{w}}-{\overline{w}}_1)^{-\f{\Delta}{4} \xi}
\vc{u_1}{-u_1^{-1}}
\end{eqnarray}
where the constants $u_1$ are given by
\begin{equation}
u_1=\f{({\overline{w}}_{12}{\overline{w}}_{13})^{\f{1}{4}}}{(w_{12} w_{13})^{\f{1}{4}}}
\end{equation}
with $w_{ij}=w_i-w_j$. The solutions $1$ (i.e. $\Psi_1$) and $\bar{1}$ (i.e. $\Psi_{\bar{1}}$)
have the monodromies $e^{ip(\xi)}$ and $e^{-ip(\xi)}$
with the pseudomomentum given by
\begin{equation}
\label{e.pseudo}
p(\xi)=\Delta \f{\pi}{2} \left( \xi -\f{1}{\xi} \right) \quad (\equiv \Delta \pi \sinh \th)
\end{equation}
Several comments are in order here. These solutions can be continued to the
neighborhoods of the other punctures. Of course we do not know their analytical expressions
so we cannot perform this explicitly. Generically these solutions will no longer be
eigenstates of the monodromy operator around \emph{other} punctures. However, since
the space of solutions of the linear system is two-dimensional, we can express $1$ and
$\bar{1}$ as linear combinations\footnote{with coefficients depending just on the spectral
parameter} of an analogous basis $k$ and $\bar{k}$ at the puncture $w=w_k$.
It is exactly these coefficients which are the key ingredients for the computation of the
AdS part of the 3-point correlation function.
In order
to fix an inherent ambiguity associated with nontrivial monodromy, we have to fix
once and for all the path of analytical continuation, whose detailed form will not
be important for us.
It is clear that the pseudomomentum of the elementary solutions obeys the important
general property
\begin{equation}
p(e^{i\pi} \xi)=-p(\xi)
\end{equation}
This suggests that it should be possible to obtain the second solution $\bar{1}$ from
the first $1$. Since just changing $\xi \to e^{i\pi} \xi$ modifies the expressions
for the flat connection, one has to perform in addition a similarity transformation
\begin{equation}
U J_{w,{\overline{w}}}(w,{\overline{w}};\xi) U^{-1}=J_{w,{\overline{w}}}(w,{\overline{w}};e^{i\pi}\xi)
\end{equation}
with $U=i\sigma_3$ to compensate. Therefore the second solution can be obtained from
the first $\Psi(w,{\overline{w}};\xi)$ through
\begin{equation}
\label{e.sg3prop}
\Psi_{\bar{k}}(w,{\overline{w}};\xi)=\sigma_3 \Psi_k(w,{\overline{w}};e^{i\pi}\xi)
\end{equation}
This is a crucial property which allows for the formulation of a set of functional
equations for the overlap coefficients.
Let us close this section with a comparison of the present set-up of a 3-point
correlation function in $AdS_2$ with the case of Pohlmeyer reduction
for polygonal Wilson loops in $AdS_3$.
In both cases we have the same modified sinh-Gordon model, but with the modification
defined in terms of functions with quite different analytical properties. In the case of
polygonal Wilson loops we have a polynomial with a single asymptotic region (covered by
several Stokes sectors), here we have a rational function with three ($2^{nd}$ order) poles
and thus we have three distinct asymptotic regions. In the Wilson loop case, only
the `small' solution was unambiguously defined, while here two solutions are uniquely specified
as eigenfunctions of the monodromy operator at each puncture. Finally, the spacetime
picture is quite different. In the Wilson loop case, the target-space was $AdS_3$ and
one had natural `left-' and `right-' linear problems. Here the target space is
one dimension less ($AdS_2$) and we have to develop appropriate reconstruction
formulas and impose boundary conditions characteristic of a 3-point correlation
function (i.e. fixing the boundary coordinates of the operator insertion
points $x_k$ and the target-space cut-off $z=\mathcal E$).
\subsection{Overlaps}
In this section we will derive and solve functional equations for the overlaps
between the elementary solutions associated with each puncture defined in the previous
section. For any two solutions of the linear system (\ref{e.linear}) $\Psi_k$ and $\Psi_l$,
one defines the antisymmetric product (skew-product)
\begin{equation}
\ss{k}{l}
\end{equation}
which is the determinant of the matrix formed by the column vectors $\Psi_k$
and $\Psi_l$. It is a function of the spectral parameter $\xi$ (or equivalently $\th$).
Our elementary solutions (\ref{e.assol1})-(\ref{e.assol2}) have
the canonical normalization
\begin{equation}
\sS{k}{k}=1
\end{equation}
A characteristic feature of the product $\ss{k}{l}$ is that for \emph{any} four
solutions the relevant products obey a purely algebraic relation called the Schouten
identity:
\begin{equation}
\ss{i}{j}\ss{k}{l}+\ss{i}{l}\ss{j}{k}+\ss{i}{k}\ss{l}{j}=0
\end{equation}
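For two-component solutions the Schouten identity is a simple fact of linear algebra; a minimal symbolic check (purely illustrative) reads:
\begin{verbatim}
# Illustrative check of the Schouten identity for 2-component vectors;
# skew(p, q) is the determinant of the 2x2 matrix with columns p, q.
import sympy as sp

def skew(p, q):
    return p[0]*q[1] - p[1]*q[0]

i, j, k, l = [sp.Matrix(sp.symbols(f'{n}1 {n}2')) for n in 'ijkl']
lhs = skew(i, j)*skew(k, l) + skew(i, l)*skew(j, k) \
      + skew(i, k)*skew(l, j)
assert sp.expand(lhs) == 0
\end{verbatim}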
In our case we have 6 distinguished solutions of the linear system --
$1,\bar{1},2,\bar{2},3,\bar{3}$. Our aim is to find the skew-products between
these solutions as functions of $\th$, given the set of conformal weights $\Delta_1$,
$\Delta_2$ and $\Delta_3$.
It is convenient to repackage the products between the various solutions into
connection matrices $M_{kl}$ which transform the coordinates of a solution
in the basis associated to the puncture $l$ to the coordinates in the basis
associated to the puncture $k$.
The equation
\begin{equation}
\vc{\gamma}{\delta} =\underbrace{\arr{A}{B}{C}{D}}_{M_{kl}} \vc{\alpha}{\beta}
\end{equation}
amounts to the following equality between the elementary solutions
\begin{equation}
\gamma \Psi_k+\delta \Psi_{\bar{k}}= \alpha \Psi_l+\beta \Psi_{\bar{l}}
\end{equation}
This means that
\begin{eqnarray}
\Psi_l &=& A \Psi_k+C \Psi_{\bar{k}} \\
\Psi_{\bar{l}} &=& B \Psi_k+D \Psi_{\bar{k}}
\end{eqnarray}
Now taking appropriate products gives an expression for $M_{kl}$ in terms of our fundamental products.
\begin{equation}
M_{kl}=\arr{-\Ss{k}{l}}{-\SS{k}{l}}{\ss{k}{l}}{\sS{k}{l}}
\end{equation}
The obvious compatibility conditions between the connection matrices
\begin{equation}
\label{e.compat}
M_{km}=M_{kl} M_{lm} \quad\quad\quad\quad M_{kl} M_{lk}=id
\end{equation}
are in fact \emph{equivalent} to the full set of Schouten identities. This can be easily seen by considering various choices for the solutions entering Schouten identities and comparing with appropriate elements
of the matrix products (\ref{e.compat}).
The full set of functional relations for the products $\ss{k}{l}$ thus comprises the compatibility conditions (\ref{e.compat}) and the triviality of the total monodromy
\begin{equation}
\label{e.monodromyeq}
\Omega_1 M_{13} \Omega_3 M_{32} \Omega_2 M_{21}=id
\end{equation}
where
\begin{equation}
\Omega_k=\arr{e^{ip_k(\th)}}{0}{0}{e^{-ip_k(\th)}}
\end{equation}
As they stand, the equations (\ref{e.compat}) and (\ref{e.monodromyeq}) are
a complicated set of constraints for 12 unknown products. The key property which
allows us to transform them into a set of solvable functional equations is the
property (\ref{e.sg3prop}). Using this construction we may relate 6 of the 12 unknown
products to the other 6 but evaluated at a shifted value of the spectral parameter.
Explicitly suppose that the $k$ elementary solution (eigenvector of the monodromy
matrix around the puncture $w_k$ with the eigenvalue $e^{ip_k}$) is
\begin{equation}
\Psi_k(w,{\overline{w}};\xi)=\vc{a_k}{b_k}
\end{equation}
Then the second solution $\bar{k}$ (with eigenvalue $e^{-ip_k}$) is obtained through
\begin{equation}
\Psi_{\bar{k}}(w,{\overline{w}};\xi)=\sigma_3 \Psi_k(w,{\overline{w}};\xi e^{i\pi})\equiv
\vc{a_k^{++}}{-b_k^{++}}
\end{equation}
where the superscript `$+$' denotes the shift $\th \to \th+i\pi/2$. Using
the fact\footnote{This follows from $\Psi_k^{++++}=\lambda_k \Psi_k$ and an argument that
$\lambda_k=1$.} that $a_k^{++++}=a_k$ and $b_k^{++++}=b_k$, it follows that
\begin{eqnarray}
\SS{k}{l}&=& -\ss{k}{l}^{++} \\
\Ss{k}{l}&=& -\sS{k}{l}^{++}
\end{eqnarray}
Now the relations (\ref{e.compat}) and (\ref{e.monodromyeq}) become functional
equations for just 6 products.
\subsubsection*{Solution of the functional relations}
In this section we will solve the full set of functional equations (\ref{e.compat})
and (\ref{e.monodromyeq}).
Let us first define the three functions
\begin{eqnarray}
X_{32} &\equiv& \ss{3}{2} \ss{3}{2}^{++} \\
X_{3\bar{2}} &\equiv& \sS{3}{2} \sS{3}{2}^{++} \\
X_{2\bar{1}} &\equiv& \sS{2}{1} \sS{2}{1}^{++}
\end{eqnarray}
Once we determine them explicitly, the products $\ss{3}{2}$,
$\sS{3}{2}$ and $\sS{2}{1}$ will be expressed through convolution
with a $\cosh$ kernel and zero-mode parts. The remaining products $\ss{2}{1}$,
$\ss{3}{1}$ and $\sS{3}{1}$ will turn out to be expressed in terms of
the first three and the pseudomomenta. In fact for our applications, it suffices
to know the formulas for the products between the unbarred solutions: $\ss{1}{2}$,
$\ss{2}{3}$ and $\ss{3}{1}$ -- so all of them will be expressed through $X_{32}$ and
some permutation of indices. However in this section, for completeness,
we will solve all equations.
We start from the equation $M_{32}M_{21}=M_{31}$. This just expresses
$\ss{3}{1}$ and $\sS{3}{1}$ through the Schouten identities:
\begin{eqnarray}
\ss{3}{1} &=& \ss{3}{2} \sS{2}{1}^{++}+\ss{2}{1}\sS{3}{2} \\
\sS{3}{1} &=& \ss{3}{2} \ss{2}{1}^{++}+\sS{2}{1}\sS{3}{2}
\end{eqnarray}
Then define $Y_1$ and $Y_3$ as
\begin{eqnarray}
Y_1 &=& \f{\sS{3}{2} \ss{2}{1}}{\ss{3}{1}} \\
Y_3 &=& \f{\sS{3}{2} \sS{2}{1}}{\sS{3}{1}}
\end{eqnarray}
We can express $\ss{2}{1}$ in terms of $Y_1$. Plugging the results into
the formula for $Y_3$ we see that we can express $X_{3\bar{2}}$ as
\begin{equation}
X_{3\bar{2}} = X_{32} \f{Y_1^{++} Y_3}{(1-Y_1^{++})(1-Y_3)}
\end{equation}
At this stage it is convenient to rewrite the monodromy equation (\ref{e.monodromyeq})
in the form
\begin{equation}
\label{e.monodromy}
M_{32} \Omega_2 M_{21} = \Omega_3^{-1} M_{31} \Omega_1^{-1}
\end{equation}
The entries of (\ref{e.monodromy}) are in fact the counterparts of the $\bar{Y}$
functions introduced by Maldacena and Zhiboedov (\cite{MaldZhib} fig. 5).
The equation (\ref{e.monodromy}) enables us to determine $Y_1$ and $Y_3$ defined
earlier. Explicitly we obtain
\begin{eqnarray}
Y_1 &=& \f{1-e^{i(p_3-p_1-p_2)}}{e^{2ip_2}-e^{-2ip_2}} \\
Y_3 &=& \f{1-e^{i(p_3+p_1-p_2)}}{e^{2ip_2}-e^{-2ip_2}}
\end{eqnarray}
Now we proceed to the equation $M_{21} M_{13}=M_{23}$ which enables us to
determine $X_{2\bar{1}}$, and finally the equation $M_{13} M_{32}=M_{12}$
determines $X_{32}$. We check that all the remaining compatibility conditions
of (\ref{e.compat}) are satisfied.
Consequently, the final functional equations for $\ss{3}{2}$, $\sS{3}{2}$
and $\sS{2}{1}$ read
\begin{eqnarray}
\ss{3}{2} \ss{3}{2}^{++} &=& \f{\sin \f{p_1-p_2-p_3}{2} \sin \f{p_1+p_2+p_3}{2}}{\sin p_2 \sin p_3} \\
\sS{3}{2} \sS{3}{2}^{++} &=& \f{\sin \f{p_3-p_1-p_2}{2} \sin \f{p_2-p_1-p_3}{2}}{\sin p_2 \sin p_3} \\
\sS{2}{1} \sS{2}{1}^{++} &=& \f{\sin \f{p_1-p_2-p_3}{2} \sin \f{p_2-p_1-p_3}{2}}{\sin p_1 \sin p_2}
\end{eqnarray}
with the right hand sides being exactly our functions $X_{32}$, $X_{3\bar{2}}$ and
$X_{2\bar{1}}$. In the above expressions we did not use the specific form of $p_k(\th)$
given by (\ref{e.pseudo}) but only the generic property
\begin{equation}
p_k(\th+ i \pi)=-p_k(\th)
\end{equation}
Therefore, the above solution may have a much greater range of applicability
than the specific case of no spin in $AdS_5$ that we consider in the present paper.
Let us now specialize to the pseudomomenta (\ref{e.pseudo}) and use the parametrization $\xi=e^\th$. Then
\begin{equation}
p_k(\th)=\Delta_k \pi \sinh \th
\end{equation}
The above functional equations can be recast in the form
\begin{eqnarray}
\ss{3}{2}^+ \ss{3}{2}^{-} &=& -\f{\sinh(\f{\Delta_2+\Delta_3-\Delta_1}{2} \pi \cosh\th) \sinh(\f{\Delta_1+\Delta_2+\Delta_3}{2} \pi \cosh\th)}{
\sinh( \Delta_2 \pi \cosh\th ) \sinh( \Delta_3 \pi \cosh\th ) } \\
\label{eq2}
\sS{3}{2}^+ \sS{3}{2}^{-} &=& \f{\sinh(\f{\Delta_1+\Delta_3-\Delta_2}{2} \pi \cosh\th) \sinh(\f{\Delta_1+\Delta_2-\Delta_3}{2} \pi \cosh\th)}{
\sinh( \Delta_2 \pi \cosh\th ) \sinh( \Delta_3 \pi \cosh\th ) } \\
\label{eq3}
\sS{2}{1}^+ \sS{2}{1}^{-} &=& \f{\sinh(\f{\Delta_2+\Delta_3-\Delta_1}{2} \pi \cosh\th) \sinh(\f{\Delta_1+\Delta_3-\Delta_2}{2} \pi \cosh\th)}{
\sinh( \Delta_1 \pi \cosh\th ) \sinh( \Delta_2 \pi \cosh\th ) }
\end{eqnarray}
As mentioned before, for our purposes we will be interested in the solution of the
first equation. The formulas for $\ss{1}{2}$ and $\ss{3}{1}$ can then be obtained
simply by a permutation of the $\Delta_i$'s.
The right hand side of the first equation has the property that it approaches
a constant when $\th \to \pm \infty$, which makes the solution simpler.
The basic functional equation to solve is
\begin{equation}
\label{e.basic}
f_a^+ f_a^- =1-e^{-a \pi \cosh\th}
\end{equation}
which can be solved by convolution
\begin{equation}
f_a(\th)=\exp \int_{-\infty}^\infty \f{d\th'}{2\pi} \f{\log\left( 1-e^{-a \pi
\cosh\th'} \right)}{ \cosh( \th -\th')}
\end{equation}
Therefore we get the following expression for the product $\ss{3}{2}$:
\begin{equation}
\label{e.s32}
\ss{3}{2}(\th)=i e^{M e^{\th} +M^* e^{-\th}} \cdot
\f{f_{\Delta_2+\Delta_3-\Delta_1}(\th) f_{\Delta_2+\Delta_3+\Delta_1}(\th)}{f_{2\Delta_2}(\th)f_{2\Delta_3}(\th)}
\end{equation}
where the first factor is a zero-mode part depending on two constants $M$ and $M^*$.
These constants can be found from the leading WKB asymptotics of $\ss{3}{2}(\th)$
which can be found independently. We will discuss this part in section~8 and
Appendix~C.
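In practice, $f_a(\th)$ (and hence (\ref{e.s32})) is straightforward to evaluate numerically; a minimal sketch, valid for real $\th$ where the integral representation applies, is:
\begin{verbatim}
# Quadrature evaluation of f_a(theta); the integrand decays doubly
# exponentially, so a finite cut-off in theta' suffices (sketch only).
import numpy as np
from scipy.integrate import quad

def f(a, theta):
    integrand = lambda t: np.log1p(-np.exp(-a*np.pi*np.cosh(t))) \
                          / np.cosh(theta - t)
    val, _ = quad(integrand, -10.0, 10.0, limit=200)
    return np.exp(val/(2*np.pi))

# f_a -> 1 as theta -> +-infinity, since the right hand side of the
# functional equation tends to 1 there
print(f(2.0, 0.0), f(2.0, 8.0))
\end{verbatim}
Note that verifying the shifted functional equation directly from this representation requires care with the kernel pole that is crossed at $\mbox{Im}\, \th=\pm\pi/2$, so we use it only in the strip $|\mbox{Im}\,\th|<\pi/2$.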
The formula (\ref{e.s32}) is the key formula of this section. We will use it in
the following to obtain the AdS contribution to the 3-point correlation functions.
Before we end this section, for completeness, let us discuss briefly
the solution of equations (\ref{eq2})
and (\ref{eq3}). The right hand sides of these equations do not approach a constant
when $\th \to \pm \infty$ so we cannot directly use the convolution with the $\cosh$
kernel. Apart from (\ref{e.basic}) we just have to consider in addition
\begin{equation}
\tilde{f}_a^+ \tilde{f}_a^- =e^{\f{a}{2} \pi \cosh \th}
\end{equation}
with the solution
\begin{equation}
\tilde{f}_a(\th)=e^{-\f{a}{2} \th \sinh\th}
\end{equation}
This will then solve the functional equations for $\sS{3}{2}$ and $\sS{2}{1}$.
However in this case the zero mode part is undetermined. We will not consider this
issue further since we do not need these expressions in the remaining part of the paper.
\subsection{Reconstruction formulas}
In this section we will show how one can reconstruct the string solution in
the $AdS_2$ target space from the Pohlmeyer data.
The explicit expressions for the string solutions are important for two reasons.
Firstly, the correlation function has to be regularized by making a cut-off at $z=\mathcal E$.
This has to be translated into a worldsheet cut-off around
each puncture. Secondly, we need to have control over the coordinates of
the operator insertion points $x_k$ in gauge theory. In particular the standard
conformal dependence on $x_k$ should arise automatically.
We will show that the string solution can be reconstructed from \emph{two} given
solutions $\Psi_A$ and $\Psi_B$ of the linear system for $\th=0$ ($\xi=1$) normalized by
$\ss{\Psi_A}{\Psi_B}=1$. Equivalently, it is determined by the coefficients
$\alpha$, $\beta$, $\gamma$ and $\delta$ of
\begin{equation}
\label{e.phi12}
\Psi_A=\alpha \Psi_1 +\beta \Psi_{\bar{1}} \quad\quad\quad\quad \Psi_B=\gamma \Psi_1 +\delta \Psi_{\bar{1}}
\end{equation}
satisfying $\alpha \delta-\beta \gamma=1$. These two solutions can also be combined
into a $2\times 2$ matrix as
\begin{equation}
\hat{\Psi}= \left( \Psi_A \Psi_B \right) \equiv \arr{a}{b}{c}{d}
\end{equation}
\subsubsection*{Global embedding coordinates}
We will present reconstruction formulas in the global embedding coordinates
\begin{equation}
Y^1 = \f{-1}{2z} (1-x^2-z^2) \quad\quad
Y^2 = \f{1}{2z} (1+x^2+z^2) \quad\quad
Y^3 = \f{x}{z}
\end{equation}
satisfying
\begin{equation}
(Y^1)^2-(Y^2)^2+(Y^3)^2=-1 \quad\quad
(dY^1)^2-(dY^2)^2+(dY^3)^2=\f{dx^2+dz^2}{z^2}
\end{equation}
Once we know $Y^i$, the Poincare coordinates may be easily extracted through
\begin{equation}
Y^2-Y^1=\f{1}{z} \quad\quad\quad\quad Y^3=\f{x}{z}
\end{equation}
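These relations can be verified mechanically; an illustrative \texttt{sympy} check:
\begin{verbatim}
# Check that the global embedding satisfies the hyperboloid constraint
# and reproduces the Poincare metric (illustrative sketch).
import sympy as sp

x, z, dx, dz = sp.symbols('x z dx dz', real=True)
Y1 = -(1 - x**2 - z**2)/(2*z)
Y2 = (1 + x**2 + z**2)/(2*z)
Y3 = x/z

assert sp.simplify(Y1**2 - Y2**2 + Y3**2 + 1) == 0

# pull back the flat (+,-,+) metric with dY = Y_x dx + Y_z dz
dY = [sp.diff(Y, x)*dx + sp.diff(Y, z)*dz for Y in (Y1, Y2, Y3)]
assert sp.simplify(dY[0]**2 - dY[1]**2 + dY[2]**2
                   - (dx**2 + dz**2)/z**2) == 0
\end{verbatim}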
\subsubsection*{Reconstruction formulas}
The reconstruction formula for the string solution is
\begin{equation}
\label{e.YIrecon}
Y^I=\f{1}{2} \mbox{\rm tr}\,\left( \tilde{\sigma}^I C \hat{\Psi}^T D \hat{\Psi} \right)
\end{equation}
where
\begin{equation}
C =\arr{0}{1}{-1}{0} \quad\quad\quad\quad D=\arr{0}{i}{i}{0}
\end{equation}
and $\tilde{\sigma}$ are related to the standard Pauli matrices by
\begin{equation}
\tilde{\sigma}^1=\sigma^1 \quad \quad
\tilde{\sigma}^2=i\sigma^2 \quad \quad
\tilde{\sigma}^3=\sigma^3
\end{equation}
Using the equations
\begin{equation}
\partial \hat{\Psi}+J \hat{\Psi}=0 \quad\quad\quad\quad \bar{\partial} \hat{\Psi}+\bar{J} \hat{\Psi}=0
\end{equation}
written in the original gauge (\ref{e.flatcon}), we may verify that
\begin{eqnarray}
(Y^I)^2 &=& -1 \\
(\partial Y^I)^2 &=& T(w) \\
(\bar{\partial} Y^I)^2 &=& \overline{T}({\overline{w}}) \\
(\partial Y^I \bar{\partial} Y^I) &=& \f{1}{2} \left( e^{2\gamma}+T\overline{T} e^{-2\gamma} \right) \\
\partial \bar{\partial} Y^I &=& (\partial Y^K \bar{\partial} Y^K) Y^I
\end{eqnarray}
From the formula (\ref{e.YIrecon}) we may now express the $AdS_2$ coordinates
directly in terms of the components of $\hat{\Psi}$ given above:
\begin{equation}
\label{e.recon}
\f{1}{z} \equiv Y^2-Y^1 =2iac \quad\quad\quad\quad
\f{x}{z} \equiv Y^3 =i(ad+bc)
\end{equation}
Note that these expressions are invariant under the gauge transformation
\begin{equation}
\Psi \to \arr{\lambda}{0}{0}{\lambda^{-1}} \Psi
\end{equation}
thus we can use them also in our final gauge (\ref{e.vgauge}).
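As a consistency check of (\ref{e.YIrecon}), one can verify symbolically that the unit determinant of $\hat{\Psi}$ alone guarantees the hyperboloid constraint $(Y^I)^2=-1$ (an illustrative sketch):
\begin{verbatim}
# Verify that Y^I = (1/2) tr(sigma~^I C Psi^T D Psi) obeys
# (Y1)^2 - (Y2)^2 + (Y3)^2 = -(det Psi)^2, i.e. -1 for unit
# determinant (illustrative sketch).
import sympy as sp

a, b, c, d = sp.symbols('a b c d')
I = sp.I
Psi = sp.Matrix([[a, b], [c, d]])
C = sp.Matrix([[0, 1], [-1, 0]])
D = sp.Matrix([[0, I], [I, 0]])
s1 = sp.Matrix([[0, 1], [1, 0]])
s2 = I*sp.Matrix([[0, -I], [I, 0]])   # i * sigma^2
s3 = sp.Matrix([[1, 0], [0, -1]])

M = C * Psi.T * D * Psi
Y = [sp.Rational(1, 2)*(s*M).trace() for s in (s1, s2, s3)]
assert sp.expand(Y[0]**2 - Y[1]**2 + Y[2]**2 + (a*d - b*c)**2) == 0
\end{verbatim}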
\subsubsection*{The operator insertion points $x_k$ and the target space cut-off $\mathcal E$}
We may now use the above formulae to express the gauge theory operator insertion
points $x_k$ in terms of the two solutions of the linear system $\Psi_A$, $\Psi_B$ which determine
the classical string embedding. Fortunately, close to the puncture we have
explicit formulas (\ref{e.assol1})-(\ref{e.assol2}) for the basis of solutions
around each puncture. Using these formulas we see that for $\xi=1$, the dominant
solution around the puncture $w_k$ is $\bar{k}$.
So only the $\beta$ and $\delta$ coefficients of $\Psi_{A,B}$ in
(\ref{e.phi12}) will be relevant.
Using the formulas (\ref{e.recon}) and the explicit expression
(\ref{e.assol2}) we get the link between target space $z$ coordinate and
the worldsheet coordinate around the puncture $w=w_k$
\begin{equation}
z=\f{1}{i\beta_k^2} |w-w_k|^{\Delta_k} \quad\quad \text{where} \quad
\beta_k =\ss{k}{\Psi_A}
\end{equation}
This allows us to relate the target space cut-off $z=\mathcal E$ to
the worldsheet cut-offs $\varepsilon_k$:
\begin{equation}
\label{e.epsk}
\Delta_k \log \varepsilon_k =\log \mathcal E + \log |\ss{k}{\Psi_A}|^2
\end{equation}
Similarly, we obtain expressions for the coordinates of the gauge theory
operator insertion points
\begin{equation}
\label{e.xk}
x_k=\f{\ss{k}{\Psi_B}}{\ss{k}{\Psi_A}}
\end{equation}
The two above expressions (\ref{e.epsk}) and (\ref{e.xk}) are the key results
of the present section which will be essential for the determination of
the `divergent' part of the AdS action integral in section~8 (recall also
the overview in section~4 above).
\section{The AdS action}
After the above preparations we are now ready to tackle the calculation
of the AdS contribution to the 3-point correlation function using Pohlmeyer
reduction.
We have to compute the action of the $AdS_2$ part of the solution over the
worldsheet, which is a `regularized' 3-punctured sphere with 3 disks cut out
around the punctures\footnote{For the puncture at $w=\infty$ we define the worldsheet
cut-off through $|w|<1/\varepsilon_\infty$.} $|w-w_i|>\varepsilon_i$. The worldsheet cut-offs
around each puncture are not independent but are determined by
the \emph{single} target-space cut-off $z=\mathcal E$. Explicitly, we have to compute
\begin{equation}
\f{\sqrt{\lambda}}{\pi}\int_{\Sigma \setminus \{\varepsilon_i\}} \f{\partial x \bar{\partial} x+ \partial z \bar{\partial} z}{z^2}
\end{equation}
Let us emphasize that this is \emph{not} the area of the worldsheet as there
is a nonzero energy-momentum tensor. Using the elements of the Pohlmeyer flat
connection the above integral can be written as
\begin{equation}
\f{\sqrt{\lambda}}{\pi}\int_{\Sigma \setminus \{\varepsilon_i\}} 2\, \mbox{\rm tr}\, \Phi_w \Phi_{\overline{w}}
\end{equation}
Since in the above expression we have both an unknown integrand (one which depends
on the solution of the modified sinh-Gordon equation, which we do not know explicitly)
and an unknown integration domain (since the worldsheet cut-offs depend on the
target-space solution), it is convenient, as outlined in section~4, to split
the integral into a cut-off independent finite piece which involves the unknown
integrand but can be integrated over the whole punctured sphere and a cut-off
dependent part with an explicitly known integrand.
\begin{equation}
\label{e.decomp2}
\f{\sqrt{\lambda}}{\pi}\int_{\Sigma} \left( 2\, \mbox{\rm tr}\, {\Phi_w \Phi_{\overline{w}}}-
\sqrt{T \overline{T}} \,d^2w\right) +
\f{\sqrt{\lambda}}{\pi}\int_{\Sigma \setminus \{\varepsilon_i\}} \sqrt{T \overline{T}}\,d^2w
\end{equation}
As outlined in section~4, we also adopt a similar regularization for the $S^5$
part thus the cut-off dependent part will appear in the final answer with
coefficient 2.
We will evaluate the first integral in section 7, and the second integral
in section 8.
\section{The regularized Pohlmeyer contribution}
In order to evaluate the first integral in (\ref{e.decomp2}), we will proceed as for
polygonal Wilson loops, and pass to a gauge\footnote{By this we mean redefining
the solution of the linear system $\tilde{\Psi}=W\Psi$ with some given matrix
$W$ depending on the worldsheet coordinates $w$, ${\overline{w}}$.} where the $\Phi_w$ part of the flat
connection is diagonal
\begin{equation}
\Phi_w \to W \Phi_w W^{-1}
\end{equation}
Fortunately it turns out that the diagonalized $\Phi_w$ does not depend on the
unknown Pohlmeyer function $\gamma$ and is expressed as
\begin{equation}
\Phi_w =\arr{-\f{\sqrt{T}}{2}}{0}{0}{\f{\sqrt{T}}{2}}
\end{equation}
The diagonal components of $\Phi_{\overline{w}}$ become more complicated
\begin{equation}
\Phi_{\overline{w}}=\arr{-\f{(e^{2\gamma}+T\overline{T} e^{-2\gamma})}{4\sqrt{T}}}{\f{(e^{2\gamma}-T\overline{T} e^{-2\gamma}) }{4\sqrt{T}}}{-\f{(e^{2\gamma}-T\overline{T} e^{-2\gamma}) }{4\sqrt{T}}}{\f{(e^{2\gamma}+T\overline{T} e^{-2\gamma}) }{4\sqrt{T}}}
=\scriptstyle
\arr{\scriptstyle-\f{1}{2} \sqrt{\overline{T}} \cosh \tilde{\gm}}{\scriptstyle\f{1}{2} \sqrt{\overline{T}} \sinh \tilde{\gm}}{\scriptstyle-\f{1}{2} \sqrt{\overline{T}} \sinh \tilde{\gm}}{\scriptstyle\f{1}{2} \sqrt{\overline{T}} \cosh \tilde{\gm}}
\end{equation}
however, the important observation made in \cite{AMSV} is that
the diagonal components of
each expression can be treated as a single function defined on a double cover
$\widetilde{\Sigma}$
\begin{equation}
y^2=T(w)
\end{equation}
of the worldsheet $\Sigma$.
In this manner one can rewrite the integral
\begin{equation}
\int_{\Sigma} \left( 2\, \mbox{\rm tr}\, {\Phi_w \Phi_{\overline{w}}}-\sqrt{T \overline{T}} \,d^2w\right)
\end{equation}
as an integral over $\widetilde{\Sigma}$ of a wedge product of two \emph{closed} 1-forms:
\begin{equation}
\int_{\Sigma} \left( 2\, \mbox{\rm tr}\, {\Phi_w \Phi_{\overline{w}}}-\sqrt{T \overline{T}} \,d^2w\right)=
\f{i}{2} \cdot \int_{\widetilde{\Sigma}} \omega \wedge \eta
\end{equation}
with
\begin{equation}
\omega=\sqrt{T(w)} dw \quad\quad\quad\quad
\eta=\f{1}{2} \sqrt{\overline{T}({\overline{w}})} \left( \cosh \tilde{\gm}-1 \right) d{\overline{w}}
+\f{1}{4} \f{1}{\sqrt{T(w)}} (\partial \tilde{\gm})^2 dw
\end{equation}
where for simplicity we used the original Pohlmeyer function (see (\ref{e.gmt})).
The $dw$ component of $\eta$ does not influence the integral but is chosen so that
$\eta$ is also closed ($d\eta=0$).
\begin{figure}
\hfill\includegraphics[height=7cm]{rys_1.png}\hfill{}
\caption{Cycles on a genus 3 surface. For concrete computations it is
convenient to make all the $B_i$ cycles pass through the same point on
the last cut, e.g. $w=0$.}
\label{f.genus3}
\end{figure}
If $\widetilde{\Sigma}$ had genus $g$ (which is the generic case for polygonal Wilson loops),
one would use Riemann bilinear identity (or reciprocity)
to reduce the integral to products of integrals over cycles
\begin{equation}
\int_{\Sigma_g} \omega \wedge \eta=\sum_{i=1}^g \int_{A_i} \omega \int_{B_i} \eta -
\int_{A_i} \eta \int_{B_i} \omega \label{area genus}
\end{equation}
However in our case $\widetilde{\Sigma}$ has genus 0, and the 1-forms may have singularities at
8 points (two copies of the 3 punctures and 2 branch points of the covering $y^2=T(w)$).
One possibility to proceed is to prove an analog of Riemann reciprocity
directly for this case. The resulting expressions are, however, quite messy.
In the end, we decided to adopt a slightly different strategy by
treating the punctures
as infinitesimal branch cuts and using Riemann reciprocity for
a genus 3 Riemann surface (see figure~\ref{f.genus3})
with an additional treatment of the singularities at the zeroes of $T(w)$.
Let us note that the $\eta$ 1-form is neither holomorphic nor antiholomorphic, so
generic textbook formulas are not directly applicable.
\begin{figure}
\hfill\includegraphics[height=7cm]{rys_2.png} \hfill{}
\caption{The polygon is the standard representation of a genus 3 surface whose
boundary is the curve $L_{g=3}$ composed of the cycles $A_i$ and $B_i$ each
traversed twice. The infinitesimal circles $C_\pm$ surround the singularities of
$\eta$. $P_0$ is the (arbitrary but fixed) base point for constructing
the function $F$ such that $\omega=dF$.}
\label{f.recip}
\end{figure}
The idea of the derivation of the Riemann reciprocity formula is to rewrite
one of the forms as an exact form:
\begin{equation}
\omega=dF
\end{equation}
where $F(P)=\int_{P_0}^P \omega$. This can always be done on the Riemann surface
minus some contour. Then one transforms the surface integral into a 1-dimensional
integral over that contour using Stokes theorem:
\begin{equation}
\int_{\widetilde{\Sigma}} \omega \wedge \eta =
\int_{\widetilde{\Sigma} \setminus L} \omega \wedge \eta = \int_{\widetilde{\Sigma} \setminus L} d(F\eta)
=\int_L F\eta
\end{equation}
In our case, since $\omega$ is regular\footnote{Recall that we are on $\widetilde{\Sigma}$.}
at the zeroes of $T(w)$, the contour $L$
may be taken to be the sum of the standard contour for a genus 3 surface
$L_{g=3}$ and
two infinitesimal circles $C_\pm$ around the zeros of $T(w)$ at $w=\pm i a$
as shown on figure~\ref{f.recip}.
Therefore
\begin{equation}
\label{area}
\int_{\widetilde{\Sigma}} \omega \wedge \eta = \int_{L_{g=3}} F\eta + \int_{C_{+}} F \eta +
\int_{C_{-}} F \eta
\end{equation}
The first integral gives directly the standard bilinear expression
\begin{equation}
\label{periodsg3}
\int_{L_{g=3}} F\eta = \sum_{i=1}^3 \int_{A_i} \omega \int_{B_i} \eta -
\int_{A_i} \eta \int_{B_i} \omega
\end{equation}
where we used the fact that
\begin{equation}
\int_{C_{+}} \omega = \int_{C_{-}} \omega =0
\end{equation}
Let us now concentrate on the remaining two terms. For the first one,
\begin{eqnarray}
\!\!\!\!\!\!\!\!\!\!\!\!\int_{C_{+}} F \eta &=& \int_{C_+} \left( \int_{P_0}^P \omega \right) \eta = \int_{C_+} \left[ \left( \int_{P_0}^{P_+} + \int_{P_+}^P \right) \omega \right] \eta \nonumber\\
&=& \int_{P_0}^{P_+} \omega \int_{C_+} \eta + \int_{C_+} \left( \int_{P_+}^{P} \omega \right) \eta = \int_{P_0}^{P_+} \omega \int_{C_+} \eta - i\pi \frac{1}{6}
\end{eqnarray}
where the last integral is computed in Appendix~B.1.
Then, adding a similar expression for the second zero we get
\begin{eqnarray}
\left( \int_{C_{+}} + \int_{C_{-}} \right)F \eta &=& \int_{P_0}^{P_+} \omega \int_{C_+} \eta + \int_{P_0}^{P_-} \omega \int_{C_-} \eta -2\cdot i\pi \frac{1}{6} \nonumber\\
&=& \int_{P_-}^{P_+} \omega \int_{C_+} \eta -2\cdot i\pi \frac{1}{6} \nonumber\\
&=& -2\cdot i\pi \frac{1}{6}
\end{eqnarray}
where we used the fact that the 1-form $\eta$ is regular everywhere apart
from the zeros of $T(w)$ and so
\begin{equation}
\int_{C_+} \eta + \int_{C_-} \eta =0
\end{equation}
Moreover, one can even show that $\int_{C_+} \eta =0$ (see Appendix~B.2). In this way we arrive at the final equality.
Further, inserting this result into the integral (\ref{area}),
and computing the periods in (\ref{periodsg3}) using again
the regularity of $\eta$ outside the zeros of $T(w)$ and
explicitly computing
the integrals of the 1-form $\omega$
\begin{equation}
\int_{A_1} \omega = \int_{A_2} \omega = -2\pi i \frac{\Delta}{2}, \;\;\; \int_{A_3} \omega = 2\pi i \frac{\Delta_{\infty}}{2}
\end{equation}
we find that
\begin{eqnarray}
\int_{\widetilde{\Sigma}} \omega \wedge \eta &=& \sum_{i=1}^3 \int_{A_i} \omega \int_{B_i} \eta -2\cdot i\pi \frac{1}{6} \\
&=& 2\pi i \left[ -\frac{\Delta}{2} \left( \int_{B_1} \eta+ \int_{B_2} \eta \right) +\frac{\Delta_{\infty}}{2} \int_{B_3} \eta -\frac{1}{6} \right]
\end{eqnarray}
The integrals over the cycles $B_i$ may be expressed by integrals between
the punctures. From Fig.~\ref{f.genus3} and the antisymmetry of $\eta$ under
the exchange of the two Riemann sheets we find
\begin{equation}
2\int_{C_{-11}} \eta = \int_{B_1} \eta + \int_{B_2} \eta, \;\;\; \int_{B_1} \eta = \int_{B_2}\eta,
\end{equation}
\begin{equation}
2\int_{C_{1\infty}} \eta = \int_{B_1} \eta - \int_{B_3} \eta
\end{equation}
Hence,
\begin{equation}
\int_{\widetilde{\Sigma}} \omega \wedge \eta = -2 i \left[ \frac{\pi}{6} -\frac{\pi}{2} \left( (\Delta_{\infty} -2\Delta) \int_{C_{-11}} \eta- 2\Delta_{\infty} \int_{C_{1\infty}} \eta \right)\right]
\end{equation}
Therefore the regularized Pohlmeyer contribution becomes
\begin{equation}
\label{e.regperiods}
\int_{\Sigma} \left( 2\, \mbox{\rm tr}\, {\Phi_w \Phi_{\overline{w}}}-
\sqrt{T(w) \overline{T}({\overline{w}})} \,d^2w\right) = \f{\pi}{6}-\f{\pi}{2} \left( (\Delta_\infty-2\Delta)
\int_{C_{-1\,1}} \!\!\!\!\eta
-2\Delta_\infty \int_{C_{1\,\infty}} \!\!\!\!\eta \right)
\end{equation}
At this stage we have reduced the computation of the regularized Pohlmeyer contribution
to the evaluation of the integrals of the 1-form $\eta$ between the punctures.
This cannot be done directly, as we do not know of course the explicit form of
the Pohlmeyer solution $\gamma$ or $\tilde{\gm}$. However, as shown in \cite{AMSV},
the integrals of
$\eta$ can be related to the $\th \to -\infty$ ($\xi \to 0$) asymptotics of the
parallel transport of a solution along a curve which is a WKB line \cite{gaiotto}.
The main idea is to apply well-known semiclassical methods, where the role of the Planck constant is played by the spectral parameter $\xi$. Then, the linear problem
\begin{equation}
(d+J)\Psi=0
\end{equation}
can be approximately solved with the leading contribution coming from the $\Phi_w$ part
\begin{equation}
\Psi \sim e^{\mp \frac{1}{2\xi} \int \sqrt{T(w)} dw}
\end{equation}
Clearly, the approximation is best along the WKB line, defined by
\begin{equation}
\mbox{Im} \left( \frac{1}{\xi} \sqrt{T(w)} \dot{w} \right)=0
\end{equation}
For our purposes, however, it is crucial to know also the subleading term related to the $\Phi_{\bar{w}}$ part of the flat connection
\begin{equation}
e^{\pm \xi \int \tilde{\eta} } = e^{\pm \xi \int \left( \tilde{\eta} - \frac{1}{2} \sqrt{\bar{T}(\bar{w})} d\bar{w} \right)} e^{\pm \frac{\xi}{2} \int \sqrt{\bar{T}(\bar{w})} d\bar{w} }=e^{\pm \xi \int \eta } e^{\pm \frac{\xi}{2} \int \sqrt{\bar{T}(\bar{w})} d\bar{w} }
\end{equation}
where $\tilde{\eta}$ is the 1-form $\eta$ without the subtraction term $\sqrt{\bar{T}(\bar{w})}/2$, exactly as it shows up in $\Phi_{\bar{w}}$.
\\
The basic object we want to compute in this limit is the skew product between two solutions at the punctures $j,k$. Then, the prescription is the following: \\
(i) take the known solution $\Psi_j(w_k')$ at $w=w_j'$ in the vicinity of the puncture $w_j$
\\
(ii) transport this solution via the parallel transport equation along a curve given by the WKB line equation to a point $w=w_k'$ near the puncture at $w_k$, taking into account the leading as well as the subleading terms
\\
(iii) compare the result with the known solution $\Psi_k(w_k')$ at $w=w_k'$.
\\
The resulting formula reads
\begin{eqnarray}
\lim_{\xi \rightarrow 0} \ss{j}{k} &=& e^{\frac{1}{\xi} \left[ \frac{1}{2}\int_{w_j'}^{w_k'} \sqrt{T(w)} dw + \frac{\Delta_j}{4} \log (w_j-w_j') + \frac{\Delta_k}{4} \log (w_k-w_k') \right] } \cdot \\
& & e^{\xi \left[ \frac{1}{2}\int_{w_j'}^{w_k'} \sqrt{\bar{T}(\bar{w})} d\bar{w} + \frac{\Delta_j}{4} \log (\bar{w}_j-\bar{w}_j') + \frac{\Delta_k}{4} \log (\bar{w}_k-\bar{w}_k') \right] } \cdot e^{\xi \int_{w_j}^{w_k} \eta }
\end{eqnarray}
where the logarithmic terms are due to the exactly known form of the solution near the punctures. Moreover, these subtractions render the expression finite and therefore allow one to extend the integration exactly to the punctures. This formula may now be compared with the exact expression for the skew product at any $\xi$ (\ref{e.s32}), which contains two undetermined zero-mode constants $M,M^*$. Fortunately, they are given by the first two terms of the WKB approximation. Then, the path integral of $\eta$ is given by a combination of the $\theta \rightarrow -\infty$ asymptotics of the $f_a(\theta)$ functions.
In this way we obtain the following explicit expressions for the period
integrals
\begin{eqnarray}
\int_{C_{-1\,1}} \eta &=& h(2\Delta-\Delta_\infty) + h(2\Delta+\Delta_\infty) -2h(2\Delta)
\nonumber \\
\int_{C_{1\,\infty}} \eta &=& h(\Delta_\infty)+h(2\Delta+\Delta_\infty) -h(2\Delta) -h(2\Delta_\infty)
\label{e.regperiods2}
\end{eqnarray}
where
\begin{equation}
h(a)=\int_{-\infty}^{\infty} \frac{d\theta}{\pi} \cosh \theta \log \left(1-e^{-a\pi \cosh \theta} \right)
\end{equation}
Together with (\ref{e.regperiods}) it gives our final explicit expression for the
regularized Pohlmeyer contribution.
\subsection{Comparison with numerics}
Since the above derivation of (\ref{e.regperiods})-(\ref{e.regperiods2}) was
quite complicated and involved many new ingredients, we decided to test the result
by numerically solving the modified sinh-Gordon equation on the 3-punctured
sphere and directly computing the regularized Pohlmeyer integral from the
numerical solution. We give some details on the numerical setup in Appendix A,
while here we just summarize the results and the comparison with the
analytical predictions (\ref{e.regperiods})-(\ref{e.regperiods2}).
\begin{figure}
\begin{minipage}[c]{0.45\textwidth}
\includegraphics[height=5cm]{action.png} \hfill{}
\end{minipage}
\hfill
\begin{minipage}[c]{0.45\textwidth}
\begin{tabular}{|c|c|c|c|}
\hline
$\Delta$ & $\Delta_\infty$ & numerics & our formula\\
\hline
0.2& 0.3& 0.04536 & 0.0450779 \\
0.5& 0.9& 0.107649& 0.107622 \\
1.& 1.& 0.426311& 0.426166\\
1.& 1.05& 0.429572& 0.429503\\
2.& 2.& 0.517689& 0.517688\\
2.& 3.& 0.488985& 0.488985\\
4.& 4.& 0.523584& 0.523584\\
4.& 7.99& 0.0152435& 0.0152435 \\
\hline
\end{tabular}
\end{minipage}\hfill{}
\caption{Regularized action density for $\Delta=\Delta_\infty=4.0$ and a comparison
between the numerically evaluated regularized action and the analytical
results of the formulas (\ref{e.regperiods})-(\ref{e.regperiods2}).}
\label{f.numerics}
\end{figure}
In figure~\ref{f.numerics} we show a plot of the integrand entering the
regularized Pohlmeyer action
\begin{equation}
\label{e.reg}
\int_{\Sigma} \left\{ \f{1}{2} \left( e^{2\gamma(w,{\overline{w}})}+ T(w)\overline{T}({\overline{w}}) e^{-2\gamma(w,{\overline{w}})} \right)
-\sqrt{T(w) \overline{T}({\overline{w}})} \right\} d^2w
\end{equation}
in a quadrant of the (compactified) complex plane. The upper and right
borders are mapped to
the puncture at $w=\infty$, while the puncture at $w=+1$ is right in
the middle of the
lower border. The solution in the remaining three quadrants follows by symmetry.
In the table we have shown a comparison of the numerical evaluation of (\ref{e.reg})
together with the analytical results following from
(\ref{e.regperiods})-(\ref{e.regperiods2}). The numerics becomes more difficult
and less reliable for small $\Delta$'s. In particular the deviations in the first rows
of the table are within numerical errors estimated by changing the number of points of
the numerical grid. The remaining results show excellent agreement with the
analytical formulas following from periods and solutions of the functional equations
for the overlaps.
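Let us also note that the analytical column of the table is easy to reproduce by direct quadrature of (\ref{e.regperiods})-(\ref{e.regperiods2}); a minimal numerical sketch:
\begin{verbatim}
# Quadrature evaluation of (e.regperiods)-(e.regperiods2); this sketch
# should reproduce the analytical column of the table, e.g. ~0.523584
# for Delta = Delta_inf = 4.
import numpy as np
from scipy.integrate import quad

def h(a):
    val, _ = quad(lambda t: np.cosh(t)/np.pi
                  * np.log1p(-np.exp(-a*np.pi*np.cosh(t))),
                  -8.0, 8.0, limit=200)
    return val

def regularized_integral(Delta, Dinf):
    I_m11 = h(2*Delta - Dinf) + h(2*Delta + Dinf) - 2*h(2*Delta)
    I_1inf = h(Dinf) + h(2*Delta + Dinf) - h(2*Delta) - h(2*Dinf)
    return np.pi/6 - np.pi/2*((Dinf - 2*Delta)*I_m11 - 2*Dinf*I_1inf)

print(regularized_integral(4.0, 4.0))   # ~0.523584
print(regularized_integral(2.0, 3.0))   # ~0.488985
\end{verbatim}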
\section{The regularized divergent contribution}
In this section we will deal with the remaining contribution to the action -- an
integral of $\sqrt{T\overline{T}}$ on the punctured sphere with specific cut-offs around
each puncture $|w-w_i|>\varepsilon_i$. Since we adopt a similar regularization for the $S^5$
contribution (see (\ref{e.sv})), we have in fact two such contributions
\begin{equation}
\exp\left\{-\f{2\sqrt{\lambda}}{\pi} \int_{\Sigma \setminus \{\varepsilon_i\}}
\sqrt{T(w)\overline{T}({\overline{w}})}\, d^2w\right\}
\end{equation}
The above integral can be evaluated explicitly in the small $\varepsilon_i$ limit
relevant for us. To do so, we note that it can be expressed as an integral
of the wedge product of two closed 1-forms:
\begin{equation}
\int_{\Sigma \setminus \{\varepsilon_i\}} \sqrt{T(w)\overline{T}({\overline{w}})}\, d^2w=
const \cdot \int \sqrt{T} dw \wedge \sqrt{\overline{T}} d{\overline{w}}
\end{equation}
and transformed into a product of residues of $\sqrt{T}$ times appropriate
(regularized) integrals over intervals between punctures.
The outcome is
\begin{equation}
\int_{\Sigma \setminus \{\varepsilon_i\}} \sqrt{T(w)\overline{T}({\overline{w}})}\, d^2w=Finite
-\f{\pi}{2} \Delta_\infty^2 \log\varepsilon_\infty -\f{\pi}{2} \Delta^2 \log \varepsilon
-\f{\pi}{2} \Delta^2 \log \varepsilon
\end{equation}
where
\begin{eqnarray}
\label{e.finite}
Finite&=&\f{\pi}{4} \Delta^2_\infty \biggl(2\log 2 +(1+a^2) \log 2+(-2-a^2+2\sqrt{1+a^2})
\log a^2 +\nonumber\\
&&+(1+a^2) \log(1+a^2)-4 \sqrt{1+a^2} \log(1+\sqrt{1+a^2}) \biggr)
\end{eqnarray}
Let us first concentrate on the logarithmically divergent part. Taking into account
the coupling-constant dependent prefactor, and generalizing slightly to three generic anomalous dimensions, we get
\begin{equation}
\exp\left\{ \sqrt{\lambda} (\Delta_1^2 \log\varepsilon_1+\Delta_2^2 \log \varepsilon_2+
\Delta_3^2 \log \varepsilon_3) \right\}
\end{equation}
We may use the relations (\ref{e.epsk}) to express the worldsheet cut-off in
terms of the physical target-space cut-off $z=\mathcal E$ and the products
between the elementary solutions $1,2,3$ and one ($\Psi_A$) of the two
solutions appearing in the reconstruction formulas
of section~5. We get
\begin{equation}
\label{e.eqlogsprod}
\exp\left\{ \sum \mathbf{\Delta}_i \log \mathcal E+ \mathbf{\Delta}_1 \log |\ss{1}{A}|^2+ \mathbf{\Delta}_2 \log |\ss{2}{A}|^2+
\mathbf{\Delta}_3 \log |\ss{3}{A}|^2 \right\}
\end{equation}
where $\mathbf{\Delta}_i\equiv \sqrt{\lambda}\Delta_i$ is the unrescaled anomalous dimension.
We will now express the skew products $\ss{k}{A}$ in terms of the gauge theory
operator insertion points $x_k$ and products between the elementary solutions.
The two solutions of the linear system $\Psi_A$ and $\Psi_B$ which determine
the target-space string embedding are completely specified by their coordinates
in e.g. the 1, $\bar{1}$ basis:
\begin{equation}
\Psi_A= \alpha 1+ \beta \bar{1} \quad\quad\quad\quad \Psi_B= \gamma 1+ \delta \bar{1}
\end{equation}
where $\alpha \delta -\beta \gamma=1$. Similarly we can express the 2 and 3 elementary
solutions entering (\ref{e.eqlogsprod}) in terms of 1 and $\bar{1}$:
\begin{equation}
2=k 1+l \bar{1} \quad\quad\quad\quad 3= m 1+n \bar{1}
\end{equation}
where $k$, $l$, $m$ and $n$ are appropriate overlaps evaluated at
$\th=0$ ($\xi=1$). In terms of the above quantities, the part of (\ref{e.eqlogsprod})
depending on the products becomes
\begin{equation}
\sum_{k} \mathbf{\Delta}_k \log | \ss{k}{A} |^2 = \mathbf{\Delta}_1 \log \beta^2+ \mathbf{\Delta}_2 \log\, (k \beta-l\alpha)^2+
\mathbf{\Delta}_3 \log\,(m \beta-n \alpha)^2
\end{equation}
Now we may use formula (\ref{e.xk}) to relate the hitherto unknown coefficients
$\alpha$, $\beta$ and $\gamma$ to the operator insertion points:
\begin{eqnarray}
x_1 &=& \f{\ss{1}{\Psi_B}}{\ss{1}{\Psi_A}} = \f{\delta}{\beta} \\
x_2 &=& \f{\ss{2}{\Psi_B}}{\ss{2}{\Psi_A}} = \f{k\delta-l\gamma}{k\beta-l\alpha} \\
x_3 &=& \f{\ss{3}{\Psi_B}}{\ss{3}{\Psi_A}} = \f{m\delta-n\gamma}{m\beta-n\alpha}
\end{eqnarray}
Solving these equations with the constraint $\alpha \delta -\beta \gamma=1$ yields
\begin{eqnarray}
\beta^2 &=& \f{ln}{lm-kn} \cdot \f{x_{23}}{x_{12} x_{13}} \\
(k \beta-l\alpha)^2 &=& \f{l}{n}(lm-kn) \cdot \f{x_{13}}{x_{12} x_{23}} \\
(m \beta-n \alpha)^2 &=& \f{n}{l} (lm-kn) \cdot \f{x_{12}}{x_{13} x_{23}}
\end{eqnarray}
In the above formulas $l=\ss{1}{2}_0 \equiv \ss{1}{2}_{\th=0}$, $n=\ss{1}{3}_0$,
while
\begin{equation}
lm-kn=\ss{1}{2}_0\sS{3}{1}_0-\sS{2}{1}_0\ss{1}{3}_0=\ss{3}{2}_0
\end{equation}
using Schouten's identity.
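These relations can be checked numerically by picking coefficients with unit determinant, computing the $x_k$, and comparing (an illustrative sketch with arbitrary values):
\begin{verbatim}
# Numerical check of the formulas for beta^2, (k*beta - l*alpha)^2 and
# (m*beta - n*alpha)^2; all parameter values are purely illustrative.
import numpy as np

al, be, ga = 1.3, 0.7, 0.4
k, l, m, n = 1.1, 0.9, 0.6, 1.4
de = (1 + be*ga)/al          # enforces alpha*delta - beta*gamma = 1

x1 = de/be
x2 = (k*de - l*ga)/(k*be - l*al)
x3 = (m*de - n*ga)/(m*be - n*al)
x12, x13, x23 = x1 - x2, x1 - x3, x2 - x3

assert np.isclose(be**2, l*n/(l*m - k*n)*x23/(x12*x13))
assert np.isclose((k*be - l*al)**2, l/n*(l*m - k*n)*x13/(x12*x23))
assert np.isclose((m*be - n*al)**2, n/l*(l*m - k*n)*x12/(x13*x23))
\end{verbatim}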
Plugging the above expressions into (\ref{e.eqlogsprod}), we obtain finally
the standard CFT spacetime dependence of the 3-point function
\begin{equation}
\f{1}{\left( \f{x_{12}}{\mathcal E} \right)^{\mathbf{\Delta}_1+\mathbf{\Delta}_2-\mathbf{\Delta}_3}
\left( \f{x_{13}}{\mathcal E} \right)^{\mathbf{\Delta}_1+\mathbf{\Delta}_3-\mathbf{\Delta}_2}
\left( \f{x_{23}}{\mathcal E} \right)^{\mathbf{\Delta}_2+\mathbf{\Delta}_3-\mathbf{\Delta}_1}} \cdot ...
\end{equation}
multiplied by an additional contribution coming from the products of the
elementary solutions
\begin{equation}
\label{e.elem}
\exp\left\{ \sqrt{\lambda} \left( (\Delta_1+\Delta_2-\Delta_3) \log \ss{1}{2}_0 +
(\Delta_1+\Delta_3-\Delta_2) \log \ss{1}{3}_0 +
(\Delta_2+\Delta_3-\Delta_1) \log \ss{3}{2}_0 \right) \right\}
\end{equation}
Going back to our solution of the functional equations (and returning to the
symmetric case of $\Delta_1=\Delta_2=\Delta$ and $\Delta_3=\Delta_\infty$) we see that the products
have the following structure at $\th=0$:
\begin{eqnarray}
\ss{1}{2}_0 &=& e^{M_{-11}+M_{-11}^*} \cdot e^{K_{-11}} \\
\ss{1}{3}_0=\ss{2}{3}_0 &=& e^{M_{1\infty}+M_{1\infty}^*} \cdot e^{K_{1\infty}}
\end{eqnarray}
where
\begin{eqnarray}
K_{-11} &=& k(2\Delta-\Delta_\infty) + k(2\Delta+\Delta_\infty) -2k(2\Delta) \\
K_{1\infty} &=& k(\Delta_\infty)+k(2\Delta+\Delta_\infty) -k(2\Delta) -k(2\Delta_\infty)
\end{eqnarray}
with
\begin{equation}
k(a)=\int_{-\infty}^\infty \f{d\th}{2\pi} \f{\log\left(1-e^{-a\pi \cosh \th}\right)}{\cosh \th}
\end{equation}
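The function $k(a)$ is straightforward to evaluate numerically. The following
minimal sketch (standard Python; NumPy and SciPy are assumed to be available,
and the integration cut-off $\pm 40$ is an arbitrary choice exploiting the
double-exponential decay of the integrand) is not part of the derivation, but
is convenient for spot checks:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def k(a):
    # k(a) = int dtheta/(2 pi) log(1 - exp(-a pi cosh th))/cosh th
    f = lambda th: np.log(1.0 - np.exp(-a*np.pi*np.cosh(th)))/np.cosh(th)
    val, _ = quad(f, -40.0, 40.0, limit=200)
    return val/(2.0*np.pi)

print(k(0.5), k(1.0), k(2.0))   # k(a) -> 0 exponentially as a grows
\end{verbatim}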
The zero-mode constants $M_{-11}$ and $M_{1\infty}$ are evaluated in Appendix C
and after substituting into
(\ref{e.elem}), it turns out that they exactly \emph{cancel} the finite term
(\ref{e.finite}). Thus the remaining contribution to the OPE coefficient
becomes finally
\begin{equation}
\label{e.opediv}
\exp \left\{ \sqrt{\lambda} \left[ (2\Delta-\Delta_\infty) K_{-11}+2\Delta_\infty K_{1\infty} \right]
\right\}
\end{equation}
\section{The final AdS contribution to the OPE coefficients}
We may now sum together the two contributions to the OPE coefficients --
(\ref{e.regperiods})-(\ref{e.regperiods2}) coming
from the regularized Pohlmeyer integral and (\ref{e.opediv}) coming from the regularized
divergent integral. Both contributions have the same structure yielding
\begin{equation}
C^{OPE}_{AdS} = \exp\left\{-\f{\sqrt{\lambda}}{6} -\sqrt{\lambda} \left[ (2\Delta-\Delta_\infty) \tilde{P}_{-11}
+2\Delta_\infty \tilde{P}_{1\infty}\right]\right\}
\end{equation}
with
\begin{eqnarray}
\tilde{P}_{-11} &=& \tilde{h}(2\Delta-\Delta_\infty) + \tilde{h}(2\Delta+\Delta_\infty) -2\tilde{h}(2\Delta) \\
\tilde{P}_{1\infty} &=& \tilde{h}(\Delta_\infty)+\tilde{h}(2\Delta+\Delta_\infty) -\tilde{h}(2\Delta) -\tilde{h}(2\Delta_\infty)
\end{eqnarray}
where
\begin{equation}
\tilde{h}(a)=\f{1}{2} h(a)-k(a)=
\f{1}{2\pi} \int_{-\infty}^\infty \f{\sinh^2\th}{\cosh\th} \log
\left(1-e^{-a \pi \cosh \th} \right) d\th
\end{equation}
Several comments are in order here. Firstly, the above expression does not depend
on any details of the operators entering the OPE coefficient apart from their
anomalous dimensions; it is therefore universal for this class of operators. Secondly,
the above expression has to be supplemented by the regularized $S^5$ contribution
\begin{equation}
C^{OPE}_{S^5} \equiv e^{-\f{\sqrt{\lambda}}{\pi} \int_\Sigma \left(
\text{$S^5$ contribution} -\sqrt{T\overline{T}} \right)}
\end{equation}
so one cannot draw conclusions on the behaviour of the full OPE coefficients $C^{OPE}=
C^{OPE}_{AdS} \cdot C^{OPE}_{S^5}$, since the
latter part is currently unknown.
Thirdly, the factor $\exp(-\sqrt{\lambda}/6)$ may seem quite surprising; however, its presence is essential
for sensible extremal and small-$\Delta_i$ limits, which we will examine shortly.
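The function $\tilde{h}(a)$, on which all of the above depends, is equally easy
to evaluate by direct quadrature. A sketch under the same assumptions as for
$k(a)$ above (standard Python with NumPy/SciPy; sample arguments arbitrary),
comparing with the small-argument behaviour quoted below (see Appendix~D):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def h_tilde(a):
    # tilde{h}(a) = 1/(2 pi) int sinh^2/cosh * log(1 - exp(-a pi cosh)) dth
    f = lambda th: (np.sinh(th)**2/np.cosh(th)
                    * np.log(1.0 - np.exp(-a*np.pi*np.cosh(th))))
    val, _ = quad(f, -40.0, 40.0, limit=200)
    return val/(2.0*np.pi)

for a in (0.01, 0.1, 1.0):
    # second column: the small-a asymptote -1/(6a) - (1/2) log a
    print(a, h_tilde(a), -1.0/(6.0*a) - 0.5*np.log(a))
\end{verbatim}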
In this paper we have mostly considered the symmetric case of two equal anomalous dimensions. It should not be difficult to extend these considerations to the generic case of three distinct anomalous dimensions.
Repeating e.g. the analysis of the regularized Pohlmeyer contribution
suggests the following structure.
Let us introduce the parameters $\alpha_i$:
\begin{equation}
\alpha_1=\Delta_2+\Delta_3-\Delta_1 \quad\quad\quad
\alpha_2=\Delta_1+\Delta_3-\Delta_2 \quad\quad\quad
\alpha_3=\Delta_1+\Delta_2-\Delta_3
\end{equation}
Then the general answer should be
\begin{equation}
C^{OPE}_{AdS} =\exp \left\{ -\sqrt{\lambda} \left( \f{1}{6} +F(\alpha_1,\alpha_2,\alpha_3)
\right) \right\}
\end{equation}
where
\begin{eqnarray}
F(\alpha_1,\alpha_2,\alpha_3)&=&\alpha_1\tilde{h} (\alpha_1)+\alpha_2\tilde{h} (\alpha_2)+\alpha_3\tilde{h} (\alpha_3) +(\alpha_1+\alpha_2+\alpha_3) \tilde{h} (\alpha_1+\alpha_2+\alpha_3) \nonumber \\
&&-(\alpha_1+\alpha_2) \tilde{h} (\alpha_1+\alpha_2)-(\alpha_1+\alpha_3) \tilde{h} (\alpha_1+\alpha_3)-(\alpha_3+\alpha_2) \tilde{h} (\alpha_3+\alpha_2)
\end{eqnarray}
The structure of $F(\alpha_1,\alpha_2,\alpha_3)$ is very similar to the structure of formula
(7.11) in \cite{TristanKlose} but with a different function $\tilde{h}$ instead of
a logarithm. Below we will see that (7.11) arises from our formula in
the limit of small anomalous dimensions.
\subsubsection*{Extremal limit}
In the extremal limit $\Delta_\infty=2\Delta$, all the terms $\tilde{h}(a)$ with
nonzero arguments
cancel among themselves, leaving only the term $(2\Delta-\Delta_\infty) \tilde{h}(2\Delta-\Delta_\infty)$.
Since $\tilde{h}(a) \sim -\f{1}{6a}$ for small arguments (see Appendix~D), this remaining term
cancels against the
$-\sqrt{\lambda}/6$, giving the expected result
\begin{equation}
C^{OPE}_{AdS}(\Delta,\Delta,\Delta_\infty=2\Delta)=1
\end{equation}
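This cancellation is easy to check numerically. A self-contained sketch
(standard Python with NumPy/SciPy; $\tilde{h}$ evaluated by quadrature as
above, and the approach to the extremal point parameterised by a small
$\varepsilon$ of our choosing) evaluates the full exponent of $C^{OPE}_{AdS}$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def h_tilde(a):
    f = lambda th: (np.sinh(th)**2/np.cosh(th)
                    * np.log(1.0 - np.exp(-a*np.pi*np.cosh(th))))
    val, _ = quad(f, -40.0, 40.0, limit=200)
    return val/(2.0*np.pi)

def exponent(D, Dinf):
    # C^{OPE}_{AdS} = exp(-sqrt(lambda) * exponent(D, Dinf))
    P_m11 = h_tilde(2*D - Dinf) + h_tilde(2*D + Dinf) - 2*h_tilde(2*D)
    P_1inf = (h_tilde(Dinf) + h_tilde(2*D + Dinf)
              - h_tilde(2*D) - h_tilde(2*Dinf))
    return 1.0/6.0 + (2*D - Dinf)*P_m11 + 2*Dinf*P_1inf

for eps in (0.1, 0.01, 0.001):
    print(eps, exponent(1.0, 2.0*(1.0 - eps)))   # -> 0 as eps -> 0
\end{verbatim}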
\subsubsection*{Small $\Delta_i$ limit}
For small arguments, $\tilde{h}(a)$ behaves like (see Appendix~D)
\begin{equation}
\tilde{h}(a) \sim -\f{1}{6a} -\f{1}{2} \log a
\end{equation}
It turns out that contributions coming from the leading term will cancel out
completely. The subleading logarithmic terms yield an expression
\begin{equation}
C^{OPE}_{AdS}(\Delta,\Delta,\Delta_\infty) \to
\left( \f{(2\Delta-\Delta_\infty)^{2\mathbf{\Delta}-\mathbf{\Delta}_\infty} (2\Delta+\Delta_\infty)^{2\mathbf{\Delta}+\mathbf{\Delta}_\infty}
\Delta_\infty^{2\mathbf{\Delta}_\infty} }{ (2\Delta)^{4\mathbf{\Delta}} (2\Delta_\infty)^{2\mathbf{\Delta}_\infty}} \right)^{\f{1}{2}}
\end{equation}
which coincides with formula (7.11) in \cite{TristanKlose}.
\subsubsection*{Large $\Delta_i$ limit and the Painlev\'e transcendent}
For large arguments $\tilde{h}(a) \propto a^\# \cdot e^{-\pi a} $, and thus their contribution
is exponentially suppressed yielding a surprisingly simple universal limit independent
of the conformal dimensions of operators:
\begin{equation}
C^{OPE}_{AdS}(\Delta,\Delta,\Delta_\infty) \to \exp \left(-\f{\sqrt{\lambda}}{6} \right)
\end{equation}
The simplicity of this result suggests that there should exist a much simpler,
direct derivation\footnote{Thanks to Pedro Vieira for asking
this interesting question.}. This indeed turns out to be the case.
In order to study the large $\Delta$ limit it is most convenient to study the
modified sinh-Gordon equation in its original formulation (\ref{e.pohl1eom})
\begin{equation}
\label{e.modshg}
\partial \bar{\partial} \tilde{\gm}= \sqrt{T\overline{T}} \sinh \tilde{\gm}
\end{equation}
where
\begin{equation}
T(w)= \f{\Delta_\infty^2}{4} \f{w^2+a^2}{(1-w^2)^2}
\end{equation}
The advantage of using (\ref{e.modshg}) is that $\tilde{\gm} \to 0$ around the
punctures. Naively, it would then seem that $\tilde{\gm}=0$ is a possible solution
of the equations of motion. This is not the case, however: due to
our genericity assumption on the nonvanishing of (\ref{e.pohl1}),
$\tilde{\gm}$ has to have logarithmic singularities
\begin{equation}
\label{e.logsing}
\tilde{\gm} \sim \pm \log | w \pm i a|
\end{equation}
at the zeros of $T(w)$. Nevertheless, in the large $\Delta$ limit,
when $\sqrt{T\overline{T}}$ is generically very large, in order to minimize
the string action (\ref{e.pohl1}), we expect to have an almost vanishing solution
with two narrow logarithmic spikes around the two zeros of $T(w)$.
Let us concentrate on the neighbourhood of $w=ia$ and introduce a new coordinate
through $w=u+i a$. Then (\ref{e.modshg}) takes the form
\begin{equation}
\partial \bar{\partial} \tilde{\gm} = \underbrace{\f{\Delta_\infty^2}{4} \f{2a}{(1+a^2)^2}}_{C^2} \cdot
\sqrt{u} \sqrt{{\overline{u}}} \sinh \tilde{\gm}
\end{equation}
Redefining coordinates again
\begin{equation}
v=\f{2}{3} C u^{\f{3}{2}}
\end{equation}
yields the standard sinh-Gordon equation
\begin{equation}
\partial \bar{\partial} \tilde{\gm}=\sinh \tilde{\gm}
\end{equation}
In the large $\Delta$ limit the problem becomes rotationally invariant and
after introducing a new variable $\tilde{\gm}=2U$ and $R=2r \equiv 2|v|$ we obtain
the equation for a Painlev{\'e} III transcendent normalized as in \cite{ZamPainleve}:
\begin{equation}
U''+\f{1}{R} U'=\f{1}{2} \sinh 2U
\end{equation}
The coefficient of the logarithmic singularity (\ref{e.logsing}) becomes
\begin{equation}
U \sim \pm \f{1}{3} \log R
\end{equation}
Now we may use the results of \cite{ZamPainleve} to evaluate directly
the regularized Pohlmeyer action. We may rewrite the contribution around $w=ia$
as
\begin{equation}
\label{e.intcoshreg}
\int \sqrt{T\overline{T}} (\cosh \tilde{\gm} -1) d^2w = \int_0^\infty (\cosh 2U-1) \f{3\pi R dR}{4}
\end{equation}
Note the factor of $3\pi$: since $v \propto u^{3/2}$, a $2\pi$ angle in the
original $u$ and $w$ coordinates corresponds to a $3\pi$ angle in the $v$
coordinate, over which the angular integral is taken. We then use the substitution
from~\cite{ZamPainleve}
\begin{equation}
\f{1}{2} \cosh 2U =-\f{1}{R} \f{d}{dR} R F_c(R)
\end{equation}
to get
\begin{equation}
-\f{3\pi}{2}\int_0^\infty \left( \f{d}{dR} R F_c(R) +\f{R}{2} \right) dR
\end{equation}
This integral can be evaluated exactly using the asymptotic properties of
$F_c(R)$ established in \cite{ZamPainleve}. At large $R$, $F_c(R) \sim -R/4$
up to exponentially small terms, while for small $R$, $R F_c(R) \to 1/18$.
Only boundary terms contribute:
\begin{equation*}
-\f{3\pi}{2}\left[ R F_c(R) +\f{R^2}{4} \right]_0^\infty
= -\f{3\pi}{2}\left( 0 - \f{1}{18} \right) = \f{\pi}{12}.
\end{equation*}
Now taking into account two such contributions
and the prefactor of the integral (\ref{e.intcoshreg}), we arrive directly at
our universal large $\Delta$ limit:
\begin{equation}
e^{-\f{\sqrt{\lambda}}{\pi} \left( \f{\pi}{12}+\f{\pi}{12} \right)} =
e^{-\f{\sqrt{\lambda}}{6}}
\end{equation}
\section{Summary and Outlook}
In this paper we have computed the universal part of the OPE coefficients
of three heavy operators with no Lorentz spins. This contribution comes from the
$AdS_2$ part of the string $\sigma$-model, and has to be supplemented with the
contribution of the $S^5$ part in order to obtain the full OPE coefficient
of the relevant operators.
We employed the methods of Pohlmeyer reduction, which have been previously
applied with great success to the case of null polygonal Wilson loops. It is interesting to note that different aspects of the strong coupling physics of $\mathcal{N}=4$ SYM (gluon scattering amplitudes, anomalous dimensions through 2-point correlation functions,
OPE coefficients) may be expressed through the same (modified) sinh-Gordon equation, with all differences encoded in the analytical structure of the modification functions.
Despite the similarities, the differences are significant, especially
in the analytical structure and target-space reconstruction (as we are
dealing with $AdS_2$ instead of $AdS_3$) which make the
generalization nontrivial. As a cross-check of our results we made a comparison
of our formulas with direct numerical solution of the modified sinh-Gordon equation.
Unfortunately, as the OPE coefficients of operators dual to semiclassical spinning strings have not been previously calculated, in general there are no independent results against which to test our final expression. Even in the case where we have some information, such as specific
BPS operators with large charges, our lack of knowledge of the $S^5$ contribution
precludes a direct check.
However, there are two limits in which we may cross-check our formula. First of all, in the extremal case we find that the AdS contribution to the OPE coefficient does not have
any semiclassical piece, as expected. Secondly, when the anomalous dimensions are small (`medium'-type operators) our result is exactly equivalent to the Klose-McLoughlin formula, obtained by a classical extremization procedure for the geodesics
related to the three operators. It is reasonable to expect that such
a point-like string/geodesic approximation to three-point correlators is
acceptable for not too `heavy' operators, which do not generate an extended surface.
An additional consistency check is provided by the derivation of the correct
CFT space-time dependence of the three-point correlators which arises from the
regularized divergent part.
It would be interesting to perform a comparison with the case of
heavy-heavy-light correlators.
However, it seems that effects such as the leading-order backreaction on the classical
2-point solution and corrections to vertex operators would have to be included
in the heavy-heavy-light calculations in order for such a comparison to be made.
There are several directions in which our work could be further developed. Obviously, as we have considered only the AdS contribution to the OPE coefficient, one has to perform
an analogous analysis for the $S^5$ part in order to obtain the full OPE coefficient
of the `heavy' operators. We expect that the $S^5$ contribution will depend on
the particularities of the states in question and not only on their conformal
dimensions. Moreover, all selection rules should arise from the $S^5$ contribution.
The $S^5$ part of the problem seems to be significantly more sophisticated
for at least two reasons.
The first reason is purely technical, and should not pose too many problems.
Namely, the simplest string solution rotates in $S^3 \subset S^5$, which amounts to the reduction
of the $\sigma$-model action to a three-dimensional case. Then, one should consider the
corresponding Pohlmeyer reduction, again with a prescribed nonzero energy-momentum tensor. Since
we deal with a higher-dimensional target space, this leads to more complicated (but still
integrable) equations.
A more serious problem is connected with the classical wavefunctions of the external states
which must be taken into account as well.
In particular it is not clear whether their contributions to the 3-point function
would cancel out with their contributions to the 2-point functions when constructing
a normalization independent OPE coefficient. Currently we lack an appropriate
formulation with definite regularization prescription.
Further generalization would involve the computation of the OPE coefficients of
operators carrying charges related to $AdS_5$ momenta,
but here again a consistent treatment of the wavefunctions would be needed.
Finally, it would be very interesting to develop an analogous framework for
higher point correlation functions and identify the general form of the key
functional equations for the overlaps in this case.
\bigskip
\noindent{\bf Acknowledgments:} We would like to thank Pedro Vieira and
Amit Sever for initial collaboration, and Volodya Kazakov, Kostya Zarembo,
Kolya Gromov, Tristan McLoughlin and Arkady Tseytlin for interesting
discussions. RJ was supported by Polish science funds as a research
project N N202 105136 (2009-2012).
\chapter{The apeirogon limit} \label{chap:Apeirogon}
In chapter \ref{chap:IntegrableLimit} we introduced the piecewise-affine Hamiltonian $\mathscr{P}$,
which describes the behaviour of the rescaled discretised rotation $F_{\lambda}$ in the integrable limit ($\lambda\rightarrow 0$).
We saw that orbits of the flow $\varphi$ associated with $\mathscr{P}$ are convex polygons
(theorem \ref{thm:Polygons}, page \pageref{thm:Polygons}),
which have a natural classification indexed by the set of critical numbers $\mathscr{E}$.
In this chapter we consider the behaviour of the Hamiltonian system at infinity,
where the index $e\in\mathscr{E}$ diverges.
In this limit the number of discontinuities experienced by an orbit
(i.e., the number of vertices of the polygons) is unbounded,
whilst the magnitude of these discontinuities becomes arbitrarily small.
We find that the limiting behaviour is a dichotomy.
Typically the limiting flow is linear, like the underlying rigid rotation:
however, for a certain subsequence of values of $e$ the nonlinearity persists.
We focus on the return map of the flow introduced in section \ref{sec:IntegrableReturnMap}:
the integrable counterpart to the return map $\Phi$.
\section{A change of coordinates} \label{sec:cylinder_coordinates}
In section \ref{sec:IntegrableReturnMap}, we remarked that it is natural
to think of the first return map of the flow as a twist map on a cylinder.
To formalise this description, we need to make a change of coordinates.
Recall the domain $\mathscr{X}$ of the integrable return map, introduced in equation (\ref{eq:cX}), page \pageref{eq:cX}.
By analogy with the perturbed case (cf. section \ref{sec:RegularDomains}, page \pageref{def:regular}),
we call a point $z\in\mathscr{X}$ with $\mathscr{P}(z)\in\mathscr{I}^e$ \defn{regular} with respect to the flow if
\begin{equation*}
\varphi^{\lambda}(z) = z + \lambda\mathbf{w}(z) = z + \lambda\mathbf{w}_{v_1,v_1}.
\end{equation*}
(Recall that regular points $z\in X^e$ satisfy $F_{\lambda}^4(z)=z + \lambda\mathbf{w}_{v_1,v_1}$, among other things.)
Then we can define the sequence of sets:
\begin{equation} \label{def:cXe}
\mathscr{X}^e = \{ z\in\mathscr{X} \, : \; \mathscr{P}(z) \in I^e \},
\end{equation}
where $I^e\subset \mathscr{I}^e$ is the largest interval such that all points in $\mathscr{X}^e$ are regular.
As in the perturbed case, it is straightforward to show that the union
of the $\mathscr{X}^e$ has full density in $\mathscr{X}$.
Note that the domain $X^e$ is a subset of $\mathscr{X}^e$, since
$$ z\in X^e \quad \Rightarrow \quad z\notin\Lambda \quad \Rightarrow \quad \varphi^{\lambda}(z) = z + \lambda\mathbf{w}_{v_1,v_1}. $$
To compare the actions of the unperturbed return map for varying values of $e$, we define the two-parameter family of maps:
$$ \eta^e(\lambda): \mathscr{X}^e \rightarrow \mathbb{S}^1 \times \mathbb{R} \hskip 40pt e\in\mathscr{E}, \; \lambda>0. $$
The map $\eta^e(\lambda)$ is a change of coordinates $z=(x,y)\mapsto (\theta,\rho)$, with
\begin{equation} \label{def:rho_theta}
\theta(z) = \frac{1}{\lambda} \, \frac{x-y}{2(2v_1+1)} \hskip 20pt \rho(z) = \frac{1}{\lambda} \, \frac{x+y-2x_0}{2(2v_1+1)},
\end{equation}
where $v_1 = \fl{\sqrt{e/2}}$ and $z_0=(x_0,x_0)\in\mathscr{X}^e$ is some fixed point of the return map lying on $\Fix{G}$. This change of coordinates is the composition of several elements: a rotation through an angle $\pi/4$, which maps the symmetry line $\Fix{G}$ onto the $\rho$-axis; a rescaling of the plane by a factor of $1/(\sqrt{2}\lambda(2v_1+1))$, which normalises the range of the coordinate $\theta$; and a translation, which ensures that the preimage $z_0$ of the origin $(\theta,\rho)=(0,0)$ is a fixed point. Such a fixed point is guaranteed to exist for sufficiently small $\lambda$: by proposition \ref{prop:cPhi(z)}, we may take any $z_0\in\Fix{G}$ satisfying:
\begin{equation} \label{eq:z_0}
\frac{1}{4} - \frac{\mathscr{T}(z_0)}{4\lambda} \equiv 0 \mod{ 1}.
\end{equation}
In what follows, we omit the $\lambda$ dependence and simply write $\eta^e$.
For $z,\varphi^{\lambda t}(z)\in\mathscr{X}^e$, the flow acts as:
$$ \varphi^{\lambda t}(z) = z + \lambda t\mathbf{w}_{v_1,v_1}, $$
where $\mathbf{w}_{v_1,v_1}$ is perpendicular to $\Fix{G}$,
so that the coordinate $\theta$ is parallel to the direction of flow, and $\rho$ is perpendicular to it:
\begin{equation*}
\varphi^{\lambda t}(\theta, \rho) = (\theta + t, \rho).
\end{equation*}
(We use the symbol $\varphi$ to denote the flow in both coordinate spaces.)
Thus if we identify the interval $[-1/2,1/2)$ with the unit circle $\mathbb{S}^1$, it is straightforward to see that the coordinate $\theta$ plays the same role as in the expression (\ref{eq:cX}) for $\mathscr{X}$.
In the following theorem, we show that under this change of coordinates, the unperturbed return map acts as a linear twist map.
\begin{theorem} \label{thm:Omega_e}
For $e\in\mathscr{E}$, let $\Omega^e$ be the map
\begin{equation} \label{def:Omega^e}
\Omega^e : \mathbb{S}^1 \times \mathbb{R} \rightarrow \mathbb{S}^1 \times \mathbb{R}
\hskip 40pt
\Omega^e(\theta,\rho) = \left( \theta - \frac{1}{2} (2v_1+1)^2\rho\mathscr{T}^{\prime}(e) , \rho \right).
\end{equation}
Furthermore, let $z\in\mathscr{X}^e$, and let $z^{\prime}$ be the first return of $z$ to $\mathscr{X}^e$ under $\mathscr{F}_{\lambda}$.
Then for any sufficiently small $\lambda>0$:
$$ \eta^e(z)=(\theta,\rho) \hskip 20pt \Rightarrow \hskip 20pt \eta^e (z^{\prime}) = \Omega^e(\theta,\rho). $$
\end{theorem}
Here the derivative $\mathscr{T}^{\prime}$ of the period function is undefined on the critical numbers,
and constant on the sequence of sets $\mathscr{I}^e$ (see equation (\ref{eq:Tprime(alpha)})).
Hence we write $\mathscr{T}^{\prime}(e)$ to denote the value of $\mathscr{T}^{\prime}$ on $\mathscr{I}^e$:
$$ \mathscr{T}^{\prime}(e) = \lim_{\alpha\rightarrow e^+} \mathscr{T}^{\prime}(\alpha). $$
\begin{proof}
For $e\in\mathscr{E}$, pick $z\in\mathscr{X}^e$ and let $(\theta,\rho)=\eta^e(z)$.
Then we have $z=\varphi^{\lambda\theta}(u,u)$ and $z^{\prime}=\varphi^{\lambda\theta^{\prime}}(u,u)$
for some $u\geq 0$, where $\theta^{\prime}$ is given by equation (\ref{eq:theta_prime}) of proposition \ref{prop:cPhi(z)}.
In the new coordinates, this is equivalent to
$$ \eta^e (z^{\prime}) = \left( \theta + \frac{1}{4} - \frac{\mathscr{T}(z)}{4\lambda} , \rho \right), $$
where we have used the fact that the coordinate $\theta$ is periodic.
Let $z=(x,y)$ and $z_0=(x_0,y_0)$.
Since the Hamiltonian function $\mathscr{P}$ is affine on $\mathscr{X}^e$, expanding $\mathscr{P}$ about $z_0$ gives
\begin{align}
\mathscr{P}(z) &= \mathscr{P}(z_0) + (x+y-2x_0)(2v_1+1) \nonumber \\
&= \mathscr{P}(z_0) + 2\lambda(2v_1+1)^2\rho. \label{eq:cP(z)_rho_expansion}
\end{align}
Similarly, since $\mathscr{T}$ is affine on the interval $\mathscr{I}^e$:
$$ \mathscr{T}(z) = \mathscr{T}(z_0) + 2\lambda(2v_1+1)^2\rho\mathscr{T}^{\prime}(e). $$
As $z_0$ is a fixed point, it must satisfy (\ref{eq:z_0}).
Thus we obtain
\begin{align*}
\eta^e (z^{\prime}) &= \left( \theta + \frac{1}{4} - \frac{\mathscr{T}(z_0) + 2\lambda(2v_1+1)^2\rho\mathscr{T}^{\prime}(e)}{4\lambda} , \rho \right) \\
&= \left( \theta - \frac{1}{2} (2v_1+1)^2\rho\mathscr{T}^{\prime}(e) , \rho \right),
\end{align*}
which completes the proof.
\end{proof}
\begin{figure}[t]
\centering
\includegraphics[scale=1]{TikZ/Omega_e}
\caption{A schematic representation of the action of $\Omega^e$ on the space $\mathbb{S}^1\times\mathbb{R}$}
\end{figure}
Note that the action of the conjugate map $\Omega^e$ is independent of the parameter $\lambda$,
which appears only in the domain $\eta^e(\mathscr{X}^e)$ in which the conjugate map is valid.
Using the definition (\ref{def:cXe}) of the domain $\mathscr{X}^e$ and the above expansion
(\ref{eq:cP(z)_rho_expansion}) of $\mathscr{P}$ about the fixed point $z_0$,
we see that this domain is given by
\begin{displaymath}
\eta^e(\mathscr{X}^e) = \{ (\theta,\rho)\in\mathbb{S}^1 \times \mathbb{R} \, : \; \mathscr{P}(z_0) + 2\lambda(2v_1+1)^2\rho \in I^e \}.
\end{displaymath}
The range of values of $\rho$ in the domain grows like
\begin{displaymath}
\frac{|I^e|}{2\lambda(2v_1+1)^2} = \frac{|\mathscr{I}^e|}{2\lambda(2v_1+1)^2} + O(1)
\end{displaymath}
as $\lambda\rightarrow 0$, and in the integrable limit, we think of the conjugate map as valid on the whole of $\mathbb{S}^1\times\mathbb{R}$.
The map $\Omega^e$ is a linear twist map, and we denote the twist by
\begin{equation} \label{def:K(e)}
K(e) = -\frac{1}{2} (2v_1+1)^2 \mathscr{T}^{\prime}(e) \hskip 40pt e\in\mathscr{E}.
\end{equation}
Each $\Omega^e$ is reversible, and can be written as the composition of the involutions $\mathscr{G}$ and $\mathscr{H}^e$, where
\begin{equation*}
\mathscr{G}(\theta,\rho) = (-\theta,\rho) \hskip 40pt \mathscr{H}^e(\theta,\rho) = \left( -\theta + K(e)\rho , \rho \right).
\end{equation*}
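Indeed, since (\ref{def:Omega^e}) and (\ref{def:K(e)}) give
$\Omega^e(\theta,\rho)=(\theta+K(e)\rho,\rho)$, the factorisation can be
verified in one line:
\begin{equation*}
(\mathscr{H}^e \circ \mathscr{G})(\theta,\rho) = \mathscr{H}^e(-\theta,\rho)
= \left( \theta + K(e)\rho , \rho \right) = \Omega^e(\theta,\rho).
\end{equation*}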
The fixed spaces of these involutions are given by
\begin{align}
\Fix{\mathscr{G}} &= \{ (\theta, \rho) \in \mathbb{S}^1\times\mathbb{R} \, : \; \theta\in\{-1/2,0\} \}, \nonumber \\
\Fix{\mathscr{H}^e} &= \{ (\theta, \rho) \in \mathbb{S}^1\times\mathbb{R} \, : \; \theta=\frac{1}{2} \, K(e)\rho \}. \label{eq:Fix(cH)}
\end{align}
(Note the connection between the involution $\mathscr{G}$ and the reversing symmetry $G^e$ of the perturbed return map $\Phi$, which was introduced in section \ref{sec:MainTheorems}.)
The map $\Omega^e$ is also reversible with respect to the reflection $(\theta,\rho)\mapsto(\theta,-\rho)$,
and equivariant under the group generated by the translation
\begin{equation} \label{eq:rho_bar}
\rho \mapsto \rho + \bar{\rho} \hskip 40pt \bar{\rho} = \frac{1}{K(e)} = \frac{-2}{(2v_1+1)^2\mathscr{T}^{\prime}(e)}.
\end{equation}
Each circle $\rho=$ const. is invariant under $\Omega^e$,
and motion restricted to this circle is a rotation with rotation number $\rho/\bar{\rho}$ (mod $1$).
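For illustration, the twist dynamics can be simulated directly; a minimal
sketch (standard Python; the circle $\mathbb{S}^1$ is modelled as the interval
$[-1/2,1/2)$, and the twist value and initial condition are arbitrary samples):
\begin{verbatim}
def omega(theta, rho, K):
    # one step of the linear twist map on the cylinder S^1 x R,
    # with S^1 identified with [-1/2, 1/2)
    return (theta + K*rho + 0.5) % 1.0 - 0.5, rho

K = 4.0                    # sample twist value
theta, rho = 0.0, 0.125    # rotation number K*rho = 1/2
for _ in range(8):
    theta, rho = omega(theta, rho, K)
print(theta, rho)          # the orbit on this circle is 2-periodic
\end{verbatim}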
\section{The limiting dynamics} \label{sec:limiting_dynamics} \label{SEC:LIMITING_DYNAMICS}
Now we are in a position to study the dynamics of the sequence of maps $\Omega^e$ in the limit $e\rightarrow\infty$.
As in section \ref{sec:IntegrableReturnMap}, where we studied the nonlinearity of the flow,
we turn our attention to the behaviour of the period function $\mathscr{T}$.
\subsection*{The period function at infinity}
We wish to study the behaviour of the period $\mathscr{T}(\alpha)$ of the Hamiltonian flow
$\varphi$ in the limit $\alpha\rightarrow \infty$.
As one would expect, the period of the piecewise-affine flow converges to the period $\pi$ of the underlying rotation.
However, the period undergoes damped oscillations and, after a suitable rescaling to restore these oscillations,
we find that the deviation of the period from its asymptotic value converges to a limiting functional form.
We give this limiting form in the following theorem.
\begin{theorem} \label{thm:T_asymptotics}
Let $b\in[0,1)$, and let $\alpha=\alpha(v_k,b)$ be given by:
\begin{equation} \label{def:b}
\alpha = (v_k+b)^2 \hskip 40pt v_k\in\mathbb{N},
\end{equation}
so that $b$ is the fractional part of $\sqrt{\alpha}$ and $v_k$ is the integer part.
Then as $v_k\rightarrow \infty$:
\begin{equation} \label{eq:T_asymptotics}
v_k^{3/2} \left(\frac{\mathscr{T}(\alpha)-\pi}{4}\right) \rightarrow \frac{1}{3}(2b+1)^{3/2} - \sqrt{2b} - \epsilon(b),
\end{equation}
where $\epsilon$ is a bounded function.
\end{theorem}
In what follows, we will give an expression for the function $\epsilon$ and bound its range explicitly,
but for now we think of it as an error term which is small but non-vanishing as $\alpha\rightarrow \infty$.
The first two terms are sufficient to give an accurate qualitative description of the function (see figure \ref{fig:cT(alpha)}).
Furthermore, we claim without proof that the convergence in theorem \ref{thm:T_asymptotics} is uniform in $b$.
\begin{figure}[!h]
\centering
\includegraphics[scale=0.4]{Graphics/cT}
\caption{This figure plots the function $v_k^{3/2} \left(\mathscr{T}(\alpha)-\pi\right)/4$ (solid line) against $\sqrt{\alpha}$ for $\sqrt{\alpha}\in[100,101)$, i.e., for $v_k=100$ and $b\in[0,1)$. The dotted line shows the function $\frac{1}{3}(2b+1)^{3/2} - \sqrt{2b}$.}
\label{fig:cT(alpha)}
\end{figure}
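The convergence in theorem \ref{thm:T_asymptotics} is easy to probe
numerically from the explicit formula (\ref{eq:cT(alpha)_II}) recalled below.
A minimal sketch (standard Python with NumPy; the values of $v_k$ and $b$ are
arbitrary samples):
\begin{verbatim}
import numpy as np

def P_inv(x):
    # P^{-1}(x) = sqrt(x) - {sqrt(x)}(1-{sqrt(x)})/(2 floor(sqrt(x)) + 1)
    s = np.sqrt(x)
    frac = s - np.floor(s)
    return s - frac*(1.0 - frac)/(2.0*np.floor(s) + 1.0)

def T(alpha):
    # the period function, via the formula for T(alpha)/8
    v1 = int(np.floor(np.sqrt(alpha/2.0)))
    vk = int(np.floor(np.sqrt(alpha)))
    n = np.arange(v1 + 1, vk + 1)
    s = (P_inv(alpha/2.0)/(2*v1 + 1)
         - 2.0*np.sum(P_inv(alpha - n**2)/(4.0*n**2 - 1.0)))
    return 8.0*s

vk, b = 10**4, 0.3
alpha = (vk + b)**2
lhs = vk**1.5 * (T(alpha) - np.pi)/4.0
rhs = (2*b + 1)**1.5/3.0 - np.sqrt(2*b)
print(lhs, rhs)   # differ by the bounded error term epsilon(b)
\end{verbatim}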
We prove this theorem via a number of lemmas, the proofs of which we postpone until the next section.
To begin, we consider the formula for the period function $\mathscr{T}(\alpha)$,
as given in proposition \ref{prop:T(alpha)} (page \pageref{prop:T(alpha)}):
\begin{equation} \label{eq:cT(alpha)_II}
\frac{\mathscr{T}(\alpha)}{8} = \frac{P^{-1}(\alpha/2)}{2v_1+1} -2 \sum_{n=v_1+1}^{v_k} \frac{P^{-1}(\alpha - n^2)}{4n^2-1}.
\end{equation}
The function $P^{-1}$, defined in (\ref{def:Pinv}), admits the alternative expression:
\begin{equation} \label{eq:Pinv2}
P^{-1}(x) = \sqrt{x} - \frac{\{\sqrt{x}\}(1-\{\sqrt{x}\})}{2\fl{ \sqrt{x} } + 1},
\end{equation}
where $\{ x \} = x-\fl{x}$ represents the fractional part of a real number $x$.
For large argument, $P^{-1}$ is well approximated by a square-root.
We use this fact to approximate the summand in (\ref{eq:cT(alpha)_II}).
\begin{lemma} \label{lemma:f_sum} \label{LEMMA:F_SUM}
As $\alpha\rightarrow\infty$, we have:
\begin{equation} \label{eq:sqrt_estimate}
\sum_{n=v_1+1}^{v_k} \left( \frac{P^{-1}(\alpha - n^2)}{4n^2-1} - \frac{\sqrt{\alpha - n^2}}{4n^2} \right) = O\left(\frac{1}{\alpha}\right),
\end{equation}
where $v_1=\fl{\sqrt{\alpha/2}}$ and $v_k=\fl{\sqrt{\alpha}}$.
\end{lemma}
Then we approximate the sum in equation (\ref{eq:sqrt_estimate}) with an integral.
To do this, we note that for any function $f$ which is integrable on the interval $[v_1,v_k]$, we can re-write the sum over $f$ as:
\begin{equation} \label{eq:sum_int_expansion}
\sum_{n=v_1+1}^{v_k} f(n) = \int_{v_1+1/2}^{v_k-1/2} f(x) \,dx + f(v_k) + \sum_{n=v_1+1}^{v_k-1} \int_{n-1/2}^{n+1/2} f(n) - f(x) \, dx.
\end{equation}
All but one of the terms in the sum are approximated by an integral, with the sum over integrals constituting the error in this approximation.
The remaining term---$f(v_k)$---cannot be approximated in this way, since the domain of $f$ does not extend over the whole of the interval $[v_k-1/2,v_k+1/2]$.
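Note that the identity (\ref{eq:sum_int_expansion}) is exact, by a telescoping
argument: the intervals $[n-1/2,n+1/2]$ with $v_1<n<v_k$ tile $[v_1+1/2,v_k-1/2]$, so that
\begin{equation*}
\sum_{n=v_1+1}^{v_k-1} \int_{n-1/2}^{n+1/2} f(n) - f(x) \, dx
= \sum_{n=v_1+1}^{v_k-1} f(n) - \int_{v_1+1/2}^{v_k-1/2} f(x) \, dx,
\end{equation*}
and adding the integral and the term $f(v_k)$ recovers the original sum.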
We apply this formula to $f(x) = x^{-2}\sqrt{\alpha-x^2}$. Recall from (\ref{def:b}) that $b$ denotes the fractional part of $\sqrt{\alpha}$. We write $a$ for the fractional part of $\sqrt{\alpha/2}$:
\begin{equation} \label{def:a}
\frac{\alpha}{2} = (v_1+a)^2 \hskip 40pt a\in[0,1),
\end{equation}
which gives the following expression for the behaviour of the integral in (\ref{eq:sum_int_expansion}).
\begin{lemma} \label{lemma:f_integral} \label{LEMMA:F_INTEGRAL}
As $\alpha\rightarrow\infty$, we have:
\begin{equation*}
\int_{v_1+1/2}^{v_k-1/2} \frac{\sqrt{\alpha - x^2}}{x^2} \, dx
= 1 - \frac{\pi}{4} + \frac{2a-1}{\sqrt{2\alpha}} - \frac{1}{3}\frac{(2b+1)^{3/2}}{\alpha^{3/4}} + O\left(\frac{1}{\alpha}\right),
\end{equation*}
where $a$ and $b$ denote the fractional parts of $\sqrt{\alpha/2}$ and $\sqrt{\alpha}$, respectively.
\end{lemma}
Finally we define the function $\epsilon(b)$ as the rescaled limit of the error term in the expression (\ref{eq:sum_int_expansion}).
We defer the proof of lemma \ref{lemma:epsilon_bounds} to appendix \ref{chap:Appendix}.
\begin{lemma} \label{lemma:epsilon_bounds}
For $b\in[0,1)$ and $v_k\in\mathbb{N}$, let $v_1=\fl{(v_k+b)/\sqrt{2}}$.
Then the following limit exists
\begin{equation} \label{eq:epsilon(b)}
\epsilon(b) = \lim_{v_k\rightarrow\infty} \left( v_k^{3/2} \; \sum_{n=v_1+1}^{v_k-1}
\int_{n-1/2}^{n+1/2} \frac{\sqrt{(v_k+b)^2 - n^2}}{n^2} - \frac{\sqrt{(v_k+b)^2 - x^2}}{x^2} \, dx \right),
\end{equation}
and satisfies
\begin{equation*}
\frac{1}{36} \, \frac{1}{\sqrt{3(b+1)}} \leq \epsilon(b) \leq \frac{1}{12} \, \frac{1}{\sqrt{b+1}} \, \frac{2b+3}{2b+2}.
\end{equation*}
\end{lemma}
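A finite-$v_k$ truncation of the limit (\ref{eq:epsilon(b)}) gives a quick
numerical estimate of $\epsilon(b)$. A sketch (standard Python with
NumPy/SciPy; the sample values of $b$ and $v_k$ are arbitrary):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def eps_estimate(b, vk):
    # truncation of the limit defining epsilon(b) at finite v_k
    v1 = int(np.floor((vk + b)/np.sqrt(2.0)))
    f = lambda x: np.sqrt((vk + b)**2 - x*x)/x**2
    total = 0.0
    for n in range(v1 + 1, vk):
        I, _ = quad(f, n - 0.5, n + 0.5)
        total += f(float(n)) - I
    return vk**1.5 * total

b = 0.5
print(eps_estimate(b, 400), eps_estimate(b, 1600))
# both values should fall within the stated bounds
\end{verbatim}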
Using these three lemmas, we proceed with the proof of theorem \ref{thm:T_asymptotics}.
\begin{proof}[Proof of Theorem \ref{thm:T_asymptotics}]
The period $\mathscr{T}(\alpha)$ of the flow on $\Pi(\alpha)$ is given by equation (\ref{eq:cT(alpha)_II}):
combining this with lemma \ref{lemma:f_sum}, we have that $\mathscr{T}(\alpha)$ satisfies
\begin{equation} \label{eq:cT(alpha)_asymptotics}
\frac{\mathscr{T}(\alpha)}{4} = \frac{2P^{-1}(\alpha/2)}{2v_1+1} - \sum_{n=v_1+1}^{v_k} \frac{\sqrt{\alpha - n^2}}{n^2} + O\left(\frac{1}{\alpha}\right)
\end{equation}
as $\alpha\rightarrow\infty$.
To evaluate the first term of the above, we recall the definition (\ref{def:a}) of $v_1$ and $a$ as the integer and fractional parts of $\sqrt{\alpha/2}$, respectively, and apply the formula (\ref{eq:Pinv2}) for $P^{-1}$ to obtain:
\begin{align*}
\frac{P^{-1}(\alpha/2)}{2v_1+1} &= \frac{\sqrt{\alpha/2}}{2v_1+1} + O\left(\frac{1}{\alpha}\right) \\
&= \frac{v_1+a}{2v_1+1} + O\left(\frac{1}{\alpha}\right) \\
&= \frac{1}{2} + \frac{a-1/2}{\sqrt{2\alpha}} + O\left(\frac{1}{\alpha}\right).
\end{align*}
For the sum, we apply the formula (\ref{eq:sum_int_expansion}) to $f(x) = x^{-2}\sqrt{\alpha-x^2}$,
then use lemma \ref{lemma:f_integral} to get
\begin{align*}
\sum_{n=v_1+1}^{v_k} \frac{\sqrt{\alpha - n^2}}{n^2}
&= 1 - \frac{\pi}{4} + \frac{2a-1}{\sqrt{2\alpha}} - \frac{1}{3}\frac{(2b+1)^{3/2}}{\alpha^{3/4}} + \frac{\sqrt{\alpha - v_k^2}}{v_k^2} \\
&\hskip 40pt + \sum_{n=v_1+1}^{v_k-1} \int_{n-1/2}^{n+1/2} \frac{\sqrt{\alpha - n^2}}{n^2} - \frac{\sqrt{\alpha - x^2}}{x^2} \, dx + O\left(\frac{1}{\alpha}\right).
\end{align*}
Using the definition (\ref{def:b}) of $b$, we observe that the $f(v_k)$ term behaves like:
\begin{align*}
\frac{\sqrt{\alpha - v_k^2}}{v_k^2} &= \frac{\sqrt{2v_k b + b^2}}{v_k^2} \\
&= \frac{\sqrt{2b}}{v_k^{3/2}} + O\left( \frac{1}{\alpha^{5/4}}\right) .
\end{align*}
Substituting the above three expressions into (\ref{eq:cT(alpha)_asymptotics}),
we find that the terms involving $a$ cancel, giving
\begin{align*}
\frac{\mathscr{T}(\alpha)-\pi}{4} &= \frac{1}{3}\frac{(2b+1)^{3/2}}{\alpha^{3/4}} - \frac{\sqrt{2b}}{v_k^{3/2}} - \sum_{n=v_1+1}^{v_k-1} \int_{n-1/2}^{n+1/2} \frac{\sqrt{\alpha - n^2}}{n^2} - \frac{\sqrt{\alpha - x^2}}{x^2} \, dx + O\left(\frac{1}{\alpha}\right).
\end{align*}
We can replace the $\alpha^{3/4}$ in the denominator of the first term by $v_k^{3/2}$, since
$$ \alpha^{-3/4} = (v_k+b)^{-3/2} = v_k^{-3/2} \left( 1 + O\left(\frac{1}{\sqrt{\alpha}}\right) \right). $$
Then multiplying by $v_k^{3/2}$, taking the limit, and noting the definition (\ref{eq:epsilon(b)}) of $\epsilon(b)$ gives the formula (\ref{eq:T_asymptotics}), as required.
The boundedness of $\epsilon(b)$ follows from lemma \ref{lemma:epsilon_bounds}.
\end{proof}
The key feature of the limiting form (\ref{eq:T_asymptotics}) is the singularity in its derivative at $b=0$,
since it is the derivative of the period function which determines the behaviour of the integrable return map.
\subsection*{The integrable return map at infinity}
Recall the map $\Omega^e$ of theorem \ref{thm:Omega_e}:
a linear twist map whose twist $K(e)$ (equation (\ref{def:K(e)}))
determines the nonlinearity of the integrable flow $\varphi$ in $\mathscr{X}^e$.
In this section we show that the twist is singular in the limit $e\rightarrow\infty$,
and thus that there are two distinct regimes of asymptotic behaviour.
\begin{proposition} \label{prop:Tprime_asymptotics}
Let $b\in[0,1)$ and let $\alpha=\alpha(v_k,b)$ be given as in (\ref{def:b}).
Then as $v_k\rightarrow \infty$:
\begin{equation} \label{eq:Tprime_asymptotics}
\frac{1}{2} (2v_1+1)^2 \mathscr{T}^{\prime}(\alpha(v_k,b)) \rightarrow -4\delta_0(b),
\end{equation}
where $v_1=\fl{(v_k+b)/\sqrt{2}}$, and $\delta_0$ is the indicator function at zero:
$$ \delta_0(x) = \left\{ \begin{array}{ll} 1 \quad & x=0, \\ 0 \quad & \mathrm{otherwise.}\end{array} \right. $$
\end{proposition}
The convergence (\ref{eq:Tprime_asymptotics}) is not uniform:
if $b$ is close to zero, the convergence can be made arbitrarily slow.
For a plot of $(2v_1+1)^2 \mathscr{T}^{\prime}(\alpha)/2$, see figure \ref{fig:rho_bar}(a), page \pageref{fig:rho_bar}.
The function $\mathscr{T}^{\prime}$ is piecewise-constant on the sequence of intervals $\mathscr{I}^e$, $e\in\mathscr{E}$.
To observe the convergence (\ref{eq:Tprime_asymptotics}) in the sequence of twists $K(e)$ for a given value of $b\in[0,1)$,
we define the subsequence of critical numbers $e(v_k,b)$ satisfying
\begin{equation} \label{def:e(vk,b)}
\alpha(v_k,b)=(v_k+b)^2\in \mathscr{I}^{e(v_k,b)} \hskip 40pt v_k\in\mathbb{N}.
\end{equation}
Then as $v_k\to\infty$, $K(e(v_k,b))\to 4\delta_0(b)$, as above.
The two regimes of behaviour for the sequence of return maps $\Omega^e$ follow directly.
\begin{corollary} \label{corollary:Omega_asymptotics}
Let $b\in[0,1)$ and let $e(v_k,b)$ be as above.
If $b=0$, then the sequence $e(v_k,b)$ is simply the sequence of squares,
and as $v_k\rightarrow\infty$, the sequence of functions $\Omega^{e(v_k,b)}$ converges pointwise to a limiting function $\Omega^{\infty}$ given by
\begin{equation*}
\Omega^{\infty}: \mathbb{S}^1\times\mathbb{R} \rightarrow \mathbb{S}^1\times\mathbb{R}
\hskip 40pt
\Omega^{\infty}(\theta,\rho) = \left( \theta + 4\rho, \rho \right).
\end{equation*}
If $b>0$, then the sequence $\Omega^{e(v_k,b)}$ converges pointwise to the identity.
\end{corollary}
The (reversing) symmetries of $\Omega^{e(v_k,b)}$ also converge as $v_k\rightarrow\infty$,
leading, for example, to an asymptotic version of $\mathscr{H}^e$.
Here we note in particular the translation invariance (\ref{eq:rho_bar}) of the sequence $\Omega^{e(v_k,b)}$, whose magnitude satisfies
\begin{equation} \label{eq:rho_bar_asymptotics}
|\bar{\rho}| \rightarrow \left\{ \begin{array}{ll} 1/4 & \quad b=0 \\ \infty & \quad b>0 \end{array} \right.
\end{equation}
as $v_k\rightarrow\infty$.
\section{Proofs for section \ref{sec:limiting_dynamics}}
In the following proofs, we make extensive use of Taylor's Theorem (see, for example, \cite[Theorem 4.82]{Burkill}).
\begin{proof}[Proof of lemma \ref{lemma:f_sum}]
For any $n$ in the range $v_1+1\leq n\leq v_k$, the alternative form (\ref{eq:Pinv2}) of the function $P^{-1}$ gives us that
\begin{align*}
\left| \, \frac{P^{-1}(\alpha-n^2)}{4n^2-1} -\frac{\sqrt{\alpha-n^2}}{4n^2} \, \right|
&= \left| \, \frac{\sqrt{\alpha - n^2}}{4n^2(4n^2-1)} -
\frac{\{\sqrt{\alpha - n^2}\}(1-\{\sqrt{\alpha - n^2}\})}{(4n^2-1)(2\fl{ \sqrt{\alpha - n^2} } + 1)} \, \right|\\
&\leq \frac{\sqrt{\alpha - n^2}}{4n^2(4n^2-1)} + \frac{1/4}{(4n^2-1)(2\fl{ \sqrt{\alpha - n^2} } + 1)},
\end{align*}
where the inequality follows from the triangle inequality, and from the observation that
$$ 0 \leq x(1-x) \leq 1/4 \hskip 40pt x\in[0,1]. $$
Furthermore, since $n\geq 1$, we have $4n^2-1 \geq 3n^2$, and hence
\begin{equation} \label{eq:f_sum_inequalityI}
\left| \, \frac{P^{-1}(\alpha-n^2)}{4n^2-1} -\frac{\sqrt{\alpha-n^2}}{4n^2} \, \right|
\leq \frac{1}{12n^2} \left( \frac{\sqrt{\alpha - n^2}}{n^2} + \frac{1}{2\fl{ \sqrt{\alpha - n^2} } + 1} \right).
\end{equation}
Now we have two cases. For $n=v_k$, the square root $\sqrt{\alpha - n^2}$ satisfies
$$ 0 \leq \sqrt{\alpha - v_k^2} < \sqrt{2v_k+1}, $$
so we observe from the inequality (\ref{eq:f_sum_inequalityI}) that
$$ \left| \, \frac{P^{-1}(\alpha-v_k^2)}{4v_k^2-1} -\frac{\sqrt{\alpha-v_k^2}}{4v_k^2} \, \right|
< \frac{1}{12v_k^2} \left( \frac{\sqrt{2v_k+1}}{v_k^2} + 1 \right) = O\left(\frac{1}{\alpha}\right). $$
For $n<v_k$, the square root satisfies
$$ 0 < \sqrt{\alpha - n^2} < 2\fl{\sqrt{\alpha - n^2}} + 1, $$
in which case the inequality (\ref{eq:f_sum_inequalityI}) gives
$$ \left| \, \frac{P^{-1}(\alpha-n^2)}{4n^2-1} -\frac{\sqrt{\alpha-n^2}}{4n^2} \, \right|
< \frac{1}{12n^2\sqrt{\alpha - n^2}} \left( \frac{\alpha - n^2}{n^2} + 1 \right) < \frac{1}{6n^2\sqrt{\alpha - n^2}}, $$
where the last inequality uses the fact that $n\geq v_1+1>\sqrt{\alpha/2}$.
Summing over $n$, we obtain
\begin{align} \label{eq:f_sum}
\left| \, \sum_{n=v_1+1}^{v_k} \left( \frac{P^{-1}(\alpha-n^2)}{4n^2-1} -\frac{\sqrt{\alpha-n^2}}{4n^2} \right) \, \right|
&\leq \sum_{n=v_1+1}^{v_k} \, \left| \, \frac{P^{-1}(\alpha-n^2)}{4n^2-1} -\frac{\sqrt{\alpha-n^2}}{4n^2} \, \right| \nonumber \\
&< \sum_{n=v_1+1}^{v_k-1} \, \left( \frac{1}{6n^2\sqrt{\alpha - n^2}} \right) + O\left(\frac{1}{\alpha}\right).
\end{align}
We can approximate the sum on the right hand side of (\ref{eq:f_sum}) with an integral:
\begin{align}
\sum_{n=v_1+1}^{v_k-1} \frac{1}{n^2\sqrt{\alpha - n^2}}
&= \int_{v_1+1/2}^{v_k-1/2} \frac{1}{x^2\sqrt{\alpha - x^2}} \left(1 + O\left(\frac{1}{\alpha^{1/4}}\right) \right) \, dx \nonumber \\
&= \frac{1}{\alpha} \, \Big[ \tan{\theta} \Big]_{\theta_2}^{\theta_1} \left(1 + O\left(\frac{1}{\alpha^{1/4}}\right) \right) \nonumber \\
&= \frac{1}{\alpha} \left(1 + O\left(\frac{1}{\alpha^{1/4}}\right) \right), \label{eq:Tprime_bound2}
\end{align}
where we have used the substitution $x=\sqrt{\alpha}\cos{\theta}$, and $\tan{\theta_1}$, $\tan{\theta_2}$ are given by
\begin{align*}
\tan{\theta_1} &= 1 + O\left(\frac{1}{\sqrt{\alpha}}\right) \\
\tan{\theta_2} &= O\left(\frac{1}{\alpha^{1/4}}\right)
\end{align*}
(see equations (\ref{eq:tan(theta1)}) and (\ref{eq:tan(theta2)}) of the next proof).
Thus, combining (\ref{eq:Tprime_bound2}) and (\ref{eq:f_sum}), we have
$$ \sum_{n=v_1+1}^{v_k} \left( \frac{P^{-1}(\alpha-n^2)}{4n^2-1} -\frac{\sqrt{\alpha-n^2}}{4n^2} \right)
= O\left(\frac{1}{\alpha}\right). $$
\end{proof}
\medskip
\begin{proof}[Proof of lemma \ref{lemma:f_integral}]
We treat the integral using the substitution $x=\sqrt{\alpha}\cos{\theta}$, which gives:
\begin{align} \label{eq:tan_integral}
\int_{v_1+1/2}^{v_k-1/2} \frac{\sqrt{\alpha-x^2}}{x^2} \, dx
&= \int_{\theta_2}^{\theta_1} \tan^2\theta \, d\theta \nonumber \\
&= \Big[ \tan\theta -\theta \, \Big]_{\theta_2}^{\theta_1},
\end{align}
where the limits $\theta_1$ and $\theta_2$ satisfy:
\begin{align}
\cos{\theta_1} &= \frac{v_1+1/2}{\sqrt{\alpha}} = \frac{1}{\sqrt{2}}\left( 1 - \frac{a-1/2}{\sqrt{\alpha/2}} \right), \label{eq:cos1}\\
\cos{\theta_2} &= \frac{v_k-1/2}{\sqrt{\alpha}} = 1 - \frac{b+1/2}{\sqrt{\alpha}}. \label{eq:cos2}
\end{align}
Using Taylor's theorem, we have that
$$ \cos^{-1}\left(\frac{1-x}{\sqrt{2}}\right) = \frac{\pi}{4} + x + O(x^2) $$
as $x\rightarrow0$. Thus if we let $x=(a-1/2)/\sqrt{\alpha/2}$, then (\ref{eq:cos1}) gives that
\begin{equation} \label{eq:theta1}
\theta_1 = \frac{\pi}{4} + \frac{a-1/2}{\sqrt{\alpha/2}} + O\left(\frac{1}{\alpha}\right)
\end{equation}
as $\alpha\rightarrow\infty$.
Next we have
$$ \frac{\cos^{-1}(1-x)}{\sqrt{2x}} = 1 + \frac{x}{12} + O(x^2) $$
as $x\rightarrow0$. Applying this to (\ref{eq:cos2}) with $x=(b+1/2)/\sqrt{\alpha}$ gives
\begin{equation} \label{eq:theta2}
\theta_2 = \frac{\sqrt{2b+1}}{\alpha^{1/4}}\left( 1 + \frac{1}{24} \frac{2b+1}{\sqrt{\alpha}} \right)
+ O\left(\frac{1}{\alpha^{5/4}}\right)
\end{equation}
as $\alpha\rightarrow\infty$.
Similarly, we find:
\begin{align}
\tan{\theta_1} &= 1 + \frac{2a-1}{\sqrt{\alpha/2}}
+ O\left(\frac{1}{\alpha}\right), \label{eq:tan(theta1)}\\
\tan{\theta_2} &= \frac{\sqrt{2b+1}}{\alpha^{1/4}}\left( 1 + \frac{3}{8} \frac{2b+1}{\sqrt{\alpha}} \right)
+ O\left(\frac{1}{\alpha^{5/4}}\right). \label{eq:tan(theta2)}
\end{align}
Substituting the expressions (\ref{eq:theta1}), (\ref{eq:theta2}), (\ref{eq:tan(theta1)}) and (\ref{eq:tan(theta2)}) into the integral (\ref{eq:tan_integral}) gives the required result.
\end{proof}
\medskip
\begin{proof}[Proof of proposition \ref{prop:Tprime_asymptotics}]
From the formula (\ref{eq:Tprime(alpha)}) for the derivative of the period function, if $\alpha=(v_k+b)^2$ then
\begin{equation} \label{eq:Tprime(alpha)_II}
\frac{1}{2} (2v_1+1)^2 \mathscr{T}^{\prime}(\alpha) = 2 -8(2v_1+1)^2 \sum_{n=v_1+1}^{v_k} \frac{1}{(4n^2-1)(2\fl{\sqrt{\alpha - n^2}}+1)}.
\end{equation}
(Recall that if $\alpha\in\mathscr{I}^e$ for some $e\in\mathscr{E}$, the floor function in the denominator of the summand satisfies $\fl{\sqrt{\alpha - n^2}} = \fl{\sqrt{e - n^2}}$.)
We consider the summand in (\ref{eq:Tprime(alpha)_II}). For $n$ in the range $v_1+1\leq n\leq v_k-1$, we have
\begin{equation*}
\frac{1}{(4n^2-1)(2\fl{\sqrt{\alpha - n^2}}+1)} = \frac{1}{8n^2\sqrt{\alpha - n^2}} \, \left( 1 - \frac{1}{4n^2} \right)^{-1} \left( 1 - \frac{\{\sqrt{\alpha - n^2}\}-1/2}{\sqrt{\alpha - n^2}} \right)^{-1}.
\end{equation*}
Here the square root is bounded below by
$$ \sqrt{\alpha - n^2} \geq \sqrt{v_k^2 - (v_k-1)^2} = \sqrt{2v_k -1} > \alpha^{1/4}, $$
whereas $n^2 >\alpha/2$. Hence as $\alpha\rightarrow\infty$, we have
\begin{equation*}
\frac{1}{(4n^2-1)(2\fl{\sqrt{\alpha - n^2}}+1)} = \frac{1}{8n^2\sqrt{\alpha - n^2}} \left(1 + O\left(\frac{1}{\alpha^{1/4}}\right) \right).
\end{equation*}
Summing over $n$, we can use (\ref{eq:Tprime_bound2}) from the previous proof to obtain
\begin{align*}
& 8(2v_1+1)^2 \sum_{n=v_1+1}^{v_k-1} \frac{1}{(4n^2-1)(2\fl{\sqrt{\alpha - n^2}}+1)} \\
&= (2v_1+1)^2 \sum_{n=v_1+1}^{v_k-1} \frac{1}{n^2\sqrt{\alpha - n^2}} \left(1 + O\left(\frac{1}{\alpha^{1/4}}\right) \right) \\
&= \frac{1}{\alpha} (2v_1+1)^2 \left(1 + O\left(\frac{1}{\alpha^{1/4}}\right) \right) \\
&= 2 + O\left(\frac{1}{\alpha^{1/4}} \right)
\end{align*}
as $\alpha\rightarrow\infty$. Thus substituting this into (\ref{eq:Tprime(alpha)_II}), we see that only the $n=v_k$ term remains:
\begin{align}
\frac{1}{2} (2v_1+1)^2 \mathscr{T}^{\prime}(\alpha)
&= \frac{-8(2v_1+1)^2}{(4v_k^2-1)\left(2\Bfl{\sqrt{\alpha-v_k^2}}+1\right)} + O\left(\frac{1}{\alpha^{1/4}} \right) \nonumber \\
&= \frac{-4}{2\fl{\sqrt{2v_kb+b^2}}+1} \left(1 + O\left(\frac{1}{\sqrt{\alpha}}\right) \right) + O\left(\frac{1}{\alpha^{1/4}} \right). \label{eq:Tprime_final_term}
\end{align}
If $b>0$, then
\begin{equation*}
\frac{1}{2\fl{\sqrt{2v_kb+b^2}}+1} \rightarrow 0
\end{equation*}
as $v_k\rightarrow\infty$.
Thus the remaining term in (\ref{eq:Tprime_final_term}) goes to zero and $(2v_1+1)^2 \mathscr{T}^{\prime}(\alpha)$ vanishes in the limit.
However, if $b=0$, then we have
$$ \frac{1}{2} (2v_1+1)^2 \mathscr{T}^{\prime}(\alpha) = -4 + O\left(\frac{1}{\alpha^{1/4}} \right). $$
\end{proof}
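The dichotomy of proposition \ref{prop:Tprime_asymptotics} can also be
observed numerically from the formula (\ref{eq:Tprime(alpha)_II}). A sketch
(standard Python with NumPy; the sample values are arbitrary):
\begin{verbatim}
import numpy as np

def half_scaled_Tprime(vk, b):
    # (1/2) (2 v1 + 1)^2 T'(alpha) for alpha = (vk + b)^2
    alpha = (vk + b)**2
    v1 = int(np.floor(np.sqrt(alpha/2.0)))
    n = np.arange(v1 + 1, vk + 1)
    s = np.sum(1.0/((4.0*n**2 - 1.0)
                    *(2.0*np.floor(np.sqrt(alpha - n**2)) + 1.0)))
    return 2.0 - 8.0*(2*v1 + 1)**2 * s

for vk in (100, 1000, 10000):
    print(vk, half_scaled_Tprime(vk, 0.0), half_scaled_Tprime(vk, 0.5))
    # first column -> -4 (b = 0), second column -> 0 (b > 0)
\end{verbatim}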
\chapter{Tedious proofs} \label{chap:Appendix}
\begin{prop_nonumber}[Proposition \ref{prop:octagon_orbits}, page \pageref{prop:octagon_orbits}]
For all $\lambda>0$ and $x\in\mathbb{N}$ in the range
\begin{equation} \label{eq:xrange1_II}
\frac{1}{2\lambda}+2 \leq x\leq \frac{1}{\lambda}-1,
\end{equation}
the orbit of $z=(x,x)$ under $F$ is symmetric and minimal if and only if
$$ 2x + \Bceil{\frac{1}{\lambda}} - 2\Bfl{\frac{1}{\lambda}} \equiv 2 \mod{3}. $$
\end{prop_nonumber}
\begin{proof}
As in the proof of proposition \ref{prop:square_orbits}, page \pageref{prop:square_orbits},
we begin by considering the fourth iterates of $F$.
From equation (\ref{eq:F4}), we have that
\begin{equation} \label{eq:box1}
F^4(x,y) = (x+1,y-1)
\hskip 40pt
0\leq x \leq \frac{1}{\lambda}-1, \quad
1\leq y \leq \frac{1}{\lambda}.
\end{equation}
A similar calculation reveals another set of lattice points
on which the fourth iterates of $F$ produce a uniform translation:
\begin{equation} \label{eq:box2}
F^4(x,y) = (x+1,y-3)
\hskip 40pt
\frac{1}{\lambda} \leq x < \frac{2}{\lambda}-1, \quad
3\leq y \leq \frac{1}{\lambda}+1.
\end{equation}
We use these two regimes of uniform behaviour to trace symmetric orbits from $\Fix{G}$ to $\Fix{H}$,
taking care at the boundaries between regimes.
We consider the orbit of $(x,x)$ with $x$ in the range (\ref{eq:xrange1_II}).
For any natural number $m$ satisfying
$$ x + (m-1) \leq \frac{1}{\lambda}-1, \hskip 40pt x - (m-1) \geq 1, $$
the behaviour (\ref{eq:box1}) gives us that
$$ F^{4m}(x,x) = (x+m,x-m). $$
Hence we let $m=\fl{1/\lambda}-x$, so that $F^{4m}(x,x)$ is given by
$$ F^{4m}(x,x) = \left( \Bfl{\frac{1}{\lambda}}, 2x - \Bfl{\frac{1}{\lambda}} \right), $$
and by the range (\ref{eq:xrange1_II}) of $x$:
$$ 4 \leq 2x - \Bfl{\frac{1}{\lambda}} \leq \frac{1}{\lambda}-1. $$
There are now two cases to consider.
If $\fl{1/\lambda}=1/\lambda$, i.e., if $1/\lambda\in\mathbb{N}$, then $F^{4m}(x,x)$ belongs to the set described by (\ref{eq:box2}),
in which case
$$ F^{4(m+1)}(x,x) = \left(\Bfl{\frac{1}{\lambda}}+1,2x - \Bfl{\frac{1}{\lambda}}-3 \right). $$
If not, then this point lies on the boundary between the regimes (\ref{eq:box1}) and (\ref{eq:box2}),
so we must calculate the behaviour of $F^4$ explicitly.
We find
$$ F^{4(m+1)}(x,x) = \left(\Bfl{\frac{1}{\lambda}}+1,2x - \Bfl{\frac{1}{\lambda}}-2 \right). $$
We summarise these two cases by writing
$$ F^{4(m+1)}(x,x) = \left(\Bfl{\frac{1}{\lambda}}+1,2x-3 +\Bceil{\frac{1}{\lambda}} - 2\Bfl{\frac{1}{\lambda}} \right). $$
Now the point $F^{4(m+1)}(x,x)$ is described by (\ref{eq:box2}), and for any natural number $n$ satisfying
$$ \Bfl{\frac{1}{\lambda}}+1 + (n-1) < \frac{2}{\lambda}-1,
\hskip 40pt
2x-3 +\Bceil{\frac{1}{\lambda}} - 2\Bfl{\frac{1}{\lambda}} -3(n-1) \geq 3, $$
we have
$$ F^{4(m+n+1)}(x,x) = \left( \Bfl{\frac{1}{\lambda}}+n+1,
2x +\Bceil{\frac{1}{\lambda}} - 2\Bfl{\frac{1}{\lambda}}-3(n+1) \right). $$
Hence we take
$$ n = \left\lfloor \frac{1}{3} \left( 2x -3 +\Bceil{\frac{1}{\lambda}} - 2\Bfl{\frac{1}{\lambda}} \right) \right\rfloor, $$
so that the $y$-coordinate of $F^{4(m+n+1)}(x,x)$ is given by
$$ 2x +\Bceil{\frac{1}{\lambda}} - 2\Bfl{\frac{1}{\lambda}}-3(n+1) = \delta, $$
where $\delta\in\{0,1,2\}$ is the residue of $2x +\ceil{1/\lambda} - 2\fl{1/\lambda}$ modulo $3$.
The point $F^{4(m+n+1)}(x,x)$ lies just above the positive $x$-axis. We apply $F^3$ to move the orbit close to the negative $y$-axis:
$$ F^{4(m+n+1)+3}(x,x) = \left( \delta - 3,
\fl{\lambda(1-\delta)} -\left( \Bfl{\frac{1}{\lambda}}+n+1 \right) \right). $$
The orbit is symmetric and minimal if and only if this point lies in $\Fix{H}$.
In this case, the relevant segment of $\Fix{H}$ is given by
$$ \left\{(x,y)\in\mathbb{Z}^2\,: \; x=-1, \; \fl{\lambda y}=-2 \right\}, $$
so the orbit is symmetric and minimal if and only if $\delta-3=-1$, or
$$ 2x +\Bceil{\frac{1}{\lambda}} - 2\Bfl{\frac{1}{\lambda}} \equiv 2 \mod{3}. $$
\end{proof}
\medskip
\begin{lemma_nonumber}[Lemma \ref{lemma:cP_variation}, page \pageref{lemma:cP_variation}]
Let $w\in\mathbb{R}^2$ and let $z=R_{\lambda}(w)$ be the lattice point in $(\lambda\Z)^2$ associated with $w$.
Then as $\lambda\to 0$:
$$ \forall \xi\in {\mathcal O}_{\tau}(z): \hskip 20pt |\mathscr{P}(\xi)-\mathscr{P}(w)| = O(\lambda). $$
\end{lemma_nonumber}
\begin{proof}
Let $r>0$ and $A(r,\lambda)$ be as in equation (\ref{eq:A}).
We begin by bounding the change in $\mathscr{P}$ under $F_{\lambda}^4$ in the set $A(r,\lambda)$.
By lemma \ref{lemma:Lambda} (page \pageref{lemma:Lambda}), we have that for sufficiently small $\lambda$,
all non-zero $z\in A(r,\lambda)\setminus\Lambda$ satisfy $F_{\lambda}^4(z) = z + \lambda\mathbf{w}(z)$.
For such $z$, there is no change in $\mathscr{P}$ under $F_{\lambda}^4$:
$$ \mathscr{P}(F_{\lambda}^4(z)) - \mathscr{P}(z) = 0 \hskip 40pt z\in A(r,\lambda)\setminus\Lambda. $$
If $z\in A(r,\lambda)\cap\Lambda$, then $F_{\lambda}^4(z) = z + \mathbf{v}(z)$,
where an explicit expression for $\mathbf{v}$ is given in equation (\ref{eq:v_abcd}), page \pageref{eq:v_abcd}.
For any $z,v\in\mathbb{R}^2$ we have
\begin{align}
\left| \mathscr{P}(z+v)- \mathscr{P}(z) \right| &= \left| \; \int_{[z,z+v]} \nabla\mathscr{P}(\xi)\cdot d\mathbf{\xi} \; \right| \nonumber \\
&\leq \max_{\xi\in[z,z+v]} \left( \|\nabla\mathscr{P}(\xi)\| \right) \, \|v\|, \label{eq:Delta_cP_bound}
\end{align}
where $[z,z+v]$ denotes the line segment joining the points $z$ and $z+v$, $d\mathbf{\xi}$ is
the line element tangent to this segment, and $\nabla\mathscr{P}$ is the gradient of $\mathscr{P}$, given by
$$ \nabla\mathscr{P}(x,y) = (2\fl{x} +1,2\fl{y} +1) \hskip 40pt
(x,y)\in \mathbb{R}^2\setminus \Delta. $$
If $z=\lambda(x,y)$ and $v=\mathbf{v}(z)$ is the discrete vector field, then for sufficiently small $\lambda$,
equations (\ref{eq:v_abcd}) and (\ref{eq:bd_ac_sets}) can be combined to give
\begin{align*}
\| \mathbf{v}(z) \|
&\leq \lambda \sqrt{(|2\fl{\lambda y}+1| +2)^2 + (|2\fl{\lambda x}+1| +1)^2} \\
&\leq \lambda \sqrt{(2|\fl{\lambda y}|+3)^2 + (2|\fl{\lambda x}|+2)^2} \\
&< \lambda \sqrt{(2|\lambda y|+5)^2 + (2|\lambda x|+4)^2} \\
&\leq \lambda\sqrt{2} (2\|z\|_{\infty} + 5).
\end{align*}
This inequality ensures that the length of the line segment $[z,z+v]$ goes to zero with $\lambda$,
so that for sufficiently small $\lambda$, the piecewise-constant form of the gradient $\nabla\mathscr{P}$ gives
\begin{align*}
\max_{\xi\in[z,z+v]} (\|\nabla\mathscr{P}(\xi)\|)
&\leq \sqrt{(|2\fl{\lambda x}+1| +2)^2 + (|2\fl{\lambda y}+1| +2)^2} \\
&\leq \sqrt{(2|\fl{\lambda x}|+3)^2 + (2|\fl{\lambda y}|+3)^2} \\
&\leq \sqrt{(2|\lambda x|+5)^2 + (2|\lambda y|+5)^2} \\
&\leq \sqrt{2} (2\|z\|_{\infty} + 5).
\end{align*}
Substituting these into the inequality (\ref{eq:Delta_cP_bound}), we have that for sufficiently small
$\lambda$:
$$ \left| \mathscr{P}(F_{\lambda}^4(z))- \mathscr{P}(z) \right| = \left|\mathscr{P}(z+\mathbf{v}(z))- \mathscr{P}(z) \right|
\leq 2\lambda (2\|z\|_{\infty} + 5)^2. $$
Similarly we consider the change in $\mathscr{P}$ under $F_{\lambda}$.
If $z=\lambda(x,y)$, then by the same sort of analysis, we have that for sufficiently small $\lambda$:
\begin{align*}
\left| \mathscr{P}(F_{\lambda}(z))- \mathscr{P}(z) \right| &= \left| P(\lambda(\fl{\lambda x} -y)) - P(\lambda y) \right|\\
&= \left| P(\lambda(y-\fl{\lambda x})) - P(\lambda y) \right| \\
&\leq \lambda |\fl{\lambda x}| \, (|P^{\prime}(\lambda y)|+2) \\
&\leq \lambda |\fl{\lambda x}| \, (2|\fl{\lambda y}| +3) \\
&\leq \lambda (|\lambda x| +1) \, (2|\lambda y| +5) \\
&\leq 2\lambda (\|z\|_{\infty}+3)^2,
\end{align*}
where $P$ is the piecewise-affine function defined in equation (\ref{def:P}).
(We refer the reader to page \pageref{thm:Polygons} for the proof that $P$ is even.)
It follows that for any orbit contained in $A(r,\lambda)$, if $k\in\mathbb{Z}_{\geq0}$ and $0\leq l<4$, then
\begin{equation} \label{eq:Delta_cP_bound_II}
\left| \mathscr{P}(F_{\lambda}^{4k+l}(z)) - \mathscr{P}(z) \right| \leq 2\lambda(m+l) (2\|z\|_{\infty} + 5)^2,
\end{equation}
where $m$ is the number of transition points in the orbit of $z$ under $F_{\lambda}^4$:
$$ m = \# \left( \{ z, F_{\lambda}^4(z), \dots, F_{\lambda}^{4k}(z) \} \cap \Lambda \right). $$
Similar expressions hold in backwards time, for iterates of $F_{\lambda}^{-4}$ and $F_{\lambda}^{-1}$.
For fixed $\lambda$, this estimate confines the perturbed orbit of a point $z\in(\lambda\Z)^2$ to a polygonal annulus
around the polygon $\Pi(z)$, whose thickness grows as the number of transition points in the orbit increases.
By construction, the return orbit of $z$ under $F_{\lambda}^4$ contains exactly one
transition point for every time the orbit passes from one of the boxes $B_{m,n}$ to another.
Furthermore, the fourth iterates of $F_{\lambda}$ move parallel to the flow within each box,
so that, per revolution, there is one transition point per box that the return orbit intersects.
This number is (essentially) equal to the number of sides of $\Pi(w)$, and does not scale with $\lambda$.
Hence, we have
$$ \left| \mathscr{P}(\xi) - \mathscr{P}(z) \right| =O(\lambda) $$
for all $\xi\in{\mathcal O}_{\tau}(z)$.
\end{proof}
\medskip
\begin{lemma_nonumber}[Lemma \ref{lemma:epsilon_bounds}, page \pageref{lemma:epsilon_bounds}]
For $b\in[0,1)$ and $v_k\in\mathbb{N}$, let $v_1=\fl{(v_k+b)/\sqrt{2}}$.
Then the following limit exists
\begin{equation} \label{eq:epsilon(b)_II}
\epsilon(b) = \lim_{v_k\rightarrow\infty} \left( v_k^{3/2} \; \sum_{n=v_1+1}^{v_k-1}
\int_{n-1/2}^{n+1/2} \frac{\sqrt{(v_k+b)^2 - n^2}}{n^2} - \frac{\sqrt{(v_k+b)^2 - x^2}}{x^2} \, dx \right),
\end{equation}
and satisfies
\begin{equation} \label{eq:epsilon_bounds_II}
\frac{1}{36} \, \frac{1}{\sqrt{3(b+1)}} \leq \epsilon(b) \leq \frac{1}{12} \, \frac{1}{\sqrt{b+1}} \, \frac{2b+3}{2b+2}.
\end{equation}
\end{lemma_nonumber}
\begin{proof}
For $n$ in the range $v_1+1 \leq n \leq v_k-1$, let
\begin{equation*}
I_n(v_k,b) = \int_{n-1/2}^{n+1/2} \frac{\sqrt{(v_k+b)^2-n^2}}{n^2} - \frac{\sqrt{(v_k+b)^2-x^2}}{x^2} \, dx.
\end{equation*}
Using the substitution $y=x-n$, we can write $I_n$ as
$$ I_n = \frac{\sqrt{(v_k+b)^2-n^2}}{n^2} \, \int_{-1/2}^{1/2} 1 - \left( 1 + \frac{y}{n} \right)^{-2}
\sqrt{1 - \frac{2ny+y^2}{(v_k+b)^2-n^2}} \, dy. $$
To simplify notation, we define the sequence
$$ A_n(v_k,b) = \frac{n}{(v_k+b)^2-n^2} \hskip 40pt v_1+1 \leq n \leq v_k-1, $$
which is increasing in $n$ and bounded according to
\begin{equation} \label{eq:An_range}
\frac{\sqrt{2}}{v_k+b} < A_n < \frac{1}{2}.
\end{equation}
Then $I_n$ becomes
\begin{equation} \label{eq:I_n_A_n}
I_n = \frac{1}{n^{3/2}\sqrt{A_n}} \,
\int_{-1/2}^{1/2} 1 - \left( 1 + \frac{y}{n} \right)^{-2} \sqrt{1 - 2A_ny -\frac{A_n y^2}{n}} \, dy.
\end{equation}
We expand the integrand of $I_n$ in powers of $1/n$,
retaining any terms which are order $1/n$ or larger.
Firstly, expanding the inverse power, we have
\begin{equation} \label{eq:I_n_expansion_I}
\left( 1 + \frac{y}{n} \right)^{-2} = 1 - \frac{2y}{n} + O\left(\frac{1}{n^2}\right) \hskip 40pt y\in[-1/2,1/2]
\end{equation}
as $n\rightarrow\infty$.
Then we tackle the square root by writing
$$ \sqrt{1 - 2A_ny -\frac{A_n y^2}{n}} = \sqrt{1 - 2A_n y} \, \sqrt{1 - \frac{A_n y^2}{n(1 - 2A_n y)}}. $$
The second of these factors can be expanded as follows:
\begin{equation} \label{eq:I_n_expansion_III}
\sqrt{1 - \frac{A_n y^2}{n(1 - 2A_n y)}} = 1 - \frac{A_n y^2}{2n(1 - 2A_n y)} + O\left(\frac{1}{n^2}\right) \hskip 40pt y\in[-1/2,1/2].
\end{equation}
The first factor, however, cannot be expanded in powers of $1/n$.
Instead, we use Taylor's Theorem (see, for example, \cite[Theorem 4.82]{Burkill}), applied to $f(x)=\sqrt{1+x}$ at $x=0$,
to obtain an explicit remainder term. This gives
\begin{equation} \label{eq:I_n_expansion_II}
\sqrt{1 - 2A_n y} = 1 - A_n y - R_2(y) \hskip 40pt y\in[-1/2,1/2],
\end{equation}
where $R_2$ is given by
\begin{equation} \label{eq:R2}
R_2(y) = \frac{A_n^2 y^2}{2(1 - 2\theta(y) A_n y)^{3/2}} \hskip 40pt \theta(y)\in(0,1).
\end{equation}
Thus, combining the expansions (\ref{eq:I_n_expansion_I}), (\ref{eq:I_n_expansion_III}), (\ref{eq:I_n_expansion_II}) and simplifying,
the integrand of $I_n$ is given by
\begin{align}
&1 - \left( 1 + \frac{y}{n} \right)^{-2} \sqrt{1 - 2A_ny -\frac{A_n y^2}{n}} \nonumber \\
&= A_n y + \frac{2y}{n} - \frac{3A_n y^2}{2n} + \frac{A_n^2 y^3}{2n(1-2A_n y)}
+ R_2(y) \left( 1 + O\left(\frac{1}{n}\right)\right) + O\left(\frac{1}{n^2}\right). \label{eq:I_n_integrand}
\end{align}
Now we integrate the expression (\ref{eq:I_n_integrand}) over $y$.
The terms which are linear in $y$ integrate to zero:
$$ \int_{-1/2}^{1/2} A_n y + \frac{2y}{n} \, dy = 0, $$
whereas the quadratic term integrates to give
$$ \int_{-1/2}^{1/2} - \frac{3A_n y^2}{2n} \, dy = -\frac{A_n}{8n}. $$
Using the definition (\ref{eq:R2}) of $R_2(y)$, the remaining terms in (\ref{eq:I_n_integrand}) can be regrouped to give
$$\int_{-1/2}^{1/2} \frac{A_n^2 y^3}{2n(1-2A_n y)} + R_2(y) \left( 1 + O\left(\frac{1}{n}\right)\right) \, dy
= \left( \int_{-1/2}^{1/2} R_2(y) \, dy \right) \left( 1 + O\left(\frac{1}{n}\right)\right). $$
Thus, by (\ref{eq:I_n_A_n}), $I_n$ is given by
\begin{equation} \label{eq:I_n_R}
I_n = -\frac{\sqrt{A_n}}{8n^{5/2}}
+\frac{1}{n^{3/2}\sqrt{A_n}} \left( \int_{-1/2}^{1/2} R_2(y) \, dy \right) \left( 1 + O\left(\frac{1}{n}\right)\right) +
O\left(\frac{1}{n^3}\right)
\end{equation}
as $n\rightarrow\infty$.
(In the final error term, we have used the fact that $(n^{3/2}\sqrt{A_n})^{-1}=O(1/n)$---cf. equation (\ref{eq:An_range}).)
We consider the behaviour of each term in $I_n$ as we sum over $n$.
We have already seen in the proof of proposition \ref{prop:Tprime_asymptotics}, equation (\ref{eq:Tprime_bound2}),
that the sum over the first term in (\ref{eq:I_n_R}) behaves like
\begin{equation*}
\sum_{n=v_1+1}^{v_k-1} \, \frac{\sqrt{A_n}}{n^{5/2}} = \sum_{n=v_1+1}^{v_k-1} \, \frac{1}{n^2\sqrt{(v_k+b)^2-n^2}}
= O \left(\frac{1}{v_k^2}\right)
\end{equation*}
as $v_k\to\infty$.
Thus this term does not contribute:
noting that the sum is over $O(v_k)$ terms, equation (\ref{eq:I_n_R}) gives us that
\begin{equation} \label{eq:I_n_R_II}
\sum_{n=v_1+1}^{v_k-1} I_n
= \sum_{n=v_1+1}^{v_k-1} \left[ \frac{1}{n^{3/2}\sqrt{A_n}}
\left( \int_{-1/2}^{1/2} R_2(y) \, dy \right)\right] \left( 1 + O\left(\frac{1}{v_k}\right)\right)
+ O \left(\frac{1}{v_k^2}\right),
\end{equation}
so that the only relevant contribution comes from the $R_2$ term.
\medskip
We bound the following integral over $y$:
$$ \frac{\sqrt{2}}{18\sqrt{3}} = \int_{-1/2}^{1/2} \frac{y^2}{(3/2)^{3/2}} \, dy
< \int_{-1/2}^{1/2} \frac{y^2}{(1 - 2\theta(y) A_n y)^{3/2}} \, dy
< \int_{-1/2}^{1/2} \frac{y^2}{(1/2)^{3/2}} \, dy = \frac{\sqrt{2}}{6}, $$
so that by the definition (\ref{eq:R2}) of $R_2$:
\begin{equation} \label{eq:R2_sum_bounds}
\frac{\sqrt{2}}{36\sqrt{3}} \, \left(\frac{A_n}{n}\right)^{3/2}
< \frac{1}{n^{3/2}\sqrt{A_n}} \left( \int_{-1/2}^{1/2} R_2(y) \, dy \right)
< \frac{\sqrt{2}}{12} \, \left(\frac{A_n}{n}\right)^{3/2}.
\end{equation}
Now we consider the sum
\begin{align*}
\sum_{n=v_1+1}^{v_k-1} \, \left(\frac{A_n}{n}\right)^{3/2}
&= \sum_{n=v_1+1}^{v_k-1} \, \left(\frac{1}{(v_k+b)^2-n^2}\right)^{3/2}.
\end{align*}
The summand is increasing in $n$, so we can bound the sum according to
\begin{align*}
\sum_{n=v_1+1}^{v_k-1} \, \left(\frac{A_n}{n}\right)^{3/2}
&\geq \int_{v_1}^{v_k-1} \, \left(\frac{1}{(v_k+b)^2-x^2}\right)^{3/2} \, dx \\
&= \frac{1}{(v_k+b)^2} \left[ \frac{x}{\sqrt{(v_k+b)^2-x^2}} \right]_{v_1}^{v_k-1} \\
&= \frac{1}{(v_k+b)^2} \left( \frac{v_k-1}{\sqrt{(v_k+b)^2-(v_k-1)^2}} - \frac{v_1}{\sqrt{(v_k+b)^2-v_1^2}}\right)\\
&= \frac{1}{(v_k+b)^2} \left( \frac{v_k-1}{\sqrt{2v_k(b+1)}} + O\left(1\right) \right)\\
&= \frac{1}{v_k^{3/2}} \, \frac{1}{\sqrt{2(b+1)}} + O\left(\frac{1}{v_k^2}\right).
\end{align*}
Combining this with (\ref{eq:R2_sum_bounds}), we have that
$$ \liminf_{v_k\to\infty}
\left[ \sum_{n=v_1+1}^{v_k-1} \, \frac{v_k^{3/2}}{n^{3/2}\sqrt{A_n}} \left( \int_{-1/2}^{1/2} R_2(y) \, dy \right) \right]
\geq \frac{1}{36\sqrt{3}} \, \frac{1}{\sqrt{b+1}}. $$
Similarly
\begin{align*}
\sum_{n=v_1+1}^{v_k-1} \, \left(\frac{A_n}{n}\right)^{3/2}
& \leq \int_{v_1+1}^{v_k-1} \, \left(\frac{1}{(v_k+b)^2-x^2}\right)^{3/2} \, dx +\left(\frac{A_{v_k-1}}{v_k-1}\right)^{3/2} \\
& = \frac{1}{v_k^{3/2}} \left( \frac{1}{\sqrt{2(b+1)}} + \frac{1}{(2(b+1))^{3/2}} \right) + O\left(\frac{1}{v_k^2}\right) \\
& = \frac{1}{v_k^{3/2}} \, \frac{1}{\sqrt{2(b+1)}} \, \frac{2b+3}{2b+2} + O\left(\frac{1}{v_k^2}\right),
\end{align*}
which combines with (\ref{eq:R2_sum_bounds}) to give
\begin{equation} \label{eq:limsup}
\limsup_{v_k\to\infty}
\left[ \sum_{n=v_1+1}^{v_k-1} \, \frac{v_k^{3/2}}{n^{3/2}\sqrt{A_n}} \left( \int_{-1/2}^{1/2} R_2(y) \, dy \right) \right]
\leq \frac{1}{12} \, \frac{1}{\sqrt{b+1}} \, \frac{2b+3}{2b+2}.
\end{equation}
Equation (\ref{eq:I_n_R_II}) gives us that the same limit inferior and limit superior apply to the sum over $v_k^{3/2} I_n$:
thus if the limit $\epsilon(b)$ exists, then it must satisfy (\ref{eq:epsilon_bounds_II}).
\medskip
It remains to show the convergence of the sum over the remainder term $R_2$.
This is not straightforward since both the bounds of the sum and the terms themselves vary as $v_k\to\infty$:
although all terms are positive and the number of terms increases with $v_k$, the size of each term also varies with $v_k$.
To get an explicit expression for $R_2$, we use the full Taylor's series representation \cite[Theorem 5.8]{Burkill}, whereby
\begin{equation} \label{eq:R2_II}
R_2(y) = -\sum_{j=2}^{\infty} \binom{1/2}{j} (-2A_n y)^j,
\end{equation}
and the binomial coefficients are defined as follows:
$$ \binom{1/2}{j} = \prod_{k=1}^j \frac{1/2-(j-k)}{k}. $$
Now the sum under consideration is given by
\begin{align*}
& \sum_{n=v_1+1}^{v_k-1} \left[ \frac{v_k^{3/2}}{n^{3/2}\sqrt{A_n}}
\left( \int_{-1/2}^{1/2} R_2(y) \, dy \right)\right] \\
&= \sum_{n=v_1+1}^{v_k-1} \left[ \frac{v_k^{3/2}}{n^{3/2}\sqrt{A_n}} \left( \int_{-1/2}^{1/2} \,
\sum_{j=2}^{\infty} \left[-\binom{1/2}{j} (-2A_n y)^j \right] dy \right) \right] \\
&= \sum_{n=v_1+1}^{v_k-1} \left[ \frac{v_k^{3/2}}{n^{3/2}\sqrt{A_n}} \, \sum_{j=1}^{\infty}
\left[ \frac{-1}{2j+1} \binom{1/2}{2j} A_n^{2j} \right] \right] \\
&= \sum_{n=v_1+1}^{v_k-1} \left[ \frac{v_k^{3/2}}{n^{3/2}} \, \sum_{j=1}^{\infty}
\left[ \frac{-1}{2j+1} \binom{1/2}{2j} A_n^{2j-1/2} \right] \right].
\end{align*}
Note that all terms are positive, so the series in $j$ converges absolutely.
Furthermore, the sum over $n$ is finite. Thus we may exchange the order of summation to obtain
\begin{equation} \label{eq:I_n_IV}
\sum_{j=1}^{\infty} \left[ \frac{-1}{2j+1} \binom{1/2}{2j} \sum_{n=v_1+1}^{v_k-1} \left[ \left(\frac{v_k}{n}\right)^{3/2}
A_n^{2j-1/2} \right]\right].
\end{equation}
To prove that this sum converges, we let
$$ S_j(v_k) = \sum_{n=v_1+1}^{v_k-1} \left(\frac{v_k}{n}\right)^{3/2} A_n^{2j-1/2} \hskip 40pt j\in\mathbb{N}, \; v_k\in\mathbb{N}, $$
and show first that $S_j(v_k)$ converges as $v_k\to\infty$ for all values of $j$.
We do this by showing that the sequence is Cauchy, i.e., that for all $\delta>0$ there exists $N\in\mathbb{N}$ such that
$$ v_k>N, \; l\in\mathbb{N} \quad \Rightarrow \quad |S_j(v_k+l)-S_j(v_k)|<\delta. $$
We begin by replacing the index $n$ by $m=v_k-n$, which gives
\begin{align*}
S_j(v_k) &= \sum_{n=v_1+1}^{v_k-1} \left(\frac{v_k}{n}\right)^{3/2} \left(\frac{n}{(v_k+b)^2-n^2}\right)^{2j-1/2} \\
&= \sum_{m=1}^{v_k-v_1-1} \, \left(\frac{v_k}{v_k-m}\right)^{3/2} \, \left(\frac{v_k-m}{(v_k+b)^2 - (v_k-m)^2}\right)^{2j-1/2} \\
&= \sum_{m=1}^{v_k-v_1-1} \, \left(1 + \frac{m}{v_k-m}\right)^{3/2} \, \left(\frac{v_k-m}{(b+m)(2v_k+b-m)}\right)^{2j-1/2}.
\end{align*}
Recall the definition $v_1=\fl{(v_k+b)/\sqrt{2}}$ of $v_1$. For $l\in\mathbb{N}$, we write
$$ v^{\prime}_1 = \Bfl{\frac{v_k+l+b}{\sqrt{2}}}. $$
Then the difference between terms in the sequence $S_j(v_k)$ behaves like
\begin{align*}
& | S_j(v_k+l) - S_j(v_k) | \\
& \hskip 20pt = \left| \sum_{m=1}^{v_k+l-v^{\prime}_1-1} \, \left(1 + \frac{m}{v_k+l-m}\right)^{3/2} \,
\left(\frac{v_k+l-m}{(b+m)(2v_k+2l+b-m)}\right)^{2j-1/2} \right. \\
& \hskip 90pt \left. - \sum_{m=1}^{v_k-v_1-1} \,
\left(1 + \frac{m}{v_k-m}\right)^{3/2} \, \left(\frac{v_k-m}{(b+m)(2v_k+b-m)}\right)^{2j-1/2} \right| \\
& \hskip 20pt = \left| \sum_{m=1}^{v_k-v_1-1} \, \left(1 + \frac{m}{v_k-m}\right)^{3/2} \, \left(\frac{v_k-m}{(b+m)(2v_k+b-m)}\right)^{2j-1/2} \right.\\
& \hskip 45pt \times \left[ \left(1 - \frac{m}{v_k}\left(\frac{l}{v_k+l-m}\right)\right)^{3/2} \, \left(\frac{1+l/(v_k-m)}{1+2l/(2v_k+b-m)}\right)^{2j-1/2} - 1 \right] \\
& \hskip 70pt \left. + \sum_{m=v_k-v_1}^{v_k+l-v^{\prime}_1-1} \,
\left(1 + \frac{m}{v_k+l-m}\right)^{3/2} \, \left(\frac{v_k+l-m}{(b+m)(2v_k+2l+b-m)}\right)^{2j-1/2} \right| \\
& \hskip 20pt = S_j(v_k) \, O\left(\frac{1}{v_k}\right) + O\left(\frac{1}{v_k^{2j-1/2}}\right)
\end{align*}
as $v_k\rightarrow\infty$.
We know that $S_j(v_k)$ is bounded, since by (\ref{eq:limsup}) and the above exchange of summation:
$$ \limsup_{v_k\to\infty}
\left[ \sum_{j=1}^{\infty} \left[ \frac{-1}{2j+1} \binom{1/2}{2j} \, S_j(v_k) \right] \right]
\leq \frac{1}{12} \, \frac{1}{\sqrt{b+1}} \, \frac{2b+3}{2b+2}.$$
Thus the distance $| S_j(v_k+l) - S_j(v_k) |$ can be made arbitrarily small for sufficiently large $v_k$,
and the sequence $S_j(v_k)$ is Cauchy.
Furthermore, when we substitute this bound into (\ref{eq:I_n_IV}), we have
\begin{align*}
& v_k^{3/2} \, \left| \, \sum_{n=v^{\prime}_1+1}^{v_k+l-1} \, I_n(v_k+l,b) - \sum_{n=v_1+1}^{v_k-1} \, I_n(v_k,b) \, \right| \\
& = \sum_{j=2}^{\infty} \left[ \frac{-1}{2j+1} \binom{1/2}{2j} | S_j(v_k+l) - S_j(v_k) |\right] + O\left(\frac{1}{\sqrt{v_k}}\right) \\
& = \sum_{j=2}^{\infty} \left[ \frac{-1}{2j+1} \binom{1/2}{2j} \left( S_j(v_k) \, O\left(\frac{1}{v_k}\right) + O\left(\frac{1}{v_k^{2j-1/2}}\right) \right) \right] + O\left(\frac{1}{\sqrt{v_k}}\right) \\
& = v_k^{3/2} \, \sum_{n=v_1+1}^{v_k-1} \, I_n(v_k,b) \, O\left(\frac{1}{v_k}\right)
+ \sum_{j=2}^{\infty} \left[ \frac{-1}{2j+1} \binom{1/2}{2j} O\left(\frac{1}{v_k^{2j-1/2}}\right) \right] + O\left(\frac{1}{\sqrt{v_k}}\right) \\
& = v_k^{3/2} \, \sum_{n=v_1+1}^{v_k-1} \, I_n(v_k,b) \, O\left(\frac{1}{v_k}\right)
+ O\left(\frac{1}{\sqrt{v_k}}\right) \rightarrow 0
\end{align*}
as $v_k\rightarrow\infty$. Again we know that the sum over $I_n$ is bounded, and the convergence of the limit $\epsilon(b)$ follows.
\end{proof}
Note that the bound can be chosen to be uniform in $b$.
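As an illustrative aside, the finite sums appearing in (\ref{eq:epsilon(b)_II}) are easily evaluated numerically. The following Python sketch (assuming the SciPy library is available for the quadrature; the values of $v_k$ and $b$ are arbitrary choices, not taken from the text) computes $v_k^{3/2}\sum_n I_n(v_k,b)$ and prints it alongside the bounds (\ref{eq:epsilon_bounds_II}):
\begin{verbatim}
# Numerical check of the finite sums defining epsilon(b): we evaluate
# v_k^{3/2} * sum_n I_n(v_k, b) and compare it with the analytic bounds.
import math
from scipy.integrate import quad

def partial_sum(vk, b):
    c = vk + b
    v1 = math.floor(c / math.sqrt(2.0))
    total = 0.0
    for n in range(v1 + 1, vk):            # n = v1+1, ..., vk-1
        integral, _ = quad(lambda x: math.sqrt(c*c - x*x) / (x*x),
                           n - 0.5, n + 0.5)
        total += math.sqrt(c*c - n*n) / (n*n) - integral
    return vk ** 1.5 * total

b = 0.0
lower = 1.0 / (36.0 * math.sqrt(3.0 * (b + 1.0)))
upper = (2*b + 3) / ((2*b + 2) * 12.0 * math.sqrt(b + 1.0))
for vk in (500, 1000, 2000):
    print(vk, lower, partial_sum(vk, b), upper)
\end{verbatim}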
\chapter*{Concluding remarks}
In this thesis we investigated the dynamics of the discretised rotation (\ref{def:F})
in a new parameter regime: the limit $\lambda\to 0$.
A natural embedding of the lattice $\mathbb{Z}^2$ into the plane
transformed the discretised rotation into a perturbation $F_{\lambda}$ of an integrable,
piecewise-affine Hamiltonian system,
which was found to be nonlinear.
Thus we were led to consider $F_{\lambda}$ as a discrete near-integrable system.
In this setting, the perturbation mechanism was no longer that of round-off,
but of linked strip maps:
in each of the polygonal annuli defined by the polygon classes,
indexed by the sums of squares $e\in\mathscr{E}$,
the dynamics of $F_{\lambda}$ are similar to those of a polygonal outer billiard.
This structure introduced a non-Archimedean character to the behaviour of $F_{\lambda}$.
We defined a symbolic coding associated with the strip map,
built out of a sequence of congruences modulo two-dimensional lattices,
which, for sufficiently small $\lambda$, induces a lattice structure on the return map $\Phi$.
This lattice structure removes $\lambda$ from its role as the perturbation parameter.
Instead, a change of coordinates allowed us to consider $\Phi$
as a sequence of discretised twist maps on the cylinder:
one for each polygon class.
In this setting, the limit of vanishing discretisation,
and hence of vanishing perturbation, corresponds to the limit $e\to\infty$.
The twist $K(e)$ also varies between polygon classes.
In the case where the twist vanishes in the limit, i.e., $K(e)\to 0$, we found discrete resonances,
whose behaviour depends on the local rotation number.
By contrast, for the sequence of perfect squares, where $K(e)\to 4$, we found that the limiting period statistics
coincide with those of a random reversible map on a discrete phase space.
Finally, we discuss open questions and avenues for further investigation.
\medskip
In the introduction to this work, we outlined the difficulty in reproducing
the features of Hamiltonian perturbation theory in a discrete phase space.
At the outset, figure \ref{fig:PolygonalOrbits} suggested that such features
could be found for the map $F$ in the limit $\lambda\to 0$,
when considered relative to the correct `unperturbed' dynamics.
This proposal was later reinforced by phase plots of the return map,
such as figures \ref{fig:resonance}(b) and \ref{fig:PrimaryResonances}.
We identified the minimal orbits, which close after just one revolution around the origin,
as the analogue of KAM curves: the minimal orbits are the simplest orbits,
which retain the natural recurrence time of the underlying dynamics (rather than some larger multiple thereof),
and are confined to convex invariant polygons, each of which is a small perturbation of an invariant curve of the integrable system.
However, like all orbits of $F$ encountered in this study, the minimal orbits are periodic,
and do not disconnect the space like their quasi-periodic counterparts on the continuum.
The apparent island chains we observe are more complex.
Although orbits cluster in the $\theta$-direction according to the local rotation number,
preliminary numerical experiments suggest that the organisation within each island
does not conform to the phenomenology of smooth Hamiltonian perturbation theory.
In particular, islands are not necessarily invariant:
orbits can wander between one island and the next---see figure \ref{fig:ResonancesCloseUp}.
\begin{figure}[h]
\centering
\includegraphics[scale=0.45]{Graphics/PlotPhi_e=160234_0} \\
\caption{\hl{A close-up of a primary resonance for $e=160234\approx400.3^2$ and $\lambda\approx 3.5\times 10^{-9}$.
The plot shows part of a large number of symmetric orbits of $\Phi$ in the cylindrical coordinates $(\theta,\rho)\in\mathbb{S}^1 \times \mathbb{R}$.
There are approximately $560$ lattice sites per unit length in each of the coordinate directions.
We see that there are parts of the cylinder which are filled with symmetric orbits,
and others which are devoid of them, but no sharp boundary between the two. } }
\label{fig:ResonancesCloseUp}
\end{figure}
To explore this new phenomenon, further extensive numerical investigation is required,
and a simplification of the model may prove necessary.
The character of the perturbation which distinguishes the return map $\Phi$
from the unperturbed dynamics is still unclear: the perturbation could be probabilistic in nature,
and hence best modelled by a random reversible perturbation;
could have a complicated but deterministic structure; or could be a mixture of the two.
One of the few similar systems found in the literature is
a toy model of a discrete twist map, investigated numerically in \cite{ZhangVivaldi}.
In that case, a suitably chosen one-dimensional surface of section revealed
an interval exchange transformation on an infinite sequence of intervals.
We do not expect our dynamics to be so simple.
However, a first step in studying the perturbation could be to define a
suitable (quasi-one-dimensional) surface of section for the $\Phi$ dynamics.
\chapter*{Acknowledgements}
~ \\[1cm]
\begin{center}
\textbf{\Large Acknowledgements}\\[1cm]
\end{center}
Thank-you first and foremost to Franco Vivaldi for supervising this project---I hope you've enjoyed it as much as I have.
Before I arrived at Queen Mary, I was advised that ``if you don't like Franco, then you don't like life'',
and fortunately nothing in these last three years has caused me to question my will to live.
Thank-you also to Oliver Jenkinson for his supervision during my first year at Queen Mary,
on an entirely different but equally interesting project---it's a shame I didn't find any exciting dominant measures!
Thanks to Shaun Bullett and Christian Beck, my second supervisors,
who oversaw the bureaucracy of my progression with interest and encouragement.
Thanks to everyone at Queen Mary's School of Mathematical Sciences for giving me such an enjoyable place to work,
albeit one which was regularly without water or heating; and for buying my cakes.
Thank-you to John Roberts for hosting me in Sydney in late 2012: an incredible opportunity,
despite the cockroaches.
Thank-you to Jeroen Lamb and Stefano Luzzatto, whose lectures at Imperial College inspired me to get into the
field of dynamical systems and ergodic theory.
Thanks in particular to Jeroen for all his help and advice whilst applying for a PhD studentship.
Thank-you to all the other students I've met along the way, both at Queen Mary and elsewhere.
Thanks to the come-dine-with-me crew for their fabulous cooking.
In particular, thanks to Julia Slipantschuk for her companionship and her tolerance of my taste in film.
Thanks to Georg Ostrovski for my first citation: the cheque's in the post.
Thank-you to the EPSRC for funding my studies, and to the Eileen Eliza Colyer Prize
and the Queen Mary Postgraduate Research Fund for their contributions to my jet-setting lifestyle.
Finally, thank-you to the bank of mum and dad for putting a roof over my head for the duration of my studies.
Fortunately for you I didn't stay at Imperial College, otherwise you would have had to buy a house in Kensington.
Unfortunately for you, this is just the beginning: now I'm heading out into the world unemployed and overqualified.
\chapter{The integrable limit}\label{chap:IntegrableLimit}
In this chapter we introduce the behaviour of the discretised rotation $F$ in the limit $\lambda\to 0$,
which we call the \defn{integrable limit}.
For ease of exposition, we assume that $\lambda>0$.
We embed the phase space $\mathbb{Z}^2$ into the plane,
to obtain a rescaled lattice map $F_{\lambda}$,
and show that the limiting dynamics are described by an integrable, piecewise-affine Hamiltonian system.
This Hamiltonian system is accompanied by a natural
partition of the plane into a countable sequence of polygonal annuli,
which classify the integrable orbits according to a certain symbolic coding.
Then we define a return map of $F_{\lambda}$, whose domain is a thin strip $X$ aligned along the symmetry axis.
We show that the orbits of $F_{\lambda}$, up to the time of first return to $X$,
shadow the orbits of the integrable Hamiltonian system.
Furthermore, we show that the integrable dynamics are nonlinear:
the return map corresponding to the Hamiltonian flow satisfies a twist condition.
Finally, we briefly describe the dynamics of $F$ in the limits $\lambda\to\pm 1$:
the other parameter regimes in which the map at the limit is an exact rotation.
Preliminary observations suggest that we can expect similar behaviour to the $\lambda\to 0$ case.
Much of the work in this chapter has been published in \cite{ReeveBlackVivaldi}.
\section{The rescaled lattice map} \label{sec:Rescaling}
We make some elementary observations about the behaviour
of orbits of the discretised rotation $F$ in the limit $\lambda\to 0$.
Recall that when $\lambda$ is small, the map $F$ is the discretisation of a
rotation whose angle is close to $\pi/2$.
We find that no orbits are periodic with period four
(excluding the origin---a fixed point of the dynamics which we ignore in this discussion).
\hl{Instead the orbits of minimal period are the fixed points of a secondary recurrence:
in particular, the fourth iterates of $F$ induce a perturbed rotation (on polygons, rather than circles),
whose angle approaches zero as $\lambda\to 0$}.
Thus all orbits of $F$ visit all four quadrants of the plane,
and for the rest of this section we restrict our attention to those orbits which begin in the first quadrant.
In the following proposition, we describe the orbits closest to the origin.
We see that the orbit of any point, for sufficiently small $\lambda$,
is symmetric and coincides with a level set of $|x|+|y|$ everywhere except in the third quadrant,
where the orbit slips by one lattice point (see figure \ref{fig:squares}).
Each orbit \hl{is a fixed point of the secondary recurrence,
and preserves a convex polygon---an approximate square.
We refer to the recurrence time of the orbits,
during which these invariant polygons are populated,
as one} \defn{revolution} \hl{about the origin}.
\begin{figure}[t]
\centering
\includegraphics[scale=0.35]{Graphics/squares}
\caption{Periodic orbits of $F$ in the region $|x|+|y|\leq 1/\lambda$.
\hl{Each orbit lies on a convex polygon which is close to a square.}}
\label{fig:squares}
\end{figure}
\begin{proposition}
Let $\lambda>0$. For all $z=(x,y)\in\mathbb{Z}^2$ with $x,y\geq0$ satisfying
\begin{equation} \label{eq:lambda_bound}
x+y < \frac{1}{\lambda},
\end{equation}
the orbit of $z$ under $F$ is symmetric and periodic with minimal period
$$ 4(x+y)+1. $$
\label{prop:square_orbits}
\end{proposition}
\begin{proof}
We begin by considering the fourth iterates of $F$.
For $\lambda>0$ and $(x,y)$ satisfying $0\leq x \leq 1/\lambda-1$, $1\leq y \leq 1/\lambda$, we have
\begin{align}
F(x,y) &= (\fl{\lambda x} -y,x) = (-y,x), \nonumber \\
F^2(x,y) &= (\fl{-\lambda y}-x,-y) = (-(x+1),-y), \nonumber \\
F^3(x,y) &= (\fl{-\lambda(x+1)}+y,-(x+1)) = (y-1,-(x+1)), \nonumber \\
F^4(x,y) &= (\fl{\lambda(y-1)}+x+1,y-1) = (x+1,y-1). \label{eq:F4}
\end{align}
Thus every fourth iterate of $F$ translates such points by the vector $(1,-1)$.
Now let $z=(x,y)$ with $x,y\geq0$ satisfying (\ref{eq:lambda_bound}).
(We assume that $z\neq(0,0)$.)
We show that the orbit of $z$ intersects both of the fixed sets $\Fix{G}$ and $\Fix{H}$,
and thus is symmetric and periodic by theorem \ref{thm:SymmetricOrbits} part (iii)
(see page \pageref{thm:SymmetricOrbits}).
Recall that the line $\Fix{G}$ is the set of points with $x=y$, whereas the set $\Fix{H}$ includes the line segment
$$ \{ (0,y) \, : \; \fl{\lambda y} =0 \} \subset \Fix{H} $$
(see equations (\ref{eq:FixG}) and (\ref{eq:FixH})).
Using (\ref{eq:F4}), we have that the orbit of $z$ intersects $\Fix{H}$ at the point
$$ F^{-4x}(z) = z - x(1,-1) = (0, x+y)\in\Fix{H}. $$
To show that the orbit intersects $\Fix{G}$, there are two cases to consider.
If the difference between $x$ and $y$ is even, i.e., if
$$ x-y = 2m \hskip 40pt m\in\mathbb{Z}, $$
then we have
$$ F^{-4m}(z) = z - m(1,-1) = (x-m, y+m)\in\Fix{G}. $$
By theorem \ref{thm:SymmetricOrbits} part (iii), the orbit of $z$ is symmetric and periodic.
Furthermore, we have
$$ F^{-4m}(z) \in \Fix{G} \cap F^{4(x-m)}(\Fix{H}), $$
so that the period of $z$ is given by
$$ 2(4x-4m)+1 = 2(4x-2(x-y))+1 = 4(x+y)+1 $$
as required.
If the difference between $x$ and $y$ is odd, so that
$$ x-y = 2m-1 \hskip 40pt m\in\mathbb{Z}, $$
then applying $F^{-4m}$ gives
$$ F^{-4m}(z) = z - m(1,-1) = (x-m, y+m)=(x-m,x-m+1). $$
If we now apply $F^2$, to move the orbit into the third quadrant, then we have
\begin{align*}
F^{-4m+1}(z) &= (-(x-m+1),x-m), \\
F^{-4m+2}(z) &= (-(x-m+1), -(x-m+1)) \in\Fix{G}.
\end{align*}
Again the orbit of $z$ is symmetric and periodic, and since
$$ F^{-4m+2}(z) \in \Fix{G} \cap F^{4(x-m)+2}(\Fix{H}), $$
the period of $z$ is given by
$$ 2(4x-4m+2)+1 = 2(4x-2(x-y))+1 = 4(x+y)+1. $$
This completes the proof.
\end{proof}
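The period formula of proposition \ref{prop:square_orbits} is easy to check by direct iteration. The following Python sketch is an illustration only (the choice $\lambda=1/q$ and the sample points are arbitrary); taking $\lambda$ to be the reciprocal of an integer makes the floor in $F$ an exact integer division:
\begin{verbatim}
# Direct numerical check of the period formula 4(x+y)+1 for points
# with x + y < 1/lambda, where lambda = 1/q for an integer q.
def F(z, q):                      # F(x,y) = (floor(x/q) - y, x)
    x, y = z
    return (x // q - y, x)

def period(z0, q, max_iter=10**6):
    z, t = F(z0, q), 1
    while z != z0 and t < max_iter:
        z, t = F(z, q), t + 1
    return t

q = 100                           # lambda = 0.01
for z in [(0, 1), (3, 5), (10, 20), (40, 59)]:
    assert sum(z) < q             # the hypothesis x + y < 1/lambda
    assert period(z, q) == 4 * sum(z) + 1
print("all period checks passed")
\end{verbatim}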
However, this is just the beginning of the story, since for all $\lambda>0$ there are
points in $\mathbb{Z}^2$ which do not satisfy (\ref{eq:lambda_bound}).
Further from the origin, we find that not all orbits are symmetric;
that orbits trace a sequence of different polygonal shapes;
and \hl{that orbits may make more than one revolution about the origin}
(period multiplication---see figure \ref{fig:PeriodFunction}).
If we restrict our attention to the symmetric orbits,
in particular the orbits which intersect the positive half of the symmetry line $\Fix{G}$,
we have the following description of a collection of orbits which,
like those in proposition \ref{prop:square_orbits},
\hl{make just one revolution about the origin}.
We refer to such orbits as \defn{minimal orbits}.
We defer the proof of proposition \ref{prop:octagon_orbits} to appendix \ref{chap:Appendix}.
\begin{proposition} \label{prop:octagon_orbits}
For all $\lambda>0$ and $x\in\mathbb{N}$ in the range
$$ \frac{1}{2\lambda}+2 \leq x\leq \frac{1}{\lambda}-1, $$
the orbit of $z=(x,x)$ under $F$ is symmetric and minimal if and only if
$$ 2x + \Bceil{\frac{1}{\lambda}} - 2\Bfl{\frac{1}{\lambda}} \equiv 2 \mod{3}. $$
\end{proposition}
The novel element in this proposition is the appearance of congruences---a
feature which will be developed further in chapter \ref{chap:PerturbedDynamics}.
\medskip
Both propositions \ref{prop:square_orbits} and \ref{prop:octagon_orbits} suggest
that the analysis of the limit $\lambda\to 0$ requires some scaling.
For $\lambda>0$\footnote{The rescaled lattice map with $\lambda<0$ is related to the $\lambda>0$ case
via $F_{-\lambda}=R_x \circ F_{\lambda} \circ R_y$, where $R_x$ and $R_y$ are reflections
in the $x$ and $y$ axes, respectively.}, we normalise the natural length scale $1/\lambda$ by introducing
the \defn{scaled lattice map} $F_{\lambda}$, which is conjugate to $F$,
and acts on points $z=\lambda(x,y)$ of the scaled lattice $(\lambda\Z)^2$:
\begin{equation}\label{def:F_lambda}
F_{\lambda}: (\lambda\Z)^2 \to (\lambda\Z)^2
\hskip 40pt
F_{\lambda}(z)=\lambda F(z/\lambda)
\hskip 40pt
\lambda>0.
\end{equation}
The discretisation length of $F_\lambda$ is $\lambda$.
Then we define the \defn{discrete vector field}, which measures the
deviation of $F_{\lambda}^4$ from the identity:
\begin{equation}\label{def:v}
\mathbf{v}: \; (\lambda\Z)^2 \to (\lambda\Z)^2
\hskip 40pt
\mathbf{v}(z) = F_{\lambda}^4(z)-z.
\end{equation}
To capture the main features of $\mathbf{v}$ on the scaled lattice,
we introduce an \defn{auxiliary vector field} $\mathbf{w}$ on the plane,
given by
\begin{equation}\label{def:w}
\mathbf{w}: \; \mathbb{R}^2 \to \mathbb{Z}^2
\hskip 40pt
\mathbf{w}(x,y)=(2\fl{ y }+1,-(2\fl{ x }+1)).
\end{equation}
The field $\mathbf{w}$ is constant on every translated unit square
(called a \defn{box})
\begin{equation}\label{def:B_mn}
B_{m,n} = \{ (x,y)\in\mathbb{R}^2 \, : \; \fl{x} =m, \; \fl{y} = n \}, \quad m,n\in\mathbb{Z}
\end{equation}
and we denote the value of $\mathbf{w}$ on $B_{m,n}$ as
\begin{equation} \label{def:w_mn}
\mathbf{w}_{m,n}=(2n+1,-(2m+1)).
\end{equation}
The following proposition, whose proof we postpone until section \ref{sec:Recurrence} (page \pageref{proof:mu_1}),
states that if we ignore a set of points of zero density,
then the \hl{vector fields} $\mathbf{v}$ and $\mathbf{w}$ are parallel.
\begin{proposition} \label{prop:mu_1}
For $r>0$, we define the set
\begin{equation}\label{eq:A}
A(r,\lambda) = \{ z\in(\lambda\Z)^2 \, : \; \| z \|_{\infty} < r \},
\end{equation}
(with $\| (u,v) \|_{\infty} = \max (|u|,|v|)$),
and the ratio
\begin{displaymath}
\mu_1(r,\lambda) = \frac{ \# \{z\in A(r,\lambda) \, : \; \mathbf{v}(z) = \lambda\mathbf{w}(z) \} }{\# A(r,\lambda)} .
\end{displaymath}
Then we have
\begin{displaymath}
\lim_{\lambda\to 0} \mu_1(r,\lambda) = 1 .
\end{displaymath}
\end{proposition}
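The ratio $\mu_1$ of proposition \ref{prop:mu_1} can be estimated by brute force. The following Python sketch is illustrative only (the window $r$ and the values $\lambda=1/q$ are arbitrary); it counts the lattice points of $A(r,\lambda)$ at which $\mathbf{v}$ and $\lambda\mathbf{w}$ agree:
\begin{verbatim}
# Estimate mu_1(r, lambda) for lambda = 1/q: the proportion of lattice
# points z in A(r, lambda) with v(z) = lambda * w(z).  Working with
# lambda = 1/q keeps every floor an exact integer division.
def F(z, q):                       # F(x,y) = (floor(x/q) - y, x) on Z^2
    x, y = z
    return (x // q - y, x)

def v_equals_lw(z, q):             # test v = lambda*w at the point z/q
    x, y = z
    w = z
    for _ in range(4):             # w = F^4(z), so v/lambda = w - z
        w = F(w, q)
    return (w[0] - x, w[1] - y) == (2*(y // q) + 1, -(2*(x // q) + 1))

def mu1(r, q):
    n = int(r * q)
    pts = [(x, y) for x in range(-n, n + 1) for y in range(-n, n + 1)
           if max(abs(x), abs(y)) < r * q]
    return sum(v_equals_lw(z, q) for z in pts) / len(pts)

for q in (10, 50, 200):            # lambda = 0.1, 0.02, 0.005
    print(1.0 / q, mu1(1.5, q))
\end{verbatim}
The printed ratios should approach $1$ as $\lambda$ decreases, in accordance with the proposition.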
The integrable limit of the system \eqref{def:F} is the asymptotic regime
that results from replacing $\mathbf{v}$ by $\lambda\mathbf{w}$.
The points where the two vector fields differ have
the property that $\lambda x$ or $\lambda y$ is close to an integer.
The perturbation of the integrable orbits will take place in these small domains.
\section{The integrable Hamiltonian}\label{sec:Hamiltonian}
We define the real function
\begin{equation}\label{def:P}
P:\mathbb{R}\to \mathbb{R} \hskip 40pt P(x)=\fl{x}^2+(2\fl{x}+1)\{x\},
\end{equation}
where $\{x\}$ denotes the fractional part of $x$.
The function $P$ is piecewise-affine, and coincides with the function
$x\mapsto x^2$ on the integers, thus:
\begin{equation}\label{eq:SqrtP}
P(\fl{ x }) = \fl{ x }^2 \hskip 40pt \fl{ \sqrt{P(x)} } = \fl{ x }.
\end{equation}
Using the second statement in (\ref{eq:SqrtP}), we can invert $P$ up to sign by defining
\begin{equation}\label{def:Pinv}
P^{-1}: \mathbb{R}_{\geq 0} \to \mathbb{R}_{\geq 0}
\hskip 40pt
x \mapsto \frac{x + \fl{ \sqrt{x} } (1+\fl{ \sqrt{x}
})}{2\fl{ \sqrt{x} }+1} ,
\end{equation}
so that $(P^{-1}\circ P)(x) = |x|$.
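For concreteness, the following short Python sketch (an aside, with arbitrary sample points; the tolerance merely absorbs floating-point error) implements $P$ and $P^{-1}$ and spot-checks this identity:
\begin{verbatim}
# The piecewise-affine function P and its inverse-up-to-sign P^{-1},
# with a spot check of the identity (P^{-1} o P)(x) = |x|.
import math

def P(x):
    m = math.floor(x)
    return m*m + (2*m + 1)*(x - m)

def Pinv(x):                       # defined for x >= 0
    m = math.floor(math.sqrt(x))
    return (x + m*(1 + m)) / (2*m + 1)

for x in (0.3, 1.0, 2.15, -3.7, 5.5):
    assert abs(Pinv(P(x)) - abs(x)) < 1e-12
print("P inversion checks passed")
\end{verbatim}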
We define the following Hamiltonian
\begin{equation}\label{eq:Hamiltonian}
\mathscr{P}: \; \mathbb{R}^2 \; \to \mathbb{R}
\hskip 40pt
\mathscr{P}(x,y) = P(x)+P(y).
\end{equation}
The function $\mathscr{P}$ is continuous and piecewise-affine.
It is differentiable in $\mathbb{R}^2\setminus \Delta$, where $\Delta$ is
the set of orthogonal lines given by
\begin{equation}\label{eq:Delta}
\Delta=\{(x,y)\in\mathbb{R}^2 \, : \; (x-\fl{ x})(y-\fl{ y})=0\}.
\end{equation}
The set $\Delta$ is the boundary of the boxes $B_{m,n}$, defined in (\ref{def:B_mn}).
The associated Hamiltonian vector field, defined for all points
$(x,y)\in\mathbb{R}^2\setminus \Delta$, is equal to the vector field
$\mathbf{w}$ given in (\ref{def:w}):
\begin{equation}\label{eq:HamiltonianVectorField}
\left( \frac{ \partial\mathscr{P}(x,y)}{\partial y},
-\frac{\partial\mathscr{P}(x,y)}{\partial x} \right) = \mathbf{w}(x,y)
\hskip 40pt
(x,y)\in \mathbb{R}^2\setminus \Delta.
\end{equation}
We say that the function
$$ \gamma : \mathbb{R} \to \mathbb{R}^2 $$
is a \defn{flow curve} of the Hamiltonian $\mathscr{P}$ if it satisfies
$$ \frac{d\gamma(t)}{dt} = \mathbf{w}(\gamma(t)) \hskip 40pt t\in\mathbb{R}. $$
Then the flow $\varphi$ associated with $\mathscr{P}$ is the family of \defn{time-advance maps} $\varphi^t$
satisfying\footnote{We claim without proof that $\varphi$ is well-defined everywhere except the origin.}
\begin{equation*}
\varphi^t: \mathbb{R}^2 \to \mathbb{R}^2 \hskip 40pt \varphi^t(\gamma(s)) = \gamma(s+t)
\end{equation*}
for any flow curve $\gamma$ and all $s,t\in\mathbb{R}$.
Proposition \ref{prop:mu_1} states that the vector fields $\mathbf{v}$ and $\lambda\mathbf{w}$ agree almost everywhere in the limit $\lambda\to 0$.
In turn, the piecewise-constant form of $\mathbf{w}$ ensures that $\varphi^{\lambda}$ is equal to $F_{\lambda}^4$ almost everywhere.
\begin{corollary} \label{corollary:mu_1}
Let $r>0$ and $A(r,\lambda)$ be as in equation (\ref{eq:A}).
Then
\begin{displaymath}
\lim_{\lambda\to 0} \left( \frac{ \# \{z\in A(r,\lambda) \, : \; \varphi^{\lambda}(z) = F_{\lambda}^4(z) \} }{\# A(r,\lambda)} \right) = 1.
\end{displaymath}
\end{corollary}
\medskip
For a point $z\in\mathbb{R}^2$, we write $\Pi(z)$ for the orbit of $z$ under the flow $\varphi$, i.e., the level set of
$\mathscr{P}$ passing through $z$:
\begin{equation} \label{def:Pi(z)}
\Pi(z) = \{ w\in\mathbb{R}^2 \, : \; \mathscr{P}(w) = \mathscr{P}(z) \}.
\end{equation}
Below (theorem \ref{thm:Polygons}) we shall see that these sets are polygons,
whose vertices belong to $\Delta$.
The \defn{value} of a polygon $\Pi(z)$ is the real number $\mathscr{P}(z)$, and if
$\Pi(z)$ contains a lattice point, then we speak of a \defn{critical polygon}.
The critical polygons act as separatrices, and form a distinguished subset of the plane:
\begin{equation*}
\Gamma=\bigcup_{z\in\mathbb{Z}^2}\Pi(z).
\end{equation*}
All topological information concerning the Hamiltonian $\mathscr{P}$ is
encoded in the partition of the plane generated by $\Gamma\cup\Delta$.
To characterise $\mathscr{P}$ arithmetically, we consider the Hamiltonian
\begin{equation*}
\mathscr{Q}(x,y) = x^2 + y^2,
\end{equation*}
which is equal to the member $\mathscr{Q}_{0}$ of the family of functions (\ref{def:Q_lambda}),
and represents the unperturbed rotations (no round-off) in the limit $\lambda\to 0$.
Its level sets are circles, and the circles containing lattice points will be
called \defn{critical circles}.
By construction, the functions $\mathscr{P}$ and $\mathscr{Q}$ coincide over $\mathbb{Z}^2$, and
hence the value of every critical polygon belongs to $\mathscr{Q}(\mathbb{Z}^2)$, the set
of non-negative integers which are representable as the sum of two squares.
We denote this set by $\mathscr{E}$.
A classical result, due to Fermat and Euler, states that
a natural number $n$ is a sum of two squares if and only if
any prime congruent to 3 modulo 4 which divides $n$ occurs
with an even exponent in the prime factorisation of $n$
\cite[theorem 366]{HardyWright}.
We refer to $\mathscr{E}$ as the set of \defn{critical numbers},
and use the notation
\label{def:cE}
\begin{displaymath}
\mathscr{E} = \{e_i \, : \; i\geq 0\} = \{0,1,2,4,5,8,9,10,13,16,17,\dots \} .
\end{displaymath}
There is an associated family of \defn{critical intervals}, given by
\begin{equation}\label{def:Ie}
\mathscr{I}^{e_i} = (e_i,e_{i+1}).
\end{equation}
Let us define
$$
\mathscr{E}(x)=\#\{e\in\mathscr{E} \, : \; e\leq x\}.
$$
The following result, due to Landau and Ramanujan, gives the asymptotic
behaviour of $\mathscr{E}(x)$ (see, e.g., \cite{MoreeKazaran})
\begin{equation}\label{eq:LandauRamanujan}
\lim_{x\to\infty}\frac{\sqrt{\ln x}}{x}\,\mathscr{E}(x)=K,
\end{equation}
where $K$ is the Landau-Ramanujan constant
$$
K=\frac{1}{\sqrt{2}}\prod_{p\,\,\mathrm{prime}\atop {p\equiv 3\;\mathrm{mod}\; 4}}
\left(1-\frac{1}{p^2}\right)^{-1/2}\,=\,0.764\ldots .
$$
Furthermore, let $r(n)$ be the number of representations of the integer $n$
as a sum of two squares. To compute $r(n)$, we first factor $n$ as follows
$$
n=2^a\prod_i p_i^{b_i}\prod_j q_j^{c_j},
$$
where the $p_i$ and $q_j$ are primes congruent to 1 and 3 modulo 4, respectively.
(Each product is equal to 1 if there are no prime divisors of the corresponding type.)
Then we have \cite[theorem 278]{HardyWright}
\begin{equation}\label{eq:r}
r(n)=4\prod_i(b_i+1)\prod_j\left(\frac{1+(-1)^{c_j}}{2}\right).
\end{equation}
Note that this product is zero whenever $n$ is not a critical number,
i.e., $r(n)=0$ if $n\notin\mathscr{E}$.
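The formula (\ref{eq:r}) is straightforward to implement. The following Python sketch is an illustration only: it computes $r(n)$ by trial division and compares the result against a brute-force count of lattice points on the circle of radius $\sqrt{n}$:
\begin{verbatim}
# r(n): representations of n as a sum of two squares, via the prime
# factorisation formula, checked against direct lattice-point counting.
import math

def r_formula(n):
    if n == 0:
        return 1                  # only (0,0); excluded from the checks
    out, d, m = 4, 2, n
    while d * d <= m:
        if m % d == 0:
            e = 0
            while m % d == 0:
                m //= d
                e += 1
            if d % 4 == 1:        # primes p = 1 mod 4: factor (e + 1)
                out *= e + 1
            elif d % 4 == 3:      # primes q = 3 mod 4: odd power kills r
                if e % 2 == 1:
                    return 0
        d += 1
    if m > 1:                     # remaining prime factor, exponent 1
        if m % 4 == 1:
            out *= 2
        elif m % 4 == 3:
            return 0
    return out

def r_count(n):
    k = math.isqrt(n)
    return sum(1 for x in range(-k, k + 1) for y in range(-k, k + 1)
               if x*x + y*y == n)

for n in range(1, 200):
    assert r_formula(n) == r_count(n)
print("r(n) checks passed up to 200")
\end{verbatim}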
We now have the following characterisation of the invariant curves of the Hamiltonian $\mathscr{P}$.
\begin{theorem}\label{thm:Polygons}
The level sets $\Pi(z)$ of $\mathscr{P}$ are convex polygons, invariant under
the dihedral group $D_4$, generated by the two orientation-reversing involutions
\begin{equation}\label{eq:Dihedral}
G:\quad (x,y) \mapsto (y,x)
\hskip 40pt
G':\quad (x,y) \mapsto (x,-y).
\end{equation}
The polygon $\Pi(z)$ is critical if and only if
$
\mathscr{P}(z)\in\mathscr{E}.
$
The number of sides of $\Pi(z)$ is equal to
\begin{equation}\label{eq:NumberOfSides}
4(2\Bfl{\sqrt{\mathscr{P}(z)}}+1)-r(\mathscr{P}(z)),
\end{equation}
where the function $r$ is given in (\ref{eq:r}).
For every $e\in\mathscr{E}$, the critical polygon with value $e$ intersects
one and only one critical circle, namely that with the same value.
The intersection consists of $r(e)$ lattice points, and the polygon lies
inside the circle.
\end{theorem}
\begin{proof}
The symmetry properties follow from the fact that the Hamiltonian $\mathscr{P}$ is
invariant under the interchange of its arguments, and the function $P$ is even:
\begin{align*}
P(-x) &= \fl{ -x }^2 + \{-x\}(2\fl{ -x }+1) \\
&= \left\{ \begin{array}{ll} (-\fl{ x }-1)^2 - (1-\{x\})(2\fl{ x }+1) & x\notin\mathbb{Z}\\
(-\fl{ x })^2 & x\in\mathbb{Z}
\end{array} \right. \\
&= \fl{ x }^2 + \{x\}(2\fl{ x }+1) = P(x).
\end{align*}
The vector field (\ref{eq:HamiltonianVectorField}) is piecewise-constant,
and equal to $\mathbf{w}_{m,n}$ in the box $B_{m,n}$ (cf.~equations
(\ref{def:B_mn}) and (\ref{def:w_mn})).
Hence a level set $\Pi(z)$ is a union of line segments.
Since the Hamiltonian $\mathscr{P}$ is continuous, $\Pi(z)$ is connected.
Thus $\Pi(z)$ is a polygonal curve.
It is easy to verify that no three segments can have an end-point in common
(considering end-points in the first octant will suffice).
Equally, segments cannot intersect
inside boxes, because they are parallel there. But a non self-intersecting
symmetric polygonal curve must be a polygon.
Next we prove convexity. Due to dihedral symmetry, if $\Pi(z)$ is convex
within the open first octant $0<y<x$, then it is piecewise-convex.
Thus we suppose that $\Pi(z)$ has an edge in the box $B_{m,n}$, where
$0< n\leq m$. The adjacent edge in the direction of the flow must be
in one of the boxes
\begin{displaymath}
B_{m,n-1},\quad B_{m+1,n-1},\quad B_{m+1,n}.
\end{displaymath}
Using (\ref{def:w_mn}) one verifies that the three determinants
\begin{displaymath}
\det(\mathbf{w}_{m,n},\mathbf{w}_{m,n-1})
\qquad
\det(\mathbf{w}_{m,n},\mathbf{w}_{m+1,n-1})
\qquad
\det(\mathbf{w}_{m,n},\mathbf{w}_{m+1,n})
\end{displaymath}
are negative. This means that, in each case, at the boundary between adjacent boxes,
the integral curve turns clockwise. So $\Pi(z)$ is piecewise-convex.
It remains to prove that convexity is preserved across the boundaries of the
first octant, which belong to the fixed sets $\Fix{G}$ (the line $x=y$)
and $\Fix{G}'$ (the line $y=0$) of the involutions (\ref{eq:Dihedral}).
Indeed, $\Pi(z)$ is either orthogonal to $\Fix{G}$ (in which case
convexity is clearly preserved), or has a vertex $(m,m)$ on it; in the latter case,
the relevant determinant is $\det(\mathbf{w}_{m-1,m},\mathbf{w}_{m,m-1})=-8m<0$.
The preservation of convexity across $\Fix{G}'$ is proved similarly, and thus
$\Pi(z)$ is convex.
The statement on the criticality of $\mathscr{P}(z)$ follows from
the fact that, on $\mathbb{Z}^2$, we have $\mathscr{P}=\mathscr{Q}$.
Consider now the edges of $\Pi(z)$. The intersections of $\Pi(z)$
with the $x$-axis have abscissas $\pm P^{-1} (\mathscr{P}(z))$.
Using (\ref{eq:SqrtP}) we have that there are
$2\fl{\sqrt{\mathscr{P}(z)}}+1$ integer points between them, hence as
many lines orthogonal to the $x$-axis with integer abscissa.
The same holds for the $y$-axis. If $\Pi(z)$ is non-critical,
it follows that $\Pi(z)$ intersects $\Delta$ in exactly
$4(2\Bfl{\sqrt{\mathscr{P}(z)}}+1)$ points,
each line being intersected twice.
Because the vector field changes across each line,
the polygon has $4(2\Bfl{\sqrt{\mathscr{P}(z)}}+1)$ vertices.
If the polygon is critical, then we have $\mathscr{P}(z)=e\in\mathscr{E}$. At each
of the $r(e)$ vertices that belong to $\mathbb{Z}^2$, two lines in $\Delta$
intersect, resulting in one fewer vertex. So $r(e)$ vertices must be removed
from the count.
Next we deal with intersections of critical curves.
Let us consider two arbitrary critical curves
$$
\mathscr{P}(x,y)=e
\qquad
\mathscr{Q}(x,y)=e+f
\qquad
e, e+f\in\mathscr{E}.
$$
This system of equations yields
$$
\{x\}^2 + \{y\}^2 -\{x\} -\{y\}=f,
$$
which is a circle with centre at $(1/2,1/2)$ and radius $\rho$, where
\begin{equation}\label{eq:rho}
\rho^2=f+\frac{1}{2}.
\end{equation}
Since we must have $0\leq\{x\},\{y\}<1$, we find $\rho^2\leq 1/2$, and
since $f$ is an integer, we obtain $\{x\}=\{y\}=f=0$. So critical
polygons and circles intersect only if they have the same value,
and their intersection consists of lattice points. Then the
number of these lattice points is necessarily equal to $r(e)$.
Finally, since the critical curve $\mathscr{P}(x,y)=e$ is a convex polygon,
whose only intersections with the critical circle $\mathscr{Q}(x,y)=e$
occur at vertices, we have that critical polygons lie inside critical circles.
\end{proof}
From this theorem it follows that the set $\Gamma$ of critical polygons
partitions the plane into concentric domains, which we call \defn{polygon classes}.
Each domain contains a single critical circle, and has no lattice points in its interior.
The set of values of the polygons in a class is a critical interval of the form (\ref{def:Ie}),
and we associate the critical number $e\in\mathscr{E}$ with the polygon class $\mathscr{P}^{-1}(\mathscr{I}^e)$.
There is a dual arrangement for critical circles.
Because counting critical polygons is the same as counting critical circles,
the number of critical polygons (or, equivalently, of polygon classes) contained
in a circle of radius $\sqrt{x}$ is equal to $\mathscr{E}(x)$, with asymptotic
formula (\ref{eq:LandauRamanujan}).
From equation (\ref{eq:rho}), one can show that the total variation
$\Delta\mathscr{Q}(\alpha)$ of $\mathscr{Q}$ along the polygon $\mathscr{P}(z)=\alpha$ satisfies the bound
$$
\Delta\mathscr{Q}(\alpha)\leq \frac{1}{2}
$$
which is sharp (the bound is attained, e.g., for $\alpha=1$).
\subsection*{Symbolic coding of polygon classes}
In theorem \ref{thm:Polygons} we classified the invariant curves of
the Hamiltonian $\mathscr{P}$ in terms of critical numbers. We found that the
set $\Gamma$ of critical polygons partitions the plane into concentric annular
domains---the polygon classes.
In this section we define a symbolic coding on the set of classes,
which specifies the common itinerary of all orbits in a class,
taken with respect to the lattice $\mathbb{Z}^2$.
Suppose that the polygon $\Pi(z)$ is non-critical. Then all vertices of
$\Pi(z)$ belong to $\Delta\setminus \mathbb{Z}^2$, where $\Delta$ was
defined in (\ref{eq:Delta}). Let $\xi$ be a vertex.
Then $\xi$ has one integer and one non-integer coordinate, and we let $u$
be the value of the non-integer coordinate.
We say that the vertex $\xi$ is of \defn{type} $v$ if $\fl{ |u| } =v$.
Then we write $v_j$ for the type of the $j$th vertex, where the vertices of $\Pi(z)$
are enumerated according to their position in the plane,
starting from the positive half of symmetry line $\Fix{G}$ and proceeding clockwise.
\begin{figure}[b]
\centering
\includegraphics[scale=0.9]{TikZ/VertexList}
\caption{A polygon with $\mathscr{P}(z)$ in the interval $(9,10)$ and its vertices in the first octant.}
\label{fig:V(9)}
\end{figure}
The sequence of vertex types $v_j$ reflects the eight-fold symmetry of $\Pi(z)$.
Hence if the $k$th vertex lies on the $x$-axis, then there are $2k-1$ vertices
belonging to each quarter-turn, and the vertex types satisfy
\begin{equation} \label{eq:v_symmetry}
v_j = v_{2k-j} = v_{(2k-1)i+j}, \hskip 20pt
1\leq j \leq k, \quad 0\leq i \leq 3.
\end{equation}
Thus it suffices to consider the vertices in the first octant, and
the \defn{vertex list} of $\Pi(z)$ is the sequence of vertex types
\label{def:VertexList}
\begin{displaymath}
V=(v_1,\dots,v_{k}).
\end{displaymath}
We note that the vertex list can be decomposed into two disjoint subsequences;
those entries belonging to a vertex with integer $x$-coordinate and those
belonging to a vertex with integer $y$-coordinate.
These subsequences are non-decreasing and non-increasing, respectively.
From theorem \ref{thm:Polygons}, it follows that for
every $e\in\mathscr{E}$, the set of polygons $\Pi(z)$ with
$
\mathscr{P}(z)\in \mathscr{I}^e
$
have the same vertex list.
Let $k$ be the number of entries in the vertex list. Since the polygon
$\Pi(z)$ is non-critical, equation (\ref{eq:NumberOfSides})
gives us that
$4( 2\fl{ \sqrt{e} } +1) = 4(2k-1)$, and hence
\begin{equation*}
k=\#V=\fl{ \sqrt{e}} +1.
\end{equation*}
Any two polygons with the same vertex list have not only the same number of edges,
but intersect the same collection of boxes, and have the same collection of tangent
vectors. The critical polygons which intersect the lattice $\mathbb{Z}^2$, where
the vertex list is multiply defined, form the boundaries between classes.
The symbolic coding of these polygons is ambiguous, but this item will not
be required in our analysis.
Thus the vertex list is a function on classes, hence on $\mathscr{E}$.
For example, the polygon class identified with the interval $\mathscr{I}^9=(9,10)$
(see figure \ref{fig:V(9)}) has vertex list
\begin{displaymath}
V(9)=( 2, 2, 0, 3).
\end{displaymath}
For each class, there are two vertex types which we can calculate explicitly:
the first and the last. If $\alpha\in\mathscr{I}^e$, and the polygon $\mathscr{P}(z)=\alpha$ intersects
the symmetry line $\Fix{G}$ at some point $(x,x)$ in the first quadrant,
then $v_1 = \fl{x}$.
By the definition (\ref{eq:Hamiltonian}) of the Hamiltonian $\mathscr{P}$, $x$ satisfies
$$ \mathscr{P}(z) = 2 P(x) = \alpha. $$
Thus inverting $P$ and using (\ref{eq:SqrtP}), it is straightforward to show that
the first vertex type is given by
\begin{equation}\label{def:v1}
v_1 = \fl{P^{-1}(\alpha/2)} = \fl{\sqrt{e/2}} \hskip 40pt \alpha\in\mathscr{I}^e.
\end{equation}
Similarly the last vertex type, corresponding to the vertex on the $x$-axis, is given by
\begin{equation} \label{def:vk}
v_k = \fl{P^{-1}(\alpha)} = \fl{\sqrt{e}} \hskip 40pt \alpha\in\mathscr{I}^e.
\end{equation}
\begin{table}[!h]
\centering
\begin{tabular}{|l|l|}
\hline
$e$ & $V(e)$ \\
\hline
9 & $ (2,2,0,3) $\\
10 & $(2,1,3,3)$ \\
18 & $(3,3,1,4,4)$ \\
29 & $(3,4,2,5,5,5)$ \\
49 & $(4,5,3,6,6,6,0,7)$ \\
52 & $(5,4,6,6,6,1,7,7)$ \\
\hline
\end{tabular}
\caption{A table showing the vertex list $V(e)$ for a selection of
critical numbers $e$. Notice that the first entry in the vertex list is
always $\fl{\sqrt{e/2}}$, the last is $\fl{\sqrt{e}}$, and the
number of entries in the list is $k=\fl{\sqrt{e}}+1$.}
\label{table:V(e)}
\end{table}
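The vertex list is also easy to generate algorithmically: in the open first octant both coordinates vary monotonically along the flow (by the sign of $\mathbf{w}_{m,n}$ for $m,n\geq0$), so the crossings of the level set $\mathscr{P}=\alpha$ with the integer lines, ordered by increasing $x$, give the vertex types in clockwise order. The following Python sketch is an illustration, not used in any proof; its output should reproduce the vertex lists of table \ref{table:V(e)}:
\begin{verbatim}
# Compute the vertex list V(e) directly from the Hamiltonian, by
# collecting the crossings of the level set with integer lines in the
# first octant and sorting them by x.
import math

def P(x):
    m = math.floor(x)
    return m*m + (2*m + 1)*(x - m)

def Pinv(x):
    m = math.floor(math.sqrt(x))
    return (x + m*(1 + m)) / (2*m + 1)

def vertex_list(e):
    alpha = e + 0.5                # lies in the class interval, since
    x0 = Pinv(alpha / 2.0)         # critical numbers are integers
    crossings = []
    for n in range(0, math.floor(x0) + 1):          # lines y = n
        x = Pinv(alpha - n*n)
        crossings.append((x, math.floor(x)))
    for m in range(math.floor(x0) + 1,              # lines x = m
                   math.floor(math.sqrt(alpha)) + 1):
        y = Pinv(alpha - m*m)
        crossings.append((m, math.floor(y)))
    crossings.sort()               # clockwise = increasing x here
    return [t for _, t in crossings]

for e in (9, 10, 18, 29, 49, 52):
    print(e, vertex_list(e))
\end{verbatim}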
\section{Recurrence and return map} \label{sec:Recurrence} \label{SEC:RECURRENCE}
We have already seen that the lattice map $F$ is reversible
with respect to the reflection $G$ of equation (\ref{def:GH}).
The scaled map $F_\lambda$ has the same property, and all orbits of
$F_\lambda$ return repeatedly to a neighbourhood of the symmetry line
$\Fix{G}$, i.e., the line $x=y$.
From equation (\ref{def:A}), the rotation number $\nu$ has the asymptotic form
\begin{displaymath}
\nu = \frac{1}{2\pi}\arccos\left(\frac{\lambda}{2}\right)
= \frac{1}{4}- \frac{\lambda}{4\pi} + O(\lambda^3) \hskip 40pt \lambda\to 0.
\end{displaymath}
The integer $t=4$ is the \defn{zeroth-order recurrence time} of orbits under
$F_{\lambda}$, that is, the number of iterations needed for a point to return
to an $O(\lambda)$-neighbourhood of its starting point. It turns out
(see below---lemma \ref{lemma:Lambda})
that the field $\mathbf{v}(z)$ (equation (\ref{def:v})) is non-zero
for all non-zero points $z$, so no orbit has period four.
Accordingly, for the limit $\lambda\to 0$, we define the \defn{first-order recurrence time}
$t^*$ of the rotation to be the next time of closest approach:
\begin{equation}\label{eq:tstar}
t^*(\lambda) = \min \left\{k\in\mathbb{N} \, : \; d_H(k\nu,\mathbb{N}) \leq d_H(4\nu,\mathbb{N}), \; k>4\right\}
= \frac{\pi}{\lambda} + O(1),
\end{equation}
where $d_H$ is the Hausdorff distance, and the expression $d_H(x,A)$, with $x\in\mathbb{R}$,
is to be understood as the Hausdorff distance between the sets $\{x\}$ and $A$.
The integer $t^*$ provides a natural recurrence timescale for $F_{\lambda}$.
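The following Python sketch (an illustrative aside; the sample values of $\lambda$ are arbitrary) evaluates $t^*(\lambda)$ literally from the definition (\ref{eq:tstar}) and compares it with $\pi/\lambda$:
\begin{verbatim}
# Evaluate t*(lambda) from its definition: the first k > 4 for which
# k*nu is at least as close to an integer as 4*nu is.
import math

def dist_to_integers(x):
    return abs(x - round(x))

def t_star(lam):
    nu = math.acos(lam / 2.0) / (2.0 * math.pi)
    target = dist_to_integers(4 * nu)
    k = 5
    while dist_to_integers(k * nu) > target:
        k += 1
    return k

for lam in (0.1, 0.01, 0.001):
    print(lam, t_star(lam), math.pi / lam)
\end{verbatim}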
Let $T(z)$ be the minimal period of the point $z\in\mathbb{Z}^2$ under $F$,
so that $T(z/\lambda)$ is the corresponding function for
points $z\in(\lambda\Z)^2$ under $F_{\lambda}$.
(In accordance with the \hl{periodicity conjecture}, page \pageref{conj:Periodicity}, we assume that this
function is well-defined.)
Since, as $\lambda\to 0$, the recurrence time $t^*$ diverges,
the periods of the orbits will cluster around integer multiples
of $t^*$, giving rise to branches of the period function (figure \ref{fig:PeriodFunction}).
The lowest branch corresponds to orbits which perform a single revolution
about the origin---the minimal orbits---and their period is approximately equal to $t^*$.
The period function $T$ has a normalised counterpart, given by (cf.~(\ref{eq:tstar}))
\begin{equation*}
T_{\lambda}: (\lambda\Z)^2 \to \frac{\lambda}{\pi}\,\mathbb{N}
\hskip 40pt
T_\lambda(z)=\frac{\lambda}{\pi}\,T(z/\lambda).
\end{equation*}
The values of $T_\lambda$ oscillate about the integers.
We construct a Poincar\'e return map $\Phi$ on a neighbourhood of the
positive half of the symmetry line $\Fix{G}$.
Let $d(z)$ be the perpendicular distance between a point $z$ and
$\Fix{G}$:
\begin{equation*}
d(z) = d_H(z,\Fix{G}).
\end{equation*}
We define the domain $X$ of the return map $\Phi$ to be the set of points
$z\in(\lambda\mathbb{Z}_{\geq0})^2$ which are closer to $\Fix{G}$ than their
preimages under ${F}_\lambda^4$, and at least as close as their images:
\begin{equation}\label{def:X}
X = \{z\in(\lambda\mathbb{Z}_{\geq0})^2 \, : \; d(z) \leq d(F_{\lambda}^4(z)), \; d(z) < d(F_{\lambda}^{-4}(z)) \}.
\end{equation}
\hl{According to corollary} \ref{corollary:mu_1} (page \pageref{corollary:mu_1}), when $\lambda$ is small,
\hl{the fourth iterates of $F_{\lambda}$ typically agree with $\varphi^{\lambda}$, the time-$\lambda$ advance map of the flow.
Thus, in a neighbourhood of the symmetry line $\Fix{G}$, the map $F_{\lambda}^4$ is
simply a translation perpendicular to $\Fix{G}$}:
$$ F_{\lambda}^4(z) = z + \lambda\mathbf{w}_{m,m} \hskip 20pt z\in B_{m,m}, \; m\in\mathbb{Z}_{\geq 0}, $$
\hl{where $\mathbf{w}_{m,m}$ is the local component of the Hamiltonian vector field $\mathbf{w}$
in $B_{m,m}$} (see equation \eqref{def:w_mn}).
It follows that the main component of $X$ in $B_{m,m}$ is a
thin strip of width $\lambda\|\mathbf{w}_{m,m}\|$ lying parallel to the symmetry line $\Fix{G}$
(see figure \ref{fig:lattice_Le}, page \pageref{fig:lattice_Le}).
\hl{Furthermore, it is natural to identify the sides of this strip,
which are connected by the translation $z\mapsto z+\lambda\mathbf{w}_{m,m}$,
so that locally the dynamics take place on a cylinder.}
This description breaks down when $z$ is close to a vertex,
i.e., close to the boundary of $B_{m,m}$.
We formalise these properties below (section \ref{sec:RegularDomains}).
The transit time $\tau$ to the set $X$ is well-defined for all $z\in(\lambda\Z)^2$:
\begin{equation} \label{eq:tau}
\tau : (\lambda\Z)^2 \to \mathbb{N}
\hskip 40pt \tau(z)=\min \{ k\in\mathbb{N} \, : \; F_{\lambda}^k(z) \in X \}.
\end{equation}
Thus the first return map $\Phi$ is the function
\begin{equation} \label{def:Phi}
\Phi : X \to X
\hskip 40pt
\Phi(z) = F_{\lambda}^{\tau(z)}(z).
\end{equation}
We refer to the orbit of $z\in X$ up to the return time $\tau(z)$ as the
\defn{return orbit} of $z$:
\begin{equation*}
{\mathcal O}_{\tau}(z) = \{ F_{\lambda}^k(z) \, : \; 0\leq k \leq \tau(z) \} \hskip 40pt z\in X.
\end{equation*}
We let $\tau_{-}$ be the transit time to $X$ under $F_{\lambda}^{-1}$:
\begin{displaymath}
\tau_{-} : (\lambda\Z)^2 \to \mathbb{Z}_{\geq0}
\hskip 40pt
\tau_{-}(z) = \min \{ k\in\mathbb{Z}_{\geq0} \, : \; F_{\lambda}^{-k}(z) \in X \},
\end{displaymath}
so that the return orbit for a general $z\in(\lambda\Z)^2$ is given by
\begin{equation*}
{\mathcal O}_{\tau}(z) = \{ F_{\lambda}^k(z) \, : \; -\tau_{-}(z)\leq k \leq \tau(z) \} \hskip 40pt z\in (\lambda\Z)^2.
\end{equation*}
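For experimentation, the return map can be computed by direct iteration. The following Python sketch is illustrative only (the choices $\lambda=1/q$ and of the initial point are arbitrary); it tests membership of $X$ via the defining inequalities (\ref{def:X}) and evaluates $\Phi(z)$ of (\ref{def:Phi}) together with the transit time $\tau(z)$:
\begin{verbatim}
# The return map Phi by direct iteration of F, in unscaled coordinates
# (rescaling by lambda does not affect the comparisons); lambda = 1/q.
def F(z, q):
    x, y = z
    return (x // q - y, x)

def Finv(z, q):                     # inverse of F: F(Finv(z)) = z
    u, v = z
    return (v, v // q - u)

def d(z):                           # proportional to distance from Fix G
    return abs(z[0] - z[1])

def F4(z, q, inverse=False):
    for _ in range(4):
        z = Finv(z, q) if inverse else F(z, q)
    return z

def in_X(z, q):                     # the defining inequalities of X
    if z[0] < 0 or z[1] < 0:
        return False
    return d(z) <= d(F4(z, q)) and d(z) < d(F4(z, q, inverse=True))

def Phi(z, q, max_iter=10**7):      # first return point and transit time
    w, tau = F(z, q), 1
    while not in_X(w, q) and tau < max_iter:
        w, tau = F(w, q), tau + 1
    return w, tau

q = 100                             # lambda = 0.01
z = (70, 70)                        # scaled point z/q = (0.7, 0.7), in X
print(Phi(z, q))
\end{verbatim}
For this starting point the transit time should be of the order of $t^*\approx\pi/\lambda$, one revolution about the origin.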
\medskip
To associate a return orbit with an integrable orbit, we define the rescaled
round-off function $R_{\lambda}$, which rounds points on the plane down to
the next lattice point:
\begin{equation} \label{eq:R_lambda}
R_{\lambda}: \mathbb{R}^2 \to (\lambda\Z)^2
\hskip 40pt
R_{\lambda}(w)=\lambda R(w/\lambda),
\end{equation}
where $R$ is the integer round-off function (\ref{eq:R}).
For every point $w\in\mathbb{R}^2$ and every $\delta>0$, the set of points
$$
\{z\in(\lambda\Z)^2 \, : \; z= R_{\lambda}(w), \; \,0< \lambda<\delta\}
$$
that represent $w$ on the lattice as $\lambda\to0$ is countably infinite.
The corresponding set of points on $\mathbb{Z}^2$, before rescaling, is unbounded.
According to proposition \ref{prop:mu_1}, the points of the scaled lattice
$(\lambda\Z)^2$ at which the (rescaled) integrable and discrete vector fields have
different values are rare, as a proportion of lattice points. The following
result shows that these points are also rare within each return orbit.
\begin{proposition} \label{prop:mu_2}
For any $w\in\mathbb{R}^2$, if we define the ratio
\begin{displaymath}
\mu_2(w,\lambda) = \frac{\#\{z\in{\mathcal O}_{\tau}(R_{\lambda}(w)) \, : \; \mathbf{v}(z)=\lambda\mathbf{w}(z)\}}{\# {\mathcal O}_{\tau}(R_{\lambda}(w))} ,
\end{displaymath}
then we have
\begin{displaymath}
\lim_{\lambda\to 0} \mu_2(w,\lambda) = 1 .
\end{displaymath}
\end{proposition}
Finally we formulate a shadowing theorem, which states that
for timescales corresponding to a first return to the domain $X$, every
integrable orbit has a scaled return orbit that shadows it.
Furthermore, this scaled return orbit of the round-off map
converges to the integrable orbit in the Hausdorff metric as $\lambda\to 0$,
\hl{so that up to their natural recurrence time, orbits of $F_{\lambda}$ render increasingly accurate
approximations of the flow trajectories}.
\begin{theorem} \label{thm:Hausdorff}
For any $w\in\mathbb{R}^2$, let $\Pi(w)$ be the orbit of $w$ under the flow $\varphi$, and let
${\mathcal O}_{\tau}(R_{\lambda}(w))$ be the return orbit at the \hl{corresponding}
lattice point. Then
\begin{displaymath}
\lim_{\lambda\to 0} d_H
\left(\Pi(w),{\mathcal O}_{\tau}(R_{\lambda}(w))\right)=0,
\end{displaymath}
where $d_H$ is the Hausdorff distance on $\mathbb{R}^2$.
\end{theorem}
This result justifies the term `integrable limit' assigned to the flow $\varphi$ generated by $\mathscr{P}$.
The proofs for proposition \ref{prop:mu_2} and theorem \ref{thm:Hausdorff} can be found below.
\subsection*{Transition points}
To establish propositions \ref{prop:mu_1} and \ref{prop:mu_2},
we seek to isolate the lattice points $z\in(\lambda\Z)^2$ where
the discrete vector field $\mathbf{v}(z)$ deviates from the scaled auxiliary
vector field $\lambda\mathbf{w}(z)$. We say that a point $z\in(\lambda\Z)^2$ is a
\defn{transition point} if $z$ and its image under $F_\lambda^4$
do not belong to the same box, namely if
\begin{displaymath}
R(F_{\lambda}^4(z))\not=R(z).
\end{displaymath}
Let $\Lambda$ be the set of transition points. Then
\begin{equation} \label{eq:Lambda}
\Lambda = \bigcup_{m,n\in\mathbb{Z}} \Lambda_{m,n},
\end{equation}
where
\begin{displaymath}
\Lambda_{m,n} = F_{\lambda}^{-4}(B_{m,n}\cap(\lambda\Z)^2)\setminus B_{m,n}.
\end{displaymath}
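In code, transition points are detected directly from this definition. The following Python sketch is illustrative (the window and the value $\lambda=1/q$ are arbitrary); it counts the transition points in a small square of the scaled plane, cf.\ figure \ref{fig:LambdaSigma_plot}:
\begin{verbatim}
# Count the transition points of F_lambda in a window: lattice points z
# with R(F_lambda^4(z)) != R(z).  Again lambda = 1/q keeps floors exact.
def F(z, q):
    x, y = z
    return (x // q - y, x)

def box(z, q):                      # the box B_{m,n} containing z/q
    return (z[0] // q, z[1] // q)

def is_transition(z, q):
    w = z
    for _ in range(4):
        w = F(w, q)
    return box(w, q) != box(z, q)

q = 50                              # lambda = 0.02
pts = [(x, y) for x in range(2 * q) for y in range(2 * q)]
count = sum(is_transition(z, q) for z in pts)
print(count, "transition points out of", len(pts))
\end{verbatim}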
\begin{figure}[t]
\centering
\includegraphics[scale=0.9]{TikZ/LambdaSigma}
\caption{The structure of phase space. The boxes $B_{m,n}$, bounded by the
set $\Delta$, include regular domains (white) where the motion is integrable,
\hl{i.e., where $F_{\lambda}^4(z)=\varphi^{\lambda}(z)$}. By corollary \ref{corollary:mu_1}, page \pageref{corollary:mu_1},
\hl{the lattice points in these domains have full density as $\lambda\to 0$}.
The darker regions comprise the set $\Lambda$ of transition points, which is introduced below.
\hl{The transition points are where the perturbation from the integrable motion occurs.}
The darkest domains belong to the set $\Sigma\subset \Lambda$, defined in (\ref{eq:Sigma}),
\hl{which forms a neighbourhood of the set $\mathbb{Z}^2$.
Perturbed orbits which intersect the set $\Sigma$ are analogous to the critical polygons of
the flow, and we exclude them from our analysis.}}
\label{fig:LambdaSigma_plot}
\end{figure}
For small $\lambda$, the set of transition points consists of thin
strips of lattice points arranged along the lines $\Delta$ (see figure
\ref{fig:LambdaSigma_plot}). The following key lemma states that, for sufficiently
small $\lambda$, all points $z\not=(0,0)$ where $\mathbf{v}(z)\neq\lambda\mathbf{w}(z)$
are transition points.
\begin{lemma} \label{lemma:Lambda}
Let $A(r,\lambda)$ be as in equation (\ref{eq:A}).
Then for all $r>0$ there exists $\lambda^*>0$ such that, for all $\lambda<\lambda^*$
and $z\in A(r,\lambda)$, we have
\begin{displaymath}
z \notin \Lambda \cup \{(0,0)\} \quad \Rightarrow \quad \mathbf{v}(z)= \lambda\mathbf{w}(z).
\end{displaymath}
\end{lemma}
\begin{proof}
Let $r>0$ be given, and let $z=\lambda(x,y)\in A(r,\lambda)$.
We show that if $\lambda$ is sufficiently small (and $z\not=0$),
then
\begin{displaymath}
\mathbf{v}(z)\neq \lambda\mathbf{w}(z)
\quad \Rightarrow \quad
R(F_{\lambda}^4(z))\not=R(z).
\end{displaymath}
Since $z\in A(r,\lambda)$, we have $z\in B_{m,n}$ for some $|m|,|n|\leq\ceil{r}$,
where $\ceil{\cdot}$ is the ceiling function, defined by the
identity $\ceil{x}=-\fl{ -x }$.
Through repeated applications of $F_{\lambda}$, we have
\begin{equation}\label{eq:Fabcd}
\begin{array}{llll}
&F_{\lambda}(z) = \lambda(-y+m,x) &R(F_\lambda(z))=(-(a+1),m),\\
\noalign{\vspace*{2pt}}
&F^2_{\lambda}(z) = \lambda(-x-a-1,-y+m) &R(F_\lambda^2(z))=(-(b+1),-(a+1)),\\
\noalign{\vspace*{2pt}}
&F^3_{\lambda}(z) = \lambda(y-m-b-1,-x-a-1) &R(F_\lambda^3(z))=(c,-(b+1)), \\
\noalign{\vspace*{2pt}}
&F^4_{\lambda}(z) = \lambda(x+a+c+1,y-m-b-1) \quad &R(F_\lambda^4(z))=(d,c),
\end{array}
\end{equation}
where $m=\fl{\lambda x}$, $n=\fl{\lambda y}$, and the integers $a,b,c,d$ are given by
\begin{equation}\label{eq:abcd}
\begin{array}{rl}
a+1 &= \ceil{\lambda (y-m)},\\
\noalign{\vspace*{2pt}}
b+1 &= \ceil{\lambda (x+a+1)}, \\
\noalign{\vspace*{2pt}}
c &= \fl{ \lambda (y-m-b-1) }, \\
\noalign{\vspace*{2pt}}
d &= \fl{ \lambda (x+a+c+1) }.
\end{array}
\end{equation}
The integers $a$, $b$, $c$ and $d$ label the boxes in which each iterate occurs,
and also give an explicit expression for the round-off term
$\fl{ \lambda x }$ at each step.
Thus, reading off the last of these equations, the discrete vector field $\mathbf{v}$ at $z\in B_{m,n}$ is given by
\begin{equation} \label{eq:v_abcd}
\mathbf{v}(z)= F^4_{\lambda}(z) - z =
\lambda( a+c+1, -(m+b+1)),
\end{equation}
and $z$ is a transition point whenever at least one of the equalities
$d=m$ and $c=n$ on the final pair of box labels fails.
If the integers $m$, $a$, $b$, $c$ are sufficiently small relative to the
number of lattice points per unit length, i.e., if
\begin{equation} \label{eq:abc_ineq}
\max (|m|,|a+1|,|m+b+1|,|a+c+1|)<1/\lambda,
\end{equation}
then the map $F^4_{\lambda}$ moves the point $z$ by at most one box in each of the $x$ and $y$ directions,
so that the labels $a$, $b$, $c$ and $d$ satisfy
\begin{equation} \label{eq:bd_ac_sets}
b,d\in\{m-1,m,m+1\}, \hskip 20pt a,c\in\{n-1,n,n+1\}.
\end{equation}
Similarly, (\ref{eq:abc_ineq}) dictates that the discrepancy between each of the pairs $(b,d)$, $(a,c)$ cannot be too large:
\begin{equation} \label{eq:bd_ac_ineq}
|b-d|, |a-c| \leq 1.
\end{equation}
Letting $\lambda^* = 1/(2\ceil{r} +3)$, we obtain
\begin{align*}
& \max (|m|,|a+1|,|m+b+1|,|a+c+1|) \\
& \quad \leq \max (|m|+|b+1|,|a+1|+|c|) \\
& \quad \leq \max (2|m|+|b-m|,2|n|+|a-n|+|c-n|)+1 \\
& \quad \leq 2\ceil{r}+3 \leq 1/\lambda^*,
\end{align*}
so that (\ref{eq:bd_ac_sets}) and (\ref{eq:bd_ac_ineq}) hold for all $\lambda<\lambda^*$.
Then the expression (\ref{eq:v_abcd}) for $\mathbf{v}$, combined with the inequality (\ref{eq:bd_ac_ineq}),
gives that $\mathbf{v}(z)= \lambda\mathbf{w}(z)$ if and only if
\begin{equation} \label{eq:v=w}
m=b \hskip 40pt n = a = c.
\end{equation}
Suppose now that $z$ is not a transition point, so that $c=n$ and $d=m$,
but that $\mathbf{v}(z)\neq \lambda\mathbf{w}(z)$, so that at least one of the
equalities (\ref{eq:v=w}) fails.
If $a\neq n$, straightforward manipulation of inequalities
shows that the only combination of values which satisfies (\ref{eq:bd_ac_sets}) is
\begin{displaymath}
a=n-1, \, m=0, \, b=-1, \, \lambda y = n.
\end{displaymath}
In particular, we have $b\neq m$.
Conversely if $b\neq m$, then using also the inequality (\ref{eq:bd_ac_ineq}) gives
\begin{displaymath}
b=m-1, \, c=0, \, a=-1, \, \lambda x = m,
\end{displaymath}
so that $n=c\neq a$.
Hence, combining these, the only possibility is $m=a+1=b+1=c=0$,
which corresponds to the unique point $z=(0,0)$.
\end{proof}
By construction, the auxiliary vector field $\mathbf{w}$ is
equal to the Hamiltonian vector field associated
with $\mathscr{P}$ (see equation (\ref{eq:HamiltonianVectorField})).
Since $\mathbf{w}$ is piecewise-constant, it follows that $\varphi^{\lambda}$, the time-$\lambda$ advance map of the Hamiltonian flow,
is equal to a translation by $\lambda\mathbf{w}$ everywhere except across the discontinuities of $\mathbf{w}$,
i.e., except at transition points.
Furthermore, lemma \ref{lemma:Lambda} gives us that, for sufficiently small $\lambda$,
any $z\in A(r,\lambda)\setminus\{(0,0)\}$ which is not a transition point satisfies $\lambda\mathbf{w}(z)=\mathbf{v}(z)$.
Hence, a simple consequence of lemma \ref{lemma:Lambda} is that $F_{\lambda}^4$ is equal to a
time-$\lambda$ advance of the flow everywhere except at the transition points.
\begin{corollary} \label{corollary:Lambda}
Let $A(r,\lambda)$ be as in equation (\ref{eq:A}).
Then for all $r>0$ there exists $\lambda^*>0$ such that, for all $\lambda<\lambda^*$
and $z\in A(r,\lambda)$, we have
\begin{displaymath}
z \notin \Lambda \cup \{(0,0)\} \quad \Rightarrow \quad F_{\lambda}^4(z)= \varphi^{\lambda}(z).
\end{displaymath}
\end{corollary}
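Continuing the sketch above, the conclusion of lemma \ref{lemma:Lambda} can be checked
numerically. In the integer coordinates used there, the identity $\mathbf{v}(z)=\lambda\mathbf{w}(z)$
reads $F^4(x,y)-(x,y)=(2n+1,-(2m+1))$, where $(m,n)$ is the box label of $z$;
this is the form which (\ref{eq:v_abcd}) takes when $m=b$ and $n=a=c$.
\begin{verbatim}
# Continuing the sketch above: verify that v(z) = lambda*w(z) away from
# Lambda and the origin.  With lam = 1/24, the points below lie well
# within the range where the lemma applies.
N = 50
for x in range(-N, N + 1):
    for y in range(-N, N + 1):
        if (x, y) == (0, 0) or is_transition_point(x, y):
            continue
        m, n = box(x, y)
        x4, y4 = F4(x, y)
        assert (x4 - x, y4 - y) == (2 * n + 1, -(2 * m + 1))
\end{verbatim}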
We now use lemma \ref{lemma:Lambda} to prove proposition \ref{prop:mu_1},
given in section \ref{sec:Hamiltonian}, page \pageref{prop:mu_1}.
\begin{proof}[Proof of proposition \ref{prop:mu_1}] \label{proof:mu_1}
From equation (\ref{eq:A}), we have that the number of lattice points
in the set $A(r,\lambda)$ is given by
\begin{displaymath}
\# A(r,\lambda) = \left( 2 \Bceil{\frac{r}{\lambda}} -1\right)^2.
\end{displaymath}
By lemma \ref{lemma:Lambda}, for sufficiently small $\lambda$,
every non-zero point $z\in A(r,\lambda)$
satisfying $\mathbf{v}(z)\neq \lambda\mathbf{w}(z)$ is a transition point, so has
$z\in \Lambda_{m,n}$ for some $m,n\in\mathbb{Z}$ with $|m|,|n|\leq \ceil{r}+1$.
Furthermore, every set $\Lambda_{m,n}$ is composed of two strips,
each of unit length, and of widths approximately equal to $\lambda|2m+1|$
and $\lambda|2n+1|$, respectively.
We can therefore bound the number of lattice points in the set $\Lambda_{m,n}$ explicitly by
\begin{displaymath}
\# \Lambda_{m,n} \leq \frac{|2m+1|+|2n+1|+c}{\lambda},
\end{displaymath}
for some positive constant $c$, independent of $m$ and $n$. (Indeed $c=3$ is sufficient --
cf.~the methods used in the proof of proposition \ref{prop:Xe}.)
It follows that for fixed $r>0$, as $\lambda\to 0$ we have the estimate
\begin{align*}
\mu_1(r,\lambda) &= 1 - \frac{ \# \{z\in A(r,\lambda) \, : \;
\mathbf{v}(z) \neq \lambda\mathbf{w}(z) \} }{\# A(r,\lambda)} \\
&\geq 1 - \frac{ \# \left(A(r,\lambda) \cap \Lambda \right)}{\# A(r,\lambda)} \\
&\geq 1 - \frac{1}{\# A(r,\lambda)} \sum_{|m|,|n|\leq \ceil{r}+1} \# \Lambda_{m,n} \\
&\geq 1 - \left( 2 \Bceil{\frac{r}{\lambda}} -1\right)^{-2} \sum_{|m|,|n|\leq \ceil{r}+1}
\frac{|2m+1|+|2n+1|+c}{\lambda} \\
&= 1 + O(\lambda).
\end{align*}
Since $\mu_1(r,\lambda)\leq 1$, the proof is complete.
\end{proof}
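The density estimate itself can be probed in the same way. Continuing the sketch above,
the following computes a lower bound for $\mu_1(r,\lambda)$ by counting transition points;
for small $\lambda$ the points where $\mathbf{v}\neq\lambda\mathbf{w}$ lie in
$\Lambda\cup\{(0,0)\}$, so the printed value tends to $1$ as $\lambda\to 0$.
\begin{verbatim}
# Continuing the sketch above: an empirical lower bound for mu_1(r, lam).
lam = 1.0 / 100               # rebind the module-level parameter
r = 3.0
N = math.ceil(r / lam)        # A(r, lam) contains (2N - 1)^2 lattice points
transitions = sum(
    is_transition_point(x, y)
    for x in range(-N + 1, N)
    for y in range(-N + 1, N)
)
print(1 - transitions / (2 * N - 1) ** 2)    # tends to 1 as lam -> 0
\end{verbatim}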
\medskip
To prove proposition \ref{prop:mu_2} and theorem \ref{thm:Hausdorff}
we need a second lemma, which bounds the variation in the Hamiltonian function
$\mathscr{P}$ along perturbed orbits ${\mathcal O}_{\tau}(R_{\lambda}(w))$ as $\lambda\to 0$, where $w\in\mathbb{R}^2$.
By corollary \ref{corollary:Lambda}, we know $\mathscr{P}$ is invariant under $F_{\lambda}^4$ at all points $z\notin\Lambda$,
so that variations can only occur when the fourth iterates of $F_{\lambda}$ hit a transition point.
However, the number of transition points encountered by a perturbed orbit in one revolution
is (essentially) equal to the number of vertices of the corresponding polygon,
which is independent of $\lambda$.
Furthermore, the magnitude of the perturbation from the integrable motion at such a transition point
is $O(\lambda)$ as $\lambda\to 0$. Hence we have the following result.
\begin{lemma} \label{lemma:cP_variation}
Let $w\in\mathbb{R}^2$ and let $z=R_{\lambda}(w)\in(\lambda\Z)^2$ be the rounded lattice point associated with $w$.
Then as $\lambda\to 0$:
$$ \forall \xi\in {\mathcal O}_{\tau}(z): \hskip 20pt |\mathscr{P}(\xi)-\mathscr{P}(w)| = O(\lambda). $$
\end{lemma}
We postpone the proof of lemma \ref{lemma:cP_variation} to appendix \ref{chap:Appendix},
and proceed with the proof of proposition \ref{prop:mu_2} (page \pageref{prop:mu_2}).
\begin{proof}[Proof of proposition \ref{prop:mu_2}]
Let $w\in \mathbb{R}^2$ be given, and let $z=R_{\lambda}(w)$.
For small $\lambda$, the polygons $\Pi(z)$ and $\Pi(w)$ are close, since
$$ \|z - w\| = O(\lambda) $$
as $\lambda\to 0$.
Consider the polygons $\Pi(w)^{\pm}$ given by
$$ \Pi(w)^{\pm} = \{ \xi \, : \; \mathscr{P}(\xi) = \mathscr{P}(w)\pm1 \}, $$
where the abscissae $x^{\pm}$ of the intersections of $\Pi(w)^{\pm}$
with the positive $x$-axis are given by
$$ x^{\pm} = P^{-1}(\mathscr{P}(w)\pm1). $$
Without loss of generality, we may assume that neither of these polygons
is critical. Thus each of these integrable orbits intersects as many boxes
as it has sides. For the larger polygon $\Pi(w)^+$, the number of sides
(see theorem \ref{thm:Polygons}, page \pageref{thm:Polygons}) is given by
$$ 4\left( 2\Bfl{\sqrt{\mathscr{P}(w)+ 1}}+1 \right). $$
By construction, the return orbit of $z$ contains exactly one
transition point for every time the fourth iterates of $F_{\lambda}$ move the orbit from one of the boxes $B_{m,n}$ to another.
Furthermore, the fourth iterates of $F_{\lambda}$ move points parallel to the flow within each box,
so that, per revolution, there is exactly one transition point per box that the orbit intersects.
By lemma \ref{lemma:cP_variation}, the return orbit ${\mathcal O}_{\tau}(z)$ is bounded
between the polygons $\Pi(w)^{\pm}$ for sufficiently small $\lambda$.
Hence the number of boxes intersected in any one revolution
around the origin cannot exceed the number of sides of $\Pi(w)^{+}$:
$$ \# \left({\mathcal O}_{\tau}(z)\,\cap\,\Lambda \right) \leq 4\left( 2\Bfl{\sqrt{\mathscr{P}(w)+ 1}}+1 \right). $$
\medskip
Now we consider the total number of points in the return orbit ${\mathcal O}_{\tau}(z)$.
Since the perturbed orbit is bounded below by the integrable orbit $\Pi(w)^{-}$, it must contain a point $\xi$,
close to the positive $x$-axis, with $x$-coordinate not less than $x^-$.
Similarly for the negative $x$-axis.
The return orbit moves between neighbouring points via the action of $F_{\lambda}^4$,
i.e., by translation by the vector field $\mathbf{v}$.
If $\xi=\lambda(x,y)\in{\mathcal O}_{\tau}(z)$, then for sufficiently small $\lambda$, equations (\ref{eq:v_abcd}) and
(\ref{eq:bd_ac_sets}) from the proof above can be combined to give
\begin{align*}
\| \mathbf{v}(\xi) \|
&\leq \lambda \sqrt{(|2\fl{\lambda y}+1| +2)^2 + (|2\fl{\lambda x}+1| +1)^2} \\
&\leq \lambda \sqrt{(2|\fl{\lambda y}|+3)^2 + (2|\fl{\lambda x}|+2)^2} \\
&< \lambda \sqrt{(2|\lambda y|+5)^2 + (2|\lambda x|+4)^2} \\
&< \lambda \sqrt{2}(2x^+ +5),
\end{align*}
where $\|\xi\|_{\infty}\leq x^+$ because the orbit is bounded above by $\Pi(w)^{+}$.
Hence, for sufficiently small $\lambda$, the number of points in the orbit
is bounded below by the distance $4x^-$ divided by the maximal length of $\mathbf{v}$ along the orbit:
\begin{displaymath}
\# {\mathcal O}_{\tau}(z) \geq \frac{4x^-}{\lambda \sqrt{2}(2x^+ +5)}.
\end{displaymath}
Thus, as $\lambda\to 0$, we have the estimate
\begin{align*}
\mu_2(w,\lambda) &= 1 - \frac{ \# \{\xi\in {\mathcal O}_{\tau}(z) \, : \;
\mathbf{v}(\xi) \neq \lambda\mathbf{w}(\xi) \} }{\# {\mathcal O}_{\tau}(z)}, \\
&\geq 1 - \frac{ \# \left({\mathcal O}_{\tau}(z)\,\cap\,\Lambda \right) }{\# {\mathcal O}_{\tau}(z)} , \\
&\geq 1 - \lambda \,\frac{ \sqrt{2}(2x^+ +5)\left( 2\Bfl{\sqrt{\mathscr{P}(w)+ 1}}+1 \right) }{x^-} , \\
&= 1 + O(\lambda).
\end{align*}
Since $\mu_2(w,\lambda)\leq 1$, the proof is complete.
\end{proof}
\medskip
Finally, we can prove theorem \ref{thm:Hausdorff} of page \pageref{thm:Hausdorff}.
\begin{proof}[Proof of theorem \ref{thm:Hausdorff}]
Let $w\in \mathbb{R}^2$ be given, and let $z=R_{\lambda}(w)$,
so that ${\mathcal O}_{\tau}(z)$ is the return orbit which shadows the integrable orbit $\Pi(w)$.
By lemma \ref{lemma:cP_variation}, the variation in $\mathscr{P}$ along the orbit
of $z$ is $O(\lambda)$ as $\lambda\to 0$.
Furthermore, the derivative of $\mathscr{P}$ is bounded away from zero in a neighbourhood of $\Pi(w)$,
so that points in the orbit must be close to $\Pi(w)$ in the Hausdorff metric:
$$ \forall \xi\in {\mathcal O}_{\tau}(z): \hskip 20pt d_H(\xi,\Pi(w)) = O(\lambda) $$
as $\lambda\to 0$.
Neighbouring points $\xi,\xi+\mathbf{v}(\xi)$ in the return orbit ${\mathcal O}_{\tau}(z)$ are also
$O(\lambda)$-close as $\lambda\to 0$, so the result follows.
\end{proof}
\section{Nonlinearity} \label{sec:IntegrableReturnMap}
So far we have seen that, in the integrable limit, orbits of the rescaled
discretised rotation $F_{\lambda}$ shadow orbits of the Hamiltonian flow $\varphi$.
In particular, in corollary \ref{corollary:mu_1}, we showed that as $\lambda\rightarrow0$,
$F_{\lambda}^4$ is equal to the time-$\lambda$ advance map $\varphi^{\lambda}$ of the flow almost
everywhere in any bounded region.
We now introduce the period $\mathscr{T}$ of the flow $\varphi$:
\begin{equation} \label{def:cT(z)}
\mathscr{T} :\mathbb{R}^2 \rightarrow \mathbb{R}_{\geq 0} \hskip 40pt \mathscr{T}(z) = \min\{t>0 \, : \; \varphi^t(z)=z\},
\end{equation}
so that the integrable counterpart $\mathscr{F}_{\lambda}$ to the discretised rotation $F_{\lambda}$ is given by
\begin{equation} \label{def:cF}
\mathscr{F}_{\lambda} : \mathbb{R}^2 \rightarrow \mathbb{R}^2 \hskip 40pt \mathscr{F}_{\lambda}(z) = \varphi^{(\lambda - \mathscr{T}(z))/4}(z).
\end{equation}
In accordance with corollary \ref{corollary:mu_1}, the map $\mathscr{F}_{\lambda}^4$ is a time-$\lambda$ advance of the flow:
$$ \mathscr{F}_{\lambda}^4(z) = \varphi^{(\lambda - \mathscr{T}(z))}(z) = \varphi^{\lambda}(z). $$
As we did for $F_{\lambda}$ in section \ref{sec:Recurrence},
we can define a first return map for $\mathscr{F}_{\lambda}$.
The counterpart $\mathscr{X}$ to the return domain $X$ is given by the set of points in the
plane which are closer to $\Fix{G}$ than their preimages under $\varphi^{\lambda}$,
and at least as close as their images. In this case, the set $\mathscr{X}$ takes the simple form
\begin{equation} \label{eq:cX}
\mathscr{X} = \{ \varphi^{\lambda\theta}(x,x) \, : \; x\geq 0, \; \theta\in[-1/2,1/2) \}.
\end{equation}
We have the following explicit expression for the first return map.
\begin{proposition} \label{prop:cPhi(z)}
Let $z\in\mathscr{X}$, and let $z^{\prime}$ be the first return of $z$ to $\mathscr{X}$ under $\mathscr{F}_{\lambda}$.
Suppose that $z=\varphi^{\lambda\theta}(x,x)$ and $z^{\prime}=\varphi^{\lambda\theta^{\prime}}(x,x)$,
where $\theta,\theta^{\prime}\in[-1/2,1/2)$ and $x\in\mathbb{R}_{\geq 0}$.
Then $\theta^{\prime}$ is related to $\theta$ via
\begin{equation} \label{eq:theta_prime}
\theta^{\prime} \equiv \theta+\frac{1}{4}-\frac{\mathscr{T}(z)}{4\lambda} \mod{1}.
\end{equation}
\end{proposition}
\begin{proof}
Suppose that $t$ is the return time of $z$ to $\mathscr{X}$, so that $z^{\prime}=\mathscr{F}_{\lambda}^t(z)$.
If $z=\varphi^{\lambda\theta}(x,x)$, then by the definition (\ref{def:cF}) of $\mathscr{F}_{\lambda}$ we have
\begin{displaymath}
\mathscr{F}_{\lambda}^k(z) = \varphi^{\lambda\theta + k(\lambda - \mathscr{T}(z))/4} (x,x) \hskip 40pt k\in\mathbb{Z}.
\end{displaymath}
Thus, by the expression (\ref{eq:cX}) for $\mathscr{X}$, the $k$th iterate of $z$ under $\mathscr{F}_{\lambda}$ lies in the set $\mathscr{X}$ if and only if
\begin{displaymath}
\theta + k \left(\frac{\lambda - \mathscr{T}(z)}{4\lambda}\right) + \frac{m\mathscr{T}(z)}{\lambda} \in [-1/2,1/2)
\end{displaymath}
for some $m\in\mathbb{Z}$. The return time $t$ is the minimal $k\in\mathbb{N}$ for which this inclusion holds.
Writing $k=4l+r$ for $l\in\mathbb{Z}_{\geq 0}$ and $0\leq r\leq 3$, it is straightforward to see that
$k$ is minimal when $r=1$, $m=l$, and $l$ satisfies
\begin{displaymath}
\theta + l - \frac{ \mathscr{T}(z)}{4\lambda} \in [-3/4,1/4),
\end{displaymath}
i.e., when
\begin{displaymath}
l + \Bfl{\theta + \frac{3}{4} - \frac{\mathscr{T}(z)}{4\lambda}} =0.
\end{displaymath}
Thus the return time is given by
\begin{equation} \label{eq:tau(z)}
t = 4\Bceil{ \frac{\mathscr{T}(z)}{4\lambda} - \theta - \frac{3}{4} } + 1,
\end{equation}
where we have used the relation $-\fl{x}=\ceil{-x}$.
Now if $z^{\prime}=\varphi^{\lambda\theta^{\prime}}(x,x)$, where $\theta^{\prime}\in[-1/2,1/2)$,
then by construction, we have
$$ \lambda\theta^{\prime} \equiv \lambda\theta + t\left(\frac{\lambda - \mathscr{T}(z)}{4}\right) \mod{\mathscr{T}(z)}, $$
where we write $a \equiv b ~ (\mathrm{mod} ~ c)$ for real $c$ to denote that $(a-b)\in c\,\mathbb{Z}$. Thus it follows from the formula (\ref{eq:tau(z)}) for the return time that
\begin{align*}
\lambda\theta^{\prime}
&\equiv \lambda\theta + \lambda \Bceil{ \frac{\mathscr{T}(z)}{4\lambda}-\frac{3}{4}-\theta } + \frac{\lambda - \mathscr{T}(z)}{4} \mod{\mathscr{T}(z)} \\
&\equiv -\lambda \Bfl{ \theta+\frac{3}{4}-\frac{\mathscr{T}(z)}{4\lambda} } + \lambda\left( \theta+\frac{3}{4}-\frac{\mathscr{T}(z)}{4\lambda}\right) - \frac{\lambda}{2} \mod{\mathscr{T}(z)} \\
&= \lambda \left\{ \theta+\frac{3}{4}-\frac{\mathscr{T}(z)}{4\lambda} \right\} - \frac{\lambda}{2},
\end{align*}
where $\{x\}$ denotes the fractional part of $x$. Equivalently, dividing through by $\lambda$, we can write
$$ \theta^{\prime} \equiv \theta+\frac{1}{4}-\frac{\mathscr{T}(z)}{4\lambda} \mod{1}, $$
which completes the proof.
\end{proof}
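The return-time formula and the twist relation are straightforward to implement.
The sketch below (illustrative; $\theta$, $\mathscr{T}(z)$ and $\lambda$ are supplied as
free inputs) evaluates (\ref{eq:tau(z)}) together with the closed form
$\theta^{\prime}=\{\theta+3/4-\mathscr{T}(z)/4\lambda\}-1/2$ obtained in the proof,
and checks that the two are consistent modulo the period.
\begin{verbatim}
# Illustrative check of the twist relation (eq:theta_prime).
import math

def return_time(theta, T, lam):
    # equation (eq:tau(z))
    return 4 * math.ceil(T / (4 * lam) - theta - 0.75) + 1

def theta_prime(theta, T, lam):
    # closed form from the proof: theta' = {theta + 3/4 - T/(4 lam)} - 1/2
    x = theta + 0.75 - T / (4 * lam)
    return (x - math.floor(x)) - 0.5      # representative in [-1/2, 1/2)

theta, T, lam = 0.1, 7.3, 1e-3            # sample values
t = return_time(theta, T, lam)
# lam*theta' must equal lam*theta + t*(lam - T)/4 up to a multiple of T:
k = (lam * theta + t * (lam - T) / 4 - lam * theta_prime(theta, T, lam)) / T
assert abs(k - round(k)) < 1e-6
\end{verbatim}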
It is natural to think of the first return map as a twist map on a cylinder with coordinates $\theta$ and $\mathscr{P}(z)$,
where points of the form $\varphi^{-\lambda/2}(x,x)$ and $\varphi^{\lambda/2}(x,x)$ are
identified since they both lie in the same orbit of $\mathscr{F}_{\lambda}$ (we make this construction explicit later---see section \ref{sec:cylinder_coordinates}).
To understand how this twist map behaves, we need to study how the period function $\mathscr{T}(z)$ varies with $\mathscr{P}(z)$.
If the period function is not constant, the return map is nonlinear.
\subsection*{The period function}
We now produce an explicit expression for the period $\mathscr{T}$ of the Hamiltonian flow.
Recall that $\Pi(z)$ denotes the orbit of $\varphi$ passing through the point $z\in\mathbb{R}^2$.
In this section we adapt this notation, and write $\Pi(\alpha)$ for the polygon on which $\mathscr{P}$ takes the value $\alpha$:
\begin{equation*}
\Pi(\alpha) = \{z\in\mathbb{R}^2 \, : \; \mathscr{P}(z) = \alpha \} \hskip 40pt \alpha\in\mathbb{R}_{\geq 0}.
\end{equation*}
Similarly we overload the notation $\mathscr{T}$, and write $\mathscr{T}(\alpha)$ to denote the period of the flow on the polygon $\Pi(\alpha)$.
Let $e\in\mathscr{E}$ be a critical number, and let $\alpha\in \mathscr{I}^e$, so that $\Pi(\alpha)$ belongs to the polygon class associated with $e$.
Recall the vertex list $V(e)$ associated with this class of polygons, whose first entry, denoted $v_1$, and last entry, denoted $v_k$, are given by equations (\ref{def:v1}) and (\ref{def:vk}), respectively.
Then we have the following expression for the period of the flow $\varphi$.
\begin{proposition} \label{prop:T(alpha)}
Let $\alpha\geq0$, and let $v_1$ and $v_k$ be given as in (\ref{def:v1}) and (\ref{def:vk}).
Then the period $\mathscr{T}(\alpha)$ of the Hamiltonian flow on the polygon $\Pi(\alpha)$ is given by
\begin{equation} \label{eq:cT(alpha)}
\frac{\mathscr{T}(\alpha)}{8} = \frac{P^{-1}(\alpha/2)}{2v_1+1} -2 \sum_{n=v_1+1}^{v_k} \frac{P^{-1}(\alpha - n^2)}{4n^2-1}
\end{equation}
(where if $v_1=v_k$ the sum should be understood to be empty).
\end{proposition}
\begin{proof}
Take $\alpha\geq0$.
By the eight-fold symmetry of the level sets of $\mathscr{P}$, as stated in theorem \ref{thm:Polygons},
it suffices to consider the intersection of the polygon $\Pi(\alpha)$ with the first octant.
The point $(y,y)\in\Pi(\alpha)$ where the polygon intersects the positive half of the symmetry line $\Fix{G}$
satisfies
$$ y = P^{-1}(\alpha/2) \hskip 40pt \fl{y}=v_1, $$
whereas the point $(x,0)\in\Pi(\alpha)$ where the polygon intersects the positive $x$-axis satisfies
$$ x = P^{-1}(\alpha) \hskip 40pt \fl{x}=v_k. $$
We consider the time taken to flow between these two points.
To proceed, we partition the $y$-distance between the two points into a sequence of distances $d(n)$,
and corresponding flow-times $t(n)$. If $v_1=v_k$ the partition consists of a single element;
we simply let $d(v_1) = P^{-1}(\alpha/2)$, and $t(v_1)$ is the time
taken for the flow on $\Pi(\alpha)$ to move between the symmetry lines $x=y$
and $y=0$.
Suppose now that $v_1<v_k$.
Note that the $y$-coordinate of the vertex of $\Pi(\alpha)$ which lies in the first octant and has $x$-coordinate $x=n$ is given by
$$ P^{-1}(\alpha - n^2). $$
(Such vertices do not exist if $v_1=v_k$.)
Then we let $d(v_1)$ denote the $y$-distance between the points where $\Pi(\alpha)$
intersects the symmetry line $x=y$ and the line $x=v_1+1$:
$$ d(v_1) = P^{-1}(\alpha/2) - P^{-1}(\alpha - (v_1+1)^2), $$
and $d(v_k)$ denote the $y$-distance between the points where $\Pi(\alpha)$
intersects the line $x=v_k$ and the symmetry line $y=0$:
$$ d(v_k) = P^{-1}(\alpha - v_k^2). $$
For any $n$ with $v_1+1\leq n \leq v_k-1$,
$d(n)$ is the $y$-distance between the points where $\Pi(\alpha)$
intersects the lines $x=n$ and $x=n+1$:
$$ d(n) = P^{-1}(\alpha - n^2) -P^{-1}(\alpha - (n+1)^2). $$
Similarly, $t(v_1)$ is the time taken for the flow on $\Pi(\alpha)$
to move between the symmetry line $x=y$ and the line $x=v_1+1$, $t(v_k)$
is the time taken to move between the line $x=v_k$ and the symmetry
line $y=0$, and for $v_1+1\leq n \leq v_k-1$, $t(n)$ is the time
taken to flow between the lines $x=n$ and $x=n+1$.
With this notation, and using the symmetry of the flow $\varphi$,
the period $\mathscr{T}(\alpha)$ satisfies
\begin{equation} \label{eq:T(r)_sum_t(n)}
\frac{\mathscr{T}(\alpha)}{8} = \sum_{n=v_1}^{v_k} t(n).
\end{equation}
For any $n\geq 0$, the auxiliary vector field has constant $y$-component
between the lines $x=n$ and $x=n+1$, given by $-(2n+1)$ (see equation (\ref{def:w})).
Hence the times $t(n)$ and the distances $d(n)$ are related by
\begin{equation} \label{eq:t(n)}
t(n) = \frac{d(n)}{2n+1} \hskip 40pt v_1\leq n\leq v_k.
\end{equation}
If $v_1=v_k$ then the result follows from the definition of $d(v_1)$.
If $v_1<v_k$, substituting (\ref{eq:t(n)}) and the definition of the $d(n)$ into (\ref{eq:T(r)_sum_t(n)}) gives:
\begin{align*}
\frac{\mathscr{T}(\alpha)}{8} &= \sum_{n=v_1}^{v_k} \frac{d(n)}{2n+1} \\
&= \frac{P^{-1}(\alpha/2)}{2v_1+1} + \sum_{n=v_1+1}^{v_k} \frac{P^{-1}(\alpha - n^2)}{2n+1}
- \sum_{n=v_1}^{v_k-1} \frac{P^{-1}(\alpha - (n+1)^2)}{2n+1} \\
&= \frac{P^{-1}(\alpha/2)}{2v_1+1} -2 \sum_{n=v_1+1}^{v_k} \frac{P^{-1}(\alpha - n^2)}{4n^2-1},
\end{align*}
as required.
\end{proof}
Now we consider the derivative of the period function. For any integer $n$, the function $P^{-1}$, defined on $\mathbb{R}_{\geq0}$ in equation (\ref{def:Pinv}), satisfies $P^{-1}(n^2) = n$ and is affine on the interval $[n^2,(n+1)^2]$.
Hence $P^{-1}$ is differentiable at every $x$ which is not a perfect square, with:
$$ \frac{dP^{-1}(x)}{dx} = \frac{1}{2\fl{\sqrt{x}}+1}, \hskip 40pt \sqrt{x} \notin\mathbb{Z}. $$
Letting $x=\alpha - n^2$, we see that the derivative exists whenever $\alpha$ cannot be expressed as a sum of two squares.
Considered as a function of $\alpha$, $P^{-1}(\alpha - n^2)$ has constant derivative on the intervals $\mathscr{I}^e$:
$$ \frac{dP^{-1}(\alpha - n^2)}{d\alpha} = \frac{1}{2\fl{\sqrt{e - n^2}}+1} \hskip 40pt \alpha\in\mathscr{I}^e. $$
Thus if $\alpha\in\mathscr{I}^e$ is non-critical, then $\mathscr{T}(\alpha)$ is differentiable and
\begin{equation} \label{eq:Tprime(alpha)}
\frac{\mathscr{T}^{\prime}(\alpha)}{4} = \frac{1}{(2v_1+1)^2} -4 \sum_{n=v_1+1}^{v_k} \frac{1}{(4n^2-1)(2\fl{\sqrt{e - n^2}}+1)},
\end{equation}
which is a function of $e$ only. Hence the period function $\mathscr{T}$ is piecewise-affine, with constant derivative on each of the intervals $\mathscr{I}^e$.
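For concreteness, (\ref{eq:cT(alpha)}) is easy to evaluate numerically.
In the sketch below, $P^{-1}$ is reconstructed from the two properties just stated
(the defining formula (\ref{def:Pinv}) is not reproduced here), and $v_1$, $v_k$
are read off as the integer parts of the crossings with the lines $x=y$ and $y=0$,
as in the proof of proposition \ref{prop:T(alpha)}.
\begin{verbatim}
# Illustrative evaluation of T(alpha) from (eq:cT(alpha)).  P^{-1} is
# reconstructed from the properties stated above: P^{-1}(n^2) = n, and
# P^{-1} is affine on each interval [n^2, (n+1)^2].
import math

def P_inv(x):
    n = math.floor(math.sqrt(x))
    return n + (x - n * n) / (2 * n + 1)

def period(alpha):
    """T(alpha) for non-critical alpha > 0."""
    v1 = math.floor(P_inv(alpha / 2))  # integer part of the crossing of x = y
    vk = math.floor(P_inv(alpha))      # integer part of the crossing of y = 0
    s = sum(P_inv(alpha - n * n) / (4 * n * n - 1)
            for n in range(v1 + 1, vk + 1))
    return 8 * (P_inv(alpha / 2) / (2 * v1 + 1) - 2 * s)

# for 0 < alpha < 1 the sum is empty and the formula gives T(alpha) = 4*alpha
assert abs(period(0.5) - 2.0) < 1e-12
\end{verbatim}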
\section{The limits $\lambda\to \pm1$}\label{sec:LimitsPm1}
The limit $\lambda\to 0$ is not the only limit which describes an
approach to an exact rotation (where the round-off has no effect).
The limits $\lambda\to \pm 1$, corresponding to $\nu\to 1/6,1/3$,
describe the approach to the cases where $F^6=\mathrm{id}$ and $F^3=\mathrm{id}$, respectively.
It is also possible to describe these limiting dynamics with a piecewise-affine Hamiltonian,
which we introduce briefly here\footnote{The limit $\lambda\to 1$ has also been investigated in \cite{Siu}.}.
We analyse the limits $\lambda\to \pm 1$ by defining $\delta = \lambda \mp 1$
and letting $\delta\to 0$. Then the appropriate rescaling of $F$ is given by
the map
\begin{equation*}
F_{\delta}: (\delta\mathbb{Z})^2 \to (\delta\mathbb{Z})^2
\hskip 40pt
F_{\delta}(z)=\delta F(z/\delta)
\end{equation*}
(as for $\lambda$, we assume that $\delta>0$).
Correspondingly, the discrete vector fields $\mathbf{v}_{\pm}$ become
\begin{equation*}
\mathbf{v}_{\pm}: \; (\delta\mathbb{Z})^2 \to (\delta\mathbb{Z})^2
\hskip 40pt
\mathbf{v}_{+}(z) = F_{\delta}^6(z)-z
\hskip 40pt
\mathbf{v}_{-}(z) = F_{\delta}^3(z)-z,
\end{equation*}
and the auxiliary vector fields are given by
\begin{align*}
\mathbf{w}_{+}(x,y)=2(\fl{y} -\fl{x-y},-(\fl{x} -\fl{y-x})), \\
\mathbf{w}_{-}(x,y)=(\fl{y}+\fl{x+y}+1,-(\fl{x}+\fl{x+y}+1)).
\end{align*}
This time, $\mathbf{w}_{\pm}$ are constant on the collection of triangles
\begin{align*}
B_{m,n} &= \{ (x,y)\in\mathbb{R}^2 \, : \; \fl{ x } =m, \; \fl{ y } = n, \; \fl{ x \mp y } = m \mp n\}, \\
T_{m,n} &= \{ (x,y)\in\mathbb{R}^2 \, : \; \fl{ x } =m, \; \fl{ y } = n, \; \fl{ x \mp y } = m \mp n \mp 1 \},
\end{align*}
where $m,n\in\mathbb{Z}$.
\begin{figure}[t]
\centering
\begin{minipage}{7cm}
\centering
\includegraphics[scale=0.35]{Graphics/Orbits_1_24} \\
(a) $\; \lambda=1+1/24$ \\
\end{minipage}
\quad
\begin{minipage}{7cm}
\centering
\includegraphics[scale=0.35]{Graphics/Orbits_1_48} \\
(b) $\; \lambda=1+1/48$ \\
\end{minipage}
\caption{A selection of periodic orbits of the rescaled map $F_{\delta}$ in the limit $\lambda\to 1$,
where $\nu\to 1/6$ (cf. the $\lambda\to 0$ case of figure \ref{fig:PolygonalOrbits}, page \pageref{fig:PolygonalOrbits}).
The grey lines show the discontinuity set $\Delta$.}
\label{fig:PolygonalOrbits2}
\end{figure}
As in proposition \ref{prop:mu_1}, if we ignore a subset of the lattice $(\delta\mathbb{Z})^2$ of zero density,
then the vector fields satisfy $\mathbf{v}_{\pm}(z)=\delta\mathbf{w}_{\pm}(z)$.
Recall the piecewise-affine function $P$ defined in (\ref{def:P}).
The Hamiltonians corresponding to the limits $\lambda\to \pm 1$
are given by
\begin{equation*}
\mathscr{P}: \; \mathbb{R}^2 \; \to \mathbb{R}
\hskip 40pt
\mathscr{P}(x,y) = \left(\frac{1}{2}\right)^{(1\mp1)/2} \left( P(x)+P(x\mp y) + P(y) \right).
\end{equation*}
These Hamiltonians are again piecewise-affine and differentiable
in $\mathbb{R}^2\setminus \Delta$, where $\Delta$ is the set of lines given by
\begin{equation*}
\Delta=\{(x,y)\in\mathbb{R}^2 \, : \; (x-\fl{ x})(y-\fl{ y})(x\mp y - \fl{ x\mp y})=0\},
\end{equation*}
the boundaries of the triangles $B_{m,n}$ and $T_{m,n}$.
One can easily verify that the Hamiltonian vector fields associated with $\mathscr{P}$
are equal to the auxiliary vector fields $\mathbf{w}_{\pm}$ wherever they are defined.
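This verification is also easy to delegate to the computer. The sketch below
(illustrative; it treats the $\lambda\to 1$ case only) reconstructs $P$ from the
properties of $P^{-1}$ used in the previous section, so that $P$ is affine on each
interval $[n,n+1]$ with $P(n)=n^2$, and compares the Hamiltonian vector field of
$\mathscr{P}$ with $\mathbf{w}_{+}$ by central differences away from $\Delta$.
\begin{verbatim}
# Illustrative check (lambda -> 1): the Hamiltonian field (dP/dy, -dP/dx)
# of script_P agrees with w_+ away from Delta.  P is reconstructed from
# the properties of P^{-1}: affine on [n, n+1] with P(n) = n^2.
import math, random

def P(x):
    n = math.floor(x)
    return (2 * n + 1) * x - n * (n + 1)

def script_P(x, y):                   # the lambda -> 1 Hamiltonian
    return P(x) + P(x - y) + P(y)

def w_plus(x, y):
    return (2 * (math.floor(y) - math.floor(x - y)),
            -2 * (math.floor(x) - math.floor(y - x)))

def gap(t):                           # distance of t from the nearest integer
    return min(t - math.floor(t), math.floor(t) + 1 - t)

h = 1e-6
for _ in range(1000):
    x, y = 4 * random.random(), 4 * random.random()
    if min(gap(x), gap(y), gap(x - y)) < 2 * h:
        continue                      # too close to Delta for differencing
    field = ((script_P(x, y + h) - script_P(x, y - h)) / (2 * h),
             -(script_P(x + h, y) - script_P(x - h, y)) / (2 * h))
    wx, wy = w_plus(x, y)
    assert abs(field[0] - wx) < 1e-5 and abs(field[1] - wy) < 1e-5
\end{verbatim}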
\chapter{Introduction} \label{chap:Introduction}
In this thesis we study the family of maps given by
\begin{equation} \label{def:F}
F:\, \mathbb{Z}^2 \to \mathbb{Z}^2 \hskip 20pt
(x,y)\,\mapsto\,(\fl{\lambda x} - y, \,x)
\hskip 20pt |\lambda|<2.
\end{equation}
For each value of the real parameter $\lambda$, the function $F$ is
an invertible map on the lattice of integer points in the plane.
Despite its simplicity, this model displays a rich landscape
of mathematical phenomena, connecting discrete dynamics and arithmetic.
The family (\ref{def:F}) first arose as a model of elliptic motion subject to round-off \cite{Vivaldi94b}.
If we remove the floor function in equation \eqref{def:F},
we obtain the one-parameter family of linear maps of the plane
\begin{equation} \label{def:A}
A:\, \mathbb{R}^2 \to \mathbb{R}^2 \hskip 20pt
(x,y)\,\mapsto\,(\lambda x - y, \,x)
\hskip 20pt \lambda = 2\cos(2\pi\nu),
\end{equation}
which are linearly conjugate to rotation by the angle $2\pi\nu$.
The invariant curves of $A$ are ellipses, given by level sets of the functions
\begin{equation} \label{def:Q_lambda}
\mathscr{Q}_{\lambda}(x,y)=x^2-\lambda x y +y^2,
\end{equation}
and all orbits are either periodic or quasi-periodic,
according to whether the rotation number $\nu$ is rational or irrational, respectively.
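As a quick illustration, the invariance of $\mathscr{Q}_{\lambda}$ is a one-line
computation: expanding $\mathscr{Q}_{\lambda}(\lambda x - y,\, x)$ recovers
$\mathscr{Q}_{\lambda}(x,y)$. The following snippet checks this numerically.
\begin{verbatim}
# Illustrative check that Q_lambda is a first integral of the linear map A.
import math, random

lam = 2 * math.cos(2 * math.pi * 0.15)   # any |lam| < 2 will do

def A(x, y):
    return lam * x - y, x

def Q(x, y):
    return x * x - lam * x * y + y * y

x, y = random.random(), random.random()
assert abs(Q(*A(x, y)) - Q(x, y)) < 1e-12
\end{verbatim}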
The map $F$ is a discretisation of $A$,
obtained by composing it with the piecewise-constant function
\begin{equation} \label{eq:R}
R: \, \mathbb{R}^2 \to \mathbb{Z}^2
\hskip 40pt
R(x,y) = (\fl{x}, \fl{y}),
\end{equation}
where $\fl{\cdot}$ is the floor function---the largest integer not exceeding its argument.
The floor function models the effect of round-off, pushing
the image point to the nearest integer point on
the left\footnote{The choice of round-off scheme is discussed further in section \ref{sec:Round-off}.}.
Thus the model $F$ is an example of a Hamiltonian (i.e., symplectic) map subject to uniform, invertible
round-off, of the style introduced by Rannou \cite{Rannou}.
In this context, we think of \eqref{def:F} as a perturbed Hamiltonian system,
so a natural property to consider is its stability \cite{Vivaldi94b,LowensteinHatjispyrosVivaldi,LowensteinVivaldi98,KouptsovLowensteinVivaldi02,Vivaldi06}.
Since $F$ is invertible, boundedness of orbits is equivalent to periodicity.
\begin{conjecture}[\cite{Vivaldi06}] \label{conj:Periodicity}
For all real $\lambda$ with $|\lambda|<2$, all orbits of $F$ are periodic\footnote{A
general conjecture on the boundedness of discretised Hamiltonian rotations was first
formulated in \cite{Blank94}.}.
\end{conjecture}
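The conjecture is easy to probe experimentally. The sketch below (illustrative;
floating-point evaluation of $\fl{\lambda x}$ is adequate for small orbits, although
a careful experiment would use exact arithmetic) computes orbit periods by brute
force, exploiting the fact that an orbit of the invertible map $F$ is periodic as
soon as it returns to its starting point.
\begin{verbatim}
# Illustrative experiment on the periodicity conjecture: brute-force
# orbit periods of F.
import math

lam = math.sqrt(2)            # one of the eight settled values (eq:Lambdas)

def F(x, y):
    return math.floor(lam * x) - y, x

def period_of(z0, max_iter=10**7):
    z = z0
    for k in range(1, max_iter + 1):
        z = F(*z)
        if z == z0:
            return k
    return None               # undecided within max_iter iterations

print([period_of((x, 0)) for x in range(1, 8)])
\end{verbatim}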
This is where the arithmetic flavour of the family \eqref{def:F} becomes apparent.
A closely related conjecture has been stated in the field of number theory:
the map $F$ appears (with a slightly different round-off scheme) in the
guise of an integer sequence, as part of a problem concerning
shift radix systems \cite{AkiyamaBrunottePethoThuswaldner,AkiyamaBrunottePethoSteiner06}.
Conjecture \ref{conj:Periodicity} holds trivially for the integer parameter values $\lambda=0,\pm1$,
where the map $F$ is of finite order.
Beyond this, the boundedness of all round-off orbits has been proved for only
\emph{eight} values of $\lambda$, which correspond to the rational
values of the rotation number $\nu$ for which $\lambda$ is a quadratic irrational:
\begin{equation}\label{eq:Lambdas}
\lambda=\frac{\pm1\pm\sqrt{5}}{2},\quad \pm\sqrt{2},\quad \pm\sqrt{3}.
\end{equation}
(The denominator of $\nu$ is $5,\,10,\,8$, and $12$, respectively.)
The case $\lambda=(1-\sqrt{5})/2$ was established in
\cite{LowensteinHatjispyrosVivaldi}, with computer assistance.
Similar techniques were used to extend the result to the other parameter
values, but only for a set of initial conditions having full density
\cite{KouptsovLowensteinVivaldi02}.
The conjecture for the eight parameters \eqref{eq:Lambdas} was settled
in \cite{AkiyamaBrunottePethoSteiner08} with an analytical proof.
More recently, Akiyama and Peth\H{o} \cite{AkiyamaPetho} proved that
(\ref{def:F}) has infinitely many periodic orbits for any parameter value.
We shall not make any further progress on conjecture \ref{conj:Periodicity} in this work.
The feature of the parameter values \eqref{eq:Lambdas} which enabled the
resolution of conjecture \ref{conj:Periodicity} in these cases,
is that the map $F$ admits a dense and uniform embedding in
a two-dimensional torus, where the round-off map extends continuously
to a piecewise isometry (which has \emph{zero entropy} and is not ergodic).
The natural density on the lattice $\mathbb{Z}^2$ is carried into the Lebesgue measure,
namely the Haar measure on the torus.
For any other rational value of $\nu$, the parameter $\lambda$ is an algebraic number of higher degree, and
there is a similar embedding in a higher-dimensional torus \cite{LowensteinVivaldi00,BruinLambert};
these systems are still unexplored, even in the cubic case.
\begin{figure}[t]
\centering
\begin{minipage}{7cm}
\centering
\includegraphics[scale=0.35]{Graphics/snowflakesv2} \\
(a) $\; \lambda=\sqrt{2}$, $\nu=1/8$ \\
\end{minipage}
\quad
\begin{minipage}{7cm}
\centering
\includegraphics[scale=0.35]{Graphics/ellipsesv2} \\
(b) $\; \lambda=10/7$, $\nu\approx1/8$ \\
\end{minipage}
\caption{A selection of orbits of $F$ when the parameter $\lambda$ is (a) an algebraic integer and
(b) a rational number. All orbits are periodic, and the period of the orbits shown ranges from $8$ (both cases) to $235$
(rational case) and $511$ (algebraic case).}
\label{fig:FOrbits}
\end{figure}
Irrational values of $\nu$ bring about a different dynamics, and a different
theory. The simplest cases correspond to rational values of $\lambda$:
in particular, to rational numbers whose denominator is the power of a prime $p$.
In this case the map $F$ admits a dense and uniform embedding in the ring
$\mathbb{Z}_p$ of $p$-adic integers \cite{BosioVivaldi}.
The embedded system extends continuously to the composition of a full
shift and an isometry (which has \emph{positive entropy}),
and the natural density on $\mathbb{Z}^2$ is now carried into the Haar measure on $\mathbb{Z}_p$.
This construct was used to prove a central limit theorem for the
departure of the round-off orbits from the unperturbed ones \cite{VivaldiVladimirov}.
This phenomenon injects a probabilistic element in the determination of the
period of the lattice orbits, highlighting the nature of the difficulties
that surround conjecture \ref{conj:Periodicity}.
\medskip
In this work we explore a new parameter regime, and the obvious next step is to
consider the \emph{approach} to a rational rotation number.
We choose the easiest such case---the approach to one of the cases \eqref{eq:Lambdas} seems
excessively complicated---and consider the limit $\lambda\to0$,
corresponding to the rotation number $\nu\to 1/4$.
This is one of five limits (the other limits being $\lambda\to\pm1,\pm2$)
where the dynamics at the limit is trivial because there is no round-off.
What we find is a new natural embedding of $F$, this time into the plane,
and a new dynamical mechanism, namely a discrete-space version of \emph{linked strip maps}:
maps originally introduced in the study of \emph{outer billiards} or \emph{dual billiards} of polygons
(for background, see \cite[Section III]{Tabachnikov}).
This construction was later generalised by Schwartz \cite{Schwartz11}.
We rescale the lattice $\mathbb{Z}^2$ by a factor of $\lambda$---to obtain the map $F_{\lambda}$ of equation
\eqref{def:F_lambda}---then embed it in $\mathbb{R}^2$ (see figure \ref{fig:PolygonalOrbits}).
Now the parameter $\lambda$ controls not only the rotation number $\nu$, but also the lattice spacing.
The limiting behaviour is described by a piecewise-affine Hamiltonian.
The invariant curves of this Hamiltonian are polygons,
and the fourth iterates of $F$ move parallel to the edge vectors of these polygons.
The role of the strip map is to aggregate this locally uniform behaviour
into a sequence of translations: one for each edge.
The perturbation occurs near the vertices.
In this much, the map $F$ bears a strong resemblance to the strip map construction of outer billiards.
The difference in our case is that the number of sides of the invariant polygons
increases with the distance from the origin;
near the origin they are squares, while at infinity they approach circles.
Hence our version of the strip map is composed of an ever increasing number of components,
and results in a perturbation of increasing complexity---a feature which
cannot be achieved in outer billiards of polygons without changing the shape of the billiard.
\begin{figure}[t]
\centering
\begin{minipage}{7cm}
\centering
\includegraphics[scale=0.35]{Graphics/Orbits24v2} \\
(a) $\; \lambda=1/24$ \\
\end{minipage}
\quad
\begin{minipage}{7cm}
\centering
\includegraphics[scale=0.35]{Graphics/Orbits48v2} \\
(b) $\; \lambda=1/48$ \\
\end{minipage}
\caption{A selection of periodic orbits of the rescaled map $F_{\lambda}$, for two small values of
the parameter $\lambda$. The lattice spacing is such that each unit distance (illustrated by the grey lines)
contains $1/\lambda$ lattice points.
Here $\nu\approx 1/4$ but there are no orbits which have period $4$: instead the periods of orbits
cluster around integer multiples of a longer recurrence time (see figure \ref{fig:PeriodFunction}, page \pageref{fig:PeriodFunction}).}
\label{fig:PolygonalOrbits}
\end{figure}
In this regime, we are led to consider the map $F$ as a perturbation of an
integrable Hamiltonian system, but the integrable system is no longer a rotation, and the
perturbation is no longer caused by round-off.
Thus the limit $\lambda\to 0$ is singular.
Furthermore, the integrable system is nonlinear, i.e., its time-advance
map satisfies a twist condition. The parameter $\lambda$
acts as a perturbation parameter, and a discrete version of near-integrable
Hamiltonian dynamics emerges on the lattice when the perturbation is switched on.
If we were considering near-integrable Hamiltonian dynamics on the continuum,
then we would be in the realm of KAM theory
(for background, see \cite[section 6.3]{ArrowsmithPlace}),
according to which a positive fraction of invariant curves, identified
by their rotation number, will survive a sufficiently small smooth perturbation.
In this scenario, the complement of the KAM curves consists of a hierarchical arrangement of
island chains and thin stochastic layers, and
the KAM curves disconnect the space, thereby ensuring the stability of the irregular orbits.
However, the map $F$ is defined on a discrete space, and the perturbation is discontinuous,
so no such general theory applies.
Before we describe our findings in more detail, we set the scene
by outlining previous work on space discretisation,
which consists of a patchwork of loosely connected phenomena,
arising in a variety of different contexts.
In particular, we highlight the occurrence of near-integrable phenomena.
\vfill
\section{Near-integrability in a discrete phase space} \label{sec:discrete}
There are various approaches to space discretisation,
which fall into two broad categories.
\begin{enumerate}[(i)]
\item \emph{Invariant structures}\\
This category comprises maps which preserve some finite or countable (and arithmetically interesting) subset of the phase space.
This includes the restriction of algebraic maps with algebraic parameters to discrete rings or fields, and piecewise-isometries involving rational rotations. In these cases, the dynamics of the original map remain unchanged, but we consider discrete subsets of parameter values and discrete subsets of the phase space.
\item \emph{Round-off}\\
Here we consider maps which are formed from the composition of a map of a continuum with a \emph{round-off function},
which forces the map to preserve a given finite or countable set (typically a lattice).
In this case, the original dynamics are subject to a discontinuous perturbation,
so that the relationship between the dynamics of the discrete system and those of the original system is often unclear.
\end{enumerate}
We are interested in the range of dynamical behaviours that can be observed in a discrete phase space, and how these can be described.
This question manifests itself slightly differently for the two types of discretisation.
In the case of invariant structures, we are typically concerned with which dynamical features \emph{remain} after discretisation:
what mark does the behaviour of the original system leave on that of its discrete counterpart?
In the case of round-off, we are interested in those features which are \emph{created} by discretisation:
how does the behaviour of a dynamical system change when we force it onto a lattice?
In both cases, we can ask: are the features of the original map recovered in the fine-discretisation limit?
In smooth Hamiltonian dynamics, near-integrable behaviour is characterised by the presence of invariant KAM curves,
on which the motion is quasi-periodic, separated by periodic island chains and stochastic layers.
Reproducing the structures of KAM theory in a discrete space is problematic.
On a lattice, quasi-periodic orbits do not exist.
Surrogate KAM surfaces must thus be identified, and their evolution must be
tracked, as the perturbation parameter is varied.
Furthermore, these orbits need not disconnect
the space, so their relevance to stability must be re-assessed.
We introduce the various types of discrete system below.
In each case, we describe examples which will be relevant in what follows,
and discuss the possibility of near-integrable phenomena.
\subsection*{Restriction to discrete rings and fields}
If an algebraic map preserves a subset of the phase space which is a discrete ring or field,
then we can study its dynamics when restricted to this subset.
In the finite case, this is equivalent to studying (subsets of) the periodic orbits of a map.
The dominant example in this class is the family of hyperbolic toral automorphisms (or \emph{cat maps}):
chaotic (Anosov) Hamiltonian maps, whose set of periodic orbits is precisely the set of rational points,
and which preserve lattices of rational points with any given denominator.
The special arithmetic properties of these maps enable a complete classification of the periodic orbits on such a rational lattice,
and there is a wealth of literature on this topic,
including \cite{HannayBerry,PercivalVivaldi,Keating91a,DysonFalk,EspostiIsola,BehrendsFiedler}.
The arithmetic of the denominator limits the number of allowed periods on any given lattice,
and the resulting period distribution function is singular.
Rational restrictions of maps with milder statistical properties have also been considered.
The Casati-Prosen triangle maps \cite{CasatiProsen,HorvatEspostiIsola} are
a family of zero entropy maps of the torus rooted in quantum chaos,
which are conjectured to be mixing for irrational parameters,
and preserve rational lattices for rational parameters.
In this case, the maps have time-reversal symmetry, and
the distribution of periods on such lattices is conjectured to converge
to a smooth distribution in the fine-discretisation limit \cite{NeumarkerRobertsVivaldi}.
We discuss this result further in section \ref{sec:time-reversal}.
\subsection*{Reduction to finite fields}
For an algebraic system it is natural to replace the coordinate field with a finite field,
for instance the field $\mathbb{F}_p$ of integers modulo a prime $p$.
The resulting map has a finite phase space, so all its orbits are (eventually) periodic.
The reduction process dispenses with the topology of the original map, but preserves algebraic properties,
such as symmetries or the presence of an integral.
Consequently, there is no near-integrable regime, and one witnesses a discontinuous
transition from integrable to non-integrable behaviour.
This transition manifests itself probabilistically via a (conjectured) abrupt
change in the asymptotic (large field) distribution of the periods of the orbits,
which can be used as a tool to detect integrability
\cite{RobertsVivaldi03,RobertsJogiaVivaldi,JogiaRobertsVivaldi}.
Similarly the reduction to finite fields can be used to detect
time-reversal symmetry \cite{RobertsVivaldi05,RobertsVivaldi09} (see also section \ref{sec:time-reversal}).
\subsection*{Piecewise-isometries}
Piecewise-isometries are a generalisation of interval exchange transformations to higher dimensions,
in which the phase space (typically the plane or the torus) is partitioned into a finite number of sets,
called \emph{atoms}, and the map acts as a different isometry on each atom. (For background, see \cite{Goetz}.)
It has been shown that all piecewise-isometries have zero entropy \cite{GutkinHaydn,Buzzi}.
Furthermore, for piecewise-isometries involving rational parameters,
the dynamics are discrete in the sense that the phase space features a
countable hierarchy of (eventually) periodic polygons, which move rigidly under the dynamics
(see \cite{AdlerKitchensTresser} and references therein).
In particular, this class of piecewise-isometries include the much-studied piecewise-rotations
of the torus with rational rotation number (see, for example, \cite{Kahng02,KouptsovLowensteinVivaldi02,Kahng04,KouptsovLowensteinVivaldi04,GoetzPoggiaspalla}).
These are the systems in which the discretised rotation $F$ was embedded
in order to settle conjecture \ref{conj:Periodicity} for the parameter values \eqref{eq:Lambdas}.
An example of a family of piecewise-isometries in unbounded phase space is the family of dual billiard maps on polygons \cite[Section III]{Tabachnikov}. When the polygon has rational coordinates, the dynamics are discrete and all orbits are periodic.
As we have already mentioned, the dynamical mechanism which underlies the behaviour of $F$ in the limit $\lambda\to 0$
has much in common with that of outer billiards of polygons.
A near-integrable regime of a kind exists for these maps in the form of \emph{quasirational} polygons,
for which all orbits remain bounded thanks to the existence of bounding invariants called
\emph{necklaces}\footnote{For smooth billiard tables,
the outer billiards map is a twist map admitting KAM curves,
which ensure the boundedness of orbits (see \cite[Section I]{Tabachnikov}).}
\cite{VivaldiShaidenko,Kolodziej,GutkinSimanyi}---however,
in the quasirational case the dynamics are no longer discrete.
Only recently has an unbounded outer billiard orbit been exhibited \cite{Schwartz07}.
\subsection*{Round-off}
One typically thinks of round-off in the context of computer arithmetic,
where real numbers are represented with finite precision.
In this context, it is the relationship between computer-generated orbits and
the true orbits of a dynamical system which is of principal interest.
This issue can be tackled to an extent by \emph{shadowing},
whereby a perturbed orbit (in this case a discretised orbit) of a chaotic map
is guaranteed to be close to an orbit of the unperturbed system
(see \cite[Section 18.1]{KatokHasselblatt},
or \cite{HammelYorkeGrebogi,GrebogiHammelYorkeSauer} for results specific to round-off).
However, shadowing tells us nothing about whether the behaviour of perturbed orbits is typical,
or what happens to orbits over long timescales,
where round-off typically introduces irreversible behaviour
\cite{BinderJensen,BeckRoepstorff,GoraBoyarsky}.
In rare cases, round-off fluctuations act like small-amplitude noise,
and give rise to Gaussian transport;
more commonly, the propagation of round-off error must be described as a deterministic
(as opposed to probabilistic) phenomenon.
A rigorous analysis of round-off in floating-point arithmetic is very difficult:
the set of representable numbers is neither uniform nor arithmetically closed,
hence calculations are performed in a modified arithmetic which is not even associative.
To put the study of round-off on a solid footing,
it is preferable to consider calculations in fixed-point (i.e., integer) arithmetic,
which is closed under ring operations (discounting overflow).
Several authors have used explicit fixed-point approximations of real Hamiltonian maps
in numerical experiments, which have the advantage that iteration can be performed exactly,
and that invertibility can be retained
(see, for example, \cite{Rannou,Karney,EarnTremaine}).
Both Blank \cite{Blank89, Blank94} and Vladimirov \cite{Vladimirov} have presented
theoretical frameworks in which to study the statistical behaviour of round-off.
Blank considers the properties of ensembles of round-off maps with varying discretisation length,
whereas Vladimirov equips a discrete phase space with a measure which can be used to quantify
the deviation of exact and numerical trajectories.
In this work, we are interested in round-off as a dynamical phenomenon in its own right.
Like Rannou, we consider a uniform, invertible discretisation of a Hamiltonian map.
In fact, we consider a discretisation of a rotation---the prototypical integrable Hamiltonian map.
Applying such arithmetically well-behaved round-off to simple linear systems like the rotation
leads to dynamical phenomena which are born of discontinuity.
The model $F$, as introduced in the previous section,
has been studied by several authors from various points of view.
We study a parameter regime in which the behaviour of $F$ can be described as near-integrable.
The only other example of near-integrability in round-off dynamics is a numerical
study of a perturbed twist map \cite{ZhangVivaldi}:
a simpler model which we will return to in the conclusion.
\section{Main results \& outline of the thesis}
Chapter \ref{chap:Preliminaries} provides the reader with some technical background.
We discuss round-off, and justify the choice of round-off scheme employed in the model $F$.
Then we discuss time-reversal symmetry: the map $F$ is \emph{reversible} with respect to
reflection in the line $x=y$, and \emph{symmetric} orbits will play a key role in our analysis.
In chapter \ref{chap:IntegrableLimit} we consider the limit $\lambda\to 0$,
which we refer to as the \emph{integrable limit}.
We describe the orbits closest to the origin, which have a particularly simple form,
and motivate a rescaling of the lattice $\mathbb{Z}^2$ by a factor of $\lambda$.
We introduce a piecewise-affine Hamiltonian function $\mathscr{P}$
(equation \eqref{eq:Hamiltonian}), whose invariant curves are polygons,
representing the limiting foliation of the plane for the rescaled system (see figure \ref{fig:polygon_classes}).
The set of invariant polygons is partitioned by \emph{critical polygons},
which contain $\mathbb{Z}^2$ points, into infinitely many \emph{polygon classes},
which can be characterised arithmetically in terms of sums of squares.
Each polygon class is assigned a symbolic coding, which describes its
path relative to the lattice $\mathbb{Z}^2$.
\begin{thm_nonumber}[Theorem \ref{thm:Polygons}, page \pageref{thm:Polygons}]
The level sets of $\mathscr{P}$ are convex polygons.
The polygon $\mathscr{P}(z)=\alpha$ is critical if and only if $\alpha\in\mathscr{E}$,
where $\mathscr{E}$ is the set of natural numbers which can be written as the sum of two squares:
\begin{displaymath}
\mathscr{E} = \{0,1,2,4,5,8,9,10,13,16,17,\dots \} .
\end{displaymath}
\end{thm_nonumber}
To match Hamiltonian flow and lattice map, we exploit the fact that, for small
$\lambda$, the composite map $F^4$ is close to the identity. After scaling, it is
possible to identify the action of $F^4$ with a time-advance map of the flow
(in the spirit of Takens' theorem \cite[section 6.2.2]{ArrowsmithPlace}).
This time-advance map assumes the role of the unperturbed dynamics.
The two actions agree along the sides of the polygons, but differ in vanishingly
small regions near the vertices. This discrepancy provides the perturbation mechanism.
The period function of $F$ displays a non-trivial clustering
of the periods around integer multiples of a basic recurrence time (see
figure \ref{fig:PeriodFunction}),
and all orbits recur to a small neighbourhood of the symmetry axis $x=y$.
In section \ref{sec:Recurrence}, we define a Poincar\'{e} return map $\Phi$ to reflect this behaviour,
and show that the \emph{return orbits}---the partial orbits iterated up to their recurrence time---shadow
the integrable orbits.
\FloatBarrier
\begin{figure}[t]
\centering
\includegraphics[scale=0.35]{Graphics/polygon_classes2}
\caption{A selection of polygons $\mathscr{P}(x,y)=\alpha$, for values of $\alpha$ in the interval $[0,10]$.
The critical polygons are shown in red: the polygon classes are the annuli bounded between pairs of adjacent critical polygons.
All polygons are symmetric under reflection in the coordinate axes, and in the line $x=y$.}
\label{fig:polygon_classes}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[scale=0.35]{Graphics/PeriodFunction}
\caption{The normalised period function $T_\lambda(z)$ (see section \ref{sec:Recurrence})
for points $z=(x,x)$, and $\lambda=1/5000$.
The vertical lines mark the location of critical polygons, which
pass through lattice points.}
\label{fig:PeriodFunction}
\end{figure}
\vfill
\FloatBarrier
\begin{thm_nonumber}[Theorem \ref{thm:Hausdorff}, page \pageref{thm:Hausdorff}]
For any $w\in\mathbb{R}^2$, let $\Pi(w)$ be the orbit of $w$ under the Hamiltonian flow,
and let $\mathcal{O}(w,\lambda)$ be the return orbit of the lattice point in $(\lambda\Z)^2$ associated with $w$.
Then
\begin{displaymath}
\lim_{\lambda\to 0} d_H
\left(\Pi(w),\mathcal{O}(w,\lambda)\right)=0,
\end{displaymath}
where $d_H$ is the Hausdorff distance on $\mathbb{R}^2$.
\end{thm_nonumber}
In section \ref{sec:IntegrableReturnMap},
we calculate the period of the Hamiltonian flow as a function of the value of the Hamiltonian.
This leads us to conclude that the unperturbed return map is nonlinear, and that
the nonlinearity is piecewise-affine on the polygon classes.
In section \ref{sec:LimitsPm1}, we show briefly that a similar
construction applies in the limits $\lambda\to\pm1$,
where the rotation number $\nu$ approaches $1/6$ and $1/3$, respectively.
\medskip
In chapter \ref{chap:PerturbedDynamics} we consider the behaviour of the perturbed return map $\Phi$.
The lowest branch of the period function of $F$ comprises the \emph{minimal orbits:}
the fixed points of the return map,
for which the effects of the perturbation cancel out.
These are the orbits of the integrable system that survive the perturbation,
and we treat them as analogues of KAM tori.
The other orbits mimic the divided phase space structure of a near-integrable
area-preserving map, in embryonic form near the origin, and with increasing
complexity at larger amplitudes.
The main result of this chapter is that for infinitely many polygon classes,
the symmetric minimal orbits occupy a positive density of the phase space.
The restriction to infinitely many classes---as opposed to all classes---stems
from a coprimality condition we impose in order to achieve convergence
of the density.
Matching the orbits of the perturbed return map to the polygon classes of the
integrable flow is a delicate procedure,
requiring the exclusion of certain anomalous domains, and establishing
that the size of these domains is negligible in the limit.
We do this in section \ref{sec:RegularDomains}.
Then, in section \ref{sec:MainTheorems},
we state the chapter's main theorems.
The first theorem states that, within each polygon class, the return map
commutes with translations by the elements of a two-dimensional lattice,
which is independent of $\lambda$ up to scale, provided that $\lambda$ is sufficiently small
(see figure \ref{fig:lattice_Le}, page \pageref{fig:lattice_Le}).
To avoid excessive notational overhead, the statement of the theorem below has been somewhat simplified.
\begin{thm_nonumber}[Theorem \ref{thm:Phi_equivariance}, page \pageref{thm:Phi_equivariance}]
Associated with each polygon class, indexed by $e\in\mathscr{E}$, there is an integer lattice $\mathbb{L}^e\subset\mathbb{Z}^2$,
such that, over a suitable domain, and for all sufficiently small $\lambda$,
the return map $\Phi$ is equivariant under the group of
translations generated by $\lambda\mathbb{L}^e$:
\begin{displaymath}
\forall l\in\mathbb{L}^e:
\hskip 10pt
\Phi(z + \lambda l) = \Phi(z) + \lambda l .
\end{displaymath}
\end{thm_nonumber}
The second theorem states that, if the symbolic coding of a polygonal class
satisfies certain coprimality conditions, then the density
of symmetric minimal orbits among all orbits becomes
independent of $\lambda$, provided that $\lambda$ is small enough.
This density is a positive rational number, which is computed explicitly.
As the number of sides of the polygons increases to infinity, the density
tends to zero.
An immediate corollary of theorem \ref{thm:minimal_densities} is the existence of a positive
lower bound for the density of minimal orbits---symmetric or otherwise.
\begin{thm_nonumber}[Theorem \ref{thm:minimal_densities}, page \pageref{thm:minimal_densities}]
There is an infinite sequence of polygon classes, indexed by $e\in\mathscr{E}$,
such that within each polygon class, and for all sufficiently small $\lambda$,
the number of symmetric fixed points of $\Phi$ modulo $\lambda\mathbb{L}^e$ is non-zero and independent of $\lambda$.
Thus the asymptotic density of symmetric fixed points within these polygon classes converges and is positive.
\end{thm_nonumber}
The analysis of the return map requires tracking the return orbits, and this is done
through repeated applications of a \emph{strip map}, an acceleration device which
exploits local integrability. This is a variant of a construct introduced for outer
billiards of polygons (see \cite[chapter 7]{Schwartz}, and references therein),
although in our case the strip map has an increasing number of components,
providing a dynamics of increasing complexity.
We introduce the strip map in section \ref{sec:StripMap}, and establish
some of its properties.
There is a symbolic coding associated with the strip map; its cylinder sets
in the return domain form the congruence classes of the local lattice structure.
\hl{This fact gives a `non-Archimedean' character to the dynamics.
We prove the correspondence between the symbolic coding and the return map}
in section \ref{sec:lattice}, and this result leads to the conclusion of the proof of the main theorems.
\medskip
In chapter \ref{chap:Apeirogon} we explore the behaviour of the unperturbed return map at infinity.
A change of coordinates shows that the unperturbed return map is a linear twist map on the cylinder.
We study the asymptotics of the period function of the integrable flow,
and find that it undergoes damped oscillations as the distance from the origin increases.
A suitable scaling uncovers a limiting functional form
which has a singularity in its derivative.
This leads to a discontinuity in the asymptotic behaviour of the twist map.
Typically the twist converges to zero,
and hence the unperturbed return map converges (non-uniformly) to the identity.
However, for the polygon classes which correspond to perfect squares
(recall that the polygon classes are classified by the sums of squares),
the twist converges to a non-zero value.
\hl{Again we state a somewhat simplified version of the result.}
\begin{figure}[t]
\centering
\includegraphics[scale=0.17]{Graphics/PlotPhi_e=40000v3} \\
(a) $e=40000=200^2$ \\
~ \\
\includegraphics[scale=0.17]{Graphics/PlotPhi_e=40309v3} \\
(b) $e=40309\approx200.8^2$
\caption{\hl{Two pixel plots showing a large number of symmetric orbits of the return map $\Phi$
in the cylindrical coordinates $(\theta,\rho)\in\mathbb{S}^1 \times \mathbb{R}$,
where the $\rho$-axis is the symmetry axis, and the $\theta$-axis is a fixed line of the twist dynamics.
The resolution is such that the width of the cylinder (the $\theta$ direction) consists of approximately $280$ lattice sites.
In both cases, the orbits plotted occupy almost half of the region of phase space pictured.
The stark contrast between the two plots is caused by the difference in the twist $K(e)$:
in plot (a) $K(e)\approx 4$, whereas in plot (b) $K(e)\approx -0.1$.
The values of $\lambda$ used are (a) $\lambda\approx 7\times 10^{-9}$ and (b) $\lambda\approx 4\times 10^{-8}$.
In figure (b), the primary resonance at the origin is clearly visible, whereas a period $2$
resonance, which occurs at $\rho=1/(2K(e))$, is seen to the left of the plot.}}
\label{fig:resonance}
\end{figure}
\begin{thm_nonumber}[Theorem \ref{thm:Omega_e}, page \pageref{thm:Omega_e} \& Proposition \ref{prop:Tprime_asymptotics}, page \pageref{prop:Tprime_asymptotics}]
Associated with each polygon class, indexed by $e\in\mathscr{E}$, there is a change of coordinates
which conjugates the unperturbed return map to the linear twist map $\Omega^e$, given by
\begin{displaymath}
\Omega^e : \mathbb{S}^1 \times \mathbb{R} \to \mathbb{S}^1 \times \mathbb{R}
\hskip 40pt
\Omega^e(\theta,\rho) = \left( \theta + K(e)\rho , \rho \right),
\end{displaymath}
where $K(e)$ is the twist.
Furthermore, as $e\to\infty$, the limiting behaviour of $K(e)$ is singular:
\begin{displaymath}
K(e) \to \left\{ \begin{array}{ll} 4 \quad & \sqrt{e}\in\mathbb{N} \\ 0 \quad & \mbox{otherwise.} \end{array}\right.
\end{displaymath}
\end{thm_nonumber}
Finally, in chapter \ref{chap:PerturbedAsymptotics}, we study the perturbed dynamics at infinity.
The contents of this chapter are based on extensive numerical experiments, tracking large orbits of
$F$ in integer arithmetic.
We study the phase portrait of the perturbed return map.
In the cases where the twist of the unperturbed return map converges to zero,
the form of the perturbation is laid bare, and we find delicate discrete resonance structures (see figure \ref{fig:resonance}(b)).
However, in the cases where the twist persists, the phase portrait is featureless and uniform
over length scales comparable with the strip's width (see figure \ref{fig:resonance}(a)).
This local uniformity allows us to compute the period distribution
function within these polygon classes numerically, and show that it is
consistent with the period statistics of a random reversible map.
\begin{obs_nonumber}[Observation \ref{obs:De}, page \pageref{obs:De}]
As $e\to\infty$ and $K(e)\to 4$, i.e., on the subsequence of perfect squares,
the distribution of periods among orbits in each polygon class converges to a limiting distribution.
This limiting distribution corresponds to a random reversible map in a discrete phase space of diverging cardinality.
\end{obs_nonumber}
\medskip
We finish with some concluding remarks and open questions.
\chapter{The perturbed dynamics at infinity} \label{chap:PerturbedAsymptotics} \label{CHAP:PERTURBEDASYMPTOTICS}
Now we revert to the perturbed dynamics, and the return map $\Phi$ defined in section \ref{sec:Recurrence}.
In chapter \ref{chap:PerturbedDynamics}, we showed that in the integrable limit $\lambda\to 0$,
the map $\Phi$ has a natural finite structure over $X^e$ for all $e\in\mathscr{E}$:
it is equivariant under the group of translations generated by the lattice $\lambda\mathbb{L}^e$
(theorem \ref{thm:Phi_equivariance}, page \pageref{thm:Phi_equivariance}).
Furthermore, in the previous chapter, we saw that under a suitable change of coordinates,
the unperturbed motion corresponds to a linear twist map $\Omega^e$ on the cylinder
(theorem \ref{thm:Omega_e}, page \pageref{thm:Omega_e}).
The behaviour of the twist of $\Omega^e$ was shown to be singular in the limit $e\to\infty$.
We begin this chapter by reconciling these two features of $\Phi$---the lattice structure and the underlying twist map---and
giving a qualitative description of the dynamics in the limit $e\to\infty$.
We find that, under the aforementioned change of coordinates,
the dynamics are those of a sequence of discrete twist maps with vanishing discretisation length.
In the regime where the underlying twist vanishes in the limit,
the remaining fluctuations result in intricate resonance structures,
reminiscent of the island chains observed in Hamiltonian perturbation theory.
Conversely, in the regime where the twist persists,
there are no discernible phase space features.
In section \ref{sec:pdf} we turn to the period distribution function of $\Phi$.
The reduction of $\Phi$ modulo the sequence of lattices $\lambda\mathbb{L}^e$
provides a natural finite setting in which we can compare the periods of $\Phi$
to those of the random reversible map discussed in theorem \ref{thm:GammaDistribution}
(page \pageref{thm:GammaDistribution}).
However, we find that the reduction is superfluous:
the number of congruence classes of $\mathbb{Z}^2/\,\mathbb{L}^e$ grows much faster than the range of a typical orbit.
Furthermore, in the regime where the dynamics of $\Phi$ are asymptotically uniform,
the period distribution function of $\Phi$ is well approximated by local data,
calculated over a vanishing subset of congruence classes.
We describe an extensive numerical experiment in which we calculate the period distribution function of $\Phi$
for increasing values of $e$.
The results suggest that, as $e\to\infty$,
the distribution of periods approaches the universal distribution $\mathcal{R}(x)$ of equation (\ref{def:R(x)}),
and thus is consistent with random reversible dynamics.
As in the case of the random reversible map, we observe that symmetric periodic orbits dominate.
Throughout this chapter we adopt an informal approach, focussing on qualitative observations and numerical evidence.
\section{Qualitative description and phase plots}
Recall the map $\Omega^e$ (equation (\ref{def:Omega^e})),
which corresponds to the unperturbed return map under the change of coordinates
$\eta^e(\lambda):(x,y)\mapsto(\theta,\rho)$ of equation (\ref{def:rho_theta}).
In the integrable limit $\lambda\to 0$, the image of the return domain
$\mathscr{X}^e$ under $\eta^e$ approaches the unit cylinder $\mathbb{S}^1\times\mathbb{R}$.
The domain $X^e$ of the perturbed return map $\Phi$ is a subset of $\mathscr{X}^e$,
thus we can also apply the change of coordinates $\eta^e$ to $X^e$.
Using the definition (\ref{def:Xe}) of $X^e$, we see that its image under $\eta^e$
approaches the following set\footnote{Recall that in the definition of $\eta^e$,
the preimage of the origin is some fixed point $z_0$ of the unperturbed dynamics.
In the discrete case we round $z_0$ onto the lattice via the function $R_{\lambda}$ of equation (\ref{eq:R_lambda}).
The choice of the fixed point $z_0$ will have no bearing on the qualitative or statistical properties of the dynamics.}
(see figure \ref{fig:Lattice2}):
\begin{equation} \label{eq:rho_theta_lattice}
\left\{ \frac{1}{2(2v_1+1)} \, (i,i+2j) \,: \; i,j\in\mathbb{Z}, \; -(2v_1+1)\leq i < 2v_1+1 \right\} \subset \mathbb{S}^1\times\mathbb{R},
\end{equation}
where $v_1$ of equation (\ref{def:v1}) is the integer part of $\sqrt{e/2}$.
(The image of the point $\lambda(x,y)$ corresponds to $i=x-y$ and $j=y-x_0$.)
This set is a rotated square lattice in the unit cylinder,
\hl{where the identification of points modulo $\langle \lambda\mathbf{w}_{v_1,v_1}\rangle$
in $(x,y)$ coordinates is reflected in the periodicity of the angular coordinate $\theta$.
Adjacent lattice points are separated by a distance of $1/\left(\sqrt{2}\,(2v_1+1)\right)$,
so that $e\to\infty$ corresponds to the continuum limit. }
We think of the dynamics of $\Phi$ as taking place in the $(\theta,\rho)$ coordinates,
and of $\Phi$ as a discrete version of $\Omega^e$.
\begin{figure}[h]
\centering
\includegraphics[scale=0.8]{TikZ/Lattice2}
\caption{The image of the lattice $(\lambda\Z)^2$ under the change of coordinates $\eta^e(\lambda)$.
There are $2v_1+1$ lattice points per unit length in each of the coordinate directions.}
\label{fig:Lattice2}
\end{figure}
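For concreteness, the following Python fragment (an illustrative sketch of ours, not the code used in the experiments) enumerates the image lattice (\ref{eq:rho_theta_lattice}) for a given critical number $e$; the function name and the truncation of the $\rho$-range are ours.
\begin{verbatim}
import math

def cylinder_lattice(e, j_range=range(-4, 5)):
    # Points (theta, rho) of the rotated square lattice of
    # equation (eq:rho_theta_lattice): v1 is the integer part
    # of sqrt(e/2), and theta is reduced modulo 1.
    v1 = math.isqrt(e // 2)
    s = 2 * v1 + 1
    return [((i / (2 * s)) % 1.0, (i + 2 * j) / (2 * s))
            for i in range(-s, s) for j in j_range]

pts = cylinder_lattice(9)   # e = 9: v1 = 2, s = 5
# adjacent points differ by (1,1)/(2s), i.e., by a distance
# of 1/(sqrt(2)(2 v1 + 1)), as noted above
\end{verbatim}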
The map $\Omega^e$ is a linear twist map, with characteristic length scale $\bar{\rho}$ in the $\rho$-direction
(see equation (\ref{eq:rho_bar})).
We compare this to the characteristic length scale of $\mathbb{L}^e$:
the lattice of section \ref{sec:MainTheorems} which characterises the symmetry properties of $\Phi$.
In particular, $\Phi$ is invariant under translation by the vector $\lambda\mathbf{L}$ of (\ref{def:L}) which,
in the $(\theta,\rho)$ coordinates, corresponds to the translation
\begin{equation} \label{eq:rho_tilde}
\rho \mapsto \rho + \tilde{\rho} \hskip 40pt \tilde{\rho} = \frac{q(e)}{(2v_1+1)^2}.
\end{equation}
A careful inspection of the definitions (\ref{eq:q}) of $q$ and (\ref{eq:Tprime(alpha)})
of $\mathscr{T}^{\prime}(e)$ confirms that $\tilde{\rho}$ is an integer multiple of $\bar{\rho}$,
and hence that the group of symmetries of $\Phi$ generated by this translation forms a
subgroup of the group of symmetries of $\Omega^e$ generated by the translation (\ref{eq:rho_bar}).
\begin{proposition}
For $e\in\mathscr{E}$, let $\bar{\rho}$ be the periodicity of $\Omega^e$ in the $\rho$-direction, as given by (\ref{eq:rho_bar}),
and let $\tilde{\rho}$ be the corresponding periodicity of $\Phi$ on $\eta^e(X^e)$, as given by (\ref{eq:rho_tilde}).
Then $\tilde{\rho}$ is an integer multiple of $\bar{\rho}$.
\end{proposition}
\begin{proof}
Let $e\in\mathscr{E}$ and $\alpha\in\mathscr{I}^e$,
so that $\mathscr{T}^{\prime}(e)=\mathscr{T}^{\prime}(\alpha)$, and
$\Pi(\alpha)$ belongs to the polygon class associated with $e$.
By (\ref{eq:rho_tilde}), $\tilde{\rho}$ is given by
$$ \tilde{\rho} = \frac{q(e)}{(2v_1+1)^2}. $$
The integer $q(e)$, given by (\ref{eq:q}), is the lowest common multiple of $(2v_1+1)^2$ and
the sequence of factors $(2v_j+1)(2v_{j+1}+1)$, where $v_j$ and $v_{j+1}$ are consecutive
distinct vertex types of the vertex list $V(e)$.
Combining this with the definition (\ref{eq:rho_bar}) of $\bar{\rho}$
and the formula (\ref{eq:Tprime(alpha)}) for $\mathscr{T}^{\prime}(\alpha)$,
we have that the ratio $\tilde{\rho}/\bar{\rho}$ is given by
$$ \frac{\tilde{\rho}}{\bar{\rho}} = -\frac{q(e)}{2} \, \mathscr{T}^{\prime}(e)
= -2q(e) \left( \frac{1}{(2v_1+1)^2} -4 \sum_{n=v_1+1}^{v_k} \frac{1}{(4n^2-1)(2\fl{\sqrt{e - n^2}}+1)} \right). $$
We claim that the denominator of every term in the bracketed sum divides $q(e)$.
To see that our claim holds, we note first that $(2v_1+1)^2$ divides $q(e)$ by construction.
Then, for every $n$ in the range $v_1+1\leq n \leq v_k$, we can write
$$ (4n^2-1)(2\fl{\sqrt{e - n^2}}+1) = (2n+1)(2\fl{\sqrt{e - n^2}}+1)(2n-1). $$
We must show that this product divides $q(e)$ for each $n$.
The first factor in this product divides $q(e)$, since for each $n$
there is at least one vertex $(x,v)$ of $\Pi(\alpha)$ in the first octant
which has vertex type $n$, i.e., with $\fl{x}=n$ and $v\in\mathbb{Z}$.
Let $v$ be the maximal integer for which this occurs.
Now consider the vertex of $\Pi(\alpha)$ which occurs prior to $(x,v)$.
This vertex has coordinates $(n,y)$ and type $v=\fl{y}$ (see figure \ref{fig:ConsecutiveVertices}),
where the properties (\ref{eq:SqrtP}) of $P$ give us that
$$ \alpha = n^2 + P(y)\in\mathscr{I}^e \hskip 20pt \Rightarrow \hskip 20pt v = \fl{\sqrt{\alpha-n^2}} = \fl{\sqrt{e-n^2}}. $$
Consequently $n$ and $\fl{\sqrt{e-n^2}}$ are consecutive distinct vertex types,
and the product
$$ (2n+1)(2\fl{\sqrt{e - n^2}}+1) $$
divides $q(e)$ by construction.
\begin{figure}[!h]
\centering
\includegraphics[scale=1]{TikZ/ConsecutiveVertices}
\caption{Three consecutive vertices of the polygon $\Pi(\alpha)$.}
\label{fig:ConsecutiveVertices}
\end{figure}
It remains to show that the product $(2\fl{\sqrt{e - n^2}}+1)(2n-1)$
divides $q(e)$: then the claim follows from the fact that $(2n+1)$ and $(2n-1)$
are consecutive odd numbers, and hence are coprime.
There are two cases to consider.
If $(n,y)$ is the first vertex, then $v=n-1=v_1$, and
$$ (2\fl{\sqrt{e - n^2}}+1)(2n-1) = (2v_1+1)^2, $$
which we have already seen divides $q(e)$.
If $(n,y)$ is not the first vertex, then the vertex prior to this must be of type $n-1$.
Thus $n-1$ and $\fl{\sqrt{e-n^2}}$ are also consecutive distinct vertex types,
and the proof is complete.
\end{proof}
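The divisibility argument above can be verified mechanically in exact rational arithmetic. In the following Python sketch (names and structure are ours) we do not reconstruct the vertex list; instead we substitute for $q(e)$ the least common multiple of all denominators appearing in the bracketed sum, each of which divides $q(e)$ by the proposition. The computation is therefore a consistency check of the formula for $\tilde{\rho}/\bar{\rho}$, not an independent proof.
\begin{verbatim}
from fractions import Fraction
from math import isqrt, gcd

def ratio_check(e):
    # Evaluate tilde_rho / bar_rho exactly, with q replaced by the
    # lcm of the denominators of the bracketed sum.
    v1, vk = isqrt(e // 2), isqrt(e)   # first and last vertex types
    dens = [(2 * v1 + 1) ** 2]
    dens += [(4 * n * n - 1) * (2 * isqrt(e - n * n) + 1)
             for n in range(v1 + 1, vk + 1)]
    q = 1
    for d in dens:
        q = q * d // gcd(q, d)
    s = Fraction(1, dens[0]) - 4 * sum(Fraction(1, d) for d in dens[1:])
    ratio = -2 * q * s
    assert ratio.denominator == 1   # the ratio is an integer
    return ratio
\end{verbatim}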
Thus we see that the symmetry properties of $\Phi$ are consistent with those of $\Omega^e$.
In fact, the characteristic length scale $\tilde{\rho}$ of the symmetry group of $\Phi$
is a diverging multiple of $\bar{\rho}$---see table \ref{table:nu_range} (page \pageref{table:nu_range}).
Experimental observations confirm that $\Phi$ can be considered as a
perturbation of $\Omega^e$, whose fluctuations originate from the discretisation.
These fluctuations are small relative to the width of the cylinder as $e\rightarrow\infty$,
do not have any obvious structure, and can perturb the dynamics in both the $\rho$ and $\theta$ directions.
\medskip
Recall the sequence $e(v_k,b)$ of critical numbers defined in (\ref{def:e(vk,b)}).
By corollary \ref{corollary:Omega_asymptotics}, if $b\neq0$, then the sequence of maps $\Omega^{e(v_k,b)}$
converges non-uniformly to the identity as $v_k\to\infty$,
whereas if $b=0$, then $\Omega^{e(v_k,b)}$ converges to the map $(\theta,\rho)\mapsto(\theta+4\rho,\rho)$.
Accordingly, the characteristic length scale $|\bar{\rho}|$ of the twist dynamics is also singular in the limit,
\hl{with $|\bar{\rho}|\to\infty$ or $|\bar{\rho}|\to 1/4$}, for the $b\neq 0$ and $b=0$ cases, respectively
(see equation \eqref{eq:rho_bar_asymptotics}).
We define the \defn{rotation number} $\nu$ of a point on the cylinder
to be its rotation number under the twist map $\Omega^e$, i.e.,
$$ \nu(\theta,\rho) = \frac{\rho}{\bar{\rho}} \mod{1} \hskip 40pt (\theta,\rho)\in\mathbb{S}^1\times\mathbb{R}. $$
Similarly, for a point $z\in X^e$, we write $\nu(z)$ to denote the rotation number
of the corresponding point $\eta^e(z)$ on the cylinder.
Subsets of the cylinder of the form
\begin{equation} \label{eq:fundamental_domain}
\left\{ (\theta,\rho) \,: \; \nu(\theta,\rho) -m \in [-1/2,1/2) \right\}\subset \mathbb{S}^1\times\mathbb{R} \hskip 40pt m\in\mathbb{Z}
\end{equation}
are referred to as \defn{fundamental domains} of the dynamics.
The number $N$ of lattice points of $\eta^e(X^e)$ per fundamental domain varies like
\begin{equation} \label{eq:pts_per_fundamental_domain}
N \sim 2(2v_1+1)^2|\bar{\rho}|
\end{equation}
as $e\rightarrow\infty$.
For a given value of $e$, we expect the dynamics of $\Phi$ to be qualitatively the same
in each fundamental domain, but to vary locally according to the rotation number.
Consequently, in order to observe the global behaviour of $\Phi$,
we need to sample whole fundamental domains.
However, this is made difficult by the divergence of $\bar{\rho}$ for $b\neq 0$.
To better understand the divergence of $\bar{\rho}$, we consider the limiting form
(\ref{eq:T_asymptotics}) of the period function $\mathscr{T}(\alpha)$.
Differentiating the limiting form, neglecting the term $\epsilon(b)$, and taking $v_k$ large,
we expect the piecewise-constant function $\mathscr{T}^{\prime}(\alpha)$ to behave approximately as
$$ \frac{1}{2}(2v_1+1)^2 \mathscr{T}^{\prime}(\alpha(v_k,b)) \approx \frac{2}{\sqrt{v_k}} \left(\sqrt{2b+1}-1/\sqrt{2b}\right)
\hskip 40pt b\neq 0. $$
By (\ref{eq:rho_bar}), this leads us to expect that
$$ \bar{\rho}(\alpha(v_k,b)) \approx \frac{\sqrt{v_k}}{2} \left(1/\sqrt{2b}-\sqrt{2b+1}\right)^{-1} \hskip 40pt b\neq 0. $$
Figure \ref{fig:rho_bar} shows that this rough analysis is valid,
although in both cases the relationship between the two functions is far from uniform.
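The approximate forms above are easily evaluated numerically; the following lines (our own illustration) exhibit the growth of $|\bar{\rho}|$ like $\sqrt{v_k}$, and its blow-up near $b=(-1+\sqrt{5})/4$.
\begin{verbatim}
import math

def rho_bar_approx(vk, b):
    # The approximate form of rho_bar(alpha(vk, b)) derived above, b != 0.
    return (math.sqrt(vk) / 2) / (1 / math.sqrt(2 * b)
                                  - math.sqrt(2 * b + 1))

for b in (0.1, 0.3, 0.8):
    print(b, rho_bar_approx(100, b))
# b = 0.3 lies close to the zero (-1 + sqrt(5))/4 of the approximate
# derivative, where |rho_bar| is large (compare table nu_range(b)).
\end{verbatim}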
\begin{figure}[!h]
\centering
\begin{minipage}{7cm}
\centering
\includegraphics[scale=0.35]{Graphics/dT} \\
(a) $\frac{1}{2}(2v_1+1)^2 \mathscr{T}^{\prime}(\alpha)$ \\
\end{minipage}
\quad
\begin{minipage}{7cm}
\centering
\includegraphics[scale=0.35]{Graphics/rho_bar} \\
(b)\hl{ $|\bar{\rho}(\alpha)| = 2/\left((2v_1+1)^2\,|\mathscr{T}^{\prime}(\alpha)|\right)$} \\
\end{minipage}
\caption{A plot of (a) $\frac{1}{2}(2v_1+1)^2 \mathscr{T}^{\prime}(\alpha)$ and (b) $|\bar{\rho}(\alpha)|$ (solid lines)
against $\sqrt{\alpha}$ for $\sqrt{\alpha}\in[100,101)$, i.e., for $v_k=100$ and $b\in[0,1)$.
The dashed lines show the approximate limiting functions given above.}
\label{fig:rho_bar}
\end{figure}
Thus we see that $\bar{\rho}(\alpha(v_k,b))$ typically grows slowly, like $\sqrt{v_k}$,
except near $b=(-1+\sqrt{5})/4\approx0.3$,
where our approximate form for the derivative $\mathscr{T}^{\prime}$ is zero.
We can exploit this behaviour to examine orbits of $\Phi$ in domains $X^e$ where the twist $K(e)$ of $\Omega^e$ is
large ($b=0$---see figure \ref{fig:resonance}(a)),
moderate ($b=0.8$---see figure \ref{fig:resonance}(b))
or almost zero ($b=0.3$---see figure \ref{fig:PrimaryResonances}).
What we see in the case of vanishing twist ($b\neq 0$) is a sea of discrete resonance structures:
the discrete analogue of the island chains of Hamiltonian perturbation theory.
The global structure of the twist map recedes to infinity,
and the dynamics are dominated by the local rotation number.
Although orbits may wander over a significant range in the $\rho$-direction,
the variation of the rotation number $\nu(z)$ along orbits is vanishingly small
(see table \ref{table:nu_range}(b)).
\begin{figure}[h!]
\centering
\includegraphics[scale=0.38]{Graphics/PlotPhi_e=40925_v2} \\
\caption{\hl{A pixel plot of a primary resonance for
$e=40925\approx202.3^2$ and $\lambda\approx 2\times 10^{-8}$.
The plot shows a large number of symmetric orbits of $\Phi$ in the cylindrical coordinates $(\theta,\rho)\in\mathbb{S}^1 \times \mathbb{R}$.
The resolution of the plot is such that the (unit) width of the cylinder consists of approximately $280$ lattice sites.
For this value of $e$, the natural lengthscale $\bar{\rho}$ of the twist dynamics in the $\rho$-direction is large ($\bar{\rho}\approx -150$).} }
\label{fig:PrimaryResonances}
\end{figure}
\begin{table}[h!]
\centering
\small
\begin{tabular}{ c c c c | c|c | c|c |}
\cline{5-8}
& & & & \multicolumn{2}{ |c| }{$\Delta\rho$} & \multicolumn{2}{ |c| }{$\Delta\nu$} \\
\hline
\multicolumn{1}{ |c| }{$e$} & \multicolumn{1}{ |c| }{$v_k$} & \multicolumn{1}{ |c| }{$\bar{\rho}$} & \multicolumn{1}{ |c| }{$\tilde{\rho}$}
& Median & Maximum & Median & Maximum \\
\hline
\multicolumn{1}{|c|}{$10\,000$} & \multicolumn{1}{|c|}{$100$} & \multicolumn{1}{|c|}{$0.266$} & \multicolumn{1}{|c|}{$9.5\times 10^{71}$}
& $0.18$ & $0.65$ & $0.69$ & $2.5$ \\
\multicolumn{1}{ |c| }{$40\,000$} & \multicolumn{1}{ |c| }{$200$} & \multicolumn{1}{ |c| }{$0.259$} & \multicolumn{1}{ |c| }{$5.1\times 10^{147}$}
& $0.16$ & $0.55$ & $0.63$ & $2.1$ \\
\multicolumn{1}{ |c| }{$160\,000$} & \multicolumn{1}{ |c| }{$400$} & \multicolumn{1}{ |c| }{$0.257$} & \multicolumn{1}{ |c| }{$2.0\times 10^{297}$}
& $0.14$ & $0.48$ & $0.54$ & $1.9$\\
\multicolumn{1}{ |c| }{$640\,000$} & \multicolumn{1}{ |c| }{$800$} & \multicolumn{1}{ |c| }{$0.255$} & \multicolumn{1}{ |c| }{$4.3\times 10^{605}$}
& $0.13$ & $0.49$ & $0.51$ & $1.9$ \\
\hline
\end{tabular}\\[0.2cm]
(a) $b=0$ \\[0.5cm]
\begin{tabular}{ c c c c | c|c | c|c |}
\cline{5-8}
& & & & \multicolumn{2}{ |c| }{$\Delta\rho$} & \multicolumn{2}{ |c| }{$\Delta\nu$} \\
\hline
\multicolumn{1}{ |c| }{$e$} & \multicolumn{1}{ |c| }{$v_k$} & \multicolumn{1}{ |c| }{$\bar{\rho}$} & \multicolumn{1}{ |c| }{$\tilde{\rho}$}
& Median & Maximum & Median & Maximum \\
\hline
\multicolumn{1}{ |c| }{$10\,057$} & \multicolumn{1}{|c|}{$100$} & \multicolumn{1}{ |c| }{$163$} & \multicolumn{1}{ |c| }{$1.4\times 10^{77}$}
& $0.14$ & $2.9$ & $8.7\times 10^{-4}$ & $1.8\times 10^{-2}$ \\
\multicolumn{1}{ |c| }{$40\,113$} & \multicolumn{1}{|c|}{$200$} & \multicolumn{1}{ |c| }{$106$} & \multicolumn{1}{ |c| }{$3.4\times 10^{153}$}
& $0.25$ & $2.0$ & $2.3\times 10^{-3}$ & $1.9\times 10^{-2}$ \\
\multicolumn{1}{ |c| }{$160\,234$} & \multicolumn{1}{|c|}{$400$} & \multicolumn{1}{ |c| }{$4105$} & \multicolumn{1}{ |c| }{$1.8\times 10^{285}$}
& $4.8$ & $8.5$ & $1.2\times 10^{-3}$ & $2.1\times 10^{-3}$ \\
\hline
\end{tabular}\\[0.2cm]
(b) $b=0.3$
\caption{A table showing the values of $\bar{\rho}$ and $\tilde{\rho}$ for various values of $e$,
and the typical range $\Delta\rho$ and $\Delta\nu$ of $\rho(z)$ and $\nu(z)$, respectively, along orbits.
The distribution of the range was calculated according to the fraction of points sampled whose orbit has the given range.}
\label{table:nu_range}
\end{table}
In the $b=0$ case the global structure remains.
Within each fundamental domain there is little scope for resonance to develop
and phase portraits are largely featureless.
The typical variation in the rotation number along orbits is $1/2$ (see table \ref{table:nu_range}(a)),
leading to orbits which typically do not cluster in the $\theta$-direction.
In this case it makes sense to consider the statistical properties of orbits of $\Phi$,
and in the next section we consider the period distribution function.
\section{The period distribution function} \label{sec:pdf}
The lattice structure described in section \ref{sec:MainTheorems},
whereby the dynamics of $\Phi$ on each of the domains $X^e$ is equivariant under
the group of lattice translations generated by $\lambda\mathbb{L}^e$, gives a natural
finite structure on which to define the period distribution function of $\Phi$.
For $e\in\mathscr{E}$ and $z\in\mathbb{Z}^2$, we write $[z]$ to denote the equivalence class of $z$ modulo $\mathbb{L}^e$:
$$ [z] = z+ \mathbb{L}^e. $$
The set of all equivalence classes is denoted $\mathbb{Z}^2/\,\mathbb{L}^e$.
For sufficiently small $\lambda$, and for all equivalence classes $[z]\in\mathbb{Z}^2/\,\mathbb{L}^e$,
$[z]$ has a representative in $X^e$ whose image under $\Phi$ also lies in $X^e$:
$$ \exists \, w\in[z]: \; \lambda w, \Phi(\lambda w)\in X^e. $$
Thus we can let $\Phi$ act on $\mathbb{Z}^2/\,\mathbb{L}^e$ by defining
$$ \Phi([z]) = [\Phi(\lambda w)/\lambda]. $$
By the equivariance of $\Phi$ described in theorem \ref{thm:Phi_equivariance} (page \pageref{thm:Phi_equivariance}),
this action is well defined.
Similarly the reversing symmetry $G^e$ of proposition \ref{prop:Ge}
can be defined over $\mathbb{Z}^2/\,\mathbb{L}^e$.
The set $\mathbb{Z}^2/\,\mathbb{L}^e$ is finite, with size $N$ given by
$$ N = \#\left(\mathbb{Z}^2/\,\mathbb{L}^e\right) = q(e) = (2v_1+1)^2\tilde{\rho} $$
(see equation (\ref{eq:theta_e})), and hence all orbits of $\Phi$ are periodic.
We define the period $T$ of $\Phi$ over $\mathbb{Z}^2/\,\mathbb{L}^e$ in the usual fashion:
$$ T([z]) = \min\{k\in\mathbb{N} \,: \; \Phi^k([z])=[z] \}. $$
Then the period distribution function $\mathscr{D}^e = \mathscr{D}^e(\lambda)$ of $\Phi$ is given as follows (cf.~equation (\ref{def:pdf})):
\begin{displaymath}
\mathscr{D}^e(x) = \frac{1}{N} \, \# \{ [z]\in\mathbb{Z}^2/\,\mathbb{L}^e : \; T([z])\leq \kappa x \},
\end{displaymath}
where $\kappa=\kappa(e,\lambda)$ is the scaling constant given by (cf.~theorem \ref{thm:GammaDistribution}, page \pageref{thm:GammaDistribution})
\begin{equation*}
\kappa = \frac{2N}{g+h} \hskip 40pt g=\#\Fix{G^e} \hskip 40pt h=\#\Fix{(\Phi\circ G^e)} .
\end{equation*}
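In outline, the computation of $\mathscr{D}^e$ is a cycle decomposition of a permutation of a finite set. The Python sketch below is a schematic of ours, with $\Phi$ and $G^e$ supplied as abstract callables on a finite list of phase points; it is not the optimised integer-arithmetic code used for the experiments.
\begin{verbatim}
def period_distribution(points, phi, g):
    # phi: a bijection of `points` (so all orbits are periodic);
    # g: the reversing symmetry.  Returns kappa and D as a callable.
    pts = list(points)
    N = len(pts)
    nfix_g = sum(1 for z in pts if g(z) == z)
    nfix_h = sum(1 for z in pts if phi(g(z)) == z)
    kappa = 2 * N / (nfix_g + nfix_h)
    seen, periods = set(), []
    for z in pts:                 # decompose phi into disjoint cycles
        if z in seen:
            continue
        orbit, w = [], z
        while w not in seen:
            seen.add(w)
            orbit.append(w)
            w = phi(w)
        periods.extend([len(orbit)] * len(orbit))  # one entry per point
    return kappa, lambda x: sum(1 for T in periods if T <= kappa * x) / N
\end{verbatim}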
For $e=v_k^2$, i.e., $b=0$, we wish to investigate whether the dynamics of $\Phi$ are sufficiently
disordered that its period statistics are consistent with the limiting distribution $\mathcal{R}$ of
theorem \ref{thm:GammaDistribution}, which corresponds to random dynamics.
In principle, it is possible to calculate $\mathscr{D}^e$ exactly;
however, the factorially diverging size of $N$ (and hence of $\tilde{\rho}$---see table \ref{table:nu_range}) makes this unfeasible.
In practice, we find we can approximate $\mathscr{D}^e$ by calculating a sequence of local distributions.
\FloatBarrier
\begin{figure}[t]
\centering
\includegraphics[scale=1.1]{Graphics/PeriodDist_vk=800} \\
\caption{A number of the distributions $\mathscr{D}(x)$ calculated for $v_k=800$ (green-blue dashed lines).
The number $m$ refers to the multiple of the characteristic length scale $\bar{\rho}$ sampled.
The solid black line is the limiting distribution $\mathcal{R}(x)$.}
\label{fig:Distribution}
\end{figure}
\begin{table}[b]
\centering
\begin{tabular}{ |c|c|c|c|c|}
\hline
$v_k$ & $m$ & Sample size & $\int (\mathcal{R}-\mathscr{D}) dx$ & Approx. time \\
\hline
$100$ & $32$ & $350\,000$ & $0.03$ & 15sec \\
$200$ & $32$ & $1\,360\,000$ & $0.04$ & 2min \\
$400$ & $32$ & $5\,370\,000$ & $0.06$ & 15min\\
$800$ & $32$ & $21\,200\,000$ & $0.06$ & 2hr \\
$1600$ & $16$ & $42\,900\,000$ & $0.11$ & 11hr \\
\hline
\end{tabular}
\caption{Some data relating to the calculation of distributions $\mathscr{D}$ for various different values of $v_k$,
and their corresponding maximum values of $m$.
For a given value of $m$, nine distinct distributions were calculated:
for three different values of $\lambda$, each with three different values of $z_0$ (recall $z_0=(\eta^e)^{-1}(0,0)$).
Neither the value of $\lambda$ nor $z_0$ was found to have any discernible effect on the distribution.
The sample size is indicative of the number of points sampled during the calculation of each distribution;
similarly for the approximate calculation time.
The integral $\int(\mathcal{R}-\mathscr{D})dx$ was calculated over the interval $[0,16]$ (every distribution calculated satisfies $\mathscr{D}(16)=1$),
and has been averaged over the nine individual distributions calculated.}
\label{table:D_table}
\end{table}
\vfill
\FloatBarrier
Experimental observations show that the reduction of the dynamics modulo $\lambda\mathbb{L}^e$ is unnecessary:
not only are all the orbits we computed already periodic, as conjecture \ref{conj:Periodicity} suggests,
but the number of equivalence classes in $\mathbb{Z}^2/\,\mathbb{L}^e$ grows much faster than the range of any orbit
(compare the characteristic length scale $\tilde{\rho}$ to the typical range $\Delta\rho$ of an orbit
as given in table \ref{table:nu_range}, page \pageref{table:nu_range}).
We observe that the reduction has no effect on the dynamics of $\Phi$ or its period function,
so that the period distribution $\mathscr{D}^e$ is also representative of the dynamics of $\Phi$ on its original domain $X^e$.
In what follows, it will always be the case that the period $T$ of $\Phi$ on $\mathbb{Z}^2/\,\mathbb{L}^e$ is
equal to the period $\tau$ of $\Phi$ on $X^e$:
$$ T([z]) = \tau(z) \hskip 40pt z\in X^e, $$
and in our discussion we assume that this holds for all orbits.
Furthermore, the dynamics of $\Phi$ are sufficiently uniform from one fundamental domain to the next
that we can achieve good approximations to $\mathscr{D}^e$ by sampling the periods in just a small number of such domains,
i.e., over vanishing subsets of $\mathbb{Z}^2/\,\mathbb{L}^e$.
It is this fact that allows us to estimate $\mathscr{D}^e$ with numerical calculations, which we describe in the next section.
\subsection*{Computational investigation}
For $e=v_k^2\in\mathscr{E}$, we wish to calculate a sequence of period distribution functions,
which will serve as approximations to $\mathscr{D}^e$ as $v_k\to\infty$.
The factorially diverging number of equivalence classes of $\mathbb{Z}^2/\,\mathbb{L}^e$
dictates that we must approximate $\mathscr{D}^e$ by sampling a much smaller subset of the phase space.
However, discounting the lattice structure, there are no natural $\Phi$-invariant
subsets of $X^e$. Below, we construct a sequence of $\Phi$-invariant sets which mimic
the fundamental domains defined in (\ref{eq:fundamental_domain})---the natural invariant structures of the twist dynamics.
We consider subsets $A=A(v_k,m,\lambda)$ of $X^e$ of the form
\begin{equation} \label{eq:A_m}
A(v_k,m,\lambda) = \{ z\in X^e \,: \; \nu(z) \in [-1/2,m-1/2) \} \hskip 40pt m\in\mathbb{N}.
\end{equation}
For sufficiently small $\lambda$, the counterpart of $A$ on the cylinder
covers $m$ copies of the fundamental domain of the twist dynamics, so that as $v_k\to\infty$:
$$ \# A \sim 2m(2v_1+1)^2|\bar{\rho}| \to \infty $$
(cf.~equation (\ref{eq:pts_per_fundamental_domain})).
Furthermore, since the length $\bar{\rho}$ is small relative to the length $\tilde{\rho}$ of the lattice $\mathbb{L}^e$,
the set $A$ represents a vanishing subset of the equivalence classes of $\lambda\mathbb{L}^e$:
$$ \frac{\# A}{N} \sim \frac{2m|\bar{\rho}|}{\tilde{\rho}} \to 0. $$
The set $A$ is not invariant under the perturbed dynamics $\Phi$.
Hence we define $\bar{A}$ to be the smallest invariant set which contains $A$:
\begin{equation*}
\bar{A}(v_k,m,\lambda) = \bigcup_{n\in\mathbb{Z}} \Phi^n(A(v_k,m,\lambda)).
\end{equation*}
In what follows, it is assumed that there is a critical parameter value $\lambda_c(v_k,m)$
such that $\bar{A}\subset X^e$ for all $\lambda<\lambda_c$.
We observe that the overspill from $A$ under the map $\Phi$,
i.e., the set $\bar{A}\setminus A$,
is small relative to $A$ as $m\to\infty$ (see figure \ref{fig:NoOfPts}(a)).
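Computationally, $\bar{A}$ is obtained by closing $A$ under the dynamics. A minimal sketch, with $\Phi$ and $\Phi^{-1}$ as abstract callables, reads:
\begin{verbatim}
def invariant_hull(A, phi, phi_inv):
    # Smallest phi-invariant set containing A.  Since all orbits are
    # observed to be periodic, forward iteration alone would suffice;
    # we close in both directions for robustness.
    A_bar = set(A)
    frontier = list(A_bar)
    while frontier:
        z = frontier.pop()
        for w in (phi(z), phi_inv(z)):
            if w not in A_bar:
                A_bar.add(w)
                frontier.append(w)
    return A_bar
\end{verbatim}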
\begin{observation} \label{prop:A_m}
Let $v_k\in\mathbb{N}$. Then for $m\in\mathbb{N}$ and $(\lambda(m))_{m\in\mathbb{N}}$ satisfying $\lambda(m)<\lambda_c(v_k,m)$, we have:
$$ \frac{\#\bar{A}(v_k,m,\lambda(m))}{\# A(v_k,m,\lambda(m))} \to 1 $$
as $m\to\infty$.
\end{observation}
Then we measure the period distribution function $\mathscr{D}=\mathscr{D}(v_k,m,\lambda)$ of $\Phi$ over $\bar{A}$:
\begin{displaymath}
\mathscr{D}(x) = \frac{\# \{ z\in \bar{A} \,: \; \tau(z)\leq \kappa x \}}{\# \bar{A}},
\end{displaymath}
where the scaling constant $\kappa=\kappa(v_k,m,\lambda)$ is given by
\begin{equation} \label{eq:Phikappa_approx}
\kappa = \frac{2\#\bar{A}}{g+h} \hskip 20pt g=\#\left(\Fix{G^e}\cap\bar{A}\right) \hskip 20pt h=\#\left(\Fix{(\Phi\circ G^e)}\cap\bar{A}\right).
\end{equation}
For any $e=v_k^2\in\mathscr{E}$, $m\in\mathbb{N}$ and $\lambda(m)<\lambda_c(v_k,m)$, we have:
$$ (\mathscr{D}(v_k,m,\lambda(m)) - \mathscr{D}^e) \to 0 \hskip 40pt m\to\infty, $$
where $\mathscr{D}^e$ is the period distribution function of $\Phi$ over $\mathbb{Z}^2/\,\mathbb{L}^e$.
To study the behaviour of $\mathscr{D}^e$ as $v_k\to\infty$,
we need to let both $m$ and $v_k$ go to infinity simultaneously.
We do not have sufficient numerical evidence to specify a scheme $m(v_k)$ for
which the convergence
$$ (\mathscr{D}(v_k,m(v_k),\lambda(v_k,m)) - \mathscr{D}^e) \to 0 \hskip 40pt v_k\to\infty $$
holds. However, we do note that small values of $m$
were sufficient in all numerical experiments (see table \ref{table:D_table}),
which suggests that a scheme of the form
$$ m = C(2v_1+1) \hskip 40pt C>0 $$
may be sufficient.
\medskip
Since $\Fix{G^e}$ is the pair of lines $x=y$ and $x-y=-\lambda(2v_1+1)$,
the corresponding set on the cylinder is given by (cf.~(\ref{eq:rho_theta_lattice}))
$$ \eta^e(\Fix{G^e}) = \left\{ \frac{1}{2(2v_1+1)} \, (i,i+2j)\,: \; i\in\{-(2v_1+1),0\}, \; j\in\mathbb{Z} \right\}. $$
Intersecting this with $A$ restricts the index $j$ according to
$$ \frac{i+2j}{2\bar{\rho}(2v_1+1)} \in \left[ -\frac{1}{2}, m-\frac{1}{2} \right) $$
(see equation (\ref{eq:A_m})).
Thus, equating $A$ with $\bar{A}$ in the limit, we have
$$ g \sim \#\left(\Fix{G^e}\cap A\right) \sim 2m\bar{\rho}(2v_1+1) \to \infty $$
as $m,v_k\to\infty$.
The fixed space $\Fix{(\Phi\circ G^e)}$ is the lattice equivalent of the line $\Fix{\mathscr{H}^e}$ of equation (\ref{eq:Fix(cH)}). We have the following experimental observation for the size of $h$ (see figure \ref{fig:NoOfPts}(b)).
\begin{observation} \label{obs:g_h}
Let $v_k,m\in\mathbb{N}$, $\lambda(v_k,m)<\lambda_c(v_k,m)$,
and $g$, $h$ be as in equation (\ref{eq:Phikappa_approx}).
Then as $m,v_k\to\infty$:
$$ g \sim \sqrt{2} h. $$
\end{observation}
\begin{figure}[t]
\centering
\begin{minipage}{7cm}
\centering
\includegraphics[scale=0.55]{Graphics/NoOfPoints_800} \\
(a) $\# \bar{A}/\# A-1$ \\
\end{minipage}
\quad
\begin{minipage}{7cm}
\centering
\includegraphics[scale=0.55]{Graphics/FixGFixH_800} \\
(b) $h/g - 1/\sqrt{2}$ \\
\end{minipage}
\caption{The convergence (a) of the ratio $\# \bar{A}/\# A$ to $1$ and (b) of the ratio $h/g$ to $1/\sqrt{2}$ as $m$ becomes large, for $v_k=800$. The line shows the average value of the relevant ratio among all experiments performed: the error bars indicate its minimum and maximum value. All axes are displayed with a logarithmic scale.}
\label{fig:NoOfPts}
\end{figure}
From this observation, it follows that
$$ \frac{g+h}{\#\bar{A}} \sim \frac{(2+\sqrt{2})}{2(2v_1+1)} \to 0 $$
as $m,v_k\to\infty$, and hence that the quantities $g$ and $h$ satisfy the conditions
(\ref{eq:g_h_conds}) of theorem \ref{thm:GammaDistribution}.
Indeed, we observe that the universal distribution $\mathcal{R}(x)$
is the limiting distribution for $\mathscr{D}$ in the limits $m,v_k\to\infty$
(see figure \ref{fig:Distribution}).
\begin{observation} \label{obs:De}
Let $v_k,m\in\mathbb{N}$ and $\lambda(v_k,m)<\lambda_c(v_k,m)$.
Then as $m,v_k\to\infty$:
$$ \mathscr{D}(v_k,m,\lambda(v_k,m)) \to \mathcal{R}, $$
where $\mathcal{R}$ is the universal distribution of equation \eqref{def:R(x)}.
\end{observation}
Finally we note that, as in theorem \ref{thm:GammaDistribution},
the symmetric orbits of $\Phi$ have full density (see figure \ref{fig:symm_points}).
\begin{observation} \label{obs:symm_points}
Let $v_k,m\in\mathbb{N}$ and $\lambda(v_k,m)<\lambda_c(v_k,m)$.
Furthermore, let $S=S(v_k,m,\lambda)$ be the set of points in $\bar{A}$ whose orbit under $\Phi$ is symmetric:
$$ S = \{ z\in \bar{A} \,: \; \mathcal{O}(z)=G^e(\mathcal{O}(z)) \}. $$
Then $S$ has full density in $\bar{A}(m)$ as $m,v_k\to\infty$:
$$ \frac{\#S(v_k,m,\lambda(v_k,m))}{\#\bar{A}(v_k,m,\lambda(v_k,m))}\to 1. $$
\end{observation}
\begin{figure}[!h]
\centering
\includegraphics[scale=0.55]{Graphics/SymmPoints}
\caption{The quantity $1-\#S/\#\bar{A}$ as $v_k$ becomes large.
The line shows the average value of the relevant ratio over all experiments performed
(including over $m=1,2,4,8,16,32$---unlike the distribution $\mathscr{D}$ of figure \ref{fig:Distribution}, this ratio does not vary significantly with $m$): the error bars indicate its minimum and maximum value.
The axes are displayed with a logarithmic scale.}
\label{fig:symm_points}
\end{figure}
\chapter{The perturbed dynamics}\label{chap:PerturbedDynamics}
In chapter \ref{chap:IntegrableLimit}, the positive real line was partitioned
into the sequence of critical intervals $\mathscr{I}^e$, $e\in\mathscr{E}$;
accordingly, the set $\Gamma$ of critical polygons partitioned the plane into
the sequence of polygon classes (section \ref{sec:Hamiltonian}).
We defined a map $\Phi$,
corresponding to the first return of $F_\lambda$ to a thin strip $X$
placed along the symmetry axis, and showed that the return orbits shadow the integrable orbits (section \ref{sec:Recurrence}).
In this chapter we partition the set $X$ into sub-domains, which play the same
role for the perturbed orbits as the polygon classes for the integrable orbits.
Within each sub-domain, the perturbed orbits have local symmetry properties
which are $\lambda$-independent (up to scale):
$\Phi$ commutes with translations by the elements of a two-dimensional lattice.
This lattice structure arises from the cylinder sets of a symbolic coding:
an extension of the coding of the polygon classes.
Furthermore, we identify sub-domains where the fraction of minimal orbits---the
fixed points of the return map---is also $\lambda$-independent.
Thus we show that the minimal orbits occupy a positive density of the phase space as $\lambda\to 0$.
The work in this chapter has been published in \cite{ReeveBlackVivaldi}.
\section{Regular domains} \label{sec:RegularDomains}
The return orbits ${\mathcal O}_{\tau}(z)$ of the perturbed dynamics
shadow the orbits $\Pi(z)$ of the integrable Hamiltonian $\mathscr{P}$,
as we saw in theorem \ref{thm:Hausdorff} (page \pageref{thm:Hausdorff}).
Hence the polygon classes provide a natural partition of the set $X$ into the sequence of sets
$$ \mathscr{P}^{-1}(\mathscr{I}^e) \cap X \hskip 40pt e\in\mathscr{E}. $$
However, the quantity $\mathscr{P}$ is not constant along perturbed orbits.
If we define a symbolic coding on perturbed orbits,
it does not follow that the return orbit of some point $z\in X$ with $\mathscr{P}(z)\in \mathscr{I}^e$
has the same symbolic coding as the polygon class associated with $\mathscr{I}^e$:
perturbed orbits which start close to a critical polygon are likely to
wander between polygon classes.
To deal with this problem, it is necessary to replace the above sequence of sets by a sequence
of smaller \textbf{regular domains}, and then prove that, in the limit, these
domains still have full density in $X$.
We start by defining the \textbf{edges} of ${\mathcal O}_{\tau}(z)$
as the non-empty sets of the form
\begin{displaymath}
B_{m,n}\cap {\mathcal O}_{\tau}(z) \hskip 40pt m,n\in\mathbb{Z}.
\end{displaymath}
For sufficiently small $\lambda$, consecutive edges
of ${\mathcal O}_{\tau}(z)$ must lie in adjacent boxes,
and transitions between edges occur when the orbit meets the set
$\Lambda$, defined in equation (\ref{eq:Lambda}).
Thus we call the set ${\mathcal O}_{\tau}(z)\,\cap\,\Lambda$ the set of \textbf{vertices} of ${\mathcal O}_{\tau}(z)$.
By analogy with the vertices of the polygons, we say that the return orbit
${\mathcal O}_{\tau}(z)$ has a vertex on $x=m$ of \textbf{type} $v$
if there exists a point $w\in{\mathcal O}_{\tau}(z)$ such that
\begin{displaymath}
w\in B_{m,v} \cap F_{\lambda}^{-4}(B_{m-1,v})
\hskip 20pt \mbox{or} \hskip 20pt w\in B_{m-1,v} \cap F_{\lambda}^{-4}(B_{m,v}).
\end{displaymath}
Similarly for a vertex on $y=n$ of type $v$.
A perturbed orbit is \textbf{critical} if it has a vertex whose type is undefined,
i.e., if there exists $w\in{\mathcal O}_{\tau}(z)$ such that
\begin{displaymath}
w\in B_{m,n} \cap F_{\lambda}^{-4}(B_{m\pm1,n\pm1})
\end{displaymath}
for some $m,n\in\mathbb{Z}$ (see figure \ref{fig:critical_vertex}).
\begin{figure}[t]
\centering
\includegraphics[scale=0.75]{TikZ/CriticalVertex}
\caption{A critical vertex $w\in B_{m,n} \cap F_{\lambda}^{-4}(B_{m+1,n-1})$.}
\label{fig:critical_vertex}
\end{figure}
By excluding points whose perturbed orbit is critical,
we will construct a sequence of subsets $X^e$ of $X$ with the property that,
for all $z\in X^e$, the orbit ${\mathcal O}_{\tau}(z)$ has the same sequence of vertex types as $\Pi(z)$.
We now give the construction of $X^e$.
Let the set $\Sigma\subset\Lambda$ be given by
\begin{equation}\label{eq:Sigma}
\Sigma = \bigcup_{m,n\in\mathbb{Z}}\Sigma_{m,n},
\end{equation}
where
\begin{equation}\label{eq:Sigma_mn}
\Sigma_{m,n} = \{ z\in\Lambda \, : \; \|z - (m,n)\|_{\infty} \leq \lambda( \, \|\mathbf{w}_{m,n}\|_{\infty}+2) \, \}
\end{equation}
and $\|(u,v)\|_{\infty} = \max(|u|,|v|)$. The set $\Sigma_{m,n}$ is a small domain, adjacent
to the integer point $(m,n)$ (see figure \ref{fig:LambdaSigma_plot}, page \pageref{fig:LambdaSigma_plot}).
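In computations, membership of $\Sigma$ reduces to a sup-norm test. A minimal sketch is given below; the norm $\|\mathbf{w}_{m,n}\|_{\infty}$ of the auxiliary vector field is passed in as an abstract function, since its explicit form (\ref{def:w_mn}) is fixed elsewhere.
\begin{verbatim}
def in_sigma(z, m, n, lam, w_norm):
    # Test z in Sigma_{m,n} of equation (eq:Sigma_mn), where
    # w_norm(m, n) returns the sup-norm of the vector w_{m,n}.
    x, y = z
    return max(abs(x - m), abs(y - n)) <= lam * (w_norm(m, n) + 2)
\end{verbatim}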
\label{def:regular}
If $z\in X$ and $\mathscr{P}(z)\in\mathscr{I}^e$ for some $e\in\mathscr{E}$, we say that $z$ is \textbf{regular} if three properties hold:
\begin{enumerate}[(i)]
\item neither $z$ nor $\Phi(z)$ is a vertex of ${\mathcal O}_{\tau}(z)$, i.e.,
$ \{z,\Phi(z)\}\subset X\setminus \Lambda$;
\item the return orbit ${\mathcal O}_{\tau}(z)$ is contained in the polygon class associated with $e$, i.e.,
$ \mathscr{P}({\mathcal O}_{\tau}(z)) \subset \mathscr{I}^e$;
\item the return orbit ${\mathcal O}_{\tau}(z)$ does not intersect the set $\Sigma$.
\end{enumerate}
Points which are not regular are called \textbf{irregular}. Then the set $X^e$ is defined as
$$ X^e = \{ z\in X \, : \; \mathscr{P}(z) \in I^e(\lambda) \}, $$
where $I^e(\lambda)\subset \mathscr{I}^e$ is the largest interval such that all points in
$X^e$ are regular.
In fact, we can give a more explicit expression for $X^e$.
Let $v_1$ be the first entry in the vertex list $V(e)$, as defined in (\ref{def:v1}).
By construction, $X^e$ does not intersect $\Lambda$, so that $X^e\subset B_{v_1,v_1}\setminus\Lambda$.
Then, for sufficiently small $\lambda$, lemma \ref{lemma:Lambda} gives us that the auxiliary vector field $\mathbf{w}$
matches the discrete vector field $\mathbf{v}$ everywhere in $X^e$:
\begin{displaymath}
\mathbf{v}(z) = \lambda\mathbf{w}(z) = \lambda\mathbf{w}_{v_1,v_1} \hskip 40pt z\in X^e,
\end{displaymath}
\hl{so that the map $F_{\lambda}^4$ acts locally as the translation $z\mapsto z+\lambda\mathbf{w}_{v_1,v_1}$.}
Applying this to the definition (\ref{def:X}) of $X$,
it follows that if $\lambda(x,y)\in X^e$, then
\begin{displaymath}
-(2v_1+1) \leq x-y < 2v_1+1.
\end{displaymath}
Hence the set $X^e$ is the intersection of the lattice $(\lambda\Z)^2$ with a thin rectangle lying along the symmetry line $\Fix{G}$
(see figure \ref{fig:lattice_Le}, page \pageref{fig:lattice_Le}):
\begin{equation}\label{def:Xe}
X^e = \{ \lambda(x,y)\in(\lambda\Z)^2 \, : \; -(2v_1+1) \leq x-y < 2v_1+1, \; \mathscr{P}(\lambda x, \lambda y)\in I^e(\lambda) \}.
\end{equation}
\hl{Furthermore, it is natural to identify the sides of this rectangle,
which are connected by the local vector field,
so that the dynamics take place modulo the one-dimensional module
$\langle\lambda\mathbf{w}_{v_1,v_1}\rangle$ generated by $\lambda\mathbf{w}_{v_1,v_1}$.}
In principle, the interval $I^e(\lambda)$ need not be uniquely defined, and may be empty.
However, the following proposition ensures that $I^e(\lambda)$ is well-defined for all
sufficiently small $\lambda$, and indeed that the irregular points have zero density in $X$
as $\lambda\rightarrow 0$.
\begin{proposition} \label{prop:Ie}
If $e\in\mathscr{E}$ and $I^e(\lambda)$ is as above, then
\begin{displaymath}
\lim_{\lambda\rightarrow 0}\frac{ |I^e(\lambda)| }{|\mathscr{I}^e|} = 1.
\end{displaymath}
\end{proposition}
\begin{proof}
Let $e\in\mathscr{E}$. Consider $z\in X$ such that $\mathscr{P}(z)\in \mathscr{I}^e$, and suppose that $z$ is irregular.
We will show that $\mathscr{P}(z)$ must be $O(\lambda)$-close to the boundary of $\mathscr{I}^e$.
If the orbit of $z$ strays between polygon classes, i.e., if condition (ii) of regularity fails,
then we have
\begin{displaymath}
\exists \, w\in {\mathcal O}_{\tau}(z): \hskip 20pt \mathscr{P}(w) \notin \mathscr{I}^e.
\end{displaymath}
However, in lemma \ref{lemma:cP_variation} of section \ref{sec:Recurrence} (page \pageref{lemma:cP_variation}),
we showed that the maximum variation in $\mathscr{P}$ along an orbit
${\mathcal O}_{\tau}(z)$ is of order $\lambda$ as $\lambda\rightarrow 0$:
\begin{displaymath}
\forall z\in X,\, \forall w\in {\mathcal O}_{\tau}(z):
\hskip 20pt \mathscr{P}(w) - \mathscr{P}(z) = O(\lambda).
\end{displaymath}
Hence if $\mathscr{I}^e=(e,f)$, where $f$ is the successor of $e$ in the sequence $\mathscr{E}$, then
\begin{equation}\label{eq:Pvariation}
\mathscr{P}(z)=\mathscr{P}(w)+O(\lambda)=
\left\{\begin{array}{ll}
e+O(\lambda) & \quad \mathscr{P}(w)\leq e \\
f+O(\lambda) & \quad \mathscr{P}(w)\geq f. \\
\end{array}\right.
\end{equation}
In both cases, $\mathscr{P}(z)$ is near the boundary of $\mathscr{I}^e$.
If condition (ii) holds but $z\in\Lambda$, then one of
its coordinates must be nearly integer:
$$
d_H(z,\,\Delta) = O(\lambda),
$$
where the set $\Delta$ was defined in (\ref{eq:Delta}).
However, as the domain $X$ lies in an $O(\lambda)$-neighbourhood of the
symmetry line $\Fix{G}$, it follows that both coordinates must be nearly
integer, and $z$ must be close to a critical polygon:
\begin{displaymath}
d_H(z,\,\mathbb{Z}^2) = O(\lambda).
\end{displaymath}
Again, it follows that $\mathscr{P}(z)$ lies in an $O(\lambda)$-neighbourhood of the boundary of $\mathscr{I}^e$.
A similar argument holds if $\Phi(z)\in\Lambda$.
Finally, if (ii) holds but (iii) fails, i.e., if there is a point $w\in{\mathcal O}_{\tau}(z)\cap \Sigma$,
then by construction
\begin{displaymath}
d_H(w,\,\mathbb{Z}^2) = O(\lambda),
\end{displaymath}
and (\ref{eq:Pvariation}) applies as before.
Combining these observations, we have
$$
\frac{ |I^e(\lambda)| }{|\mathscr{I}^e|} = 1 - \frac{ |\mathscr{I}^e\setminus I^e(\lambda)| }{|\mathscr{I}^e|}
= 1 - \frac{ O(\lambda) }{|\mathscr{I}^e|},
$$
and the result follows.
\end{proof}
\medskip
Now we show that the sets $X^e$ fulfil their objective,
which was to exclude all points $z\in X$ whose perturbed orbit is critical
in the sense defined above.
\begin{proposition} \label{prop:Xe}
If $e\in\mathscr{E}$ and $z\in X^e$, then the perturbed orbit of $z$ is not critical.
\end{proposition}
\begin{proof}
Let $e\in\mathscr{E}$ and $z\in X$ with $\mathscr{P}(z)\in\mathscr{I}^e$.
Suppose that the return orbit of $z$ is critical, i.e.,
suppose there exists $w\in{\mathcal O}_{\tau}(z)$ such that
\begin{displaymath}
w\in B_{m,n} \hskip 20pt \mbox{and}
\hskip 20pt F_{\lambda}^{4}(w)=w+\mathbf{v}(w)\in B_{m\pm1,n\pm1}
\end{displaymath}
for some $m,n\in\mathbb{Z}$. We will show that $z\notin X^e$.
For simplicity, we assume that $m$ and $n$ are both non-negative, so that
by the orientation of the vector field in the first quadrant:
\begin{displaymath}
F_{\lambda}^{4}(w)=w+\mathbf{v}(w)\in B_{m+1,n-1}.
\end{displaymath}
Recall the proof of lemma \ref{lemma:Lambda} (page \pageref{lemma:Lambda}),
where we calculated the explicit form of the discrete vector field $\mathbf{v}$.
If $w=\lambda(x,y)\in B_{m,n}$, then by construction:
$$ m=\fl{\lambda x} \hskip 40pt n=\fl{\lambda y}. $$
Furthermore, if $F_{\lambda}^{4}(w)\in B_{m+1,n-1}$, then the last line of equation (\ref{eq:Fabcd}) implies that
$$ (d,c) = (m+1,n-1), $$
where the integers $c$ and $d$ are given by (\ref{eq:abcd}).
It follows that the perpendicular distance from $w$ to the lines $x=m+1$ and $y=n$,
respectively, is bounded according to
\begin{align*}
-\lambda(a+n) = -\lambda(a+c+1) &\leq \lambda x - (m+1) < 0 \\
0 &\leq \lambda y - n < \lambda(m+b+1),
\end{align*}
where again the integers $a$ and $b$ are given by (\ref{eq:abcd}).
Combining this observation with the constraint (\ref{eq:bd_ac_sets}) on $a$ and $b$ gives
\begin{align*}
\| w-(m+1,n) \|_{\infty} &\leq \lambda\max( |a+n|, |m+b+1|) \\
&\leq \lambda\max( 2n + |n-a|, 2m+1 + |m-b|) \\
&\leq \lambda\max( 2n+1, 2m+2) \\
&\leq \lambda \|\mathbf{w}_{m+1,n}\|_{\infty},
\end{align*}
where we have used the fact that $m$ and $n$ are non-negative.
Hence, by definition (\ref{eq:Sigma_mn}), $w\in\Sigma_{m+1,n}$ and $z$ is irregular, so $z\notin X^e$.
The cases where $m$ or $n$ are negative proceed similarly.
\end{proof}
\section{Lattice structure and orbits of minimal period} \label{sec:MainTheorems} \label{SEC:MAINTHEOREMS}
We turn our attention to the reversing symmetry group of the return map $\Phi$.
In section \ref{sec:time-reversal} we introduced the reversing symmetry $G$
of the lattice map $F_{\lambda}$ (see equation (\ref{def:GH}), page \pageref{def:GH}).
Since $\Phi$ is a return map of $F_{\lambda}$, it has an associated reversing symmetry.
In the following proposition we describe the form of this reversing symmetry
on the domains $X^e$ (equation (\ref{def:Xe})).
\begin{proposition} \label{prop:Ge}
For $e\in\mathscr{E}$, let $G^e$ be the involution of $X^e$ given by
\begin{equation} \label{eq:Ge}
G^e(x,y) = \left\{
\begin{array}{ll} (y,x) \quad & |x-y| < \lambda(2v_1+1) \\
(x,y) \quad & x-y = -\lambda(2v_1+1),
\end{array} \right.
\end{equation}
where $v_1 = \fl{\sqrt{e/2}}$ is the first entry of the vertex list associated with $\mathscr{I}^e$.
Then for all sufficiently small $\lambda$, $G^e$ is a reversing symmetry of $\Phi$ on $X^e$ in the following sense:
\begin{equation*}
\forall z,\Phi(z)\in X^e:
\hskip 20pt
\Phi^{-1}(z) = (G^e \circ \Phi \circ G^e)(z).
\end{equation*}
\end{proposition}
\begin{proof}
Recall that $X^e\subset B_{v_1,v_1}\setminus\Lambda$, so that if $z\in X^e$, then
$$ F_{\lambda}^4(z) = z + \lambda\mathbf{w}(z) = z + \lambda\mathbf{w}_{v_1,v_1}. $$
Furthermore, if $z$ lies on the line $x-y=-\lambda(2v_1+1)$,
then by the definition (\ref{def:w_mn}) of the auxiliary vector field, we have
$$ G(z) = z + \lambda\mathbf{w}_{v_1,v_1}. $$
Combining the above, we have the following relationship between $G^e$ and $G$:
\begin{equation} \label{eq:Ge_G}
G^e(x,y) = \left\{
\begin{array}{ll} G(x,y) \quad & |x-y| < \lambda(2v_1+1), \\
(F_{\lambda}^{-4}\circ G)(x,y) \quad & x-y = -\lambda(2v_1+1).
\end{array} \right.
\end{equation}
If $\tau=\tau(z)$ is the return time of $z$, then by construction
$$ \Phi(z) = F_{\lambda}^{\tau}(z), $$
and the reversibility of $F_{\lambda}$ with respect to $G$ gives us that
\begin{equation} \label{eq:Phi_reversibility}
(G \circ \Phi)(z) = (F_{\lambda}^{-\tau} \circ G)(z).
\end{equation}
Suppose now that neither $z$ nor $\Phi(z)$ lies on the line $x-y=-\lambda(2v_1+1)$.
Then combining (\ref{eq:Ge_G}) and (\ref{eq:Phi_reversibility}) gives
$$ (G^e \circ \Phi)(z) = (F_{\lambda}^{-\tau} \circ G^e)(z). $$
Furthermore, this point lies in $X^e$, so that
$$ (G^e \circ \Phi)(z) = (\Phi^{-1} \circ G^e)(z), $$
as required.
Suppose now that $\Phi(z)$ lies on the line $x-y=-\lambda(2v_1+1)$ but $z$ does not.
In this case, combining (\ref{eq:Ge_G}) and (\ref{eq:Phi_reversibility}) gives
$$ (G^e \circ \Phi)(z) = (F_{\lambda}^{-4} \circ G \circ \Phi)(z) = (F_{\lambda}^{-(\tau+4)} \circ G)(z) = (\Phi^{-1} \circ G^e)(z). $$
The other cases proceed similarly.
\end{proof}
\hl{Note that the reversing symmetry $G^e$ of $\Phi$ is simply an adaptation of the original
reversing symmetry $G$ of $F_{\lambda}$, which accounts for the natural cylindrical topology of its domain $X^e$:}
$$ G^e(z) = G(z) \mod{\lambda\mathbf{w}_{v_1,v_1}} \hskip 40pt z\in X^e, $$
\hl{where we write (mod $\mathbf{a}$) for some vector $\mathbf{a}$ to
denote congruence modulo the one-dimensional module $\langle\mathbf{a}\rangle$
generated by $\mathbf{a}$.}
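In code, $G^e$ amounts to a two-case test; a minimal sketch of ours, taking $\lambda$ and $v_1$ as parameters, is:
\begin{verbatim}
def G_e(z, lam, v1):
    # The involution G^e of equation (eq:Ge) on X^e.
    x, y = z
    if abs(x - y) < lam * (2 * v1 + 1):
        return (y, x)
    return z    # z lies on the line x - y = -lam (2 v1 + 1)
\end{verbatim}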
\medskip
The return map $\Phi$ also has non-trivial symmetries.
We define a sequence of lattices $\mathbb{L}^e\subset\mathbb{Z}^2$, $e\in\mathscr{E}$, independent of $\lambda$,
such that within the domain $X^e$,
the return map $\Phi$ is equivariant under the group of
translations generated by $\lambda\mathbb{L}^e$. The construction of $\mathbb{L}^e$ is as follows.
\begin{figure}[t]
\centering
\includegraphics[scale=0.85]{TikZ/Lattice_Phi}
\caption{\hl{The set $X^e$ (pink) is the subset of the Poincar\'{e} section $X$ whose orbits
under the return map $\Phi$ can be associated with the polygon class indexed by $e\in\mathscr{E}$.
The lattice $z+\lambda\mathbb{L}^e$, for some $z\inX^e$, is illustrated by the dashed lines:
all points which are congruent to $z$ modulo $\lambda\mathbb{L}^e$ exhibit the same dynamical behaviour.
The sets $\Lambda$ and $\Sigma$ are represented in light and dark grey, respectively.}}
\label{fig:lattice_Le}
\end{figure}
For $e\in\mathscr{E}$, suppose the vertex list $V(e)=(v_1,\dots,v_k)$
contains $l$ distinct entries. We define the sequence
$(\iota(j))_{1\leq j\leq l}$ such that the $\iota(j)$th entry in the vertex
list is the $j$th distinct entry. Since all repeated entries are consecutive,
it follows that the vertex list has the form
\begin{equation} \label{eq:v_iota}
V(e) = (v_{\iota(1)},\dots,v_{\iota(1)},v_{\iota(2)},\dots, v_{\iota(2)}, \,
\dots \, ,v_{\iota(l)},\dots, v_{\iota(l)}),
\end{equation}
with $v_{\iota(1)} = v_1$ and $v_{\iota(l)} = v_k$.
We define the vector $\mathbf{L}=\mathbf{L}(e)$ as:
\begin{equation} \label{def:L}
\mathbf{L} = \frac{q}{2v_1+1} \; (1,1),
\end{equation}
where the natural number $q=q(e)$ is defined as follows
\begin{equation}\label{eq:q}
q = \mathrm{lcm}((2v_{\iota(1)}+1)^2,(2v_{\iota(1)}+1)(2v_{\iota(2)}+1),
\ldots ,(2v_{\iota(l-1)}+1)(2v_{\iota(l)}+1)).
\end{equation}
Here the least common multiple runs over $(2v_1+1)^2$ and all products
of the form $(2v_j+1)(2v_{j+1}+1)$, where $v_j$ and
$v_{j+1}$ are consecutive, distinct vertex types.
Finally, the lattice $\mathbb{L}^e$ is given by
\begin{equation}\label{eq:Le}
\mathbb{L}^e = \left\langle \mathbf{L}, \frac{1}{2}\left(\mathbf{L}-\mathbf{w}_{v_1,v_1}\right) \right\rangle,
\end{equation}
where $\langle \cdots \rangle$ denotes the $\mathbb{Z}$-module generated by a set
of vectors, and the vector $\mathbf{w}_{v_1,v_1}$ given by (\ref{def:w_mn})
is the Hamiltonian vector field $\mathbf{w}$ in the domain $X^e$.
We note that the vector $\mathbf{L}$ is parallel to the symmetry line $\Fix{G}$, and
hence parallel to the domain $X^e$, whereas the vector $\mathbf{w}_{v_1,v_1}$ is perpendicular to it.
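The construction is readily implemented. The following sketch (an illustration of ours) computes $q(e)$, the vector $\mathbf{L}$ and the generators of $\mathbb{L}^e$ from a given vertex list; the vector $\mathbf{w}_{v_1,v_1}$ is supplied by the caller, its explicit form being fixed by (\ref{def:w_mn}).
\begin{verbatim}
from math import gcd

def lattice_data(V, w):
    # V = (v_1, ..., v_k): vertex list; w: the vector w_{v1,v1}.
    distinct = [V[0]]
    for v in V[1:]:
        if v != distinct[-1]:          # repeated entries are consecutive
            distinct.append(v)
    q = (2 * distinct[0] + 1) ** 2     # equation (eq:q)
    for a, b in zip(distinct, distinct[1:]):
        f = (2 * a + 1) * (2 * b + 1)
        q = q * f // gcd(q, f)
    c = q // (2 * V[0] + 1)
    L = (c, c)                         # L = q/(2 v_1 + 1) (1, 1)
    # the entries below are integers, since L^e is a sublattice of Z^2
    gen2 = ((L[0] - w[0]) // 2, (L[1] - w[1]) // 2)
    return q, L, (L, gen2)             # generators of L^e, (eq:Le)
\end{verbatim}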
\begin{theorem} \label{thm:Phi_equivariance}
For every $e\in\mathscr{E}$, and all sufficiently small $\lambda$,
the map $\Phi$ commutes with translations by the elements
of $\lambda\mathbb{L}^e$ on the domain $X^e$:
\begin{equation} \label{eq:Phi_equivariance}
\forall l\in\mathbb{L}^e, \; \forall z,z+\lambda l\in X^e:
\hskip 20pt
\Phi(z + \lambda l) \equiv \Phi(z) + \lambda l \mod{\lambda\mathbf{w}_{v_1,v_1}}.
\end{equation}
\end{theorem}
There is a critical value of $\lambda$, depending on $e$, above which the statement
of the theorem is empty, as $X^e$ is insufficiently populated for a pair
of points $z,z+\lambda l\in X^e$ to exist.
The congruence under the local (rescaled) integrable vector field
$\lambda\mathbf{w}_{v_1,v_1}$ in equation (\ref{eq:Phi_equivariance})
\hl{invokes the cylindrical topology of $X^e$, which} is necessary for
the case that $\Phi(z) + \lambda l\notin X$.
\medskip
For certain $e\in\mathscr{E}$, we can calculate the number of congruence classes of
$\lambda\mathbb{L}^e$ corresponding to symmetric fixed points of $\Phi$, i.e., to symmetric minimal orbits.
We define the fraction of symmetric fixed points in $X^e$:
\begin{displaymath}
\delta(e,\lambda) = \frac{\# \{ z\in X^e \, : \; G(\mathcal{O}(z))=\mathcal{O}(z), \; \Phi(z)=z \}}{\#X^e},
\end{displaymath}
and prove the following result on the persistence of such points in the limit
$\lambda\rightarrow 0$.
\begin{theorem} \label{thm:minimal_densities}
Let $e\in\mathscr{E}$, and let $(v_1,\dots,v_k)$ be the vertex list of the
corresponding polygon class.
If $2v_1+1$ or $2v_k+1$ is coprime to $2v_j+1$ for all other vertex types
$v_j$, i.e., if
\begin{equation} \label{eq:v1_k_coprimality}
\exists \; i\in\{1,k\}, \quad \forall j\in\{1,\ldots,k\},\quad
v_j\neq v_i \,\Rightarrow \, \gcd(2v_i+1,2v_j+1) =1,
\end{equation}
then, for sufficiently small $\lambda$, the number of symmetric fixed points
of $\Phi$ in $X^e$ modulo $\lambda\mathbb{L}^e$ is independent of $\lambda$.
Thus the asymptotic density of symmetric fixed points in $X^e$ converges,
and its value is given by
\begin{equation}\label{eq:Density}
\lim_{\lambda\rightarrow 0} \delta(e,\lambda) = \frac{1}{(2\fl{ \sqrt{e} }+1)(2\fl{ \sqrt{e/2} }+1)}.
\end{equation}
\end{theorem}
As for theorem \ref{thm:Phi_equivariance}, the smallness of $\lambda$ serves only to ensure that $X^e$
is sufficiently populated for all congruence classes modulo $\lambda\mathbb{L}^e$ to be represented.
The condition (\ref{eq:v1_k_coprimality}) on the orbit code is clearly satisfied
for infinitely many critical numbers $e$, e.g., those for which
either $2\fl{\sqrt{e}}+1$ or
$2\fl{\sqrt{e/2}}+1$ is a prime number.
The first violation occurs at $e=49$ (see table \ref{table:V(e)}, page \pageref{table:V(e)}), where $2v_1+1 = 9$
and $2v_k+1=15$ have a common factor.
We contrast this to the case of $e=52$, where $2v_k+1=15$ and $2v_2+1 = 9$ have a
common factor, but $2v_1+1=11$ is prime, so the condition (\ref{eq:v1_k_coprimality}) holds.
Numerical experiments show that the density
of values of $e$ for which (\ref{eq:v1_k_coprimality}) holds decays very slowly,
reaching 1/2 for $e\approx 500,000$.
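\medskip
\noindent
By way of illustration, the following Python sketch evaluates the limiting
density (\ref{eq:Density}) and tests the primality-based sufficient condition
above. It is a minimal sketch only: it ranges over integer values of $e$
without testing membership of $\mathscr{E}$, and it does not check the full
condition (\ref{eq:v1_k_coprimality}), which would require the vertex list $V(e)$.
\begin{verbatim}
from math import isqrt

def is_prime(n):
    # trial division; adequate for the small moduli arising here
    if n < 2:
        return False
    return all(n % d for d in range(2, isqrt(n) + 1))

def density(e):
    # the limit (eq:Density); isqrt(e // 2) equals floor(sqrt(e/2))
    return 1 / ((2 * isqrt(e) + 1) * (2 * isqrt(e // 2) + 1))

def sufficient(e):
    # sufficient for (eq:v1_k_coprimality): either modulus is prime
    return is_prime(2 * isqrt(e) + 1) or is_prime(2 * isqrt(e // 2) + 1)

N = 10000
print(sum(sufficient(e) for e in range(1, N)) / (N - 1))
\end{verbatim}
\medskip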
The stated condition on the orbit code is stronger than the proof
requires; we adopt it to simplify the formulation of the theorem.
We remark that even the weaker condition is not necessary for
the validity of the density expression (\ref{eq:Density}).
At the same time, there are values of $e$ for which the density of symmetric minimal
orbits deviates from the given formula, and convergence is not guaranteed.
Our numerical experiments show that these deviations are small, and do not
appear to be connected to any new dynamical phenomena.
More significant are the fluctuations in the density of non-symmetric fixed points:
its dependence on $e$ is considerably less predictable than for symmetric orbits---see figure \ref{fig:Density}.
\begin{figure}[t]
\centering
\includegraphics[scale=1]{Graphics/Density}
\caption{The density of symmetric minimal orbits, as a function
of the critical number $e$ (calculated for suitably small values of the parameter $\lambda$).
The solid line represents the estimate (\ref{eq:Density}).
The scattered points correspond to the density of all minimal orbits, symmetric
and non-symmetric.
}
\label{fig:Density}
\end{figure}
The asymptotic density of symmetric fixed points in $X^e$ provides an obvious lower bound
for the overall density of fixed points, which we denote $\eta(e,\lambda)$:
\begin{displaymath}
\eta(e,\lambda) = \frac{\# \{ z\in X^e \, : \; \Phi(z)=z \}}{\# X^e}.
\end{displaymath}
\begin{corollary} \label{cor:Density}
Let $e\in\mathscr{E}$ satisfy the condition (\ref{eq:v1_k_coprimality}) of theorem \ref{thm:minimal_densities}.
Then the asymptotic density of fixed points in $X^e$ is bounded below as follows:
$$\liminf_{\lambda\rightarrow 0} \eta(e,\lambda) \geq \frac{1}{(2\fl{ \sqrt{e} }+1)(2\fl{ \sqrt{e/2} }+1)}. $$
\end{corollary}
Note that we do not claim that the density $\eta(e,\lambda)$ converges as $\lambda\to0$,
whether or not the condition (\ref{eq:v1_k_coprimality}) is satisfied.
\section{The strip map and symbolic coding of perturbed orbits}\label{sec:StripMap}
In section \ref{sec:Recurrence}, we saw that all non-zero points
where the discrete vector field $\mathbf{v}$ deviates from the scaled auxiliary
vector field $\lambda\mathbf{w}$ lie in the set of transition points $\Lambda$
(lemma \ref{lemma:Lambda}, page \pageref{lemma:Lambda}).
The set $\Lambda$ consists of thin strips of lattice points aligned along the
lines $\Delta$, which form the boundaries of the boxes $B_{m,n}$.
In order to study the dynamics at these points, where the perturbations from the
integrable limit occur, we define a transit map $\Psi$ to $\Lambda$ which we
call the \textbf{strip map}:
\begin{equation} \label{eq:Psi}
\Psi : (\lambda\mathbb{Z})^2 \rightarrow \Lambda
\hskip 40pt
\Psi(z) = F_{\lambda}^{4t(z)}(z),
\end{equation}
where the transit time $t$ to $\Lambda$ is well-defined for all points excluding the origin:
\begin{displaymath}
t(z)= \min \{ k\in\mathbb{N} \, : \; F_{\lambda}^{4k}(z) \in \Lambda \} \hskip 40pt z\neq(0,0).
\end{displaymath}
(Since the origin plays no role in the present construction, to simplify
notation we shall write $(\lambda\mathbb{Z})^2$ for $(\lambda\mathbb{Z})^2\setminus\{(0,0)\}$, where appropriate.)
By abuse of notation, we define $\Psi^{-1}$ to be the transit map to $\Lambda$
under $F_{\lambda}^{-4}$. Note that $\Psi^{-1}$ is the inverse of $\Psi$ only on $\Lambda$.
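\medskip
\noindent
In computational terms, $\Psi$ is a first-passage map to $\Lambda$. The
following Python fragment is a minimal sketch of this construction; the
callables \texttt{F4} (the fourth iterate $F_{\lambda}^4$) and
\texttt{in\_Lambda} (membership of $\Lambda$) are hypothetical placeholders
for the objects defined earlier in the paper.
\begin{verbatim}
def strip_map(z, F4, in_Lambda, max_iter=10**6):
    # transit map Psi: apply F4 at least once, until the orbit enters Lambda
    if z == (0, 0):
        raise ValueError("transit time is undefined at the origin")
    for _ in range(max_iter):
        z = F4(z)
        if in_Lambda(z):
            return z
    raise RuntimeError("transition set not reached")
\end{verbatim}
The inverse transit map $\Psi^{-1}$ is obtained by passing the fourth iterate
of $F_{\lambda}^{-1}$ in place of \texttt{F4}.
\medskip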
If $z\in B_{m,n}\setminus\Lambda$ for some $m,n\in\mathbb{Z}$,
then lemma \ref{lemma:Lambda} implies that $\Psi(z)$
satisfies \begin{equation} \label{eq:Psi_congruence2}
\Psi(z) = z + \lambda t(z)\mathbf{w}_{m,n},
\end{equation}
where $\mathbf{w}_{m,n}$ is the value of the Hamiltonian vector field $\mathbf{w}$ in the box $B_{m,n}$.
If $z\in\Lambda_{m,n}$, then we may have $\mathbf{v}(z)\neq \lambda\mathbf{w}(z)$, so the expression becomes
\begin{equation} \label{eq:Psi_congruence1}
\Psi(z) = z + \mathbf{v}(z) + \lambda(t(z)-1)\mathbf{w}_{m,n}.
\end{equation}
In the previous section, we identified the set ${\mathcal O}_{\tau}(z)\,\cap\,\Lambda$ as the set of
vertices of the perturbed orbit ${\mathcal O}_{\tau}(z)$. Thus, within each quarter-turn, the
strip map $\Psi$ represents transit to the next vertex. For $1\leq j\leq k$, where $k$ is the
length of the vertex list at $z$, we say that the orbit ${\mathcal O}_{\tau}(z)$ \textbf{meets the $j$th vertex}
at the point $\Psi^j(z)\in\Lambda$.
For $z\in X$ regular, the polygon $\Pi(z)$ and the return orbit ${\mathcal O}_{\tau}(z)$ are non-critical,
and the number of sides of each is given by equation (\ref{eq:NumberOfSides}) of theorem \ref{thm:Polygons}.
Thus the full set of vertices of ${\mathcal O}_{\tau}(z)$ is given by
\begin{displaymath}
{\mathcal O}_{\tau}(z)\,\cap\,\Lambda = \bigcup_{i=0}^3 \; \bigcup_{j=1}^{2k-1} \; \{ (\Psi^j\circ F_{\lambda}^i)(z) \}.
\end{displaymath}
Recall that the vertices of a polygon (or orbit) are numbered in the clockwise
direction---the orientation of the integrable vector field $\mathbf{w}$.
Hence the first $2k-1$ vertices (those lying in the first quarter-turn) are
given by $(\Psi^j(z))_{1\leq j \leq 2k-1}$.
The action of $F_{\lambda}$ moves points from one quadrant to the next in the
opposing (anti-clockwise) direction, so that the vertices
$((\Psi^j\circ F)(z))_{1\leq j \leq 2k-1}$ are the last $2k-1$ vertices.
The following proposition is a simple consequence of this arrangement.
\begin{proposition} \label{prop:regularity}
Let $e\in\mathscr{E}$ be a critical number, and let $k$ be the length of the
vertex list of the corresponding polygon class.
Then the return map $\Phi$ on $X^e$ is related to $\Psi$ via
\begin{equation} \label{eq:Phi_Psi}
\Phi(z) \equiv (\Psi^{2k}\circ F_{\lambda})(z) \mod{\lambda\mathbf{w}_{v_1,v_1}}
\hskip 40pt z\in X^e,
\end{equation}
where $v_1$ is the type of the first vertex and $\mathbf{w}_{v_1,v_1}$
is the value of the integrable vector field $\mathbf{w}$ at $z$.
\end{proposition}
\begin{proof} Let $z\in X^e$ and let $w$ be the last vertex in ${\mathcal O}_{\tau}(z)$.
By the preceding discussion, $w$ is given by
\begin{displaymath}
w = (\Psi^{2k-1}\circ F_{\lambda})(z) \in \Lambda_{v_1,v_1}.
\end{displaymath}
As $z$ is regular, the point $\Phi(z)$ satisfies
$\Phi(z)\in B_{v_1,v_1}\setminus \Lambda$.
Using the expression (\ref{eq:Psi_congruence2}) for $\Psi$ applied to $\Phi(z)$, we have
\begin{equation} \label{eq:PsiPhi(z)}
(\Psi\circ\Phi)(z) \equiv \Phi(z) \mod{\lambda\mathbf{w}_{v_1,v_1}}.
\end{equation}
But the point $\Phi(z)$ is given by
$$ \Phi(z) = F_{\lambda}^{4n}(w) $$
for some $1\leq n < t(w)$, where $t(w)$ is the transit time of $w$ to $\Lambda$.
Hence
\begin{equation} \label{eq:PsiPhi_w}
(\Psi\circ\Phi)(z) = \Psi(w) = (\Psi^{2k}\circ F_{\lambda})(z).
\end{equation}
The result follows from combining (\ref{eq:PsiPhi(z)}) and (\ref{eq:PsiPhi_w}).
\end{proof}
For $z\in X$ regular, we use the vertices
$(\Psi^j(z))_{1\leq j \leq 2k-1}$ in the first quarter-turn to define a
sequence of natural numbers $\sigma(z)$ called the \textbf{orbit code} of $z$,
which encapsulates how the perturbed orbit ${\mathcal O}_{\tau}(z)$
deviates from $\Pi(z)$.
Suppose the $j$th vertex of $\Pi(z)$ is a vertex of type $v_j$ lying on $y=n$,
and the orbit ${\mathcal O}_{\tau}(z)$ meets its corresponding vertex at $\Psi^j(z)$.
We define the pair $(x_j,y_j)$ via
\begin{equation}\label{eq:(xj,yj)}
\Psi^j(z) = \lambda\left( \Bceil{\frac{v_j}{\lambda}} + x_j, \Bceil{\frac{n}{\lambda}} + y_j \right),
\end{equation}
where $x_j\geq 0$, and $|y_j|$, which is (essentially) the number of lattice points
between $\Psi^j(z)$ and the line $y=n$, is small relative to $1/\lambda$.
Using similar arguments to those in the proof of proposition \ref{prop:Xe}
(i.e., by bounding the perpendicular distance from $\Psi^j(z)$ to the line $y=n$),
one can show that $y_j$ satisfies
\begin{displaymath}
-(2v_j+1) \leq y_j <0 \hskip 20pt \mbox{or} \hskip 20pt 0\leq y_j < 2v_j+1,
\end{displaymath}
depending on whether the integrable vector field is oriented in the positive or
negative $y$-direction.
Hence the component of $\Lambda$ containing $\Psi^j(z)$ is a strip which is
$(2v_j+1)$ lattice points wide,
and the possible values of $y_j$ form a complete set of residues modulo $2v_j+1$.
The $j$th element $\sigma_j$ of the orbit
code $\sigma(z)$ is defined to be the unique residue in the set
$\{0,1,\dots, 2v_j\}$ which is congruent to $y_j$:
\begin{equation} \label{eq:sigma_j}
\sigma_j \equiv y_j \mod{2v_j+1}.
\end{equation}
We call $y$ the \textbf{integer coordinate} of the vertex and $x$ the
\textbf{non-integer coordinate}.
Similarly, if the $j$th vertex lies on $x=m$, then the $j$th element $\sigma_j$ of
the orbit code is defined to be the residue congruent to $x_j$ modulo $2v_j+1$.
In this case $x$ is the integer coordinate and $y$ is the non-integer coordinate.
For all vertices in the first quadrant, the fact that orbits progress clockwise
under the action of $F_{\lambda}^4$ means that $y_j$ will be non-negative
wherever $y$ is the integer coordinate, and $x_j$ will be negative
wherever $x$ is the integer coordinate:
\begin{equation} \label{eq:xj<0,yj>0}
-(2v_j+1) \leq x_j <0 \hskip 20pt \mbox{or} \hskip 20pt 0\leq y_j < 2v_j+1.
\end{equation}
Thus the value of $\sigma_j$ is given explicitly by
\begin{equation} \label{eq:sigma_j_2}
\sigma_j = x_j + 2v_j+1 \hskip 20pt \mbox{or} \hskip 20pt \sigma_j = y_j,
\end{equation}
respectively.
In addition to the values $\sigma_{j}$ for $1\leq j \leq 2k-1$, we define $\sigma_{-1}$,
which corresponds to the last vertex \textit{before} the symmetry line, i.e., to the point $\Psi^{-1}(z)$.
Thus the orbit code of $z$ is a sequence $\sigma(z)=(\sigma_{-1},\sigma_1,\dots,\sigma_{2k-1})$, such that
\begin{align*}
& 0\leq \sigma_{-1} < 2v_1+1, \\
& 0\leq \sigma_j < 2v_j+1, \hskip 20pt 1\leq j \leq 2k-1,
\end{align*}
where the $v_j$ are the vertex types.
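\medskip
\noindent
As a concrete illustration, the following Python helper computes a single
entry of the orbit code from the data of equation (\ref{eq:(xj,yj)}), using
the explicit formula (\ref{eq:sigma_j_2}). The pair $(x_j,y_j)$, the vertex
type $v_j$, and the choice of integer coordinate are assumed to have been
extracted from $\Psi^j(z)$ already; this is a sketch, not part of the formal
development.
\begin{verbatim}
def code_entry(xj, yj, vj, integer_coord):
    # sigma_j: residue of the integer coordinate modulo 2*vj + 1,
    # per the explicit formula (eq:sigma_j_2)
    m = 2 * vj + 1
    if integer_coord == "y":   # vertex on a horizontal line, first quadrant
        assert 0 <= yj < m     # range given by (eq:xj<0,yj>0)
        return yj
    else:                      # vertex on a vertical line
        assert -m <= xj < 0
        return xj + m
\end{verbatim}
\medskip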
In the next proposition we consider how a perturbed orbit behaves at
its vertices. We find that
the regularity of $z$ ensures that the discrete vector field $\mathbf{v}$
matches the Hamiltonian vector field $\lambda\mathbf{w}$ in the integer coordinate
at $\Psi^j(z)$. The possible discrepancy in the non-integer coordinate is
determined by the value of $\sigma_j$.
\begin{proposition} \label{thm:epsilon_j}
Let $e\in\mathscr{E}$ be a critical number and let $k$ be the length of the
vertex list of the corresponding polygon class.
For any $z\inX^e$ and any $j\in\{-1,1,2,\ldots,2k-2\}$,
let $m,n$ be such that $\Psi^j(z)\in\Lambda_{m,n}$.
Then the transit between the $j$th and $(j+1)$st vertices satisfies
\begin{equation}\label{eq:Psi^jp1}
\Psi^{j+1}(z) = \Psi^j(z) + \lambda \left( t\mathbf{w}_{m,n} + \epsilon_j(\sigma_j)\mathbf{e} \right),
\end{equation}
where $\epsilon_j$ is a function of the $j$th entry $\sigma_j$ of the orbit code $\sigma(z)$,
$\mathbf{e}$ is the unit vector in the direction of the non-integer coordinate of the $j$th vertex,
and $t=t(\Psi^j(z))$ is the transit time.
\end{proposition}
\begin{proof}
It suffices to show that
\begin{displaymath}
\mathbf{v}(\Psi^j(z)) = \lambda \left( \mathbf{w}_{m,n} + \epsilon_j(\sigma_j) \mathbf{e} \right),
\end{displaymath}
after which the result follows from equation (\ref{eq:Psi_congruence1}).
If $z\in X$ is regular, then the perturbed orbit ${\mathcal O}_{\tau}(z)$ is not critical.
Thus for any vertex $w$, which, by construction, satisfies
\begin{displaymath}
w\in\Lambda_{m,n}
\qquad
F_{\lambda}^4(w) = w+\mathbf{v}(w)\in B_{m,n}
\end{displaymath}
for some $m,n\in\mathbb{Z}$, we must have either $w\in B_{m,n\pm1}$ or $w\in B_{m\pm1,n}$.
For definiteness we suppose that $w\in B_{m,n+1}$,
so that the vertex $w$ lies on $y=n+1$ and is of type $m$.
The cases where $w\in B_{m,n-1}$ or $w\in B_{m\pm1, n}$ are similar.
Now the proof proceeds very much as that of proposition \ref{prop:Xe}.
The perturbed vector field $\mathbf{v}(w)$ is given by equation (\ref{eq:v_abcd}),
with $a,b,c,d$ as in (\ref{eq:abcd}). (Note that $R(w)=(m,n+1)$,
so the formula (\ref{eq:abcd}) must be modified accordingly.)
In this case, $R(F_{\lambda}^4(w))=(m,n)$ implies that
\begin{displaymath}
c= n \hskip 20pt \mbox{and} \hskip 20pt d =m,
\end{displaymath}
and according to (\ref{eq:bd_ac_sets}), the remaining integers $a$ and $b$ satisfy
\begin{displaymath}
a\in \{n,n+1\}, \hskip 40pt b=m.
\end{displaymath}
Thus we have
\begin{align*}
\mathbf{v}(w) &= \lambda(n+a+1,-(2m+1)) \\
&= \lambda (\mathbf{w}_{m,n} + (a-n)\mathbf{e}),
\end{align*}
where $\mathbf{e}=(1,0)$ is the unit vector in the $x$-direction, the non-integer
coordinate direction of the vertex.
If $w=\Psi^j(z)$, then $v_j=m$, and the coefficient of the difference between
$\mathbf{v}(w)$ and $\lambda\mathbf{w}_{m,n}$ in the $x$-direction is given by
\begin{align*}
\epsilon_j &= a-n \\
&= \ceil{\lambda(y-m)} -(n+1) \\
&= \left\{ \begin{array}{ll} 1 \; \; & \lambda y-(n+1) >\lambda m \\
0 \; \; & \mbox{otherwise}.
\end{array} \right.
\end{align*}
As in equation (\ref{eq:(xj,yj)}), we write
\begin{displaymath}
y = \Bceil{\frac{n+1}{\lambda}} + y_j,
\end{displaymath}
where by (\ref{eq:xj<0,yj>0}), $y_j$ satisfies $0\leq y_j<2m+1$.
Then, by (\ref{eq:sigma_j_2}), we have $y_j=\sigma_j$, and
the function $\epsilon_j$ is given by
\begin{displaymath}
\epsilon_j(\sigma_j) = \left\{ \begin{array}{ll} 1 \; \; & \sigma_j > m \\
0 \; \; & \mbox{otherwise,}
\end{array} \right.
\end{displaymath}
which completes the proof.
\end{proof}
Note that the function $\epsilon_j$ depends on $j$ via $m$.
In what follows we shall write $\epsilon_j$, omitting the argument.
We think of an orbit as moving according to the integrable vector
field at all points except the vertices, where there is a mismatch between
integrable and non-integrable dynamics, and points are given a small
`kick' in the non-integer coordinate direction.
In particular, in the situation described in the proof of proposition \ref{thm:epsilon_j},
the strip containing $\Psi^j(z)$,
which lies on the boundary between the boxes $B_{m,n+1}$ and $B_{m,n}$,
can be decomposed into two sub-strips: $0\leq y_j\leq m$ and $m<y_j<2m+1$.
In the sub-strip with $0\leq y_j\leq m$, which lies closest to $B_{m,n}$,
we have
$$ \mathbf{v}(\Psi^j(z)) = \lambda\mathbf{w}_{m,n}, $$
whereas in the sub-strip with $m<y_j<2m+1$, which lies further into $B_{m,n+1}$,
we have
$$ \mathbf{v}(\Psi^j(z)) = \frac{\lambda}{2}(\mathbf{w}_{m,n} + \mathbf{w}_{m,n+1}). $$
An analogous behaviour occurs in other situations.
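\medskip
\noindent
The `kick' can be made explicit in code. Below is a minimal Python sketch for
the configuration treated in the proof of proposition \ref{thm:epsilon_j}---a
vertex of type $m$ on the boundary between $B_{m,n+1}$ and $B_{m,n}$---with
the value $\mathbf{w}_{m,n}=(2n+1,-(2m+1))$ read off from that proof; other
configurations would require the analogous case analysis.
\begin{verbatim}
def w(m, n):
    # integrable vector field in the box B_{m,n}, for this configuration
    return (2 * n + 1, -(2 * m + 1))

def v_at_vertex(sigma_j, m, n):
    # discrete field at the vertex, in units of lambda
    # (proposition thm:epsilon_j)
    eps = 1 if sigma_j > m else 0
    wx, wy = w(m, n)
    return (wx + eps, wy)

# when eps = 1, the result is the average of w(m, n) and w(m, n+1)
assert v_at_vertex(6, 3, 5) == tuple(
    (a + b) // 2 for a, b in zip(w(3, 5), w(3, 6)))
\end{verbatim}
\medskip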
\section{Proofs for section \ref{sec:MainTheorems}} \label{sec:lattice}
We prove theorems \ref{thm:Phi_equivariance} \& \ref{thm:minimal_densities}
via several lemmas.
The first and most significant step is to show that the orbit codes $\sigma(z)$ of points $z\in X^e$ are in
one-to-one correspondence with the set $\mathbb{Z}^2/\,\mathbb{L}^e$ of equivalence classes modulo $\mathbb{L}^e$.
We do this by constructing a sequence of nested lattices whose congruence classes are the
cylinder sets of the orbit code.
\subsection*{Orbit codes and lattice structure}
We define recursively a finite integer sequence $(q_j)$,
$j=1,\ldots,2k-1$, as follows:
\begin{align}
q_1&=(2v_1+1)^2\nonumber \\
q_j&=\begin{cases}
q_{j-1}&\mbox{if}\,\, v_j=v_{j-1}\\
\mathrm{lcm}((2v_j+1)(2v_{j-1}+1),q_{j-1}) &\mbox{if}\,\,\, v_j\neq v_{j-1}
\end{cases}
&j>1. \label{eq:q_j}
\end{align}
Then we let
\begin{equation}\label{eq:p_j}
p_j=q_j/(2v_j+1) \hskip 40pt j=1,\ldots,2k-1.
\end{equation}
By construction, $p_j$ is also an integer.
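\medskip
\noindent
The recursion (\ref{eq:q_j}) and the quotients (\ref{eq:p_j}) translate
directly into code. The following Python sketch (Python $\geq 3.9$, for
\texttt{math.lcm}) takes the vertex list $(v_1,\dots,v_k)$ as input and
extends it to the half-revolution $(v_1,\dots,v_{2k-1})$ using the palindromic
symmetry of the vertex types, here assumed to read $v_{2k-j}=v_j$:
\begin{verbatim}
from math import lcm

def q_sequence(v):
    # (q_j) for j = 1,...,2k-1, per the recursion (eq:q_j);
    # v = (v_1,...,v_k) is extended via the symmetry v_{2k-j} = v_j
    full = list(v) + list(v[-2::-1])
    q = [(2 * full[0] + 1) ** 2]
    for prev, cur in zip(full, full[1:]):
        q.append(q[-1] if cur == prev
                 else lcm((2 * cur + 1) * (2 * prev + 1), q[-1]))
    return q

def p_sequence(v):
    # p_j = q_j / (2 v_j + 1), per (eq:p_j); the division is exact
    full = list(v) + list(v[-2::-1])
    return [qj // (2 * vj + 1) for qj, vj in zip(q_sequence(v), full)]
\end{verbatim}
\medskip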
After defining the associated sequence of vectors
\begin{equation*}
\mathbf{L}_j = \frac{q_j}{2v_1+1} \; (1,1),
\end{equation*}
we let the lattices $\mathbb{L}^e_j$ be the $\mathbb{Z}$-modules with basis
\begin{equation} \label{eq:lattice_j}
\mathbb{L}^e_j =\left\langle \mathbf{L}_j,
\frac{1}{2}\left(\mathbf{L}_j -\mathbf{w}_{v_1,v_1}\right) \right\rangle.
\end{equation}
By construction
\begin{displaymath}
\mathbb{L}^e_{2k-1} \subseteq \mathbb{L}^e_{2k-2} \subseteq \dots \subseteq \mathbb{L}^e_1 \subset \mathbb{Z}^2.
\end{displaymath}
We claim that for all $1\leq j \leq 2k-1$, the closed form expression for $q_j$ is given by
\begin{equation}
q_j = \mathrm{lcm}((2v_{\iota(1)}+1)^2,(2v_{\iota(1)}+1)(2v_{\iota(2)}+1),
\dots,(2v_{\iota(i-1)}+1)(2v_{\iota(i)}+1)), \label{eq:q_j_closed_form}
\end{equation}
where $i$ is the number of distinct entries in the list $(v_1,v_2,\dots,v_j)$.
That the lowest common multiple (\ref{eq:q_j_closed_form}) runs over all products
$(2v_j+1)(2v_{j+1}+1)$ of consecutive, distinct vertex types follows from the form
(\ref{eq:v_iota}) of the vertex list and the symmetry (\ref{eq:v_symmetry}) of the vertex types.
Furthermore, since all distinct vertex types occur within the first $k$ vertex types,
the expression (\ref{eq:q_j_closed_form}) implies that the sequence $(q_j)$ is eventually
stationary:
\begin{equation} \label{eq:q_j=q}
q_j = q, \quad \mathbb{L}^e_j = \mathbb{L}^e \hskip 40pt k\leq j \leq 2k-1,
\end{equation}
where $q$ and $\mathbb{L}^e$ are given by equations (\ref{eq:q}) and (\ref{eq:Le}).
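\medskip
\noindent
Continuing the sketch above, the closed form (\ref{eq:q_j_closed_form}) and
the stationarity (\ref{eq:q_j=q}) can be checked directly. We assume, as the
form (\ref{eq:v_iota}) suggests, that each distinct vertex type occupies a
single consecutive run, so that run-compression of the list recovers the
$\iota$-subsequence; the sample list below is hypothetical.
\begin{verbatim}
from math import lcm

def q_closed_form(v):
    # closed form (eq:q_j_closed_form) for q = q_{2k-1}
    runs = [v[0]]
    for vj in v[1:]:
        if vj != runs[-1]:
            runs.append(vj)
    q = (2 * runs[0] + 1) ** 2
    for a, b in zip(runs, runs[1:]):
        q = lcm(q, (2 * a + 1) * (2 * b + 1))
    return q

v = [4, 5, 5, 6, 7]            # a hypothetical vertex list
qs = q_sequence(v)             # from the previous sketch
assert all(qj == q_closed_form(v) for qj in qs[len(v) - 1:])
\end{verbatim}
\medskip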
Any $z,\tilde{z} \in X^e$ which are congruent modulo $\lambda\mathbb{L}^e_j$ are related by
\begin{equation} \label{eq:z_tilde}
\tilde{z} = z + \frac{\lambda}{2}\left( (2a+b) \mathbf{L}_j -b\mathbf{w}_{v_1,v_1}\right),
\end{equation}
where $a,b\in\mathbb{Z}$ are the coordinates of $\tilde{z}-z$ relative to the module basis.
For a given $z$, $\tilde{z}$ is determined uniquely by the coefficient $2a+b$,
because if $z=\lambda(x,y)$, then
\begin{align*}
x\geq y \; \; &\Rightarrow \; \; b\in\{0,1\}, \\
x<y \; \; &\Rightarrow \; \; b\in\{-1,0\}.
\end{align*}
The point $z$ itself corresponds to $a=b=0$.
For given $e$, the following result details the role of the $\mathbb{L}^e_j$ as cylinder sets of the orbit code.
Applying the result for $j=2k-1$, along with the observation (\ref{eq:q_j=q}), implies that two points
share the same orbit code if and only if they are congruent modulo $\lambda\mathbb{L}^e$.
\begin{lemma} \label{thm:sigma_lattices}
Let $e$ be a critical number, let $k$ be the length of the vertex list of the
corresponding polygon class, and let $p_j$ and $\mathbb{L}^e_j$ be as above.
For any $1\leq j \leq 2k-1$ and all $z,\tilde{z}\in X^e$, the following three statements are equivalent:
\begin{enumerate}[(i)]
\item the orbit codes of $z$ and $\tilde{z}$ match up to the $j$th entry,
\item $z$ and $\tilde{z}$ are congruent modulo $\lambda\mathbb{L}^e_j$,
\item the points $\Psi^j(z)$ and $\Psi^j(\tilde{z})$ are congruent modulo
$\lambda p_j\mathbf{e}$, where $\mathbf{e}$ is the unit vector in the
direction of the non-integer coordinate of the $j$th vertex.
\end{enumerate}
\end{lemma}
\begin{proof} For $e\in\mathscr{E}$, let $z,\tilde z\in X^e$ and let the orbit codes
of $z$ and $\tilde{z}$ be denoted by
$(\sigma_{-1},\sigma_1,\dots,\sigma_{2k-1})$, and
$(\tilde\sigma_{-1},\tilde\sigma_1,\dots,\tilde\sigma_{2k-1})$, respectively.
We think of $z$ as fixed and $\tilde{z}$ as identified by the coordinates of $\tilde{z}-z$ in the relevant module.
We will make extensive use of proposition \ref{thm:epsilon_j}, page \pageref{thm:epsilon_j},
which describes the behaviour of $\Psi$ as a function of the orbit code.
We proceed by induction on $j$, with two induction hypotheses.
Firstly we suppose that $(i)$ is equivalent to $(ii)$, so that for any
$1\leq j\leq 2k-1$:
\begin{equation}
(\sigma_{-1},\sigma_1,\dots,\sigma_j)=
(\tilde\sigma_{-1},\tilde\sigma_1,\dots,\tilde\sigma_j)
\quad\Leftrightarrow\quad
\tilde{z} \equiv z \mod{\lambda\mathbb{L}^e_j}. \tag{H1}\label{eq:H1}
\end{equation}
Secondly, we suppose that $(ii)$ is equivalent to $(iii)$.
In particular:
\begin{equation} \tag{H2}\label{eq:H2}
\tilde{z} = z + \frac{\lambda}{2}\left( (2a+b) \mathbf{L}_j -b\mathbf{w}_{v_1,v_1}\right)
\quad\Leftrightarrow\quad
\Psi^j(\tilde{z}) = \Psi^j(z) + \lambda(2a +b) p_j \mathbf{e},
\end{equation}
where $\mathbf{e}$ is the unit vector in the direction of the non-integer coordinate of that vertex.
We begin with the base case $j=1$.
Suppose that the first vertex of a polygon in class $e$ lies on $y=v_1$,
so that $y$ is its integer coordinate (if $x$ is the integer coordinate,
then the analysis is identical).
By symmetry, the previous vertex lies on $x=v_1$ and its integer coordinate is $x$.
Using the property of $\Psi$ given in equation (\ref{eq:Psi_congruence2})
applied to $z\in B_{v_1,v_1}\setminus\Lambda$, we have
\begin{equation} \label{eq:Psi_congr4}
\Psi(z) \equiv z \mod{\lambda\mathbf{w}_{v_1,v_1}}.
\end{equation}
Furthermore, by proposition \ref{thm:epsilon_j}:
\begin{equation} \label{eq:Psi_congruence3}
\Psi^{-1}(z) + \lambda\epsilon_{-1}\mathbf{y} \equiv z \mod{\lambda\mathbf{w}_{v_1,v_1}},
\end{equation}
where $\mathbf{y}=(0,1)$ is the non-integer coordinate vector for the $(-1)$th vertex.
Thus if
$$ z=\lambda\left( \Bceil{\frac{v_1}{\lambda}}+x,\Bceil{\frac{v_1}{\lambda}}+y \right), $$
then by the definition (\ref{eq:sigma_j}) of the orbit code,
the $x$- and $y$-components of equations (\ref{eq:Psi_congruence3}) and (\ref{eq:Psi_congr4}),
respectively, give us that the first two entries in the orbit code $\sigma(z)$ satisfy
\begin{align*}
x \equiv \sigma_{-1} \mod{2v_1+1}, \\
y \equiv \sigma_1 \mod{2v_1+1}.
\end{align*}
It follows that $z,\tilde{z}\in X^e$ share the partial code $(\sigma_{-1},\sigma_1)$ if and only if
\begin{displaymath}
\tilde{z} \equiv z \mod{(\lambda(2v_1+1)\mathbb{Z})^2}.
\end{displaymath}
The lattice $\mathbb{L}^e_1$ is given by (cf.~(\ref{eq:lattice_j}))
\begin{displaymath}
\mathbb{L}^e_1 = \left\langle \mathbf{L}_1, \frac{1}{2} \left(\mathbf{L}_1 -\mathbf{w}_{v_1,v_1}\right) \right\rangle,
\end{displaymath}
where $\mathbf{L}_1=p_1(1,1)$, $p_1=2v_1+1$ and
\begin{displaymath}
\frac{1}{2} \left(\mathbf{L}_1 -\mathbf{w}_{v_1,v_1}\right) = \frac{1}{2} (p_1-p_1,p_1+p_1) = p_1\mathbf{y}.
\end{displaymath}
Thus $\mathbb{L}^e_1=((2v_1+1)\mathbb{Z})^2$ and the first hypothesis holds.
Now let $z,\tilde{z}\in X^e$ satisfy (\ref{eq:z_tilde}) with $j=1$.
If $\Psi(z) = F_{\lambda}^{4t}(z) = z +\lambda t \mathbf{w}_{v_1,v_1}$, where $t\in\mathbb{N}$ is the
transit time to $\Lambda$, then the identities
\begin{align*}
\tilde{z} + \lambda(t+ a +b)\mathbf{w}_{v_1,v_1}
&= \Psi(z) + \frac{\lambda}{2} (2a+b) \left(\mathbf{L}_1 +\mathbf{w}_{v_1,v_1}\right) \\
&= \Psi(z) + \lambda(2a +b)p_1 \mathbf{x},
\end{align*}
where $\mathbf{x}=(1,0)$ is the non-integer coordinate vector for the first vertex,
show that $\tilde{z}$ has transit time $t+a+b$,
and therefore $\Psi(\tilde{z}) = \Psi(z) + \lambda(2a +b)p_1 \mathbf{x}$, as required
(see figure \ref{fig:Psi_diagram}).
This completes the basis for induction.
\begin{figure}[t]
\centering
\includegraphics[scale=0.85]{TikZ/PsiPlot}
\caption{The points $z,\tilde{z}\in X^e$ and $\Psi(z),\Psi(\tilde{z})\in\Lambda$.
The set $X^e$ is shown in red, whereas the sets $\Lambda$ and $\Sigma$ are represented in light and dark grey, respectively.}
\label{fig:Psi_diagram}
\end{figure}
Now we proceed with the inductive step.
Let (\ref{eq:H1}) and (\ref{eq:H2}) hold for some $j\geq 1$.
Then $z$ and $\tilde{z}$ are related as in equation (\ref{eq:z_tilde}), for some $a$, $b$.
We think of $\tilde{\sigma}_{j+1}$, the $(j+1)$th entry of the orbit code of $\tilde{z}$,
as a function of $(a,b)$. We suppose that the $j$th vertex lies
on $y=n$ for some $n\in\mathbb{Z}$ (again the case in which the vertex lies on $x=m$ is identical).
Let the pair $(x_j,y_j)$ be defined from $\Psi^j(z)$ via equation (\ref{eq:(xj,yj)}).
Similarly, $\Psi^j(\tilde{z})$ defines the pair $(\tilde{x}_j,\tilde{y}_j)$.
By (\ref{eq:H2}), $\Psi^j(\tilde{z})$ satisfies
\begin{displaymath}
\Psi^j(\tilde{z}) = \Psi^j(z) + \lambda(2a +b) p_j \mathbf{x}.
\end{displaymath}
Combining this expression with proposition \ref{thm:epsilon_j}
applied to $\Psi^j(\tilde{z})\in\Lambda_{v_j,n-1}$, we obtain
\begin{align}
\Psi^{j+1}(\tilde{z}) &= \Psi^j(\tilde{z}) + \lambda\epsilon_j \mathbf{x} + \lambda\tilde t \mathbf{w}_{v_j,n-1} \nonumber \\
&= \Psi^j(z) + \lambda(2a +b) p_j\mathbf{x} + \lambda\epsilon_j \mathbf{x} + \lambda\tilde t \mathbf{w}_{v_j,n-1}, \label{eq:Psi^jp1tilde}
\end{align}
where $\tilde t$ is the transit time of $\tilde z$ to $\Lambda$.
There are now two cases to consider. \\
\noindent
\textit{Case 1: $v_j=v_{j+1}$.}
In this case the $j$th and $(j+1)$th vertices lie on parallel lines,
which we take to be $y=n$ and $y=n-1$, so $\Psi^{j+1}(z)$ is given by
\begin{align*}
\Psi^{j+1}(z) = \lambda\left( \Bceil{\frac{v_j}{\lambda}} + x_{j+1},\Bceil{\frac{n-1}{\lambda}} + y_{j+1} \right),
\end{align*}
and similarly for $\Psi^{j+1}(\tilde{z})$.
According to the definitions (\ref{eq:q_j}), (\ref{eq:p_j}) and (\ref{eq:lattice_j}),
we have $p_j=p_{j+1}$ and $\mathbb{L}^e_j=\mathbb{L}^e_{j+1}$.
Thus, to show that (\ref{eq:H1}) continues to hold, we need to show that $\tilde{\sigma}_{j+1}=\sigma_{j+1}$ for all $(a,b)$.
Similarly we need to show that the vector $\Psi^{j+1}(\tilde{z})-\Psi^{j+1}(z)$
is equal to the vector $\Psi^{j}(\tilde{z})-\Psi^{j}(z)$ of hypothesis (\ref{eq:H2}).
Because $y$ is the integer coordinate of both the $j$th and $(j+1)$th vertices,
the transit time is the same for the orbits of $z$ and $\tilde z$.
Therefore proposition \ref{thm:epsilon_j}
and equation (\ref{eq:Psi^jp1tilde}) with $\tilde t =t$ give us that
\begin{align*}
\Psi^{j+1}(\tilde{z}) &= \Psi^j(z) + \lambda(2a +b) p_j\mathbf{x}
+ \lambda\epsilon_j \mathbf{x} + \lambda t \mathbf{w}_{v_j,n-1} \\
&= \Psi^{j+1}(z) + \lambda(2a +b) p_j\mathbf{x},
\end{align*}
and the second hypothesis (\ref{eq:H2}) remains satisfied.
Furthermore, $\Psi^{j+1}(\tilde{z})$ and $\Psi^{j+1}(z)$ have the same integer
($y$) coordinate. It follows that, by the definition (\ref{eq:sigma_j})
of the orbit code, $\tilde \sigma_{j+1}=\sigma_{j+1}$ and (\ref{eq:H1})
is also satisfied.
By the $y$-component of (\ref{eq:Psi^jp1tilde}), the value of ${\sigma}_{j+1}$
is determined explicitly by the congruence
\begin{equation} \label{eq:sigma_j+1_case1}
\Bceil{\frac{n-1}{\lambda}} + {\sigma}_{j+1} \equiv
\Bceil{\frac{n}{\lambda}} + \sigma_j \mod{2v_j+1}.
\end{equation}
This congruence shows that if $v_j=v_{j+1}$,
then there is a permutation $\pi$ of the set $\{0,1,\dots,2v_j\}$,
dependent on $\lambda$ but not on $z$, such that all orbit codes which have $j$th entry $\sigma_j$
will have $(j+1)$th entry $\pi(\sigma_j)$.
\medskip
\noindent
\textit{Case 2: $v_j\neq v_{j+1}$.}
In this case the $j$th and $(j+1)$th vertices lie on perpendicular lines.
We take these to be the lines $y=n$ and $x=v_j+1$, respectively, so that
$v_{j+1}=n-1$ and $\Psi^{j+1}(z)$ is given by
\begin{align*}
\Psi^{j+1}(z) = \lambda\left( \Bceil{\frac{v_j+1}{\lambda}}
+ x_{j+1}, \Bceil{\frac{n-1}{\lambda}} + y_{j+1}\right).
\end{align*}
(If $x$ is the integer coordinate, then the analysis is identical.)
We shall demonstrate the form of $\mathbb{L}^e_{j+1}$ by identifying those pairs
$(a,b)$ for which $\tilde{\sigma}_{j+1}=\sigma_{j+1}$.
Taking the $x$-coordinate of equation (\ref{eq:Psi^jp1tilde}), and recalling the explicit form
(\ref{eq:sigma_j_2}) of the orbit code, we see that $\tilde{\sigma}_{j+1}$ is determined by
\begin{equation} \label{eq:t,2a+b_eqn}
\Bceil{\frac{v_j}{\lambda}} + x_j + (2a+b)p_j +\epsilon_j +\tilde t(2v_{j+1}+1)
= \Bceil{\frac{v_j+1}{\lambda}} +\tilde{\sigma}_{j+1} - (2v_{j+1}+1).
\end{equation}
We think of this as an integer equation of the form $A(2a+b)+B\tilde t=C$, which has solutions
$2a+b\in\mathbb{Z}$ and $\tilde t\in\mathbb{N}$ for some given value of $\tilde{\sigma}_{j+1}$ if and only if
\begin{displaymath}
C = \Bceil{\frac{v_j+1}{\lambda}} +\tilde{\sigma}_{j+1}
- (2v_{j+1}+1) - \Bceil{\frac{v_j}{\lambda}} - x_j - \epsilon_j
\end{displaymath}
is sufficiently large and $C\equiv 0 \; (\mathrm{mod} \; \gcd(A,B))$, i.e., if $\lambda$ is sufficiently small and $\tilde{\sigma}_{j+1}$ satisfies the congruence
\begin{equation}
\tilde{\sigma}_{j+1} \equiv \Bceil{\frac{v_j}{\lambda}}
+ x_j+\epsilon_j-\Bceil{\frac{v_j+1}{\lambda}} \mod{\gcd( p_j, 2v_{j+1}+1)}. \label{eq:sigma_j+1_case2}
\end{equation}
To find the lattice $\mathbb{L}^e_{j+1}$, we need to solve this equation in the case $\tilde{\sigma}_{j+1}=\sigma_{j+1}$.
By assumption, the point $z$, given by the module coordinates $a=b=0$, corresponds to the solution
$2a+b=0$, $\tilde t=t$, for some transit time $t\in\mathbb{N}$.
Hence the general solution of (\ref{eq:t,2a+b_eqn}) is given by
\begin{align}
\tilde t&= t - s\; \frac{p_j}{\gcd( p_j, 2v_{j+1}+1)}, \label{eq:GeneralSolution_t}\\
2a+b&= s \; \frac{2v_{j+1}+1}{\gcd( p_j, 2v_{j+1}+1)}, \label{eq:GeneralSolution_2a+b}
\end{align}
for $s\in\mathbb{Z}$. The second of these equations implies that $s$ must have the same parity
as $2a+b$, so we can write $s=2\tilde{a}+b$, where $\tilde{a}\in\mathbb{Z}$ and
$b\in\{0,\pm 1\}$ for an appropriate choice of sign.
Substituting this expression into equation (\ref{eq:z_tilde}), the points $\tilde{z}$
for which $\tilde{\sigma}_{j+1}=\sigma_{j+1}$ are given by
\begin{align*}
\tilde{z} &= z + \frac{\lambda}{2}\left( s \; \frac{2v_{j+1}+1}{\gcd( p_j, 2v_{j+1}+1)} \mathbf{L}_j -b\mathbf{w}_{v_1,v_1}\right) \\
&= z + \frac{\lambda}{2}\left( (2\tilde{a}+b) \mathbf{L}_{j+1} -b\mathbf{w}_{v_1,v_1}\right).
\end{align*}
The last equality is justified by the identities
\begin{align*}
\frac{2v_{j+1}+1}{\gcd( p_j, 2v_{j+1}+1)} \; \mathbf{L}_j
&= \frac{(2v_{j+1}+1)(2v_j+1)}{\gcd( (2v_j+1)p_j, (2v_{j+1}+1)(2v_j+1))}
\; \frac{q_j}{2v_1+1} \; (1,1) \\
&= \frac{q_{j+1}}{2v_1+1} \; (1,1) = \mathbf{L}_{j+1},
\end{align*}
where we have used the relationship $\mathrm{lcm}(a,b) = a b/\gcd(a,b)$.
Therefore the first hypothesis (\ref{eq:H1}) remains satisfied.
Substituting the general solution (\ref{eq:GeneralSolution_t}) and (\ref{eq:GeneralSolution_2a+b})
into equation (\ref{eq:Psi^jp1tilde})
and using proposition \ref{thm:epsilon_j}, we find
\begin{align*}
\Psi^{j+1}(\tilde{z}) &= \Psi^j(z) + \lambda(2a +b) p_j\mathbf{x} +
\lambda\epsilon_j \mathbf{x} + \lambda\tilde t \mathbf{w}_{v_j,v_{j+1}} \\
&= \Psi^{j+1}(z) + \frac{\lambda s p_j}{\gcd( p_j, 2v_{j+1}+1)} \left( (2v_{j+1}+1)\mathbf{x} - \mathbf{w}_{v_j,v_{j+1}}\right) \\
&= \Psi^{j+1}(z) + \frac{\lambda(2\tilde{a}+b)p_j}{\gcd( p_j, 2v_{j+1}+1)} (2v_j+1)\mathbf{y} \\
&= \Psi^{j+1}(z) + \lambda(2\tilde{a}+b)p_{j+1}\mathbf{y}.
\end{align*}
Hypothesis (\ref{eq:H2}) remains satisfied, completing the induction.
Thus hypotheses (\ref{eq:H1}) and (\ref{eq:H2}) hold for all $1\leq j\leq 2k-1$,
and the equivalence of the three statements follows.
\end{proof}
Lemma \ref{thm:sigma_lattices} gives two characterisations
of the cylinder sets of the lattices $\mathbb{L}^e_j$.
Firstly, a cylinder set $z+\lambda\mathbb{L}^e_j$ can be identified by the partial orbit code
$(\sigma_{-1},\sigma_1,\dots,\sigma_j)$
(by the equivalence of statements (i) and (ii)).
Secondly, it can be identified by a pair of the form $(\sigma_j,\gamma_j)$,
where $\sigma_j$ is the $j$th entry in the orbit code (i.e., a residue modulo $2v_j+1$),
and $\gamma_j$ is a residue modulo $p_j=q_j/(2v_j+1)$
(by the equivalence of statements (ii) and (iii)---see equation (\ref{eq:H2})).
In particular, since $\mathbb{L}^e\subset\mathbb{L}^e_j$ for all $1\leq j \leq 2k-1$,
we can identify the cylinder sets of $\mathbb{L}^e$ by pairs of the form $(\sigma_j,\gamma_j)$,
where $\gamma_j$ is a residue modulo $q/(2v_j+1)$.
For any $j$, there is a one-to-one correspondence between
the set of all orbit codes and the set of pairs $(\sigma_j,\gamma_j)$.
We will use this alternative construction below.
\begin{corollary} \label{cor:sigma_lattice}
Let $e$ be a critical number, let $k$ be the length of the vertex list of the
corresponding polygon class, and let $j$ be in the range $1\leq j \leq 2k-1$.
Then two points $z$ and $\tilde{z}$ in $X^e$ have the same orbit code if and only if
the points $\Psi^j(z)$ and $\Psi^j(\tilde{z})$ are congruent modulo $\lambda q/(2v_j+1)\,\mathbf{e}$,
where $\mathbf{e}$ is the unit vector in the direction of the non-integer coordinate
of the $j$th vertex.
\end{corollary}
\medskip
We have shown the equivalence between orbit codes
and congruence classes of $\mathbb{L}^e$.
To complete the proof of theorem \ref{thm:Phi_equivariance},
we show that the orbit code $\sigma(z)$
determines uniquely the behaviour of $z$ under the return map $\Phi$.
\begin{proof}[Proof of theorem \ref{thm:Phi_equivariance}]
Consider two points $z,z+\lambda l\in X^e$ for some $l\in\mathbb{L}^e$ given by
\begin{displaymath}
l= \frac{1}{2}\left( (2a+b) \mathbf{L} -b\mathbf{w}_{v_1,v_1}\right).
\end{displaymath}
According to lemma \ref{thm:sigma_lattices} applied to $j=2k-1$,
these two points have the same orbit code and reach the $(2k-1)$th vertex
at the points $\Psi^{2k-1}(z),\Psi^{2k-1}(z+\lambda l)\in\Lambda_{v_1,-(v_1+1)}$,
which are congruent modulo
$\lambda p_{2k-1}\mathbf{e}$, where $\mathbf{e}$ is the unit vector in the
non-integer direction. In particular, $\Psi^{2k-1}(z)$ and $\Psi^{2k-1}(z+\lambda l)$ are related via
\begin{align}
\Psi^{2k-1}(z+\lambda l) &= \Psi^{2k-1}(z) + \lambda(2a+b)p_{2k-1}\mathbf{e} \nonumber \\
&= \Psi^{2k-1}(z) + \lambda(2a+b)\frac{q}{(2v_1+1)} \; \mathbf{e}, \label{eq:vertex_2k-1}
\end{align}
where we have replaced $q_{2k-1}$ by $q$ using equation (\ref{eq:q_j=q}),
and $v_{2k-1}$ by $v_1$ using the symmetry (\ref{eq:v_symmetry}) of the vertex types.
We will show that the points where they reach the last vertex are related by a similar equation:
\begin{equation}
(\Psi^{2k-1}\circ F_{\lambda})(z+\lambda l) = (\Psi^{2k-1}\circ F_{\lambda})(z)
+ \lambda(2a+b)\frac{q}{(2v_1+1)} \; \mathbf{e}^{\perp}, \label{eq:vertex_8k-5}
\end{equation}
where the unit vector $\mathbf{e}^{\perp}$, the non-integer direction of the last vertex, is perpendicular to $\mathbf{e}$.
The last vertex of the return orbit ${\mathcal O}_{\tau}(z)$ lies in the set $\Lambda_{v_1,v_1}$,
so must be close to the image of the $(2k-1)$th vertex under $F_{\lambda}$ (see figure \ref{fig:Phi_Psi}).
If the $(2k-1)$th vertex lies on the line $x=v_1+1$, it is a simple exercise to show that these
two points are in fact equal, i.e., that $(\Psi^{2k-1}\circ F_{\lambda})(z) = (F_{\lambda}\circ \Psi^{2k-1})(z)$
for any $z\in X^e$. We consider the less obvious case in which the $(2k-1)$th
vertex lies on the line $y=-v_1$ and the non-integer direction is $\mathbf{x}=(1,0)$.
By the orientation of the vector field in the fourth quadrant, the orbit of
the point $z$ reaches this vertex at the point $\Psi^{2k-1}(z)$ given by:
\begin{equation}
\Psi^{2k-1}(z) = \lambda\left( \Bceil{\frac{v_1}{\lambda}} + x_{2k-1},
\Bceil{\frac{-v_1}{\lambda}} + y_{2k-1}\right), \label{eq:Psi_2k-1}
\end{equation}
where $x_{2k-1}\geq 0$ and $0\leq y_{2k-1}< 2v_1+1$.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.7]{TikZ/PsiBoxes}
\caption{The $(2k-1)$th and last vertices, joined by the action of $F_{\lambda}$. }
\label{fig:Phi_Psi}
\end{figure}
Applying $F_{\lambda}$ to (\ref{eq:vertex_2k-1}) and substituting the expression (\ref{eq:Psi_2k-1}), we get:
\begin{align*}
(F_{\lambda}\circ \Psi^{2k-1})(z+\lambda l)
&=(F_{\lambda}\circ \Psi^{2k-1})(z) +\lambda(2a+b)\frac{(2v_k+1) \;p}{(2v_1+1)} \; \mathbf{y} \\
& \hskip -20pt =\lambda\left( v_1-\Bceil{\frac{-v_1}{\lambda}} - y_{2k-1},
\Bceil{\frac{v_1}{\lambda}} + x_{2k-1} \right) +\lambda(2a+b)\frac{(2v_k+1) \;p}{(2v_1+1)} \; \mathbf{y},
\end{align*}
where the non-integer direction of the last vertex is $\mathbf{y}=(0,1)$.
By equation (\ref{eq:xj<0,yj>0}), if the first component of this point satisfies
$$
-(2v_1+1) \leq \left( v_1-\Bceil{\frac{-v_1}{\lambda}}
- y_{2k-1} \right) - \Bceil{\frac{v_1}{\lambda}} < 0,
$$
then this point is the last vertex of the orbit:
\begin{equation} \label{eq:Phi_equivariance_case1}
(\Psi^{2k-1}\circ F_{\lambda})(z+\lambda l) = (F_{\lambda}\circ \Psi^{2k-1})(z+\lambda l).
\end{equation}
If the above inequality is not satisfied, then it must be the upper bound that fails.
In this case $(F_{\lambda}\circ \Psi^{2k-1})(z+\lambda l)\in B_{v_1,v_1}$,
and we apply $F_{\lambda}^{-4}$ to find:
\begin{align}
(\Psi^{2k-1}\circ F_{\lambda})(z+\lambda l) &= (F_{\lambda}^{-3}\circ \Psi^{2k-1})(z+\lambda l) \nonumber \\
&= (F_{\lambda}\circ \Psi^{2k-1})(z+\lambda l) - \mathbf{v}((F_{\lambda}^{-3}\circ \Psi^{2k-1})(z+\lambda l)) \nonumber \\
&= (F_{\lambda}\circ \Psi^{2k-1})(z+\lambda l) -\lambda\mathbf{w}_{v_1,v_1}- \lambda\epsilon \, \mathbf{y}, \label{eq:Phi_equivariance_case2}
\end{align}
where the error term $\epsilon$ is independent of $(a,b)$ by proposition \ref{thm:epsilon_j}.
In both cases (\ref{eq:Phi_equivariance_case1}) and (\ref{eq:Phi_equivariance_case2}) the relationship (\ref{eq:vertex_8k-5}) follows.
Using (\ref{eq:vertex_8k-5}), the property (\ref{eq:Psi_congruence1}) of $\Psi$,
and the expression (\ref{eq:Phi_Psi}) for $\Phi$, we obtain
\begin{align*}
\Phi(z+\lambda l) &\equiv (\Psi^{2k-1}\circ F_{\lambda})(z+\lambda l) + \mathbf{v}((\Psi^{2k-1}\circ F_{\lambda})(z+\lambda l))
\mod{\lambda\mathbf{w}_{v_1,v_1}} \\
&\equiv (\Psi^{2k-1}\circ F_{\lambda})(z) + \frac{\lambda(2a+b)q}{(2v_1+1)} \;
\mathbf{y} + \mathbf{v}((\Psi^{2k-1}\circ F_{\lambda})(z)) \mod{\lambda \mathbf{w}_{v_1,v_1}} \\
&\equiv \Phi(z) + \frac{\lambda(2a+b)q}{(2v_1+1)} \;
\mathbf{y} \mod{\lambda\mathbf{w}_{v_1,v_1}} \\
&\equiv \Phi(z) + \frac{\lambda}{2}\,\frac{(2a+b)q}{(2v_1+1)} \;
\left((\mathbf{y}+\mathbf{x})+(\mathbf{y}-\mathbf{x})\right) \mod{\lambda \mathbf{w}_{v_1,v_1}} \\
&\equiv \Phi(z) + \frac{\lambda}{2}(2a+b)\left( \mathbf{L} - \frac{q}{(2v_1+1)^2} \;
\mathbf{w}_{v_1,v_1} \right) \mod{\lambda\mathbf{w}_{v_1,v_1}} \\
&\equiv \Phi(z) + \frac{\lambda}{2}\left((2a+b)\mathbf{L} - b \mathbf{w}_{v_1,v_1}\right) \mod{\lambda \mathbf{w}_{v_1,v_1}} \\
&\equiv \Phi(z) + \lambda l \mod{\lambda \mathbf{w}_{v_1,v_1}},
\end{align*}
where we have also used the fact that $(\mathbf{y}+\mathbf{x})=(1,1)$,
$(\mathbf{y}-\mathbf{x})=(-1,1)$, and that $q/(2v_1+1)^2$ is odd.
This completes the proof of theorem \ref{thm:Phi_equivariance}.
\end{proof}
\medskip
\subsection*{The density of fixed points}
The set $\Theta^e$ of possible orbit codes is a subset of the product space
\begin{equation*}
\Theta^e \subseteq \{ 0,1,\dots,2v_1 \} \times \prod_{j=1}^{2k-1} \{ 0,1,\dots,2v_j \}.
\end{equation*}
The set of congruence classes modulo $\mathbb{L}^e$ is given by the quotient space $\mathbb{Z}^2/\,\mathbb{L}^e$,
so that the total number of possible orbit codes is given by
\begin{equation} \label{eq:theta_e}
\# \Theta^e = \#\left(\mathbb{Z}^2/\,\mathbb{L}^e\right) = -\frac{1}{2} \; \det\left(\mathbf{L}, \mathbf{w}_{v_1,v_1}\right) = q(e).
\end{equation}
We note that although the lattice $\mathbb{L}^e$ is independent of $\lambda$, the set of orbit codes $\Theta^e$ is not.
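\medskip
\noindent
As a consistency check on (\ref{eq:theta_e}), the index of $\mathbb{L}^e$ in
$\mathbb{Z}^2$ is the absolute determinant of the basis (\ref{eq:Le}). A
short Python sketch, with $\mathbf{L}=\frac{q}{2v_1+1}(1,1)$ and
$\mathbf{w}_{v_1,v_1}=(2v_1+1)(1,-1)$, the latter read off from the base case
in the proof of lemma \ref{thm:sigma_lattices}:
\begin{verbatim}
def lattice_index(q, v1):
    # |det| of the basis {L, (L - w)/2} of L^e; the halving is exact
    # because q / (2 v1 + 1)^2 is odd
    p1 = 2 * v1 + 1
    c = q // p1
    L, b2 = (c, c), ((c - p1) // 2, (c + p1) // 2)
    return abs(L[0] * b2[1] - L[1] * b2[0])
\end{verbatim}
Expanding the determinant gives $c\,p_1=q$, in agreement with (\ref{eq:theta_e}).
\medskip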
In the next lemma, we identify the orbit codes which correspond to symmetric
fixed points of $\Phi$.
Subsequently, in lemmas \ref{lemma:sigma_j_I} and \ref{lemma:sigma_j_II}, we identify values of $e$
for which the number of codes in $\Theta^e$ which satisfy the conditions of lemma
\ref{lemma:minimal_codes} is independent of $\lambda$.
The proof of theorem \ref{thm:minimal_densities} will then follow.
\begin{lemma} \label{lemma:minimal_codes}
For any $e\in\mathscr{E}$ with vertex list $(v_1,\dots,v_k)$, $z\in X^e$ and sufficiently small $\lambda$,
the point $z$ is a symmetric fixed point of $\Phi$ if and only if its orbit code
$\sigma(z)=(\sigma_{-1},\sigma_1,\dots,\sigma_{2k-1})$ satisfies:
\begin{center}
\textit{(i)} \; $\sigma_{-1}=\sigma_1$, \hspace{2cm} \textit{(ii)} \; $2\sigma_k \equiv v_k \mod{2v_k+1}$.
\end{center}
\end{lemma}
\begin{proof}
Recall the reversing symmetries $G$ and $H$ of $F$, introduced in equation (\ref{def:GH}),
and their fixed spaces (\ref{eq:FixG}) and (\ref{eq:FixH}).
We have already noted that $G$ is also a reversing symmetry of $F_{\lambda}$,
and the rescaled version of $H$ is given by
$$ H_{\lambda}(x,y) = (\lambda\fl{y}-x,y) \hskip 40pt \Fix{H_{\lambda}} = \{ (x,y)\in\mathbb{R}^2 \, : \; 2x=\lambda\fl{y} \}. $$
Recall also the reversing symmetry $G^e$ of $\Phi$ on $X^e$ (equation (\ref{eq:Ge})).
As in the proof of proposition \ref{prop:square_orbits}, we show that orbits are symmetric
and periodic using theorem \ref{thm:SymmetricOrbits}, page \pageref{thm:SymmetricOrbits}.
Take $e\in\mathscr{E}$ and a point $z\in X^e$. Suppose that $z$ is a symmetric fixed point of $\Phi$.
We must have $G^e(z)=z$, so the alternative expression (\ref{eq:Ge_G}) for $G^e$
gives that either $z\in\Fix{G}$, or
$$ F^4(z) = G(z) \hskip 20pt \Leftrightarrow \hskip 20pt F^2(z)\in\Fix{G}. $$
Thus the return orbit ${\mathcal O}_{\tau}(z)$ intersects $\Fix{G}$.
If $z$ is non-zero, the orbit of $z$ intersects the set $\Fix{G}\cup\Fix{H_{\lambda}}$ at exactly two points,
and as $z$ is a fixed point of $\Phi$, these two points must occur within a single revolution.
Hence we have:
$$ \#\left( {\mathcal O}_{\tau}(z) \cap \left(\Fix{G}\cup\Fix{H_{\lambda}}\right)\right)=2. $$
If the return orbit intersects $\Fix{G}$ twice, then these points must occur in the first and third quadrants, so that
$$
z \in \Fix{G} \cap F^{-2}(\Fix{G}).
$$
By theorem \ref{thm:SymmetricOrbits} part (iii), this implies that $z$ is periodic with period four,
and we have already observed that there are no points with minimal period four.
Thus the return orbit ${\mathcal O}_{\tau}(z)$ must intersect $\Fix{H_{\lambda}}$.
Points in $\Fix{H_{\lambda}}$ lie on disjoint vertical line segments of length one, in an
$O(\lambda)$-neighbourhood of the $y$-axis. Recall that the polygon $\Pi(z)$ intersects
the axes at vertices of type $v_k=\fl{\sqrt{e}}$, and hence intersects the $y$-axis in the boxes:
$$ B_{0,v_k} \hskip 20pt \mbox{and} \hskip 20pt B_{0,-(v_k+1)}. $$
If $v_k$ is even, it follows that the relevant segment $H^+$ of $\Fix{H_{\lambda}}$ is given by:
$$
H^+ = \{ \lambda(x,y)\in(\lambda\mathbb{Z})^2 \, : \; 2x =\fl{ \lambda y } = v_k \},
$$
which lies in the positive half-plane.
Similarly if $v_k$ is odd, the relevant segment $H^-$ of $\Fix{H_{\lambda}}$ is given by:
$$
H^- = \{ \lambda(x,y)\in(\lambda\mathbb{Z})^2 \, : \; 2x =\fl{ \lambda y } = -(v_k+1) \},
$$
which lies in the negative half-plane.
The proof now proceeds in two parts. \\
\noindent
\textit{(i) ${\mathcal O}_{\tau}(z)$ intersects $\Fix{G}$ if and only if $\sigma_{-1}=\sigma_1$.}
If $z=\lambda(x,y)$, then the property $\sigma_{-1}=\sigma_1$ is satisfied if and only if:
\begin{displaymath}
x \equiv y \mod{2v_1+1},
\end{displaymath}
i.e., if and only if $z\in\Fix{G^e}$.
We have already seen that $z\in\Fix{G^e}$ implies that ${\mathcal O}_{\tau}(z)$ intersects $\Fix{G}$.
Since the only points in ${\mathcal O}_{\tau}(z)$ which are close to $\Fix{G}$ are $z$ and $F_{\lambda}^2(z)$,
the converse also holds. \\
\noindent
\textit{(ii) ${\mathcal O}_{\tau}(z)$ intersects $\Fix{H_{\lambda}}$ if and only if}
$2\sigma_k \equiv v_k$ (mod $2v_k+1$).
Instead of considering the sets $H^+$ and $H^-$ directly, we consider
their images under $G$ and $F_{\lambda}$, respectively, which lie in a neighbourhood of the $x$-axis:
\begin{align}
G(H^+) &= \{ \lambda(x,y)\in(\lambda\mathbb{Z})^2 \, : \; 2y=\fl{\lambda x} = v_k \},\nonumber \\
F_{\lambda}(H^-) &= \{ \lambda(x,y)\in(\lambda\mathbb{Z})^2 \, : \; 2y =\fl{-\lambda(x+1)}
= -(v_k+1) \}. \label{eq:F(FixH)}
\end{align}
In (\ref{eq:F(FixH)}), we assume that $\lambda(v_k+1)/2<1$,
so that $F_{\lambda}(w) = \lambda(-1-y,x)$ for all $w=\lambda(x,y)\in H^-$.
The orbit ${\mathcal O}_{\tau}(z)$ intersects $\Fix{H_{\lambda}}$ if and only if it intersects the relevant
one of these sets, according to the parity of $v_k$.
The polygon $\Pi(z)$ intersects the $x$-axis at the $k$th vertex, where $k$ is the length of the vertex list $V(e)$.
The return orbit ${\mathcal O}_{\tau}(z)$ reaches the $k$th vertex at the point $\Psi^k(z)$, given in the notation
of (\ref{eq:(xj,yj)}) by
\begin{displaymath}
\Psi^k(z) = \lambda\left( \Bceil{\frac{v_k}{\lambda}} + x_k, y_k\right),
\end{displaymath}
where, by (\ref{eq:sigma_j_2}), $y_k=\sigma_k$ is non-negative.
Hence if $v_k$ is even, ${\mathcal O}_{\tau}(z)$ intersects $\Fix{H_{\lambda}}$ if and only if:
$$ \Psi^k(z) \in G(H^+) \hskip 20pt \Leftrightarrow \hskip 20pt \sigma_k = v_k/2. $$
If $v_k$ is odd, then ${\mathcal O}_{\tau}(z)$ intersects $\Fix{H_{\lambda}}$ if and only if:
\begin{align*}
F_{\lambda}^4(\Psi^k(z)) \in F_{\lambda}(H^-) \hskip 20pt \Leftrightarrow \hskip 20pt
\sigma_k &= -(v_k+1)/2 + (2v_k+1) \\
&= (3v_k+1)/2.
\end{align*}
The congruence $2\sigma_k \equiv v_k$ (mod $2v_k+1$)
covers both of these cases, which completes the proof.
\end{proof}
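\medskip
\noindent
The congruence in condition (ii) of lemma \ref{lemma:minimal_codes} pins down
a single residue: $\sigma_k=v_k/2$ for $v_k$ even and $\sigma_k=(3v_k+1)/2$
for $v_k$ odd, as derived in the proof. A one-line Python verification:
\begin{verbatim}
def sigma_k_star(vk):
    # the unique sigma_k in {0,...,2 vk} with
    # 2 sigma_k = vk (mod 2 vk + 1)
    return vk // 2 if vk % 2 == 0 else (3 * vk + 1) // 2

assert all((2 * sigma_k_star(vk) - vk) % (2 * vk + 1) == 0
           for vk in range(100))
\end{verbatim}
\medskip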
\medskip
For all $e\in\mathscr{E}$ and sufficiently small $\lambda$,
the set $X^e$---see equation (\ref{def:Xe})---is non-empty and
contains at least one element from every congruence class
modulo $\lambda\mathbb{L}^e$. We now seek to identify the number of congruence classes
whose orbit code satisfies the conditions of lemma \ref{lemma:minimal_codes}.
As discussed in the proof of lemma \ref{lemma:minimal_codes}, the points $z\in X^e$ whose orbit code
$\sigma(z) = (\sigma_{-1},\sigma_1,\dots,\sigma_{2k-1})$ satisfies $\sigma_{-1} = \sigma_1$ are precisely
those which lie in $\Fix{G^e}$, i.e., with
$$ x \equiv y \mod{2v_1+1}. $$
All such points lie on one of two lines, parallel to the first generator $\mathbf{L}$ of the lattice $\mathbb{L}^e$.
Furthermore, all points on one line are congruent to those on the other,
as they are connected by the second generator $\lambda (\mathbf{L}-\mathbf{w}_{v_1,v_1})/2$.
Hence the number of points in $\Fix{G^e}$ modulo $\lambda\mathbb{L}^e$ is
\begin{equation*}
\frac{\#\left(\mathbb{Z}^2/\,\mathbb{L}^e\right)}{2v_1+1} = \frac{q}{2v_1+1},
\end{equation*}
where we have used the expression (\ref{eq:theta_e}) for $\#\left(\mathbb{Z}^2/\,\mathbb{L}^e\right)$.
It remains to determine what fraction of the points in $\Fix{G^e}$ satisfy
the second condition of lemma \ref{lemma:minimal_codes}.
We do this by identifying values of $e$ for which all possible values of
$\sigma_k$ occur with equal frequency, independently of $\lambda$.
If $k=1$, i.e., if a polygon class has just one vertex in the first octant ($e=0$),
then for any given $\sigma^*\in\{0,1,\dots,2v_1\}$,
the points $z=\lambda(x,y)\in\Fix{G^e}$ with $\sigma_1 = \sigma^*$ satisfy
\begin{displaymath}
x \equiv y \equiv \sigma^* \mod{2v_1+1}.
\end{displaymath}
Such points form a fraction
\begin{displaymath}
\frac{1}{2v_1+1}
\end{displaymath}
of all points in $\Fix{G^e}$ modulo $\lambda\mathbb{L}^e$.
Hence all possible values of $\sigma_1$ occur with equal frequency.
(In fact this case is trivial, since $v_1=0$ and all orbits are symmetric
fixed points of $\Phi$---see proposition \ref{prop:square_orbits}.)
More generally if $v_k=v_1$, i.e., if all vertices of the polygon class have the same type ($e=0,2,8$),
then all possible values of $\sigma_k$ occur with equal frequency in $\Fix{G^e}$ modulo $\lambda\mathbb{L}^e$.
This follows from the fact that, whenever $v_j=v_{j+1}$,
the map $\sigma_j \mapsto \sigma_{j+1}$ is a permutation of the set $\{0,1,\dots,2v_j\}$,
as we saw in case 1 of the proof of lemma \ref{thm:sigma_lattices}.
The following lemma deals with the case that a polygon class has two or more distinct vertex types.
We consider cylinder sets of the form $z+\lambda\mathbb{L}^e_j$,
i.e., sets of points whose orbit codes match up to the $j$th entry,
where $j+1$ is the index at which the last distinct vertex type first appears.
We show that under a certain congruence condition on the vertex list,
all possible values of $\sigma_k$ occur with equal frequency within any such cylinder set.
\begin{lemma} \label{lemma:sigma_j_I}
Let $e\in\mathscr{E}$. Suppose that the vertex list $(v_1,v_2,\dots,v_k)$ of the
associated polygon class has at least two distinct entries and let $j=\iota(l)-1$,
where $(\iota(i))_{i=1}^l$ is the sequence of distinct vertex types defined in (\ref{eq:v_iota}).
Furthermore, suppose that the vertex types satisfy
\begin{equation}
\gcd(2v_k+1,p_j) =1. \label{eq:coprimality_I}
\end{equation}
Then for every $z\in X^e$, all $\sigma^*\in\{0,1,\dots,2v_k\}$,
and all sufficiently small $\lambda$, the number of points in the set $(z + \lambda\mathbb{L}^e_j)$ modulo $\lambda\mathbb{L}^e$
whose orbit code has $k$th entry $\sigma^*$ is
\begin{equation*}
\frac{1}{2v_k+1} \, \# \left(\mathbb{L}^e_j/\,\mathbb{L}^e\right).
\end{equation*}
\end{lemma}
\begin{proof}
Suppose that $e\in\mathscr{E}$, that the vertex list $V(e)$ has at least two distinct entries,
and that the coprimality condition (\ref{eq:coprimality_I}) holds.
Let $z\in X^e$ have orbit code $\sigma(z) = (\sigma_{-1},\sigma_1,\dots,\sigma_{2k-1})$
and let the pair $(x_j,y_j)$ be defined as in equation (\ref{eq:(xj,yj)}), where
$j=\iota(l)-1$.
Since $z+\lambda\mathbb{L}^e_j$ is a cylinder set in the sense of lemma \ref{thm:sigma_lattices},
the orbit codes of all points $\tilde{z}\in z + \lambda\mathbb{L}^e_j$ match that of $z$ up to the $j$th entry.
We have to show that, among these orbit codes, all possible values of $\sigma_k$ occur with equal frequency.
Since $j=\iota(l)-1$, we have
$$ v_{j+1} = v_{\iota(l)} = v_k. $$
Let $\tilde{z}\in z + \lambda\mathbb{L}^e_j$ and let the $(j+1)$th entry
of the orbit code of $\tilde{z}$ be $\tilde{\sigma}_{j+1}$.
We show first that all possible values of $\tilde{\sigma}_{j+1}$ occur with equal frequency.
By construction $v_j\neq v_{j+1}$, so the possible values of $\tilde{\sigma}_{j+1}$ are determined by
case 2 of the proof of lemma \ref{thm:sigma_lattices}.
In the course of the proof, we saw that the points $\tilde{z}$ with some fixed value of
$\tilde{\sigma}_{j+1}$ correspond to solutions of an integer equation, given in the case where $y$ is the
non-integer coordinate of the $j$th vertex by equation (\ref{eq:t,2a+b_eqn}).
(A similar equation holds when $x$ is the non-integer coordinate.)
Each solution $(2a+b,\tilde{t})\in \mathbb{Z}\times\mathbb{N}$ determines the module coordinates $(a,b)$ of
$\tilde{z}-z$ in $\lambda\mathbb{L}^e_j$ and the transit time $\tilde{t}$ of $\tilde{z}$ from the $j$th vertex to the $(j+1)$th.
Solutions of (\ref{eq:t,2a+b_eqn}) occur for all values of $\tilde{\sigma}_{j+1}$
satisfying the congruence (\ref{eq:sigma_j+1_case2}), and the condition that $\lambda$
be sufficiently small ensures that
at least one such solution is realised by a point $\tilde{z}\in X^e$.
By construction, each distinct value of $\tilde{\sigma}_{j+1}$
which has a solution defines a unique point in $z + \lambda\mathbb{L}^e_j$ modulo $\lambda\mathbb{L}^e_{j+1}$.
However, due to the coprimality condition (\ref{eq:coprimality_I}), the modulus of the congruence
(\ref{eq:sigma_j+1_case2}) is unity. Hence solutions occur for all possible values of $
\tilde{\sigma}_{j+1}$, and each corresponds to a unique congruence class of
$z + \lambda\mathbb{L}^e_j$ modulo $\lambda\mathbb{L}^e_{j+1}$. Furthermore, by (\ref{eq:q_j=q}), the lattices
$\mathbb{L}^e_{j+1}$ and $\mathbb{L}^e$ are equal, hence all possible values of $\tilde{\sigma}_{j+1}$
occur exactly once in $z + \lambda\mathbb{L}^e_j$ modulo $\lambda\mathbb{L}^e$.
If $j+1=\iota(l)=k$ then this completes the proof. If $\iota(l)<k$, take $i$ in the range
$\iota(l) \leq i <k$. By the definition of $\iota(l)$ as the index of the last distinct vertex type,
we have $v_{i}=v_{i+1}=v_k$. As discussed above, the map $\sigma_i \mapsto \sigma_{i+1}$
is a permutation of the set $\{0,1,\dots,2v_i\}$ whenever $v_i = v_{i+1}$.
Hence the equal frequency of the possible values of $\tilde{\sigma}_{i}$ implies that of $\tilde{\sigma}_{i+1}$
and the result follows.
\end{proof}
\medskip
In the previous section (equation (\ref{eq:sigma_j})), we defined
the $j$th entry $\sigma_j$ of the orbit code $\sigma(z)$ as the residue modulo $2v_j+1$
of the integer coordinate of ${\mathcal O}_{\tau}(z)$ at the $j$th vertex.
At the conclusion of the proof of lemma \ref{thm:sigma_lattices},
we remarked that we can also define the sequence $\gamma(z)$, whose $j$th entry
$\gamma_j$ is the residue of the non-integer coordinate modulo $q/(2v_j+1)$
(see corollary \ref{cor:sigma_lattice}, page \pageref{cor:sigma_lattice}).
The residue $\gamma_j$ is an alternative encoding of the other entries in the orbit code,
so that for any $z,\tilde{z}\inX^e$ and any $j$:
\begin{displaymath} \label{eq:(sigma_j,gamma_j)}
\tilde{z} \equiv z \mod{\lambda\mathbb{L}^e} \hskip 20pt \Leftrightarrow \hskip 20pt (\sigma_j,\gamma_j) = (\tilde{\sigma}_j,\tilde{\gamma}_j).
\end{displaymath}
In the following lemma, we use $\gamma(z)$ to identify polygon classes where,
among points in $\Fix{G^e}$ and for all $j$,
all possible values $\sigma_j\in\{0,1,\dots,2v_j\}$ occur with equal
frequency modulo $\lambda\mathbb{L}^e$, independently of $\lambda$.
\begin{lemma} \label{lemma:sigma_j_II}
Let $e\in\mathscr{E}$. Suppose that the vertex list $(v_1,v_2,\dots,v_k)$ of the
associated polygon class is such that $2v_1+1$ is coprime to $2v_j+1$ for all other vertex types $v_j$:
\begin{equation} \label{eq:v1_coprimality}
\gcd(2v_1+1,2v_j+1) =1 \hskip 40pt 2\leq j \leq k, \; v_j\neq v_1.
\end{equation}
Then for sufficiently small $\lambda$, for all $j$ in $1\leq j \leq 2k-1$,
and all $\sigma^*\in\{0,1,\dots,2v_j\}$, the number $n_j$ of points $z\in \Fix{G^e}$
modulo $\lambda\mathbb{L}^e$ whose orbit code has $\sigma_j = \sigma^*$ is given by:
\begin{equation}\label{eq:n_j}
n_j=\frac{\#\left(\mathbb{Z}^2/\,\mathbb{L}^e\right)}{(2v_1+1)(2v_j+1)}.
\end{equation}
\end{lemma}
\begin{proof}
We use induction on $j$. Consider points $z\in\Fix{G^e}$ whose orbit code
has $j$th value $\sigma_j$, for some arbitrary
$\sigma_j\in\{0,1,\dots,2v_j\}$ and $j\in\{1,\dots,2k-1\}$.
Let the sequence $\gamma(z)$ be denoted $(\gamma_{-1},\gamma_1,\dots,\gamma_{2k-1})$.
Our induction hypotheses are that: \\
(i) the number of such points is given by $n_j$ (equation (\ref{eq:n_j})),
where $q(e)=\#\left(\mathbb{Z}^2/\,\mathbb{L}^e\right)$ and the coprimality condition (\ref{eq:v1_coprimality})
ensures that $n_j$ is a natural number; \\
(ii) for each residue $r \in \{0,1,\dots, n_j-1\}$ modulo $n_j$, there is a unique congruence class modulo $\lambda\mathbb{L}^e$ whose value of $\gamma_j$ satisfies
\begin{displaymath}
\gamma_j \equiv r \mod{n_j}.
\end{displaymath}
The base case is $j=1$. The points $z=\lambda(x,y)\in\Fix{G^e}$ with some fixed value of $\sigma_1$ satisfy:
\begin{displaymath}
x \equiv y \equiv \sigma_1 \mod{2v_1+1}.
\end{displaymath}
Such points are congruent modulo $(\lambda(2v_1+1)\mathbb{Z})^2$, hence the number of such points modulo $\lambda\mathbb{L}^e$ is
\begin{displaymath}
\frac{q}{(2v_1+1)^2} = n_1.
\end{displaymath}
By lemma \ref{thm:sigma_lattices}, if $z$ is one such point, then any other point $\tilde{z}$ reaches the first vertex at
\begin{displaymath}
\Psi(\tilde{z}) = \Psi(z) + \lambda s p_1 \mathbf{e}
\end{displaymath}
for some $s\in\mathbb{Z}$, where $p_1=2v_1+1$ and $\mathbf{e}$ is the unit vector in the non-integer coordinate direction.
Then by the construction of $\gamma(z)$, if $\gamma(\tilde{z})=(\tilde{\gamma}_{-1},\tilde{\gamma}_1,\dots,\tilde{\gamma}_{2k-1})$,
the value of $\tilde{\gamma}_1$ is related to $\gamma_1$ by
\begin{displaymath}
\tilde{\gamma}_1 \equiv \gamma_1 + s (2v_1+1) \mod{(2v_1+1) \, n_1},
\end{displaymath}
and $z$ and $\tilde{z}$ are congruent modulo $\lambda\mathbb{L}^e$ if and only if $\tilde{\gamma}_1 = \gamma_1$.
Thus the $n_1$ distinct points modulo $\lambda\mathbb{L}^e$ correspond to distinct values of $s$ modulo $n_1$. Furthermore, if we consider the value of $\tilde{\gamma}_1$ modulo $n_1$, we have:
\begin{displaymath}
\tilde{\gamma}_1 \equiv \gamma_1 + s (2v_1+1) \mod{n_1}.
\end{displaymath}
Now $2v_1+1$ is coprime to the modulus: by the coprimality condition (\ref{eq:v1_coprimality}) and the construction (\ref{eq:q}) of $q$, the second power is the highest power of $(2v_1+1)$ dividing $q$, so that $\gcd(2v_1+1,n_1)=1$. It follows that distinct values of $\tilde{\gamma}_1$ remain distinct modulo $n_1$. This completes the base case.
To proceed with the inductive step, we suppose that the above hypotheses hold for some $j\in\{1,\dots,k-1\}$.
In the proof of lemma \ref{thm:sigma_lattices} we used equation (\ref{eq:Psi^jp1}) to
describe the behaviour of points as they move from one vertex to the next in two cases.
The first case occurs when $v_j = v_{j+1}$, so that the $j$th and $(j+1)$th vertices lie on parallel lines, and $n_j = n_{j+1}$.
In this case, the value of $\sigma_{j+1}$ is determined uniquely by the value of $\sigma_j$.
In particular, we saw that if the $j$th vertex lies on $y=n$ and the $(j+1)$th vertex lies on $y=n-1$,
then $\sigma_{j+1}$ and $\sigma_j$ are related by equation (\ref{eq:sigma_j+1_case1}).
We can use the same methods, considering this time the non-integer component of equation
(\ref{eq:Psi^jp1}), to show that $\gamma_{j+1}$ is determined by the
pair $(\sigma_j,\gamma_j)$ via:
\begin{displaymath}
\gamma_{j+1} \equiv \gamma_{j} +\epsilon_j +(2n-1)t \mod{(2v_1+1) \, n_j},
\end{displaymath}
where $\epsilon_j=\epsilon_j(\sigma_j)$, and $t = t(\sigma_j)$ is the transit time between vertices.
The one-to-one relationship between $\sigma_j$ and $\sigma_{j+1}$ ensures that there are $n_{j+1}=n_j$
points in $\Fix{G^e}$ that achieve any given value of $\sigma_{j+1}$ at the $(j+1)$th vertex.
Similarly, for any given value of $\sigma_j$, the above congruence
establishes a one-to-one relationship between $\gamma_j$ and $\gamma_{j+1}$ modulo $(2v_1+1)n_j$.
Because this bijection is a translation, it also holds modulo $n_j$.
In other words, there is a skew-product map of residue classes modulo $n_j$:
$(\sigma_j,\gamma_j)\mapsto(\sigma_{j+1},\gamma_{j+1})$.
This completes the inductive step for the first case.
In the second case, where $v_j \neq v_{j+1}$, the $j$th and $(j+1)$th vertices lie on perpendicular lines.
Again referring to the proof of lemma \ref{thm:sigma_lattices}, taking equation (\ref{eq:t,2a+b_eqn})
modulo $2v_{j+1}+1$ gives the following expression for $\sigma_{j+1}$ in terms of the pair $(\sigma_j,\gamma_j)$:
\begin{displaymath}
\Bceil{\frac{v_j+1}{\lambda}} +\sigma_{j+1}
\equiv \Bceil{\frac{v_j}{\lambda}} + \gamma_j +\epsilon_j \mod{2v_{j+1}+1}.
\end{displaymath}
Here we were able to replace $x_j$ with $\gamma_j$ as, by the construction (\ref{eq:q}) of $q$,
$2v_{j+1}+1$ is a divisor of the modulus $q/(2v_j+1)=(2v_1+1)n_j$ which defines $\gamma_j$.
If the coprimality condition (\ref{eq:v1_coprimality}) holds, then $2v_{j+1}+1$ also divides $n_j$.
Hence for any given pair $(\sigma_j,\sigma_{j+1})$, there are $n_j/(2v_{j+1}+1)$ values of
$\gamma_j$ modulo $n_j$ for which the following congruence is satisfied:
\begin{equation}\label{eq:gamma_j_II}
\gamma_j \equiv \Bceil{\frac{v_j+1}{\lambda}} +\sigma_{j+1}
- \Bceil{\frac{v_j}{\lambda}} -\epsilon_j + s(2v_{j+1}+1) \mod{n_j}
\end{equation}
where $s\in\mathbb{Z}$. The total number of points with any given value of $\sigma_{j+1}$ is thus:
\begin{displaymath}
(2v_j+1) \times \frac{n_j}{2v_{j+1}+1} = n_{j+1},
\end{displaymath}
which completes the inductive step for hypothesis (i).
Taking the second component of equation (\ref{eq:Psi^jp1}) modulo $n_{j+1}$ gives an expression for
$\gamma_{j+1}$ in terms of the pair $(\sigma_j,\gamma_j)$:
\begin{equation} \label{eq:gamma_jp1}
\gamma_{j+1}
\equiv \Bceil{\frac{n}{\lambda}} + \sigma_j + (2v_j+1)t - \Bceil{\frac{n-1}{\lambda}} \mod{n_{j+1}},
\end{equation}
where $t=t(\sigma_j,\gamma_j)$. For a given pair $(\sigma_j,\sigma_{j+1})$,
$t$ is given by equation (\ref{eq:t,2a+b_eqn}).
Hence taking equation (\ref{eq:t,2a+b_eqn}) modulo $n_j$, a multiple of $2v_{j+1}+1$,
and using the expression (\ref{eq:gamma_j_II}) for $\gamma_j$,
it follows that the values of $t$ satisfy
\begin{align*}
t+1 &\equiv \frac{ \ceil{(v_j+1)/\lambda} +\sigma_{j+1}
- \ceil{v_j/\lambda} - \gamma_j -\epsilon_j}{2v_{j+1}+1} \mod{\frac{n_j}{2v_{j+1}+1}} \\
&\equiv -s \mod{n_j/(2v_{j+1}+1)},
\end{align*}
where $s\in\mathbb{Z}$. Thus $t$ takes all values modulo $n_j/(2v_{j+1}+1) = n_{j+1}/(2v_j+1)$.
Applying this to equation (\ref{eq:gamma_jp1}) and letting $\sigma_j$ vary across the range $\sigma_j\in\{0,1,\dots,2v_j\}$,
we see that $\gamma_{j+1}$ achieves a complete set of residue classes modulo $n_{j+1}$, as required.
This completes the inductive step for hypothesis (ii) and the result follows from hypothesis (i) for $j=k$.
\end{proof}
\medskip
Finally, we can give the proof of theorem \ref{thm:minimal_densities} (page \pageref{thm:minimal_densities})
on the density of minimal orbits.
\begin{proof}[Proof of theorem \ref{thm:minimal_densities}]
Let $e\in\mathscr{E}$ be given and let $\sigma^*$ be the unique element of the set $\{0,1,\dots,2v_k\}$ that satisfies
\begin{displaymath}
2\sigma^* \equiv v_k \mod{2v_k+1}.
\end{displaymath}
By lemma \ref{lemma:minimal_codes}, $z\in X^e$ is a symmetric fixed point of
$\Phi$ if and only if its orbit code satisfies $\sigma_{-1}=\sigma_1$,
i.e., if $z\in\Fix{G^e}$, and $\sigma_k=\sigma^*$.
We will show that the number of points in $\Fix{G^e}$ modulo $\lambda\mathbb{L}^e$
whose orbit code has $\sigma_k=\sigma^*$ is given by
$$ \frac{q}{(2v_1+1)(2v_k+1)}, $$
where $q$, given by (\ref{eq:q}), is the total number of points modulo $\lambda\mathbb{L}^e$.
For $e=0,2,8$, all elements of the vertex list are the same. This case is dealt
with by the discussion preceding lemma \ref{lemma:sigma_j_I}.
Thus we assume that the vertex list contains at least two distinct elements.
Suppose first that $2v_1+1$ is coprime to $2v_j+1$ for all $v_j\neq v_1$,
so that the condition (\ref{eq:v1_coprimality}) for lemma \ref{lemma:sigma_j_II} holds.
Applying the lemma for $j=k$, we have that for sufficiently small $\lambda$,
the number of points in $\Fix{G^e}$ modulo $\lambda\mathbb{L}^e$ whose orbit code has $k$th entry $\sigma^*$ is
given by
$$ n_k = \frac{q}{(2v_1+1)(2v_k+1)}, $$
as required.
Suppose now that $2v_k+1$ is coprime to $2v_j+1$ for all $v_j\neq v_k$.
Let $j=\iota(l)-1$ be the penultimate distinct vertex type in the vertex list.
Note that $v_j = v_{\iota(l-1)}$ and $q_j=q_{\iota(l-1)}$, where
$q_j$ is given in closed form by (\ref{eq:q_j_closed_form}),
and that $v_i\neq v_k$ for all $1\leq i \leq j$.
It follows that $2v_k+1$ is coprime to $q_{\iota(l-1)}$.
Similarly $2v_k+1$ is coprime to $p_{\iota(l-1)}=q_{\iota(l-1)}/(2v_j+1)$,
and the condition (\ref{eq:coprimality_I}) of lemma \ref{lemma:sigma_j_I} holds.
Applying the lemma, we have that in every cylinder set of $\lambda\mathbb{L}^e_j$,
the number of points modulo $\lambda\mathbb{L}^e$ whose orbit code has $k$th entry $\sigma^*$ is given by
\begin{displaymath}
\frac{1}{2v_k+1} \#\left(\mathbb{L}^e_j/\mathbb{L}^e\right) = \frac{1}{2v_k+1}\,\frac{q}{q_j}.
\end{displaymath}
The set $\Fix{G^e}$ is the union of $q_j/(2v_1+1)$ such cylinder sets.
Hence, as before, the number of symmetric fixed points in $X^e$ modulo $\lambda\mathbb{L}^e$ is
\begin{displaymath}
\frac{1}{2v_k+1}\,\frac{q}{q_j}\times \frac{q_j}{2v_1+1} = \frac{q}{(2v_1+1)(2v_k+1)}.
\end{displaymath}
This number is independent of $\lambda$, which completes the proof of the first statement.
We have shown that for sufficiently small $\lambda$, and if (\ref{eq:v1_k_coprimality})
holds, then the fraction of symmetric fixed points of $\Phi$ in each fundamental domain of $\lambda\mathbb{L}^e$ is
\begin{displaymath}
\frac{1}{(2v_1+1)(2v_k+1)} = \frac{1}{(2\fl{\sqrt{e/2}}+1)(2\fl{\sqrt{e}}+1)},
\end{displaymath}
where we have used equations (\ref{def:v1}) and (\ref{def:vk}) for $v_1$ and $v_k$.
It remains to show that the density $\delta(e,\lambda)$ of symmetric fixed points in $X^e$ converges
to this fraction as $\lambda\rightarrow 0$.
By equation (\ref{def:Xe}), the domain $X^e$ is a subset of the lattice $(\lambda\mathbb{Z})^2$ bounded
by a rectangle lying parallel to the symmetry line $\Fix{G}$.
Similarly, a fundamental domain of the lattice $\lambda\mathbb{L}^e$ is a subset of $(\lambda\mathbb{Z})^2$ bounded
by a parallelogram of the form
\begin{displaymath}
\{ \alpha \mathbf{L} + \frac{\beta}{2}(\mathbf{L}-\mathbf{w}_{v_1,v_1}) \, : \; \alpha,\beta\in[0,\lambda) \},
\end{displaymath}
where the generator $\mathbf{L}$ is also parallel to the symmetry line.
These parallelograms tile the plane under translation by the elements of $\lambda\mathbb{L}^e$.
The width of $X^e$ (taken in the direction perpendicular to $\Fix{G}$) is $\lambda\|\mathbf{w}_{v_1,v_1}\|$---exactly twice
that of the above parallelogram (see figure \ref{fig:lattice_Le}, page \pageref{fig:lattice_Le}).
The number of parallelograms which fit lengthwise into $X^e$, however, goes to infinity as $\lambda$ goes to zero.
If $I^e(\lambda)=(\alpha_1,\alpha_2)\subset\mathscr{I}^e$, then the length $d$ of $X^e$ parallel to $\Fix{G}$ is given by
\begin{align*}
d &= \sqrt{2}\left(P^{-1}(\alpha_2/2)-P^{-1}(\alpha_1/2)\right) \\
&=\frac{1}{\sqrt{2}}\left(\frac{|I^e|}{2v_1+1}\right) \\
&=\frac{1}{\sqrt{2}}\left(\frac{|\mathscr{I}^e|}{2v_1+1}\right) + O(\lambda)
\end{align*}
as $\lambda\to 0$, where we have used the expression (\ref{def:Pinv}) for $P^{-1}$,
and proposition \ref{prop:Ie} for the length of $I^e(\lambda)$.
Thus, the number of parallelograms which can be contained in the rectangle bounding $X^e$ is at least
\begin{displaymath}
2 \left(\Bfl{ \frac{d}{\lambda \|\mathbf{L}\|} } -1 \right) - 8,
\end{displaymath}
where $\fl{d/\lambda \|\mathbf{L}\|}$ is the number of times that the vector
$\mathbf{L}$ fits lengthways into the rectangle,
we subtract $1$ for the slope of the parallelogram, and we subtract $8$ for the parallelograms which intersect
the boundary. Each parallelogram contains a complete fundamental domain of $\lambda\mathbb{L}^e$, and their
contribution to $\delta(e,\lambda)$ dominates in the limit $\lambda\rightarrow 0$.
Explicitly, the number of points in $X^e$ scales like
\begin{align*}
\# X^e &= \frac{2(2v_1+1)}{\lambda}\left(P^{-1}(\alpha_2/2)-P^{-1}(\alpha_1/2)\right) + O(1) \\
&= \frac{|I^e|}{\lambda} + O(1) \\
&= \frac{|\mathscr{I}^e|}{\lambda} + O(1)
\end{align*}
as $\lambda\to 0$, whereas the length of $\mathbf{L}$ is given by (\ref{def:L}) as:
$$ \|\mathbf{L}\| = \frac{\sqrt{2}q}{2v_1+1}, $$
and $\#\left(\mathbb{Z}^2/\,\mathbb{L}^e\right)=q(e)$.
Hence the density $\delta(e,\lambda)$ satisfies
\begin{align*}
\delta(e,\lambda) &= \frac{\#\left(\mathbb{Z}^2/\,\mathbb{L}^e\right)}{\# X^e}
\left( \frac{2\Bfl{ d/\lambda \|\mathbf{L}\| } - 10}{(2v_1+1)(2v_k+1)} + O(1) \right) \\
&= \left(\frac{\lambda q}{|\mathscr{I}^e|} + O(\lambda^2) \right)
\left( \frac{2 d/\lambda \|\mathbf{L}\|}{(2v_1+1)(2v_k+1)} + O(1) \right) \\
&= \left(\frac{\lambda q}{|\mathscr{I}^e|} + O(\lambda^2) \right)
\left( \frac{ |\mathscr{I}^e|/\lambda q(e)}{(2v_1+1)(2v_k+1)} + O(1) \right) \\
& = \frac{1}{(2v_1+1)(2v_k+1)} + O(\lambda)
\end{align*}
as $\lambda\rightarrow 0$.
\end{proof}
\chapter{Preliminaries} \label{chap:Preliminaries}
In this chapter we provide the reader with background material
and key results on various topics which will be referred to throughout what follows.
\section{Round-off} \label{sec:Round-off}
When we speak of round-off as a method of discretisation,
we refer to the scenario in which a map $T$ on some set $X$ is replaced by
a perturbed map $F$ on some finite or countable subset $L\subset X$,
which is not invariant under $T$.
The map $F$ is the composition of $T$ with a \defn{round-off function} $R$,
which associates each point in $X$ with some (nearby) point in $L$.
The choice of $L$ and $R$ are referred to as the \defn{round-off scheme}.
In chapter \ref{chap:Introduction}, we mentioned the frameworks for round-off introduced by
Blank \cite{Blank94} and Vladimirov \cite{Vladimirov}:
we discuss their respective approaches briefly here.
The most general model is given by Blank,
who introduces the notion of an \emph{$\epsilon$-discretisation} $X_{\epsilon}$ of a compact set $X$,
which is simply an ordered collection of points in which neighbouring points are
separated by a distance of at most $\epsilon$.
The corresponding round-off function associates each point in $X$ with
the closest point in $X_{\epsilon}$
(or, in case there are several such points, the point which is smallest with respect to the ordering of $X_{\epsilon}$).
Vladimirov considers discretisations of linear maps of $\mathbb{R}^n$
on the integer lattice $\mathbb{Z}^n$.
The space is discretised via the introduction of so-called \emph{cells} $\Omega$,
which tile the space under translation by elements of $\mathbb{Z}^n$:
$$ \Omega + \mathbb{Z}^n = \mathbb{R}^n, \qquad \forall z\in\mathbb{Z}^n\setminus\{0\}: \quad \Omega \cap (\Omega + z) = \emptyset. $$
Then the round-off function is called a \emph{quantizer},
and may be any map which commutes with translations by
elements of the lattice $\mathbb{Z}^n$, and whose associated cells are Jordan measurable.
In our case we follow the conventions of \cite{AdlerKitchensTresser,KouptsovLowensteinVivaldi02},
borrowed from maps on the torus.
We consider a discretisation of a planar map onto a two-dimensional lattice $\mathbb{L}$ given by
$$ \mathbb{L} = C \mathbb{Z}^2, $$
where $C$ is a $2\times 2$ non-singular matrix.
A \defn{fundamental domain} $\Omega$ of $\mathbb{L}$ is a set which
tiles the plane under translation by elements of $\mathbb{L}$,
as per the above definition of a cell,
and we restrict our attention to fundamental domains whose
closure is a parallelogram.
Then the round-off function $R$ associates each point $z\in\mathbb{R}^2$
with the unique lattice point $l\in\mathbb{L}$ such that $z\in l+\Omega$, i.e.,
$$ R: \mathbb{R}^2 \to \mathbb{L} \hskip 40pt R(z) = (z-\Omega)\cap\mathbb{L}. $$
When modelling round-off as performed in fixed-point arithmetic,
it is typical to use a uniform square lattice of the form\footnote{We use $\mathbb{N}$ to denote the set of positive integers.}
$$ \mathbb{L} = \left(\frac{\mathbb{Z}}{N}\right)^2 \hskip 40pt N\in\mathbb{N}, $$
where $1/N$ is the \defn{lattice spacing},
and the fundamental domain corresponds to \defn{nearest-neighbour rounding}:
$$ \Omega = \frac{1}{N}\;\left[-\frac{1}{2},\frac{1}{2}\right)^2. $$
\medskip
In the case of the discretised rotation $F$ of (\ref{def:F}),
the unperturbed dynamics are given by the elliptic motion $A$ of equation (\ref{def:A}),
the lattice $\mathbb{L}$ is simply the integer lattice $\mathbb{Z}^2$,
and the fundamental domain is the unit square $\Omega=[0,1)^2$.
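To make the construction concrete, the following Python sketch (an illustration added here, not part of the original computations) implements the discretised rotation on $\mathbb{Z}^2$. It assumes the closed form $F(x,y)=(\lfloor\lambda x\rfloor - y,\,x)$, which follows from composing the reversing symmetries $G$ and $H$ of (\ref{def:GH}) below via $F=H\circ G$; the value $\lambda=1/24$ is just a sample parameter.
\begin{verbatim}
import math

LAMBDA = 1 / 24   # sample parameter value

def F(z, lam=LAMBDA):
    # Discretised rotation: round-off composed with the elliptic motion.
    # Closed form inferred from F = H o G, with G, H as in (def:GH).
    x, y = z
    return (math.floor(lam * x) - y, x)

def F_inv(z, lam=LAMBDA):
    # Inverse map: the discretisation of the inverse elliptic motion.
    x, y = z
    return (y, math.floor(lam * y) - x)

def period(z0, lam=LAMBDA, max_iter=10**6):
    # Length of the orbit through z0 (orbits observed numerically are periodic).
    z, n = F(z0, lam), 1
    while z != z0 and n < max_iter:
        z, n = F(z, lam), n + 1
    return n
\end{verbatim}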
\begin{figure}[t]
\centering
\begin{minipage}{7cm}
\centering
\includegraphics[scale=0.35]{Graphics/NearestInteger_24} \\
(a) $\; \lambda=1/24$ \\
\end{minipage}
\quad
\begin{minipage}{7cm}
\centering
\includegraphics[scale=0.35]{Graphics/NearestInteger_48} \\
(b) $\; \lambda=1/48$ \\
\end{minipage}
\caption{\hl{A selection of periodic orbits of the discretised rotation formed from the composition of
the elliptic motion} (\ref{def:A}) \hl{with the nearest-neighbour round-off function,
for two small values of $\lambda$. The lattice spacing is such that each unit distance contains $1/\lambda$ lattice points.
The grey lines are the lines $x = n+1/2$, $ y = n+1/2$ for $n\in\mathbb{Z}$.
Here $\nu\approx 1/4$, and the orbits closest to the origin are periodic with period $4$.}}
\label{fig:NearestInteger}
\end{figure}
The lattice spacing of $F$ is fixed at one,
but there is no loss of generality:
the linearity of the underlying dynamics $A$ ensures that
the discretisation $F_{\alpha}$ with lattice spacing $\alpha>0$
(defined using the lattice $\mathbb{L}=(\alpha\mathbb{Z})^2$ and $\Omega=[0,\alpha)^2$)
is conjugate to $F$ via a rescaling:
$$ F_{\alpha}(z) = \alpha F(z/\alpha) \hskip 40pt z\in(\alpha\mathbb{Z})^2. $$
We will make use of this fact in the next chapter,
where we rescale the lattice $\mathbb{Z}^2$.
With regard to the choice of fundamental domain,
the specific form of $A$ means that the round-off function only affects the $x$-coordinate,
so that $F$ is invertible (and reversible---see next section) irrespective of the choice of $\Omega$.
Furthermore, we may take $\Omega$ to be symmetric under reflection in the line $y=x$ without loss of generality.
If this is the case, then it is a straightforward exercise to show that the inverse of
$F$ is the discretisation of the inverse of $A$:
$$ F^{-1} = R \circ A^{-1}. $$
The choice of $\Omega=[0,1)^2$ corresponds to rounding down,
which is arithmetically nicer due to its consistency with modular arithmetic.
This choice also maximises the asymmetry of $F$ under reflection in the line $y=-x$,
i.e., the asymmetry in the direction perpendicular to the symmetry of $F$.
However, the character of the results described in this thesis is
not heavily dependent on the choice of round-off scheme:
we compare the orbits seen in figure \ref{fig:PolygonalOrbits}
to those in figure \ref{fig:NearestInteger},
which were calculated using a nearest-neighbour round-off scheme (i.e., $\Omega=[-1/2,1/2)^2$).
\section{Time-reversal symmetry} \label{sec:time-reversal}
For a detailed background on the subject of time-reversal symmetry,
we refer the reader to the surveys \cite{QuispelRoberts} and \cite{LambRoberts}, and references therein.
In broad terms, time-reversal symmetry is a property of an invertible dynamical system, whereby reversing the direction of time
maps valid trajectories into other valid trajectories. For example, consider the position $x(t)$ of a particle of unit mass moving
in a conservative force field $f=-\nabla V$, where $V$ is the potential. The equation governing the motion is
\begin{equation} \label{eq:conservativeForce}
\ddot{x} = f(x),
\end{equation}
where a dot denotes a derivative with respect to time. The equation (\ref{eq:conservativeForce}) is invariant under the transformation
$t\mapsto -t$, and if $x(t)$ is a solution, then $x(-t)$ is also a solution, with the same initial position but opposing initial velocity.
It is standard practice to write systems such as (\ref{eq:conservativeForce}) in the Hamiltonian formalism, where the motion of the
particle is described by its position $q(t)=x(t)$ and momentum $p(t)=\dot{x}(t)$. The Hamiltonian $H(q,p)$ is given by
$$ H(q,p) = \frac{p^2}{2} + V(q), $$
and the governing equations are Hamilton's equations:
\begin{equation} \label{eq:HamiltonsEquations}
\frac{dq}{dt} = \frac{\partial H}{\partial p} = p \hskip 40pt \frac{dp}{dt} = -\frac{\partial H}{\partial q} = f(q).
\end{equation}
In this setting, the time-reversal symmetry property of the system (\ref{eq:conservativeForce}) hinges upon the fact that the Hamiltonian is even in the momentum coordinate:
\begin{equation*}
H(q,p)= H(q,-p),
\end{equation*}
so that the system of equations (\ref{eq:HamiltonsEquations}) is invariant under the transformation
\begin{equation} \label{eq:HamitonianSymmetry}
(t, q, p) \mapsto (-t, q, -p).
\end{equation}
Thus, in classical mechanics, time-reversal symmetry describes the invariance of a system under the reversal of the time-direction combined with a reflection in phase space, which changes the sign of the momentum coordinate.
In section \ref{sec:Hamiltonian}, we introduce an abstract, piecewise-affine Hamiltonian of the plane with such a classical time-reversal symmetry. The Hamiltonian $\mathscr{P}$ (see equation (\ref{eq:Hamiltonian})), which represents the limiting dynamics of the discretised rotation $F$, is even in both coordinates, so that orbits of the corresponding flow are reversed by reflections in both axes.
However, the invariance of a system under a transformation such as (\ref{eq:HamitonianSymmetry}) is not a good definition for time-reversal symmetry, since it is not a coordinate independent property. This leads us to the definition of \defn{reversibility}, originally proposed (in a more restricted form\footnote{Devaney required that the phase space have even dimension, and that the involution $G$ fix a subspace with half the dimension of the phase space.}) by Devaney \cite{Devaney}. In this definition, the reflection in the momentum coordinate is replaced by an arbitrary \defn{involution} $G$ of the phase space, i.e., a map $G$ whose second iterate is the identity: $G^2 = \mathrm{id}$. The definition of reversibility can be applied to any flow (not necessarily Hamiltonian) or any map, and we shall be primarily interested in the latter.
\begin{definition}
A map $F$ is \defn{reversible} if it can be expressed as the product of two involutions:
\begin{equation} \label{eq:ReversibilityInvolutions}
F= H\circ G \hskip 40pt H= F\circ G \hskip 40pt G^2 = H^2 = \mathrm{id}.
\end{equation}
The involutions $G$ and $H$ are called \defn{reversing symmetries}.
\end{definition}
A reversible map $F$ is necessarily invertible, and an equivalent definition of reversibility is to require that $F$ is conjugate to its inverse via an involution $G$:
\begin{equation*}
F^{-1} = G \circ F \circ G \hskip 40pt G^2 = \mathrm{id}.
\end{equation*}
The definition of reversibility is consistent with the idea that the involution $G$ reverses the direction of time, since if $x^{\prime}=F(x)$, then
$$ G(x^{\prime}) = F^{-1} ( G(x)). $$
We note that reversible maps arise naturally from reversible flows, since any Poincar\'{e} return map or time-advance map of a reversible flow yields a reversible map. However, the definition (\ref{eq:ReversibilityInvolutions}) is purely algebraic, and there is no requirement that $F$ have any smoothness properties. We refer the reader to \cite{QuispelRoberts} for a wealth of examples of reversible flows and maps, both in physics and dynamical systems theory.
The lattice map $F$ of equation (\ref{def:F}) is reversible, and its reversing symmetries are given by
\begin{equation} \label{def:GH}
G(x,y) = (y,x) \hskip 40pt H(x,y) = (\lfloor \lambda y\rfloor - x, y).
\end{equation}
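The algebraic identities above are easy to verify numerically. The following Python sketch (illustrative, with the closed form of $F$ inferred from $F=H\circ G$) checks, on a patch of the lattice, that $G$ and $H$ are involutions and that $F^{-1}=G\circ F\circ G$.
\begin{verbatim}
import math

LAMBDA = 1 / 24   # sample parameter value

def G(z):                   # reflection in the symmetry line y = x
    return (z[1], z[0])

def H(z, lam=LAMBDA):       # second reversing symmetry of (def:GH)
    x, y = z
    return (math.floor(lam * y) - x, y)

def F(z, lam=LAMBDA):       # F = H o G
    return H(G(z), lam)

def F_inv(z, lam=LAMBDA):   # direct formula for the inverse
    x, y = z
    return (y, math.floor(lam * y) - x)

for x in range(-50, 51):
    for y in range(-50, 51):
        z = (x, y)
        assert G(G(z)) == z and H(H(z)) == z   # involutions
        assert F_inv(F(z)) == z                # invertibility
        assert G(F(G(F(z)))) == z              # F^{-1} = G o F o G
\end{verbatim}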
\subsection*{Symmetric orbits}
Let $O$ denote some (forwards and backwards) orbit of a reversible map $F$. The orbit $O$ is called \defn{symmetric} with respect to the reversing symmetry $G$ if it is setwise invariant under $G$:
$$ G(O) = O. $$
An orbit which is not symmetric is called \defn{asymmetric}.
It is typical that the symmetric orbits of a dynamical system exhibit different behaviour to the asymmetric orbits. In \cite{QuispelRoberts}, symmetric orbits are associated with the type of universal behaviour displayed by conservative (symplectic) systems, whereas asymmetric orbits are associated with the type of universal behaviour displayed by dissipative systems. In our work we also differentiate between symmetric and asymmetric orbits, with the latter ultimately dominating the phase space (see section \ref{chap:PerturbedAsymptotics}).
In principle, it is not clear how to locate or identify symmetric orbits, other than by an exhaustive search of the phase space. To this end, we introduce the \defn{fixed space} $\Fix{G}$ of a reversing symmetry $G$:
$$ \Fix{G} = \{ z \, : \; G(z)=z \}. $$
The symmetric orbits of a reversible map are structured by these fixed spaces, as we see in the following folklore theorem.
\begin{theorem}\cite[Theorem 4.2]{LambRoberts} \label{thm:SymmetricOrbits}
Let $F$ be a reversible map in the sense of (\ref{eq:ReversibilityInvolutions}). Then the orbits of $F$ satisfy the following properties:
\begin{enumerate}[(i)]
\item An orbit is symmetric if and only if it intersects the set $\Fix{G}\cup\Fix{H}$.
\item A symmetric orbit intersects the set $\Fix{G} \cup \Fix{H}$ exactly once if and only if it is either aperiodic or a fixed point.
\item A symmetric periodic orbit which is not a fixed point intersects the set $\Fix{G} \cup \Fix{H}$ exactly twice; it has even period $2p$ if and only if it intersects one of the sets $\Fix{G} \cap F^p(\Fix{G})$ or $\Fix{H} \cap F^p(\Fix{H})$, and has odd period $2p+1$ if and only if it intersects the set $\Fix{G} \cap F^p(\Fix{H})$.
\end{enumerate}
\end{theorem}
In dynamical systems of the plane, it is often the case that the fixed space of a reversing symmetry is a line, in which case we refer to a \defn{symmetry line}. In our case, the fixed space of the involution $G$ given in (\ref{def:GH}) is a symmetry line:
\begin{equation} \label{eq:FixG}
\Fix{G} = \{ (x,y)\in\mathbb{R}^2 \, : \; x=y \},
\end{equation}
whereas the fixed set of the involution $H$ is given by the collection of line segments
\begin{equation} \label{eq:FixH}
\Fix{H} = \{ (x,y)\in\mathbb{R}^2 \, : \; 2x = \lfloor \lambda y\rfloor \}.
\end{equation}
(We think of $G$ and $H$ as acting on the plane, although they will also be applied to lattice subsets thereof.)
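By part (i) of theorem \ref{thm:SymmetricOrbits} below, every symmetric orbit meets $\Fix{G}\cup\Fix{H}$, so symmetric periodic orbits can be enumerated by iterating points on the symmetry line. A Python sketch of this search (again assuming the closed form of $F$ inferred from (\ref{def:GH})):
\begin{verbatim}
import math

LAMBDA = 1 / 24   # sample parameter value

def F(z, lam=LAMBDA):
    x, y = z
    return (math.floor(lam * x) - y, x)

def symmetric_period(d, lam=LAMBDA, max_iter=10**6):
    # Orbit through (d, d) in Fix G; such an orbit is symmetric.
    z0 = (d, d)
    z, n = F(z0, lam), 1
    while z != z0 and n < max_iter:
        z, n = F(z, lam), n + 1
    return n

for d in range(30):
    print(d, symmetric_period(d))
\end{verbatim}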
\subsection*{Reversible and equivariant dynamics}
We note at this point that reversing symmetries do not just come in pairs, but in typically infinite sequences: if $G$ is a reversing symmetry of $F$, then $F^n\circ G$ is also a reversing symmetry for all $n\in\mathbb{Z}$. The set of $G$ and its iterates under the motion form a \defn{family} of reversing symmetries, and we call two reversing symmetries $G$ and $G^{\prime}$ \defn{independent} if $G^{\prime}$ is not a member of the family generated by $G$. A system which has just one family of reversing symmetries is called \defn{purely reversible}.
Furthermore, the composition of two reversing symmetries is a \defn{symmetry}, i.e., a map $S$ on the phase space that maps valid trajectories onto valid trajectories. In the case of a map $F$, this means that $F$ commutes with $S$:
$$ S \circ F = F \circ S, $$
or, if the symmetry $S$ is invertible, that $F$ is \defn{equivariant} under $S$:
$$ F = S^{-1} \circ F \circ S. $$
If we compose two reversing symmetries from the same family, then we obtain a trivial symmetry, i.e., an iterate of $F$. If, however, a dynamical system has a pair of independent reversing symmetries, then their composition is a non-trivial symmetry.
The group generated by the reversing symmetries of a dynamical system is called the \defn{reversing symmetry group}, which contains the \defn{symmetry group} as a normal subgroup. Thus the study of reversible dynamics can be approached as an extension to that of equivariant dynamics.
The lattice map $F$ is purely reversible---its reversing symmetry group consists of the family
generated by the reflection $G$ of (\ref{def:GH}).
\subsection*{Time-reversal symmetry in discrete spaces}
Finally we summarise a series of papers \cite{RobertsVivaldi05,RobertsVivaldi09,NeumarkerRobertsVivaldi}
concerning universal behaviour among reversible maps with a finite phase space.
If an invertible map $F$ has a finite phase space, then all orbits of $F$ are periodic, and the period $T(z)$ of
some point $z$ in the phase space is given by
\begin{equation*}
T(z) = \min\{k \, : \; F^k(z)=z \}.
\end{equation*}
Furthermore, if the phase space consists of $N$ points, then the \defn{period distribution function} $\mathscr{D}$ of $F$ is given by
\begin{equation} \label{def:pdf}
\mathscr{D}(x) = \frac{1}{N} \, \# \{ z \, : \; T(z)\leq \kappa x \},
\end{equation}
where $\kappa$ is some scaling parameter, so that $\mathscr{D}(x)$ is the fraction of points in the phase space whose period under $F$ is less than or equal to $\kappa x$.
The period distribution function $\mathscr{D}$ is a non-decreasing step function, with $\mathscr{D}(0)=0$ and $\mathscr{D}(x)\rightarrow 1$ as $x\rightarrow\infty$. The possible values of $\mathscr{D}$ are restricted to the set
$$ \left\{ \frac{k}{N} \, : \; 0\leq k \leq N \right\}, $$
and its discontinuities lie in the set
$$ \left\{ \frac{k}{\kappa} \, : \; k \in \mathbb{N} \right\}. $$
In \cite{RobertsVivaldi09}, it was shown that for a suitable choice of the scaling parameter $\kappa$,
the expected period distribution function of a random reversible map on $N$ points
converges to a universal limiting distribution as $N\rightarrow\infty$.
This universal distribution function is given by a gamma (or Erlang) distribution
\begin{equation} \label{def:R(x)}
\mathcal{R}(x) = 1 - e^{-x}(1+x).
\end{equation}
The scaling parameter $\kappa$ depends on the fraction of the phase space occupied by the fixed spaces of the reversing symmetries of the map.
\begin{theorem} \cite[Theorem A]{RobertsVivaldi09} \label{thm:GammaDistribution}
Let $(G,H)$ be a pair of random involutions of a set $\Omega$ with $N$ points, and let
\begin{equation*}
g=\#\Fix{G} \hskip 40pt h=\#\Fix{H} \hskip 40pt \kappa = \frac{2N}{g+h}.
\end{equation*}
Let $\mathscr{D}_N(x)$ be the expectation value of the fraction of $\Omega$ occupied by periodic orbits of $H\circ G$ with period less than $\kappa x$, computed with respect to the uniform probability. If, with increasing $N$, $g$ and $h$ satisfy the conditions
\begin{equation} \label{eq:g_h_conds}
\lim_{N\rightarrow\infty} g(N) + h(N) = \infty \hskip 40pt \lim_{N\rightarrow\infty} \frac{g(N)+h(N)}{N} = 0,
\end{equation}
then for all $x\geq 0$, we have the limit
$$ \mathscr{D}_N(x) \rightarrow \mathcal{R}(x), $$
where $\mathcal{R}(x)$ is the universal distribution (\ref{def:R(x)}). Moreover, almost all points in $\Omega$ belong to symmetric periodic orbits.
\end{theorem}
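The convergence stated in theorem \ref{thm:GammaDistribution} is easy to observe numerically. The following Python sketch (illustrative; the choice $g=h\sim\sqrt{N}$ is one convenient way to satisfy the conditions (\ref{eq:g_h_conds})) composes two random involutions and compares the resulting empirical distribution $\mathscr{D}_N(x)$ with $\mathcal{R}(x)$:
\begin{verbatim}
import math, random

def random_involution(N, n_fixed, rng):
    # Random involution of {0,...,N-1} with n_fixed fixed points
    # (N - n_fixed must be even).
    pts = list(range(N)); rng.shuffle(pts)
    inv = list(range(N))
    rest = pts[n_fixed:]
    for a, b in zip(rest[::2], rest[1::2]):
        inv[a], inv[b] = b, a
    return inv

N, rng = 20000, random.Random(0)
g = h = 2 * int(math.sqrt(N) / 2)          # g, h ~ sqrt(N), both even
G = random_involution(N, g, rng)
H = random_involution(N, h, rng)
F = [H[G[z]] for z in range(N)]            # random reversible map F = H o G
kappa = 2 * N / (g + h)

period = [0] * N
for z0 in range(N):                        # periods of all orbits of F
    if period[z0]:
        continue
    orbit, z = [z0], F[z0]
    while z != z0:
        orbit.append(z); z = F[z]
    for z in orbit:
        period[z] = len(orbit)

for x in (0.5, 1.0, 2.0, 4.0):
    D = sum(T <= kappa * x for T in period) / N
    R = 1 - math.exp(-x) * (1 + x)
    print(f"x={x:3.1f}  D_N(x)={D:.3f}  R(x)={R:.3f}")
\end{verbatim}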
Note that since Theorem \ref{thm:GammaDistribution} treats the composition of two random involutions,
the resulting reversible map will be purely reversible with full probability as $N\rightarrow\infty$.
Thus we expect (\ref{def:R(x)}) to be the limiting distribution for suitably `random' (in particular, non-integrable)
purely reversible maps\footnote{Theorem
\ref{thm:GammaDistribution} has recently been extended---see \cite{Hutz}.}.
Experimental evidence suggests that $\mathcal{R}(x)$ is indeed the limiting distribution for a number of planar algebraic maps.
The distribution (\ref{def:R(x)}) was first identified in \cite{RobertsVivaldi05} as the signature of time-reversal symmetry
in non-integrable planar polynomial automorphisms (i.e., polynomial maps with a polynomial inverse),
of which the area-preserving H\'{e}non map is an example.
The maps were reduced to permutations of finite fields of increasing size.
For suitably chosen parameter values, this reduction preserves the invertibility, symmetry and non-integrability
of the original map, but also introduces modular multiplication---the ingredient which provides the `randomness'.
More recently, in \cite{NeumarkerRobertsVivaldi}, the distribution (\ref{def:R(x)}) has been observed for the
Casati-Prosen family of maps---a two parameter family of reversible maps of the torus,
which have zero entropy but are conjectured to be mixing.
For rational parameter values, these maps preserve rational lattices,
and thus can be restricted directly to finite sets of increasing size:
the distribution $\mathcal{R}(x)$ is conjectured to be the limiting distribution for a set of rational parameter values with full measure.
In chapter \ref{chap:PerturbedAsymptotics}, \hl{we consider the period distribution function of the perturbed return map $\Phi$
on each of the polygon classes}---indexed
by the sums of two squares.
We provide numerical evidence that $\mathcal{R}(x)$ is the limiting distribution
along the subsequence of polygon classes which correspond to perfect squares,
where the nonlinearity of the return map persists at infinity.
In our case, the finite structure arises naturally from the symmetry of the system,
since on each polygon class, the return map is equivariant with respect to a group of lattice translations.
As the number of equivalence classes modulo this sequence of lattices diverges,
the period distribution function converges to $\mathcal{R}(x)$.
A radio pulsar is a rotating neutron star (NS) with strong surface
magnetic fields, which emits a beam of electromagnetic radiation
along the axis of the fields. Much like the beam of a
lighthouse, the radiation can only be seen when it points
in the direction of an observer. Since a NS is a very stable
rotator, it produces a series of pulses with a very precise interval
that ranges from milliseconds to seconds in the radio band.
The arrival times of the pulses can be recorded with very high
precision. Indeed, thanks to the high precision, a surprising amount
can be learned from them (see Lyne \& Graham-Smith 2012). As early
as the discovery of the first pulsar (on November 28, 1967), Hewish
and his collaborators noticed that the arrival times carry information
not only on the nature of the radio source---a rotating NS---but also on its
position and motion, as well as on the dispersion effect during the
pulses' propagation through the interstellar medium (Hewish et al.
1968).
The time-of-arrival (TOA) measurements now give the precise
information on various modes of the spin evolution of individual
NSs, and on the orbits and rotational slowdown of binary pulsars,
and thus made possible some fundamental tests of general relativity
and gravitational radiation. Some millisecond pulsars with rather
stable pulsations (even challenging the best atomic clocks), are
used as a system of Galactic clocks for ephemeris time or
gravitational wave detection.
The variations of the spin frequency $\nu$ and its first derivative
$\dot{\nu}$ of pulsars are obtained from polynomial fits to the
arrival time epochs (i.e. phase sequences) of the pulses. Since the
rotational period is nearly constant, the observable quantities
$\nu$, $\dot{\nu}$ and $\ddot{\nu}$ can be obtained by fitting the
phase to the third order of its Taylor expansion over a time span
$t_{\rm s}$,
\begin{equation}\label{phase}
\Phi_i = {\Phi} + \nu (t_i-t) + \frac{1}{2}\dot \nu (t_i-t)^2 +
\frac{1}{6}\ddot\nu (t_i-t)^3.
\end{equation}
One can thus get the values of $\nu$, $\dot{\nu}$ and $\ddot{\nu}$
at $t$ by fitting Equation~(1) to $N$ independent data blocks
around $t$, i.e. $i=1,...,N$.
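As an illustration of this fitting procedure, the following Python sketch recovers the spin parameters from synthetic, noiseless phases; the parameter values are invented for the example, and time is rescaled to keep the least-squares fit well conditioned.
\begin{verbatim}
import numpy as np

nu, nudot, nuddot = 10.0, -1e-12, 1e-23      # invented Hz, Hz/s, Hz/s^2
t = np.linspace(-1.5e7, 1.5e7, 200)          # epochs t_i - t (seconds)
phase = nu * t + nudot * t**2 / 2 + nuddot * t**3 / 6

T0 = 1e7                                     # time scale for conditioning
c3, c2, c1, c0 = np.polyfit(t / T0, phase, 3)
print("nu     =", c1 / T0)                   # recovers nu
print("nudot  =", 2 * c2 / T0**2)            # recovers nudot
print("nuddot =", 6 * c3 / T0**3)            # recovers nuddot
\end{verbatim}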
The most obvious feature of a pulsar spin evolution is that it is
observed to slow down gradually, i.e. $\dot\nu<0$. According to
classical electrodynamics, an inclined magnetic dipole in vacuum loses
its rotational energy by emitting low-frequency electromagnetic
radiation. Assuming pure magnetic dipole radiation as the
braking mechanism (e.g. Lorimer 2004), we have
\begin{equation}\label{braking law}
\dot\nu =-A B_0^2 \nu^3,
\end{equation}
in which $A=8\pi^2R^6\sin^2\theta/(3c^3I)$ is a constant, $B_0$ is the
strength of the dipole magnetic fields at the surface of the NS,
$R~(\simeq10^6~{\rm cm})$, $I~(\simeq10^{45}~{\rm g~cm^2})$, and
$\theta~(\simeq\pi/2)$ are the radius, moment of inertia, and angle
of magnetic inclination from the rotation axis, respectively. Though
the energy flow from a pulsar may be a combination of this dipole
radiation and an outflow of particles, Equation (\ref{braking law})
is still valid, since the magnetic energy dominates the total energy
of the outer magnetosphere (Lyne \& Graham-Smith 2012).
\begin{figure}[h]
\center{
\includegraphics[angle=0,scale=0.3]{f1.eps}}
\caption{Observed correlation between $\ddot \nu$ and
$3\dot{\nu}^2/\nu$. The prediction of the standard magnetic dipole
radiation model, i.e., $\ddot \nu=3\dot{\nu}^2/\nu$ is shown as the
dashed line, which under-predicts significantly the magnitudes of
$\ddot \nu$ for most pulsars and also cannot explain $\ddot \nu<0$
for nearly half of the pulsars. All the data are taken from Hobbs et
al. (2010), and the figure is taken from Paper I.} \label{Fig:1}
\end{figure}
Following Equation (\ref{braking law}) and assuming $\dot B_0=0$,
the frequency's second derivative can be simply expressed as
\begin{equation}\label{ddotnu}
\ddot\nu=3\dot\nu^2/\nu.
\end{equation}
The model predicts $\ddot\nu>0$ and $|\ddot\nu|$ should be very
small. However, as shown in Figure \ref{Fig:1} (Zhang
\& Xie 2012a; hereafter Paper I), the observed
$\ddot{\nu}$ is often significantly different from the model
predictions, so that the braking mechanism may be oversimplified.
However, how and why the torque varies remains controversial,
and is an outstanding problem in our understanding of neutron
stars.
In this paper, we give a brief review for the phenomenological model
we constructed recently, and its applications on the spin behaviors
of pulsars, which include the statistical properties of $\ddot\nu$
and $n_{\rm b}$, and the dipole magnetic field evolution of some
individual pulsars, as well as their physical implications on NS
interiors. A phenomenological model for glitch recoveries of
individual pulsars are also briefly discussed.
\section{The phenomenological model}
To model the discrepancy between the observed $\ddot\nu$ and the
predicted $\ddot\nu$ by Equation (\ref{ddotnu}), the braking law of a pulsar is generally
assumed as
\begin{equation}
\dot \nu =-K\nu ^{n_{\rm b}},\label{braking_law}
\end{equation}
where $n_{\rm b}$ is called its braking index. Manchester \& Taylor
(1977) gave that
\begin{equation}
n_{\rm b}=\ddot{\nu}\nu/\dot{\nu}^2,\label{braking_index}
\end{equation}
if $\dot{K}=0$. For the standard magnetic dipole radiation model
with constant magnetic field ($\dot{K}=0$), Equation (\ref{ddotnu}) applies and yields $n_{\rm b}=3$. Therefore
$n_{\rm b}\ne 3$ indicates some deviation from the standard magnetic
dipole radiation model with constant magnetic fields.
Blandford \& Romani (1988) re-formulated the braking law of a pulsar
as,
\begin{equation}
\dot \nu =-K(t)\nu ^3.\label{blandford1}
\end{equation}
This means that the standard magnetic dipole radiation is
responsible for the instantaneous spin-down of a pulsar, but the
braking torque determined by $K(t)$ may be variable. In this
formulation, $n_{\rm b}\ne 3$ does not indicate deviation from the
standard magnetic dipole radiation model, but means only that $K(t)$
is time dependent. Assuming that magnetic field evolution is
responsible for the variation of $K(t)$, we have $K=AB(t)^2$, in
which $B(t)$ is the time variable dipole magnetic field strength of
a pulsar. The above equation then suggests that $n_{\rm b}< 3$
indicates magnetic field growth of a pulsar, and vice versa, since
$\dot{\nu}<0$ and $K>0$. This can be seen more clearly from (Paper I and Zhang
\& Xie 2012b (Paper II)),
\begin{equation}
\dot{K}=\frac{\dot{\nu}^2}{\nu^4}(3-n_{\rm b}).\label{zhang2012}
\end{equation}
Equation (\ref{blandford1}) can be re-written as
\begin{equation}\label{braking law2}
\dot\nu \nu^{-3}=-A B(t)^2,
\end{equation}
and the time
scale of the magnetic field long-term evolution of each pulsar (see
Equation~(6) in Paper I) is given by
\begin{equation}\label{tau_B}
\tau_{B}\equiv \frac{B}{\dot B}=\frac{2\dot\nu_0\nu_0}{\ddot\nu_{\rm
L}\nu_0 -3\dot\nu_0^2}.
\end{equation}
$\tau_{B}<0$ indicates magnetic field decrease and
vice versa.
In Paper I and II, we constructed a phenomenological model for the
dipole magnetic field evolution of pulsars with a long-term decay
modulated by short-term oscillations,
\begin{equation}\label{B evolution}
B(t)=B_{\rm d}(t)(1+\sum k_i\sin(\phi_i+2\pi\frac{t}{T_i})),
\end{equation}
where $t$ is the pulsar's age, and $k_i$, $\phi_i$, $T_i$ are the
amplitude, phase and period of the $i$-th oscillating component,
respectively. $B_{\rm d}(t)=B_0(t/t_0)^{-\alpha}$, in which $B_0$ is the
field strength at the age $t_0$, and $\alpha$ is the power law
index.
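A direct transcription of Equation (\ref{B evolution}) in Python (the parameter values below are illustrative only, not fitted to any pulsar):
\begin{verbatim}
import numpy as np

YR = 3.156e7    # seconds per year (approximate)

def B_field(t, B0, t0, alpha, osc=()):
    # Equation (10): power-law decay modulated by oscillations;
    # osc is a sequence of (k_i, phi_i, T_i) triples.
    mod = 1.0
    for k, phi, T in osc:
        mod = mod + k * np.sin(phi + 2 * np.pi * t / T)
    return B0 * (t / t0) ** (-alpha) * mod

t = np.linspace(1.0e4 * YR, 1.1e4 * YR, 500)
B = B_field(t, B0=1e12, t0=1e4 * YR, alpha=0.5,
            osc=[(1e-3, 0.0, 15 * YR)])
print(B[:3])
\end{verbatim}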
By substituting Equation (\ref{B evolution}) into Equation
(\ref{braking law2}) and taking only the dominating oscillating
component, we obtained the analytic approximation for $\dot\nu$ (Xie
\& Zhang 2013c, hereafter Paper V):
\begin{equation}\label{vdot}
\dot{\nu}\simeq \dot\nu_0(1+2 k(\sin(\phi+2\pi\frac{t}{T})-\sin
\phi))+\ddot\nu_{\rm L}(t-t_0),
\end{equation}
where $\dot\nu_0=\dot\nu(t_0)$, $\ddot\nu_{\rm
L}=-2\alpha\dot\nu_0/t_0$ describes the long-term monotonic
variation of $\dot\nu(t)$. Therefore Equation~(\ref{vdot}) can be
tested with the long-term monitoring observations of individual
pulsars. If the long-term observed average of $\ddot\nu$ is
approximately given by the expression for $\ddot\nu_{\rm L}$ above
(i.e. $\langle\ddot\nu\rangle\simeq\ddot\nu_{\rm L}$), then we can
use the previously reported $\ddot\nu$ obtained from the timing
solution fits of the whole data span as an estimate of
$\ddot\nu_{\rm L}$.
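Equation (\ref{vdot}) is likewise simple to evaluate numerically; the sketch below uses parameter values of roughly the order relevant for PSR~B1828$-$11 (assumed here for illustration, not taken from the fits of Paper V).
\begin{verbatim}
import numpy as np

YR, DAY = 3.156e7, 86400.0

def nudot_model(t, t0, nudot0, k, phi, T, nuddot_L):
    # Analytic approximation (11) for the spin-down rate.
    osc = np.sin(phi + 2 * np.pi * t / T) - np.sin(phi)
    return nudot0 * (1 + 2 * k * osc) + nuddot_L * (t - t0)

t = np.linspace(0, 20 * YR, 400)
nd = nudot_model(t, t0=0.0, nudot0=-3.6e-13, k=1e-4,
                 phi=0.0, T=500 * DAY, nuddot_L=1e-24)
print(nd[0], nd.min(), nd.max())
\end{verbatim}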
Similarly we also find (Paper I)
\begin{equation}\label{vddot}
\ddot{\nu}\simeq -2\dot{\nu}(\alpha/t_{\rm age}+f C(t)),
\end{equation}
where $t_{\rm age}$ is the real age of the pulsar, $f\equiv 2\pi
k/T$ for the dominating oscillating component, and
$C(t)=\cos(\phi+2\pi \frac{t}{T})$. For relatively young pulsars
with $t_{\rm age}<3\times 10^5$~yr, the first term in
Equation~(\ref{vddot}) dominates and we should have $\ddot{\nu}>0$
if $\alpha>0$. Considering that the characteristic ages ($\tau_{\rm
c}$) of young pulsars are normally several times larger than $t_{\rm
age}$, Equation~(\ref{vddot}) thus explains naturally the observed
$\ddot\nu >0$ for most young pulsars with $\tau_{\rm c}<10^6$ yr.
Without other information about $t$, we replace it with the magnetic
age of a pulsar $t=t_0(B_0/B)^{(1/\alpha)}$ in Equation
(\ref{vddot}); we then have,
\begin{equation}\label{ddot_p2}
\ddot{\nu}\simeq \eta(-\dot{\nu})^{1+\beta}/\nu^{3\beta}+
2\dot{\nu}fC(t),
\end{equation}
where $\beta=1/2\alpha$, $\eta=(3.3\times10^{19}/B_0)^{2\beta}/\beta
t_0$ and $B=3.3\times10^{19}\sqrt{P\dot{P}}$~G is assumed. Thus the
model predicts a correlation between $\ddot{\nu}$ and
$(-\dot{\nu})^{1+\beta}/\nu^{3\beta}$ for young pulsars with
$\tau_{\rm c}<10^6$ yr. Similarly for much older pulsars, the second
term in Equation~(\ref{vddot}) dominates, in agreement with the
observational fact that the numbers of negative and positive
$\ddot\nu$ are almost equal for the old pulsars.
From Equation~(\ref{braking law2}), we can also obtain,
\begin{equation}
n_{\rm b}=3+\frac{\tau_{\rm c}}{t}(2- 4 ft C(t)).\label{n_b}
\end{equation}
As we have shown in Paper I, the oscillatory term can be ignored in
determining $\ddot{\nu}$, so young pulsars with $\tau_{\rm c}\le
10^5$~yr should have $n_{\rm b}>0$, consistent with observations.
\section{Dipole magnetic field evolution of individual pulsars}
\begin{figure}[h]
\center{
\includegraphics[angle=0,scale=0.3]{f2.eps}}
\caption{{\it Upper Panel}: $\dot\nu (t) $ for PSR B1828$-$11 during
the past 20 years. The reported data are represented by red stars;
and the solid black line is calculated from Equation~(\ref{vdot}).
{\it Lower Panel}: Time differences between the peak positions of
reported data and analytical calculation. The figure is taken from
Paper V.} \label{Fig:2}
\end{figure}
In Equation (\ref{vdot}), we found that $\dot\nu$ evolution contains
a long-term change modulated by short-term oscillations. It is very
interesting to check whether the long-term changes can be unveiled
from the observational data of some individual pulsars. The sample of Lyne et al. (2010) provides the precise histories of
$\dot\nu$ for seventeen pulsars and thus may be applied to test
Equation~(\ref{vdot}). In the sample, the $\dot\nu$ evolutions for
most of the pulsars exhibit complex patterns. A subset of pulsars
with small $\tau_{\rm c}$ and $\tau_{\rm B}$ are thus selected to
reveal clearly their long-term magnetic field changes.
Figure \ref{Fig:2} (taken from Paper V) shows the comparison between
the reported and analytically calculated $\dot\nu(t)$ for B1828-11.
The main discrepancy is caused by the decrease of the
oscillation period of the reported data after $\sim4000$~days.
Nevertheless, our model describes the general trend of the reported
data quite well. From the results, we found $\ddot\nu_{\rm L}>0$ for the pulsar,
which means that $\alpha>0$, i.e., magnetic field decay is directly
observed for it, as predicted by our phenomenological model. The
decay time scale is $|\tau_{B}|=3.3\times 10^4~{\rm yr}$. The
alternative possibility that it is caused by the magnetic
inclination change is ruled out by the data on position angle
and pulse width changes (Paper V). Theoretically, there are three
avenues for magnetic field decay in isolated NSs, ohmic decay,
ambipolar diffusion, and Hall drift (Goldreich \& Reisenegger 1992).
We found that the Hall drift in the outer crust of the NS is responsible
for the field decay, which gives a time scale
$|\tau_{\rm Hall}|=1.1\times 10^4~{\rm yr}$ that agrees with $\tau_{\rm
B}$ of the pulsar (see Paper V for the details). The time scales for
the other avenues are too long and thus not important. The
consistency between the two time scales also implies that the
majority of dipolar magnetic field lines are restricted to the outer
crusts (above the neutron drip point), rather than penetrating the
cores of the NSs.
The diffusive motion of the magnetic fields perturbs the background
dipole magnetic fields at the base of the NS crust. Such
perturbations propagate as circularly polarized ``Hall waves" along
the dipole field lines upward into the lower density regions in the
crusts. The Hall waves can strain the crust, and the elastic
response of the crust to the Hall wave can induce an angular
displacement (Cumming et al. 2004)
\begin{equation}\label{Strain}
\theta_{\rm s}=3\times 10^{-7} B_{12}^2n^{13/9}\frac{\delta B_{\rm
b}}{B},
\end{equation}
in which $B_{12}$ is the strength of the dipole magnetic fields with
unit of $10^{12}$ G, $n$ is the wave number over the crust, and
$\delta B_{\rm b}$ is the amplitude of the mode at the base of the
crust. We found that the short-term oscillations in $\dot\nu$ and
pulse width can be explained remarkably well with moderate values
of the parameters, $n=1200$ and $\delta B/B=0.2$ (Paper V).
Therefore, we concluded that the Hall drift and Hall waves in NS
crusts are responsible for the observed long-term evolution of the
spin-down rates and their quasi-periodic modulations, respectively.
\section{Statistical properties of pulsar timing noise}
\subsection{Reproducing the observed distribution of $\ddot\nu$ and $n_{\rm b}$}
Hobbs et al. (2010; hereafter H2010) carried out a very extensive
study on observed $\ddot\nu$ for 366 pulsars. Some of their main
results are: (1) All young pulsars have $\ddot{\nu} > 0$; (2)
Approximately half of the older pulsars have $\ddot{\nu} > 0$ and
the other half have $\ddot{\nu} < 0$; and (3) The value of
$\ddot{\nu}$ measured depends upon the data span and the exact
choice of data processed. In Figure \ref{Fig:1} (which is taken from
Paper I), we plotted the comparison between the observed $\ddot \nu$
and that predicted by Equation ({\ref{ddotnu}}). This model predicts
$\ddot \nu>0$, against the fact that many pulsars show $\ddot
\nu<0$. This is a major failure of this model. It is also clear that
this model under-predicts the amplitudes of $\ddot \nu$ by several
orders of magnitudes for most pulsars.
In Figure~\ref{Fig:3}(a, b) (Paper I), we show the
comparison between the predicted correlation between $\ddot{\nu}$
and $(-\dot{\nu})^{1+\beta}/\nu^{3\beta}$ for young pulsars with
$\tau_{\rm c}<10^6$ yr by Equation~(\ref{ddot_p2}) and data with
$\alpha=0.5$ and $\alpha=1.0$, respectively. In both cases the model
can describe the data reasonably well. We thus conclude that a
simple power-law decay model is favored by data.
\begin{figure}[h]
\center{
\includegraphics[angle=0,scale=0.35]{f3.eps}}
\caption{Correlations between $\ddot{\nu}$ and
$(-\dot{\nu})^{1+\beta}/\nu^{3\beta}$ with $\alpha=0.5$ (panel (a))
and $\alpha=1.0$ (panel (b)) for young pulsars with $\tau_{\rm
c}\leq 2\times 10^6$~yr and $\ddot \nu>0$. The dotted lines are the
best-fit of $\ddot{\nu}=\eta(-\dot{\nu})^{1+\beta}/\nu^{3\beta}$.
This figure is taken from Paper I.} \label{Fig:3}
\end{figure}
\begin{figure}[h]
\center{
\includegraphics[angle=0,scale=0.45]{f4.eps}}
\caption{Correlation between $n_{\rm b}/\tau_{\rm c}$ and $\tau_{\rm
c}$. The different curves show the analytically calculated model
predictions using Equation~(\ref{n_b}) for different values of $k$.
The figure is taken from Paper II.} \label{Fig:4}
\end{figure}
In Figure~\ref{Fig:4} (taken from Paper II), we show the observed
correlation between $n_{\rm b}/\tau_{\rm c}$ and $\tau_{\rm c}$,
overplotted with the analytical results of Equation~(\ref{n_b}) with
$C(t)=\pm 1$, $T=10$~yr and $k=10^{-3},\ 10^{-4}$ and $10^{-5}$,
respectively. Once again, the analytical results agree with the data
quite well.
We also performed Monte Carlo simulations for the distributions of
reported data (H2010) in the $\ddot\nu$-$\tau_{\rm c}$ and $n_{\rm b}$-$\tau_{\rm c}$
diagrams, as shown in Figure \ref{Fig:5} (taken from
Paper IV), respectively. The two dimensional K-S tests show that the
distributions of two samples in each panel are remarkably
consistent.
\begin{figure}
\center{
\includegraphics[angle=0,scale=0.2]{f5.eps}}
\caption{$\ddot\nu$-$\tau_{\rm c}$ and $n_{\rm b}$-$\tau_{\rm c}$ diagrams. The
simulated data and reported data are represented with solid circles
and open circles, respectively. The figure is taken from Paper IV.}
\label{Fig:5}
\end{figure}
\begin{figure}[h]
\center{
\includegraphics[angle=0,scale=0.35]{f6.eps}}
\caption{Left: ``instantaneous" braking index $n_{\rm b}$ as a
function of time. Right: ``averaged" braking index $n_{\rm b}$ as a
function of the time span of the fitting. $T=15$~yr is used in both
cases. The figure is taken from Paper II.} \label{Fig:6}
\end{figure}
\subsection{The instantaneous and averaged values of $n_{\rm b}$}
Equation (\ref{n_b}) gives $n_{\rm b}$ as a function of $t$, i.e.,
the calculated $n_{\rm b}$ is in fact a function of time for a given
pulsar, as shown in the left panel of Figure~\ref{Fig:6}, in which
the horizontal axis ``Time" is the calendar time. We call $n_{\rm
b}$ calculated this way the ``instantaneous" braking index. However,
in analyzing the observed timing data of a pulsar, one usually fits
the data on TOAs over a certain time span to Equation (\ref{phase}),
where $\Phi (t)$ is the phase of TOA of the observed pulses, and
$\Phi_0$, $\nu_0$, $\dot \nu_0$ and $\ddot\nu_0$ are the values of
these parameters at $t_0$, to be determined from the fitting.
$n_{\rm b}$ calculated from $\nu_0$, $\dot \nu_0$ and $\ddot\nu_0$
is thus not exactly the same as the ``instantaneous" braking index.
We call $n_{\rm b}$ calculated this way over a certain time span the
``averaged" braking index.
In the right panel of Figure~\ref{Fig:6}, we show the simulated
result for the ``averaged" braking index as a function of time span.
It can be seen that the ``averaged" $n_{\rm b}$ is close to the
``instantaneous" one when the time span is shorter than $T$, which
is the oscillation period of the magnetic fields. The close match
between our model predicted ``instantaneous" $n_{\rm b}$ and the
``averaged" $n_{\rm b}$, as shown in Figure~\ref{Fig:4}, suggests
that the time spans used in the H2010 sample are usually smaller
than $T$.
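This effect can be reproduced directly. The sketch below (illustrative parameter values, assumed rather than fitted) integrates the braking law (\ref{braking law2}) with a purely oscillating field, computes the ``instantaneous'' $n_{\rm b}$ from Equation (\ref{braking_index}), and extracts the ``averaged'' $n_{\rm b}$ from a polynomial fit of $\nu(t)$ over a given time span; the averaged index collapses to $3$ once the span exceeds the oscillation period $T$.
\begin{verbatim}
import numpy as np

YR = 3.156e7
A, B0, k, T = 1e-39, 1e12, 1e-4, 15 * YR    # assumed, illustrative

def B(t):    return B0 * (1 + k * np.sin(2 * np.pi * t / T))
def Bdot(t): return B0 * k * (2 * np.pi / T) * np.cos(2 * np.pi * t / T)

t = np.linspace(0, 60 * YR, 60001)          # 60 yr of spin-down
dt = t[1] - t[0]
nu = np.empty_like(t); nu[0] = 10.0
for i in range(len(t) - 1):                 # integrate nudot = -A B^2 nu^3
    nu[i + 1] = nu[i] - dt * A * B(t[i])**2 * nu[i]**3

nudot  = -A * B(t)**2 * nu**3
nuddot = -A * (2 * B(t) * Bdot(t) * nu**3 + 3 * B(t)**2 * nu**2 * nudot)
n_inst = nuddot * nu / nudot**2             # "instantaneous" n_b, Eq. (5)

def averaged_nb(span_yr, t_mid=30 * YR):
    # "Averaged" n_b from a quadratic fit of nu(t) over the span.
    sel = np.abs(t - t_mid) < span_yr * YR / 2
    c2, c1, c0 = np.polyfit((t[sel] - t_mid) / YR, nu[sel], 2)
    return 2 * c2 * c0 / c1**2              # nuddot*nu/nudot^2 at t_mid

print("instantaneous n_b in [%.1f, %.1f]" % (n_inst.min(), n_inst.max()))
for span in (1, 5, 15, 45):
    print("span %2d yr: averaged n_b = %.2f" % (span, averaged_nb(span)))
\end{verbatim}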
For some pulsars the observation history may be longer than $T$ and
one can thus test the prediction of Figure~\ref{Fig:6} with the
existing data. In doing so, we can also obtain both $f$ and $T$ for
a pulsar, thus allowing a direct test of our model for a single
pulsar. We can in principle then include the model of magnetic field
evolution for each pulsar in modeling its long term timing data, in
order to remove the red noise in its timing residuals, which may
potentially be the limiting factor to the sensitivity in detecting
gravitational waves with pulsars.
\section{A phenomenological model for glitches}
In this section, we describe the phenomenological spin-down model
for the glitch and slow glitch recoveries (see Xie \& Zhang 2013;
hereafter Paper III). We found that Equation~(\ref{braking law2})
can be modified slightly to describe a glitch event,
\begin{equation}\label{rredipole}
\dot\nu\nu^{-3} =-H_0 G(t),
\end{equation}
where $H_0=\frac{8\pi^2(BR^3\sin\chi)^2}{3c^3I}=\frac{1}{2\tau_{\rm c}\nu_0^2}$, $\tau_{\rm c}=-\nu/2\dot\nu$ is the characteristic age
of a pulsar, and $G(t)$ represents very small changes in the
effective strength of dipole magnetic field $B\sin\chi$ during a
glitch recovery. In the following we assume $G(t)=1+\kappa
e^{-\Delta t/\tau}$.
Integrating and solving Equation~(\ref{rredipole}), we have
\begin{equation}\label{nu}
\nu(t)\approx \nu_0+\Delta\nu_{\rm d}e^{-\Delta t/\tau}.
\end{equation}
The derivative of $\nu$ is
\begin{equation}\label{dnu}
\dot\nu(t)\approx \dot\nu_0-\Delta\dot\nu_{\rm d}e^{-\Delta t/\tau}.
\end{equation}
We know $\Delta t\sim \tau \sim 100~{\rm days}$ and $\kappa\ll 1$.
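The two recovery types can be simulated directly from Equation (\ref{rredipole}); in the Python sketch below the spin parameters and the values of $\kappa$ and $\tau$ are invented for illustration, and $\Delta\nu$ is measured relative to the extrapolated post-jump trend.
\begin{verbatim}
import numpy as np

DAY = 86400.0
nu0, tau_c = 2.0, 1e5 * 365.25 * DAY       # assumed spin and char. age
H0 = 1 / (2 * tau_c * nu0**2)              # so nudot0 = -nu0 / (2 tau_c)

def recover(kappa, tau=100 * DAY, days=600, steps=20000):
    # Integrate nudot * nu^-3 = -H0 (1 + kappa e^{-t/tau}).
    t = np.linspace(0, days * DAY, steps)
    dt = t[1] - t[0]
    nu = np.empty_like(t); nu[0] = nu0
    for i in range(steps - 1):
        G = 1 + kappa * np.exp(-t[i] / tau)
        nu[i + 1] = nu[i] - dt * H0 * G * nu[i]**3
    trend = nu0 * (1 - t / (2 * tau_c))    # extrapolated post-jump trend
    nudot = -H0 * (1 + kappa * np.exp(-t / tau)) * nu**3
    return t / DAY, nu - trend, nudot

for kappa, label in ((+0.005, "classical"), (-0.005, "slow glitch")):
    t_d, dnu, nudot = recover(kappa)
    print(label, "kappa =", kappa, " Delta nu ->", dnu[-1])
\end{verbatim}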
In Figure {\ref{Fig:7}} (taken from Paper III), we show the
simulations for the reported three slow glitches of B1822-09 over
the 1995-2004 interval. We confirmed that the slow glitch behavior
can be explained by our phenomenological model with $\kappa<0$. It
is also clear that the instantaneous values of $\Delta\dot\nu$,
which are obtained directly from the model with the parameters
given by the simulation, are much larger than the reported results
in the literature.
Yuan et al. (2010) reported a very large glitch that occurred between
2005 August 26 and September 8 (MJDs 53608 and 53621), the largest
known glitch ever observed, with a fractional frequency increase of
$\Delta\nu/\nu\sim20.5\times10^{-6}$. In the left panels of
Figure~\ref{Fig:8}, we show the fits with one exponential term
$G(t)=(1+\kappa\exp{(-\Delta t/\tau)})$ for a comparison with the
``realistic'' simulation of two terms below. We show the modeled
glitch recovery with $G(t)=(1+\kappa_1\exp{(-\Delta
t/\tau_1)}+\kappa_2\exp{(-\Delta t/\tau_2)})$ in the right panels of
Figure~\ref{Fig:8}. Clearly the simulated profiles of the two term
fit matches the reported ones better than that of the one term fit.
One can see that $|\Delta \dot\nu_{\rm I}|$ are also slightly larger
than the reported $|\Delta \dot\nu_{\rm O}|$ for both the one-term
fit and two-term fit.
\begin{figure}
\centering
\includegraphics[scale=0.35]{f7.eps}
\caption{Slow glitches of Pulsar B1822--09. Observational results
are taken from Shabanova (2005). Upper panels: variations of
$\Delta\nu$ relative to the pre-glitch solution. Bottom panels:
variations of $\dot{\nu}$. Left panels: comparison between the
reported and simulated (both are also time-averaged) $\Delta\nu$ and
$\dot\nu$. Right panels: comparison between the reported and
restored (i.e. model-predicted) instantaneous $\Delta\nu$ and
$\dot\nu$. The figure is taken from Paper III.} \label{Fig:7}
\end{figure}
\begin{figure}
\centering
\includegraphics[angle=0,scale=0.32]{f8.eps}
\caption{The giant glitch of Pulsar B2334+61. Observational results
are taken from Yuan et al. 2010. Upper panels: variations of
$\Delta\nu$. Bottom panels: variations of $\dot{\nu}$. The left and
right panels represent for models with one and two decay components,
respectively. The figure is taken from Paper III.} \label{Fig:8}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.32]{f9.eps}
\caption{Schematic depictions of $\nu$, $\dot\nu$ and $G(t)$ for the
slow and classical glitch recoveries. The pre-glitch tracks are
represented by dotted line. The classical glitch recoveries are
represented by solid lines. The slow glitches are represented by
dashed lines.} \label{Fig:9}
\end{figure}
We thus concluded that the classical and slow glitch recoveries can
be well modeled by a simple function, $G(t)=1+ \kappa\exp{(-\Delta
t/\tau)}$, with positive or negative $\kappa$, respectively. Based
on the results, we generalize the variations of $\nu$ and $\dot\nu$
for slow and classical glitch recoveries, as shown in
Figure~\ref{Fig:9}. The pre-glitch tracks are represented by dotted
lines. After the jump, in a classical glitch recovery (represented
by solid lines) $\nu$ tends to restore its initial value, and
usually the restoration is composed of an exponential decay and a
permanent linear decrease with slope
$\Delta\dot\nu_{\rm p}$; in a slow glitch (represented by
dashed lines), however, $\nu$ increases monotonically, as shown in panel (1).
In panel (2), $\dot\nu$ of a classical glitch recovery tends to
restore its initial value, but cannot recover completely when
$\Delta\dot\nu_{\rm p}\neq 0$; $\dot\nu$ of a slow glitch recovery
recovers almost completely to its initial value, corresponding to the
increase of $\nu$.
The function $G(t)=1+ \kappa\exp{(-\Delta t/\tau)}$, with positive
or negative $\kappa$, is shown in panel (3). It should be
noticed, however, that the model has only two parameters,
$\kappa$ and $\tau$, from which we can obtain $\Delta\nu_{\rm d}$
and $\Delta\dot\nu_{\rm d}$, but not $\Delta\nu_{\rm p}$ and
$\Delta\dot\nu_{\rm p}$, which are not modelled. The expressions for
$\Delta\nu_{\rm p}$ and $\Delta\dot\nu_{\rm p}$, which relate to the
initial jumps of $\nu_0$ and $\dot\nu_0$, are not given by the
model, since only the glitch relaxation processes are considered
here. It has been suggested that these non-recoverable jumps are the
consequence of permanent dipole magnetic field increase during the
glitch event (Lin and Zhang 2004). Nevertheless, we conclude that
the major difference between slow glitch and classical glitch
recoveries are that they show opposite trends with opposite signs of
$\kappa$, in our phenomenological model.
Also, as shown above, the reported results of pulsar glitches are systematically biased, and thus cannot be compared directly with theoretical models.
In Paper III, we carried out extensive simulations examining all possible sources of bias due to imperfections in observations, in analysis methods, and in the ways results are reported in the literature. We suggested fitting procedures that can significantly reduce the
biases when fitting the observed glitch recoveries and comparing them with theoretical models.
\section{Summary}
We tested models of magnetic field evolution of NSs with the observed spin evolutions of individual pulsars and the
statistical properties of their timing noise. In
all models, the magnetic dipole radiation is assumed to dominate the instantaneous
spin-down of pulsars; therefore, different models of their magnetic
field evolution lead to different properties of their spin-down. We
constructed a phenomenological model of the evolution of the
magnetic fields of NSs, which is made of a long-term decay modulated
by short-term oscillations. By comparing our model predictions with the precisely observed spin-down evolutions of individual pulsars, we found that
the Hall drift and Hall waves in the NS crusts are responsible for
the long-term change and short-term quasi-periodical oscillations,
respectively. We showed that the observed braking
indices of the pulsars in the sample of H2010, which span a
range of more than 100 million, can be completely reproduced with
the model. We find that the ``instantaneous'' braking index of a
pulsar may be different from the ``averaged'' braking index obtained
from data. We also presented a phenomenological model for the
recovery processes of classical and slow glitches, which is used to
successfully model the observed slow and classical glitch events
from pulsars B1822-09 and B2334+61, respectively. Significant biases are found in the widely used procedures for fitting glitch recoveries and in the results reported in the literature.
\acknowledgements SNZ acknowledges partial funding support by 973 Program of China under grant Nos. 2009CB824800 and 2014CB845802, by the National Natural Science Foundation of China under grant Nos. 11133002 and 11373036, and by the Qianren start-up grant 292012312D1117210.
\section{Introduction}
The statistics of charge transport $\Delta Q$ through a junction is encoded in its generating function
\begin{equation}
\label{eq_1}
\chi(\lambda)=\ev{e^{i\lambda \Delta Q}}
= \sum \limits_{n=0}^{\infty} \frac{(i\lambda)^{n}}{n!}\,\ev{(\Delta Q)^{n}}\,.
\end{equation}
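For orientation, the cumulants generated by Eq.~(\ref{eq_1}) are the Taylor coefficients of $\log\chi(\lambda)$. A minimal symbolic sketch (the binomial $\chi(\lambda)=1+T(e^{i\lambda}-1)$ of a single transmission attempt with probability $T$ is chosen purely for illustration) recovers the polynomials in $T$ encountered below:
\begin{verbatim}
import sympy as sp

lam, T = sp.symbols('lambda T', positive=True)
chi = 1 + T*(sp.exp(sp.I*lam) - 1)       # binomial generating function, one attempt
logchi = sp.log(chi)
for n in (1, 2, 3):
    kappa = sp.factor(sp.simplify(sp.diff(logchi, lam, n).subs(lam, 0)/sp.I**n))
    print(n, kappa)                      # T,  T*(1-T),  T*(1-T)*(1-2*T), up to factoring
\end{verbatim}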
Different notions of charge transport, reflecting different measurement protocols, have been proposed. Each of them has been more or less closely associated with current correlators, which involve some time ordering prescription (like the Dyson \cite{levitov:93} or Keldysh \cite{beenakker:01, belzig:01, lesovik:03, salo:06} variants) or do not \cite{levitov:92}. The resulting statistics differ, but are always expressed in terms of the transparency $T$ of the junction. Somewhat imprecisely, the 3rd cumulant is
\begin{equation}
\label{eq_2}
\cumul{(\Delta Q)^{3}}\,\propto\, -2T^2(1-T)\,,\qquad
\cumul{(\Delta Q)^{3}}\,\propto\, T(1-T)(1-2T)\,,
\end{equation}
depending on whether time ordering is forgone, resp. imposed. Matters are however more subtle, since it was shown \cite{lesovik:03,sadovski:11} that, in some model with linear dispersion and depending on further details, the first result (\ref{eq_2}) arises even if the correlators are taken to be time ordered. The issue was clarified in \cite{graf:09}, where it was shown that: (i) as a rule, both variants of time ordering require amendment, giving way to unconventional prescriptions; and (ii) in the particular case of that same model, the new procedure yields the second result (\ref{eq_2}), at least in the Dyson variant.
The purpose of the present work is again twofold: First, to confirm item (i), though from a more general perspective, emphasizing the role of gauge coupling; and, second, to compute cumulants on the basis of the new procedure for models or variants not considered before. In particular, we will consider a model with quadratic dispersion and the cases in which the time ordering prescription reduces to the conventional one. In contrast to previous studies, binomial statistics is confirmed in all regimes.\\
For concreteness we present our results first within the setting of \cite{levitov:96}: A system which is endowed with charge and coupled to a spin $\tfrac{1}{2}$, the purpose of which is to serve as a detector, specifically as a galvanometer. Let the Hamiltonian of the combined system be
\begin{equation}
\label{eq_901}
H(\lambda\sigma_3 /2) = \begin{pmatrix} H(\lambda /2) & 0 \\ 0 & H(-\lambda /2)\end{pmatrix}\,,
\end{equation}
where the coupling $\lambda$ and the Hamiltonian $H(\lambda)$ will be specified later. For now we note that the spin precesses about the $3$-axis, since $\sigma_3$ is conserved.
Let $P\otimes\rho_\mathrm{i}$ be the initial joint state of the system and of the spin, and set $\langle A \rangle = \operatorname{tr}(P A)$, where $A$ is any operator of the system proper. Then
\begin{equation}
\label{eq_907}
\chi(\lambda) = \left\langle e^{i H(-\lambda/2)t}e^{-iH(\lambda/2)t}\right\rangle
\end{equation}
is the {\it influence functional} describing the spin state alone at a later time $t$,
\begin{equation}\label{if}
\rho_\mathrm{i}
=\begin{pmatrix}\rho_{++}&\rho_{+-}\\\rho_{-+}&\rho_{--}\end{pmatrix}\longmapsto
\rho_\mathrm{f}=
\begin{pmatrix}\rho_{++}&\rho_{+-}\chi(\lambda)\\
\rho_{-+}\chi(-\lambda)&\rho_{--}\end{pmatrix}\,,
\end{equation}
the representation being again in the eigenbasis of $\sigma_3$. With a grain of salt it may also be identified with the generating function in Eq.~(\ref{eq_1}), see item C3 below for details.
We restate (\ref{eq_907}) as
\begin{equation}
\label{eq_911}
\chi(\lambda) = \left\langle \overrightarrow{T} \exp \left( i \integral{0}{t}{t'}\HI(-\tfrac{\lambda}{2},t')\right)
\overleftarrow{T} \exp\left(-i \integral{0}{t}{t'}\HI(\tfrac{\lambda}{2},t')\right)\right\rangle\,,
\end{equation}
where $H$ is the Hamiltonian of the isolated system and
\begin{equation*}
\HI(\lambda, t)= e^{i H t}(H(\lambda)-H)e^{-i H t}
\end{equation*}
is the Hamiltonian in the interaction picture; moreover $\overleftarrow{T}$, $\overrightarrow{T}$ denote the usual and the reversed time ordering. We recall that the time ordering is supposed to occur inside the integrals once the exponentials are expanded in powers of $\lambda$. Eq.~(\ref{eq_911}) follows by inserting $1=\exp (-iHt)\exp (iHt)$ in the middle of the expectation (\ref{eq_907}) and by performing a Dyson expansion. \\% \cite{dyson:49}
Let $Q$ be the charge to the right of the junction. It is considered to be a primitive quantity, corresponding to $Q\otimes 1$ for the combined system, but independent of $\lambda$. By contrast and as implied by charge conservation, the current is then a derived quantity, namely the rate of charge:
\begin{equation}
\label{eq_910}
I(\lambda\sigma_3 /2) = \deriv{}{t}Q\otimes 1 = i\left[ H(\lambda\sigma_3 /2), Q\otimes 1 \right]
=\begin{pmatrix} I(\lambda /2) & 0 \\ 0 & I(-\lambda /2)\end{pmatrix}\,,
\end{equation}
where $I(\lambda)=i\left[H(\lambda), Q\right]$.
We shall next present two general types of Hamiltonians $H(\lambda)$ to be used in Eq.~(\ref{eq_901}). They are constructed from the Hamiltonian $H$ and from the charge $Q$. At first our goal is to emphasize structure. Later a more concrete physical meaning will be attached to the construction by means of a series of examples for $H$ and of remarks about $H(\lambda\sigma_3 /2)$.
The general Hamiltonians $H(\lambda)$ are as follows.
\begin{enumerate}
\item[A1.] \textit{Linear coupling}. The Hamiltonian is
\begin{equation*}
H(\lambda) = H - \lambda I\,,
\end{equation*}
where $I=i[H,Q]$ is the bare current through the junction, i.e. in absence of coupling. Hence,
\begin{equation*}
\HI(\lambda, t)=-\lambda I(t)\,,
\end{equation*}
where $I(t)$ is the bare current in the interaction picture, and Eq.~(\ref{eq_911}) reads
\begin{equation}
\label{eq_915}
\chi(\lambda)=
\left\langle \overrightarrow{T} \exp\left(i\tfrac{\lambda}{2}\integral{0}{t}{t'}I(t')\right)
\overleftarrow{T} \exp\left(i\tfrac{\lambda}{2}\integral{0}{t}{t'}I(t')\right)\right\rangle\,,
\end{equation}
which is the announced and well-known relation with current correlators (Keldysh time ordering). This setting is closely related to that of \cite{kindermann:01}.
\item[A2.] \textit{Gauge coupling}. Gauge transformations are generated by local charges $Q$. The Hamiltonian $H$ transforms as
\begin{align}
\label{eq_917}
H(\lambda)& = e^{i \lambda Q} H e^{-i \lambda Q} \\
\label{eq_918}
&= H - \lambda I - i \frac{\lambda^{2}}{2}[Q,I] + O(\lambda^{3})\,,\qquad (\lambda\to 0)\,
\end{align}
where $I=i[H,Q]$. (A specific model illustrating that coupling scheme on a spin will be considered shortly in C1.) Local currents are obtained by varying the gauge,
\begin{equation}
\label{eq_903}
H'(\lambda)\equiv\deriv{}{\lambda}H(\lambda) =-i\left[H(\lambda), Q\right]= -I(\lambda)\,,
\end{equation}
which results in
\begin{equation*}
I(\lambda) = e^{i \lambda Q} I e^{-i \lambda Q}\,.
\end{equation*}
The current parallels the kinematic momentum of a particle in that its value is gauge invariant, but its representation as an operator is not, i.e. $I(\lambda)\neq I$ as a rule. This is reflected in the current correlators, which are now those of
\begin{equation*}
\HI(\lambda, t)=-\integral{0}{\lambda}{\lambda'}I(\lambda',t)\,,
\end{equation*}
showing that Eq.~(\ref{eq_915}) is in need of amendment, and not just by replacing $I(t')$ with $I(\mp\lambda/2, t')$. The central result is that $\chi(\lambda)$ can still be expressed by bare current correlators; however Eq.~(\ref{eq_915}) has to be modified as
\begin{equation}
\label{eq_919}
\chi(\lambda)=
\left\langle \overrightarrow{T}^{*} \exp\left(i\tfrac{\lambda}{2}\integral{0}{t}{t'}I(t')\right)
\overleftarrow{T}^{*} \exp\left(i\tfrac{\lambda}{2}\integral{0}{t}{t'}I(t')\right)\right\rangle\,,
\end{equation}
where the unconventional ordering $\overleftarrow{T}^{*}$ means that the derivative in
\begin{equation*}
I(t) = \deriv{Q(t)}{t}\,,\qquad Q(t) = e^{i H t} Q e^{-i H t}
\end{equation*}
has to be taken after the time ordering,
\begin{equation}
\label{eq_921}
\overleftarrow{T}^{*}\left(I(t_1)\cdots I(t_n)\right)
:= \frac{\partial}{\partial t_{n}}\cdots\frac{\partial}{\partial t_{1}}\,
\overleftarrow{T}\left(Q(t_1)\cdots Q(t_n)\right)\,,
\end{equation}
(Matthews' time ordering \cite{matthews:49}); likewise for $\overrightarrow{T}^{*}$. Eq.~(\ref{eq_919}) follows from (\ref{eq_911}) by the identity
\begin{equation}
\label{eq_922}
\overleftarrow{T} \exp\left(-i\integral{0}{t}{t'}\HI(\lambda,t')\right)
= \overleftarrow{T}^{*} \exp\left(i\lambda \integral{0}{t}{t'} I(t')\right)\,,
\end{equation}
which will be discussed in Sect.~\ref{comp}. In summary: The $T$-ordering in connection with $\HI$ is equivalent to ${T}^{*}$-ordering in connection with $-\lambda I$, see Eqs.~(\ref{eq_911}, \ref{eq_919}).
\end{enumerate}
Item A1 is included in A2 as the special case in which
\begin{equation}
\label{eq_916}
[Q,I]=0\,,
\end{equation}
see Eq.~(\ref{eq_918}). Then the $T$ and ${T}^{*}$-orderings of currents may be used interchangeably. Moreover, the operator of current (\ref{eq_910}) becomes insensitive to the inclusion of spin: $I(\lambda)=I$.
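Identity (\ref{eq_922}) and the role of the contact terms can be probed numerically at order $\lambda^{2}$. In the following minimal sketch, random Hermitian matrices stand in for $H$ and $Q$ (a toy setting chosen for illustration only, not one of the models below); the exact second derivative of $e^{iHt}e^{-iH(\lambda)t}$ at $\lambda=0$ agrees with the $T^{*}$-ordered prediction, while dropping the contact term $\int_0^t\mathrm{d}t'\,[Q(t'),I(t')]$ spoils the agreement:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
d, t, N = 4, 0.7, 2000
def herm():
    X = rng.standard_normal((d, d)) + 1j*rng.standard_normal((d, d))
    return (X + X.conj().T)/2
H, Q = herm(), herm()              # toy Hermitian "Hamiltonian" and "charge"
I = 1j*(H@Q - Q@H)                 # bare current I = i[H,Q]

dt = t/N
S  = np.zeros((d, d), complex)     # running integral of I(s)
TT = np.zeros((d, d), complex)     # double integral of T(I(t1)I(t2))
C  = np.zeros((d, d), complex)     # contact term: integral of [Q(s), I(s)]
for s in (np.arange(N) + 0.5)*dt:  # midpoint rule
    U = expm(1j*H*s)
    Is, Qs = U@I@U.conj().T, U@Q@U.conj().T
    TT += (2*Is@S + Is@Is*dt)*dt   # later-time factor stands to the left
    S  += Is*dt
    C  += (Qs@Is - Is@Qs)*dt

# exact second derivative at lambda = 0, using
# e^{iHt} e^{-iH(lambda)t} = e^{i lambda Q(t)} e^{-i lambda Q}
Ut = expm(1j*H*t); Qt = Ut@Q@Ut.conj().T
d2 = -(Qt@Qt - 2*Qt@Q + Q@Q)

print(np.linalg.norm(d2 + TT + C))  # small: Matthews' ordering matches
print(np.linalg.norm(d2 + TT))      # order one: plain T misses the contact term
\end{verbatim}
The first norm is limited only by the quadrature error of the midpoint rule, the second by the size of the neglected commutator.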
Fairly concrete examples illustrating the general types are provided by a {\it single} particle moving on the line. Depending on its position $x\in\mathbb{R}$, the charge $Q$ to the right of the junction is $1$ or $0$, and is in fact implemented as the multiplication operator by the Heaviside function $\theta(x)$. That function can be replaced quite conveniently by a smooth version thereof, $Q=Q(x)$; the size of the region where they differ may be loosely associated with that of the detector. Through second quantization the examples implicitly describe {\it many} independent particles, too.
\begin{enumerate}
\item[B1.] \textit{Linear dispersion}. The Hamiltonian $H=p+V(x)$ on $L^{2}(\mathbb{R})$ describes a right moving particle. The current $I=i[H,Q]=Q'(x)$ satisfies $[Q,I]=0$, i.e. (\ref{eq_916}), whence the $T$-ordering suffices as a rule. By combining two copies of such a model, scattering between channels of left and right movers can be included. However Eq. (\ref{eq_916}) fails in the limiting but computationally simple case of $Q(x)=\theta(x)$ and $V(x)$ a point interaction at $x=0$ \cite{graf:09}. As a result, $T^{*}$-ordering is required in this special case.
\item[B2.] \textit{Quadratic dispersion}. The Hamiltonian is $H=p^{2}+V(x)$. Then
\begin{gather}
\label{eq_924}
I = i[H,Q] = p Q'(x) + Q'(x) p\,,\\
\label{eq_925}
[Q,I] = 2i Q'(x)^{2} \neq 0\,,
\end{gather}
which calls for $T^{*}$-ordering as a rule. However, in the limiting case of an ever smoother transition function we have $Q'^{2}\to 0$ (in any reasonable norm). As a result, $T$-ordering should suffice in that special case, as we will confirm.
\end{enumerate}
A few remarks will now address the role of the spin.
\begin{enumerate}
\item[C1.] We recall \cite{levitov:94} that the Hamiltonian (\ref{eq_901}, \ref{eq_917}) is physically realized by a spin coupled to the current flowing in a wire, as we presently explain. Let the straight wire run along the 1--axis and let $\vec{x}_0$ be the position of the spin in the 12--plane.
The vector potential due to the spin $\vec{\sigma}/2$ is
\begin{equation*}
\vec{A}=\vec{\nabla}f\wedge \frac{\vec{\sigma}}{2}\,,
\end{equation*}
where $f(\vec{x})=\mu|\vec{x}-\vec{x}_0|^{-1}$. (More general functions $f$ are obtained by smearing the position of the spin.) A particle in the wire couples to the spin through $(\vec{p}- \vec{A})\cdot\vec{e}_1\equiv p-\vec{A}\cdot\vec{e}_1$, where
\begin{equation*}
\vec{A}\cdot\vec{e}_1=(\vec{e}_1\wedge\vec{\nabla}f)\cdot \frac{\vec{\sigma}}{2}
=(\vec{e}_1\wedge\vec{\nabla}f)\cdot\vec{e}_3\frac{\sigma_3}{2}=\frac{\partial f}{\partial x_2}\frac{\sigma_3}{2}\,,
\end{equation*}
since by the stated geometry $\vec{e}_1\wedge\vec{\nabla}f$ lies in the 3--direction. We note that $\partial f/\partial x_2=O(r^{-2})$, $(r=|\vec{x}|\to\infty)$; hence, along the wire, $\partial f/\partial x_2=\lambda Q'(x)$ for some coupling $\lambda$ and a function $Q(x)$ of the kind described above. In summary: As $p$ gets replaced by
\begin{equation*}
(\vec{p}- \vec{A})\cdot\vec{e}_1=p-\lambda Q'(x)\frac{\sigma_3}{2}=e^{i\lambda Q\sigma_3/2}pe^{-i\lambda Q\sigma_3/2}\,,
\end{equation*}
so does $H$ by $H(\lambda\sigma_3 /2)$.
\item[C2.]
In the previous item $\lambda Q'(x)$ arises as a connection. It appears with the replacement $\lambda\to\lambda \sigma_3/2$, by which it acts non-trivially on the spin. As we presently explain, that property provides a geometric mechanism (though different from the physical one) by which the rotation of the spin becomes a counter of the transported charge. The mechanism somewhat resembles that of a screw, whose motion rigidly links rotation with translation. More precisely, a state $\psi$ of the combined system obeys parallel transport along the line if
\begin{equation*}
\bigl(p-\lambda Q'(x)\frac{\sigma_3}{2}\bigr)\psi=0\,,
\end{equation*}
or equivalently if
\begin{equation*}
\deriv{\psi}{x}=i\lambda Q'(x)\frac{\sigma_3}{2}\psi\,.
\end{equation*}
Given that a spin state $\psi$ changes by $\mathrm{d}\psi=-i(\vec{\sigma}\cdot\vec{e}/2)\psi\mathrm{d}\theta$ under a rotation by $\mathrm{d}\theta$ about $\vec{e}$, the condition states that a charge transport $\mathrm{d}Q=Q'(x)\mathrm{d}x$ is linked to a precession $\mathrm{d}\theta=-\lambda \mathrm{d}Q$. (A toy numerical check of this mechanism is sketched right after the present list.)
\item[C3.] The conclusion of the previous item can not be reached for the physical evolution of the combined system, as generated by the Hamiltonian (\ref{eq_901}), at least not without further ado. In Bohr's spirit \cite{bohr} and in elaboration of \cite{levitov:94} it is convenient to talk about the apparatus in classical terms, here a classical spin $\vec{s}\in\mathbb{R}^{3}$, ($\vert \vec{s}\vert =1$). The spin components $s_i$ have Poisson brackets $\lbrace s_i, s_j\rbrace = \epsilon_{ijk}s_k$ and the Hamiltonian is $H(\lambda s_3)$. In particular Eq.~(\ref{eq_901}) would be recovered by quantization. The equations of motion are
\begin{equation*}
\dot{s_i} = \lbrace s_i, H(\lambda s_3)\rbrace = \lambda H'(\lambda s_3) \lbrace s_i, s_3\rbrace\,,
\end{equation*}
or, by (\ref{eq_903}),
\begin{equation*}
\dot{\vec{s}} = -\lambda I(\lambda s_3) \vec{e}_3\wedge\vec{s}\,.
\end{equation*}
The angle of precession thus is
\begin{equation}
\label{eq_906}
\theta = -\lambda q\,,
\end{equation}
revealing the charge $q$ that has flowed during a time interval $[0,t]$. In this context $\chi$ can be interpreted as the {\it generating function} of the transported charge:
\begin{equation*}
\chi(\lambda) = \integ{}{q} \widehat{\chi}(q) e^{i \lambda q}\,.
\end{equation*}
Indeed, since
\begin{equation*}
\rho_\mathrm{s}(\theta)=
\begin{pmatrix}\rho_{++}&\rho_{+-}e^{-i\theta}\\
\rho_{-+}e^{i\theta}&\rho_{--}\end{pmatrix}\,,
\end{equation*}
is the state $\rho_\mathrm{i}$ of the spin after precession by the angle $\theta$, its final state (\ref{if}) is \cite{levitov:96}
\begin{equation*}
\rho_\mathrm{f} = \integ{}{q} \widehat{\chi}(q)\rho_s(-\lambda q)\,.
\end{equation*}
In view of (\ref{eq_906}), this is consistent with $\widehat{\chi}(q)$ being the probability (density) of transport $q$, as claimed. The interpretation is however hampered by the fact that $\widehat{\chi}(q)$ may fail to be positive \cite{kindermann:03}.
\end{enumerate}
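As a toy numerical check of the screw mechanism of item C2 (a sketch; the Gaussian profile of $Q'$ and the value of $\lambda$ are arbitrary choices), one may integrate the parallel-transport equation $\mathrm{d}\psi/\mathrm{d}x=i\lambda Q'(x)(\sigma_3/2)\psi$ across the detector region and read off the precession angle:
\begin{verbatim}
import numpy as np

lam = 0.37                                 # coupling (arbitrary value)
x = np.linspace(-8.0, 8.0, 4001)
dx = x[1] - x[0]
Qp = np.exp(-x**2/2)/np.sqrt(2*np.pi)      # smooth Q'(x), unit transported charge

psi = np.array([1.0, 1.0], complex)/np.sqrt(2)   # spin pointing along the 1-axis
for q in Qp:                               # sigma_3 is diagonal: exact one-step propagator
    psi *= np.exp(1j*lam*q*dx/2*np.array([1.0, -1.0]))

theta = -np.angle(psi[0]*np.conj(psi[1]))  # precession angle about the 3-axis
print(theta, -lam)                         # theta = -lam * (transported charge = 1)
\end{verbatim}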
We should also mention an earlier approach \cite{levitov:93, muzykanskii:03} to charge transport, which does not explicitly model a detector. It is based on two measurements of the charge $Q$, occurring at times $0$ and $t$. The transported charge is then identified with the difference $\Delta Q$ of their outcomes. The associated generating function is
\begin{equation}
\label{eq_909}
\widetilde{\chi}(\lambda) = \left\langle e^{i \lambda Q(t)}e^{-i \lambda Q}\right\rangle\,,
\end{equation}
at least in the case when the initial state $\rho$ is an eigenstate of $Q$ or an incoherent superposition of such, i.e., for $[\rho,Q]=0$; then $\widetilde{\chi}$ actually agrees with the expression (\ref{eq_907}). We will use the definition beyond this restriction, because it is irrelevant in the limit of large times. (See however \cite{shelankov:03} for the unrestricted definition.) By
$(e^{i H t}e^{i \lambda Q}e^{-i H t})e^{-i \lambda Q} = e^{i H t}(e^{i \lambda Q}e^{-i H t}e^{-i \lambda Q})$ we may restate the generating function in a form closer to (\ref{eq_907}),
\begin{equation*}
\widetilde{\chi}(\lambda) = \left\langle e^{i Ht}e^{-iH(\lambda)t}\right\rangle\,,
\end{equation*}
with $H(\lambda)$ as in Eq.~(\ref{eq_917}); and further in terms of current correlators by means of (\ref{eq_922})
\begin{equation}\label{eq_919bis}
\widetilde{\chi}(\lambda) = \left\langle\overleftarrow{T}^{*} \exp\left(i\lambda \integral{0}{t}{t'} I(t')\right)\right\rangle\,,
\end{equation}
where the star can again be dropped under the assumption (\ref{eq_916}).
We shall now describe the main result. It confirms the binomial statistics of charge transport in a variety of situations. Specifically, in the long--time limit the 2nd and 3rd cumulants of charge transport are
\begin{align}
\label{eq_19}
\lim\limits_{t\to\infty}\frac{1}{t}\cumul{(\Delta Q)^{2}}
&= \frac{1}{2 \pi} \integral{\mur}{\mul}{E} T(E)\left( 1-T(E)\right)\,,\\
\label{eq_20}
\lim\limits_{t\to\infty}\frac{1}{t}\cumul{(\Delta Q)^{3}}
&= \frac{1}{2 \pi} \integral{\mur}{\mul}{E} T(E)\left( 1-T(E)\right)\left( 1-2\,T(E)\right)\,,
\end{align}
where $\mur<\mul$ are the Fermi energies of the states incoming from the right and the left sides of the junction, and $T(E)$ is the transmission probability (transparency) at energy $E$. (Eq.~(\ref{eq_19}) was first obtained in~\cite{lesovik:89} without any time ordering prescription.) The results apply to
\begin{itemize}
\item[D1.] either generating function, Eq.~(\ref{eq_919}) or (\ref{eq_919bis});
\item[D2.] Hamiltonians with linear or quadratic dispersion relation (see items B, but in second quantization);
\item[D3.] independently of how sharp the jump of $Q$ is, i.e. of the width over which $Q(x)$ differs from $\theta(x)$.
\end{itemize}
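For orientation, Eqs.~(\ref{eq_19}, \ref{eq_20}) are immediate to evaluate once $T(E)$ is known. The sketch below uses the transparency $T(E)=E/(E+\gamma^{2}/4)$ obtained for a $\delta$-barrier $V(x)=\gamma\delta(x)$ in the quadratic model $H=p^{2}+V$; the values of $\gamma$, $\mur$, $\mul$ are arbitrary illustrative choices:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

g, muR, muL = 2.0, 1.0, 3.0         # barrier strength and Fermi energies (illustrative)
T = lambda E: E/(E + g**2/4)        # transparency of V(x) = g*delta(x), H = p^2 + V

c2 = quad(lambda E: T(E)*(1 - T(E)), muR, muL)[0]/(2*np.pi)
c3 = quad(lambda E: T(E)*(1 - T(E))*(1 - 2*T(E)), muR, muL)[0]/(2*np.pi)
print(c2, c3)                       # long-time rates of the 2nd and 3rd cumulants
\end{verbatim}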
Of some interest is the way that independence arises. The time ordering (\ref{eq_921}) is spelled out for $n=2$ as
\begin{align}
\overleftarrow{T}^{*}(I(t_1)I(t_2))
&= \frac{\partial}{\partial t_{2}}\frac{\partial}{\partial t_{1}}\bigl(Q(t_1)Q(t_2)\theta(t_{1}-t_{2})+Q(t_2)Q(t_1) \theta(t_{2}-t_{1})\bigr)\nonumber\\
&= \overleftarrow{T}(I(t_1)I(t_2))+[Q(t_1),I(t_1)]\delta(t_{1}-t_{2})\,,\label{ect}
\end{align}
and thus differs from the usual one, $\overleftarrow{T}$, by {\it contact terms} supported at coinciding times; likewise for $n=3$, where
\begin{equation}
\overleftarrow{T}^{*}(I_1 I_2 I_3) = \overleftarrow{T}(I_1 I_2 I_3) + 3\delta(t_2 - t_3) \overleftarrow{T}(I_1 [Q_2, I_2])
+ \delta(t_1 - t_2)\delta(t_2 - t_3)[Q_1,[Q_1,I_1]]\label{ect1}
\end{equation}
with the shorthand notation $I_i=I(t_i)$ (for general $n$, see \cite{graf:09}). Depending on circumstances, the terms in the expansions contribute variably to the invariable results (\ref{eq_19}, \ref{eq_20}), as we now detail.
Items D2, D3 come with interpolating parameters: The Fermi wavelength $\lambda_\textsc{F}$, with $\lambda_\textsc{F}\to 0$ as the linear dispersion is approached through rescalings
\begin{equation*}
\frac{\lambda_\textsc{F}}{2}(p\pm \lambda_\textsc{F}^{-1})^2-\frac{\lambda_\textsc{F}^{-1}}{2}\to \pm p
\end{equation*}
of right and left movers; and the width $l$ of the transition region. In the limit $l\to\infty$ \cite{Ng} of an ever larger detector, $Q$ ceases to be defined. In the opposite limit $l\to 0$, and in the case of quadratic dispersion ($\lambda_\textsc{F}>0$), the commutator $[Q,I]$ diverges even in the sense of distributions, because $Q'(x)\to \delta(x)$ (see item B2). One can, though, discuss the limits of the correlators (\ref{eq_19}, \ref{eq_20}), and of their parts, in these regimes, but not the limits of the models themselves. However a model with $\lambda_\textsc{F}=0$, $l=0$ exists \cite{graf:09}, describing a
scatterer and a detector which are both pointlike and coincident.
Let us discuss Eq.~(\ref{eq_919bis}) first. The contributions of contact terms to the cumulants (\ref{eq_19}, \ref{eq_20}) are {\it non-trivial}, except in the limits for which $l/\lambda_\textsc{F}\to \infty$, as shown in Fig.~\ref{fig_1}.
\begin{figure}[hbtp]
\centering
\input{fig1.pdf_t}
\captiontitle{Matthews' vs. usual time ordering of current correlators}{The parameter range $(l,\lambda_\textsc{F})$ of models is shown as a square, which includes solid parts of the boundary. In the limits (thick arrows) of linear dispersion ($\lambda_\textsc{F}\to0$), or large detectors ($l\to\infty$), the contact terms appearing in Matthews' time ordering, $T^{*}$, vanish. In these limiting cases there is agreement with usual time ordering, $T$.}
\label{fig_1}
\end{figure}
As for Eq.~(\ref{eq_919}) the same is true for the 3rd cumulant; however for the 2nd the contact terms cancel between $\overrightarrow{T}^{*}$ and $\overleftarrow{T}^{*}$.\\
The relation with the literature is eased by books and review articles on noise and counting statistics, among them \cite{blanter:00,kindermann:03,sadovski:11}.\\
The plan of the article is as follows. In Sect.~\ref{sec_models} we introduce a model with quadratic dispersion relation and review the main features of a limiting case with linear dispersion. In Sect.~\ref{sec_overview} we will explain the broad structure of the computation of the cumulants and emphasize the methods. The first part of Sect.~\ref{sec_derivations} is devoted to the detailed derivation of asymptotic binomial statistics for the model with quadratic dispersion. In the second and third parts, the limiting cases of a large detector and of linear dispersion are given independent derivations. In Sect.~\ref{comp} we recall the reason for the $T^{*}$ time ordering and discuss the equivalence between the methods used here, resp. elsewhere, as e.g.~in \cite{lesovik:03}. Finally, two appendices collect some auxiliary results. Appendix~\ref{sec_temp_distr} contains the long--time limits of some distributions, while in Appendix~\ref{sec_matrix_elements} matrix elements of charge and current are computed.\\
\section{The models}
\label{sec_models}
\subsection{The quadratic dispersion model}
\label{subsec_quadratic_model}
We consider two conducting leads connected through a junction and model the whole device by independent fermions moving on the real line. The single-particle Hamiltonian is $H=p^{2}+V(x)$, acting on the Hilbert space $L^{2}(\mathbb{R})$. The kinetic energy is quadratic in the momentum $p=-id/dx$; the potential $V$ describes the junction and vanishes away from it, i.e., outside of some interval $[ -x_{0},\,x_{0}]$. The potential will enter the discussion only through its reflection and transmission amplitudes, $r(k)$ and $t(k)$. They can be read off from the \textit{Lippmann--Schwinger (LS) states} $\vert \psi_{k} \rangle$: Continuum eigenstates of $H$ of incoming momentum $k\neq 0$ and eigenvalue $E = k^{2}$ have wave--functions $\psi_{k}$ given outside that interval as
\begin{equation}\label{eq_21}
\begin{aligned}
k > 0\; : \quad \psi_{k}(x)&=\begin{cases} e^{ikx}+r(k) e^{-ikx}\,, \quad & (x < -x_{0})\\
t(k) e^{ikx}\,, \quad &(x > x_{0})
\end{cases}\\
k < 0\; : \quad \psi_{k}(x)&=\begin{cases} t(k) e^{ikx}\,, \quad & (x < -x_{0})\\
e^{ikx}+r(k) e^{-ikx}\,, \quad &(x > x_{0})\,.
\end{cases}
\end{aligned}
\end{equation}
Note that states with $k>0$ ($k<0$) have incoming parts that are right (left) moving. By the Schr\"odinger equation the scattering matrix
\begin{equation*}
S(k) = \begin{pmatrix}
t(k) & r(-k)\\
r(k) & t(-k)
\end{pmatrix}
\end{equation*}
is unitary. In particular, the transmission and reflection probabilities are even in $k$, whence $T(k^2) := \vert t(\pm k) \vert^2$ and $R(k^2) := \vert r(\pm k) \vert^2$ are well-defined; they satisfy
\begin{align}
\label{eq_23}
T(E) + R(E) = 1\,.
\end{align}
LS states form a (continuum) basis
of $L^{2}(\mathbb{R})$ normalized as
\begin{equation}
\label{eq_24}
\frac{1}{2 \pi} \integral{\mathbb{R}}{}{k} \vert \psi_{k} \rangle \langle \psi_{k} \vert = \mathds{1}\,.
\end{equation}
Time--reversal invariance of $H$ is, incidentally, a property which is not relied upon, in that the above discussion still applies when $p$ is replaced by $p-A(x)$, at least as long as $A$ has the same support properties as $V$.
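For a concrete instance (a sketch; the $\delta$-barrier $V(x)=\gamma\delta(x)$ is chosen only because matching at $x=0$ yields its amplitudes in closed form, $t(k)=2ik/(2ik-\gamma)$ and $r(k)=t(k)-1$, both even in $k$ by parity), unitarity of $S(k)$ and Eq.~(\ref{eq_23}) may be checked directly:
\begin{verbatim}
import numpy as np

g = 0.8                                # barrier strength (illustrative)
for k in (0.5, 1.0, 2.0):
    tk = 2j*k/(2j*k - g)               # transmission amplitude of H = p^2 + g*delta(x)
    rk = tk - 1.0                      # reflection amplitude; t, r even in k here
    S = np.array([[tk, rk], [rk, tk]]) # S(k) for the parity-even barrier
    print(k, np.allclose(S.conj().T@S, np.eye(2)), abs(tk)**2 + abs(rk)**2)
\end{verbatim}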
As mentioned in the introduction, the charge to the right of the junction may be implemented on $L^{2}(\mathbb{R})$ as a multiplication operator, $Q=Q(x)$. More specifically, we assume
\begin{equation}
\label{eq_25}
Q(x) = \begin{cases} 0\,, \qquad &(x < x_{0})\\1\,, &(x \gg x_{0})\,.
\end{cases}
\end{equation}
The left and right leads are assumed to be reservoirs with energy levels occupied up to Fermi energies $\mul,\mur>0$ biased by $V=\mul-\mur>0$. The occupation of LS states thus is
\begin{equation}
\label{eq_27}
\rho(k)= \begin{cases}1\,,\quad &(-\kr\le k\le \kl)\\
0\,,\quad &\text{otherwise}\end{cases}\,,
\end{equation}
where $k_{\textsc{l},\textsc{r}}=(\mu_{\textsc{l},\textsc{r}})^{1/2}$; or for short $\rho(k)=\theta(k\in J)$ where $J:=[-\kr,\kl]$. More precisely, $\rho\equiv\rho(k)$ is the single--particle density matrix ($0\le \rho=\rho^*\le 1$) of the many--particle state $\langle\cdot\rangle$; actually $\rho$ determines a {\it quasi--free fermionic state} $\langle\cdot\rangle$, which for practical purposes means that expectations of many--particle operators can be computed by means of Wick's rule. As a matter of fact, for $\rho$ a projection as in (\ref{eq_27}) the state $\langle\cdot\rangle$ is necessarily quasi--free.
\subsection{The linear dispersion model}
\label{subsec_linear_model}
We briefly review the main features of the linear dispersion relation model used in \cite{graf:09} and underlying the computations of Sect.~\ref{subsec_limit_linear}. For a more detailed exposition, we refer to the original paper.
In the limit of long times (or low frequencies) it appears appropriate to linearize the dispersion relation near the Fermi energy. A suitable model arises by reinterpreting the two leads on either side of the junction: Rather than viewing them as non--chiral half--lines, they are now (full) chiral lines. In absence of scattering, which now amounts to a cut junction, the Hamiltonian is linear in the momentum $p=-id/dx$ and is given as
\begin{equation*}
H_0 = \begin{pmatrix} p &0 \\ 0&p \end{pmatrix}\,,
\end{equation*}
on $L^{2}(\mathbb{R})\oplus L^{2}(\mathbb{R})$. A point scatterer is then placed at $x=0$; it results in a unitary scattering matrix
\begin{equation*}
S = \begin{pmatrix}
r & t'\\
t & r'
\end{pmatrix}\,,
\end{equation*}
which is independent of energy. In particular, $T=|t|^2=|t'|^2$ and $R=|r|^2=|r'|^2$ still satisfy Eq.~(\ref{eq_23}). The single-particle charge operator is the projection onto the right lead,
\begin{equation*}
Q = \begin{pmatrix} 0 &0 \\ 0 &1 \end{pmatrix}\,,
\end{equation*}
and the initial single--particle density matrix is the projection
\begin{equation*}
\rho = \begin{pmatrix} \theta(\mul-p) &0 \\ 0 &\theta(\mur-p) \end{pmatrix}\,,
\end{equation*}
representing two infinitely deep Fermi seas biased by $V = \mul - \mur>0$. The condition $[\rho,Q]=0$, underlying the unrestricted use of the generating function (\ref{eq_909}), is satisfied here.
A feature of the model is that the scattering process is instantaneous in the sense that the position of the point scatterer coincides with that of the detector. As a result, $[Q,I]\neq 0$ (\cite{graf:09}, Eq.~(3.17)), and the contact terms arising from $T^{*}$-ordering matter. In terms of the discussion given at the end of the introduction, the model has vanishing length scale $l=0$.
However the scattering process can be regarded as strictly causal by separating the two positions by $l>0$. This is achieved by replacing the charge operator $Q$ by its regularization $Q_l:= Q\theta(\vert x\vert > l)$, and accordingly the current $I$ by $I_l:=i[H,Q_l]=Q[\delta(x-l)-\delta(x+l)]$. Then the commutator $[Q_l,I_l]$ vanishes and with it all the contact terms.
\section{Overview}
\label{sec_overview}
Before engaging in the detailed computation of the cumulants (\ref{eq_19}, \ref{eq_20}) it is worthwhile giving an overview of the methods involved, and illustrating them in simple instances. The physical setting has been discussed at length in the introduction and will be recalled only briefly. We consider two leads separated by a tunnel junction, with particles in an initial multi--particle state $\langle\cdot\rangle$. We investigate the statistics of charge transport, $\Delta Q$, across the junction and during a time $t$. Specifically, we are interested in its moments $\ev{(\Delta Q)^{n}}$, determined as the expansion coefficients of some generating function, see Eq.~(\ref{eq_1}); and actually in the long--time limit of the associated cumulants $\cumul{(\Delta Q)^n}$.\\
\noindent{\bf Generating functions.} In the introduction two distinct generating functions were presented:
\begin{align}
\label{eq_402}
\widetilde{\chi}(\lambda) &= \ev{\overleftarrow{T}^{*} \exp\left(i\lambda \integral{0}{t}{t'} I(t')\right)}\,,\\
\chi(\lambda)&=
\ev{\overrightarrow{T}^{*} \exp\left(i\tfrac{\lambda}{2}\integral{0}{t}{t'}I(t')\right)
\overleftarrow{T}^{*} \exp\left(i\tfrac{\lambda}{2}\integral{0}{t}{t'}I(t')\right)}\,,\nonumber
\end{align}
where
\begin{equation}
I(t) = e^{i H t} I e^{-i H t}
\label{eq_403bis}
\end{equation}
is the current across the junction. It is expressed in terms of the charge $Q(t)$ to its right as $I(t) =dQ(t)/dt$, whence
\begin{equation}
\label{eq_403ter}
I = i[H,Q]\,.
\end{equation}
We shall refer to $\widetilde{\chi}$ and $\chi$ as the generating functions of
the \textit{first} and of the \textit{second kind}, respectively.\\
\noindent{\bf Results.} The 2nd and 3rd cumulants exhibit asymptotic binomial behavior,
\begin{align}
\label{eq_413}
\lim\limits_{t\to\infty}\frac{1}{t}\cumul{(\Delta Q)^{2}}
&= \frac{1}{2 \pi} \integral{\mur}{\mul}{E} T(E)\left( 1-T(E)\right)\,,\\
\label{eq_414}
\lim\limits_{t\to\infty}\frac{1}{t}\cumul{(\Delta Q)^{3}}
&= \frac{1}{2 \pi} \integral{\mur}{\mul}{E} T(E)\left( 1-T(E)\right)\left( 1-2\,T(E)\right)\,,
\end{align}
where $T(E)$ is the transparency and $\mu_{\textsc{l},\textsc{r}}$ are the Fermi energies on the left and right leads, in various instances and for either generating function. Specifically:
\begin{itemize}
\item[-] in the quadratic dispersion model, contact terms matter up to the special case of an ever smoother step of the charge operator $Q(x)$ (see Section~\ref{subsec_quadratic_model}).
\item[-] in the linear dispersion model, contact terms vanish, except in the special case of instantaneous scattering (see Section~\ref{subsec_linear_model}).
\end{itemize}
In the rest of this section we address the methods used to obtain the results from the generating functions. \\
\noindent{\bf $T^{*}$-ordering.} It is convenient to recall the expansion in contact terms for $\overleftarrow{T}^{*}$-ordered products. With the shorthand notation $A_i=A(t_i)$, $A\equiv I,Q$, Eqs.~(\ref{ect}, \ref{ect1}) read
\begin{align}
\label{eq_405}
\overleftarrow{T}^{*}(I_1 I_2)
&= \overleftarrow{T}(I_1 I_2)+\delta(t_1-t_2)[Q_1, I_1]\,,\\
\label{eq_406}
\overleftarrow{T}^{*}(I_1 I_2 I_3) &= \overleftarrow{T}(I_1 I_2 I_3) + 3\delta(t_2 - t_3) \overleftarrow{T}(I_1 [Q_2, I_2])
+ \delta(t_1 - t_2)\delta(t_2 - t_3)[Q_1,[Q_1,I_1]]\,.
\end{align}
The ordering by $\overrightarrow{T}^{*}$ yields the same expansions, up to a minus sign for contact terms involving an odd number of commutators. A general expression for products of all orders may be found in \cite{graf:09}.\\
\noindent{\bf Cumulants.} Based on the generating function $\widetilde{\chi}$ of the first kind we have
\begin{align}
\label{eq_409}
\cumul{(\Delta Q)^2} &= \integral{0}{t}{^{2}t}\cumul{\overleftarrow{T}^{*}(I_1 I_2)}
= \integral{0}{t}{^{2}t}\cumul{\overleftarrow{T}(I_1 I_2)} + \integral{0}{t}{t_1} \cumul{[Q_1,I_1]}\,,\\
\cumul{(\Delta Q)^3} &= \integral{0}{t}{^{3}t}\cumul{\overleftarrow{T}^{*}(I_1 I_2 I_3)}\nonumber\\
\label{eq_410}
&= \integral{0}{t}{^{3}t}\cumul{\overleftarrow{T}(I_1 I_2 I_3)}
+3\integral{0}{t}{^{2}t}\cumul{\overleftarrow{T}(I_1[Q_2,I_2])}+\integral{0}{t}{t_1}\cumul{[Q_1,[Q_1,I_1]]}\,,
\end{align}
where $d^nt=dt_1\ldots dt_n$. At first, similar equations are obtained for the moments by means of Eqs.~(\ref{eq_402}) and (\ref{eq_405}, \ref{eq_406}). Moments can then be replaced by cumulants; indeed, their combinatorial relation is universal, and hence the same on both sides of the equations.
Based on the generating function $\chi$ of the second kind we likewise find
\begin{align}
\label{eq_411}
\cumul{(\Delta Q)^2} &= \frac{1}{4}\integral{0}{t}{^{2}t}[\cumul{\overrightarrow{T}^{*}(I_1 I_2)} + 2\,\cumul{I_1 I_2}
+ \cumul{\overleftarrow{T}^{*}(I_1 I_2)}]
= \integral{0}{t}{^{2}t} \cumul{I_1 I_2}\,,\\
\label{eq_412}
\cumul{(\Delta Q)^3} &= \frac{1}{8}\integral{0}{t}{^{3}t}[\cumul{\overrightarrow{T}^{*}(I_1 I_2 I_3)}
+ 3\, \cumul{\overrightarrow{T}^{*}(I_1 I_2)I_3} + 3\, \cumul{I_1\overleftarrow{T}^{*}(I_2 I_3)}
+ \cumul{\overleftarrow{T}^{*}(I_1 I_2 I_3)}]\,,
\end{align}
where the 3rd cumulant may also be expanded using (\ref{eq_405}, \ref{eq_406}) for both time arrows. We note that the cumulants of the second kind involve $\overleftarrow{T}^{*}$-ordered current correlators already present in those of the first kind, and more. However, due to the symmetry between usual and reversed time orderings, the contact terms in the 2nd cumulant (\ref{eq_411}) mutually cancel.\\
\noindent{\bf Wick's rule.} The many--particle state $\langle\cdot\rangle$ is the quasi--free state \cite{lundberg:76} determined by the single--particle density matrix $\rho$. Let $\widehat{A}$ be the second quantization of the single--particle operator $A$. Correlators of second quantized operators can be reduced to the level of first quantization thanks to \textit{Wick's rule}. In particular, with $\rho' := 1-\rho$ we have
\begin{align}
\label{eq_414a}
\cumul{\widehat{A}} &= \langle\widehat{A}\rangle = 0\,, \\
\label{eq_415}
\cumul{\widehat{A}\widehat{B}} &= \operatorname{tr}(\rho A \rho' B)\,, \\
\label{eq_416}
\cumul{\widehat{A}\widehat{B}\widehat{C}} &= \operatorname{tr}(\rho A \rho' B \rho' C) - \operatorname{tr}(\rho A \rho' C \rho B)\,.
\end{align}
The expressions follow from the usual formulation of the rule which involves creation and annihilation operators $\psi^*(a)$, $\psi(b)$ of single--particle states $a$, $b$: Expectations of products of such are computed by way of complete contraction schemes and reduced to just two kinds of contractions, $\langle\psi^*(a)\psi(b)\rangle=\langle b|\rho|a\rangle$, $\langle\psi(b)\psi^*(a)\rangle=\langle b|\rho'|a\rangle$, with further ones vanishing. The second quantization $A\mapsto \widehat{A}$ is defined for rank--one operators $A=|a_1\rangle\langle a_2|$ as
\begin{equation*}
\widehat{A}=\psi^*(a_1)\psi(a_2)-\langle a_1|\rho|a_2\rangle\,
\end{equation*}
and then extended by linearity in $A$. We stress the ``zero-point subtraction'' of $\langle a_1|\rho|a_2\rangle=\operatorname{tr}(\rho A)$, which implies $\langle\widehat{A}\rangle = 0$, but drops out from higher cumulants. The l.h.s. of Eq.~(\ref{eq_415}) gives rise to just one non--vanishing connected contraction scheme and thus equals
\begin{equation*}
\cumul{\psi^*(a_1)\psi(a_2)\psi^*(b_1)\psi(b_2)}= \langle\psi^*(a_1)\psi(b_2)\rangle\langle\psi(a_2)\psi^*(b_1)\rangle=\langle b_2|\rho|a_1\rangle\langle a_2|\rho'|b_1\rangle\,,
\end{equation*}
in agreement with the r.h.s.
The charge and current operators mentioned earlier in this section are meant in second quantization. We will henceforth denote them by $\widehat{Q}$ and $\widehat{I}$. \\
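Eqs.~(\ref{eq_414a}--\ref{eq_416}) can be checked by brute force in a toy system of a few modes, where Fock space is small enough to be built explicitly (a sketch; the Jordan--Wigner matrices and the random observables are illustrative devices only):
\begin{verbatim}
import numpy as np
from functools import reduce

d = 3                                            # single-particle modes
Id2, Z = np.eye(2), np.diag([1.0, -1.0])
S = np.array([[0.0, 1.0], [0.0, 0.0]])           # qubit lowering operator
a = [reduce(np.kron, [Z]*j + [S] + [Id2]*(d-j-1)) for j in range(d)]  # Jordan-Wigner

occ = [0, 2]                                     # occupied modes: rho = projection
rho = np.zeros((d, d)); rho[occ, occ] = 1.0
rhop = np.eye(d) - rho
psi = np.zeros(2**d); psi[0] = 1.0               # Fock vacuum
for j in occ:
    psi = a[j].conj().T @ psi                    # Slater determinant of occupied modes
psi = psi/np.linalg.norm(psi)

def hat(A):                                      # second quantization, zero-point subtracted
    M = sum(A[i, j]*(a[i].conj().T @ a[j]) for i in range(d) for j in range(d))
    return M - np.trace(rho @ A)*np.eye(2**d)

rng = np.random.default_rng(5)
def herm():
    X = rng.standard_normal((d, d)) + 1j*rng.standard_normal((d, d))
    return (X + X.conj().T)/2
A, B, C = herm(), herm(), herm()
ev = lambda M: psi.conj() @ M @ psi              # expectation in the quasi-free state

print(ev(hat(A)))                                                        # 0
print(ev(hat(A)@hat(B)) - np.trace(rho@A@rhop@B))                        # 0
print(ev(hat(A)@hat(B)@hat(C))
      - np.trace(rho@A@rhop@B@rhop@C) + np.trace(rho@A@rhop@C@rho@B))    # 0
\end{verbatim}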
\noindent{\bf GNS space and Schwinger terms.} The reader may skip this item. It is in fact about some fine points which remain without practical consequences. Strictly, $\psi^*(a)$, $\psi(b)$ act on the \textit{GNS space} \cite{lundberg:76} of $\langle\cdot\rangle$ but, as we explain below, one may sometimes pretend it is replaced by Fock space. We only consider the case when $\rho$ is a projection. An operator $A$ admits a second quantization $\widehat{A}$ if $B=[\rho, A]$ is Hilbert-Schmidt, i.e. $\operatorname{tr}(B^{*}B)<\infty$. The traces (\ref{eq_415}, \ref{eq_416}) exist if $A$ and the other observables satisfy that condition. The (first quantized) operators $Q$ and $I$ do so in both models of Sect.~\ref{sec_models}. However, $\operatorname{tr}(\rho I)$ is well-defined only in the model with quadratic dispersion, and $\operatorname{tr}(\rho Q)$ in neither.
For $\rho=0$ the stated condition becomes trivial, and the GNS space is the Fock space. If its operators $\widehat{A}$ are used on another quasi-free state $\rho$, then (\ref{eq_415}, \ref{eq_416}) are still valid if the traces exist, but (\ref{eq_414a}) is to be replaced by $\cumul{\widehat{A}} = \langle\widehat{A}\rangle =\operatorname{tr}(\rho A)$. The difference is in the ``zero-point subtraction'' which may diverge, even for $[\rho, A]$ Hilbert-Schmidt. The point of the GNS space is that $\widehat{A}$ still remains defined there.
On the GNS space we have \cite{lundberg:76}
\begin{equation*}
[\widehat{A}, \widehat{B}] =\widehat{[A,B]} + S(A,B)\cdot 1\,,\qquad S(A,B)= \operatorname{tr}(\rho A \rho' B ) -\operatorname{tr}(\rho B \rho' A )\,,
\label{Sch}
\end{equation*}
where the last term is known as a {\it Schwinger term}. We infer
\begin{gather}\label{Sch1}
\widehat{A}(t)=\widehat{A(t)}+i\int_0^t\mathrm{d}t'\,S(H,A(t'))1\,,\\
\label{Sch2}
\cumul{[\widehat{A}, \widehat{B}]}=S(A,B)\,.
\end{gather}
Upon pretending that the GNS space is just Fock space, we have $[\widehat{A}, \widehat{B}] =\widehat{[A,B]}$ and $\widehat{A}(t)=\widehat{A(t)}$. However, Eq.~(\ref{Sch2}) still holds true, because (\ref{eq_415}) does. The same conclusion is obtained from $\cumul{[\widehat{A}, \widehat{B}]}=\cumul{\widehat{[A,B]}}=\operatorname{tr}(\rho[A,B])$.
Let us comment on the significance of Schwinger terms for the cumulants (\ref{eq_409}, \ref{eq_410}), where $I_i$ is now to be read as $\widehat{I}(t_i)$ (and likewise for $Q_i$). First, it may be replaced by $\widehat{I(t_i)}$, because the difference seen in (\ref{Sch1}) drops out from the results. Second, the contact terms $\cumul{[\widehat{Q(t_1)}, \widehat{I(t_1)}]}$ and $\cumul{[\widehat{Q(t_1)}, \widehat{[Q(t_1),I(t_1)]}]}$ are properly Schwinger terms. Informally however they may be understood in the context of Fock space, as discussed. \\
\noindent{\bf A simple case.} We illustrate the methods by considering a simple example: The 2nd cumulant of the second kind, Eq.~(\ref{eq_411}), for the model with quadratic dispersion and with state (\ref{eq_27}). Traces may be evaluated using the basis (\ref{eq_24}) of LS states. We so obtain
\begin{equation}
\cumul{\widehat{I}_1\widehat{I}_2} = \operatorname{tr}(\rho I_1 \rho' I_2)
= \frac{1}{(2\pi)^2} \integ{\mathbb{R}^{2}}{^{2}k} \matel{1}{\rho I_1}{2}\matel{2}{\rho' I_2}{1}
\label{eq_418}
= \frac{1}{(2\pi)^2} \integ{J\times\mathbb{R}\setminus{J}}{^{2}k} e^{i(E_1-E_2)(t_1-t_2)}\vert\matel{1}{I}{2}\vert^2\,,
\end{equation}
with the shorthand notations $\ket{i}=\ket{\psi_{k_i}}$, $E_i=k_i^2$, and $J=[-\kr, \kl]$. The last equality follows from Eq.~(\ref{eq_403bis}) and the eigenvalue equation $H\ket{i}=E_i\ket{i}$. \\
\noindent{\bf Time integrals.} The long-time limit of (\ref{eq_411}) now calls for
\begin{equation*}
\lim\limits_{t\to\infty}\frac{1}{t}\integral{0}{t}{^{2}t} e^{i(E_1-E_2)(t_1-t_2)} = 2\pi \delta(E_1-E_2)
= \frac{2\pi}{2\vert k_1\vert}(\delta(k_1-k_2) + \delta(k_1+k_2))\,.
\end{equation*}
The first equality is by Eq.~(\ref{eq_37}) below and the second by $E_i = k_i^2$. Only one of the diagonals $k_1=\pm k_2$ intersects the interior of the integration domain $J\times\mathbb{R}\setminus{J}$ in Eq.~(\ref{eq_418}), and in fact just for $k_1=-k_2\in [\kr,\kl]$, see Fig.~\ref{fig_2}. Hence
\begin{figure}[hbtp]
\centering
\input{fig2.pdf_t}
\captiontitle{Integration over $k_1$, $k_2$ in Eq.~(\ref{eq_418})}{The integration domain $J\times\mathbb{R}\setminus{J}$ (shaded) and its intersection with $E_1=E_2$ (diagonals).}
\label{fig_2}
\end{figure}
\begin{equation}
\label{eq_421}
\lim\limits_{t\to\infty}\frac{1}{t}\cumul{(\Delta Q)^2}
=\frac{1}{2\pi}\integral{\kr}{\kl}{k_1}\frac{\vert\matel{1}{I}{-1}\vert^2}{2k_1}\,,
\end{equation}
where $\ket{-1}:=\ket{\psi_{-k_1}}$. Appendix~\ref{sec_temp_distr} collects further long--time limits of distributions.\\
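The convergence behind this limit can be made tangible numerically (a sketch; the Gaussian test function is an arbitrary choice): at finite $t$ the kernel equals $\frac{1}{t}\bigl|\int_0^t e^{ixs}\,\mathrm{d}s\bigr|^{2}=t\,\mathrm{sinc}^{2}(xt/2)$, and smearing it against a test function $f$ approaches $2\pi f(0)$:
\begin{verbatim}
import numpy as np

f = lambda x: np.exp(-x**2)                # smooth test function, f(0) = 1
x = np.arange(-10.0, 10.0, 1e-4)
for t in (10.0, 100.0, 1000.0):
    kern = t*np.sinc(x*t/(2*np.pi))**2     # (1/t)|int_0^t exp(ixs) ds|^2
    print(t, np.sum(kern*f(x))*1e-4)       # tends to 2*pi*f(0) = 6.2832...
\end{verbatim}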
\noindent{\bf Matrix elements of current.} The current $I$ is given in Eq.~(\ref{eq_924}). We compute the relevant matrix element for $k_1>0$ and observe that Eq.~(\ref{eq_21}) distinguishes between the cases $\pm k>0$; however only the expressions for $x>x_0$ matter here because of the support properties of $Q'$ seen in (\ref{eq_25}):
\begin{align}
\matel{1}{I}{-1} &= \matel{1}{pQ'(x)+Q'(x)p}{-1}\nonumber \\
&= \integral{-\infty}{\infty}{x} \overline{t(k_1)} e^{-ik_1 x}(pQ'(x)+Q'(x)p)(e^{-ik_1 x}+r(-k_1)e^{ik_1 x})\nonumber\\
&= 2k_1\overline{t(k_1)}r(-k_1)\integral{-\infty}{\infty}{x}Q'(x)= 2k_1\overline{t(k_1)}r(-k_1)\,,
\label{eq_422}
\end{align}
where the third equality is by partial integration. Hence
\begin{equation}
\label{eq_423}
\vert\matel{1}{I}{-1}\vert^2 = (2k_1)^2 T(k_1^2 ) R(k_1^2 ) = (2k_1)^2 T(k_1^2 )(1-T(k_1^2 ))\,,
\end{equation}
and Eq.~(\ref{eq_413}) follows by substituting $E=k_1^2$ in Eq. (\ref{eq_421}). In the simple case considered we thus confirm the binomial statistics.
Further computations of matrix elements of current may be found in Appendix \ref{sec_matrix_elements}. In particular, we mention
\begin{equation}
\label{eq_424}
\matel{1}{I}{1} = 2k_1 T(k_1^2)\,,\qquad(k_1>0)\,.
\end{equation}
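The computation (\ref{eq_422}) asserts in particular that the matrix element is independent of the detailed profile of $Q'$. A numerical sketch (with the $\delta$-barrier amplitudes $t(k)=2ik/(2ik-\gamma)$, $r=t-1$, and a Gaussian $Q'$ placed to the right of the junction; all choices purely illustrative) confirms the value $2k_1\overline{t(k_1)}r(-k_1)$ by direct quadrature:
\begin{verbatim}
import numpy as np

k, g = 1.3, 0.8                         # momentum and barrier strength (illustrative)
t_amp = 2j*k/(2j*k - g)                 # t(k) for V(x) = g*delta(x), H = p^2 + V
r_amp = t_amp - 1.0                     # r(k) = r(-k) for this even barrier

m, s = 3.0, 0.4                         # Q'(x): unit-mass Gaussian in x > x0 = 0
x = np.linspace(0.2, 6.0, 40001); dx = x[1] - x[0]
Qp  = np.exp(-(x - m)**2/(2*s**2))/(s*np.sqrt(2*np.pi))
Qpp = -(x - m)/s**2*Qp

psi_k  = t_amp*np.exp(1j*k*x)                        # LS state psi_k, region x > x0
psi_m  = np.exp(-1j*k*x) + r_amp*np.exp(1j*k*x)      # LS state psi_{-k}
dpsi_m = -1j*k*np.exp(-1j*k*x) + 1j*k*r_amp*np.exp(1j*k*x)

# (pQ' + Q'p) phi = -i Q'' phi - 2i Q' phi'
matel = np.sum(np.conj(psi_k)*(-1j*Qpp*psi_m - 2j*Qp*dpsi_m))*dx
print(matel, 2*k*np.conj(t_amp)*r_amp)  # agree; |matel|^2 = (2k)^2 T R
\end{verbatim}
Moving or reshaping the Gaussian leaves the value unchanged, as the partial integration in (\ref{eq_422}) predicts.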
\noindent{\bf Matrix elements of charge.} The cumulant (\ref{eq_411}) we just computed is the simplest among (\ref{eq_409}-\ref{eq_412}) in that it does not involve the charge operator $Q$. In preparation for the other cases, it pays to look at the relation between matrix elements of $Q$ and of $I$, still within the model of quadratic dispersion. In view of the support property (\ref{eq_25}) of $Q(x)$, computing its Fourier transform demands a regularization at $x\to+\infty$:
\begin{equation}
\label{eq_425}
\widehat{Q}(k) = \lim\limits_{\varepsilon\downarrow 0}\integral{-\infty}{\infty}{x}Q(x)e^{-i(k-i\varepsilon)x}
=(-i)\lim\limits_{\varepsilon\downarrow 0}\frac{\widehat{Q'}(k)}{k-i\,\varepsilon}
= (-i)\frac{\widehat{Q'}(k)}{k-i\,0}\,.
\end{equation}
The result is a distribution in $k$, and so is $\matel{1}{Q}{2}$ in $k_1,k_2$; whereas $\matel{1}{I}{2}$ is a smooth function of these variables. By (\ref{eq_403ter}) we have the equation
\begin{equation}
\label{eq_425a}
\matel{1}{I}{2}=i(E_1-E_2)\matel{1}{Q}{2}\,,
\end{equation}
which however can not be uniquely solved for $\matel{1}{Q}{2}$, because the distributional equation $xF(x)=0$ admits the non--trivial solutions $F(x)\propto\delta(x)$. In fact, in view of the Sokhatsky-Weierstrass (SW) formula
\begin{equation}\label{ws}
\frac{1}{x-i\,0}-\frac{1}{x+i\,0}=2\pi i \delta(x)\,,
\end{equation}
the general solution is
\begin{equation}
\label{eq_427}
i\matel{1}{Q}{2} = \frac{\matel{1}{I}{2}^{(+)}}{E_1-E_2+i\,0} + \frac{\matel{1}{I}{2}^{(-)}}
{E_1-E_2-i\,0}\,,
\end{equation}
where $\matel{1}{I}{2} = \matel{1}{I}{2}^{(+)}+\matel{1}{I}{2}^{(-)}$ is any split of the matrix element of current. It takes $\matel{1}{Q}{2}$ to make it unique, at least up to terms vanishing for $E_1=E_2$, which may still be shifted between the two terms $\matel{1}{I}{2}^{(\pm)}$.
In summary: An expression for $\matel{1}{I}{2}$ does not entail one for $\matel{1}{Q}{2}$; rather conversely, including the split. Such expressions will be derived in Appendix~\ref{sec_matrix_elements}. An important case is when $k_1\neq k_2$, whence $E_1=E_2$ arises by $k_1=- k_2$; then $\matel{1}{I}{2}^{(-)}=\matel{1}{I}{2}$ and Eq.~(\ref{eq_427}) simply reads
\begin{equation}
\label{eq_507}
\matel{1}{Q}{2} = (-i)\frac{\matel{1}{I}{2}}{E_1-E_2-i\,0}\,.
\end{equation}
Another case is
\begin{equation}
\label{eq_431}
\matel{-1}{I}{-1}^{(-)} = 2k_1 R(k_1^2)\,,\qquad(k_1>0)\,.
\end{equation}
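The SW formula (\ref{ws}) itself admits a quick numerical illustration (a sketch; the test function is arbitrary): for $\varepsilon>0$ one has $1/(x-i\varepsilon)-1/(x+i\varepsilon)=2i\varepsilon/(x^{2}+\varepsilon^{2})$, and after the substitution $x=\varepsilon u$ the smeared difference becomes a well-behaved integral converging to $2\pi f(0)$ (times $i$):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

f = lambda x: np.exp(-x**2)*np.cos(x)   # smooth test function, f(0) = 1
for eps in (1e-1, 1e-2, 1e-3):
    val = quad(lambda u: 2.0/(1 + u**2)*f(eps*u), -np.inf, np.inf)[0]
    print(eps, val)                     # tends to 2*pi; the formula then gives 2*pi*i*f(0)
\end{verbatim}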
\comment{
\begin{itemize}
\item $k_1,k_2>0$. We have
\begin{gather*}
i\matel{1}{Q}{2} = i\overline{t(k_1)}t(k_2)\widehat{Q}(k_1-k_2)
= \overline{t(k_1)}t(k_2)\frac{\widehat{Q'}(k_1-k_2)}{k_1-k_2-i\,0}\,,\\
(k_1+k_2)(k_1-k_2-i\,0)=E_1-E_2-i\,0\,,
\end{gather*}
and hence $\matel{1}{I}{2}^{(-)}=\matel{1}{I}{2}$, $\matel{1}{I}{2}^{(+)}=0$. The same conclusion holds when $k_1$ and $k_2$ have opposite signs, or more generally when $k_1$ and $k_2$ are in disjoint intervals. In fact, this latter case reduces to the previous one, since $E_1-E_2=0$ may only occur when $k_1$ and $k_2$ have opposite signs.
\item $k_1,k_2<0$. A straightforward computation yields
\begin{align*}
i\matel{1}{Q}{2}
=\frac{\widehat{Q'}(k_1-k_2)}{k_1-k_2 - i\,0}
+\overline{r(k_1)}\frac{\widehat{Q'}(-k_1-k_2)}{-k_1-k_2- i\,0}
+r(k_2)\frac{\widehat{Q'}(k_1+k_2)}{k_1+k_2- i\,0}+
\overline{r(k_1)}r(k_2)\frac{\widehat{Q'}(k_2-k_1)}{k_2-k_1 - i\,0}\,.
\end{align*}
The two middle terms have non--vanishing denominators in the stated range and may thus be linked to either term $\matel{1}{I}{2}^{(\pm)}$. Using $(k_1+k_2)(k_1-k_2\mp i\,0)=E_1-E_2\pm i\,0$ we may choose to set
\begin{equation}
\label{eq_430}
\matel{1}{I}{2}^{(-)} = -(k_1+k_2)\overline{r(k_1)}r(k_2)\widehat{Q'}(k_2-k_1)\,.
\end{equation}
\end{itemize}
}
\section{Derivations}
\label{sec_derivations}
\subsection{The quadratic dispersion model}
\label{subsec_derivations_quadratic}
Using the methods introduced in the previous section we shall derive the asymptotic binomial distribution (\ref{eq_413}, \ref{eq_414}) for both generating functions, $\widetilde{\chi}$ and $\chi$. We shall do so first for the model with quadratic dispersion relation of Section~\ref{subsec_quadratic_model}. It will become evident that contact terms are crucial. In other words Matthews' time--ordering can not be replaced by ordinary time--ordering, except in limiting cases, if the correct result is to be found. Two such cases, namely that of a large detector and of a linear dispersion, will be given independent treatments in the following sections.\\
\noindent{\bf 2nd cumulant of the first kind.} The cumulant is given in Eq.~(\ref{eq_409}) as
\begin{equation*}
\cumul{(\Delta Q)^2} = \integral{0}{t}{^{2}t}\cumul{\overleftarrow{T}^{*}(\widehat{I}_1 \widehat{I}_2)}
= \mathrm{A} + \mathrm{B}
\end{equation*}
with
\begin{align*}
\mathrm{A} &= \integral{0}{t}{^{2}t}\cumul{\overleftarrow{T}(\widehat{I}_1 \widehat{I}_2)}=2\iintegral{0}{t}{t_1}{0}{t_1}{t_2} \cumul{\widehat{I}_1 \widehat{I}_2} &\textnormal{(main term)}\,,\\
\mathrm{B} &= \integral{0}{t}{t_1}\cumul{[\widehat{Q}_1,\widehat{I}_1]} &\textnormal{(contact term)}\,.
\end{align*}
The connected correlators are computed by Wick's rule (\ref{eq_415}) and the resulting traces evaluated in the basis of LS states (\ref{eq_21}). We obtain
\begin{align}
\label{eq_504}
\mathrm{A} &= \frac{2}{(2\pi)^2} \integ{J\times\mathbb{R}\setminus{J}}{^{2}k} \iintegral{0}{t}{t_1}{0}{t_1}{t_2}
e^{i(E_1-E_2)(t_1-t_2)}\vert\matel{1}{I}{2}\vert^2\,,\\
\label{eq_505}
\mathrm{B} &= \frac{t}{(2\pi)^2} \integ{J\times\mathbb{R}\setminus{J}}{^{2}k}
(\matel{1}{Q}{2}\matel{2}{I}{1}-\matel{1}{I}{2}\matel{2}{Q}{1})\,.
\end{align}
In relation to the overview above we shall next (i) discuss time integrals and (ii) express matrix elements of charge in terms of those of current. We will do likewise later for all cumulants. In the present case the first item concerns only the main term, the second only the contact term.
(i) The asymptotic long--time behavior of the main term is given by Eq.~(\ref{eq_36}) with $x=E_1-E_2$ and the substitution $t_2\mapsto t_1-t_2$:
\begin{equation}
\label{eq_506}
\frac{1}{t}\iintegral{0}{t}{t_1}{0}{t_1}{t_2} e^{i(E_1-E_2)(t_1-t_2)} \;\xrightarrow[t\to+\infty]{}\;
\frac{i}{E_1-E_2+i\,0}\,,
\end{equation}
as distributions in $k_1$ and $k_2$. In Eq.~(\ref{eq_504}) $\matel{1}{I}{2}$ then qualifies as a test function, being essentially the Fourier transform of the compactly supported function $Q'(x)$.
(ii) Within the integration domain (\ref{eq_505}) $\matel{1}{Q}{2}$ is given by Eq.~(\ref{eq_507}), and $\matel{2}{Q}{1}$ is then obtained by exchanging $k_1$ and $k_2$, or by complex conjugation.
Collecting terms we so obtain
\begin{align*}
\lim\limits_{t\to+\infty}\frac{1}{t}\cumul{(\Delta Q)^2}
&= \frac{1}{(2\pi)^2} \integ{J\times\mathbb{R}\setminus{J}}{^{2}k}
(\frac{2i}{E_1-E_2+i\,0}-\frac{i}{E_1-E_2-i\,0}+\frac{i}{E_2-E_1-i\,0})\vert\matel{1}{I}{2}\vert^2\nonumber\\
&= \frac{1}{2\pi}\integral{k_\textsc{r}}{k_\textsc{l}}{k_1}\frac{\vert\matel{1}{I}{-1}\vert^2}{2k_1}
= \frac{1}{2\pi}\integral{\mu_\textsc{r}}{\mu_\textsc{l}}{E} T(E)(1-T(E))\,,
\end{align*}
as claimed. The second equality is by the SW formula (\ref{ws}), which reduces the bracket to $i/(E_1-E_2+i\,0)-i/(E_1-E_2-i\,0)=2\pi\delta(E_1-E_2)$, together with the earlier remark restricting $k_1$ to $[\kr,\kl]$ (see Fig.~\ref{fig_2}); the last one is by Eq.~(\ref{eq_423}) with $E=k_1^2$.\\
\noindent{\bf 3rd cumulant of the first kind.} Though longer, the computation of the 3rd cumulant retains the same two key ingredients: (i) the evaluation of time integrals and (ii) the expression of matrix elements of charge in terms of those of current. By Eq.~(\ref{eq_410}) we have
\begin{equation*}
\cumul{(\Delta Q)^3} = \integral{0}{t}{^{3}t}\cumul{\overleftarrow{T}^{*}(\widehat{I}_1 \widehat{I}_2 \widehat{I}_3)}
= \mathrm{A}+\mathrm{B}+\mathrm{C}
\end{equation*}
with
\begin{align*}
\mathrm{A} &= \integral{0}{t}{^{3}t}\cumul{\overleftarrow{T}(\widehat{I}_1 \widehat{I}_2 \widehat{I}_3)}=6\iiintegral{0}{t}{t_{1}}{0}{t_{1}}{t_{2}}{0}{t_{2}}{t_{3}}\cumul{\widehat{I}_1 \widehat{I}_2 \widehat{I}_3}
&\textnormal{(main term)}\,,\\
\mathrm{B} &= 3\integral{0}{t}{^{2}t}\cumul{\overleftarrow{T}(\widehat{I}_1[\widehat{Q}_2,\widehat{I}_2])}=3\iintegral{0}{t}{t_1}{0}{t_1}{t_2} \cumul{
\widehat{I}_1[\widehat{Q}_2,\widehat{I}_2]+[\widehat{Q}_1,\widehat{I}_1]\widehat{I}_2}
&\textnormal{(1st contact term)}\,,\\
\mathrm{C} &= \integral{0}{t}{t_1}\cumul{[\widehat{Q}_1,[\widehat{Q}_1,\widehat{I}_1]]} &\textnormal{(2nd contact term)}\,.
\end{align*}
Here Wick's rule (\ref{eq_416}) is appropriate, whence each term splits into two.\\
\noindent{\bf (a) Main term.} (i) We get
\begin{align*}
\cumul{\widehat{I}_1\widehat{I}_2\widehat{I}_3} =&\, \operatorname{tr}(\rho I_1\rho' I_2\rho' I_3)-\operatorname{tr}(\rho I_1\rho' I_3\rho I_2)\nonumber\\
=&\, \frac{1}{(2\pi)^3}\integ{J\times\mathbb{R}\setminus{J}\times\mathbb{R}\setminus{J}}{^{3}k}
e^{i(E_1-E_2)t_1}e^{i(E_2-E_3)t_2}e^{i(E_3-E_1)t_3}\matel{1}{I}{2}\matel{2}{I}{3}\matel{3}{I}{1}\nonumber\\
&\,-\frac{1}{(2\pi)^3}\integ{J\times\mathbb{R}\setminus{J}\times J}{^{3}k}
e^{i(E_1-E_2)t_1}e^{i(E_2-E_3)t_3}e^{i(E_3-E_1)t_2}\matel{1}{I}{2}\matel{2}{I}{3}\matel{3}{I}{1}\,.
\end{align*}
The exponentials of the second term are obtained from those of the first one by exchanging $E_3-E_1$ by $E_2-E_3$, which leaves the sum $E_2-E_1$ unaffected. This symbolic {\it transformation rule} may be deduced from Wick's rule (\ref{eq_416}). For possibly distinct observables and for the terms as a whole it reads:
\begin{itemize}
\item[-] exchange $\matel{3}{\cdot}{1}$ by $\matel{2}{\cdot}{3}$;
\item[-] exchange $E_3-E_1$ by $E_2-E_3$;
\item[-] in the integration domain, replace $k_3\in \mathbb{R}\setminus{J}$ by $k_3\in J$;
\item[-] apply an overall minus sign.
\end{itemize}
In the present case the first item leaves the integrand unchanged. We shall denote the transformation by $\mathcal{T}_{(23)}$, as it essentially arises by exchanging positions $2$ and $3$ in the product of operators.
The time integrals are given by Eq.~(\ref{eq_38}) with $x=E_1-E_2$ and $y=E_2-E_3$ resp. $y=E_3-E_1$. This yields
\begin{equation*}
\lim\limits_{t\to+\infty}\frac{1}{t}
\integral{0}{t}{^{3}t}\cumul{\overleftarrow{T}(\widehat{I}_1 \widehat{I}_2 \widehat{I}_3)}
= \mathrm{A_I}+\mathrm{A_{II}}
\end{equation*}
with
\begin{equation}\label{eq_515}
\begin{aligned}
\mathrm{A_I} &= -\frac{6}{(2\pi)^3}\integ{J\times\mathbb{R}\setminus{J}\times\mathbb{R}\setminus{J}}{^{3}k}
\frac{\matel{1}{I}{2}\matel{2}{I}{3}\matel{3}{I}{1}}{(E_1-E_2+i\,0)(E_1-E_3+i\,0)}\,,\\
\mathrm{A_{II}} &= \mathcal{T}_{(23)}[\mathrm{A_I}]
= \frac{6}{(2\pi)^3}\integ{J\times\mathbb{R}\setminus{J}\times J}{^{3}k}
\frac{\matel{1}{I}{2}\matel{2}{I}{3}\matel{3}{I}{1}}{(E_1-E_2+i\,0)(E_3-E_2+i\,0)}\,.
\end{aligned}
\end{equation}
\noindent{\bf(b) Contact terms.} (i) The long--time behavior of the 1st contact term is given by Eq.~(\ref{eq_36}), while the integrand of the 2nd is time--independent. Expanding the commutators we obtain
\begin{align*}
\lim\limits_{t\to+\infty}\frac{3}{t}\,
\integral{0}{t}{^{2}t}\cumul{\overleftarrow{T}(\widehat{I}_1[\widehat{Q}_2,\widehat{I}_2])}
&= \mathrm{B_I}+\mathrm{B_{II}}\,,\\
\lim\limits_{t\to+\infty}\frac{1}{t}
\integral{0}{t}{t_1}\cumul{[\widehat{Q}_1,[\widehat{Q}_1,\widehat{I}_1]]}
&= \mathrm{C_I}+\mathrm{C_{II}}\,,
\end{align*}
with
\begin{multline*}
\mathrm{B_I} = \frac{3}{(2\pi)^3} \integ{J\times\mathbb{R}\setminus{J}\times\mathbb{R}\setminus{J}}{^{3}k} i
\left(\frac{\matel{1}{I}{2}\matel{2}{Q}{3}\matel{3}{I}{1}-\matel{1}{I}{2}\matel{2}{I}{3}\matel{3}{Q}{1}}{E_1-E_2+i\,0}
\right.\\
\left.
+\frac{\matel{1}{Q}{2}\matel{2}{I}{3}\matel{3}{I}{1}-\matel{1}{I}{2}\matel{2}{Q}{3}\matel{3}{I}{1}}{E_1-E_3+i\,0} \right)\,,
\end{multline*}
\begin{equation*}
\mathrm{C_I} = \frac{1}{(2\pi)^3} \integ{J\times\mathbb{R}\setminus{J}\times\mathbb{R}\setminus{J}}{^{3}k}
(\matel{1}{Q}{2}\matel{2}{Q}{3}\matel{3}{I}{1}-2\matel{1}{Q}{2}\matel{2}{I}{3}\matel{3}{Q}{1}
+\matel{1}{I}{2}\matel{2}{Q}{3}\matel{3}{Q}{1})\,.
\end{equation*}
and $\mathrm{B_{II}}=\mathcal{T}_{(23)}[\mathrm{B_I}]$, $\mathrm{C_{II}}=\mathcal{T}_{(23)}[\mathrm{C_I}]$.\\
(ii) We distinguish between matrix elements $\matel{i}{Q}{j}$ as to whether $k_i$ and $k_j$ belong to the same or to different sets among $J$ and $\mathbb{R}\setminus{J}$. In the first instance Eq.~(\ref{eq_507}) applies, whereas in the second its generalization (\ref{eq_427}) is required. For example the first two terms in the integrand of $\mathrm{B_I}$ become
\begin{align*}
\sum\limits_{s=\pm}\left(\frac{\matel{1}{I}{2}\matel{2}{I}{3}^{(s)}\matel{3}{I}{1}}{(E_1-E_2+i\,0)(E_2-E_3+s\,i\,0)}
-\frac{\matel{1}{I}{2}\matel{2}{I}{3}^{(s)}\matel{3}{I}{1}}{(E_1-E_2+i\,0)(E_3-E_1-i\,0)}\right)\,,
\end{align*}
where in the second term we used the identity $\matel{2}{I}{3} = \matel{2}{I}{3}^{(+)}+\matel{2}{I}{3}^{(-)}$. In view of the integration domains the splitting more generally affects $\matel{2}{I}{3}$ in $\mathrm{\alpha_I}$ and $\matel{3}{I}{1}$ in $\mathrm{\alpha_{II}}$, ($\mathrm{\alpha}=\mathrm{A,B,C}$). As a result each term $\mathrm{\alpha_{I,II}}$ may be written as
\begin{align*}
\mathrm{\alpha_I} &= \frac{1}{(2\pi)^3} \integ{J\times\mathbb{R}\setminus{J}\times\mathbb{R}\setminus{J}}{^{3}k}
\sum\limits_{s=\pm}\mathrm{\alpha_I}^{(s)}(E_3-E_1, E_2-E_3)\matel{1}{I}{2}\matel{2}{I}{3}^{(s)}\matel{3}{I}{1}\,,\\
\mathrm{\alpha_{II}} &= \frac{1}{(2\pi)^3} \integ{J\times\mathbb{R}\setminus{J}\times J}{^{3}k}
\sum\limits_{s=\pm}\mathrm{\alpha_{II}}^{(s)}(E_3-E_1, E_2-E_3)\matel{1}{I}{2}\matel{2}{I}{3}\matel{3}{I}{1}^{(s)}
\end{align*}
for some distributions $\mathrm{\alpha_{I,II}}^{(\pm)}$. We remark that $\mathrm{\alpha_{II}}^{(\pm)}(E_3-E_1, E_2-E_3)=-\mathrm{\alpha_I}^{(\pm)}(E_2-E_3, E_3-E_1)$, because the matrix elements carrying the superscript $(s)$ in the two cases are also those exchanged by the transformation rule $\mathcal{T}_{(23)}$. Moreover, the dependence of $\mathrm{\alpha_{I}}^{(s)}$ on $s$ is of the form
\begin{equation}\label{form1}
\mathrm{\alpha_I}^{(\pm)}(E_3-E_1,E_2-E_3)
= \frac{\widehat{\mathrm{\alpha}}_\mathrm{I}(E_3-E_1,E_2-E_3)}{E_1-E_3+i\,0}
+ \frac{\widecheck{\mathrm{\alpha}}_\mathrm{I}(E_3-E_1,E_2-E_3)}{E_2-E_3\pm i\,0}\,,
\end{equation}
where $\widehat{\mathrm{\alpha}}_\mathrm{I}$ and $\widecheck{\mathrm{\alpha}}_\mathrm{I}$ ($\mathrm{\alpha =A,B,C}$) are as follows:
\begin{align}
\widehat{\mathrm{A}}_\mathrm{I}&=-\frac{6}{E_1-E_2+i\,0}\,, &\widecheck{\mathrm{A}}_\mathrm{I} &= 0\,,\nonumber\\
\widehat{\mathrm{B}}_\mathrm{I}&=\frac{3}{E_1-E_2+i\,0}+\frac{3}{E_1-E_2-i\,0}\,, &\quad
\widecheck{\mathrm{B}}_\mathrm{I} &= \frac{3}{E_1-E_2+i\,0}-\frac{3}{E_1-E_3+i\,0}\,,\label{tab}\\
\widehat{\mathrm{C}}_\mathrm{I}&=-\frac{2}{E_1-E_2-i\,0}\,,&
\widecheck{\mathrm{C}}_\mathrm{I} &= -\frac{1}{E_1-E_2-i\,0}+\frac{1}{E_1-E_3+i\,0}\,.\nonumber
\end{align}
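As a consistency check of the first row: with $\widecheck{\mathrm{A}}_\mathrm{I}=0$, the form (\ref{form1}) yields the $s$--independent kernel
\begin{equation*}
\mathrm{A_I}^{(\pm)}=-\frac{6}{(E_1-E_2+i\,0)(E_1-E_3+i\,0)}\,,
\end{equation*}
so that the sum over $s$ merely recombines $\matel{2}{I}{3}^{(+)}+\matel{2}{I}{3}^{(-)}=\matel{2}{I}{3}$ and reproduces the expression for $\mathrm{A_I}$ given above.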
The claim we are heading to is
\begin{equation}
\label{eq_519}
\mathrm{A_I}+\mathrm{B_I}+\mathrm{C_I}=\frac{1}{2 \pi} \integral{\mur}{\mul}{E} T(E)R(E)^2\,,
\qquad \mathrm{A_{II}}+\mathrm{B_{II}}+\mathrm{C_{II}}=-\frac{1}{2 \pi} \integral{\mur}{\mul}{E} T(E)^2 R(E)\,,
\end{equation}
which leads to binomial statistics (\ref{eq_414}) in view of $TR^2-T^2 R = T(1-T)(1-2T)$. To establish it, we observe that the sum,
\begin{equation}
\label{eq_520}
\mathrm{A_I}+\mathrm{B_I}+\mathrm{C_I}
= \frac{1}{(2\pi)^3} \integ{J\times\mathbb{R}\setminus{J}\times\mathbb{R}\setminus{J}}{^{3}k}
\sum\limits_{s=\pm}\mathrm{\Delta_I}^{(s)}\matel{1}{I}{2}\matel{2}{I}{3}^{(s)}\matel{3}{I}{1}\,,
\end{equation}
likewise involves distributions $\mathrm{\Delta_I}^{(\pm)}$ of the form (\ref{form1}) with
\begin{equation*}
\widehat{\mathrm{\Delta}}_\mathrm{I}=2\pi i\delta(E_1-E_2)-\frac{2}{E_1-E_2+i\,0}\,,\qquad
\widecheck{\mathrm{\Delta}}_\mathrm{I}=-2\pi i\delta(E_1-E_2)+\frac{2(E_2-E_3)}{(E_1-E_2+i\,0)(E_1-E_3+i\,0)}\,.
\end{equation*}
This is seen by summing terms within the columns of table (\ref{tab}) and by using the SW formula (\ref{ws}). The second term of $\widecheck{\mathrm{\Delta}}_\mathrm{I}$ is a distribution with poles at $E_2-i\,0$, $E_3-i\,0$ not pinching the $E_1$--axis. It thus vanishes to first order at $E_2=E_3$ and cancels in $\mathrm{\Delta_I}^{(\pm)}$ against the second term of $\widehat{\mathrm{\Delta}}_\mathrm{I}$. We are thus left with
\begin{align}\label{eq_521}
\mathrm{\Delta_I}^{(\pm)}
= 2\pi i\,\delta(E_1-E_2)\left(\frac{1}{E_2-E_3+i\,0}-\frac{1}{E_2-E_3\pm i\,0}\right)\,,
\end{align}
and we conclude that $\mathrm{\Delta_I}^{(+)}=0$ and $\mathrm{\Delta_I}^{(-)}=(2\pi)^2\delta(E_1-E_2)\delta(E_2-E_3)$. The conditions $E_1=E_2$ and $E_2=E_3$ are satisfied along the diagonals $k_1=\pm k_2$ resp.~$k_2=\pm k_3$. That happens jointly and within the integration domain only for $k_1=-k_2=-k_3$ with $k_1\in [\kr,\kl]$, whence
\begin{equation*}
\mathrm{A_I}+\mathrm{B_I}+\mathrm{C_I}
= \frac{1}{2\pi}\integral{\kr}{\kl}{k_1}\frac{\vert\matel{1}{I}{-1}\vert^2\matel{-1}{I}{-1}^{(-)}}{(2k_1)^2}
= \frac{1}{2 \pi} \integral{\mur}{\mul}{E} T(E)R(E)^2\,,
\end{equation*}
as claimed. The last equality is by Eqs.~(\ref{eq_423}, \ref{eq_431}) with $E=k_1^2$.
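For the record, the $(2k_1)^{-2}$ arises from the Jacobians of the energy delta functions once restricted to that branch ($k_2,k_3<0$, $E_i=k_i^2$):
\begin{equation*}
(2\pi)^2\,\delta(E_1-E_2)\,\delta(E_2-E_3)
= (2\pi)^2\,\frac{\delta(k_2+k_1)}{2k_1}\,\frac{\delta(k_3-k_2)}{2k_1}\,;
\end{equation*}
the factor $(2\pi)^2$ reduces the prefactor $(2\pi)^{-3}$ to $(2\pi)^{-1}$, while $\matel{1}{I}{2}\matel{3}{I}{1}$ becomes $\vert\matel{1}{I}{-1}\vert^2$ by self-adjointness of the current.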
Similarly, by $\mathrm{\Delta_{II}}^{(\pm)}(E_3-E_1,E_2-E_3)=-\mathrm{\Delta_I^{(\pm)}}(E_2-E_3,E_3-E_1)$, we obtain $\mathrm{\Delta_{II}}^{(+)}=0$ and $\mathrm{\Delta_{II}}^{(-)}=-(2\pi)^2\delta(E_1-E_2)\delta(E_1-E_3)$. Hence
\begin{equation*}
\mathrm{A_{II}}+\mathrm{B_{II}}+\mathrm{C_{II}}
= -\frac{1}{2\pi}\integral{\kr}{\kl}{k_1}\frac{\vert\matel{1}{I}{-1}\vert^2\matel{1}{I}{1}^{(-)}}{(2k_1)^2}
= -\frac{1}{2 \pi} \integral{\mur}{\mul}{E} T(E)^2 R(E)\,,
\end{equation*}
where the last equality follows by Eqs.~(\ref{eq_423}, \ref{eq_424}) with $E=k_1^2$.\\
\noindent{\bf 3rd cumulant of the second kind.} The cumulant is given in Eq.~(\ref{eq_412}) as
\begin{equation}
\label{eq_524}
\cumul{(\Delta Q)^3} = \frac{1}{8}\integral{0}{t}{^{3}t}\bigl(
\cumul{\overrightarrow{T}^{*}(\widehat{I}_1 \widehat{I}_2 \widehat{I}_3)}
+ 3\, \cumul{\overrightarrow{T}^{*}(\widehat{I}_1 \widehat{I}_2)\widehat{I}_3}
+ 3\, \cumul{\widehat{I}_1\overleftarrow{T}^{*}(\widehat{I}_2 \widehat{I}_3)}
+ \cumul{\overleftarrow{T}^{*}(\widehat{I}_1 \widehat{I}_2 \widehat{I}_3)}\bigr)\,.
\end{equation}
A preliminary observation is useful. By $\overline{\langle\widehat{A}\rangle}=\langle\widehat{A}^*\rangle$ we have $\overline{\langle\widehat{A}\widehat{B}\rangle}=\langle\widehat{B}^*\widehat{A}^*\rangle$, $\overline{\langle\overleftarrow{T}(\widehat{A}(t_1)\widehat{B}(t_2))\rangle}=\langle\overrightarrow{T}(\widehat{A}(t_1)^*\widehat{B}(t_2)^*)\rangle$, and likewise for higher products, $T^*$--ordered products, and cumulants. Given that in the above expression the currents are self-adjoint and the $t_i$'s dummy variables, the two extreme and the two middle terms are so related. The missing item for establishing binomial statistics here is thus just
\begin{equation}
\label{eq_525}
\lim\limits_{t\to +\infty}\frac{1}{t}\integral{0}{t}{^{3}t}
\cumul{\widehat{I}_1\overleftarrow{T}^{*}(\widehat{I}_2 \widehat{I}_3)}
=\frac{1}{2 \pi} \integral{\mur}{\mul}{E} T(E)\left( 1-T(E)\right)\left( 1-2T(E)\right)\,.
\end{equation}
The computation is similar to that of the 3rd cumulant of the first kind, whence we refer to it for more details. By Eq.~(\ref{eq_405}) we have
\begin{equation*}
\integral{0}{t}{^{3}t}\cumul{\widehat{I}_1\overleftarrow{T}^{*}(\widehat{I}_2 \widehat{I}_3)} = \mathrm{D}+\mathrm{E}
\end{equation*}
with
\begin{equation}\label{eq_527}
\begin{aligned}
\mathrm{D} &= \integral{0}{t}{^{3}t}\cumul{\widehat{I}_1\overleftarrow{T}(\widehat{I}_2 \widehat{I}_3)}
&\textnormal{(main term)}\,,\\
\mathrm{E} &= \integral{0}{t}{^{2}t}\cumul{\widehat{I}_1 [\widehat{Q}_2,\widehat{I}_2]}
&\textnormal{(contact term)}\,.
\end{aligned}
\end{equation}
We apply Wick's rule (\ref{eq_416}) to both terms. The long--time behavior of the main term is extracted by Eq.~(\ref{eq_44}); that of the contact term by Eq.~(\ref{eq_37}). Expanding the commutators we so obtain
\begin{align*}
\lim\limits_{t\to+\infty}\frac{1}{t}
\integral{0}{t}{^{3}t}\cumul{\widehat{I}_1\overleftarrow{T}(\widehat{I}_2 \widehat{I}_3)}
&= \mathrm{D_I}+\mathrm{D_{II}}\,,\\
\lim\limits_{t\to+\infty}\frac{1}{t}
\integral{0}{t}{^{2}t}\cumul{\widehat{I}_1 [\widehat{Q}_2,\widehat{I}_2]} &=\mathrm{E_I}+\mathrm{E_{II}}\,,
\end{align*}
with
\begin{align}
\label{eq_531}
\mathrm{D_I} &= \frac{2}{(2\pi)^3}\integ{J\times\mathbb{R}\setminus{J}\times\mathbb{R}\setminus{J}}{^{3}k}
2\pi\,i\,\delta(E_1-E_2)\frac{\matel{1}{I}{2}\matel{2}{I}{3}\matel{3}{I}{1}}{E_2-E_3+i\,0}\,,\\
\nonumber
\mathrm{E_I} &= \frac{1}{(2\pi)^3}\integ{J\times\mathbb{R}\setminus{J}\times\mathbb{R}\setminus{J}}{^{3}k}
2\pi\delta(E_1-E_2)(\matel{1}{I}{2}\matel{2}{Q}{3}\matel{3}{I}{1}-\matel{1}{I}{2}\matel{2}{I}{3}\matel{3}{Q}{1})\,.
\end{align}
$\mathrm{D_{II}}$ is obtained from $\mathrm{D_I}$ by the rule $\mathcal{T}_{(23)}$ introduced in relation with the previous cumulant. Likewise for $\mathrm{E_{II}}$ and $\mathrm{E_I}$. In analogy with the claim (\ref{eq_519}) made there, the present one is
\begin{equation}
\label{eq_532}
\mathrm{D_I}+\mathrm{E_I}=\frac{1}{2 \pi} \integral{\mur}{\mul}{E} T(E)R(E)^2\,,
\qquad \mathrm{D_{II}}+\mathrm{E_{II}}=-\frac{1}{2 \pi} \integral{\mur}{\mul}{E} T(E)^2 R(E)\,.
\end{equation}
By the same steps as those leading to Eq.~(\ref{eq_520}), we find
\begin{equation*}
\mathrm{D_I}+\mathrm{E_I} = \frac{1}{(2\pi)^3} \integ{J\times\mathbb{R}\setminus{J}\times\mathbb{R}\setminus{J}}{^{3}k}
\sum\limits_{s=\pm}\mathrm{\Gamma_I}^{(s)}(k_1,k_2,k_3)\matel{1}{I}{2}\matel{2}{I}{3}^{(s)}\matel{3}{I}{1}\,,
\end{equation*}
with the distributions
\begin{equation*}
\mathrm{\Gamma_I}^{(\pm)} = 2\pi i\,\delta(E_1-E_2)\left(\frac{1}{E_2-E_3+i\,0}-\frac{1}{E_2-E_3\pm i\,0}\right)
=\mathrm{\Delta_I}^{(\pm)}\,.
\end{equation*}
Hence this case reduces to the 3rd cumulant of the first kind (\ref{eq_521}), which establishes the claims~(\ref{eq_532}).
\subsection{The limit of a large detector}
\label{subsec_smeared_proj}
We will consider the situation of a detector extending over a region much larger than the Fermi wavelength. Clearly, the binomial distribution persists, this situation being a limiting case of the one dealt with before. The points to be made, though, are (i) that the contact terms in Eqs.~(\ref{eq_409}-\ref{eq_412}) vanish in the limit; put differently, Matthews' time--ordered correlators reduce to ordinary time--ordered correlators, which alone account for the binomial distribution; and (ii) that we provide an independent derivation of that latter fact. We shall analyze the cumulants separately, though in very similar manners. The values of some integrals used along the way are collected at the end of the section.
The large detector is modeled by means of scaling. Let $Q_0(x)$ be a fixed function satisfying (\ref{eq_25}). We choose the profile of the detector to be given by the function
\begin{equation*}
Q(x)= Q_{0}(x/l)\,,
\end{equation*}
which for $l\ge 1$ retains that property, and consider it in the limit $l\to\infty$. The scaling implies
\begin{align}
\label{eq_141}
Q'(x) = l^{-1} Q_{0}'(x/l)\,,\qquad
\widehat{Q'}(k) = \widehat{Q_{0}'}(lk)\,,\qquad
\widehat{Q'^{2}}(k)=l^{-1}\widehat{(Q_{0}')^{2}}(lk)\, .
\end{align}
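These relations follow by substituting $y=x/l$ in the Fourier integral; as a one-line check (with the convention $\widehat{f}(k)=\integral{}{}{x}e^{-ikx}f(x)$ assumed here, though any convention gives the same scaling),
\begin{equation*}
\widehat{Q'}(k)=\integral{}{}{x}e^{-ikx}\,l^{-1}Q_{0}'(x/l)=\integral{}{}{y}e^{-i(lk)y}\,Q_{0}'(y)=\widehat{Q_{0}'}(lk)\,,
\end{equation*}
while $Q'(x)^{2}=l^{-2}(Q_{0}')^{2}(x/l)$ retains one inverse power of $l$ after the same substitution.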
\noindent{\bf 2nd cumulant of the first kind.}
(i) We first show that the contact term vanishes in the limit $l\to\infty$:
\begin{equation*}
\lim\limits_{t\to\infty}\frac{1}{t}\integral{0}{t}{t_1} \cumul{[\widehat{Q}_1,\widehat{I}_1]}\;\xrightarrow[l\to+\infty]{}\;0\,.
\end{equation*}
The limit $t\to\infty$ is superfluous, since $\cumul{[\widehat{Q}_1,\widehat{I}_1]}$ is independent of $t_1$ and in fact by (\ref{eq_925}) equal to
\begin{equation*}
\cumul{\widehat{[Q,I]}}=2i\operatorname{tr}(\rho Q'(x)^{2})=\frac{2i}{2\pi}\integral{-\kr}{\kl}{k_1}\matel{1}{Q'^2}{1}\,.
\end{equation*}
Using (\ref{eq_B2}, \ref{eq_B3}) for $Q'^2$ instead of $Q$, the matrix element is seen to be a linear combination of $\widehat{Q'^2}(k)$ for $k=0,\pm 2k_1$. They are of order $O(l^{-1})$ by (\ref{eq_141}), proving the first claim.\\
(ii) Let us now come to the main term:
\begin{equation}
\label{eq_149}
\lim\limits_{t\to\infty}\frac{1}{t}\integral{0}{t}{^{2}t}\cumul{\overleftarrow{T}(\widehat{I}_1 \widehat{I}_2)} = \frac{2i}{(2 \pi)^{2}}
\integ{J\times\mathbb{R}\setminus{J}}{^{2}k} \frac{\vert\matel{1}{I}{2}\vert^{2}}{E_{1}-E_{2}+i\,0} \;\xrightarrow[l\to+\infty]{}\;
\frac{1}{2 \pi} \integral{\mur}{\mul}{E} T(E)\,(1-T(E))
\,.
\end{equation}
The equality was shown in (\ref{eq_504}, \ref{eq_506}), whereas the limit is the second claim being made here.
The regularization $+i\,0$ of the denominator only matters when $E_1=E_2$, i.e. on the diagonals $k_1=\pm k_2$, and, once restricted to the integration domain, only for $k_1=-k_2\in [\kr,\kl]$ (see Fig.~\ref{fig_2}). Moreover, the matrix element $\matel{1}{I}{2}$ is a linear combination of $\widehat{Q'}(\pm k_1 \pm k_2)$. In the limit $l\to\infty$ their supports concentrate by Eq.~(\ref{eq_141}) near the same diagonals, which by the same token get restricted to part of just one. This allows us to:
\begin{itemize}
\item[-] Use the factorization $E_{1}-E_{2}+i\,0 = (k_{1}-k_{2})(k_{1}+k_{2}+i\,0)$, as appropriate for $k_1>0$, $k_2<0$.
\item[-] Select the corresponding expression (\ref{me+-}) for $\matel{i}{I}{j}$ from Appendix~\ref{sec_matrix_elements}; and therein neglect any terms vanishing on that diagonal. Hence, effectively,
\begin{align}
\label{eq_149a}
\matel{1}{I}{2}=(k_1-k_2)\overline{t(k_1)}r(k_2)\widehat{Q'}(k_1+k_2)\,.
\end{align}
\end{itemize}
The integrand of Eq.~(\ref{eq_149}) so becomes
\begin{equation*}
(k_{1}-k_{2})T(E_{1}) R(E_{2})\frac{|\widehat{Q'}(k_1+k_2)|^2}{k_{1}+k_{2}+i\,0}\,.
\end{equation*}
The last factor depends on $l$ in the way seen in Eq.~(\ref{eq_41}) for $\rho(x)=|\widehat{Q_{0}'}(x)|^2=\widehat{Q_{0}'}(-x)\widehat{Q_{0}'}(x)$. It can thus be replaced in the limit by $C_{-} \delta(k_{1}+k_{2})$, where
\begin{equation}
\label{eq_620}
C_{-} =\integral{}{}{v} \frac{\widehat{Q_{0}'}(-v)\widehat{Q_{0}'}(v)}{v+i\,0}\,.
\end{equation}
Accepting for now that $C_{-}=-i\pi$, we obtain the limit (\ref{eq_149}) by means of $2k_1\mathrm{d}k_1=\mathrm{d}E_1$ and by the earlier remark restricting $k_1$ to $[\kr,\kl]$.\\
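Explicitly, the bookkeeping of this last step reads (with $R=1-T$, as used throughout):
\begin{equation*}
\frac{2i}{(2 \pi)^{2}}\integral{\kr}{\kl}{k_1}\,2k_1\,T(E_1)R(E_1)\,C_{-}
=\frac{2i\,(-i\pi)}{(2 \pi)^{2}}\integral{\mur}{\mul}{E}\,T(E)R(E)
=\frac{1}{2 \pi} \integral{\mur}{\mul}{E}\,T(E)\left(1-T(E)\right)\,.
\end{equation*}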
\noindent{\bf 3rd cumulant of the first kind.}
(i) We first show that the contact terms vanish in the limit $l\to\infty$:
\begin{equation*}
\lim\limits_{t\to\infty}\frac{1}{t}\integral{0}{t}{^{2}t}
\cumul{\overleftarrow{T}(\widehat{I}_1[\widehat{Q}_2,\widehat{I}_2])}\;\xrightarrow[l\to+\infty]{}\;0\,,
\qquad \lim\limits_{t\to\infty}\frac{1}{t}\integral{0}{t}{t_1}\cumul{[\widehat{Q}_1,[\widehat{Q}_1,\widehat{I}_1]]}
\;\xrightarrow[l\to+\infty]{}\;0\,.
\end{equation*}
The limit $t\to\infty$ is superfluous in the second claim, since the integrand is time--independent. It actually vanishes even at finite $l$ because
\begin{equation*}
\cumul{[\widehat{Q},[\widehat{Q},\widehat{I}]]} = \cumul{[\widehat{Q},\widehat{[Q,I]}]} = 2i\,\operatorname{tr}(\rho[Q,Q'^2])=0\,.
\end{equation*}
The second equality is by Eq.~(\ref{eq_925}) and the last one by the vanishing commutator. Turning to the 1st contact term, it is convenient to use the identity $\cumul{\widehat{I}_1[\widehat{Q}_2,\widehat{I}_2]} =\cumul{\widehat{I}_1\widehat{[Q_2,I_2]}}$. By Wick's rule (\ref{eq_415}) and Eq.~(\ref{eq_925}) it may then be recast as
\begin{equation*}
\lim\limits_{t\to\infty}\frac{1}{t}\integral{0}{t}{^{2}t}
\cumul{\overleftarrow{T}(\widehat{I}_1\widehat{[Q_2,I_2]})}
= -\frac{2}{(2\pi)^2}\integ{J\times\mathbb{R}\setminus{J}}{^{2}k}
\frac{\matel{1}{I}{2}\matel{2}{Q'^2}{1}+\matel{1}{Q'^2}{2}\matel{2}{I}{1}}{E_1-E_2+i\,0}\,.
\end{equation*}
By the results of Appendix \ref{sec_matrix_elements}, the numerator is a linear combination of $\widehat{Q'^2}(\pm k_1\pm k_2)\widehat{Q'}(\pm k_1\pm k_2)$ with various sign combinations. They are of order $O(l^{-1})$ by (\ref{eq_141}), proving the second claim.\\
(ii) Let us now come to the main term. We showed in Eqs.~(\ref{eq_515}) that
\begin{equation*}
\lim\limits_{t\to\infty}\frac{1}{t}\integral{0}{t}{^{3}t}\cumul{\overleftarrow{T}(\widehat{I}_1 \widehat{I}_2 \widehat{I}_3)}
=\mathrm{A_I}+\mathrm{A_{II}}\,,
\end{equation*}
with
\begin{align*}
\mathrm{A_I}&=-\frac{6}{(2\pi)^{3}}\integ{J\times\mathbb{R}\setminus J\times\mathbb{R}\setminus J}{^{3}k}
\frac{\matel{1}{I}{2}\matel{2}{I}{3}\matel{3}{I}{1}}{(E_1-E_2+i\,0)(E_1-E_3+i\,0)}\,,\\
\mathrm{A_{II}}&=\frac{6}{(2\pi)^{3}}\integ{J\times\mathbb{R}\setminus J\times J}{^{3}k}
\frac{\matel{1}{I}{2}\matel{2}{I}{3}\matel{3}{I}{1}}{(E_1-E_2+i\,0)(E_3-E_2+i\,0)}\,.
\end{align*}
The claim is now
\begin{equation}
\label{eq_156a}
\mathrm{A_I} \;\xrightarrow[l\to+\infty]{}\;
\frac{1}{2 \pi} \integral{\mur}{\mul}{E} T(E)R(E)^2\,,\qquad \mathrm{A_{II}} \;\xrightarrow[l\to+\infty]{}\;
-\frac{1}{2 \pi} \integral{\mur}{\mul}{E}T(E)^2R(E)\,.
\end{equation}
It independently confirms binomial statistics, in view of $TR^2-T^2R=T(1-T)(1-2T)$.
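The identity invoked here, as already below Eq.~(\ref{eq_519}), is one line of algebra using $R=1-T$:
\begin{equation*}
TR^{2}-T^{2}R=TR\,(R-T)=T(1-T)\bigl((1-T)-T\bigr)=T(1-T)(1-2T)\,.
\end{equation*}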
The computation is similar to that of the 2nd cumulant, whence we refer to the discussion following (\ref{eq_149}) for more details. We first discuss the term $\mathrm{A_I}$. The regularization of the denominator only matters when $E_1=E_2$ or $E_1=E_3$ and, once the integration domain is taken into account, only for $k_1 = -k_2$ or $k_1 = -k_3$, both along $k_1\in [\kr,\kl]$. A matrix element $\matel{i}{I}{j}$, $(i\neq j)$ concentrates near the planes $k_i=\pm k_j$ as $l\to+\infty$; and their product near the intersection: $k_1 = -k_2 = -k_3$ with $k_1\in [\kr,\kl]$. This allows us to:
\begin{itemize}
\item[-] Use the factorizations $E_1-E_j+i\,0=(k_1-k_j)(k_1+k_j+i\,0)$, $(j=2,3)$, as appropriate for $k_1>0$, $k_2,k_3<0$.
\item[-] Select the appropriate expressions for $\matel{i}{I}{j}$ and simplify them as done for Eq.~(\ref{eq_149a}). Hence, we also have from (\ref{me--}, \ref{me-+})
\begin{align}
\label{eq_616}
\matel{2}{I}{3}&=(k_2+k_3)\bigl( \widehat{Q'}(k_2-k_3)-\overline{r(k_2)}r(k_3)\widehat{Q'}(k_3-k_2)\bigr)\,,\\
\label{eq_617}
\matel{3}{I}{1}&=(k_1-k_3)t(k_1)\overline{r(k_3)}\widehat{Q'}(-k_3-k_1)\;.
\end{align}
\end{itemize}
The integrand of $\mathrm{A_I}$ is thus recast as
\begin{equation*}
(k_2+k_3)\,T(E_1)\frac{\widehat{Q'}(k_1+k_2)\,\widehat{Q'}(-k_1-k_3)}{(k_1+k_2+i\,0)(k_1+k_3+i\,0)}\,
\bigl( r(k_2)\,\overline{r(k_3)}\,\widehat{Q'}(k_2-k_3)-R(E_2)\,R(E_3)\,\widehat{Q'}(k_3-k_2)\bigr)\,.
\end{equation*}
As mentioned, the expression depends on $l$ through $\widehat{Q'}(k) = \widehat{Q_{0}'}(lk)$; moreover, it consists of two terms, to each of which Eq.~(\ref{eq_42}) is applicable by the following observation. Each term contains a product of distributions in the variables $x=k_1+k_2$, $y=k_1+k_3$, and $x-y=k_2-k_3$. The distributions correspond to
\begin{equation*}
\rho_1(x)=\widehat{Q_{0}'}(x)\,,\qquad\rho_2(x)=\widehat{Q_{0}'}(-x)\,,\qquad\rho_3(x)=\widehat{Q_{0}'}(\pm x)\,,
\end{equation*}
where the $\pm$ refers to the first, resp. second term. In the limit $l\to+\infty$, the integrand thus reduces to
\begin{equation}
\label{eq_604}
(k_2+k_3)\,T(E_1)\bigl( \widetilde{C}_+r(k_2)\,\overline{r(k_3)}- \widetilde{C}_-R(E_2)\,R(E_3)\bigr)
\delta(k_1+k_2)\delta(k_1+k_3)\,,
\end{equation}
where
\begin{equation*}
\widetilde{C}_\pm=\int \mathrm{d}u \mathrm{d}v
\frac{\widehat{Q_{0}'}(u)\widehat{Q_{0}'}(-v)\widehat{Q_{0}'}(\pm(u-v))}{(u+i\,0)(v+i\,0)}\,.
\end{equation*}
Accepting for now that $\widetilde{C}_+=0$ and $\widetilde{C}_{-}=-(2\pi)^2/6$, we obtain the first limit (\ref{eq_156a}) by means of $2k_1\mathrm{d}k_1=\mathrm{d}E_1$ and by the earlier remark restricting $k_1$ to $[\kr,\kl]$.\\
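In detail, inserting these constants into Eq.~(\ref{eq_604}) and evaluating the delta functions at $k_2=k_3=-k_1$ (the term $\widetilde{C}_+\,r(k_2)\overline{r(k_3)}$ drops out):
\begin{equation*}
\mathrm{A_I} \;\xrightarrow[l\to+\infty]{}\;
-\frac{6}{(2\pi)^{3}}\integral{\kr}{\kl}{k_1}\,(-2k_1)\,T(E_1)\,\frac{(2\pi)^{2}}{6}\,R(E_1)^{2}
=\frac{1}{2 \pi} \integral{\mur}{\mul}{E}\,T(E)R(E)^2\,.
\end{equation*}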
We now turn to $\mathrm{A_{II}}$. In view of the integration domain the integrand of $\mathrm{A_{II}}$ is supported in the limit $l\to+\infty$ near the segment $k_1=-k_2=k_3$ with $k_1\in[\kr,\kl]$.
We may thus:
\begin{itemize}
\item[-] Use the factorizations $E_j-E_2+i\,0=(k_j-k_2)(k_j+k_2+i\,0)$ ($j=1,3$) as appropriate for $k_1,k_3>0$ and $k_2<0$.
\item[-] Select and simplify the relevant matrix elements of the current $\matel{i}{I}{j}$ as was done for Eq.~(\ref{eq_149a}). Hence we also have from (\ref{me-+}, \ref{eq_B4}):
\begin{align}
\label{eq_624}
\matel{2}{I}{3} &= (k_3-k_2)t(k_3)\overline{r(k_2)}\widehat{Q'}(-k_2-k_3)\,,\\
\label{eq_625}
\matel{3}{I}{1} &= (k_3+k_1)\overline{t(k_3)}t(k_1)\widehat{Q'}(k_3-k_1)\,.
\end{align}
\end{itemize}
The integrand of $\mathrm{A_{II}}$ then reduces to
\begin{equation*}
(k_1+k_3)T(E_1)T(E_3)R(E_2)
\frac{\widehat{Q'}(k_1+k_2)\widehat{Q'}(-k_2-k_3)\widehat{Q'}(k_3-k_1)}{(k_1+k_2+i\,0)(k_2+k_3+i\,0)}\,.
\end{equation*}
In view of $\widehat{Q'}(k)=\widehat{Q_0'}(lk)$ Eq.~(\ref{eq_42}) may be applied with $x=k_1+k_2$, $y=k_2+k_3$ and $x-y=k_1-k_3$. Comparing with the derivation of the integrand (\ref{eq_604}) of $\mathrm{A_I}$, that of $\mathrm{A_{II}}$ reduces in the limit to
\begin{equation*}
\widetilde{C}_{-} (k_1+k_3)T(E_1)T(E_3)R(E_2)\delta(k_1+k_2)\delta(k_2+k_3)\,.
\end{equation*}
Now the second claim (\ref{eq_156a}) follows just as the first one did from (\ref{eq_604}).\\
\noindent{\bf 3rd cumulant of the second kind.} By the earlier observation following (\ref{eq_524}) we only need to investigate
\begin{equation*}
\integral{0}{t}{^{3}t}\cumul{\widehat{I}_1\overleftarrow{T}^{*}(\widehat{I}_2 \widehat{I}_3)}
= \integral{0}{t}{^{3}t}\cumul{\widehat{I}_1\overleftarrow{T}(\widehat{I}_2 \widehat{I}_3)}
+\integral{0}{t}{^{2}t}\cumul{\widehat{I}_1[\widehat{Q}_2,\widehat{I}_2]}\,.
\end{equation*}
(i) We first show that the contact term vanishes as $l\to\infty$. Using $\cumul{\widehat{I}_1[\widehat{Q}_2,\widehat{I}_2]} =\cumul{\widehat{I}_1\widehat{[Q_2,I_2]}}$, the time integral is given by Eq.~(\ref{eq_37}). Hence, by Eq.~(\ref{eq_925}), we have
\begin{equation*}
\lim\limits_{t\to+\infty}\frac{1}{t}\integral{0}{t}{^{2}t}\cumul{\widehat{I}_1 \widehat{[Q_2,I_2]}}
= \frac{2i}{2\pi}\integ{J\times\mathbb{R}\setminus{J}}{^{2}k}\delta(E_1-E_2)\matel{1}{I}{2}\matel{2}{Q'^2}{1}\,.
\end{equation*}
According to Eq.~(\ref{me+-}) and Eq.~(\ref{meq+-}) with $Q'^2$ substituted for $Q$, the integrand is a linear combination of $\widehat{Q'^2}(\pm k_1\pm k_2)\widehat{Q'}(\pm k_1\pm k_2)$ with various sign combinations. By (\ref{eq_141}) they are of order $O(l^{-1})$, proving the claim.\\
(ii) We now come to the main term. We showed
\begin{equation*}
\lim\limits_{t\to +\infty}\frac{1}{t}\integral{0}{t}{^{3}t}
\cumul{\widehat{I}_1\overleftarrow{T}(\widehat{I}_2 \widehat{I}_3)}
= \mathrm{D_I}+\mathrm{D_{II}}\,,
\end{equation*}
with
\begin{align*}
\mathrm{D_I} &= \frac{2i}{(2\pi)^2}\int_{\kr}^{\kl}\frac{\mathrm{d}k_1}{2k_1}\integ{\mathbb{R}\setminus{J}}{k_3}
\frac{\matel{1}{I}{-1}\matel{-1}{I}{3}\matel{3}{I}{1}}{E_1-E_3+i\,0}\,,\\
\mathrm{D_{II}} &= -\frac{2i}{(2\pi)^2}\int_{\kr}^{\kl}\frac{\mathrm{d}k_1}{2k_1}\integ{J}{k_3}
\frac{\matel{1}{I}{-1}\matel{-1}{I}{3}\matel{3}{I}{1}}{E_3-E_1+i\,0}\,,
\end{align*}
where we evaluated the delta distributions in Eq.~(\ref{eq_531}). The claim now is
\begin{equation}
\label{eq_615}
\mathrm{D_I}\;\xrightarrow[l\to+\infty]{}\;\frac{1}{2 \pi} \integral{\mur}{\mul}{E} T(E)R(E)^2\,,\qquad
\mathrm{D_{II}} \;\xrightarrow[l\to+\infty]{}\;-\frac{1}{2 \pi} \integral{\mur}{\mul}{E}T(E)^2R(E)\,,
\end{equation}
which is sufficient in view of $TR^2-T^2R=T(1-T)(1-2T)$. We first consider $\mathrm{D_I}$. Taking the integration domain into account, the support of its integrand concentrates in the limit near the diagonal $k_1=-k_3$ for $k_1\in [\kr,\kl]$. This allows us to:
\begin{itemize}
\item[-] Use the factorization $E_1-E_3+i\,0=(k_1-k_3)(k_1+k_3+i\,0)$ as appropriate for $k_1>0$ and $k_3<0$.
\item[-] Select the appropriate matrix elements of currents and simplify them as was done in Eq.~(\ref{eq_617}); moreover, substituting $-k_1$ for $k_2$ in Eq.~(\ref{eq_616}) we obtain
\begin{equation*}
\matel{-1}{I}{3} = (k_3-k_1)\bigl( \widehat{Q'}(-k_1-k_3)-\overline{r(k_1)}r(k_3)\widehat{Q'}(k_1+k_3)\bigr)\,;
\end{equation*}
$\matel{1}{I}{-1}$ is given by Eq.~(\ref{eq_422}).
\end{itemize}
The integrand of $\mathrm{D_I}$ may thus be recast as
\begin{equation*}
(k_3-k_1)\,T(E_1)\frac{\widehat{Q'}(-k_1-k_3)}{(k_1+k_3+i\,0)}\,
\bigl( r(k_1)\,\overline{r(k_3)}\,\widehat{Q'}(-k_1-k_3)-R(E_1)\,R(E_3)\,\widehat{Q'}(k_1+k_3)\bigr)\,.
\end{equation*}
As mentioned, the expression depends on $l$ through $\widehat{Q'}(k)=\widehat{Q_{0}'}(lk)$, whence Eq.~(\ref{eq_41}) may be applied with $x=k_1+k_3$. In the limit $l\to+\infty$ the above thus reduces to
\begin{equation*}
(k_3-k_1)\delta(k_1+k_3)\,
\bigl( C_+\,r(k_1)\,\overline{r(k_3)}- C_{-}\,R(E_1)\,R(E_3)\bigr)\,,
\end{equation*}
with the constants $C_{-}$ and $C_+$ given by Eq.~(\ref{eq_620}) resp. by
\begin{equation*}
C_+ =\integral{}{}{v} \frac{(\widehat{Q_{0}'}(-v))^2}{v+i\,0}\,.
\end{equation*}
Accepting on top of $C_{-}=-i\pi$ that $C_+=0$, the first limit (\ref{eq_615}) follows by $2k_1\mathrm{d}k_1=\mathrm{d}E$.\\
We now turn to $\mathrm{D_{II}}$. Here the support of the integrand will concentrate near $k_1=k_3$ for $k_1\in [\kr,\kl]$ as $l\to+\infty$. The denominator therefore factorizes as $E_3-E_1+i\,0=(k_1+k_3)(k_3-k_1+i\,0)$, as appropriate for $k_1,k_3>0$; the relevant matrix elements are given by Eqs.~(\ref{eq_422}, \ref{eq_625}) and Eq.~(\ref{eq_624}) with $-k_1$ substituted for $k_2$. The integrand of $\mathrm{D_{II}}$ thus becomes
\begin{equation*}
(k_1+k_3)T(E_1)T(E_3)R(E_1)
\frac{\widehat{Q'}(k_1-k_3)\,\widehat{Q'}(k_3-k_1)}{(k_3-k_1+i\,0)}\,.
\end{equation*}
By Eq.~(\ref{eq_41}) it reduces in the limit to $C_{-} \delta(k_1-k_3)(k_1+k_3)T(E_1)^2R(E_1)$, which establishes the second limit (\ref{eq_615}) by the aforementioned identity $C_{-}=-i\pi$.\\
To close this section, we compute the four integrals encountered along the way. Let us generalize $C_\pm$ to $C_\pm=C_\pm(0)$ where
\begin{equation*}
C_\pm(u)=\integral{}{}{v}\frac{\widehat{Q_{0}'}(-v)\widehat{Q_{0}'}(\pm (u-v))}{v+i\,0}\,,
\end{equation*}
for which we claim
\begin{equation*}
C_+(u)=0\,,\qquad
C_{-}(u)=-2\pi i \widehat{Q_{0}Q_{0}'}(-u)\,.
\end{equation*}
Indeed, by Eq.~(\ref{eq_425}) and Parseval's identity we have
\begin{equation*}
C_\pm(u)=(-i)\integral{}{}{v}\widehat{Q_{0}}(-v)\widehat{Q_{0}'}(\pm (u-v))=
-2\pi i \integral{}{}{x}Q_{0}(x)Q_{0}'(\mp x)e^{\mp iux}\,.
\end{equation*}
The first result follows by the support properties of $Q_{0}$; the second is now read off and its special case $C_{-}(0)=-i\pi$ is by
$\integral{}{}{x}Q_{0}(x)Q_{0}'(x)=1/2$. Let us now come to
\begin{equation*}
\widetilde{C}_\pm =\integral{}{}{u}\frac{\widehat{Q_{0}'}(u) C_\pm(u)}{u+i\,0}\,.
\end{equation*}
Clearly $\widetilde{C}_+=0$. By Eq.~(\ref{eq_425}) and the SW formula (\ref{ws}) we have
\begin{align*}
\widetilde{C}_{-} =\integral{}{}{u} \bigl(\frac{1}{u-i\,0}-2\pi i\,\delta(u)\bigr)
\widehat{Q_{0}'}(u)C_{-}(u)
=2\pi\integral{}{}{u}\widehat{Q_{0}}(u)\widehat{Q_{0}Q_{0}'}(-u)-\frac{(2\pi)^2}{2}\widehat{Q_{0}'}(0)
=-\frac{(2\pi)^2}{6}\,,
\end{align*}
where the last equality is by Parseval's identity followed by $\integral{}{}{x}Q_{0}^2(x)Q_{0}'(x)=1/3$, as well as by $\widehat{Q_{0}'}(0)=\integral{}{}{x}Q_{0}'(x)=1$.
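All three numerical inputs are elementary if, as we read condition (\ref{eq_25}) (an assumption, since that condition is stated outside this section), the profile interpolates between $Q_{0}(-\infty)=0$ and $Q_{0}(+\infty)=1$:
\begin{equation*}
\integral{}{}{x}Q_{0}Q_{0}'=\Bigl[\tfrac{1}{2}Q_{0}^{2}\Bigr]_{-\infty}^{+\infty}=\tfrac{1}{2}\,,\qquad
\integral{}{}{x}Q_{0}^{2}Q_{0}'=\Bigl[\tfrac{1}{3}Q_{0}^{3}\Bigr]_{-\infty}^{+\infty}=\tfrac{1}{3}\,,\qquad
\integral{}{}{x}Q_{0}'=1\,.
\end{equation*}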
\subsection{The case of a linear dispersion relation}
\label{subsec_limit_linear}
We shall consider the limiting case of a linear dispersion relation described in Sect.~\ref{subsec_linear_model}. It arises in the limit $\lambda\to 0$ of vanishing Fermi wavelength and thus ought to retain the binomial character of the transport statistics. In the model the scatterer and the detector are pointlike objects on the real line. We treat the cases where they are separated by $l>0$, respectively coincident ($l=0$). In the following we briefly review the results obtained in \cite{graf:09} for the 3rd cumulant of the first kind, before extending them to the 3rd cumulant of the second kind. \\
\noindent{\bf 3rd cumulant of the first kind.} It was shown in~\cite{graf:09} that the 3rd cumulant of the first kind (\ref{eq_410}) exhibits binomial statistics,
\begin{equation*}
\lim\limits_{t\to +\infty}\frac{1}{t}\integral{0}{t}{^3 t}
\cumul{\overleftarrow{T}^{*}(\widehat{I}_1\widehat{I}_2\widehat{I}_3)} = \frac{V}{2\pi}T(1-T)(1-2T)\,,
\end{equation*}
for both coincident and non-coincident positions of scatterer and detector. This is the counterpart of Eq.~(\ref{eq_414}) for an energy--independent transparency $T(E)\equiv T$ and $V=\mu_\textsc{l}-\mu_\textsc{r}$. As mentioned in the introduction, contact terms make a non--vanishing contribution in the case $l=0$. More precisely, we have
\begin{align*}
\lim\limits_{t\to +\infty}\frac{1}{t}\integral{0}{t}{^{3}t}
\cumul{\overleftarrow{T}(\widehat{I}_1 \widehat{I}_2 \widehat{I}_3)}&=\frac{V}{2\pi}(-2T^2)(1-T)
&\textnormal{(main term)}\,,\nonumber\\
\lim\limits_{t\to +\infty}\frac{3}{t}\,\integral{0}{t}{^{2}t}
\cumul{\overleftarrow{T}(\widehat{I}_1[\widehat{Q}_2,\widehat{I}_2])}&=0
&\textnormal{(1st contact term)}\,,\nonumber\\
\lim\limits_{t\to +\infty}\frac{1}{t}
\integral{0}{t}{t_1}\cumul{[\widehat{Q}_1,[\widehat{Q}_1,\widehat{I}_1]]}&=\frac{V}{2\pi}T(1-T)
&\textnormal{(2nd contact term)}\,.
\end{align*}
In the second case, the main term alone contributes. In both cases a noteworthy feature is that the contribution of the main term remains unchanged if the time ordering is reduced as in $\widehat{I}_1\overleftarrow{T}(\widehat{I}_2 \widehat{I}_3)$ or omitted altogether.\\
\noindent{\bf 3rd cumulant of the second kind.} As explained in connection with Eq.~(\ref{eq_525}) the only missing item in order to establish binomial statistics is
\begin{equation*}
\lim\limits_{t\to +\infty}\frac{1}{t}\integral{0}{t}{^{3}t}
\cumul{\widehat{I}_1\overleftarrow{T}^{*}(\widehat{I}_2 \widehat{I}_3)}
=\frac{V}{2\pi}T(1-T)(1-2T)\,.
\end{equation*}
Once again, the cumulant consists of a main and a contact term, see Eqs.~(\ref{eq_527}). The contribution of the main term is known, being insensitive to time ordering; hence we merely need to show that the contact term yields the complementary contribution. We distinguish between two cases:\\
(i) {\it Coincident positions, $l=0$:} From the computation of the 1st contact term in \cite{graf:09} we infer
\begin{align*}
\lim\limits_{t\to +\infty}\frac{1}{t}\integral{0}{t}{^{2}t}
\cumul{\widehat{I}_1 [\widehat{Q}_2,\widehat{I}_2]}
&=-\frac{2i}{(2\pi)^2}T(1-T)\integral{}{}{x}\frac{\sin(Vx)}{(x-i\,0)^2}
=\frac{V}{2\pi}T(1-T)\,,
\end{align*}
as claimed. In the second equality we used the fact that, as distributions, the odd part of $(x-i\,0)^{-2}$ is $((x-i\,0)^{-2}-(x+i\,0)^{-2})/2 = -i\pi\delta'(x)$.\\
(ii) {\it Separated positions, $l>0$:} As mentioned in Sect.~\ref{subsec_linear_model}, any contact term vanishes as a consequence of $[Q_l,I_l]=0$.
\section{Comparison between different approaches}
\label{comp}
There are ways and means to compute cumulants of transported charge, and in particular those of \cite{lesovik:03}, which at first sight differ from the ones used here on various counts. The purpose of this section is to show that they nevertheless are fundamentally the same.
A first difference rests on the use of Eq.~(\ref{eq_911}), as opposed to Eq.~(\ref{eq_919}) as above. The link is provided by identity~(\ref{eq_922}), which deserves proof anyway. We recall once more that its two sides are to be understood as power series in $\lambda$ with the time ordering placed inside the multiple integrals. The l.h.s. equals
\begin{align*}
e^{i Ht}e^{-iH(\lambda)t}&=e^{i \lambda Q(t)}e^{-i \lambda Q}=\overleftarrow{T}\exp\bigl(i\lambda(Q(t)-Q)\bigr)
=\sum_{n=0}^\infty\frac{(i\lambda)^n}{n!}\overleftarrow{T}(Q(t)-Q)^n\,.
\end{align*}
In order to end up with things placed as stated, we apply the fundamental theorem of calculus to $\overleftarrow{T}(Q(t)-Q)^n$, rather than to $(Q(t)-Q)^n$:
\begin{align*}
\overleftarrow{T}(Q(t)-Q)^n&= \integral{0}{t}{t_1}
\frac{\partial}{\partial t_1}\left.\overleftarrow{T}\bigl((Q(t_1)-Q)\ldots
(Q(t_n)-Q)\bigr)\right|_{t_2 = \ldots = t_n = t}\\
&= \int_{0}^{t}\mathrm{d}t_1\ldots \mathrm{d}t_n\,\frac{\partial}{\partial t_n}\ldots\frac{\partial}{\partial t_1}\overleftarrow{T}\bigl((Q(t_1)-Q)\cdots (Q(t_n)-Q)\bigr)\,.
\end{align*}
We then expand the correlator in $Q(t_i)$ and $-Q$; the resulting second term is independent of $t_i$ and does not contribute to the derivative. By~(\ref{eq_921}) this proves~(\ref{eq_922}). A further proof of that identity, also given in \cite{graf:09}, is by comparing its two sides at each order $\lambda^n$. On the l.h.s. it is to be noted that
\begin{equation}\label{HI}
\HI(\lambda)=- \lambda I - i \frac{\lambda^{2}}{2}[Q,I] + O(\lambda^{3})
\end{equation}
is not homogeneous in $\lambda$; whereas the r.h.s. is expanded into $\overleftarrow{T}$-ordered products plus contact terms. It is instructive to check the case $n=2$. Up to a common factor $-\lambda^2/2$ the two sides are
\begin{equation*}
\integraal{0}{t}{t_1}{t_2}\overleftarrow{T}\bigl(I(t_1)I(t_2)\bigr)+\integral{0}{t}{t_1}[Q(t_1),I(t_1)]\,,\qquad
\integraal{0}{t}{t_1}{t_2}\overleftarrow{T}^{*}(I(t_1)I(t_2))\,;
\end{equation*}
by (\ref{ect}) they agree.
A further point deserving attention is as follows. For the model with quadratic dispersion the expansion (\ref{HI}) terminates at order $\lambda^{2}$ and reads in first quantization
\begin{equation*}
\HI(\lambda)=- \lambda (p Q'(x) + Q'(x) p) +\lambda^{2}Q'(x)^2\,,
\end{equation*}
see Eqs.~(\ref{eq_924}, \ref{eq_925}); and with
$\widehat{A}=\int\mathrm{d}x\mathrm{d}x'\widehat{\psi}(x)^*A(x,x')\widehat{\psi}(x')$ in second quantization
\begin{align}
\widehat{\HI}(\lambda)&=- \lambda \widehat{I} - i \frac{\lambda^{2}}{2}[\widehat{Q},\widehat{I}]\label{hatHI1}\\
&= \int \mathrm{d}x\bigl(- \lambda\widehat{\jmath}(x)Q'(x)+\lambda^2\widehat{\rho}(x)Q'(x)^2\bigr)\,,\label{hatHI2}
\end{align}
where charge and current densities are expressed in terms of fermionic operators $\widehat{\psi}(x)$ as
\begin{equation}\label{rhoj}
\widehat{\rho}(x)=\widehat{\psi}(x)^*\widehat{\psi}(x)\,,\qquad
\widehat{\jmath}(x)=-i\bigl(\widehat{\psi}(x)^*\widehat{\psi}'(x)-\widehat{\psi}'(x)^*\widehat{\psi}(x)\bigr)\,.
\end{equation}
Eq.~(\ref{hatHI2}) follows from (\ref{hatHI1}) by the commutation relation $[\widehat{Q},\widehat{I}]=\widehat{[Q,I]}$, but may also be obtained from $\widehat{H}(\lambda)$ as a starting point. Computations like those seen in Sect.~\ref{sec_derivations} can also be performed on the basis of Eqs.~(\ref{eq_911}, \ref{hatHI2}). However, a fact which was crucial there, but may escape notice here, is that the term of order $\lambda^2$ in (\ref{hatHI2}) is a commutator, as seen in (\ref{hatHI1}). The commutation relation, which states
\begin{equation*}
i\bigl[\int\mathrm{d}x\widehat{\rho}(x)Q(x), \int\mathrm{d}x'\widehat{\jmath}(x')Q'(x')\bigr]=-\int\mathrm{d}x\widehat{\rho}(x)Q'(x)^2\,,
\end{equation*}
has a seemingly independent derivation by means of
\begin{equation*}
i[\widehat{\rho}(x),\widehat{\jmath}(x')]=-\delta'(x'-x)\widehat{\rho}(x')\,,
\end{equation*}
which in turn follows from (\ref{rhoj}) and from
$[A^*B, C^*D]=A^*\{B, C^*\}D-C^*\{A^*,D\}B$ for annihilation operators $A$ through $D$.
Many experimental results on $B$ physics (see \cite{acpref}) can be seen as hints of physics beyond
the Standard Model (SM). Electroweak precision data also points to new physics scenarios
\cite{Chanowitz:2001}. In the LHC era, new physics related to the observability of the Higgs
boson is worth studying, and the elucidation of the Higgs sector properties is a topic of utmost importance.
A simple extension of the Standard Model is the introduction of a new generation of quarks and leptons (SM4).
Precision data do not exclude the existence of a sequential fourth generation
\cite{Maltoni:1999ta, He:2001tp, Novikov:2002tk, Kribs:2007nz, Hung:2007ak, Hashimoto:2010at}.
An extensive review and an exhaustive list of references to the work on the subject previous to our
century can be found in \cite{Frampton:1999xi}. Recent highlights on consequences of a fourth
generation can be found in \cite{Holdom:2009rf}. These include mechanisms of dynamical electroweak symmetry
breaking by condensates of fourth generation quarks and leptons
\cite{Holdom:1986rn,Hill:1990ge, Carpenter:1989ij, Hung:2009hy}, convergence improvement of the
three SM gauge couplings due to the Yukawa coupling contributions from the fourth generation \cite{Hung:1997zj},
the possibility of electroweak baryogenesis through first-order electroweak phase transition with four generations
\cite{Ham:2004xh, Fok:2008yg, Kikukawa:2009mu}, CP violation based on Jarlskog invariants generalized to
four generations \cite{Hou:2008xd} and the hierarchy problem \cite{Hung:2009ia}.
The $B\to K \pi$ CP asymmetry puzzles can also be easily solved by a fourth generation
\cite{Soni:2008bc,Hou:2005hd,Hou:2006jy} for a range of extra quark masses within the values allowed by
high precision LEP measurements \cite{Maltoni:1999ta,He:2001tp,Novikov:2002tk,Bobrowski:2009ng}, namely
\begin{eqnarray}\label{eq:benchmarks}
m_{\ell_4} - m_{\nu_4} &&\simeq 30 - 60 \; \mathrm{GeV} \nonumber\\
m_{u_4} - m_{d_4} &&\simeq
\left( 1 + \frac{1}{5} \ln \frac{m_H}{115 \; \mathrm{GeV}}
\right) \times 50 \; \mathrm{GeV} \\
|V_{u d_4}|,|V_{u_4 d}| &&\lesssim 0.04 \ \nonumber\\
|U_{\ell_4}|,|U_{\mu_4}| &&\lesssim 0.02 \; , \nonumber
\end{eqnarray}
where $V$ ($U$) is the CKM (MNS) quark (lepton) mixing matrix which is now a $4\times 4$ unitary matrix.
These bounds are subject to direct search limits from LEPII and CDF \cite{Achard:2001qw,Lister:2008is,Aaltonen:2009nr} :
\begin{eqnarray}\label{LEP-CDF}
m_{\nu_4,\ell_4} &> & 100\; \mathrm{GeV} \nonumber\\
m_{u_4} & >& 311\; \mathrm{GeV}\; \\
m_{d_4} & >& 338\; \mathrm{GeV}. \nonumber
\end{eqnarray}
In refs.~\cite{Soni:2008bc,Hou:2006jy,Hou:2005hd}, in order to solve the CP asymmetry puzzles in
$B\to K \pi$, one needs the extra quarks to be within the following range \cite{Soni:2008bc}:
\begin{eqnarray}
400\; \mathrm{ GeV} <& m_{u_4} & < 600\; \mathrm{ GeV} .
\end{eqnarray}
Such values of new quark masses imply strong Yukawa couplings. So, it is natural to expect that
this fourth generation could play a special role in the electroweak symmetry breaking (EWSB). Contrary to
other works where it is assumed that Yukawa couplings are strong enough to produce composite scalars at
low energy\cite{Holdom:1986rn,Hill:1990ge, Carpenter:1989ij, Hung:2009hy}, we shall assume that the
perturbative treatment \cite{Coleman:1973jx} is still valid. This assumption is
justified by the fact that even fourth generation masses in the range of 300-600 GeV imply
Yukawa couplings ($g_{f}$) around 2-3. In the loop expansion, the perturbative parameters are given
by $g_{f}^2/4 \pi$ which are still smaller than one for these mass values.
In this work we study the effect of a fourth generation in the dynamical breaking of electroweak symmetry. In order
to isolate these effects and following the spirit of \cite{Fatelo:1994qf}, we start in Section II with a model
with vanishing scalar self-interactions at the classical level and maintain this condition at the one loop level.
In this model the symmetry breaking of the gauge group $SU(2)_L\times U(1)_Y$ is a dynamical effect exhausted by the Yukawa couplings of chiral fermions to the Higgs scalar. In Section III we relax the condition of vanishing effective self-interactions and perform a Renormalization Group (RG) improvement in order to determine whether perturbative conditions remain valid when the running of Yukawa couplings are taken into account. Finally, in Section IV we also explore the implications of this kind of dynamical EWSB mechanism in the minimal supersymmetric standard
model (MSSM) extended with a fourth generation of chiral matter (MSSM4) (see \cite{Chaichian:1995ef} for a closely related approach) which has been studied in many situations \cite{Dubicki:2003am,Carena:1995ep,Gunion:1995tp}.
\section{ Symmetry breaking induced by the fourth generation}
We start with the Lagrangian describing electroweak interactions and consider only the part required for our
purposes, namely
\begin{equation}\label{lag1}
{\mathcal L}= \frac{1}{2}\partial^\mu\phi\partial_\mu\phi-\frac{\mu^2(v)}{2}\phi^2 -\frac{\lambda(v)}{4!}\phi^4
+\sum_{a}\pacua{\bar\psi^a i\gamma^\mu\partial_\mu\psi^a -\frac{g_{a}(v)}{\sqrt{2}}\phi \bar\psi^a \psi^a}.
\end{equation}
Here, $\phi$ is the neutral component of the standard Higgs doublet and $\psi^{a}$ is the corresponding
fermion field with $a=t,u_4,d_4,\ell_4,\nu_4$. We assume that our description of the electroweak interaction
by the symmetries of the Standard Model is valid only up to a cutoff $\Lambda$, but our perturbative expansion
will be done on the physical couplings at the scale $v$ (see e.g.\cite{Aitchison} for a discussion on this viewpoint),
a fact that we emphasize by explicitly showing the dependence of the parameters on this scale which --anticipating
results-- we identify below as the electroweak symmetry breaking scale.
As it is well known, if $ \mu^{2}(v)<0$ and
$\lambda(v)>0$ we have spontaneous symmetry breaking (SSB) already at tree level. In this model we are
interested in the possibility of triggering EWSB without invoking a spontaneous breakdown, and therefore, we
require an authentic scalar field, {\it i.e.}, $\mu^{2}(v)>0$. We expect SB to be induced by quantum effects
and we are specially interested in the isolation of the effects due to the fourth generation in such a dynamical EWSB.
With this aim, we start taking $\lambda(v)=0$, which is the limiting case where one-loop effects of the
scalar sector are completely suppressed and only the matter sector is responsible for EWSB. The condition
$\lambda(v)=0$ should not be taken as a fundamental requirement of the model nor as a fine tunning condition,
but instead as the limiting scenario where the effects of the fourth generation are more easily recognizable.
The one-loop corrections to the classical potential $V^{(0)}=\frac{1}{2}\mu^2(v)\phi^2$ can be calculated
using standard techniques \cite{Sher:1988mj}. At one loop level we obtain
\begin{equation}
V^{(1)}_{f}=\frac{\mu^2(v)}{2}\phi^2-\sum_{a} \frac{4N_c^a}{32\pi^2}\int_0^{\Lambda^2} dk_E^2 k_E^2
\ln\left[\frac{k_E^2+ m_a^2(\phi)}{k_E^2+ m_a^2(0)}\right],
\end{equation}
where $N_c^a$ is the number of colors of the field labeled by $a$ and $m_a^2(\phi)=g_{a}^2(v)\phi^2/2$.
Including one-loop gauge boson contributions to this potential is straightforward and yields
\begin{equation}\label{Vef}
\begin{split}
V^{(1)}=&\frac{\mu^2(v)\phi^2}{2}+\sum_{a} \frac{n_a}{32\pi^2}\int_0^{\Lambda^2} dk_E^2 k_E^2
\ln\left[\frac{k_E^2+ m_a^2(\phi)}{k_E^2+ m_a^2(0)}\right]\\
=&\frac{\mu^2(v)\phi^2}{2}+\sum_{a} \frac{n_a}{64\pi^2}\biggl\{ \pacua{m_a^2(\phi)-m_a^2(0)}\Lambda^2
+\Lambda^4 \ln\left[ \frac{\Lambda^2+m_a^2(\phi)}{\Lambda^2+m_a^2(0)}\right]
\\&\qquad\qquad-m_a^4(\phi)\ln\left[1+ \frac{\Lambda^2}{m_a^2(\phi)}\right]+m_a^4(0)
\ln\left[1+ \frac{\Lambda^2}{m_a^2(0)}\right]\biggr\}
\end{split}
\end{equation}
where now $a=t,u_4,d_4,\ell_4,\nu_4, W, Z$ and the field-dependent squared masses for gauge bosons are
given by $m_W^2(\phi)=g_2^2\phi^2/4$ and $m_Z^2(\phi)=(g_1^2+g_2^2)\phi^2/4$, with $g_{1}$ and $g_{2}$
as the $U(1)$ and $SU(2)$ gauge couplings evaluated at the scale $v$ respectively. Consequently,
the degeneracies per particle are the following: $n_W=6$, $n_Z=3$, $n_t=n_{u_4}=n_{d_4}=-12$ and $n_{\ell_4}=n_{\nu_4}=-4$.
From \refeq{Vef}, one can see that the classical minimum $\langle\phi\rangle=0$ can be turned into a local
maximum by the one-loop corrections. A new minimum appears then at $\langle\phi\rangle=v\neq 0$ and all
particles in the model acquire a mass $m_a=m_a(v)$. The only non-trivial solution to
$\partial V^{(1)}/\partial\phi |_{\phi=v}=0$ is
\begin{equation}
\mu^2(v)=-\sum_{a}\frac{n_a m_a^4}{16\pi^2 v^2}\pacua{\frac{\Lambda^2}{m_a^2}-\ln\paren{1+\frac{\Lambda^2}{m_a^2}}},
\end{equation}
with $\mu^2(v)>0$ for the inputs of the problem as required, meaning that the tree level scalar mass
term is genuine and symmetry breaking is entirely driven by one-loop effects. The Higgs boson mass at one
loop level can be identified as
\begin{equation}\label{HM}
m_H^2(v)=\left.\frac{\partial^2 V^{(1)}}{\partial\phi^2}\right|_{\phi=v}=
-\sum_{a}\frac{n_a m_a^4}{8\pi^2v^2}\pacua{\ln\paren{1+\frac{\Lambda^2}{m_a^2}}-\frac{\Lambda^2}{m_a^2+\Lambda^2}}
\end{equation}
and the fourth derivative of the effective potential evaluated at the scale $v$ reads
\begin{equation}\label{cuarta}
\left.\frac{\partial^4 V^{(1)}}{\partial\phi^4}\right|_{\phi=v}=
-\sum_{a}\frac{3 n_a m_a^4}{8\pi^2v^4}\pacua{\ln\paren{1+\frac{\Lambda^2}{m_a^2}}+9\frac{m_a^2}{m_a^2+\Lambda^2}
-8\frac{m_a^4}{(m_a^2+\Lambda^2)^2}+\frac{8}{3}\frac{m_a^6}{(m_a^2+\Lambda^2)^3}-\frac{11}{3}}.
\end{equation}
The effective scalar self-interaction depends on the fermion masses $m_{a}$, the minimum of the
effective potential $v$ and the cut-off $\Lambda$ and it is worthy to study this dependence. This is shown in
Fig. (\ref{lambda}) for $v=246 ~$GeV and heavy fermion masses in the range given in
Eqs.(\ref{eq:benchmarks}, \ref{LEP-CDF}).
\begin{center}
\begin{figure}[ptb]
\begin{center}
\includegraphics[width=0.8 \textwidth]{lambda1.eps}
\end{center}
\caption{Effective Higgs self-coupling at the electroweak scale $v=246~$GeV as a function of the cutoff $\Lambda$
for fixed values of the heavy fermions. The curves correspond to $m_{u_4}=$350, 400, 450 and 500 GeV
with $m_{\ell_4}=200$ GeV and mass splittings $m_{u_4} - m_{d_4} =60$ GeV and $ m_{\ell_4} - m_{\nu_4} =45 $ GeV from shallowest to deepest.}
\label{lambda}
\end{figure}
\end{center}
Notice that for given fermion masses, the specific value of the effective self-interaction at the
electroweak symmetry breaking scale depends on the value of the unknown scale $\Lambda$. Up to this point,
a wide range of possible values for the cut-off are eligible and one must take into account the dependence
of the parameters of the model on $\Lambda$, as we will do in section III. However, these values must be
consistent with the perturbative treatment we are using which requires a small effective scalar
self-interaction. From Fig. (\ref{lambda}) we can see that this narrows the range of values for $\Lambda$,
the allowed range depending on the specific masses of the fourth generation. Interestingly,
for given values of the fermion masses, there are specific values of $\Lambda$ such that the effective
self interaction also vanishes. These specific values are worthy to study
in detail because in this case the effects of scalar self-interactions in the EWSB at the next order in
perturbation theory also vanish and EWSB is still driven by the Yukawa couplings at that order. Furthermore,
in this case the scale $\Lambda$ is fixed by the electroweak scale $v$ and the values of the fermion masses.
There are two solutions to the equation
\begin{equation}\label{cuarta0}
\left.\frac{\partial^4 V^{(1)}}{\partial\phi^4}\right|_{\phi=v}=0,
\end{equation}
for heavy fermion masses in the range given in Eqs.(\ref{eq:benchmarks}, \ref{LEP-CDF}). One of them yields
$\Lambda$ around the electroweak symmetry breaking scale $v$ and we consider it as unphysical. The other
solution lies in the range
\begin{equation}\label{range}
1600\text{ GeV}<\Lambda<2500\text{ GeV},
\end{equation}
depending on the input for the masses of the fourth generation fermions.
Once we have fixed the cutoff $\Lambda$ for given masses of the heavy fermions, we obtain from Eq.
\refeq{HM} the corresponding Higgs mass as a function of the fourth generation quark and lepton masses.
In the numerical analysis we use
\begin {equation}\label{dif}
\begin{array}{ccc}
m_{\ell_4} - m_{\nu_4} =45 \; \mathrm{GeV},&\qquad & 100 \; \mathrm{GeV} \le m_{\ell_4}\le 400 \; \mathrm{GeV}, \\
m_{u_4} - m_{d_4} =60 \; \mathrm{GeV},&\qquad & 350 \; \mathrm{GeV} \le m_{u_4}\le 500 \; \mathrm{GeV},
\end{array}
\end {equation}
as suggested by Eqs.(\ref{eq:benchmarks}, \ref{LEP-CDF}). Under these considerations, the Higgs mass is a
smooth function of $m_{u_4}$ and $m_{\ell_4}$ and has a more pronounced dependence on $m_{u_4}$ as shown
in Figs.(\ref{mu4}, \ref{ml4}). More importantly, a modest Higgs mass of $\sim 350 $ GeV is
reachable and even a heavy Higgs of $\sim 800 $ GeV would be consistent with electroweak
precision data if EWSB is entirely driven by Yukawa forces of the hypothetical fourth generation and the top quark.
\begin{center}
\begin{figure}[ptb]
\begin{center}
\includegraphics[width=0.8 \textwidth]{mu4.eps}
\end{center}
\caption{Higgs mass as a function of $m_{u_4}$ for different values of $m_{\ell_4}=400,~300,~200,~100 $ GeV from top to bottom. The lowest line contains only the contribution of $u_4$ and $d_4$.}
\label{mu4}
\end{figure}
\end{center}
Notice that if fourth generation
lepton masses are of order $100-200$ GeV, then the contribution of $u_4$ and $d_4$ almost completely
determines the Higgs mass prediction, as depicted in Fig.(\ref{mu4}). Combining Eq.\refeq{HM} and
Eq.\refeq{lambda} with condition Eq.\refeq{cuarta0}, this fact is expressed as follows:
\begin{equation}\label{HM1}
m_H^2\approx\sum_{q=u_4,d_4}\frac{4 m_q^4}{\pi^2v^2}\pacua{1-3\frac{m_q^2}{\Lambda^2}
+\mathcal{O}\paren{\frac{m_q^4}{\Lambda^4}}}.
\end{equation}
Also in this case, the cutoff for new physics should be within the range given in Eq.\refeq{range}.
A simple and good approximation for this case is
\begin{equation}\label{HMa}
m_H\approx\frac{1.89}{\pi v}\sqrt{m_{u_4}^4+m_{d_4}^4},
\end{equation}
with $\Lambda\approx 5 m_{u_4}$.
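As a rough numerical check of the prefactor: with $\Lambda=5 m_{u_4}$, the correction factor in Eq.~\refeq{HM1} is $1-3m_q^2/\Lambda^2\approx 0.88$ for $q=u_4$ and $\approx 0.91$ for $q=d_4$ (using the splitting of Eq.~\refeq{dif}), so the effective coefficient $2\sqrt{1-3m_q^2/\Lambda^2}$ lies between
\begin{equation*}
2\sqrt{0.88}\approx 1.88\qquad\text{and}\qquad 2\sqrt{0.91}\approx 1.91\,,
\end{equation*}
bracketing the quoted value $1.89$.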
It is important to remark that even for masses of the 4th generation around 500 GeV, the corresponding Yukawa
couplings ($g_{q_4}$) are around $2-3$ thus the loop expansion parameter, given by $g_{q_4}^2/4\pi$,
is smaller than one and justifies our perturbative approach.
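For definiteness: from the Yukawa terms in Eq.~\refeq{lag1}, $g_{q_4}=\sqrt{2}\,m_{q_4}/v$, so that with $v=246$ GeV
\begin{equation*}
g_{q_4}\big|_{m_{q_4}=500\ \mathrm{GeV}}\approx 2.9\,,\qquad
\frac{g_{q_4}^{2}}{4\pi}\approx 0.66<1\,.
\end{equation*}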
Results contained in Figs.(\ref{mu4}, \ref{ml4}) are in agreement --{\it mutatis mutandis}-- with the analysis performed in the full renormalized SM4 framework \cite{Hashimoto:2010at}.
\begin{center}
\begin{figure}[ptb]
\begin{center}
\includegraphics[width=0.8 \textwidth]{ml4.eps}
\end{center}
\caption{Higgs mass as a function of $m_{\ell_4}$ for different values of $m_{u_4}=350,~400,~450,~500$ GeV from bottom to top.}
\label{ml4}
\end{figure}
\end{center}
\section{RG Improved Model}
It is important to check the consistency and stability of our previous approach by taking into account the
running of the Yukawa couplings, as it is well known that these couplings could reach the non-perturbative
regime very quickly.
In this section, following the approach of \cite{Cvetic:1995va} we investigate the
leading effects of the heavy fourth generation quarks on the scalar sector with a special emphasis on the
perturbative nature of the analysis and its implications on the possible choices for the ultraviolet cut-off.
We use the Renormalization Group equation (RG) to estimate the running of the couplings.
From the previous results, we learned that the contribution of gauge bosons, top quark and fourth generation
leptons to the one-loop effective potential is negligible compared to that of the fourth generation quarks
if we assume that new leptons are relatively light. In a first approximation we will consider only the
effects of the running in the fourth family of quarks. In order to incorporate these effects properly in
the analysis of the Higgs mass, we will use the pole mass for the Higgs. Furthermore, the study of the perturbative regime will require the running of the fermion masses, which is dictated by the running of
the Yukawa couplings. Since we will study the behavior of our observables as a function of these masses
it is important to work with the fermion masses as defined at the corresponding scale, i.e.
$m_{q_{4}}=m_{q_{4}}(\mu=m_{q_{4}})$. Finally, we will incorporate renormalization effects of the vacuum expectation value of the scalar field. All these effects are more easily handled using a more conventional approach; thus, unlike in the previous section, here we start with the bare Lagrangian whose
sector of our primary interest is
\begin{equation}\label{Lagr}
{\mathcal L}^{(\Lambda)}= \frac{1}{2}\partial^\mu\phi\partial_\mu\phi-V^{(0)} \left( \phi^2; \Lambda \right)
+\sum_{a=u_4,d_4}\pacua{\bar\psi^a i\gamma^\mu\partial_\mu\psi^a -\frac{g_a\left( \Lambda \right)}{\sqrt{2}}\phi \bar\psi^a \psi^a},
\end{equation}
with
\begin{equation}
V^{(0)}\left( \phi^2; \Lambda \right)
= \frac{1}{2} \mu^2(\Lambda) \phi^2
+ \frac{\lambda\left( \Lambda \right) }{4!}
\phi^4 \ .
\label{Vtree}
\end{equation}
At one loop level we have
\begin{equation}
V^{(1)}\left( \phi^2; \Lambda \right)=V^{(0)}\left( \phi^2; \Lambda \right)-\sum_{a=u_4,d_4}\frac{4N_c^a}{32\pi^2}\int_0^{\Lambda^2} dk_E^2 k_E^2
\ln\left[1 + \frac{ g^2_a(\Lambda) \phi^2}{2 k_E^2}\right].
\end{equation}
Again, if we insist on a dynamical SB triggered by fourth generation quarks and we set $\lambda(\Lambda)=0$, the only non-trivial solution to
$\partial V^{(1)}/\partial\phi |_{\phi=\langle \phi \rangle_{1}}=0$ is
\begin{equation}
\mu^2(\Lambda)= \sum_{a=u_4,d_4}\frac{ g_a^2(\Lambda) N_{c}}
{8 \pi^2} \left[ \Lambda^2 - m_a^{(0)2}(\Lambda)
\ln \left( \frac{\Lambda^2}{ m_a^{(0)2}(\Lambda)} + 1 \right) \right] ,
\end{equation}
with $\mu^2(\Lambda)>0$ for the inputs of the analysis. This means that we have an authentic scalar in the SB sector. Here
\begin{equation}
m_a^{(0)}(\Lambda) =
\frac{g_a(\Lambda) \langle \phi \rangle_{1} }{\sqrt{2}}
\label{1ltgapdef}
\end{equation}
with $\langle \phi \rangle_{1}$ as the ``bare'' vacuum expectation value, where the subscript denotes the fact that this is
an approximation with only the one-loop fourth generation quantum effects.
It is important to notice that our Lagrangian depends now on the values of the coupling at the cut-off scale. In
this section {\it the cut-off scale will be defined as the scale where the perturbative regime for the Yukawa
couplings is still valid}. Above this scale, the Yukawa couplings could get strong enough to generate
non-perturbative effects such as condensate formation.
In order to obtain predictions on physical quantities, we must make an adequate choice of $\Lambda$, taking special care to preserve the perturbative expansion.
The relations between the bare parameters of the model and the physical parameters proceed as follows: The physical (pole) mass $M_H$ of the scalar can be expressed in terms of the effective potential as
\begin{eqnarray}\label{MassH}
M_H^2 & = &
\frac{ d^2 V^{(0)} }{d \phi^2} {\Big|}_
{\phi= \langle \phi \rangle_{1}} + \Sigma_{HH}\left(
q^2 = M_H^2 \right)
\nonumber\\
& = &
\frac{d^2 V^{(1)} }
{d \phi^2} {\Big|}_
{\phi= \langle \phi \rangle_{1}} - \Sigma_{HH}\left( q^2 = 0 \right)+ \Sigma_{HH}\left(
q^2 = M_H^2 \right),
\label{MHpole1}
\end{eqnarray}
where $\Sigma_{HH}( q^2 )$ stands for the scalar self energy, that can be approximated as the following truncated Green function calculated with fourth generation one-loop effects only
\begin{equation}
\begin{split}
-i \Sigma_{HH}( q^2 )&=-i \Sigma^{u_4u_4}_{HH}( q^2 )-i \Sigma^{d_4d_4}_{HH}( q^2 )\\
&= \sum_{a=u_4,d_4}\left( \frac{g_a(\Lambda)}{\sqrt{2}} \right)^2
N_{c} \int \frac{d^4 k}{(2\pi)^4}
\mathop{\mathrm{Tr}} \left[
\frac{i}{\left( {k \llap /} - m_a^{(0)}(\Lambda) \right)}
\frac{i}{\left( {k \llap /} + {q \llap /} - m_a^{(0)}(\Lambda) \right)}
\right].
\end{split}
\end{equation}
A straightforward calculation performing the Wick rotation in Euclidean space and imposing a spherical cut-off on the Euclidean quark momentum yields
\begin{equation}\label{SHH}
\begin{split}
\Sigma_{HH}(q^2)=& -\sum_{a=u_4,d_4} \frac{ g_a^2(\Lambda) N_c}{8 \pi^2}\left\{\Lambda^2+\left[\frac{q^2}{2}-3m^{(0)2}_a(\Lambda)\right]\ln\left(\frac{\Lambda^2}{ m_a^{(0)2}(\Lambda)}\right)\right.\\
&+2m^{(0)2}_a(\Lambda)-\frac{7}{12}q^2+\frac{ m_a^{(0)2}(\Lambda)}{\Lambda^2}\left[\frac{q^2}{2}-5m^{(0)2}_a(\Lambda)\right]\\
&\left.+\frac{ m_a^{(0)4}(\Lambda)}{\Lambda^4}\left[q^2+\frac{7}{2}m^{(0)2}_a(\Lambda)\right]+\mathcal{O}\left((q^2;m_a^{(0)2}(\Lambda))\frac{ m_a^{(0)6}(\Lambda)}{\Lambda^2}\right)\right\}.
\end{split}
\end{equation}
In this framework, the relation among the bare VEV $ \langle \phi \rangle_{1}$ and its renormalized counterpart $v\equiv\phi_{\text{ren}}$ is given by the renormalization of the kinetic scalar term and can be written as
\begin{equation}
Z_{\phi} \langle \phi \rangle_{1}^2= v^2,
\end{equation}
where
\begin{equation}\label{phiren}
\begin{split}
Z_{\phi}=& \left[ 1 - \frac{d \Sigma_{HH}(q^2)}{d q^2}
{\Big|}_{q^2=M_H^2} \right]\\
=& 1+\sum_{a=u_4,d_4} \frac{4 g_a^2(\Lambda) N_c}{64 \pi^2}\left\{\ln\left(\frac{\Lambda^2}{ m_a^{(0)2}(\Lambda)}\right)-\frac{7}{6}\right.\\
&\left.+\frac{ m_a^{(0)2}(\Lambda)}{\Lambda^2}+2\frac{ m_a^{(0)4}(\Lambda)}{\Lambda^4}+\mathcal{O}\left(\frac{ m_a^{(0)6}(\Lambda)}{\Lambda^6}\right)\right\}.
\end{split}
\end{equation}
In the matter sector, the running of the relevant Yukawa couplings can be summarized in the following Renormalization Group Equations:
\begin{eqnarray}
(16\pi^2)\mu\frac{\partial}{\partial\mu}g_{u_4} =\frac{9}{2}g^3_{u_4}+\frac{3}{2}g_{u_4}g^2_{d_4}\\
(16\pi^2)\mu\frac{\partial}{\partial\mu}g_{d_4} =\frac{9}{2}g^3_{d_4}+\frac{3}{2}g_{d_4}g^2_{u_4}.
\end{eqnarray}
In the approximation $g_{u_4}\approx g_{d_4}$, defining $g_{u_4}-g_{d_4}\equiv\Delta g $, the previous equations reduce to
\begin{eqnarray}
(16\pi^2)\mu\frac{\partial}{\partial\mu}g_{u_4} \approx 6g^3_{u_4}\\
(16\pi^2)\mu\frac{\partial}{\partial\mu}\Delta g \approx 12 g^2_{u_4}\Delta g
\end{eqnarray}
and the solution can be written as
\begin{equation} g_{u_4}(\mu)\approx\left[\frac{1}{g^2_{u_4}(\mu_0)}-\frac{6}{16\pi^2}\ln\left(\frac{\mu^2}{\mu_0^2}\right)\right]^{-1/2}
\end{equation}
\begin{equation}
\Delta g(\mu)\approx\Delta g(\mu_0)\left[1-\frac{3 g^2_{u_4}(\mu_0)}{8\pi^2}\ln\left(\frac{\mu^2}{\mu_0^2}\right)\right]^{-1}.
\end{equation}
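The first solution is the standard one-loop integration: in terms of $g_{u_4}^{-2}$ the flow is linear,
\begin{equation*}
(16\pi^2)\,\mu\frac{\partial}{\partial\mu}g_{u_4}^{-2}
=-2g_{u_4}^{-3}\,(16\pi^2)\,\mu\frac{\partial g_{u_4}}{\partial\mu}\approx -12\,,
\end{equation*}
which integrates to $g_{u_4}^{-2}(\mu)=g_{u_4}^{-2}(\mu_0)-\frac{6}{16\pi^2}\ln(\mu^2/\mu_0^2)$; the equation for $\Delta g$ is then linear in $\Delta g$ and integrates analogously.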
The physical mass of the heaviest fourth generation quark is defined as
\begin{equation}
m_{u_4}\equiv
\frac{g_{u_4}(m_{u_4}) v}{\sqrt{2}}.
\end{equation}
The running of Yukawa couplings from $E=m_{u_4}$ to $E'=\Lambda$ is given by
\begin{equation}\label{g4} g_{u_4}(\Lambda)\approx\left[\frac{1}{g^2_{u_4}(m_{u_4})}-\frac{6}{16\pi^2}\ln\left(\frac{\Lambda^2}{m_{u_4}^2}\right)\right]^{-1/2}
\end{equation}
\begin{equation}
\Delta g(\Lambda)\approx\Delta g(m_{u_4})\left[1-\frac{3 g^2_{u_4}(m_{u_4})}{8\pi^2}\ln\left(\frac{\Lambda^2}{m_{u_4}^2}\right)\right]^{-1}.
\end{equation}
Inserting \refeq{SHH} into \refeq{MassH}, the squared pole mass of the scalar is
\begin{eqnarray}
M_H^{2} & = & \sum_{a=u_4,d_4} \frac{8 g_a^2(\Lambda) N_c Z_{\phi}^{-2}v^2}{64 \pi^2}\left[\ln\left(\frac{\Lambda^2}{ m_a^{(0)2}(\Lambda)}+1\right)-\frac{ \Lambda^2}{\Lambda^2+m_a^{(0)2}(\Lambda)}\right].
\label{MHpole2}
\end{eqnarray}
In this case, the maximum cut-off can be naturally defined as the largest scale at which the model remains perturbative. That scale is reached when the Yukawa coupling strength $g^2_{u_4}/4\pi$ becomes equal to one
\begin{equation}\label{pert}
\frac{g^2_{u_4}(\Lambda_\text{max})}{4\pi}=1,
\end{equation}
which can be solved to yield
\begin{equation}\label{lmax}
\Lambda_{\text{max}}=m_{u_4}\, e^{\frac{2\pi^{2}v^{2}}{3m_{u_4}^2}\left(1-\frac{m_{u_4}^{2}}{2\pi v^2}\right)}.
\end{equation}
In Fig.(\ref{L00}), $\Lambda_\text{max}$ is shown as a function of the heaviest quark mass. For example, for a quark with mass $m_{u_4}=400$ GeV the maximum cut-off is $\Lambda_\text{max}\approx 1700$ GeV, while for $m_{u_4}=500$ GeV we have $\Lambda_\text{max}\approx 860$ GeV. Even in this case, a Higgs mass between $350$ GeV and $650$ GeV is compatible with fourth-generation quarks with masses between $350$ GeV and $500$ GeV that drive EWSB in a perturbative fashion with a physical cut-off $\Lambda<\Lambda_\text{max}$.
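These numbers follow directly from Eq.\refeq{lmax}; a one-line numerical evaluation (a Python sketch, taking $v=246$ GeV):
\begin{verbatim}
import numpy as np

v = 246.0                                  # GeV
for m in (400.0, 500.0):                   # heaviest-quark masses, GeV
    L = m*np.exp(2*np.pi**2*v**2/(3*m**2)*(1 - m**2/(2*np.pi*v**2)))
    print(m, round(L))                     # -> ~1690 GeV and ~860 GeV
\end{verbatim}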
Comparing with the previous section, we see that the breakdown of the perturbative regime indeed appears at a lower scale than
the naive scale for new physics. Still, taking the worst case, {\it e.g.}, $\Lambda=2 m_{u_4}$, the predicted Higgs mass lies in the same range as before and the perturbative expansion is valid up to $m_{u_4}\approx 480$ GeV, where $\Lambda\approx\Lambda_\text{max}$. This is shown explicitly in Fig.(\ref{0060}), where the curve represents the Higgs mass corresponding to the cut-off choice $\Lambda=2m_{u_4}$ as a function of the mass of the heaviest quark $m_{u_4}$ with $\Delta m=m_{u_4}-m_{d_4}=60$ GeV. Thus, the predictions
of the previous model are not strongly modified by the running of the Yukawa couplings, but the interpretation of the cut-off scale is different.
\begin{center}
\begin{figure}[ptb]
\begin{center}
\includegraphics[width=0.8 \textwidth]{L.eps}
\end{center}
\caption{$\Lambda_\text{max}$ as a function of the physical mass of the heaviest quark $m_{u_4}$.}
\label{L00}
\end{figure}
\end{center}
\begin{center}
\begin{figure}[ptb]
\includegraphics[width= 0.8 \textwidth]{RGE4g.eps}
\caption{Higgs (pole) mass as a function of $m_{u_4}$ with $\Lambda=2m_{u_4}$ for $\Delta m=m_{u_4}-m_{d_4}=60$ GeV. }
\label{0060}
\end{figure}
\end{center}
\section{Dynamical symmetry breaking in MSSM4}
We now perform an analogous calculation in the context of a low-energy supersymmetric extension of the
SM with a fourth generation of chiral matter. As is well known, in the Higgs sector of the MSSM there are
two scalar doublets of opposite hypercharge: $H_d=(H_d^0,H_d^-)^T$, $H_u=(H_u^+,H_u^0)^T$. Breaking
supersymmetry softly, the tree-level scalar potential for the CP-even neutral scalars $H_1\equiv{\rm Re}\, H_d$
and $H_2\equiv{\rm Re}\, H_u$ is
\begin{equation}\label{V00}
V^{(0)}=\frac{1}{2}\left(H_1 H_2\right)\left(\begin{array}{cc}
m_1^2 & -m_{12}^2\\
-m_{12}^2 & m_2^2
\end{array}
\right)
\left(\begin{array}{c}
H_1\\
H_2
\end{array}\right)
+\frac{(g_1^2+g_2^2)}{32}(H_2^2-H_1^2)^2.
\end{equation}
The linear combination
\begin{equation}\label{red}
\left(\begin{array}{c}
\phi\\
\varphi
\end{array}\right)=\left(\begin{array}{cc}
\cos\beta & \sin\beta\\
-\sin\beta & \cos\beta
\end{array}
\right)
\left(\begin{array}{c}
H_1\\
H_2
\end{array}\right)
\end{equation}
with $\tan2\beta=2m_{12}^2/(m_2^2-m_1^2)$ diagonalizes the mass matrix in Eq.\refeq{V00} and the potential becomes
\begin{equation}\label{V002}
V^{(0)}=\frac{\mu^2}{2}\phi^2+\frac{M^2}{2}\varphi^2
+\frac{(g_1^2+g_2^2)}{32}\left[ \cos2\beta(\varphi^2-\phi^2)+2\sin2\beta \phi\varphi \right]^2,
\end{equation}
where
\begin{equation}\label{mum}
\mu^2,M^2=\frac{1}{2}\pacua{m_1^2+m_2^2\mp\sqrt{(m_2^2-m_1^2)^2+4m_{12}^4}}.
\end{equation}
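As a sanity check, the rotation of Eq.\refeq{red} with $\tan2\beta=2m_{12}^2/(m_2^2-m_1^2)$ indeed diagonalizes the mass matrix of Eq.\refeq{V00}, with eigenvalues given by Eq.\refeq{mum} (a minimal numerical sketch; the mass parameters, in GeV$^2$, are illustrative and satisfy the constraints discussed below):
\begin{verbatim}
import numpy as np

m1sq, m2sq, m12sq = 4.0e4, 9.0e4, 2.0e4       # illustrative, GeV^2
M = np.array([[m1sq, -m12sq],
              [-m12sq, m2sq]])

beta = 0.5*np.arctan2(2*m12sq, m2sq - m1sq)   # tan(2b) = 2 m12^2/(m2^2-m1^2)
R = np.array([[np.cos(beta), np.sin(beta)],
              [-np.sin(beta), np.cos(beta)]])
print(np.round(R @ M @ R.T, 6))               # off-diagonal entries vanish

root = np.sqrt((m2sq - m1sq)**2 + 4*m12sq**2)
print(np.linalg.eigvalsh(M))                  # ascending: [mu^2, M^2]
print(0.5*(m1sq + m2sq - root), 0.5*(m1sq + m2sq + root))
\end{verbatim}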
Here, as in section II, the parameters of the Lagrangian are identified as the physical ones evaluated at the electroweak scale. If we demand $\mu^2>0$, then only SUSY is broken at tree
level (leaving electroweak symmetry untouched) and from Eq. \refeq{mum} we have $m_1^2m_2^2>m_{12}^4$.
Also, if we require the potential to be bounded from below, the parameters are constrained to
satisfy $m_1^2+m_2^2\geq 2m_{12}^2$.
\bigskip
As usual in this context, we work in the decoupling limit where all SUSY partners of SM particles
and all physical scalars that emerge from the Higgs sector (except for $\phi$) are heavy, with masses
of the order of the global SUSY breaking scale $M_S$. From the analysis of the previous section we know
that the contribution of gauge bosons and fourth-generation leptons to the one-loop effective potential
is negligible; in the first case because of the relative smallness of the gauge couplings compared to
the quark Yukawa couplings and in the second case because the number of degrees of freedom per lepton is
$1/3$ that of quarks. For simplicity, we also discard terms of the form $\phi\varphi^3$, $\phi^2\varphi^2$
and $\phi^3\varphi$ because their contribution is also dictated by the gauge couplings. Under these
simplifications, the resulting effective potential is given by Eq.\refeq{Vef} with
$a=t,u_4,d_4,\tilde{t}^{1,2},\tilde{u}_4^{1,2},\tilde{d}_4^{1,2}$ and field-dependent masses
$m_t^2(\phi)=g_t^2\sin^2\beta\phi^2/2$, $m_{u_4}^2(\phi)=g_{u_4}^2\sin^2\beta\phi^2/2$,
$m_{d_4}^2(\phi)=g_{d_4}^2\cos^2\beta\phi^2/2$,
\begin{equation}\label{mixmass}
m_{\tilde{q}^{1,2}}^2(\phi)=\frac{1}{2}\llav{m_{\tilde{q}^L}^2(\phi)+m_{\tilde{q}^R}^2(\phi)\mp
\sqrt{\pacua{m_{\tilde{q}^L}^2(\phi)-m_{\tilde{q}^R}^2(\phi)}^2+4\tilde{A}_q^2m_q^2(\phi)}},
\end{equation}
where $q=t,u_4,d_4$. In the above expression we have
\begin{equation}\label{mixm2}
\begin{array}{ccc}
m^2_{\tilde{t}^L}(\phi)= m^2_{Q_3}+m^2_{t}(\phi) + D^2_{\tilde{t}^L}(\phi), &\qquad& m^2_{\tilde{t}^R}(\phi)=
m^2_{U_3}+m^2_{t}(\phi) + D^2_{\tilde{t}^R}(\phi), \\ \\
m^2_{\tilde{u}_4^L}(\phi)= m^2_{Q_4}+m^2_{u_4}(\phi) + D^2_{\tilde{u}_4^L}(\phi), &\qquad& m^2_{\tilde{u}_4^R}(\phi)=
m^2_{U_4}+m^2_{u_4}(\phi) + D^2_{\tilde{u}_4^R}(\phi), \\ \\
m^2_{\tilde{d}_4^L}(\phi)= m^2_{Q_4}+m^2_{d_4}(\phi) + D^2_{\tilde{d}_4^L}(\phi), &\qquad& m^2_{\tilde{d}_4^R}(\phi)=
m^2_{D_4}+m^2_{d_4}(\phi) + D^2_{\tilde{d}_4^R}(\phi),
\end{array}
\end{equation}
with $m^2_{Q_3},m^2_{U_3},m^2_{D_3}, m^2_{Q_4},m^2_{U_4},m^2_{D_4}$ as soft supersymmetry-breaking mass
parameters for the left- and right-handed squarks and
\begin{equation}
\begin{array}{ccc}
D^2_{\tilde{q}^L}(\phi)=m_Z^2(\phi)\cos2\beta\pacua{T_{3L}(\tilde{q})-Q(\tilde{q})\sin^2\theta_W}, &
\qquad& D^2_{\tilde{q}^R}(\phi)=m_Z^2(\phi)\cos2\beta Q(\tilde{q})\sin^2\theta_W.
\end{array}
\end{equation}
Note that the discussion about Yukawa couplings given in the previous section applies in this case to
the quantities $g_t^{*}=g_t\sin\beta$, $g_{u_4}^{*}=g_{u_4}\sin\beta$ and $g_{d_4}^{*}=g_{d_4}\cos\beta$ for fixed $\beta$.
In Eq.\refeq{mixmass}, the parameters $\tilde{A}_q$ control the mixing between squarks in each generation.
We assume that there is no mixing, taking $\tilde{A}_t=\tilde{A}_{u_4}=\tilde{A}_{d_4}=0$. The degrees of freedom
per particle are $n_{\tilde{q}^{1}}=n_{\tilde{q}^{2}}=6$, $n_q=-12$. Notice also that in this case the tree-level scalar self-interactions cannot be set to zero.
\bigskip
The parameter $\mu$ can now be expressed in terms of the physical masses after the minimization of
the effective potential. At $\phi=v$ one obtains
\begin{equation}
\mu^2=-\frac{1}{2}m_Z^2\cos^2 2\beta+\sum_{q}\frac{3 m_q^2}{8\pi^2 v^2}\pacua{m_{\tilde{q}^1}^2
\ln\paren{1+\frac{\Lambda^2}{m_{\tilde{q}^1}^2}}+
m_{\tilde{q}^2}^2\ln\paren{1+\frac{\Lambda^2}{m_{\tilde{q}^2}^2}}-2 m_q^2\ln\paren{1+\frac{\Lambda^2}{m_q^2}}},
\end{equation}
with $\mu^2>0$ again for the present setup. For the Higgs mass and the effective Higgs self-coupling
at the electroweak scale one has
\begin{equation}\label{HMs}
\left.\frac{\partial^2 V^{(1)}}{\partial\phi^2}\right|_{\phi=v}=m_H^2=m_Z^2\cos^2 2\beta
+\sum_{q}\frac{3 m_q^4}{4\pi^2v^2}\ln\paren{\frac{m_{\tilde{q}^1}^2 m_{\tilde{q}^2}^2}{m_{q}^4}}
\end{equation}
and
\begin{equation}\label{cuartas}
\begin{split}
\left.\frac{\partial^4 V^{(1)}}{\partial\phi^4}\right|_{\phi=v}=&\frac{3m_Z^2\cos^2 2\beta}{v^2}
\\&+\sum_{q}\frac{3 m_q^4}{4\pi^2v^4}\llav{3\ln\paren{\frac{m_{\tilde{q}^1}^2 m_{\tilde{q}^2}^2}{m_{q}^4}}
-4\pacua{m_q^4\paren{\frac{1}{m_{\tilde{q}^1}^4}+\frac{1}{m_{\tilde{q}^2}^4}}-3m_q^2\paren{\frac{1}{m_{\tilde{q}^1}^2}
+\frac{1}{m_{\tilde{q}^2}^2}}+4}}
\end{split}
\end{equation}
neglecting terms that vanish as $\Lambda\to\infty$, since the soft-breaking terms for the squarks play the
role of natural regulators in this case.
\bigskip
Again, if we insist that electroweak SB is produced entirely by quark and squark loops, consistency
requires that the Higgs self-interactions remain small at least at the scale of SB, as in Eq.\refeq{cuarta0}.
Taking for all squarks the same soft mass $m_s=m_{Q_3}=m_{U_3}= m_{Q_4}=m_{U_4}=m_{D_4}=\alpha m_{u_4}\sim M_{S}$
in Eq.\refeq{mixm2}, one can extract the maximum value of $\alpha$ allowed by
$\left.{\partial^4 V^{(1)}}/{\partial\phi^4}\right|_{\phi=v}=0$. This is shown in Fig. (\ref{lambdams}).
The solution turns out to be very stable and lies in the range
\begin{equation}\label{rangea}
\begin{array}{ccc}
2.3<\alpha<2.8 &\Rightarrow \qquad& 800\text{ GeV}<m_s<1400\text{ GeV},
\end{array}
\end{equation}
for $0\le\beta\le\pi/2$ with fixed values of $g_t^*$, $g_{u_4}^*$ and $g_{d_4}^*$ and the relation
Eq.\refeq{dif} for the masses of the fourth-generation quarks. The results turn out to be only weakly
$\beta$-dependent, as we will see in Fig. (\ref{mums}).
\begin{center}
\begin{figure}[ptb]
\begin{center}
\includegraphics[width=0.8 \textwidth]{lambdams.eps}
\end{center}
\caption{Effective Higgs self-coupling at the electroweak scale $v$ as a function of $\alpha$ for fixed
values of the heavy fermion masses and $\beta=\pi/4$. The curves correspond to $m_{u_4}=$350, 400, 450 and 500 GeV
from left to right at the zero crossing.
\label{lambdams}
\end{figure}
\end{center}
Finally, once the parameter $\alpha$ is determined, Eq.\refeq{HMs} leads to the corresponding Higgs mass
upper bound in the limit $\Lambda\to\infty$ as a function of $\beta$ and $m_{u_4}$, as shown in
Fig. (\ref{mums}). The prediction for the Higgs mass is very similar to that of the previous section,
from $350$ GeV to about $750$ GeV, up to small corrections that would come from gauge bosons, leptons
and sleptons in the loop, which are expected to modify our results by only a few percent. In fact, even
the contribution of top quarks is negligible (see Fig. (\ref{mums})). From Eq.\refeq{HMs}, Eq.\refeq{cuartas}
and Eq.\refeq{cuarta0}, the dominant contribution to the Higgs mass (taking $\beta=\pi/4$ for simplicity,
which implies $m_{\tilde{q}}^2=m_{\tilde{q}^1}^2=m_{\tilde{q}^2}^2=m_s^2+m_q^2$) is:
\begin{equation}\label{HM2}
m_H^2\approx\sum_{q=u_4,d_4}\frac{4 m_q^4}{\pi^2v^2}\pacua{1-\frac{3}{2}\frac{m_q^2}{m_{\tilde{q}}^2}
+\frac{1}{2}\frac{m_q^4}{m_{\tilde{q}}^4}},
\end{equation}
with $\alpha$ and $m_s$ given by Eq.\refeq{rangea}. Given the small difference between $m_{u_4}$ and
$m_{d_4}$, a good approximation to Eq.\refeq{HM2} is simply
\begin{equation}\label{HM3}
m_H\approx\frac{1.83}{\pi v}\sqrt{m_{u_4}^4+m_{d_4}^4},
\end{equation}
which corresponds to $\alpha\approx 2.8$ and $980 \text{ GeV}< m_s < 1400 \text{ GeV}$.
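As a numerical illustration (a Python sketch assuming $v=246$ GeV, $\alpha=2.8$, $\beta=\pi/4$ and $\Delta m=60$ GeV; the value $m_{u_4}=400$ GeV is only an example), Eq.\refeq{HM2} and the fit Eq.\refeq{HM3} agree at the percent level:
\begin{verbatim}
import numpy as np

v, alpha = 246.0, 2.8
mu4 = 400.0                                  # illustrative quark mass, GeV
md4 = mu4 - 60.0

mH2 = 0.0
for mq in (mu4, md4):
    msq2 = (alpha*mu4)**2 + mq**2            # m_squark^2 = m_s^2 + m_q^2
    x = mq**2/msq2
    mH2 += 4.0*mq**4/(np.pi**2*v**2)*(1.0 - 1.5*x + 0.5*x**2)

mH_fit = 1.83/(np.pi*v)*np.sqrt(mu4**4 + md4**4)
print(np.sqrt(mH2), mH_fit)                  # ~470 GeV from both
\end{verbatim}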
\begin{center}
\begin{figure}[ptb]
\begin{center}
\includegraphics[width=0.8 \textwidth]{higgsms.eps}
\end{center}
\caption{Higgs mass as a function of $m_{u_4}$. The external lines correspond to the extreme cases
$\beta=\pi/2$ (bottom) and $\beta=\pi/4$ (top). The middle line contains only the contribution of
$u_4$, $d_4$ and their super-partners with $\beta=\pi/4$.}
\label{mums}
\end{figure}
\end{center}
\section{Conclusions}
In this paper, we study the possibility of electroweak symmetry breaking by radiative corrections
\cite{Coleman:1973jx} due to a fourth generation in the Standard Model. We isolate the effects of the
fourth generation by taking a vanishing scalar self-coupling at the classical level and maintaining
this condition at the one-loop level at the electroweak symmetry breaking scale.
In such a scenario, electroweak symmetry is broken by radiative corrections due mainly to the fourth
generation, and Higgs masses of the order of a few hundred GeV are consistent with electroweak precision
data. Furthermore, the theory is valid only up to a scale $\Lambda \sim 1-2$ TeV. Such a low cut-off means that
the effects of the new physics needed to describe electroweak interactions at energies above $\Lambda$ should be
measurable at the LHC. We use the renormalization group equations to study the impact of the running of the
Yukawa couplings in our results. We show that the predictions of the model are not strongly modified
by the running of the Yukawa couplings, but a slightly lower cut-off, related to the breakdown of the perturbative
regime, is expected in this case.
As an example of a model with new physics, and therefore with a natural scale for the cut-off of
the electroweak regime, we study a simplified Minimal Supersymmetric Standard Model with four
generations. We obtain similar values for the Higgs mass, with a weak $\beta$ dependence. The natural scale for
the cut-off of the electroweak regime is given by the mass of the fourth-generation squarks, and
EWSB by radiative corrections due predominantly to the fourth generation
requires squark masses of the order $m_s\sim 1 \; \mathrm{TeV}$.
\section*{Acknowledgements}
This work was supported by CONACyT (Mexico) under grant 50471-F, DINPO-UG and PROMEP-SEP.
\section{Introduction} \label{sec:intro}
The exoplanet community already has ways to detect an H$_2$ atmosphere by transmission spectroscopy via its pressure scale height, which is one order of magnitude larger than that of an N$_2$ or CO$_2$ atmosphere \citep{miller2008atmospheric}. However, the mass of the H$_2$ atmosphere -- the parameter that controls the temperature at the bottom of the atmosphere and thus the possibility for liquid water \citep{pierrehumbert2011hydrogen,ramirez2017volcanic,koll2019hot} -- is not directly measurable from the transmission spectrum. Also, a planet's mass and radius typically allow multiple models of the interior structure \citep[e.g.,][]{rogers2010three,valencia2013bulk}. It is unclear whether the planets in the $1.7-3.5\ R_{\oplus}$ population \citep{fulton2018california} are mostly rocky planets with massive H$_2$/He gas envelopes \citep{owen2017evaporation,jin2018compositional} or planets with a massive water layer ($\sim50$ wt. \%) that do not require a large H$_2$ envelope to explain their radius \citep[e.g., referred to as ``ocean planets'' hereafter;][]{zeng2019growth,mousis2020irradiated,venturini2020nature}. Direct-imaging observations in the future may provide means to detect a surface underneath a thin atmosphere on temperate planets, via the ocean glint \citep{robinson2010detecting} or surface heterogeneity \citep{cowan2009alien,fan2019earth}. However, these methods are not applicable to near-term capabilities such as JWST and may pose precision challenges even for ambitious direct-imaging mission concepts \citep{Gaudi2020}.
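To illustrate the scale-height argument, $H=k_{\rm B}T/(\mu m_{\rm u} g)$ can be evaluated for different dominant gases (a minimal Python sketch; the temperature of 300 K, the Earth-like gravity, and the mean molecular weights are assumed, representative values only):
\begin{verbatim}
kB, m_u = 1.380649e-23, 1.66053907e-27     # SI units
T, g = 300.0, 9.8                          # assumed representative values
for name, mu in (("H2", 2.3), ("N2", 28.0), ("CO2", 44.0)):
    H = kB*T/(mu*m_u*g)                    # pressure scale height
    print(name, round(H/1e3), "km")        # -> ~110, ~9, ~6 km
\end{verbatim}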
The temperate sub-Neptune K2-18 b is a harbinger of the class of planets that might be habitable and exemplifies the need for a near-term method to measure the size of an H$_2$ atmosphere. The planet of $8.6\ M_{\oplus}$ and $2.6\ R_{\oplus}$ is in the habitable zone of an M dwarf star, and has a transmission spectrum (obtained by \textit{Hubble} at $1.1 - 1.7\ \mu m$) with confirmed spectral features, which indicates that the planet should host an atmosphere dominated by H$_2$ \citep{tsiaras2019water,benneke2019water}. Interior structure models showed that the planet can have a massive ($>\sim1000$ bar) H$_2$ atmosphere overlying a rocky/Fe core and a possibly supercritical water layer, or a smaller ($<100$ bar) H$_2$ atmosphere with a water-dominated interior \citep{madhusudhan2020interior,mousis2020irradiated,nixon2021deep}. For K2-18~b, specifically, a $\sim10-100$ bar H$_2$ atmosphere overlying a water layer would cause $>200$ bar of water to evaporate into the atmosphere, resulting in a hot steam atmosphere inconsistent with the observed transmission spectrum \citep{scheucher2020consistently}. An even smaller, $\sim1$ bar H$_2$ atmosphere would prevent this steam atmosphere and produce a liquid-water ocean (see Section~\ref{sec:model}), but this requires a very small rocky/Fe core and may be disfavored from the planet formation standpoint \citep[e.g.,][]{lee2016breeding}. However, a planet slightly more massive or smaller than K2-18~b -- such as those at the center of the $1.7-3.5\ R_{\oplus}$ planet population -- does not face this small-core difficulty in hosting a small atmosphere \citep{zeng2019growth,nixon2021deep}, and many such planets and planet candidates have been detected and will soon be available for transmission spectroscopy (Figure~\ref{fig:population}, panel a).
Here we propose that transit observations of temperate sub-Neptunes in the near- and mid-infrared wavelengths, which will soon commence with JWST, can detect small H$_2$ atmospheres that support liquid-water oceans and distinguish them from massive atmospheres (Figure~\ref{fig:population}, panel b). A companion paper has studied the atmospheric chemistry and spectral features of temperate planets with massive H$_2$ atmospheres \citep{hu2021photochemical}, and now we turn to temperate planets with small H$_2$ atmospheres. A recent paper shares a similar intent with our work:
\cite{yu2021identify} studied the chemistry of temperate \ce{H2} atmospheres with varied surface pressures, with assumed zero flux for all species at the lower boundary. The theories of \cite{yu2021identify} may thus be more applicable to arid rocky planets without substantial volcanic outgassing, and here we instead focus on ocean planets, and address how to identify them observationally. As we will show later, a small atmosphere on a temperate sub-Neptune will have a distinctive composition because of its interaction with the ocean underneath.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.45\textwidth]{figures/coldplanet_v2.pdf}
\caption{
Temperate exoplanets amenable to atmospheric characterization via transmission spectroscopy. (a) Purple dots are confirmed planets with measured masses, and blue dots are planets with unknown masses or planet candidates. Data are taken from the NASA Exoplanet Archive and the TESS Objects of Interest Catalog. The marker sizes are scaled with the expected S/N of the spectral features of an H$_2$ atmosphere observed by JWST at $2\ \mu m$. Most of the temperate planets and planet candidates suitable for atmospheric characterization are larger than Earth and thus more likely to have H$_2$ atmospheres. (b) A roadmap to characterize the mass of the atmospheres and the habitability of temperate sub-Neptunes by detecting signature gases. See text for details.
}
\label{fig:population}
\end{figure}
\section{Mutual exclusivity of habitability and thermochemical equilibrium} \label{method}
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.45\textwidth]{figures/schematic_v2.png}
\caption{
Interior structures of temperate H-rich exoplanets and the associated ranges of atmospheric composition. If the planet has a massive H$_2$ atmosphere, the deep atmosphere would be hot -- enabling thermochemical recycling -- but a liquid-water surface would not be possible. If the planet has a small H$_2$ atmosphere, a liquid-water surface may be possible. On these planets, the equilibrium abundance of atmospheric CO$_2$ is set by the oceanic chemistry and that of N$_2$ by atmospheric evolution.
}
\label{fig:schematic}
\end{figure}
On temperate sub-Neptunes, the condition to form a liquid-water ocean and that to achieve the thermochemical equilibrium of carbon and nitrogen molecules are mutually exclusive. The \ce{CO2}-\ce{CO}-\ce{CH4} and \ce{N2}-\ce{NH3} conversion rates are primarily a function of the temperature and to a lesser extent the pressure \citep{zahnle2014methane,tsai2018toward}, and in a temperate sub-Neptune like K2-18~b, the thermochemical equilibrium of carbon and nitrogen molecules is typically achieved at the pressure of $10^7\sim10^8$ Pa, where the temperature is $>1000$ K \citep[i.e., substantially higher than the critical point of water;][]{fortney2020beyond,yu2021identify,hu2021photochemical}. Therefore, the gas-phase thermochemical equilibrium would be achieved in the deep and hot part of a massive atmosphere, and in contrast, it would not be achieved in a small atmosphere overlying a liquid-water ocean. Instead, NH$_3$ and sulfur species would be sequestered by the ocean \citep[][and also see Section~\ref{sec:model}]{loftus2019sulfate} and the abundance of CO$_2$ would be set by the ocean chemistry (Figure~\ref{fig:schematic}, with the cosmochemical and geological constraints detailed in Appendix~\ref{sec:geo}).
If the planet has a massive H$_2$ atmosphere, thermochemical reactions in the deep atmosphere recycle O, C, N, S species into H$_2$O, CH$_4$, NH$_3$, and H$_2$S \citep{burrows1999chemical,heng2016analytical,woitke2020coexistence,Blain2020}. H$_2$O can form a cloud and the above-cloud H$_2$O may be partially depleted as a result \citep{morley2014water,charnay2021formation,hu2021photochemical}. Recent calculations have shown that the photodissociation of NH$_3$ in the presence of CH$_4$ leads to the formation of HCN and \ce{N2}, and that CO and CO$_2$ are produced by the photodissociation of CH$_4$ together with H$_2$O \citep{hu2021photochemical}. The photodissociation of H$_2$S leads to the formation of elemental sulfur haze \citep{hu2013photochemistry,zahnle2016photolytic}, but the haze would likely be close to the cloud deck and would not mute transmission spectral features \citep{hu2021photochemical}. These photochemical products are transported to the deep atmosphere and recycled back to CH$_4$, NH$_3$, and H$_2$S. An exception is that planets with super-solar atmospheric metallicity and appreciable internal heat may have additional \ce{CO}, \ce{CO2}, and \ce{N2} transported from the deep troposphere and incomplete recycling to \ce{NH3} \citep{fortney2020beyond,yu2021identify,hu2021photochemical}.
If the planet instead has a small atmosphere and a liquid-water ocean, the thermochemical recycling cannot occur. Instead, CO$_2$ is the preferred form of carbon in equilibrium with a massive amount of H$_2$O \citep{Hu2014B2014ApJ...784...63H,woitke2020coexistence}, and NH$_3$ is dissolved in the ocean and largely depleted from the atmosphere (see Section~\ref{sec:model}). The abundance of atmospheric CO$_2$ is controlled by the oceanic pH \citep{kitzmann2015unstable,krissansen2017constraining,kite2018habitability,isson2018reverse} and that of N$_2$ is probably a combined result of the initial endowment and atmospheric escape. A reasonable lower bound of the total mass of CO$_2$ in the H$_2$ and H$_2$O layers can be derived from the cosmochemical constraints of planetary building blocks and the partitioning between the iron core, the silicate mantle, and the water layer (Appendix~\ref{sec:geo}). Also, the ``seafloor'' of this thin-atmosphere, H$_2$O-rich sub-Neptune will not be a sharp interface in density and composition, but will instead have a finite thickness \citep{vazan2020new}. The interface will be compositionally stratified with denser material underlying less dense material, and material transport across this ``fuzzy layer'' is inhibited due to the stratification. Thus, any carbon or nitrogen added to the H$_2$ and H$_2$O envelope by planetesimal accretion late in planet growth will remain in the envelope, and will not be stirred down into the silicate layer. Meanwhile, transit observations can straightforwardly identify H$_2$-dominated atmospheres and rule out CO$_2$- or N$_2$-dominated ones only from the size of spectral features \citep{miller2008atmospheric}.
One might also consider the intermediate situation between massive atmospheres with thermochemical equilibrium and small atmospheres with liquid-water oceans, e.g., the atmospheres with a surface pressure from a few to $\sim100$ bars on K2-18~b. For many sub-Neptunes, this intermediate-atmosphere scenario would still require a massive water layer underneath to explain their mass and radius. If water was in the liquid form at the interface with the atmosphere, the evaporation of this ocean would make the atmosphere H$_2$O-dominated \citep{scheucher2020consistently}. If water is supercritical, any H$_2$ layer of intermediate mass should be well mixed with the water layer. Therefore, such an intermediate endowment of H$_2$ would most likely result in a non-H$_2$-dominated atmosphere, which is, again, distinguishable with transmission spectroscopy \citep{miller2008atmospheric}.
\section{Ocean Planet Models} \label{sec:model}
\begin{deluxetable*}{llll|llll}
\tablecaption{Summary of the photochemical model parameters and results.}
\label{table:result}
\tablehead{
\colhead{Model} & \colhead{Name} & \colhead{CO$_2$} & \colhead{CO flux} & \colhead{H$_2$O} & \colhead{CO} & \colhead{CH$_4$} & \colhead{C$_2$H$_6$} }
\startdata
1 & Low-CO$_2$ & $4\times10^{-4}$ & $0$ & $2.3\times10^{-3}$ & $1.4\times10^{-5}$ & $1.5\times10^{-2}$ & $3.0\times10^{-6}$ \\
1a & Low-CO$_2$ Variant & $4\times10^{-4}$ & $1.0\times10^9$ & $3.3\times10^{-3}$ & $2.9\times10^{-4}$ & $2.9\times10^{-2}$ & $5.1\times10^{-6}$ \\
2 & High-CO$_2$ & $0.1$ & $0$ & $1.1\times10^{-4}$ & $9.5\times10^{-3}$ & $5.3\times10^{-2}$ & $4.0\times10^{-7}$ \\
\enddata
\tablecomments{The volume mixing ratio of CO$_2$ (an input) is specified at the lower boundary, and those of H$_2$O, CO, CH$_4$, and C$_2$H$_6$ (results) are column-averaged in $10-10^3$ Pa. The CO flux has a unit of cm$^{-2}$ s$^{-1}$.}
\end{deluxetable*}
\begin{figure*}[!htbp]
\centering
\includegraphics[width=0.9\textwidth]{figures/k218b_small_combined.pdf}
\caption{
Modeled pressure-temperature profiles (a) and abundance profiles of main gases and photochemical products (b) in a temperate sub-Neptune like K2-18~b that has a small H$_2$ atmosphere. Solid, dashed, and dash-dot lines show the results for the low-CO$_2$ case (Model 1 in Table~\ref{table:result}), the low-CO$_2$ case with additional CO sources (Model 1a), and the high-CO$_2$ case (Model 2). For the stellar irradiance, we use $S=S_{\rm Earth}\times(1-A_{\rm B})$, with a Bond albedo of $A_{\rm B}=0.3$ (similar to Earth), to account for the radiative effects clouds would have in the otherwise cloud-free climate model. The surface albedo reflects a dark ocean (0.06). The surface temperatures in these models are consistent with a liquid-water ocean. The photochemical models use the UV spectrum of the M dwarf star GJ~176 \citep{france2016muscles} \citep[similar to K2-18;][]{dos2020high}. The steady-state mixing ratio of \ce{CH4} is high, and those of nitrogen molecules such as \ce{NH3} and \ce{HCN} are $<10^{-12}$.}
\label{fig:result}
\end{figure*}
We have used an atmospheric photochemical model \citep{hu2012photochemistry} coupled with a radiative-convective model \citep{scheucher2020consistently} to determine the steady-state abundances of photochemical gases in small and temperate H$_2$ atmospheres, for a cosmochemically and geologically plausible range of CO$_2$ abundance, and compared the compositions and transmission spectra with the massive H$_2$ atmosphere models published in \cite{hu2021photochemical}. The massive atmosphere models explored the atmospheric metallicity of $1-100\times$solar and included possible deep-tropospheric sources of \ce{CO}, \ce{CO2}, and \ce{N2} and incomplete recycling of \ce{NH3} in super-solar atmospheres.
The photochemical model includes a comprehensive reaction network for O, H, C, N, and S species (including sulfur aerosols, hydrocarbons, and the reactions important in H$_2$ atmospheres), and it has been used to study the lifetime and equilibrium abundance of potential biosignature gases in H$_2$ atmospheres \citep{seager2013biosignature}. We have updated the reaction network and tested the model with the measured photochemical gas abundance in the atmosphere of Jupiter \citep[i.e., a low-temperature H$_2$ atmosphere;][]{hu2021photochemical}.
The pressure-temperature profiles (Figure~\ref{fig:result}) used as the basis for the photochemical model are calculated with the climate module of 1D-TERRA \citep{scheucher2020consistently}. The module uses a correlated-k approach with the random overlap method to include molecular absorption, collision-induced opacities, and the continuum of water vapor to calculate the radiative equilibrium, and the appropriate (moist or dry) adiabatic lapse rate to apply the convection adjustment. The module has been tested against the cases of Earth, Venus, and Mars, as well as with other radiative-convective and 3D climate models for modeling steam atmospheres \citep{scheucher2020consistently}.
As examples, we study H$_2$ atmospheres of 1 bar on a sub-Neptune planet that has a stellar irradiance similar to Earth and orbits around an early M star similar to K2-18. A 1-bar H$_2$ atmosphere on such a planet would likely have a surface temperature consistent with a liquid-water ocean (Figure~\ref{fig:result}). We adopt the ``ocean-planet'' interpretation of the $1.7-3.5\ R_{\oplus}$ planet population that centers at $10\ M_{\oplus}$ and $2.5\ R_{\oplus}$ \citep{zeng2019growth,venturini2020nature}, and assume 50\% of water by mass in this study. In this interpretation, sub-Neptunes may be ocean planets with deep oceans that do not require a massive H$_2$ envelope to explain their radius, and can conceivably have moderate-size H$_2$ atmospheres. This may not be directly applicable to K2-18~b, which resides on the low-density side of the $1.7-3.5\ R_{\oplus}$ population. The specific choices of these parameters are, however, unimportant, because atmospheric chemistry is not sensitive to moderate changes in the surface gravity.
CO$_2$ is the main form of carbon in thermochemical equilibrium with H$_2$O \citep{Hu2014B2014ApJ...784...63H,woitke2020coexistence}. If a liquid-water ocean exists, the partial pressure of CO$_2$ is set by atmosphere-ocean partitioning, which in turn is mainly controlled by the oceanic pH \citep{kitzmann2015unstable,krissansen2017constraining,kite2018habitability,isson2018reverse}. The pH is affected by the abundance of cations in the ocean, which come from complex water-rock reactions and dissolution of the seafloor. The rates of the processes involved are uncertain; therefore, we explore the mixing ratio of CO$_2$ from 400 ppm to 10\%, corresponding to the pCO$_2$ range from the present-day Earth to early Earth \citep{catling2017atmospheric} and including the predicted range for ocean planets \citep{kite2018habitability} that is still consistent with an H$_2$-dominated atmosphere. The $4\times10^{-4}$ bar partial pressure of \ce{CO2} in the low-CO$_2$ case, while not the absolute lower limit, is a cosmochemically and geologically reasonable lower bound of the \ce{CO2} partial pressure on an ocean planet (Appendix~\ref{sec:geo}).
The mixing ratio of N$_2$ on the modeled planet is probably set by atmospheric evolution (as opposed to the solubility equilibrium or geological recycling) and is assumed here to be 1\%. As N$_2$ only minimally participates in the chemical cycles and does not have strong spectral features in the infrared, its exact abundance is not our main concern. The photochemical model indicates that the NH$_3$ produced by photodissociation of N$_2$ in H$_2$ atmospheres has negligible mixing ratios ($<10^{-12}$).
The pressure at the water-rock boundary of a $10-M_{\oplus}$ and $2.5-R_{\oplus}$ planet is $\sim500$ GPa \citep{sotin2007mass,levi2014structure}, and this overloading pressure should suppress volcanism completely \citep{kite2009geodynamics,noack2017volcanism,kite2018habitability}. Therefore we do not include any volcanic outgassing in the standard models. As variant models, we consider the possibility of minor and intermittent sources of CO into the atmosphere. Evaporation of meteorites may provide a source of CO and CO$_2$ \citep{schaefer2017redox}, and water-rock reactions at the temperature relevant to the ``fuzzy layer'' may produce CO (and not CH$_4$ as it is thermochemically disfavored at high temperatures). The rates of these processes are unknown, but numerical experiments with the photochemical model indicate that an additional CO source of $10^{10}$ molecule cm$^{-2}$ s$^{-1}$ would lead to a steady-state abundance of CO greater than that of H$_2$, effectively resulting in a CO-dominated atmosphere. A CO source of $10^9$ molecule cm$^{-2}$ s$^{-1}$ would produce the CO-dominated atmosphere in the 10\%-CO$_2$ case but not in the 400ppm-CO$_2$ case. We therefore include a low-CO$_2$ case with the CO source of $10^9$ molecule cm$^{-2}$ s$^{-1}$ as a variant model.
Table~\ref{table:result} summarizes the input parameters and results of the photochemical models, and Figure~\ref{fig:result} shows the profiles of temperature and mixing ratios of main gases and photochemical products. CO is produced from the photodissociation of CO$_2$ and can build up to the $10^{-5}$ and $10^{-2}$ mixing ratio levels for the low-CO$_2$ and the high-CO$_2$ cases, respectively. OH from the photodissociation of H$_2$O destroys CO and maintains its steady-state mixing ratio. CH$_4$ is also produced photochemically and can build up to a substantial mixing ratio ($10^{-3}\sim10^{-2}$). This effectiveness in producing \ce{CH4} from \ce{CO} in temperate H$_2$ atmospheres has also been noted in \cite{yu2021identify}. Together with the high CH$_4$ mixing ratio, C$_2$H$_6$ is produced and can accumulate to a mixing ratio of $\sim10^{-6}$. C$_2$H$_2$, as expected, is short-lived and only has significant mixing ratios in the upper atmosphere. Here we have applied a deposition velocity of $10^{-5}$ cm s$^{-1}$ for C$_2$H$_6$ to account for the loss of carbon due to organic haze formation and deposition \citep{hu2012photochemistry}; removing this sink does not substantially change the results shown in Figure~\ref{fig:result}. The additional source of CO would result in moderately more CO, CH$_4$, and C$_2$H$_6$ in the atmosphere (Model 1a in Table~\ref{table:result} and Figure~\ref{fig:result}). The photochemical CO and CH$_4$ can build up to the mixing ratio levels that cause significant features in the planet's transmission spectrum (Section~\ref{sec:spec}).
Before closing this section, we address whether NH$_3$ can be produced substantially by water-rock reactions and then emitted into the atmosphere. Hydrothermal systems on early Earth may produce \ce{NH3} from the reduction of nitrite and nitrate \citep{summers1993prebiotic,summers2005ammonia}. On a planet with an \ce{H2}-dominated atmosphere, however, atmospheric production of the oxidized nitrogen including nitrite and nitrate should be very limited. Moreover, the storage capability of \ce{NH3} by the ocean is vast and limits the emission into the atmosphere. At the pH value of 8 (a lower pH would further favor the partitioning of \ce{NH3} in the ocean), $10^{-6}$ bar of atmospheric \ce{NH3} requires a dissolved ammonium concentration of $10^{-3}$ mol/L in equilibrium \citep{seinfeld2016atmospheric}. The mass of NH$_3$ in the atmosphere and ocean is then $\sim10^{-5}$ of the planetary mass. This would only be possible if much of the planet's rocky core begins with a volatile composition similar to carbonaceous chondrites, and most of this nitrogen is partitioned into the atmosphere and ocean as NH$_3$ \citep{marty2016origins}, which is highly unlikely as \ce{N2} is thermochemically favored. Therefore, the concentration of dissolved \ce{NH3} should be small and so is the atmospheric \ce{NH3} on a planet with a massive ocean.
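The concentration quoted above can be reproduced with a simple Henry's-law estimate (a sketch; the Henry constant of $\sim60$ mol L$^{-1}$ atm$^{-1}$ for \ce{NH3} near room temperature and p$K_a\approx9.25$ for \ce{NH4+} are assumed textbook values):
\begin{verbatim}
p_NH3 = 1e-6                 # bar (~atm at this level of precision)
K_H = 60.0                   # mol/(L atm) for NH3 near 298 K (assumed)
pKa, pH = 9.25, 8.0          # NH4+ acid constant (assumed), ocean pH

nh3_aq = K_H*p_NH3           # dissolved neutral NH3
nh4 = nh3_aq*10**(pKa - pH)  # protonated fraction dominates at pH 8
print(nh3_aq + nh4)          # -> ~1.1e-3 mol/L, as quoted in the text
\end{verbatim}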
\section{Spectral Characterization} \label{sec:spec}
\begin{figure*}[!htbp]
\centering
\includegraphics[width=0.9\textwidth]{figures/k218b_small_spec.pdf}
\caption{
Modeled transmission spectrum of temperate sub-Neptune planets of M dwarf stars, using K2-18~b as an example and comparing with the planet's transit depth observed by \textit{Hubble} \citep{benneke2019water}. The massive-H$_2$-atmosphere models (black lines) and the small-H$_2$-atmosphere models (colored lines) differ in three spectral regions: in (a) and (b), the massive-atmosphere models have absorption features of NH$_3$ and HCN, while the small-atmosphere models do not; in (c), the small-atmosphere models with a low mixing ratio of CO$_2$ (400 ppm) have prominent features of CO$_2$ and CO, while the massive-atmosphere models only have small features of NH$_3$ and HCN. The $100\times$solar massive atmosphere with deep-tropospheric source and sink may have subdued NH$_3$ and HCN features and prominent CO$_2$ and CO features. The small-atmosphere models with a high mixing ratio of CO$_2$ (10\%) have a high mean molecular weight ($\sim6$) and a high cloud top (Figure~\ref{fig:result}) and thus muted spectral features.
}
\label{fig:spec}
\end{figure*}
Figure~\ref{fig:spec} compares the expected spectra for the massive-atmosphere scenarios and the small-atmosphere scenarios. For K2-18~b, the massive-atmosphere models with $1-100\times$solar metallicity and the small-atmosphere models with a low mixing ratio of CO$_2$ (400 ppm) provide good fits to the transmission spectrum measured by \textit{Hubble}.
Measuring the transmission spectra in an expanded wavelength range of $1-5\ \mu$m will distinguish the small atmospheres from massive ones. Using K2-18~b as an example for temperate sub-Neptunes, we see that the massive-atmosphere models and the small-atmosphere models, while having differences within each group, can be distinguished using the spectral regions of $1.9-2.1$, $2.7-3.1$, and $4.1-5.0\ \mu$m (the shaded areas a, b, and c in Figure~\ref{fig:spec}). Both the massive-atmosphere and small-atmosphere models show spectral features of H$_2$O and CH$_4$, and so observing these two gases alone is unlikely to separate the massive versus small scenarios.
At $1.9-2.1$ and $2.7-3.1\ \mu$m, the transmission spectra show NH$_3$ and HCN absorption in massive atmospheres but not in small atmospheres. If the $100\times$solar massive atmosphere has incomplete \ce{NH3} recycling in the deep troposphere, it will have much weaker NH$_3$ and HCN features in these spectral regions. The transmission spectra of small atmospheres show small CO$_2$ features at $\sim2.0$ and $\sim2.75\ \mu$m, but the feature at $\sim2.75\ \mu$m is combined with a part of the H$_2$O feature with similar strength. The transmission spectra of small atmospheres also show a small C$_2$H$_2$ feature at $\sim3.05\ \mu$m, and given enough precision, it might be distinguishable from the HCN feature at $\sim3.0\ \mu$m.
At $4.1-5.0\ \mu$m, the transmission spectra of small atmospheres (the low-CO$_2$ cases) have prominent features of CO$_2$ and CO, while the spectra of massive atmospheres have weak features of NH$_3$ and HCN. If the $100\times$solar massive atmosphere has CO and CO$_2$ transported from the deep troposphere, it can have prominent spectral features of CO$_2$ and CO in this region as well.
From the above, we see that the $100\times$solar massive atmosphere with deep-tropospheric effects may resemble a small atmosphere in their transmission spectra (Figure~\ref{fig:spec}), i.e., the lack of \ce{NH3} or \ce{HCN} and the prominence of \ce{CO2} and \ce{CO}. Would this potential ``false positive'' be avoidable? The answer may be yes given enough precision and spectral resolution. First, the spectrum of the massive atmosphere with deep-tropospheric effects still has weak spectral features of HCN, while none of the small atmospheres does. Second, the massive atmosphere has CO$_2$/CO$<\sim0.1$, because CO always dominates over CO$_2$ in the deep H$_2$ troposphere of a temperate planet, and photochemical processes driven by an M dwarf star do not significantly raise the CO$_2$ mixing ratio in the observable part of the atmosphere \citep{hu2021photochemical}. In contrast, the small atmospheres typically have CO$_2$/CO$\geq1$ (Table~\ref{table:result}). In the more likely scenario without any volcanic outgassing, CO$_2$/CO$\sim10$, because CO is produced photochemically from CO$_2$. Therefore, by measuring the abundance of CO and CO$_2$ independently, one could tell whether they are sourced from the deep troposphere.
Furthermore, a massive atmosphere with $\gg100\times$ solar metallicity will have a mean molecular weight much higher than that of an H$_2$ atmosphere and is thus also distinguishable by transmission spectroscopy.
With moderate time investment (i.e., $<100$ hours), JWST will provide the sensitivity to detect the aforementioned signature gases and distinguish massive versus small atmospheres on planets like K2-18~b. As an example, we have used PandExo \citep{batalha2017pandexo} to simulate the expected photometric precision using JWST's NIRSpec instrument. If combining two transit observations with NIRSpec's G235H grating and four transits with the G395H grating, the overall photometric precision would be $\sim20$ ppm per spectral element at a resolution of $R=100$ in both channels that cover a wavelength range of $1.7-5.2\ \mu$m. These observations would distinguish the small-atmosphere scenarios versus the massive-atmosphere scenarios in Figure~\ref{fig:spec} with high confidence.
Additionally, we have performed spectral retrievals based on simulated observations using Tau-REx \citep{waldmann2015tau}. We find that the mixing ratios of NH$_3$ and HCN and the lack of CO$_2$ or CO in the solar-abundance massive atmosphere would be usefully constrained (Figure~\ref{fig:posterior}). For the $100\times$solar atmosphere, the CO$_2$ and CO transported from the deep troposphere would be identified, and the posteriors suggest that CO is likely more abundant than CO$_2$. The reduction in the mixing ratios of NH$_3$ and HCN due to incomplete recycling could also be seen in the retrieval, although the constraints on the mixing ratio of HCN are not accurate. For the small atmosphere, the retrieval yields degenerate solutions and thus double peaks in some posterior distributions. Despite this, it is clear from the posteriors that the atmosphere likely has high mixing ratios of both CO$_2$ and CH$_4$, has more CO$_2$ than CO, and has very little NH$_3$ or HCN (Figure~\ref{fig:posterior}). In addition to JWST, the dedicated exoplanet atmosphere characterization mission ARIEL could also provide the sensitivity to detect these gases with more repeated transit observations \citep{changeat2020disentangling}. This example shows that transit observations in the coming years can tell apart temperate sub-Neptunes with small H$_2$ atmospheres versus the planets with massive atmospheres and reveal their distinct atmospheric composition.
\begin{figure*}[!htbp]
\centering
\includegraphics[width=0.9\textwidth]{figures/retrieval.pdf}
\caption{
Retrieved posterior distributions of the abundances of the main chemical compounds and the cloud pressure in example massive-atmosphere and small-atmosphere scenarios. The input transmission spectra are calculated by Tau-REx using the atmospheric composition in Figure~\ref{fig:result} and the expected uncertainties are calculated using PandExo \citep{batalha2017pandexo}, assuming that two transits of K2-18~b with NIRSpec/G235H and four transits with NIRSpec/G395H are combined. The vertical red lines show the input value of the parameter, and the quantities on the top of each panel show the median and $1\sigma$ values summarized from the posterior. The cloud pressure ($P_{\rm clouds}$) has a unit of Pa. A detailed characterization of the atmosphere of K2-18 b, including distinguishing a small atmosphere versus a massive one and measuring the abundances of \ce{H2O}, \ce{CH4}, \ce{NH3}, \ce{HCN}, \ce{CO2}, and \ce{CO}, will be achievable with moderate time investment of JWST.
}
\label{fig:posterior}
\end{figure*}
\section{Discussion and Conclusion} \label{sec:discussion}
Taken together, the results presented above identify a near-term path to detect small H$_2$ atmospheres that can be consistent with liquid-water oceans on temperate exoplanets. H$_2$ atmospheres are probably the only type of temperate atmospheres readily within the reach of JWST and ARIEL for detailed studies, since characterizing a heavier H$_2$O, N$_2$, or CO$_2$ atmosphere will require co-adding a few tens of transits -- something not impossible but probably very hard \citep{belu2011primary,krissansen2018detectability,wunderlich2019detectability,pidhorodetska2020detectability,gialluca2021characterizing}. The mass of the H$_2$ atmospheres -- a parameter that is not directly measured by transits but is critical for habitability if the planet is moderately irradiated -- can be inferred from transmission spectra via the signature gases that indicate solubility equilibria versus gas-phase thermochemical recycling. The biggest uncertainty is probably the temperature at the $100\sim1000$-bar pressure level in the massive-atmosphere scenarios, which may be affected by ad hoc heating mechanisms such as tidal heating. Detailed models of the interior temperature and mixing may further constrain this uncertainty \citep{fortney2020beyond,yu2021identify}. Based on the range of the parameter space explored, we suggest that the sensitivity to multiple gases provided by future observatories' expanded wavelength coverage over \textit{Hubble} would enable broad categorization of small versus massive atmospheres, summarized as a roadmap in Figure~\ref{fig:population}, panel b.
How many sub-Neptunes could we expect to be ocean planets in the first place? The current population statistics of planets provide indirect evidence that most sub-Neptunes are not ocean planets \citep{fulton2018california,owen2017evaporation,jin2018compositional}, but most known planets are hotter than planets that can be habitable. Even if the current statistics apply to temperate planets, there is plenty of room for 10-20\% of sub-Neptunes to be ocean planets, which would still be a substantial number of planets. Also, some planets in or just below the ``radius valley'' may be sub-Neptunes that have evolved into ocean planets \citep{kite2021water} and retained some residual H$_2$ \citep{misener2021cool}. For these reasons, the possibility of an ocean planet shrouded by a small H$_2$ atmosphere should motivate detailed observations of temperate planets with radii from near the ``radius valley'' ($\sim1.7\ R_{\oplus}$) to the main sub-Neptune population ($\sim2.5\ R_{\oplus}$). If some of the temperate planets in the aforementioned group have small H$_2$ atmospheres, their relative ease for transit observations would significantly enhance the prospect of detecting and characterizing potentially habitable exoplanets within the next decade.
\section*{Acknowledgments}
The authors thank helpful discussions with Fabrice Gaillard and Sukrit Ranjan. RH conceived and designed the study, simulated the photochemical models, interpreted the results, and wrote the manuscript. MD performed the JWST observation simulations and atmospheric retrievals. MS computed the pressure-temperature profiles. EK derived the cosmochemical and geological lower bounds for the carbon content. SS contributed interior structure models and insights. HR oversaw the development of the radiative-convective model used in the study. All authors commented on the overall narrative of the paper. The raw data that are used to generate the figures in this paper are available from the corresponding author upon reasonable request. This work was supported in part by NASA Exoplanets Research Program grant \#80NM0018F0612. The research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.
\bibliographystyle{apj}
\section{INTRODUCTION}
The study of entanglement has emerged as a
central theme of quantum physics in recent years. It is driven both by fundamental
questions and by the increasing interest in
applications that go beyond the limit of classical physics.
Entanglement as a measurable quantity is a complicated subject, in
particular when the systems have multiple components. Here we
choose to study entanglement and its possible avenues of
quantification in an open quantum system. This system, the
canonical model of cavity QED \cite{berman94}, has a single atom
coupled to the mode of an optical cavity with two reservoirs or
avenues for extracting information: spontaneous emission and losses
from the cavity.
Two particles (or systems), $A$ and $B$, are said to be in an
entangled state if the wave function of the complete system does not
factorize, that is $|AB\rangle \neq|A\rangle|B\rangle$. One
consequence of this form of the wavefunction is that a measurement
on system $A$ yields information about system $B$ without any
direct interaction with system $B$. For systems with the same dimension, in particular, a (pure) state is said to be
maximally entangled if tracing over one of the two systems, say $A$, leaves the other one in a totally mixed state; this means that one can gain complete knowledge of system $B$ by performing measurements on $A$ only. An example that is of
relevance to this work is the maximally entangled state of an atom
and a field mode,
$|\Psi\rangle=(1/\sqrt{2})\left(|1,g\rangle+|0,e\rangle\right)$
with the first index denoting the number of photons in the field
mode and the second ($e$ = excited, $g$ = ground) denoting the state of
the atom. A measurement of the state of the atom immediately tells
us the number of photons in the field mode; or a measurement of
the photon number immediately tells us the state of the atom.
The von Neumann entropy $E=-{\rm tr}_A(\rho_A \log_2 \rho_A)$ of the
reduced density matrix of system $A$, $\rho_A={\rm tr}_B(\rho_{AB})$
\cite{bennet96} quantifies the amount of entanglement in a given
bipartite quantum system in a pure state. For mixed states, on the other hand, although it is easy enough to define what is meant by a totally unentangled state---namely, one in which it is possible to represent the density operator as an incoherent superposition of factorizable states---quantifying the amount of entanglement in a partially entangled state is not, in general, simple. The natural generalization of the pure-state measure indicated above, known as the entanglement of formation,
utilizes a decomposition of the quantum state $\rho=\sum_j
P_j|\psi_j\rangle\langle\psi_j |=\sum_j P_j\rho_j$, and then
defines $E=\min(\sum_j P_j E_j)$, where $E_j$ is the von Neumann entropy
for the density matrix $\rho_j=|\psi_j\rangle\langle\psi_j |$, and the minimum is taken over all the possible decompositions, which is in general a very challenging task \cite{bennet96,wooters98}. As a result of this, alternative measures have been proposed,
such as the logarithmic negativity \cite{plenio05}. It is also possible that some particular measurement scheme may result in a most natural unraveling of the density operator, in the sense of the quantum trajectories approach \cite{nha04} (especially for systems that are continually monitored), and in that case it may be physically meaningful to focus only on the entanglement of the (conditionally pure) states obtained via that particular unraveling.
One of the main purposes of this paper is to determine how much information about the atom-field entanglement in our canonical cavity QED system can be gleaned from the kinds of measurements represented by the traditional correlation functions of quantum optics. As we shall show below, we are actually able to avoid the difficulties for mixed-state entanglement because, in the limit we are interested in, our system is, to a good approximation, in a pure state, in spite of its being an open system interacting with two reservoirs.
\section{CAVITY QED SYSTEM}
Fig. \ref{cqed} shows a two-level atom in a driven optical
cavity. We consider a single-ended cavity, with the intracavity
field decaying via the output mirror at rate $\kappa$. The
two-level atom has a spontaneous emission rate to modes out the
sides of the cavity denoted by $\gamma$, which is generally less
than the free space Einstein $A$ coefficient. The resonant
coupling between the atom and the field mode is given by
$g=\mu_{eg}\sqrt{\omega/2\hbar\epsilon_0V}$ with $\mu_{eg}$ the
electric dipole matrix element, $\omega$ the transition frequency
and $V$ the volume of the cavity mode. The driving field is taken
to be a large classical field $\epsilon$ incident on the input
mirror, with small transmission $T_{in}$, so that the incident
flux (in photon units) inside the cavity is proportional to
$T_{in}\epsilon^2$.
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics[width=14cm]{cqed.eps}
\end{tabular}
\end{center}
\caption
{Single atom in a weakly driven optical cavity. Here $g$ is the
reversible coupling rate between the mode of the cavity and the
atom, $\kappa$ is the decay rate of the field mode of the cavity,
$\gamma$ is the spontaneous emission rate. $\epsilon$ is the
external drive (taken to be a classical field). \label{cqed}}
\end{figure}
The quantum trajectory wave function that characterizes the system,
evolving under a non-Hermitian Hamiltonian, is:
\begin{eqnarray}
|\psi _c(t)\rangle &=&\sum\limits_{n}^\infty \left(
C_{g,n}(t)e^{-iE_{g,n}t}|g,n\rangle \right.
\left. +C_{e,n}(t)e^{-iE_{e,n}t}|e,n\rangle \right) \label{psi}\\
H&=& \hbar g\;(a^\dagger \sigma _-+a\sigma _+)-i\hbar\kappa a^\dagger a
-i\hbar{{\gamma}\over2} \sigma_+\sigma_- + i \hbar
\epsilon(a^{\dagger }-a)
\end{eqnarray}
with collapse operators
\begin{eqnarray}
{\cal A}&=&\sqrt{\kappa} a\\ {\cal
S}&=&\sqrt{{\gamma}\over2}\sigma_-.
\end{eqnarray}
associated with photons exiting the output mirror and spontaneous
emission out the side of the cavity. The indices $e(g)$ indicate
the atom in the excited (ground) state, while $n$ is the number of
photons in the mode. The energies are
$E_{e,n}=E_{g,n+1}=\hbar\omega (n+1/2)$. We have the usual
creation ($a^\dagger$) and annihilation ($a$) operators for the
field, and Pauli raising and lowering operators $\sigma_{\pm}$ for
the atom.
In the weak driving limit, the system reaches a steady-state wave
function:
\begin{equation}
|\Psi\rangle=|0g\rangle+A_{1,g}|1g\rangle+A_{0,e}|0e\rangle+A_{2,g}|2g\rangle+A_{1,e}|1e\rangle
\label{wavefunction}\end{equation} where the $A_{ij}$ are known
\cite{carmichael91,brecha99}. They are
\begin{eqnarray}
A_{1,g}&=&\alpha \\
A_{0,e}&=&\beta\\
A_{1,e}&=&\alpha\beta q \label{twoexitation}\\
A_{2,g}&=&{\alpha}^2pq/\sqrt{2}.
\end{eqnarray}
The quantities $p$ and $q$ would be $1$ for coupled harmonic
oscillators. In cavity QED they differ from unity due to the
non-harmonic, or saturable, nature of the atom. The squares of
the single-excitation coefficients $A_{1,g},~A_{0,e}$ give the
rates of detection of single photons through the output mirror or
in fluorescence (steady state), while the squares of the double
excitation coefficients $A_{1,e},~A_{2,g}$ give the rates of
detection of two photons either in coincidence (one through the
mirror, and one in fluorescence) or both out of the mirror. The
variables are
\begin{eqnarray}
\alpha&=&{\epsilon\over{\kappa(1+2C_1)}}\\
\beta&=&{{-2g}\over{\gamma}}\alpha\\
p&=&1-2C_1' \label{p}\\
q&=&{{(1+2C_1)}\over{(1+2C_1-2C^{'}_1)}}\label{q}\\
C_1&=&{{g^2}\over{\kappa \gamma}}\\
C_1^{'}&=&C_1 {{2\kappa}\over{(2\kappa+\gamma)}}
\end{eqnarray}
The one-excitation amplitudes $A_{1,g}$ and $A_{0,e}$ are
proportional to the driving field $\epsilon$; the two-excitation
amplitudes $A_{2,g}$ and $A_{1,e}$ are proportional to the square
of the driving field, $\epsilon^2$ \cite{carmichael91}. The norm
of this wave function is $\||\Psi\rangle\|=\sqrt{1+O(\epsilon^2)}$;
hence to lowest order in $\epsilon$, the coefficient of the vacuum
should be $(1-(1/2)O(\epsilon^2))$. The term $O(\epsilon^2)$ makes
no contribution to lowest nonzero order in $\epsilon$ for the
correlation functions or entanglement measures considered here.
The entanglement of formation for this system is calculated from
the density matrix after tracing over the field variables:
\begin{eqnarray}
\rho_{atom} &=& Tr_{field} |\Psi\rangle\langle\Psi |\\
&=& \left(
\begin{array}{cc}
1 + A_{1,g}^2 + A_{2,g}^2 & A_{1,e}A_{1,g}+A_{0,e} \\
A_{1,e}A_{1,g}+A_{0,e} & A_{1,e}^2 + A_{0,e}^2 \\
\end{array}
\right)
\end{eqnarray}
The eigenvalues of this matrix are, to lowest nonvanishing order,
\begin{eqnarray}
\lambda_1 &=& \left(A_{1,g} A_{0,e} - A_{1,e} \right)^2\nonumber \\
&=& |A_{1,g}|^2|A_{0,e}|^2(q-1)^2\nonumber\\
&=&\left({{\epsilon}\over{\kappa}}\right)^4\xi^2\\
\lambda_2 &=& 1-\left(A_{1,g} A_{0,e} - A_{1,e} \right)^2\nonumber\\
&=&1-\left({{\epsilon}\over{\kappa}}\right)^4\xi^2
\end{eqnarray}
where $q$ is defined in Eq. (\ref{q}), and we have defined
\begin{equation}
\xi={{2g}\over{\gamma (1+2C_1)^2}}(q-1)
\end{equation}
The entropy $ E = -\lambda_1 \log_2 \lambda_1 - \lambda_2
\log_2 \lambda_2 $ is then (again to leading order)
\begin{eqnarray}
E&=&-\left({{\epsilon}\over{\kappa}}\right)^4\xi^2 \log_2\left[\left({{\epsilon}\over{\kappa}}\right)^4\xi^2\right] -\left(1-\left({{\epsilon}\over{\kappa}}\right)^4\xi^2\right) \log_2\left[1-\left({{\epsilon}\over{\kappa}}\right)^4\xi^2\right]\nonumber \\
&\approx&-\left({{\epsilon}\over{\kappa}}\right)^4\xi^2 \left(\log_2\left[\left({{\epsilon}\over{\kappa}}\right)^4\right]+\log_2\left[\xi^2\right]-1\right)\nonumber \\
&\approx&- \left({{\epsilon}\over{\kappa}}\right)^4
\log_2\left[\left({{\epsilon}\over{\kappa}}\right)^4\right]\xi^2,
\label{entanglement-1}
\end{eqnarray}
where we have taken the weak field limit, $\epsilon$ being the smallest
rate in the problem, so $\epsilon/\kappa \ll 1$. The approximation (\ref{entanglement-1})
will hold provided $(\epsilon/\kappa)^2 \ll |\xi|$.
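As a sanity check on this expansion, the exact entropy of the two
eigenvalues can be compared with the approximation of
Eq.~(\ref{entanglement-1}). A minimal Python sketch, with illustrative
parameter values, is:
\begin{verbatim}
import numpy as np

g, kappa, gamma, eps = 1.0, 0.5, 1.0, 1e-3
C1 = g**2 / (kappa * gamma)
C1p = C1 * 2 * kappa / (2 * kappa + gamma)
q = (1 + 2 * C1) / (1 + 2 * C1 - 2 * C1p)
xi = 2 * g * (q - 1) / (gamma * (1 + 2 * C1)**2)

lam1 = (eps / kappa)**4 * xi**2           # lowest-order eigenvalue
lam2 = 1 - lam1
E_exact = -lam1 * np.log2(lam1) - lam2 * np.log2(lam2)
E_approx = -(eps / kappa)**4 * np.log2((eps / kappa)**4) * xi**2
print(E_exact, E_approx)   # same order; they converge as eps -> 0
\end{verbatim}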
This entropy equals that obtained from the density matrix of the field alone,
traced over the atomic degrees of freedom.
The concurrence, first introduced by Wootters for two qubits \cite{wooters98}, can also be
used to characterize entanglement between two quantum systems of arbitrary
dimension \cite{Chen, Uhlmann, Rungta, Albeverio}. The concurrence for our system is
\begin{eqnarray}
{\cal C}&=&\sqrt{2(1-Tr\rho_{atom}^2)}\nonumber \\
&=&\sqrt{4 \left(A_{1,g} A_{0,e} - A_{1,e} \right)^2}\nonumber\\
&=&2\left({{\epsilon}\over{\kappa}}\right)^2|\xi|
\end{eqnarray}
To see why $|\xi|\propto |A_{1,e}-A_{0,e}A_{1,g}|$ may be a good indication
of entanglement, consider what happens if the wavefunction is a
product state. We could write
\begin{eqnarray}
|\Psi\rangle_P&=&|\psi_F\rangle\otimes|\phi_A\rangle\nonumber \\
&=&\left(D_0|0\rangle+D_1|1\rangle+D_2|2\rangle\right)\otimes\left(C_g|g\rangle+C_e|e\rangle\right)\nonumber
\\
&=&D_0C_g|0g\rangle+D_1C_g|1g\rangle+D_0C_e|0e\rangle +D_2C_g|2g\rangle+D_1C_e|1e\rangle
\end{eqnarray}
For weak excitations, the coefficient of the ground state of the
system is $D_0C_g=1$, or $C_g=D_0=1$. Then the product state is
\begin{equation}
|\Psi\rangle_P=|0g\rangle+D_1|1g\rangle+C_e|0e\rangle+D_2|2g\rangle+D_1C_e|1e\rangle
\end{equation}
Just knowing the one excitation amplitudes does not yield any information
about entanglement, as it is possible to have $A_{1,g}=D_1$ and
$A_{0,e}=C_e$. $A_{2,g}$ gives no information about entanglement,
just nonclassical effects in the field, as it only involves field
excitation. For weak fields $D_2$ is exactly $A_{2,g}$. The
entanglement shows up in the value of $A_{1,e}$; if this value
does not satisfy $A_{1,e}=D_1C_e=A_{0,e}A_{1,g}$, then it is not
possible to write the state as a product state.
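This criterion can be encoded directly; in the short Python sketch
below, the numerical values of $D_1$, $C_e$ and $q$ are arbitrary
illustrations:
\begin{verbatim}
def entanglement_defect(A1g, A0e, A1e):
    """|A1e - A0e*A1g|: vanishes for any weak-field product state."""
    return abs(A1e - A0e * A1g)

D1, Ce = 0.01, -0.02
# Product state: A1e = D1*Ce, so the defect is identically zero.
print(entanglement_defect(D1, Ce, D1 * Ce))          # 0.0
# Cavity QED: A1e = q*A1g*A0e with q != 1 gives a defect ~ |q-1|.
q = 1.4
print(entanglement_defect(D1, Ce, q * D1 * Ce))      # |D1*Ce|*|q-1| > 0
\end{verbatim}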
In the presence of a non-zero vacuum contribution (as any real
quantum state will have), one can learn nothing about entanglement
simply by measurement of one-excitation amplitudes or
probabilities. For example, the state
$|0,g\rangle+\alpha(|1,g\rangle+|0,e\rangle) $ is entangled, but
only if one is certain that the probability amplitudes for higher
excitation are truly zero. A state of the form
$|0,g\rangle+\alpha(|1,g\rangle+|0,e\rangle) +O(\epsilon^2)$
cannot be said to be entangled without information on the relative
size of the probability amplitude $A_{1,e}$. Measurement of
one-excitation amplitudes conditioned by a previous measurement
{\it can} yield information about entanglement. This can be
accomplished by utilizing cross-correlation functions. A first
important conclusion of this study is that measuring the zero-time
cross correlation between the atom and the field, together with the
mean transmitted and fluorescent intensities, yields a measure of
entanglement in the weak-field limit.
\section{ENTANGLEMENT FOR WEAK EXCITATION \label{I}}
Equation~(\ref{entanglement-1}) of the previous section gives the
amount of entanglement in the system as a function of the one and
two excitation amplitudes. In terms of specific system parameters
the concurrence is:
\begin{equation}
{\cal C}=|2\alpha \beta (q-1)|=
\frac{16\,g^3\,{\epsilon }^2\,\kappa }
{{\left( 2\,g^2 + \gamma \,\kappa \right) }^2\,
\left( 2\,g^2 + \kappa \,
\left( \gamma + 2\,\kappa \right) \right) }.
\label{entanglement}
\end{equation}
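A quick numerical scan of Eq.~(\ref{entanglement}) makes the optimal
coupling explicit; the parameter values in this Python sketch are
illustrative:
\begin{verbatim}
import numpy as np

def concurrence(g, kappa, gamma, eps):
    """Lowest-order concurrence |2*alpha*beta*(q-1)| in closed form."""
    num = 16 * g**3 * eps**2 * kappa
    den = (2 * g**2 + gamma * kappa)**2 \
        * (2 * g**2 + kappa * (gamma + 2 * kappa))
    return num / den

gamma, kappa, eps = 1.0, 0.5, 1e-3
gs = np.linspace(0.01, 10.0, 2000)
C = concurrence(gs, kappa, gamma, eps)
print("optimal g/gamma ~", gs[np.argmax(C)])   # of order unity (cf. Fig. 2)
\end{verbatim}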
This section analyzes the sensitivity of the concurrence to the
different parameters that appear in Eq.~(\ref{entanglement}), while
trying to give physical reasons for their influence on the
entanglement. Even when the decay rates into the two reservoirs are
equal, spontaneous emission ($\gamma$) reduces entanglement more
than cavity loss ($\kappa$). This is because a $\gamma$ event
(spontaneous emission) {\it must} come from the atom, while a
$\kappa$ event (cavity transmission) could come from either the
drive or a photon emitted by the atom into the cavity mode. A
spontaneous emission
event unambiguously leaves the atom in the ground state, and the
system wavefunction factorizes.
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics[width=14cm]{fig2.eps}
\end{tabular}
\end{center}
\caption[example]
{A plot of ${\cal C}$ scaled by $(\epsilon/\gamma)^2$ as a
function of $\kappa/\gamma$ and $g/\gamma$ for weak excitation.
\label{entan-g-gam-ka}}
\end{figure}
Fig. \ref{entan-g-gam-ka} shows a remarkable feature of the
entanglement of the system as a function of the three rates in the
problem: there is an optimal value of the coupling constant $g$
given a set of dissipation rates $\kappa,\gamma$. For many
interesting cavity QED effects, stronger coupling is generally
better, such as the enhancement of the spontaneous emission by a
factor of $1+2C_1=1+2g^2/\kappa\gamma$ (this formula strictly
holds only in the bad cavity limit $\kappa \gg g,\gamma$). However,
here increasing the coupling of the atom and field mode eventually
decreases the amount of entanglement. To explain this it is
instructive to recall that the concurrence ${\cal C}=|2 \alpha \beta
(q-1)|$, where $\alpha$ is the mean cavity field, and
$\beta=-2g\alpha/\gamma$ is the mean atomic dipole. As the
coupling $g$ increases, for a fixed weak driving field $\epsilon$,
the intracavity field $\alpha=\epsilon/(\kappa+2g^2/\gamma)$
decreases. The intracavity field is the sum of the driving field
in the cavity $\epsilon/\kappa$, and the field radiated by the
atom, $(-2C_1/(1+2C_1))\epsilon/\kappa$, the minus sign resulting
from the fact that the radiated field is $\pi$ out of phase with
the driving field on resonance. We see that as $g$ and $C_1$
increase, the intracavity field decreases. This means that the
steady-state wavefunction has a larger vacuum component, and
consequently less entanglement. Another way to view this is that
the cavity enhancement of the spontaneous emission rate means a
larger loss rate for the system as the coupling increases, which
is bad for entanglement.
More formally, consider what happens if the two-excitation
amplitudes in Eq.~(\ref{wavefunction}) are arbitrarily set to zero,
which amounts to setting $q=0$ in Eq.~(\ref{entanglement}), in which
case the entanglement is only determined by the prefactor $|\alpha\beta|$.
The steady-state wave function becomes
\begin{equation}
|\psi\rangle_{ss}=|0g\rangle+\alpha(|1g\rangle-{{2g}\over{\gamma}}|0e\rangle).
\label{on-wavefc}
\end{equation}
There are two interesting limits of Eq.~(\ref{on-wavefc}) for the
parameter $f=2g/\gamma$. If $f \gg 1$, the steady state
wavefunction is approximately $|\psi\rangle_{ss}=|0\rangle
(|g\rangle-f\alpha|e\rangle)$ which is a product state. Also, if
$f \ll 1$, the steady state wavefunction is approximately
$|\psi\rangle_{ss}=|g\rangle (|0\rangle+\alpha|1\rangle)$ which
again is a product state. To have entanglement between the atom
and cavity mode, we must have the parameter $f \simeq 1$, so as to
prepare a steady state wavefunction of the form
$|\psi\rangle_{ss}=|0g\rangle+\alpha(|1g\rangle-|0e\rangle)=|0g\rangle+\alpha|-\rangle$,
a superposition of the vacuum with a small entangled-state component.
The decrease of the prefactor $|\alpha \beta|$ is the
dominant reason why the concurrence decreases with increasing
$g$ for large coupling. Close inspection of Fig. \ref{entan-g-gam-ka} also shows that there is an optimal
cavity loss rate $\kappa$ for entanglement for a fixed $g$ and $\gamma$.
This is a result of reaching a maximum in the population of the
states different from the vacuum (Eq.~(\ref{wavefunction})). Our
results here are consistent with the numerical results of Nha and
Carmichael \cite{nha04}.
When the system is driven off resonance, its response is typically
characterized by transmission and fluorescent spectra
\cite{gripp96a,terraciano05}. Although these are important probes of
the system, they do not, in this limit, carry information about the
entanglement, since they are derived from only the one-excitation amplitudes.
The concurrence as a function of the detuning of the driving laser
shows that the steady state entanglement decreases typically by a
factor of $1/\Delta^3$ for large detuning, where
$\Delta=(\omega-\omega_l)$ with $\omega$ the resonant frequency of
the atom and cavity, and $\omega_l$ the frequency of the driving
probe laser. But in the case where $g$ is larger than $\kappa$ and
$\gamma$, the response is maximized at the vacuum-Rabi peaks
\cite{carmichael89b}. Figure \ref{contour-vr} shows a contour plot
of $\cal C$ for parameters in the regime of cavity QED where the
two decay rates are similar: $2\kappa/\gamma=1.0$. The concurrence
increases with increasing $g$ on resonance up to a saddle point,
and then decreases. However the entanglement persists for
detunings on the order of $g$, the approximate location of the
vacuum-Rabi peaks in the spectra of the system.
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics[width=14cm]{fig3.eps}
\end{tabular}
\end{center}
\caption[example]
{Contour plot of $\cal C$ as a function of
$g/\gamma$ and $\Delta/\gamma$ for $\kappa/\gamma=0.5$ \label{contour-vr}}
\end{figure}
Detuning to a vacuum-Rabi peak ($\Delta=\pm g$)
generates a steady state wave function of the form
\begin{equation}
|\psi\rangle_{ss}=|0,g\rangle+\alpha\Gamma_1(g/\gamma)|1,\pm\rangle+\alpha^2
\Gamma_2(g/\gamma)|2,\pm\rangle,
\end{equation}
where $|n,\pm\rangle=(1/\sqrt{2})(|n,g\rangle\pm|n-1,e\rangle)$ is the
$n$-excitation dressed atom-field state near which the laser is tuned,
and $\Gamma_1(g/\gamma)$ and $\Gamma_2(g/\gamma)$ are functions that
are maximal when $g \simeq \gamma$.
This is a
state of mainly vacuum, plus a part that has entanglement between
the atom and the cavity. It would seem that by continuing to tune
to a vacuum-Rabi peak as $g$ increases, it would be possible to
maintain the entanglement, but Fig. \ref{contour-vr} shows that this
is not the case.
Rather, as argued (for the on-resonance case) above, the crucial parameter for maximizing entanglement is $f=2g/\gamma
\propto 1/\sqrt{n_{sat}}$, where $n_{sat}=\gamma^2/8g^2$ is the
saturation photon number. This is the dependence on the
nonlinearity of the atomic system. Recall that, if these were two
driven coupled harmonic oscillators, $q=1$ and there would be no
entanglement. A nonlinear interaction between the two harmonic
oscillators would be needed to entangle them, as in the signal and
idler modes in optical parametric oscillation. This nonlinear
interaction would generate two-mode squeezing, which could be
measured by homodyne detection of mode A(B) conditioned on
detection of a photon in mode B(A), just as squeezing in one mode
can be detected via conditioned homodyne detection of a mode based
on a photodetection from that mode\cite{carmichael00,foster00}.
The nonlinearity of the two-level atom is needed to generate
two-mode squeezing and entanglement between the atom and the
cavity field. Even though the driving field is weak and the atom
never nears saturation, there can only be entanglement with a
linear atom-field coupling if the atom has a nonlinear response,
as two-level atoms do.
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics[width=14cm]{fig4.eps}
\end{tabular}
\end{center}
\caption[example]
{Contour plot of $\cal C$ as a function of $g/\gamma$ and
$\Delta/\gamma$ for $\kappa/\gamma=10$ \label{contour-nvr}}
\end{figure}
The concurrence is also sensitive to the ratio of the two decay
rates. Fig. \ref{contour-nvr} shows a contour plot
of $\cal C$ versus $g/\gamma$ and $\Delta/\gamma$ for a case
where the cavity decay rate is larger than the spontaneous
emission rate ($\kappa/\gamma=10.0$). The entanglement is largest
near $g/\gamma=4.0$, before the vacuum-Rabi splitting of the
spectrum, which does not occur in this case until $g/\gamma \sim
10.0$, at which point the entanglement is already diminishing. The
maximum concurrence decreases by a factor of about 30 as
$\kappa/\gamma$ increases from $0.5$ to $10.0$.
\section{MEASUREMENTS OF ENTANGLEMENT WITH CORRELATION FUNCTIONS}
The calculation of entanglement leads now to the question of how
to implement measurements that give the full information in the
case of this cavity QED system under weak excitation. The previous
section shows that the concurrence is related to the rate of single
photon counts out of the cavity or in fluorescence and to the rate
of coincident counts from the cavity and fluorescence. These are
the quantities associated in quantum optics with correlation
functions, first introduced by Glauber
\cite{glauber63a,glauber63b,glauber63c,mandel95}. Generally these
correlation functions involve comparing a field (intensity) of one
mode with the field (intensity) of the same mode at a later time
(or different spatial location), with some exceptions
\cite{grangier86,kuzmich03,regelman01,berglund02,moore99,leach04}.
However, entanglement in cavity QED has two components: atom and
cavity mode. It is natural to look at cross correlations between
the cavity mode and the fluorescent light that falls in the mode
of the detector.
Consider a general cross-correlation function for two modes of the
electromagnetic field:
\begin{equation}
G=\langle f_1(b^{\dagger},b)f_2(a^{\dagger},a)\rangle/\langle
f_1(b^{\dagger},b)\rangle\langle f_2(a^{\dagger},a)\rangle,
\end{equation}
with $f_1$ and $f_2$ well behaved functions, in the sense of a
convergent Taylor series on the Hilbert space of interest. If
$|\psi \rangle$ is a product state, the correlation function
$G$ factorizes and equals unity. If it is {\it not} a
product state, this manifests itself in a non-unit value
of the normalized cross-correlation function.
The simplest cross correlation function to consider is
$g_{TF}^{(1)}(0)$. This could be obtained by measuring the
visibility of the fringe pattern formed by interfering the
transmitted and fluorescent light. For the weakly driven
cavity-QED system, this is
\begin{eqnarray}
g^{(1)}_{TF}(0)&=&{{\langle \sigma_+ a\rangle}\over{\langle \sigma_+ \rangle\langle a\rangle}}\nonumber \\
&=&{{\alpha \beta}\over{\alpha \beta}}\nonumber \\
&=&1
\end{eqnarray}
so to lowest order, there is no information in this correlation
function about entanglement.
To obtain information about entanglement the correlation function
has to probe the two-excitation part of the state. A possibility
to do this is the intensity cross correlation:
\begin{eqnarray}
g_{TF}^{(2)}(0)&=&{{\langle \sigma_+
a^{\dagger}a\sigma_-\rangle}\over{\langle a^{\dagger}a
\rangle\langle \sigma_+ \sigma_-\rangle}}\nonumber\\
&=&{{|A_{1e}|^2}\over{|A_{1g}A_{0e}|^2}}\nonumber\\&=&q^2
\end{eqnarray}
This normalized
correlation function is directly related to the coefficient of
double excitation (see
Eqs.~(\ref{wavefunction}), (\ref{twoexitation}) and (\ref{q})). If
$q=1$ then $g_{TF}^{(2)}(0)=1$ and there is no entanglement; so a
non-unit value of $q$ indicates entanglement. Using second-order
intensity correlations has been proposed in the context of
entangled coherent states by Stobi{\'n}ska and W{\'o}dkiewicz
\cite{stobinska05}.
The cross correlation function $g_{TF}^{(2)}(0)$ contains
information about the average photon number {\it in coincidence
with} a measurement of the fluorescence relative to the average
photon number in the absence of any interrogation of the
fluorescence. $g_{TF}^{(2)}(0)-1=q^2-1$ is an indicator of
entanglement.
A way to measure $q$ directly utilizes a field-intensity
correlation function $h_{\theta}(\tau)$ \cite{carmichael04}, that
can be implemented as a homodyne measurement conditioned on the
detection of a fluorescent photon,
\begin{eqnarray}
h_{\theta=0}^{TF}(0)&=&{{\langle I_F E_T\rangle}\over{\langle
I_F\rangle\langle E_T\rangle}}\nonumber \\
&=& {{\langle (a^{\dagger}+a)\sigma_{+}\sigma_{-}
\rangle}\over{\langle
a^{\dagger}+a\rangle\langle \sigma_+\sigma_-\rangle}}\nonumber \\
&=&{{A_{1,e}}\over{A_{0,e}A_{1,g}}}\nonumber\\
&=&q
\end{eqnarray}
So $h_{\theta=0}^{TF}(0)-1=q-1$ is also an indicator of entanglement in this system.
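Both indicators can be cross-checked by solving the steady state of
the corresponding master equation numerically, for instance with the
QuTiP package (an assumption of this sketch; any Lindblad solver
would serve). We take collapse operators $\sqrt{2\kappa}\,a$ and
$\sqrt{\gamma}\,\sigma_-$, the normalization consistent with the
$-i\kappa a^{\dagger}a$ and $-i(\gamma/2)\sigma_+\sigma_-$ terms of
the effective Hamiltonian above; with this convention the sketch
should reproduce $g^{(2)}_{TF}(0)=q^2$ and $h^{TF}_{\theta=0}(0)=q$
in the weak-drive limit:
\begin{verbatim}
import numpy as np
from qutip import destroy, expect, qeye, sigmam, steadystate, tensor

N = 6                                    # photon-number truncation
a = tensor(destroy(N), qeye(2))
sm = tensor(qeye(N), sigmam())

g, kappa, gamma, eps = 1.0, 0.5, 1.0, 1e-3
H = g * (a.dag() * sm + a * sm.dag()) + 1j * eps * (a.dag() - a)
c_ops = [np.sqrt(2 * kappa) * a, np.sqrt(gamma) * sm]
rho = steadystate(H, c_ops)

nbar = expect(a.dag() * a, rho)
pe = expect(sm.dag() * sm, rho)
g2_TF = expect(sm.dag() * a.dag() * a * sm, rho) / (nbar * pe)
h_TF = expect((a.dag() + a) * sm.dag() * sm, rho) \
    / (expect(a.dag() + a, rho) * pe)

C1 = g**2 / (kappa * gamma)
q = (1 + 2 * C1) / (1 + 2 * C1 - 2 * C1 * 2 * kappa / (2 * kappa + gamma))
print(g2_TF, q**2)    # should agree to O(eps^2)
print(h_TF, q)
\end{verbatim}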
What makes
this measurement possible experimentally is the conditioning that
selects only times when there is a fluctuation and the rest of the
time (when the vacuum is present) no data is collected
\cite{foster00}. For one mode, the homodyned transmitted field
conditioned by detection of a photon from that mode, is a measure
of squeezing in that mode \cite{carmichael04}. A homodyne
measurement of the transmitted field conditioned by detection of a
fluorescent photon is a measure of the two-mode squeezing, with
the cavity field and atomic dipole as the two components.
Generally, two-mode squeezing is an indicator of entanglement
between the two modes. Gea-Banacloche {\it et al.} explored this
correlation function in a different regime of cavity QED and found
it to be a witness of the dynamics of entanglement \cite{gea05}.
Non-classicality and entanglement are not necessarily
simultaneously present. For example for two oscillators one could
have $|\psi\rangle=(1/\sqrt{2})(|A,B\rangle+|B,A\rangle)$, where
$A$ and $B$ are coherent state amplitudes. In this state, there is
entanglement, but each individual mode shows no non-classical
behavior. Conversely, one can have non-classical behavior with no
entanglement, say for example the atom in the ground state and the
field in a squeezed coherent state.
There is a particular form of the Schwarz inequality that must be
satisfied by a classical field for the specific case of the
system we are considering here:
\begin{equation}
(g^{(2)}_{TF} (0) - 1)^2 \leq|(g^{(2)}_{TT} (0) - 1)(g^{(2)}_{FF}
(0) - 1)|,
\end{equation}
Here $TT$ and $FF$ denote zero delay intensity correlations for
the transmitted and fluorescent fields respectively. In the
one-atom limit, $g^{(2)}_{FF} (0) = 0$, and $g^{(2)}_{TT} (0) =
q^2p^2$, so this inequality becomes $|q^2 - 1|^2 \leq |q^2p^2 -
1|$ which depends on $q$, but also on the parameter $p$ (Eq.
(\ref{p})), which can be varied independently. There is no
one-to-one relationship between Schwarz inequality violations and
entanglement (by this measure) in this particular system.
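This independence is easy to exhibit numerically. The Python sketch
below scans $\kappa$ at fixed $g=\gamma=1$ (illustrative units) and
compares the inequality with the entanglement indicator $|q-1|$:
\begin{verbatim}
def q_and_p(g, kappa, gamma):
    C1 = g**2 / (kappa * gamma)
    C1p = C1 * 2 * kappa / (2 * kappa + gamma)
    return (1 + 2 * C1) / (1 + 2 * C1 - 2 * C1p), 1 - 2 * C1p

for kappa in (0.1, 0.5, 2.0, 10.0):
    q, p = q_and_p(g=1.0, kappa=kappa, gamma=1.0)
    violated = (q**2 - 1)**2 > abs(q**2 * p**2 - 1)
    print(f"kappa={kappa:5.1f}  Schwarz violated: {violated}"
          f"  |q-1|={abs(q - 1):.3f}")
\end{verbatim}
For instance, $\kappa=0.1$ and $\kappa=10$ give the same $|q-1|$ but
sit on opposite sides of the inequality.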
\section{CONCLUSION}
We find that entanglement in weakly driven cavity QED is
characterized by comparison of two-excitation probability
amplitudes to single excitation amplitudes, in particular the
amplitude involving one excitation in each subsystem. It is
necessary to have a small saturation photon number to enhance the
nonlinear response which generates a larger entanglement. But this
is true only to a point. We find the maximal entanglement for
small $\kappa$ and when $g/\gamma$ is on the order of unity. This
stems from the dual role of the coupling $g$. It couples energy
into the atom, but due to cavity enhanced spontaneous emission, it
can also channel energy out.
Increasing $\gamma$ decreases the entanglement, and this can be
explained in terms of the effect of the two decay processes on the
system. If we detect a fluorescent photon we know it has come from
the atom, and the atom is in the ground state. If we obtain a
transmitted photon, it could have been emitted from the atom into
the cavity mode, or just be a driving field photon that has passed
through the cavity without interaction with the atom. It is the
interference of these two indistinguishable processes that leads
to nonclassical effects in the transmitted field.
We have found a variety of cross-correlation functions that are
indicators, or witnesses, of entanglement in this system. One can
learn nothing about the entanglement by examining only one- or
two-excitation amplitudes separately. In particular, we find that
a measurement of two-mode squeezing, or a homodyne measurement of
the transmitted field conditioned on the detection of a
fluorescence photon is directly proportional to the entanglement
calculated via the reduced von Neumann entropy. Further work
remains to generalize this approach to situations with higher
drives, but the general approach of looking at entanglement
together with the specific correlation function to measure gives
physical insight into this problem.
We would like to thank J. P. Clemens for fruitful discussions
related to the topic of this paper. This work has been supported
by NSF and NIST.
\section{Introduction\label{sec:Introduction}}
Thin films of metal oxides have been a focus area of continuous research
due to the rich physics that can be observed in these systems, such
as ferroelectricity, ferromagnetism and superconductivity, and their
resulting technological applications \citep{hwang2012emergent,mannhart2010oxideinterfacestextemdashan}.
An important challenge involving thin metal oxide films has been their
growth on semiconductors in such a way that their electrical polarization
couples to the electronic states inside the semiconductor \citep{reiner2009atomically,reiner2010crystalline,dogan2017abinitio}.
If successfully done, this enables the development of non-volatile
devices such as ferroelectric field-effect transistors (FEFET). In
a FEFET, the polarization of the oxide encodes the state of the device,
and requires the application of a gate voltage only for switching
the state, greatly reducing the power consumption and boosting the
speed of the device \citep{mckee2001physical,garrity2012growthand}.
Meeting this challenge requires a thin film ferroelectric oxide, as
well as an atomically abrupt interface between the oxide and the semiconductor,
so that the polarization of the oxide and the electronic states in
the semiconductor are coupled. The first of these requirements, i.e.,
a thin film ferroelectric, is difficult to obtain because materials
that are ferroelectric in the bulk lose their macroscopic polarization
below a critical thickness, due to the depolarizing field created
by surface bound charges \citep{batra1973phasetransition,dubourdieu2013switching}.
An alternative approach is to search for materials such that, regardless
of their bulk properties, they are stable in multiple polarization
configurations as thin films \citep{dogan2017abinitio}. The second
requirement, i.e., an abrupt oxide-semiconductor interface, has been
challenging due to the formation of amorphous oxides such as SiO$_{2}$
at the interface with a semiconductor such as Si \citep{robertson2006highdielectric,garrity2012growthand,mcdaniel2014achemical}.
This challenge has been overcome by using layer-by-layer growth methods
such as molecular beam epitaxy (MBE) and employing highly controlled
growth conditions \citep{mckee1998crystalline,mckee2001physical,kumah2016engineered}.
We recently reported on the experimental observation of polarization
switching in atomically thin ZrO$_{2}$ grown on Si \citep{dogan2018singleatomic}.
In the experimental setup, ZrO$_{2}$ was grown using atomic layer
deposition (ALD), yielding an amorphous oxide and an abrupt oxide-silicon
interface with no significant formation of SiO$_{2}$. This interface
was then incorporated into a gate stack device with amorphous Al$_{2}$O$_{3}$
separating it from the top electrode. Ferroelectric behavior was observed
by $C-V$ measurements with this gate stack. In this work, we present
an in-depth computational investigation of this monolayer system.
In \secref{Methods5}, we describe our computational methods. In \subsecref{Free-standing-ZrO2},
we investigate the structure of free-standing ZrO$_{2}$ monolayers
assuming they are strained to the two-dimensional lattice of the Si(001)
surface. In \subsecref{ZrO2-on-Si}, we report on the low-energy configurations
of these monolayers when placed on the Si(001) surface. We find that
these films have multiple (meta)stable structures with no significant
chemical differences between them. This suggests that epitaxial monocrystalline
growth may be challenging. In \subsecref{Domain}, we examine the
domain energetics in this system: we build a lattice model with nearest-neighbor
interactions, and solve this model using a Monte Carlo cluster method.
The results of the lattice model provide a microscopic understanding
of the experimentally observed polarization switching.
\section{Computational methods\label{sec:Methods5}}
We theoretically model the materials systems using density functional
theory (DFT) with the Perdew\--Burke\--Ernzerhof generalized gradient
approximation (PBE GGA) \citep{perdew1996generalized} and ultrasoft
pseudopotentials \citep{vanderbilt1990softselfconsistent}. We use
the QUANTUM ESPRESSO software package \citep{giannozzi2009quantum}.
A $35$ Ry plane wave energy cutoff is used to describe the pseudo
Kohn\--Sham wavefunctions. We sample the Brillouin zone with an $8\times8\times1$
Monkhorst\--Pack $k$-point mesh (per $1\times1$ in-plane primitive
cell) and a $0.02$ Ry Marzari\--Vanderbilt smearing \citep{marzari1999thermal}.
A typical simulation cell consists of $8$ atomic layers of Si whose
bottom layer is passivated with H and a monolayer of ZrO$_{2}$ placed
on top (see \figref{simcell}). Periodic copies of the slab are separated
by $\sim12\text{\AA}$ of vacuum in the $z$-direction. The in-plane
lattice constant is fixed to the computed bulk Si lattice constant
of $3.87\text{\AA}$. In general, the slab has an overall electrical
dipole moment along the $z$ direction that might artificially interact
with its periodic images across the vacuum. In order to prevent this
unphysical effect, we introduce a fictitious dipole in the vacuum
region of the cell which cancels out the electric field in vacuum
and removes such interactions \citep{bengtsson1999dipolecorrection}.
All atomic coordinates are relaxed until the forces on all the atoms
are less than $10^{-3}{\rm Ryd}/a_{0}$ in all axial directions, where
$a_{0}$ is the Bohr radius (except the bottom $4$ layers of Si which
are fixed to their bulk positions to simulate a thick Si substrate).
We use the nudged elastic bands (NEB) method with climbing images
\citep{henkelman2000aclimbing} to compute the transition energy barrier
between different metastable configurations.
\begin{figure}
\begin{centering}
\includegraphics[width=0.8\columnwidth]{Figure_simcell. 5}
\par\end{centering}
\caption[A typical simulation supercell with $2\times1$ in-plane periodicity.]{\label{fig:simcell}A typical simulation supercell with $2\times1$
in-plane periodicity. The bottom 4 layers of Si are fixed to bulk
coordinates and passivated by hydrogen as shown, to simulate bulk
silicon. There is $\sim12\text{\AA}$ of vacuum along the $z$-direction
to separate periodic copies.}
\end{figure}
\section{Results \label{sec:Results5}}
\subsection{Free standing ZrO$_{2}$ monolayers\label{subsec:Free-standing-ZrO2}}
\subsubsection{Background: bulk zirconia}
Bulk ZrO$_{2}$ is observed in three structural phases. The high symmetry
cubic phase (space group: $Fm\overline{3}m$) is shown in \figref{cubicZrO2}.
The lower symmetry tetragonal ($P4_{2}/nmc$) and monoclinic ($P2_{1}/c$)
phases are obtained by continuously breaking the symmetries of the
cubic phase. All three configurations are centrosymmetric and hence
not ferroelectric. However, this binary oxide has a \emph{layered
structure} (along low-index directions) in which the cations and anions
lie in different planes, which, in thin film stoichiometric form,
would cause ultrathin ZrO$_{2}$ films to be polar. For instance,
in \figref{cubicZrO2} a horizontal monolayer of ZrO$_{2}$ could
be formed by the zirconium atoms in Layer 3, with (a) the oxygen atoms
in Layer 2, or with (b) the oxygen atoms in Layer 4, or with (c) half
of the oxygen atoms in each of Layer 2 and Layer 4. Before relaxing
the atoms in these hypothetical monolayers, in case (a) the resulting
polarization would be upward, in case (b) it would be downward, and
in case (c) it would be zero. This intrinsic layered structure, which
is also preserved in the tetragonal and the monoclinic phases of zirconia,
is a fundamental reason why ZrO$_{2}$ is an excellent candidate to
have a switchable polarization when grown on silicon.
\begin{figure}
\begin{centering}
\includegraphics[width=0.8\columnwidth]{Figure_cubic. 1}
\par\end{centering}
\caption[The high symmetry cubic phase ($Fm\overline{3}m$) of bulk ZrO$_{2}$. ]{\label{fig:cubicZrO2}The high symmetry cubic phase ($Fm\overline{3}m$)
of bulk ZrO$_{2}$. Atomic layers are labelled 1 through 5, where
the odd (even) layers correspond to cation (anion) planes.}
\end{figure}
\subsubsection{Structure of free standing monolayers}
In order to check if this richness of structure due to the layered
nature of the bulk material is retained in the ultrathin film, we
have simulated free standing ZrO$_{2}$ monolayers. A monolayer formed
by a (001) plane of cubic ZrO$_{2}$ would have a square lattice with
size $3.61\ \text{\AA}$ (based on our DFT computations). To match
the lattice of the Si substrate, we simulate the monolayers at the
lattice constant of the Si(001) surface, which we find to be $3.87\ \text{\AA}$.
We have searched for minimum energy configurations for $1\times1$,
$2\times1$, $2\times2$ and $c(4\times2)$ sized unit cells of monolayer
ZrO$_{2}$ which are the periodicities of the low energy reconstructions
of the bare Si(001) surface, as we shall discuss in \subsecref{ZrO2-on-Si}.
We find that the lowest and the second lowest energy configurations
of the ZrO$_{2}$ monolayer are $2\times1$ and $1\times1$, respectively,
as shown in \figref{ZrO2_AB}. The chief difference between the two
configurations is that the lowest energy structure, labeled $A$,
has a vertical (along $z$) buckling of zirconiums in the $2\times$
in-plane direction, while for the second lowest energy structure,
labeled $B$, all the Zr are coplanar. We find that $E\left(B\right)-E\left(A\right)=0.07$
eV per ZrO$_{2}$. Both of these configurations are inversion symmetric
and hence non-polar. However, because neither $A$ nor $B$ is symmetric
with respect to the mirror plane reflection $z\rightarrow-z$, there
are two more geometrically distinct minima, named $\overline{A}$
and $\overline{B}$, which are shown in \figref{ZrO2_AB}. $\overline{A}$
and $\overline{B}$ are obtained from $A$ and $B$, respectively,
by the mirror reflection. Notice that $\overline{A}$ can be obtained
from $A$ also by translating in the $2\times$ direction by half
a $2\times1$ cell. However, since the underlying substrate will have
at least $2\times1$ periodicity, this translation would not leave
the entire system (ZrO$_{2}$ with substrate) invariant.
\begin{figure}
\begin{centering}
\includegraphics[width=1\columnwidth]{Figure_ZrO2_AB. 5}
\par\end{centering}
\caption[The lowest energy configurations of the free standing ZrO$_{2}$ monolayer.]{\label{fig:ZrO2_AB}The lowest energy configurations of the free
standing ZrO$_{2}$ monolayer. Structure $B$ has an energy of $0.07$
eV per ZrO$_{2}$ above that of structure $A$. On the right, all
four geometrically distinct metastable configurations are shown. $\overline{A}$
and $\overline{B}$ are obtained from $A$ and $B$, respectively,
by reflection in the $z=0$ plane. For each structure, two copies
of the $2\times1$ unit cells are displayed and a vertical dashed
line separates the copies.}
\end{figure}
\subsubsection{Energy landscape of free standing monolayers}
In order to analyze these configurations further, we parametrize the
energy landscape of free standing ZrO$_{2}$ monolayers by using two
coordinates: $z_{1}\equiv z\left(\text{Zr}_{2}\right)-z\left(\text{Zr}_{1}\right)$
and $z_{2}\equiv z\left(\text{O}_{1}\right)-z\left(\text{Zr}_{1}\right)$,
where the atoms $\text{Zr}_{1}$, $\text{Zr}_{2}$ and $O_{1}$ are
labelled for structure $A$ in \figref{ZrO2_AB} (for structures $\overline{A}$,
$B$ and $\overline{B}$, Zr$_{1}$ is directly below Zr$_{1}$ of
structure $A$ in the figure, and similarly for Zr$_{2}$ and O$_{1}$).
Note that the structures $B$ and $\overline{B}$ are treated in $2\times1$
unit cells for this analysis. To explore the energy landscape, we
have made a $9\times9$ grid of $\left(z_{1},z_{2}\right)$ values
and computed corresponding energies for structures whose $z_{1}$
and $z_{2}$ are fixed but all other coordinates are relaxed. In \figref{ZrO2_barriers},
we plot the energy landscape using darker (lighter) colors to represent
lower (higher) energies. The coloring is implemented by MATLAB's linear
interpolation scheme based on the DFT energies on an equally spaced
$9\times9$ grid. We also label the four (meta)stable configurations
on the landscape. The energies are reported for $2\times1$ cells
where $E\left(A\right)=E\left(\overline{A}\right)=0$ is set as the
zero of energy.
In \figref{ZrO2_barriers} we also present the minimum energy transition
paths between these energy minima, as thick solid curves. We have
found these transitions using the NEB method with climbing images
\citep{henkelman2000aclimbing}. There are 6 pairs of metastable configurations
and hence 6 transition paths: $A\leftrightarrow\overline{A}$, $A\leftrightarrow B$,
$A\leftrightarrow\overline{B}$, $\overline{A}\leftrightarrow B$,
$\overline{A}\leftrightarrow\overline{B}$ and $B\leftrightarrow\overline{B}$.
However, as seen from the figure, the transition paths of $A\leftrightarrow\overline{A}$
and $B\leftrightarrow\overline{B}$ go through other energy minima
and hence can be expressed in terms of the remaining 4 transitions.
We have found that all of the four transitions go through a transition
state with energy $1.04$ eV per $2\times1$ cell. These four saddle
points, shown as diamond marks in \figref{ZrO2_barriers}, are related
by reflection and/or translation operations, and hence are physically
equivalent.
\begin{figure}
\begin{centering}
\includegraphics[width=1\columnwidth]{Figure_ZrO2_barriers. 2}
\par\end{centering}
\caption[The energy landscape of the free standing ZrO$_{2}$ monolayer, as
parametrized by a pair of coordinates.]{\label{fig:ZrO2_barriers}The energy landscape of the free standing
ZrO$_{2}$ monolayer, as parametrized by a pair of coordinates $z_{1}\equiv z\left(\text{Zr}_{2}\right)-z\left(\text{Zr}_{1}\right)$
and $z_{2}\equiv z\left(\text{O}_{1}\right)-z\left(\text{Zr}_{1}\right)$
(See \figref{ZrO2_AB} for labelings of the atoms). $a_{\text{lattice}}$
is the computed lattice constant of silicon and is equal to $3.87\ \text{\AA}$.
All four local energy minima as well as the minimum energy transition
paths between them are shown. The saddle points on the landscape (i.e.,
the transition states) are shown as diamonds. The zero of energy is
taken to be the energy of structure $A$. All transition states lie
at the same energy because they are related by reflection/translation
operations. The energy landscape is computed by DFT on a $9\times9$
grid and then interpolated by MATLAB to produce the smooth colored
plot.}
\end{figure}
To sum up, we have found that as a free standing monolayer in vacuum,
ZrO$_{2}$ is not polar but has two physically distinct stable configurations.
In the presence of a surface that breaks the $z\rightarrow-z$ symmetry,
$A$ and $\overline{A}$ (as well as $B$ and $\overline{B}$) have
the potential to relax to new configurations that are differently
polarized.
\subsection{ZrO$_{2}$ monolayers on Si(001)\label{subsec:ZrO2-on-Si}}
\subsubsection{Bare Si(001) surface}
To study the behavior of zirconia on Si(001), we first review the
structure of the bare Si(001) surface. It is well known that, on the
Si(001) surface, neighboring Si atoms pair up to form dimers \citep{ramstad1995theoretical,paz2001electron},
and we find that dimerization lowers the energy by $1.45$ eV per
dimer. The dimers can buckle (i.e., the two Si forming the dimer do
not have the same out-of-plane $z$ coordinate) which lowers their
energy. If nearby dimers buckle in opposite ways, higher order reconstructions
occur. We summarize the energies of these reconstructions in \tabref{Si_surf}
(we refer the reader to the cited works for detailed descriptions
of these surface configurations). There is a strong drive for the
surface Si atoms to dimerize (transition from a $1\times1$ to a $2\times1$
unit cell) and a weaker energetic drive to organize the dimers into
structures with periodicities larger than $2\times1$. Because the
metastable configurations of the ZrO$_{2}$ monolayers we found above
have unit cells that are $2\times1$ or smaller, we have limited our
search for Si/ZrO$_{2}$ interfaces to $2\times1$ simulation cells.
\begin{table}
\begin{centering}
\begin{tabular}{cccc}
\toprule
\addlinespace[0.3cm]
Si surface & Energy (eV/dimer) & Ref. \citep{ramstad1995theoretical} & Ref. \citep{paz2001electron}\tabularnewline\addlinespace[0.3cm]
\midrule
\addlinespace[0.1cm]
\midrule
flat $p(2\times1)$ & $\equiv0.00$ & $\equiv0.00$ & $\equiv0.00$\tabularnewline
\midrule
buckled $p(2\times1)$ & $-0.20$ & $-0.12$ & $-0.13$\tabularnewline
\midrule
buckled $p(2\times2)$ & $-0.28$ & $-0.17$ & $-0.23$\tabularnewline
\midrule
buckled $c(4\times2)$ & $-0.27$ & $-0.17$ & $-0.24$\tabularnewline
\bottomrule
\end{tabular}
\par\end{centering}
\caption[Energies of the lowest energy Si(001) surface reconstructions.]{\label{tab:Si_surf}Energies of the lowest energy Si(001) surface
reconstructions per dimer. Two theoretical references are presented
alongside our computed results. See the cited works for details of
the listed reconstructions.}
\end{table}
\subsubsection{Structure of the monolayers on silicon}
We have searched the configuration space for ZrO$_{2}$ on Si(001)
as follows: First, we have created a $3\times3\times2$ grid of points
inside the $2\times1$ in-plane unit cell on top of the bare Si surface
where a Zr atom is placed (the $3\times3$ grid corresponds to points
in the $xy$-plane and the $\times2$ corresponds to the vertical
distance from the substrate). A flat and high symmetry $1\times1$
zirconia monolayer is generated such that it includes this Zr atom.
For each such structure, the atoms in the Si surface layer and the
ZrO$_{2}$ monolayer are randomly and slightly displaced to generate
$5$ initial positions. This procedure, which yields $3\times3\times2\times5=90$
configurations, is done for dimerized and non-dimerized Si surfaces,
so that there are $180$ initial configurations in total. We have
then relaxed all the atoms in ZrO$_{2}$ and the top 4 layers of silicon
substrate to find local energy minima.
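The bookkeeping of this search is simple to script. The Python sketch
below reproduces the counting ($3\times3\times2\times5\times2=180$
starting points); the grid fractions, trial heights and displacement
amplitude are assumptions for illustration, not the values used in
our calculations:
\begin{verbatim}
import itertools
import numpy as np

rng = np.random.default_rng(0)
a = 3.87                                   # Si surface lattice constant (Angstrom)
xs = ys = np.linspace(0.0, 2.0 / 3.0, 3)   # fractional 3x3 grid in the 2x1 cell
zs = (1.5, 2.5)                            # trial Zr heights above Si (assumed)

configs = []
for dimerized in (True, False):
    for fx, fy, z in itertools.product(xs, ys, zs):
        for trial in range(5):
            zr = np.array([2 * a * fx, a * fy, z])
            configs.append((dimerized, zr + 0.1 * rng.standard_normal(3)))
print(len(configs))                        # 180
\end{verbatim}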
We present the five lowest energy structures we have obtained in \figref{SiZrO2_en}.
The horizontal axis is a quantity that describes the ionic polarization
of the ZrO$_{2}$ monolayer and is defined as the mean vertical Zr-O
separation $\delta z\equiv\overline{z\left(\text{Zr}\right)}-\overline{z\left(\text{O}\right)}$,
where over-bars mean averaging of the coordinate over the atoms of
that type in the structure. The vertical axis is the energy in eV
per $2\times1$ cell measured with respect to the lowest energy structure,
labeled $S1$. The energies of $S1$ through $S5$ are also listed
in \tabref{SiZrO2_en}.
\begin{figure}
\begin{centering}
\includegraphics[width=1\columnwidth]{Figure_SiZrO2_en. 2}
\par\end{centering}
\caption[Five lowest energy configurations of ZrO$_{2}$ monolayers on Si.]{\label{fig:SiZrO2_en}Five lowest energy configurations of ZrO$_{2}$
monolayers on Si. $\delta z\equiv\overline{z\left(\text{Zr}\right)}-\overline{z\left(\text{O}\right)}$
is a measure of ionic out-of-plane polarization for the monolayers.
Energies are listed in eV per $2\times1$ in-plane cell measured with
respect to the lowest energy structure $S1$.}
\end{figure}
\begin{table}
\begin{centering}
\begin{tabular}{cccccc}
\toprule
\addlinespace[0.3cm]
& $\ \ \ S1\ \ \ $ & $\ \ \ S2\ \ \ $ & $\ \ \ S3\ \ \ $ & $\ \ \ S4\ \ \ $ & $\ \ \ S5\ \ \ $\tabularnewline\addlinespace[0.3cm]
\midrule
\addlinespace[0.1cm]
\midrule
Energy & \multirow{2}{*}{$\equiv0.00$} & \multirow{2}{*}{0.07} & \multirow{2}{*}{0.14} & \multirow{2}{*}{0.50} & \multirow{2}{*}{0.69}\tabularnewline
(eV per $2\times1$ cell) & & & & & \tabularnewline
\bottomrule
\end{tabular}
\par\end{centering}
\caption[Energies of the five lowest energy configurations of ZrO$_{2}$ monolayers
on Si.]{\label{tab:SiZrO2_en}Energies of the five lowest energy configurations
of ZrO$_{2}$ monolayers on Si as labeled in \figref{SiZrO2_en}.}
\end{table}
First, the metastable configurations lie on both sides of the $\delta z=0$
line, which means that there is no polarization direction that is
strongly preferred. Second, we find that the four lowest energy structures
have a $2\times1$ periodicity with intact Si dimers. (In addition
to $S5$, we have found three more $1\times1$ structures with broken
dimers at energies higher than 1 eV that are not shown.) The energy
difference of $0.69$ eV per dimer between the lowest energy $1\times1$
and the lowest energy $2\times1$ structures (i.e. $S5$ and $S1$)
is half of the energy of dimerization on the bare Si surface. Moreover,
the length of the dimer in $S1$ is $2.42\ \text{\AA}$ which is longer
than the $2.31\ \text{\AA}$ on the bare surface. Therefore, in general,
the Si dimers are weakened but not broken by the ZrO$_{2}$ monolayer
for the more stable low-energy structures.
Third, we notice that for each configuration shown in \figref{SiZrO2_en},
a physically equivalent configuration is obtained by a mirror reflection
through a $yz$-plane, which doubles the number of metastable structures
in the configuration space. For our analysis of transitions between
these configurations, we make the reasonable assumption that silicon
dimers remain intact during the transition between two dimerized configurations.
Hence, we reflect the atomic positions through a $yz$-plane which
keeps the dimers in place in order to obtain the geometrically inequivalent
(but physically identical) set of structures $\overline{S1}$, $\overline{S2}$
etc.
\subsubsection{Transitions between low energy states}
We have computed the minimum energy transition paths between the three
lowest energy configurations and their symmetry related counterparts
($S1,\overline{S1},S2,\overline{S2},S3,\overline{S3}$). When applying
the NEB method to find transition states, each atom in the initial
configuration is mapped to an atom in the final configuration. In
principle, all possible matching choices should be attempted in order
to find all inequivalent transition paths and energy barriers. However,
this is neither practical nor physically necessary. For the case of
free standing ZrO$_{2}$, in all the minimum energy configurations,
all atomic $(x,y)$ coordinates line on a square grid, and by making
the reasonable assumption that atoms do not swap sites during the
transition, we can dramatically reduce the number of possible transition
paths under consideration. Hence, we matched each atom in the initial
configuration with the atom that sits at the same $(x,y)$ site in
the final configuration in order to perform the NEB calculations.
Even though no fixed square grid exists for the ZrO$_{2}$/Si case
that applies to all the configurations, similar considerations are
possible: (1) For the six configurations of interest, both Zr atoms
and two out of the four O atoms in a unit cell align along the $y$-direction
with the Si dimers ($y=0.5a_{\text{lat}}$), and the other two O atoms
lie half way between consecutive dimers ($y=0$). Both along the $x$-
and the $y$-directions, atomic chains of $\ldots$-Zr-O-Zr-O-$\ldots$
exist in all cases. So for each configuration, we can make a square
grid in the $xy-$plane such that one Zr per cell sits at a lattice
site and the other atoms are very close to the other lattice sites.
For each transition process, the grid is assumed only to shift in
the $x$-direction. (2) Because of the high energy cost of breaking
Si dimers on the bare Si(001) surface, we assume that the dimers remain
intact during a transition. (3) We assume that $\ldots$-Zr-O-Zr-O-$\ldots$
chains along the $y$-direction remain intact during a transition,
so no movement in the $y$-direction is considered.
By using these constraints, we can reduce the number of possible matchings
to four for each transition. We demonstrate these choices for the
transition $S1\rightarrow S2$ in \figref{SiZrO2_match}. The final
state $S2$ is displayed upside down in order allow for a clearer
illustration of atomic matchings. In the left panel, $\ldots$-Zr-O-Zr-O-$\ldots$
chains along the $y$-direction are circled by blue dashed rings.
There are two possible ways in which the chains in $S1$ can be matched
to the chains in $S2$ that do not cause large scale rearrangements.
One of these matchings is shown as solid arrows, and the other is
shown as dotted arrows. In the right panel, the same exercise is repeated
for the remaining oxygens (circled by red dashed rings). Therefore
there are $2\times2=4$ matchings in total. Note that the reverse
processes correspond to the set of matchings that obey our rules for
the transition $S2\rightarrow S1$.
\begin{figure}
\begin{centering}
\includegraphics[width=0.85\columnwidth]{Figure_SiZrO2_match. 5}
\par\end{centering}
\caption[The possible matchings for the $S1\rightarrow S2$ transition for
the NEB simulation.]{\label{fig:SiZrO2_match}The possible matchings for the $S1\rightarrow S2$
transition for the NEB simulation. The $S2$ structure is displayed
upside down to allow for ease of understanding the matching. In the
left panels, two possible choices for the two Zr-O pairs (or chains)
in the $S1$ unit cell that are to be matched to the Zr-O pairs (or
chains) in the structure $S2$ are shown. The set of solid arrows
corresponds to one choice, and the set of dotted lines corresponds
to another choice. Similarly, two choices for the remaining oxygens
are displayed in the right panels. See text for further details of
the described matchings. Two periodic copies of $2\times1$ cells
are shown in each case, and a dashed line is drawn to separate
the copies. }
\end{figure}
The resulting smallest energy barriers are listed in \tabref{SiZrO2_neb}.
Notice that the nine listed transitions cover all the possible transitions
because, e.g., the transition $S1\leftrightarrow\overline{S2}$ is
related by symmetry to $\overline{S1}\leftrightarrow S2$. We observe
that the barriers for transitions within the set of unbarred states are
about 1 eV lower than those for transitions between unbarred and barred states.
This is understood as follows: for all six structures, there is one
oxygen per cell which binds to a silicon atom. The transitions that
leave that oxygen in place (such as the dotted arrows in the right
panels of \figref{SiZrO2_match}) have lower energy barriers. A transition
between an unbarred state and a barred state necessarily involves
displacing that oxygen and breaking the strong Si-O bond. Therefore
a low energy path is not possible in such a case.
\begin{table}
\begin{centering}
\begin{tabular}{ccc}
\toprule
\addlinespace[0.3cm]
Transition & $\ \ E_{\text{barrier}}\left(\rightarrow\right)$ (eV) & $\ \ E_{\text{barrier}}\left(\leftarrow\right)$ (eV)\tabularnewline\addlinespace[0.3cm]
\midrule
\addlinespace[0.1cm]
\midrule
$S1\leftrightarrow\overline{S1}$ & 1.63 & 1.63\tabularnewline
\midrule
$S1\leftrightarrow S2$ & 0.79 & 0.71\tabularnewline
\midrule
$S1\leftrightarrow\overline{S2}$ & 1.60 & 1.52\tabularnewline
\midrule
$S1\leftrightarrow S3$ & 0.79 & 0.65\tabularnewline
\midrule
$S1\leftrightarrow\overline{S3}$ & 1.60 & 1.46\tabularnewline
\midrule
$S2\leftrightarrow\overline{S2}$ & 2.48 & 2.48\tabularnewline
\midrule
$S2\leftrightarrow S3$ & 0.23 & 0.17\tabularnewline
\midrule
$S2\leftrightarrow\overline{S3}$ & 1.57 & 1.51\tabularnewline
\midrule
$S3\leftrightarrow\overline{S3}$ & 1.77 & 1.77\tabularnewline
\bottomrule
\end{tabular}
\par\end{centering}
\caption[Transition barriers, calculated via the NEB method, between pairs
of low energy configurations of ZrO$_{2}$ monolayers on Si(001).]{\label{tab:SiZrO2_neb}Transition barriers, calculated via the NEB
method, between pairs of low energy configurations of ZrO$_{2}$ monolayers
on Si(001). Energy barriers are reported in eV per $2\times1$ cell.
The central and rightmost columns show the barriers going in both
directions (as indicated by the arrow directions).}
\end{table}
Focusing on the three low energy transitions, i.e. $S1\leftrightarrow S2$,
$S1\leftrightarrow S3$ and $S2\leftrightarrow S3$, we plot energy
vs $\delta z$ curves in \figref{SiZrO2_NEB}. The transition state
of $S2\leftrightarrow S3$ (dotted curve) and the shared transition
state of $S1\leftrightarrow S2$ and $S1\leftrightarrow S3$ (solid
curves) are marked by diamonds on the plot and their configurations
are displayed. During these transitions, the oxygen atom that is bonded
to a silicon (circled by red dashed rings in the figure) remains in
place, while the remaining 5 atoms in the ZrO$_{2}$ layer (inside
the blue dashed rounded rectangles) move in concert. Because this
movement does not significantly alter the chemistry of the interface,
the energy barriers are relatively low.
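To gauge which of these barriers matter at temperatures of practical
interest, one can attach simple Arrhenius estimates
$k=\nu\exp(-E_{b}/k_{\rm B}T)$ to the values in \tabref{SiZrO2_neb}.
In the Python sketch below, the attempt frequency
$\nu\sim10^{13}\,{\rm s}^{-1}$ is a typical phonon-scale assumption,
not a computed quantity:
\begin{verbatim}
import numpy as np

kT = 0.026   # eV, room temperature
nu = 1e13    # 1/s, assumed attempt frequency

barriers = {"S2 <-> S3": 0.23, "S1 -> S2": 0.79,
            "S1 -> S3": 0.79, "S1 <-> S1bar": 1.63}
for name, Eb in barriers.items():
    print(f"{name:14s} E_b={Eb:.2f} eV  k ~ {nu * np.exp(-Eb / kT):.1e} 1/s")
\end{verbatim}
On this estimate the low-barrier transitions among $S1$, $S2$ and
$S3$ occur readily at room temperature, while transitions to the
barred states are frozen out.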
\begin{figure}
\begin{centering}
\includegraphics[width=1\columnwidth]{Figure_SiZrO2_NEB. 1}
\par\end{centering}
\caption[Three lowest energy configurations of ZrO$_{2}$ monolayers on Si
and the transition paths between them calculated via the NEB method.]{\label{fig:SiZrO2_NEB}Three lowest energy configurations of ZrO$_{2}$
monolayers on Si and the transition paths between them calculated
via the NEB method. The solid curve corresponds to the transitions
$S1\leftrightarrow\left(S2,S3\right)$ that share a transition state
denoted by a red diamond. The dotted curve corresponds to the transition
$S2\leftrightarrow S3$ which has a transition state denoted by a
green diamond. The circled oxygen atoms remain in place during the
transitions, and the circled groups of five atoms move as a block
with small internal displacements.}
\end{figure}
Because of the rich landscape of stable configurations at low energy
with similar chemical bonding and small structural differences, we
predict that growing large single-crystalline epitaxial films of ZrO$_{2}$
on Si(001) should be challenging. However, epitaxy may not be a necessary
condition for ferroelectricity in this system. A close examination
of the structures shown in \figref{SiZrO2_NEB} indicates that the
symmetry of the silicon surface, as well as the inherently rumpled
structure of ZrO$_{2}$, give rise to the switchable polarization.
The switching of the dipole occurs by a continuous displacement of
a group of atoms in the unit cell, while one oxygen remains in place.
No significant chemical change occurs during these transitions. We
note that open channels in the dimerized (001) face of silicon allow
for the motion of the oxide atoms lacking silicon nearest neighbors,
which stabilizes the three low-energy polar ZrO$_{2}$ structures.
\subsubsection{Coupling of polarization to electronic states in Si}
In addition to the prediction that the three lowest energy structures
may coexist in monolayer form, in \subsecref{Domain} we will explain
why, at temperatures of practical interest, structures $S2$ and $S3$
should be the dominant motifs in the monolayer structure. Because
of the large difference in polarization together with a low energy
barrier between these two structures, we believe that the polarization
switching described in Ref. \citep{dogan2018singleatomic} should
correspond to switching between $S2$ and $S3$. A first and simple
corroboration involves showing that the change in the silicon Fermi
level observed in the experiment is comparable with our theoretical
prediction. In \figref{SiZrO2_DOS}, we plot the density of states
(DOS) of the ZrO$_{2}$/Si system projected onto an interior layer
of the Si substrate for the cases of interface structures $S2$ and
$S3$. We set the energy of the Si valence band edge (VBE) of $S2$
to zero and align the vacuum energy level in $S3$ to the vacuum
energy level in $S2$. We find a $0.6$ eV VBE shift in Si, which is somewhat
larger than, but comparable to, the experimental value of $0.4$ eV.
We believe that this is due to the fact that the experimental monolayers
are not epitaxial but amorphous with multiple structural motifs present,
so that application of the electric field is not as effective at polarization
switching as is assumed in our clean, epitaxial and ordered theoretical
simulations.
\begin{figure}
\begin{centering}
\includegraphics[width=0.9\columnwidth]{Figure_SiZrO2_DOS}
\par\end{centering}
\caption[Density of states in an interior Si layer with the ZrO$_{2}$ film
in its upwardly polarized and downwardly polarized forms.]{\label{fig:SiZrO2_DOS}Density of states in an interior Si layer
with the ZrO$_{2}$ film in its upwardly polarized ($S2$) and downwardly
polarized ($S3$) forms. There exists a valence band edge (VBE) shift
between the \textquotedbl up\textquotedbl{} state (top) and the \textquotedbl down\textquotedbl{}
state (bottom). This figure is reproduced from Ref. \citep{dogan2018singleatomic}.}
\end{figure}
\subsection{Domain energetics\label{subsec:Domain}}
Up to this point, our theoretical study of the ZrO$_{2}$ monolayers
on the Si(001) surface has shown that (meta)stable configurations
with varying polarizations are present. We have also demonstrated
that transitions between some of the lowest energy configurations
do not require complicated rearrangements of atoms and have low energy
barriers. Because of these findings, as well as the fact that the
experimental monolayer is amorphous, we expect there to be a multi-domain
character to these monolayers at or near room temperature ($k_{\text{B}}T=0.026$ eV).
However, directly calculating the energy of a multi-domain region
of the system for an area larger than a few primitive unit cells is
not feasible. In this section, we describe an approximate model Hamiltonian
method to compute the energies of arbitrary regions of multiple domains,
and use Monte Carlo simulations to find thermodynamic ground states
at finite temperatures.
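To illustrate the type of calculation we have in mind before
specifying its ingredients, the Python sketch below performs
single-site Metropolis updates on a three-state lattice whose states
stand for $S1$, $S2$ and $S3$, with the on-site energies of
\tabref{SiZrO2_en}. The uniform wall energy $J$ is a placeholder for
the direction-dependent wall energies computed below, and single-site
dynamics equilibrates slowly at low temperature, which is one reason
the actual model is solved with a cluster method:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
L, kT = 32, 0.026                        # lattice size; eV
E_site = np.array([0.00, 0.07, 0.14])    # eV per 2x1 cell: S1, S2, S3
J = 0.15 * (1.0 - np.eye(3))             # eV per wall (placeholder value)
state = rng.integers(0, 3, size=(L, L))

def local_energy(s, i, j, val):
    """On-site energy plus the four wall energies around site (i, j)."""
    nbrs = (s[(i + 1) % L, j], s[(i - 1) % L, j],
            s[i, (j + 1) % L], s[i, (j - 1) % L])
    return E_site[val] + sum(J[val, n] for n in nbrs)

for sweep in range(200):
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        new = rng.integers(0, 3)
        dE = local_energy(state, i, j, new) \
            - local_energy(state, i, j, state[i, j])
        if dE <= 0 or rng.random() < np.exp(-dE / kT):
            state[i, j] = new

print("fractions S1,S2,S3:", np.bincount(state.ravel(), minlength=3) / L**2)
\end{verbatim}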
\subsubsection{Domain wall energies}
In order to investigate the behavior of finite domains, we have developed
a lattice model where every $2\times1$ in-plane cell is treated as
a site in a two dimensional lattice which couples to its neighbors
via an interaction energy. Similar models have been proposed for other
two dimensional systems \citep{bune1998twodimensional}. Such a model
is reasonable if the interface (domain wall) between domains of different
states is sharp, i.e., the atomic positions a few unit cells away
from a domain boundary are indistinguishable from the atomic positions
in the center of the domain. To find the degree of locality and the
energy costs of the domain walls, we have computed domain wall energies
as a function of domain size.
Sample simulation arrangements are shown in \figref{SiZrO2_latt_cell}.
In (a) and (b), domain walls along the $y$- and $x$-directions are
formed, respectively, between the configurations $S1$ and $S2$.
Three unit cells of $S1$ and $S2$ each are generated and attached
together to build larger simulation cells to model the domain walls:
$12\times1$ and $2\times6$ cells to simulate the domain boundaries
along the $y$- and $x$-directions, respectively. In each of the
3 unit wide domains, the center unit is fixed to the atomic configuration
of the corresponding uniform system. In \figref{SiZrO2_latt_cell},
for the $S1$ domain, the atoms in the unit labelled $S1$ are fixed,
and the atoms in the units $S1l$ and $S1r$ are relaxed. The same
is true for $S2$, but for clarity, fixed units of $S2$ are displayed
on both sides. We then compute the domain wall energy between $S1$
and $S2$ by subtracting $3E\left(S1\right)+3E\left(S2\right)$ from
the total energy of this supercell and dividing by two. We have checked
for a few test cases that increasing the domain width from 3 to 5
cells changes the domain wall energies by small amounts on the order
of 1-10 meV while typical domain wall energies are larger than 100
meV (see \tabref{SiZrO2_latt_J}). This, together with visualization
of the resulting structures, convinces us that the domains are sufficiently
local for us to treat the domain walls as being sharp. Note that there
are two inequivalent boundaries between $S1$ and $S2$ along a given
direction. In \figref{SiZrO2_latt_cell}, these boundaries are shown
as red and blue dashed lines. Due to the periodicity of simulation
cells, it is not possible to compute the energies of these two boundaries
independently, so we are forced to assume that their energies are
equal.
\begin{figure}
\begin{centering}
\includegraphics[width=1\columnwidth]{Figure_SiZrO2_latt_cell. 1}
\par\end{centering}
\caption[Simulation arrangements to compute the domain boundary energies between
$S1$ and $S2$.]{\label{fig:SiZrO2_latt_cell}Simulation arrangements to compute the
domain boundary energies between $S1$ and $S2$. (a) 3 cells each
of $S1$ and $S2$ are stacked along the $x$-direction to form straight
domain boundaries in the $y$-direction. The numberings of atomic
groups within the unit cells are displayed using dashed circles. The
boundary on the right (blue) is initially built by the atomic groups
1, 2 and 3 from $S1$ and $4$ from $S2$ in the unit cell to the
left of the boundary (labelled $S1r$), and the atomic groups 1, 2,
3 and 4 from $S2$ in the unit cell to the right of the boundary (labelled
$S2l$). The boundary on the left (red) is constructed to preserve
the number of atomic groups from each cell. (b) 3 cells each of $S1$
and $S2$ are stacked along the $y$-direction to form straight domain
boundaries in the $x$-direction. Fully relaxed boundary configurations
are shown.}
\end{figure}
The final step in determining the domain boundary energies is to survey
the configuration space available for a given boundary. For that purpose,
for each domain boundary we have generated a number of initial configurations
depending on the direction of the boundary:
\begin{itemize}
\item For a boundary along the $y$-direction such as in \figref{SiZrO2_latt_cell}(a),
we have generated five initial configurations as follows. For each
domain state (e.g., $S1$ or $S2$), we have labeled the Zr-O pairs
along the $y$-direction and the remaining oxygens and numbered them
in an increasing order in the $x$-direction. In the figure, the labelling
for states $S1$ and $S2$ is shown. Note that for each cell, the
sequence starts with a Zr-O pair and ends with an O atom. Hence in
some cases the oxygen labelled 4 lies beyond the unit cell to which
it belongs, such as in $S2$. To build a domain boundary such as the
$S1r$-$S2l$ (shown as a blue dashed line), we first place the atomic
groups numbered $1-4$ from $S1$ to the left hand side of the boundary,
and the atomic groups numbered $1-4$ from $S2$ to the right hand
side of the boundary. This constitutes our first initial configuration.
The second configuration is obtained by replacing atom $4$ from $S1$
on the left hand side by atom $4$ from $S2$. The third is obtained
by replacing both group $3$ and atom $4$ from $S1$ by $3$ and
$4$ from $S2$. The fourth choice is to replace atomic group $1$
from $S2$ on the right hand side by group $1$ from $S1$; and, lastly,
the fifth choice is to replace $1$ and $2$ from $S2$ by $1$ and
$2$ from $S1$. The opposite operation is performed at the other
boundary such as $S2r$-$S1l$ (shown as a red dashed line). We then
take the smallest of the five computed domain energies as the final
energy. Note that the relaxed structure shown in the \figref{SiZrO2_latt_cell}(a)
for the $S1$-$S2$ domain boundaries is obtained via choice $\#2$
for the $S1r$-$S2l$ boundary.
\item For a boundary along the $x$-direction such as in \figref{SiZrO2_latt_cell}(b),
we have generated a few initial configurations by slightly and randomly
displacing the two oxygen atoms at the boundary along the $y$-direction
in order to break the $y\rightarrow-y$ symmetry inherent to these
structures.
\end{itemize}
\subsubsection{Construction of a lattice model}
Once we have the library of domain boundary energies for every pair
of states along the $x$- and $y$-directions described above, we
approximate the energy of the system with an arbitrary configuration
of domains by a two-dimensional anisotropic lattice Hamiltonian on
a square lattice:
\begin{eqnarray}
H & = & \sum_{i,j}E\left(\sigma\left(i,j\right)\right)+\sum_{i,j}J_{x}\left(\sigma\left(i,j\right),\sigma\left(i+1,j\right)\right)\nonumber \\
& & +\sum_{i,j}J_{y}\left(\sigma\left(i,j\right),\sigma\left(i,j+1\right)\right),
\end{eqnarray}
where $\sigma\left(i,j\right)$ denotes the state at lattice site $\left(i,j\right)$,
$E\left(\sigma\left(i,j\right)\right)$ is the energy (per $2\times1$
unit cell) of that state for a uniform system, and
$J_{\alpha}\left(\sigma,\sigma'\right)$
is the energy of interaction (i.e., the domain wall energy) between
neighboring states $\sigma,\sigma'$ along the axial direction $\alpha$. In our
model, only nearest neighbor interactions are included. Because of
the anisotropic nature of the film (the $x$- and $y$-directions
are fundamentally different due to Si dimerization), the interaction
term must distinguish between directions $x$ and $y$ so that $J_{x}$
and $J_{y}$ differ. The domain boundary energies calculated via DFT
simulations are employed as nearest neighbor interaction energies
in this model. In \figref{SiZrO2_lattice}, we illustrate an arbitrary
configuration of such a lattice. As an example, the state $S1$ in
the middle column couples to $\overline{S1}$ and $S3$ via $J_{x}\left(S1,\overline{S1}\right)$
and $J_{x}\left(S1,S3\right)$, respectively, and to $S2$ and $\overline{S2}$
via $J_{y}\left(S1,S2\right)$ and $J_{y}\left(S1,\overline{S2}\right)$,
respectively.
\begin{figure}
\begin{centering}
\includegraphics[width=0.9\columnwidth]{Figure_SiZrO2_lattice. 1}
\par\end{centering}
\caption[An example configuration of the two dimensional lattice that approximates
the ZrO$_{2}$ monolayer on Si as a multi-domain system.]{\label{fig:SiZrO2_lattice}An example configuration of the two dimensional
lattice that approximates the ZrO$_{2}$ monolayer on Si as a multi-domain
system. Nearest neighbor sites couple through the coefficients $J_{x}$
(blue arrows) and $J_{y}$ (green arrows).}
\end{figure}
For a model with $N$ distinct states, our interaction matrices $J_{\alpha}$
($\alpha=x,y$) have the following properties:
\begin{itemize}
\item The interaction energy between the sites of the same kind is zero
by definition, $J_{\alpha}\left(\sigma_{i},\sigma_{i}\right)=0$.
Hence the number of non-zero entries is $N^{2}-N$.
\item We have assumed that the domain wall energy between states $\sigma_{i}$
and $\sigma_{j}$ remains the same if we swap the states. Therefore
the interaction matrices are symmetric $J_{\alpha}\left(\sigma_{i},\sigma_{j}\right)=J_{\alpha}\left(\sigma_{j},\sigma_{i}\right)$,
reducing the number of unique non-zero entries to $\frac{1}{2}\left(N^{2}-N\right)$.
\item In our particular system, every state has a counterpart which is obtained
by the reflection $x\rightarrow-x$. Hence, e.g., the domain wall
between $\overline{S1}$ and $\overline{S2}$ can be obtained from
the domain wall between $S1$ and $S2$ by applying a single symmetry
operation. Therefore many of the entries of $J_{\alpha}\left(\sigma_{i},\sigma_{j}\right)$
are paired up in this way, which further reduces the number of unique
entries to $\frac{1}{4}N^{2}$.
\end{itemize}
In \tabref{SiZrO2_latt_J}, we list the unique entries of $J_{\alpha}\left(\sigma_{i},\sigma_{j}\right)$
for states $\sigma$ ranging over the six lowest energy states.
Note that since $N=6$ for this table, there are $\frac{1}{4}6^{2}=9$
entries in the table. Because the unit cell is $2\times1$, the couplings
$J_{x}$ are expected to be smaller than the couplings $J_{y}$, which
is generally correct. We have computed the domain wall energies for
more pairs of states, including $S4$, $\overline{S4}$, $S5$ and
$\overline{S5}$, and the longer list of resulting domain wall energies
(see Supplementary Material) is included in our treatment of the
lattice model below.
\begin{table}
\begin{centering}
\begin{tabular}{ccc}
\toprule
\addlinespace[0.3cm]
Domain boundary & $\ \ J_{x}$ (eV)$\ \ $ & $\ \ J_{y}$ (eV)$\ \ $\tabularnewline\addlinespace[0.3cm]
\midrule
\addlinespace[0.1cm]
\midrule
$S1,\overline{S1}$ & 0.26 & 1.35\tabularnewline
\midrule
$S1,S2$ & 0.76 & 1.13\tabularnewline
\midrule
$S1,\overline{S2}$ & 0.96 & 0.99\tabularnewline
\midrule
$S1,S3$ & 0.61 & 4.81\tabularnewline
\midrule
$S1,\overline{S3}$ & 0.44 & 1.75\tabularnewline
\midrule
$S2,\overline{S2}$ & 0.38 & 1.64\tabularnewline
\midrule
$S2,S3$ & 0.17 & 0.98\tabularnewline
\midrule
$S2,\overline{S3}$ & 0.01 & 0.91\tabularnewline
\midrule
$S3,\overline{S3}$ & 0.73 & 0.002\tabularnewline
\bottomrule
\end{tabular}
\par\end{centering}
\caption[Domain boundary energies computed from first principles.]{\label{tab:SiZrO2_latt_J}Domain boundary energies between low-energy
states as computed from first principles. These energies, along with
the couplings that include the states $S4$, $\overline{S4}$, $S5$
and $\overline{S5}$ reported in Table 1 of the Supplementary Material,
serve as the couplings of nearest neighbors in our lattice model.}
\end{table}
We notice that some of the values in \tabref{SiZrO2_latt_J}, namely
$J_{x}\left(S2,\overline{S3}\right)$ and $J_{y}\left(S3,\overline{S3}\right)$,
are very small, which is expected to be a significant factor in the
finite temperature behavior of our model. We demonstrate the domain
wall that corresponds to $J_{y}\left(S3,\overline{S3}\right)$ in
\figref{SiZrO2_dom_S3S3b} via a top view. Because one of the $\ldots$-Zr-O-Zr-O-$\ldots$
chains along the $y$-direction in the $S3$ unit cell is approximately
aligned with the valley between consecutive Si dimers along the $x$-direction,
it is approximately unchanged under the $S3\rightarrow\overline{S3}$
transformation. Therefore when $S3$ and $\overline{S3}$ cells are
attached in the $y$-direction, continuous and linear $\ldots$-Zr-O-Zr-O-$\ldots$
chains are obtained (the top and bottom black horizontal straight
lines in \figref{SiZrO2_dom_S3S3b}). The remaining $\ldots$-Zr-O-Zr-O-$\ldots$
chain in the unit cells matches imperfectly, but the distortion is
small (the winding black horizontal curve in the middle in \figref{SiZrO2_dom_S3S3b})
such that the only atom with a slightly modified environment is one
of the oxygen atoms at the domain boundary (encircled with a red dashed
ring in the figure). This near-perfect meshing of the $\ldots$-Zr-O-Zr-O-$\ldots$
chains after stacking the $S3$ and $\overline{S3}$ structures along
the $y$-direction is the cause of the very small energy cost of creating
the domain boundary.
\begin{figure}
\begin{centering}
\includegraphics[width=0.9\columnwidth]{Figure_SiZrO2_S3S3b. 1}
\par\end{centering}
\caption[Top view of the domain boundaries along the $x$-direction between
$S3$ and $\overline{S3}$, computed by stacking 3 unit cells of each
structure.]{\label{fig:SiZrO2_dom_S3S3b}Top view of the domain boundaries along
the $x$-direction between $S3$ and $\overline{S3}$, computed by
stacking 3 unit cells of each structure along the $y$-direction.
The domain energy, computed to be $J_{y}\left(S3,\overline{S3}\right)=0.002\ \text{eV}$
per unit length, is very small due to the near-perfect meshing of
the $\ldots$-Zr-O-Zr-O-$\ldots$ chains in this configuration.}
\end{figure}
The model we have built is a general discrete lattice model that resembles
two dimensional Ising models and, more generally, Potts models \citep{wu1982thepotts}.
However, due to the lack of any simple pattern in site energies and
couplings, it does not belong to any analytically solvable category
of models.
\subsubsection{Mean-field approach}
To understand the thermodynamic behavior of this model at finite temperature,
we begin with the standard mean-field approach which is based on the
assumption that every site interacts in an averaged manner with its
neighboring sites. For a model with $N$ states $\sigma_{1},\sigma_{2},\ldots\sigma_{N}$,
every site has a probability $p\left(\sigma_{i}\right)$ of being
occupied by state $\sigma_{i}$. In mean field theory, the energy
of such a site including its interactions with its nearest neighbors
is given by
\begin{eqnarray}
U\left(\sigma_{i}\right) & = & E\left(\sigma_{i}\right)+2\sum_{j=1}^{N}p\left(\sigma_{j}\right)J_{x}\left(\sigma_{i},\sigma_{j}\right)\nonumber \\
& & +2\sum_{j=1}^{N}p\left(\sigma_{j}\right)J_{y}\left(\sigma_{i},\sigma_{j}\right).\label{eq:U}
\end{eqnarray}
The probability $p\left(\sigma_{i}\right)$ is given by the
Boltzmann factor so that
\begin{equation}
p\left(\sigma_{i}\right)=\frac{\exp\left(-\frac{U\left(\sigma_{i}\right)}{k_{\text{B}}T}\right)}{Z},
\end{equation}
where
\begin{equation}
Z=\sum_{j=1}^{N}\exp\left(-\frac{U\left(\sigma_{j}\right)}{k_{\text{B}}T}\right)
\end{equation}
is the mean-field partition function.
These equations form a self-consistent system of $N$ equations for
$p\left(\sigma_{i}\right)$ for a given temperature $T$ and the specified
energies $E(\sigma_{i})$ and couplings $J_{x}$, $J_{y}$. We present
the solutions of this system of equations for temperatures $k_{\text{B}}T$ ranging
from 0.1 to 3.0 eV in \figref{SiZrO2_latt_MF}. We find
that there is a first-order phase transition at a very high temperature
of $k_{\text{B}}T=1.4$ eV ($\sim$16,000 K). Below this temperature,
one of the two degenerate ground states ($S1$ or $\overline{S1})$
occupies nearly all the sites (i.e., spontaneous symmetry breaking).
Above the transition temperature, the ground states are suppressed
and the lattice gets filled by the remaining states with approximately
equal contributions. At very high temperature (not shown in the figure),
all states have equal probability, as expected.
\begin{figure}
\begin{centering}
\includegraphics[width=1\columnwidth]{Figure_SiZrO2_latt_MF. 2}
\par\end{centering}
\caption{\label{fig:SiZrO2_latt_MF}Probabilities of finding a type of state
at an arbitrary site vs temperature, as computed by the mean-field
equations for our lattice model.}
\end{figure}
It is known that in simpler two dimensional lattice problems, the
mean-field approximation predicts correctly the existence of a phase
transition but overestimates the critical temperature \citep{neto2006anisotropic}.
The mean-field approach assumes that each site interacts with all
its neighbors in an uncorrelated fashion and neglects the fact that
correlation lengths are finite. Moreover, as seen from \eqref{U},
the mean-field equations sum over all neighbors and end up providing
``isotropic solutions'' (i.e., the $x$ and $y$ directions become
equivalent), which is a serious shortcoming given the major role
anisotropy is expected to, and does, play in our system (see \tabref{SiZrO2_latt_J}).
In summary, we expect these mean field theory predictions to be informative
but not quantitatively accurate.
\subsubsection{Monte Carlo simulations}
For a better understanding of our model at temperatures of practical
interest, we have employed classical Monte Carlo simulations with
a modified version of the Wolff cluster algorithm \citep{swendsen1987nonuniversal,wolff1989collective}
that we have developed. For further details of the method, we refer
the reader to the Supplementary Material. We have run simulations
in a $50\times150$ lattice with free boundary conditions (i.e., the
lattice is a finite-sized system with zero couplings beyond the edges;
comparison to periodic boundary conditions showed no discernible differences
for this lattice size at the temperatures examined below) and completely
random initial conditions, for $k_{\text{B}}T=$ 0.016, 0.032, 0.064,
0.128, 0.256 and 0.512 eV. We have used a non-square simulation lattice
because of the larger couplings in the $y$-direction compared to
the $x$-direction, which lead to longer correlation lengths in the
$y$-direction (see below). In \figref{SiZrO2_latt_MC_ss}, a sample
configuration of a well-thermalized simulation with $k_{\text{B}}T=0.016\ \text{eV}$
($T=186\ \text{K}$) is displayed.
\begin{figure}
\begin{centering}
\includegraphics[width=0.8\columnwidth]{Figure_SiZrO2_latt_MC_ss. 3}
\par\end{centering}
\caption{\label{fig:SiZrO2_latt_MC_ss} A snapshot of the Monte Carlo simulation
of the lattice model at $k_{\text{B}}T=0.016\ \text{eV}$ ($T=186\ \text{K}$).
On the left edge of the simulation frame, a series of domain walls
along the $x$-direction between $S3$ and $\overline{S3}$ domains
are emphasized by black arrows.}
\end{figure}
In \figref{SiZrO2_latt_MC_corr}, the autocorrelation functions $C_{\text{auto}}^{(k)}\left(t\right)$
as a function of simulation step (``time'' $t$) and the horizontal
and vertical spatial correlation functions $C_{x}^{(k)}\left(\Delta i\right)$
and $C_{y}^{(k)}\left(\Delta j\right)$ are plotted for each state
$k$ for one particular Monte Carlo run. These correlation functions
are defined as
\begin{eqnarray}
C_{\text{auto}}^{\left(k\right)}\left(\Delta t\right) & = & \underset{i,j,t}{\text{mean}}\left[\left\langle \sigma_{k}\left(i,j,t\right)\sigma_{k}\left(i,j,t+\Delta t\right)\right\rangle \right.\nonumber \\
& & -\left.\left\langle \sigma_{k}\left(i,j,t\right)\right\rangle \left\langle \sigma_{k}\left(i,j,t+\Delta t\right)\right\rangle \right],\label{eq:CorTime}
\end{eqnarray}
\begin{eqnarray}
C_{x}^{\left(k\right)}\left(\Delta i\right) & = & \underset{i,j,t}{\text{mean}}\left[\left\langle \sigma_{k}\left(i,j,t\right)\sigma_{k}\left(i+\Delta i,j,t\right)\right\rangle \right.\nonumber \\
& & \left.-\left\langle \sigma_{k}\left(i,j,t\right)\right\rangle \left\langle \sigma_{k}\left(i+\Delta i,j,t\right)\right\rangle \right],\label{eq:CorX}
\end{eqnarray}
\begin{eqnarray}
C_{y}^{\left(k\right)}\left(\Delta j\right) & = & \underset{i,j,t}{\text{mean}}\left[\left\langle \sigma_{k}\left(i,j,t\right)\sigma_{k}\left(i,j+\Delta j,t\right)\right\rangle \right.\nonumber \\
& & \left.-\left\langle \sigma_{k}\left(i,j,t\right)\right\rangle \left\langle \sigma_{k}\left(i,j+\Delta j,t\right)\right\rangle \right],\label{eq:CorY}
\end{eqnarray}
where $\sigma_{k}\left(i,j,t\right)$ identifies the state at the
lattice site ($i,j$) at the simulation time step $t$. We have defined
10 functions $\sigma_{k}\left(i,j,t\right)$ (one for each state $k$)
such that $\sigma_{k}\left(i,j,t\right)=1$ if the lattice site ($i,j$)
is occupied by state $k$ at time $t$ and is 0 otherwise. In \figref{SiZrO2_latt_MC_corr},
the correlation functions for every type of state ($S1$, $\overline{S1}$,
etc.) are computed separately and overlaid.
\begin{figure}
\begin{centering}
\includegraphics[width=1\columnwidth]{Figure_SiZrO2_latt_MC_corr. 3}
\par\end{centering}
\caption[Temporal and spatial correlation functions of all 10 types of states
for a Monte Carlo simulation for $k_{\text{B}}T=0.016\ \text{eV}$.]{\label{fig:SiZrO2_latt_MC_corr}Temporal and spatial correlation
functions for all 10 states for a Monte Carlo simulation with $k_{\text{B}}T=0.016\ \text{eV}$.
(a) Temporal correlation (autocorrelation) functions as defined by
\eqref{CorTime}, (b) correlation functions along the $x$-direction
as per \eqref{CorX}, and (c) correlation functions along the $y$-direction
as per \eqref{CorY}. }
\end{figure}
We observe that for the run exemplified by \figref{SiZrO2_latt_MC_ss}
and analyzed in \figref{SiZrO2_latt_MC_corr}, (1) a 1000 step Monte
Carlo simulation leads to decorrelation (i.e., equilibration) of states
$S1$, $\overline{S1}$, $S4$, $\overline{S4}$, $S5$ and $\overline{S5}$
but not for $S2$, $\overline{S2}$, $S3$ and $\overline{S3}$. (2)
The simulation cell of size $50\times150$ is successful in containing
the domains that form at this temperature since the spatial correlations
become quite small by the half-way point along each direction of the
simulation cell: sites that are sufficiently far from each other are
not correlated. We have repeated these simulations 10 times for each
temperature and have found that the correlation functions behave similarly
when the initial state of the simulation cell is chosen randomly.
For temperatures higher than $0.128$ eV, all temporal correlations
decay below $0.1$ in the duration of the simulation.
The reason behind the slow temporal decay of the $S2$, $\overline{S2}$,
$S3$ and $\overline{S3}$ autocorrelations at low temperatures is
that large domains of these states form in the lattice, and the Monte
Carlo algorithm becomes inefficient in ``flipping'' these domains
to another configuration. To see what other effects are present in
these simulations, we monitor two other quantities displayed in \figref{SiZrO2_latt_MC_other}.
The first is the probability that any lattice site is occupied by
a particular state: we show the ratio of the number of sites occupied
by a particular state to the total number of sites in the simulation
cell. The second quantity is the average domain size for each state:
this is computed for each snapshot at a fixed time by first determining
all the domains of that state (including domains with only one site),
and then dividing the total number of sites occupied by the state
by the number of domains. A large jump in the second quantity during
the simulation usually indicates a merger of two domains. The fact
that these quantities change quickly at the beginning of the simulation
and more slowly toward the end of the simulation in \figref{SiZrO2_latt_MC_other}
is indicative that the characteristics seen in \figref{SiZrO2_latt_MC_ss}
are representative of large volumes of the configuration space sampled
with the Boltzmann distribution at $k_{\text{B}}T=0.016\ \text{eV}$
(186 K): namely, while the lattice system has not fully equilibrated,
i.e., the temporal correlations have not decayed to very small values,
it is not very far from equilibrium either. Hence, these results show
that at this low temperature, the lattice system should be dominated
by large domains of $S2$ and $\overline{S2}$ followed by smaller
domains of $S3$ and $\overline{S3}$.
\begin{figure}
\begin{centering}
\includegraphics[width=0.9\columnwidth]{Figure_SiZrO2_latt_MC_other. 3}
\par\end{centering}
\caption[Probabilities of finding a state at an arbitrary site and average
domain sizes of each state, as they evolve during a Monte Carlo simulation
for $k_{\text{B}}T=0.016\ \text{eV}$.]{\label{fig:SiZrO2_latt_MC_other}Probabilities of finding a state
at an arbitrary site (a) and average domain sizes of each state (b),
as they evolve during a Monte Carlo simulation for $k_{\text{B}}T=0.016\ \text{eV}$.}
\end{figure}
We now return to the mean field prediction that at temperatures lower
than $1.4$ eV the system should be dominated by either one of the
ground states. Clearly, this prediction is not supported by our Monte
Carlo simulations, which show that for $k_{B}T\gtrsim0.5$
eV there is no long range order. In \figref{SiZrO2_latt_MC_CorLen},
we plot the correlation lengths $\xi_{x}$ and $\xi_{y}$ along the
$x$- and $y$-directions, respectively. The correlation lengths are
calculated by fitting the spatial correlation functions $C_{x}^{\left(k\right)}\left(\Delta x\right)$
and $C_{y}^{\left(k\right)}\left(\Delta y\right)$ to exponentials
of the form $A\exp\left(-\Delta\alpha/\xi_{\alpha}\right)$. We calculate
the correlation lengths (averaged over all states) for each run and
then average over all runs at a given temperature. As indicated by
the temperature dependence of the correlation length $\xi_{y}$, the
system gradually becomes more ordered as the temperature is increased
up to $0.128\ \text{eV}$, and then becomes disordered. Such behavior
is associated with a second order phase transition in which correlation
lengths diverge upon approaching the critical temperature. If such
a critical temperature is present in this system, it lies between
$0.128\ \text{eV}$ ($\sim1500\ \text{K}$) and $0.256\ \text{eV}$
($\sim3000\ \text{K}$). Because the melting temperature of silicon
is $\sim1700\ \text{K}$, it is likely impossible to approach this
critical temperature in practice. Hence, it is safe to assume that
for the relevant experimental conditions ($T<1000\ \text{K}$), the
monolayer system is well within the ordered phase.
\begin{figure}
\begin{centering}
\includegraphics[width=0.9\columnwidth]{Figure_SiZrO2_latt_MC_CorLen. 3}
\par\end{centering}
\caption[Correlation lengths along the $x$- and $y$-directions vs temperature.]{\label{fig:SiZrO2_latt_MC_CorLen}Correlation lengths along the $x$-
and $y$-directions vs temperature. Each data point is obtained by
fitting an exponential decay function to spatial correlation functions
for each run at a given temperature, and then averaging the results
of the fit for all the runs at that temperature.}
\end{figure}
Finally, we comment on qualitative characteristics of the multi-domain
structure of these films based on our lattice model. In \figref{SiZrO2_latt_MC_Prob},
we display the probability for a site to be occupied by each state
as a function of temperature, where the probability values are averaged
over the last quarter of each run, and then further averaged over
10 runs. The data show that the system is dominated by the second
and the third lowest energy configurations ($S2,S3,\overline{S2},\overline{S3}$).
As discussed above, we believe that this is due to the rather low
couplings $J_{x}\left(S2,\overline{S3}\right)$ and $J_{y}\left(S3,\overline{S3}\right)$
when compared to the other couplings in \tabref{SiZrO2_latt_J}. Namely,
these domain walls are not very costly energetically, so their entropic
contribution is significant even at low temperatures and stabilizes
these phases even though they are not the lowest energy states.
\begin{figure}
\begin{centering}
\includegraphics[width=1\columnwidth]{Figure_SiZrO2_latt_MC_Prob. 2}
\par\end{centering}
\caption[Probabilities of finding a state at an arbitrary site vs temperature,
as computed by Monte Carlo simulations.]{\label{fig:SiZrO2_latt_MC_Prob}Probabilities of finding a state
at an arbitrary site vs temperature, as computed by Monte Carlo simulations.
For each temperature, the probabilities are averaged over the last
quarter of each run, and then further averaged over 10 runs. }
\end{figure}
In \figref{SiZrO2_latt_MC_Patches} we display the average domain
size of each state vs temperature, again averaged over 10 runs for
each temperature. We find that, on average, the domains of states
$S2$ and $\overline{S2}$ are larger than the domains of states $S3$
and $\overline{S3}$, even though they occupy similar portions of
the simulation cell (see \figref{SiZrO2_latt_MC_Prob}). This may
be because $J_{y}\left(S3,\overline{S3}\right)=0.002\ \text{eV}$
so that the $S3$ and $\overline{S3}$ domains easily form vertical stacks
at essentially no energetic cost, as exemplified in \figref{SiZrO2_latt_MC_ss}:
some of these stacks are emphasized by black arrows on the left edge
of the figure, but there are many more in the interior of the simulation
cell.
\begin{figure}
\begin{centering}
\includegraphics[width=1\columnwidth]{Figure_SiZrO2_latt_MC_Patches. 3}
\par\end{centering}
\caption[Average domain size for each type of state vs temperature, as computed
by Monte Carlo simulations.]{\label{fig:SiZrO2_latt_MC_Patches}Average domain size for each type
of state vs temperature, as computed by Monte Carlo simulations. For
each temperature, the domain sizes are averaged over the last quarter
of each run, and then further averaged over 10 runs.}
\end{figure}
To sum up, according to our discrete lattice model simulations, for
$2\times1$ ordered ZrO$_{2}$ monolayers on the Si(001) surface and
the experimentally relevant temperature range of $200-1000\ \text{K}$,
domains of $S2$, $\overline{S2}$, $S3$ and $\overline{S3}$ should
be expected to occur with linear extents ranging from a few to a few
dozen unit cells. This supports our claim that achieving epitaxy for
these films should be challenging. However, given that the local structure
is approximated by a mixture of $S2$ and $S3$ domains, the observed
ferroelectric switching is understandable as being due to a transition
between these two states.
\section{Conclusion\label{sec:Conclusion5}}
We have conducted a computational study of ZrO$_{2}$ monolayers on
Si(001) using DFT. These monolayers have recently been grown with
an abrupt oxide/semiconductor interface but with an amorphous structure
and are measured to be ferroelectric \citep{dogan2018singleatomic}.
In our computations, we have found a multiplicity of (meta)stable
structures with a large variation in ionic polarization but small
differences in energy, atomic structure and chemistry. This suggests
that achieving epitaxy in the experiment should be challenging. In
order to understand the finite-temperature behavior of these ultrathin
films, we have developed a two dimensional discrete lattice model
of the domains in these thin films using DFT-derived parameters. We
have employed mean-field and Monte Carlo calculations to study this
lattice model and concluded that two distinct and oppositely polarized
structures, namely $S2$, $S3$ and their counterparts $\overline{S2}$
and $\overline{S3}$, dominate the films at the temperatures of interest.
The ferroelectric switching observed in the experiment is explained
by the film locally adopting one of these two structures and locally
switching between them. We have found that for monocrystalline epitaxial
films, this switching leads to a VBE shift in silicon of $\Delta V=0.6\ \text{eV}$,
which is moderately greater than the experimental value of $\Delta V=0.4\ \text{eV}$,
in agreement with the idea of partial (local) polarization switching.
\section{Acknowledgements}
This work was supported primarily by the grant NSF MRSEC DMR-1119826.
We thank the Yale Center for Research Computing for guidance and use
of the research computing infrastructure, with special thanks to Stephen
Weston and Andrew Sherman. Additional computational support was provided
by NSF XSEDE resources via Grant TG-MCA08X007.
\bibliographystyle{apsrev}
\chapter*{Supplementary Material for ``Theory of Ferroelectric ZrO$_{2}$
Monolayers on Si''}
\begin{center}
{\large{}Mehmet Dogan$^{1,2,3,4}$ and Sohrab Ismail-Beigi$^{1,2,5,6}$}\\
\par\end{center}
\begin{center}
$^{1}$Center for Research on Interface Structures and Phenomena,
Yale University, New Haven, Connecticut 06520, USA
\par\end{center}
\begin{center}
$^{2}$Department of Physics, Yale University, New Haven, Connecticut
06520, USA
\par\end{center}
\begin{center}
$^{3}$Department of Physics, University of California, Berkeley,
94720 USA
\par\end{center}
\begin{center}
$^{4}$Materials Science Division, Lawrence Berkeley National Laboratory,
Berkeley, California 94720, USA
\par\end{center}
\begin{center}
$^{5}$Department of Applied Physics, Yale University, New Haven,
Connecticut 06520, USA
\par\end{center}
\begin{center}
$^{6}$Department of Mechanical Engineering and Materials Science,
Yale University, New Haven, Connecticut 06520, USA
\par\end{center}
\section*{Monte Carlo algorithms for statistical physics}
A thermodynamic system is described by its partition function
\begin{equation}
Z=\sum_{\left\{ s\right\} }e^{-\frac{E_{s}}{k_{\text{B}}T}},
\end{equation}
where the sum runs over all possible states of the system, $E_{s}$
is the energy of state $s$, $k_{\text{B}}$ is Boltzmann's constant
and $T$ is the temperature. The expectation value of an observable
$X$ is
\begin{equation}
\left\langle X\right\rangle =\frac{1}{Z}\sum_{\left\{ s\right\} }X_{s}e^{-\frac{E_{s}}{k_{\text{B}}T}},
\end{equation}
where $X_{s}$ is the value of the observable $X$ when the system
is in state $s$.
The summations are over all possible states of the system, a space
that is enormous for most physically relevant systems. However,
most of the states occur with vanishingly small probabilities, computed
by the formula $\frac{1}{Z}\exp\left(-\frac{E_{s}}{k_{\text{B}}T}\right)$.
Hence in order to avoid summing over all possible states, which is
an intractable and wasteful computation, one usually uses \emph{importance
sampling}, in which the sampling is done over states that are chosen
according to the probability distribution $\frac{1}{Z}\exp\left(-\frac{E_{s}}{k_{\text{B}}T}\right)$
\citep{luijten2006introduction}.
Given two states of the system and their energies, it is trivial to
compute their relative probabilities according to their Boltzmann
factors $\exp\left(-\frac{E_{s}}{k_{\text{B}}T}\right)$. However,
computing the absolute probability of a state requires computing $Z$,
which we wish to avoid. The most commonly used way of computing expectation
values without evaluating the partition function is by creating a
Markov chain of states in which each state only depends on the state
that immediately precedes it \citep{metropolis1953equation}. Starting
from a configuration $S_{i}$ with a Boltzmann factor $p_{i}$, a
new trial configuration $S_{j}$ with a Boltzmann factor $p_{j}$
is generated and accepted with probability $\pi_{ij}$. The probability
of occupying the state $S_{j}$ should be equal to the sum of the
probabilities of arriving at state $S_{j}$ from any given state $S_{i}$,
i.e.
\begin{equation}
\sum_{i}p_{i}\pi_{ij}=p_{j}.
\end{equation}
At equilibrium, this Markov process should obey \emph{detailed balance},
i.e.
\begin{equation}
p_{i}\pi_{ij}=p_{j}\pi_{ji}.
\end{equation}
In general, the transition probabilities $\pi_{ij}$ are the product
of two factors: a probability $g_{ij}$ of proposing to move to state
$S_{j}$ from state $S_{i}$, and an acceptance ratio $A_{ij}$ of
accepting the proposed transition from $S_{i}$ to $S_{j}$. Thus
we can write
\begin{equation}
p_{i}g_{ij}A_{ij}=p_{j}g_{ji}A_{ji},
\end{equation}
or
\begin{equation}
\frac{g_{ij}A_{ij}}{g_{ji}A_{ji}}=\exp\left(-\frac{E_{j}-E_{i}}{k_{\text{B}}T}\right).\label{eq:ratios}
\end{equation}
For a given problem, $g_{ij},\ A_{ij}$ are specified by the algorithm
such that \Eqref{ratios} is satisfied and the sampling efficiency
is maximized.
Finally, a valid Monte Carlo algorithm must be ergodic, i.e., any
state must be reachable from any other state via a succession of moves.
\section*{Metropolis algorithms for discrete lattice models}
The most common Monte Carlo algorithm for discrete lattice models
such as the Ising model is the so-called Metropolis algorithm. Let
us present this algorithm in the context of our lattice model, which
is described in more detail in the main text.
The Hamiltonian of our two dimensional discrete lattice model is
\begin{equation}
H=\sum_{i,j}E\left(\sigma\left(i,j\right)\right)+\sum_{i,j}J_{x}\left(\sigma\left(i,j\right),\sigma\left(i+1,j\right)\right)+\sum_{i,j}J_{y}\left(\sigma\left(i,j\right),\sigma\left(i,j+1\right)\right),
\end{equation}
where $\left(i,j\right)$ are the positions on the discrete lattice
along the $\left(x,y\right)$-directions, $\sigma\left(i,j\right)$
is the state on lattice site $\left(i,j\right)$, $E\left(\sigma\left(i,j\right)\right)$
is the site energy of the state $\sigma\left(i,j\right)$, $J_{x}\left(\sigma\left(i,j\right),\sigma\left(i+1,j\right)\right)$
is the nearest-neighbor interaction energy along the $x$-direction,
and $J_{y}\left(\sigma\left(i,j\right),\sigma\left(i,j+1\right)\right)$
is the nearest-neighbor interaction energy along the $y$-direction.
$J_{x}\left(\sigma_{1},\sigma_{2}\right)=J_{y}\left(\sigma_{1},\sigma_{2}\right)=0$
if $\sigma_{1}=\sigma_{2}$. In this model, there are $N$ types of
states, i.e. $\sigma$ is a function that maps a lattice site onto
one of $s_{1},s_{2},\ldots,s_{N}$. Note that the lower-case $s$
are different from the upper-case $S$ used above, which denotes the
state of the whole system, i.e., the collection of states
$s$ on all lattice sites in this model.
The two dimensional Ising model is a special case of our model, where
$N=2$. The external magnetic field can be included by having $E\left(s_{1}\right)\neq E\left(s_{2}\right)$,
and anisotropy can be included by having $J_{x}\left(s_{1},s_{2}\right)\neq J_{y}\left(s_{1},s_{2}\right)$.
The Metropolis algorithm would operate on our $N$-state model as
follows (a minimal sketch implementation is given after the list):
\begin{enumerate}
\item Pick a lattice site at random. Let us call the state on the site $s_{i}$.
Let us call the state of the initial system $S_{\mu}$.
\item Propose to flip the state $s_{i}$ to another state $s_{f}$, chosen
among all non-$s_{i}$ states with equal probability $\frac{1}{N-1}$.
Let us call the state of the system if the proposed flip occurs $S_{\nu}$.
Thus $g_{\mu\nu}=\frac{1}{N-1}$ (see \Eqref{ratios}). The probability
of proposing the inverse move, i.e. going to $S_{\mu}$ from $S_{\nu}$
is clearly the same, hence $g_{\nu\mu}=g_{\mu\nu}=\frac{1}{N-1}$.
\item Compute the energy difference $E_{\nu}-E_{\mu}$ between $S_{\nu}$
and $S_{\mu}$. This is simple, since the only difference is the state
change of state $s_{i}$ to $s_{f}$, and the energy difference is
localized to the site energy and the couplings with the nearest neighbors
of that site.
\item The acceptance ratios are obtained by \Eqref{ratios}:
\begin{equation}
\frac{A_{\mu\nu}}{A_{\nu\mu}}=\exp\left(-\frac{E_{\nu}-E_{\mu}}{k_{\text{B}}T}\right).
\end{equation}
A common way of achieving this equation is by setting:
\begin{equation}
A_{\mu\nu}=\begin{cases}
\exp\left(-\frac{E_{\nu}-E_{\mu}}{k_{\text{B}}T}\right) & \text{if}\ E_{\nu}>E_{\mu}\\
1 & \text{if}\ E_{\nu}\leq E_{\mu}
\end{cases}\label{eq:Accept1}
\end{equation}
\end{enumerate}
To find the expectation value of an observable $X$, $X$ is computed
at each step of a simulation comprising a finite number of
steps, and then simply averaged. This is the merit of \emph{importance
sampling}, which takes care of the relative probabilities of states
through \Eqref{ratios}, so that the observables can simply be averaged.
\section*{Wolff cluster algorithms}
The success of a Monte Carlo algorithm is usually measured by how
easily it can generate ``independent'' samples, i.e. how many attempts
it takes to go from a state $S_{\mu}$ to another state $S_{\nu}$
such that $S_{\mu}$ and $S_{\nu}$ are ``uncorrelated'' (namely,
the decorrelation time). The ``single-flip'' Metropolis algorithm is
conceptually simple and easy to implement. However, at each simulation
step, the state only slightly changes, so the decorrelation time can
be large. For models with a second order phase transition, such as
the two dimensional Ising model, this algorithm suffers from ``critical
slowing down'' where, close to the critical temperature of the model,
the decorrelation time diverges \citep{barkema1997newmonte}.
This issue can be solved by algorithms that propose states that are
sufficiently modified from the preceding state. A family of such algorithms
is called cluster algorithms, where rather than switching the state
on a single site, the state on a group of sites (``a cluster'')
is switched simultaneously \citep{swendsen1987nonuniversal}. Here
we modify the Wolff cluster algorithm \citep{wolff1989collective},
originally developed for the Ising model, to simulate our $N$-state
model:
\begin{figure}
\begin{centering}
\includegraphics[width=0.75\columnwidth]{Figure_Wolff. 2}
\par\end{centering}
\caption[A sample instant of a Wolff cluster simulation of an $N$-state lattice
model, prior to and after the switching of a cluster.]{\label{fig:Wolff}A sample instant of a Wolff cluster simulation
of an $N$-state lattice model, prior to ($S_{\mu}$) and after ($S_{\nu}$)
the switching of a cluster. The boundary of the cluster is shown by
solid lines, and the bonds at the boundary of the cluster are shown
by dotted lines. Each color-shape combination denotes a type of state
in our 10-state lattice model, described in detail in the main text.}
\end{figure}
\begin{enumerate}
\item Pick a lattice site $i$ at random. Let us call the state on the site
$s_{i}$. Let us call the state of the initial system $S_{\mu}$.
\item Add each of the nearest neighbors $j$ of the site $i$ to the cluster,
with the probability $p_{\text{add}}$, provided that the states on
sites $i$ and $j$ are the same, and the ``bond'' between $i$
and $j$ has not yet been considered.
\item Once all the neighbors of site $i$ have been considered, move to
the next site in the cluster. Repeat step 2 for this site. If all
the sites in the cluster have gone through step 2, the cluster has
been built. Move to step 4.
\item Propose to flip the state $s_{i}$ to another state $s_{f}$, chosen
among all non-$s_{i}$ states with equal probability $\frac{1}{N-1}$,
for all the sites in the cluster. Let us call the state of the system
if the proposed flip occurs $S_{\nu}$.
\item Compute the number of bonds at the boundary of the cluster. Two
neighboring sites in the same state are said to share an intact bond.
When the cluster is ``flipped,'' the bonds at the boundary
will be broken. In \Figref{Wolff}, we illustrate the formation of
a cluster for a given state $S_{\mu}$ of the lattice, shown on the
left. The number of bonds at the boundary (shown as dotted lines in
the figure) is $n_{\mu}=9$. The proposed state $S_{\nu}$ is shown
on the right. The number of bonds at the boundary in the proposed
state is $n_{\nu}=1$.
\item Compute the energy difference $E_{\nu}-E_{\mu}$ between $S_{\nu}$
and $S_{\mu}$. This requires accounting for all the nearest-neighbor
interactions at the boundary of the cluster in both the initial and
the final states.
\end{enumerate}
Finding the correct acceptance ratio for this algorithm is somewhat
involved. Let us assume that the cluster in $S_{\mu}$ in \Figref{Wolff}
is built in the following order:
\begin{enumerate}
\item The site at the upper left corner of the cluster is randomly picked.
\item The site to the right is added with probability $p_{\text{add}}$,
the other neighboring sites of the same kind (above and below) are
rejected with probability $\left(1-p_{\text{add}}\right)^{2}$.
\item The site to the right is added with probability $p_{\text{add}}$,
the other neighboring sites of the same kind (above and below) are
rejected with probability $\left(1-p_{\text{add}}\right)^{2}$.
\item The site below is added with probability $p_{\text{add}}$, the site
above is rejected with probability $\left(1-p_{\text{add}}\right)$.
\item The site to the left is added with probability $p_{\text{add}}$,
the other neighboring sites of the same kind (to the right and below)
are rejected with probability $\left(1-p_{\text{add}}\right)^{2}$.
\item The site below is added with probability $p_{\text{add}}$, the site
to the left is rejected with probability $\left(1-p_{\text{add}}\right)$.
\item Both neighboring sites (to the left and to the right) are rejected
with probability $\left(1-p_{\text{add}}\right)^{2}$.
\end{enumerate}
The total probability of this process in this order is $p_{\text{add}}^{5}\left(1-p_{\text{add}}\right)^{10}$.
The same process can be repeated for the cluster in $S_{\nu}$ in
\Figref{Wolff} built in the exact same order, which yields a probability
of $p_{\text{add}}^{5}\left(1-p_{\text{add}}\right)^{2}$.
The ratio of proposal probabilities of the forward and backward moves
is then
\begin{equation}
\frac{p_{\text{add}}^{5}\left(1-p_{\text{add}}\right)^{10}}{p_{\text{add}}^{5}\left(1-p_{\text{add}}\right)^{2}}=\left(1-p_{\text{add}}\right)^{8},
\end{equation}
where 8 is the difference in the number of bonds at the boundary for
$S_{\mu}$ and $S_{\nu}$, i.e. $n_{\mu}-n_{\nu}=8$. It is evident
that for any given order for building the same cluster, the ratio
of proposal probabilities of the forward and backward moves will be
$\left(1-p_{\text{add}}\right)^{n_{\mu}-n_{\nu}}$. Because $g_{\mu\nu}$
is the sum of the probabilities of all moves that propose $S_{\nu}$
from $S_{\mu}$ and $g_{\nu\mu}$ is the sum of the probabilities
of all moves that propose $S_{\mu}$ from $S_{\nu}$, we can write
\begin{equation}
\frac{g_{\mu\nu}}{g_{\nu\mu}}=\left(1-p_{\text{add}}\right)^{n_{\mu}-n_{\nu}}.
\end{equation}
Therefore \Eqref{ratios} yields
\begin{align}
\frac{A_{\mu\nu}}{A_{\nu\mu}} & =\left(1-p_{\text{add}}\right)^{n_{\nu}-n_{\mu}}\exp\left(-\frac{E_{\nu}-E_{\mu}}{k_{\text{B}}T}\right)\nonumber \\
& =\exp\left(-\frac{E_{\nu}-E_{\mu}-k_{\text{B}}T\left(n_{\nu}-n_{\mu}\right)\log\left(1-p_{\text{add}}\right)}{k_{\text{B}}T}\right).
\end{align}
If we define
\begin{equation}
\Delta_{\mu\nu}\equiv E_{\nu}-E_{\mu}-k_{\text{B}}T\left(n_{\nu}-n_{\mu}\right)\log\left(1-p_{\text{add}}\right),\label{eq:Delta}
\end{equation}
we can set the acceptance ratios (in analogy with \Eqref{Accept1})
to be
\begin{equation}
A_{\mu\nu}=\begin{cases}
\exp\left(-\frac{\Delta_{\mu\nu}}{k_{\text{B}}T}\right) & \text{if}\ \Delta_{\mu\nu}>0\\
1 & \text{if}\ \Delta_{\mu\nu}\leq0
\end{cases}.\label{Accept2}
\end{equation}
In the original Wolff cluster method for the Ising model, $p_{\text{add}}$
is defined as a function of temperature such that the acceptance ratios
are always 1. This makes for a rejection-less algorithm which is able
to switch clusters of different sizes at any temperature. However,
in our model there is no simple relationship between $E_{\nu}-E_{\mu}$
and $n_{\nu}-n_{\mu}$ as there is in the Ising model. Therefore
$p_{\text{add}}$ cannot be defined \emph{a priori} to make $\Delta_{\mu\nu}$
vanish in \Eqref{Delta}, which in turn would guarantee $A_{\mu\nu}=1$
in \ref{Accept2}. After empirical tests on our simulations, we have
set $p_{\text{add}}=\frac{1}{2}$ for the results presented in the
main text. Improving the acceptance ratios through the choice of $p_{\text{add}}$
is the subject of future research.
\section*{List of all domain wall energies}
We tabulate all domain wall energies in \tabref{SiZrO2_all_J}, which
includes the couplings between $S1$, $S2$, $S3$ and their barred
counterparts, also reported above in \tabref{SiZrO2_latt_J}.
\begin{table}
\begin{centering}
\begin{tabular}{ccc}
\toprule
\addlinespace[0.3cm]
Domain boundary & $\ \ J_{x}$ (eV)$\ \ $ & $\ \ J_{y}$ (eV)$\ \ $\tabularnewline\addlinespace[0.3cm]
\midrule
\addlinespace[0.1cm]
\midrule
$S1,\overline{S1}$ & 0.26 & 1.35\tabularnewline
\midrule
$S1,S2$ & 0.76 & 1.13\tabularnewline
\midrule
$S1,\overline{S2}$ & 0.96 & 0.99\tabularnewline
\midrule
$S1,S3$ & 0.61 & 4.81\tabularnewline
\midrule
$S1,\overline{S3}$ & 0.44 & 1.75\tabularnewline
\midrule
$S1,S4$ & 0.55 & 2.79\tabularnewline
\midrule
$S1,\overline{S4}$ & -0.20 & 2.37\tabularnewline
\midrule
$S1,S5$ & 0.35 & 1.35\tabularnewline
\midrule
$S1,\overline{S5}$ & 0.40 & 0.56\tabularnewline
\midrule
$S2,\overline{S2}$ & 0.38 & 1.64\tabularnewline
\midrule
$S2,S3$ & 0.17 & 0.98\tabularnewline
\midrule
$S2,\overline{S3}$ & 0.01 & 0.91\tabularnewline
\midrule
$S2,S4$ & -0.17 & 2.23\tabularnewline
\midrule
$S2,\overline{S4}$ & 0.86 & 2.28\tabularnewline
\midrule
$S2,S5$ & 0.34 & 0.59\tabularnewline
\midrule
$S2,\overline{S5}$ & -0.12 & 1.31\tabularnewline
\midrule
$S3,\overline{S3}$ & 0.73 & 0.002\tabularnewline
\midrule
$S3,S4$ & 0.10 & 1.89\tabularnewline
\midrule
$S3,\overline{S4}$ & 0.23 & 1.84\tabularnewline
\midrule
$S3,S5$ & -0.24 & 0.75\tabularnewline
\midrule
$S3,\overline{S5}$ & 0.69 & 1.21\tabularnewline
\midrule
$S4,\overline{S4}$ & 0.55 & -0.33\tabularnewline
\midrule
$S4,S5$ & 0.71 & 0.65\tabularnewline
\midrule
$S4,\overline{S5}$ & -0.26 & 1.86\tabularnewline
\midrule
$S5,\overline{S5}$ & 0.56 & 0.30\tabularnewline
\bottomrule
\end{tabular}
\par\end{centering}
\caption[Domain boundary energies computed from first principles.]{\label{tab:SiZrO2_all_J}Domain boundary energies computed from first
principles. These energies serve as the couplings of nearest neighbors
in our lattice model.}
\end{table}
A few of the couplings that involve the higher-energy $S4$ and $S5$
structures are negative, which can be understood in some cases when
the domain boundary region resembles a lower energy structure. In
\figref{SiZrO2_dom_S3S5}, we illustrate the domain boundaries that
correspond to $J_{x}\left(S3,S5\right)=-0.24\ \text{eV}$. The structure
immediately to the left of the right domain boundary ($S3r$) closely
resembles the $S2$ structure (see Figure 5 in the main text). However,
the fact that the higher energy $S4$ and $S5$ structures have negative
domain wall energies with the lower energy structures in some cases
is not enough to generate antiferroelectric patterns in our Monte
Carlo simulations. This may be due to the separation of scales in the
energies of the lowest three structures and $S4$ and $S5$ (see Table
II in the main text). Hence the energy reduction achieved by making
these domain boundaries (0.26 eV or less) is not enough to compensate
for the high energy cost of creating these two structures in the first
place.
\begin{figure}
\begin{centering}
\includegraphics[width=0.9\columnwidth]{Figure_SiZrO2_S3S5}
\par\end{centering}
\caption[The domain boundaries along the $y$-direction between $S3$ and $S5$,
computed by stacking 3 unit cells of each structure in the $x$-direction.]{\label{fig:SiZrO2_dom_S3S5}The domain boundaries along the $y$-direction
between $S3$ and $S5$, computed by stacking 3 unit cells of each
structure along the $x$-direction. The domain energy, computed to
be $J_{x}\left(S3,S5\right)=-0.24\ \text{eV}$ per unit length, is
negative partly due to the fact that the vicinity of one of the boundary
walls (labelled $S3r$) resembles a lower energy configuration $S2$
(see Figure 5 in the main text).}
\end{figure}
\bibliographystyle{apsrev}
\section{Introduction}
In the big data era, it is necessary to spread machine learning tasks across multiple servers and transform centralized systems into distributed ones~\cite{verbraeken2020survey}. These distributed systems present new challenges, and privacy in particular remains unsolved among them~\cite{PMP4MLDS18}.
Privacy computing refers to a class of techniques that perform computation on data without leaking specified information. Homomorphic encryption is a special form of encryption that permits users to perform computations on encrypted data without first decrypting it, which has great practical value for outsourcing private computations. A scheme is called fully homomorphic encryption (FHE) if both addition and multiplication are supported. The security of FHE is usually based on hard lattice problems, so FHE is also considered a post-quantum cryptographic scheme~\cite{post-quantum}. However, computing on encrypted data with such FHE schemes increases the noise in the ciphertext, and if the noise grows beyond some threshold, it will result in wrong decryption~\cite{CrawfordGHPS18}. This problem can be solved by the bootstrapping technique, first proposed by Gentry~\cite{Gentry09}. Bootstrapping reduces the noise in a ciphertext to refresh it, so that homomorphic evaluation can be performed indefinitely.
Obviously, \emph{single-key} fully homomorphic encryption only allows the server to perform addition and multiplication on data encrypted under the same key. \emph{Multi-key} fully homomorphic encryption (MKFHE) was proposed to circumvent this shortcoming~\cite{Lopez12}. It enables users to encrypt their own data under their own keys, while homomorphic evaluations can be performed on the encrypted data directly at the server side without decryption. It also removes the possibility that a user and the server conspire to steal the data of other users. Therefore, MKFHE realizes secure multi-party computation with an untrusted party. Chen et al.~\cite{MKTFHE} developed MKTFHE, a library for multi-key fully homomorphic encryption over the torus that provides a multi-key homomorphic NAND gate.
However, the MKTFHE library only provides multi-key homomorphic NAND gates. Performing multi-key homomorphic computation using only homomorphic NAND gates is complex and time-consuming. As a result, a bare multi-key homomorphic NAND gate is not practical to use directly. In order to better apply multi-key homomorphic encryption to privacy computing, we design and implement a series of practical multi-key homomorphic mathematical operators.
In this study, we make the following contributions:
\begin{enumerate}
\item We designed a series of fundamental multi-key bootstrapped gates based on MKTFHE, including multi-key bootstrapped AND, OR, NOR, XOR and XNOR gates, as well as a multi-key NOT gate (without gate bootstrapping). Experimental results show that the proposed fundamental multi-key bootstrapped gates are more efficient than building the same basic binary gates by directly combining multi-key bootstrapped NAND gates.
\item We designed a series of multi-key homomorphic operators based on the basic multi-key bootstrapped gates we constructed. The operators include a $k$-bit complement array integer adder, a $k$-bit complement array integer subtractor, a $k$-bit complement array integer multiplier and a $k$-bit complement array integer divider, which perform addition, subtraction, multiplication and division on signed (two's-complement) integers of arbitrary bit width; a plaintext-level sketch of the adder construction is given after this list. We construct the complement array adder linearly and use a structure similar to the 4-bit array integer multiplier in~\cite{Hwang79} to construct our $k$-bit complement multiplier; the $k$-bit complement array integer divider is constructed in a similar way. The subtractor is constructed from an adder and a simple multiplier. The running time of the adder and the subtractor grows linearly with the bit width of the inputs, while that of the multiplier and divider grows quadratically; the time of the divider grows almost linearly with the number of layers in the divider.
\item We train a linear regression model by utilizing our proposed multi-key homomorphic operators. Taking linear regression as an example, we implement a complete multi-key fully homomorphic machine learning scheme. Experimental results show the practicality and generality of our homomorphic operators, and that the training time grows linearly with the number of participants.
\end{enumerate}
The rest of this paper is organized as follows. Section \ref{2Rel} discusses related research. Section \ref{2Pre} clarifies the notation and reviews related constructions. Section \ref{3Scheme} describes the $k$-bit complement array operators and their implementation based on MKTFHE, together with the distributed machine learning model we evaluate, followed by the performance analysis and experimental results of our implementation in Section \ref{4Imp}. Section \ref{5Con} concludes the paper with future work.
\section{Related Work}
\label{2Rel}
Previous research efforts have made a number of contributions to the development of privacy-preserving machine learning. Some work has focused on securely outsourcing the training of ML models to the cloud, typically by using homomorphic encryption techniques~\cite{froelicher2020scalable}. In 2016, Aono et al.~\cite{aono2016scalable} proposed a secure system for protecting the training data in logistic regression via homomorphic encryption. In 2018, Crawford et al.~\cite{crawford2018doing} built a system that uses fully-homomorphic encryption to approximate the coefficients of a logistic-regression model built from genomic data, while Kim et al.~\cite{kim2018logistic, kim2018secure} presented a method to train a logistic regression model without information leakage. However, these schemes are all based on single-key fully homomorphic encryption, which is not suitable for distributed machine learning models.
In 2012, Graepel et al.~\cite{GraepelLN12} first proposed that it is possible to perform machine learning algorithms on encrypted data via homomorphic encryption. Subsequently, several single-key homomorphic encryption schemes have been used for machine learning prediction~\cite{CryptoNets16, BourseMMP18, BoemerLCW19, TianNYY21, CHET19} or machine learning training~\cite{ChenGHHJLL18, KimS0XJ18, CheonKKS18}. A few machine learning algorithms have adopted multi-key homomorphic encryption and demonstrated its promising future~\cite{DKS19}.
Multi-key fully homomorphic encryption was first proposed based on the NTRU assumption by Lopez et al.~\cite{Lopez12} in 2012, with the aim of enabling on-the-fly multiparty computation. In 2015, Clear et al.~\cite{Clear15} constructed the first LWE-based MKFHE scheme, which was improved by Mukherjee and Wichs~\cite{Mukherjee16} in 2016. These schemes are single-hop MKFHE schemes, which means all the participants must be known in advance. This problem was solved by Peikert et al.~\cite{Peikert16} and Brakerski et al.~\cite{Bra16} in 2016 by constructing multi-hop MKFHE schemes. However, these schemes are impractical and were never implemented.
The first implementation of an MKFHE scheme was achieved by Chen et al.~\cite{MKTFHE} in 2019, named MKTFHE. In their scheme, a multi-key bootstrapped NAND gate is described. This scheme was improved by Lee and Park~\cite{LeeP19} in 2019, realizing distributed decryption in MKTFHE. However, using multi-key bootstrapped NAND gates directly to construct homomorphic operations has disadvantages such as low practicability, high complexity and an error-prone construction process. At the same time, the complex structure leads to expensive optimization.
\section{Preliminaries}
\label{2Pre}
\subsection{Notation}
The rest of the paper uses the following notation. $\mathbb{T}$ denotes the real torus $\mathbb{R}/\mathbb{Z}$, the set of real numbers modulo $1$. $\mathbb{T}_N [X]$ denotes $\mathbb{R}[X]/(X^N + 1)$ mod $1$. $k$ represents the number of parties. TLWE denotes the (scalar) binary learning with errors problem over the torus, while TRLWE denotes its ring variant. $params$ represents the parameter set used in the TFHE (fully homomorphic encryption over the torus) scheme, while $mkparams$ represents the parameter set used in the MKTFHE scheme and in our scheme.
\subsection{MKTFHE}
MKTFHE scheme is the multi-key version of TFHE scheme. TFHE, constructed by Chillotti et al.~\cite{ChillottiGGI16, ChillottiGGI17, ChillottiGGI20}, is a fast fully homomorphic encryption (FHE) scheme over the torus, which generalizes and improves the FHE based on GSW~\cite{GentrySW13} and its ring variants. In the TFHE scheme, bootstrapped binary gates are designed to represent the functions developers need.
The main idea of TFHE is to bootstrap after every binary gate evaluation to refresh the ciphertext and make it usable for subsequent operations, so that arbitrarily deep circuits can be homomorphically evaluated. That is to say, a single parameter set allows the server to evaluate any function. The entire homomorphic evaluation of a circuit takes time proportional to the number of binary gates used or, if parallelism is involved, to the number of circuit layers.
The message space of the TFHE bootstrapped gates is $\mathbb{T}$. A TLWE ciphertext $(a, b) \in \mathbb{T}^{n+1}$ encrypts a message $\mu \in \mathbb{T}$ with noise parameter $\alpha$.
In the TFHE scheme, the homomorphic evaluation of a binary gate is achieved with operations between TLWE samples followed by a gate-bootstrapping (except for the NOT gate, which needs no bootstrapping). With this approach, all the basic gates can be evaluated with a single bootstrapping process (GB); a plaintext sanity check of these formulas is sketched after the list:
\begin{itemize}
\item $\mathsf{TFHE.BootsNAND}(c_1, c_2)=\mathsf{GB}((0, \frac{5}{8}) - c_1 - c_2)$
\item $\mathsf{TFHE.BootsAND}(c_1, c_2)=\mathsf{GB}((0, -\frac{1}{8}) + c_1 + c_2)$
\item $\mathsf{TFHE.BootsOR}(c_1, c_2)=\mathsf{GB}((0, \frac{1}{8}) + c_1 + c_2)$
\item $\mathsf{TFHE.BootsXOR}(c_1, c_2)=\mathsf{GB}(2 \cdot (c_1 - c_2))$
\item $\mathsf{TFHE.NOT}(c)=(0, \frac{1}{4}) - c$
\end{itemize}
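To make these conventions concrete, the following plaintext sketch (in Python) checks that the affine combinations above compute the intended truth tables. It ignores encryption and noise entirely; the encoding of a bit $b$ as $b/4$ on the torus and the gate-bootstrapping threshold $(1/4, 3/4)$ are assumptions chosen to be consistent with the listed formulas, not the exact internals of the TFHE library.
\begin{verbatim}
from fractions import Fraction as F

def t(x):                      # reduce to the torus [0, 1)
    return x % 1

def gb(phase):                 # idealized gate-bootstrapping
    return F(1, 4) if F(1, 4) < t(phase) < F(3, 4) else F(0)

enc = lambda b: F(b, 4)        # noiseless "encryption" of a bit
dec = lambda m: int(m == F(1, 4))

NAND = lambda a, b: gb(F(5, 8) - a - b)
AND  = lambda a, b: gb(-F(1, 8) + a + b)
OR   = lambda a, b: gb(F(1, 8) + a + b)
XOR  = lambda a, b: gb(2 * (a - b))
NOT  = lambda a: t(F(1, 4) - a)    # no bootstrapping needed

for x in (0, 1):
    for y in (0, 1):
        assert dec(NAND(enc(x), enc(y))) == 1 - (x & y)
        assert dec(AND(enc(x), enc(y))) == (x & y)
        assert dec(OR(enc(x), enc(y))) == (x | y)
        assert dec(XOR(enc(x), enc(y))) == (x ^ y)
    assert dec(NOT(enc(x))) == 1 - x
\end{verbatim}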
The TFHE scheme has the advantages of fast bootstrapping, efficient homomorphic logic circuit evaluation, and so on. Its multi-key version, named MKTFHE~\cite{MKTFHE}, was constructed by Chen et al. in 2019. MKTFHE is the first attempt in the literature to implement an MKFHE scheme in code.
In the MKTFHE scheme, the ciphertext length increases linearly with the number of users, and a homomorphic NAND gate with bootstrapping is provided. The MKTFHE scheme comprises the following algorithms.
\begin{itemize}
\item $\mathsf{MKTFHE.Setup}(1^{\lambda})$: The cloud equipment takes a security parameter $\lambda$ as input and outputs the public parameter set $mkparams$.
\item $\mathsf{MKTFHE.KeyGen}(mkparams)$: Each party generates its keys independently. It first samples the TLWE secret key $sk_i$, then sets the public key $pk_i$, bootstrapping key $BK_i$ and key-switching key $KS_i$.
\item $\mathsf{MKTFHE.SymEnc}(\mu)$: This algorithm encrypts an input bit $\mu \in \{0, 1\}$, and returns a TLWE ciphertext with the scaling factor $\frac{1}{4}$. The output ciphertext $c = (b, a) \in \mathbb{T}^{n + 1}$ satisfies $b + \langle a, s\rangle \approx \frac{1}{4} \mu$.
\item $\mathsf{MKTFHE.SymDec}(c, \{ sk_i \})$: Input a TLWE ciphertext $c = (b, a_1, ..., a_k)$ and a set of secret keys $\{ sk_i \}$, and return the message $\mu$ which minimizes $|b+\sum_{i = 1}^{k}\langle a_i, s_i\rangle - \frac{1}{4} \mu|$.
\item $\mathsf{MKTFHE.MKKeySwitch}(c, \{ KS_i \}_{i \in [k]})$: Input an expanded ciphertext $c$ corresponding to $t = (t_1, ... , t_k)$ and a sequence of key-switching keys from $t_i$ to $s_i$, and return an encryption of the same message under $s = (s_1, ..., s_k)$.
\item $\mathsf{MKTFHE.NAND}(c_1, c_2, \{ pk_i, BK_i, KS_i \}_{i \in [k]})$: Take two TLWE ciphertexts as input. This algorithm first extends the two input ciphertexts and evaluates the NAND gate homomorphically on the encrypted bits. Then it evaluates the decryption circuit of the TLWE ciphertext and runs the $\mathsf{MKTFHE.MKKeySwitch}(c, \{ KS_i \}_{i \in [k]})$ algorithm. Finally, it outputs a TLWE ciphertext encrypting $m = m_1 \barwedge m_2$.
\end{itemize}
\subsection{Complement operators}
Assume that there are two $k$-bit inputs $a_{in}$ and $b_{in}$; a $k$-bit complement array integer adder can be constructed in the following steps (a plaintext sketch follows the list).
\begin{enumerate}
\item Construct a semi-adder with two XOR gates: Input two addends $a$ and $b$ and the carry $c$, return $out = (c \bigoplus (a \bigoplus b))$.
\item Construct a carrier with three AND gates and two OR gates: Input two addends $a$ and $b$ and the carry $c$, return $c = (a \wedge b ) \vee (a \wedge c ) \vee (b \wedge c)$.
\item Construct a $1$-bit adder with a semi-adder and a carrier: Input two addends $a$ and $b$ and the carry $c$, return $out = (c \bigoplus (a \bigoplus b))$ and $c = (a \wedge b ) \vee (a \wedge c ) \vee (b \wedge c)$.
\item Construct a $k$-bit complement array integer adder with $k$ $1$-bit adders: Input two addends $a_{in}$ and $b_{in}$, set the carry $c[0] = 0$, and compute $out[i] = (c[i] \bigoplus (a_{in}[i] \bigoplus b_{in}[i]))$, $c[i + 1] = (a_{in}[i] \wedge b_{in}[i] ) \vee (a_{in}[i] \wedge c[i] ) \vee (b_{in}[i] \wedge c[i])$ for $i = 0, 1, ..., k - 1$. Return the result $out$.
\end{enumerate}
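The following plaintext sketch mirrors these four steps: a $1$-bit full adder built from the semi-adder and carrier, rippled $k$ times over two's-complement bit arrays (least significant bit first), with the final carry dropped. Bits are plain integers here; in the homomorphic version every XOR, AND and OR below becomes a bootstrapped gate.
\begin{verbatim}
def full_adder(a, b, c):
    out = c ^ (a ^ b)                    # semi-adder: two XOR gates
    carry = (a & b) | (a & c) | (b & c)  # carrier: three AND, two OR
    return out, carry

def add_k(a_in, b_in):                   # k-bit lists, LSB first
    k, c = len(a_in), 0
    out = [0] * k
    for i in range(k):
        out[i], c = full_adder(a_in[i], b_in[i], c)
    return out                           # final carry dropped

def to_bits(x, k):                       # two's-complement encode
    return [(x >> i) & 1 for i in range(k)]

def from_bits(bits):                     # two's-complement decode
    v = sum(b << i for i, b in enumerate(bits[:-1]))
    return v - (bits[-1] << (len(bits) - 1))

assert from_bits(add_k(to_bits(5, 8), to_bits(-3, 8))) == 2
\end{verbatim}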
\begin{figure}
\centering
\label{fig1}
\includegraphics[width=1.0\linewidth]{fig1-multiplier}
\caption{The construction of the $k$-bit complement array integer multiplier. The index of each adder indicates its type (0-adder, $1$-adder or 2-adder).}
\end{figure}
Assume that there are two $k$-bit inputs $a_{in}$ and $b_{in}$; a $k$-bit complement array integer multiplier can be constructed in the following steps (a plaintext reference model follows the list).
\begin{enumerate}
\item Prepare three kinds of $1$-bit adder for complement multiplier:
\begin{enumerate}
\item 0-adder: The same as $1$-bit adder above.
\item $1$-adder: Input two addends $a$ and $b$ and the carry $c$, return $out = \overline{(c \bigoplus (\overline{a} \bigoplus b))}$ and $cout = (\overline{a} \wedge b ) \vee (\overline{a} \wedge c ) \vee (b \wedge c)$.
\item 2-adder: Input two addends $a$ and $b$ and the carry $c$, return $out = (c \bigoplus (\overline{a} \bigoplus \overline{b}))$ and $cout = \overline{(\overline{a} \wedge \overline{b} ) \vee (\overline{a} \wedge c ) \vee (\overline{b} \wedge c)}$.
\end{enumerate}
\item Construct a $k$-bit complement array integer multiplier according to the following rules, as shown in Figure 1.
\begin{enumerate}
\item Arrange the adders in $k$ rows and $k - 1$ columns. The inputs $a$ and $b$ of the adder in row $i$ and column $j$ are $a_{in}[j] \wedge b_{in}[i - 1]$ and $a_{in}[j - 1] \wedge b_{in}[i]$. The input $c$ of each adder is $0$ in the first row, and the $cout$ of the adder above in the other rows.
\item The highest bit of each addend (that is to say, $a_{in}[k - 1]$ and $b_{in}[k - 1]$) has the weight $-1$.
\item If two input addends have no weight then use 0-adder.
\item If one of the input addends has the weight $-1$ after AND gate or $1$-adder then use $1$-adder.
\item If one of the addends is an output of a $1$-adder and the other addend has the weight $-1$ after AND gate, use 2-adder.
\item If one of the addends is an output of a 2-adder, use 2-adder.
\end{enumerate}
\end{enumerate}
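As a plaintext reference model for testing, the sketch below reproduces $k$-bit complement multiplication by shift-and-add over sign-extended partial products, giving the most significant bit of $b_{in}$ its weight $-1$ by negating its partial product. This is \emph{not} the array structure of Figure 1 (which folds the correction into the $1$-adders and 2-adders), but it computes the same function, reusing \texttt{add\_k}, \texttt{to\_bits} and \texttt{from\_bits} from the adder sketch above.
\begin{verbatim}
def mul_k(a_bits, b_bits):
    k = len(a_bits)
    acc = [0] * (2 * k)
    for i, bi in enumerate(b_bits):
        pp = [0] * i + [x & bi for x in a_bits]  # shifted partial product
        pp += [pp[-1]] * (2 * k - len(pp))       # sign-extend to 2k bits
        if i == k - 1:                           # MSB of b has weight -1
            pp = add_k([1 - x for x in pp], to_bits(1, 2 * k))
        acc = add_k(acc, pp)
    return acc

for a in range(-4, 4):
    for b in range(-4, 4):
        assert from_bits(mul_k(to_bits(a, 3), to_bits(b, 3))) == a * b
\end{verbatim}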
Obviously a subtractor can be constructed from an adder and a multiplier. Assume that there are two $k$-bit inputs $a_{in}$ and $b_{in}$; a $k$-bit complement array integer subtractor can be calculated as $out = a_{in} + (-1) \times b_{in}$ (in the implementation, the negation is realized in two's complement by flipping the bits of $b_{in}$ and adding one).
Assume that there are a $2k$-bit dividend $a_{in}$ and a $k$-bit divisor $b_{in}$; a $k$-bit complement array integer divider~\cite{Vergos2007} can be constructed in the following steps.
\begin{enumerate}
\item Prepare the controlled adder/subtractor (CAS) for the divider. We use three bootstrapped XOR gates, two bootstrapped OR gates and two bootstrapped AND gates to construct the CAS. The CAS takes $a_{in}[i]$, $b_{in}[i]$, $c_{in}[i]$ and $p$ as input, and outputs $out[i]$ and the carry $c[i + 1]$. The construction of a CAS is shown in Figure 2 (a plaintext model of the CAS cell is sketched after Figure 3). The CAS performs addition if $p = 0$ and subtraction if $p = 1$.
\begin{figure}[htb]
\centering
\label{fig2}
\includegraphics[width=1.0\linewidth]{fig2-CAS}
\caption{The construction of CAS}
\end{figure}
\item Then design an absolute-value array divider with the CAS above. It takes the $2k$-bit dividend $a_{in}$ and the $k$-bit divisor $b_{in}$ as input, and outputs the quotient $q$ and the remainder $r$.
\item XOR gates are used to decide the sign bit of the quotient, realizing division of both positive and negative integers.
\item To facilitate the operation, the input and output of the divider are unified in complement format. As a result, a compensation device is needed when processing the input and the output. To compute the complement, we first XOR the sign bit with each bit, then add the sign bit to the result, and finally obtain the complement code.
\item Construct the $k$-bit complement array divider with an absolute-value array divider, an XOR gate and two compensation devices. The $k$-bit complement array divider takes the $2k$-bit complement dividend $a_{in}$ and the $k$-bit complement divisor $b_{in}$ as input, and outputs the complement quotient $q$ and the complement remainder $r$. The structure of the $k$-bit complement array divider is shown in Figure 3. For simplicity, we set $k = 3$ in this figure.
\end{enumerate}
\begin{figure}[htb]
\centering
\label{fig3}
\includegraphics[width=1.0\linewidth]{fig3-divider}
\caption{The construction of $k$-bit complement array divider}
\end{figure}
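In plaintext form, the CAS cell of Figure 2 is a full adder on $(a, b \oplus p, c)$: with $p = 0$ it adds $b$, and with $p = 1$ it adds $\overline{b}$, which together with a carry-in of one at the least significant position realizes two's-complement subtraction. A minimal sketch, mirroring the gate-level formula used later for the homomorphic CAS:
\begin{verbatim}
def cas(a, b, c, p):
    t = b ^ p                           # p = 0: add, p = 1: subtract
    out = a ^ (t ^ c)                   # three XOR gates in total
    carry = ((a | c) & t) | (a & c)     # two OR and two AND gates
    return out, carry
\end{verbatim}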
\subsection{Linear regression}
In unary linear regression, there is a closed-form calculation formula for the global optimal solution when training the model. According to the definition of linear regression, we can evaluate the accuracy of the model. Suppose we observe $m$ data pairs and call them $\{(x_i, y_i), i = 1, ..., m\}$. We implement unary linear regression and utilize its calculation formula to train the model:
\begin{equation}
(\omega ^* , b ^*) = argmin_{(\omega, b)} \sum_{i = 1}^{m}(f(x_i) - y_i)^2
\end{equation}
Differentiating the above objective and setting the derivative to zero yields the calculation formula.
\begin{equation}
\omega = \frac{\sum_{i = 1}^{m} y_i(x_i - \bar{x})}{\sum_{i = 1}^{m} x_i^2 - \frac{1}{m} (\sum_{i = 1}^{m} x_i)^2 }, \bar{x} = \frac{1}{m}\sum_{i = 1}^{m} x_i
\end{equation}
\begin{equation}
b = \frac{1}{m} \sum_{i=1}^{m} (y_i - \omega x_i)
\end{equation}
where $\omega$ is the slope of the line, $b$ is the y-intercept, and $\bar{x}$ stands for the average of $x_1, x_2, \cdots, x_m$.
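A plaintext sketch of this calculation formula method (Equations (2) and (3)) follows; the homomorphic version evaluates the same arithmetic with the $k$-bit complement operators on ciphertexts.
\begin{verbatim}
def fit_linear(xs, ys):
    m = len(xs)
    x_bar = sum(xs) / m
    w = (sum(y * (x - x_bar) for x, y in zip(xs, ys))
         / (sum(x * x for x in xs) - sum(xs) ** 2 / m))
    b = sum(y - w * x for x, y in zip(xs, ys)) / m
    return w, b

assert fit_linear([0, 1, 2, 3], [1, 3, 5, 7]) == (2.0, 1.0)  # y = 2x + 1
\end{verbatim}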
In most linear regression settings, there is no calculation formula that yields the global optimal solution; the common practice is to use the gradient descent (GD) method to obtain a local optimum through multiple iterations. The iterative formula of the classical gradient descent method is as follows.
\begin{equation}
\theta_j^{t + 1} = \theta_j^t - \alpha \frac{\partial J(\theta^t)}{\partial \theta_j^t}
\end{equation}
\begin{equation}
J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} (h_{\theta}(x_i) - y_i)^2
\end{equation}
where $\alpha$ is the learning rate, $t$ is the number of iterations, $\theta$ is the parameter vector of the model, $J(\theta)$ is the cost function, $j$ is the index of the parameter to be solved, $m$ is the number of training data and $h_{\theta}$ stands for the prediction model.
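As a plain (unencrypted) reference for the iteration above, with $h_{\theta}(x) = \omega x + b$ and the cost $J$ of Equation (5):
\begin{verbatim}
def gd_fit(xs, ys, alpha=0.05, iters=2000):
    w = b = 0.0
    m = len(xs)
    for _ in range(iters):
        gw = sum((w * x + b - y) * x for x, y in zip(xs, ys)) / m
        gb = sum((w * x + b - y) for x, y in zip(xs, ys)) / m
        w, b = w - alpha * gw, b - alpha * gb
    return w, b

w, b = gd_fit([0, 1, 2, 3], [1, 3, 5, 7])   # y = 2x + 1
assert abs(w - 2) < 1e-3 and abs(b - 1) < 1e-3
\end{verbatim}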
\section{The proposed privacy-preserving distributed machine learning}
\label{3Scheme}
Although the MKTFHE scheme provides bootstrapped NAND gates, it is time-consuming to use NAND gates directly for privacy computing on the Industrial Internet. We expand the NAND gate to the other fundamental gates to better support mathematical operators. On top of these, we construct an adder, a subtractor, a multiplier and a divider, so that users can compose mathematical operations arbitrarily.
\subsection{Homomorphic fundamental gates}
In previous works, the other fundamental gates (besides the NAND gate) were constructed by joining multi-key bootstrapped NAND gates together. Although only the multi-key bootstrapped NAND gate is described in~\cite{MKTFHE}, any binary bootstrapped gate can be evaluated in the same way. Our scheme uses the same approach as MKTFHE to directly evaluate the following basic gates in a multi-key version.
\begin{itemize}
\item $\mathsf{MKBootsAND}(c_1, c_2, \{ pk_i \}_{i \in [k]})$: Given ciphertexts $c_1$ and $c_2$, this algorithm first extends two input ciphertexts into $c_1^{'}$ and $c_2^{'}$ under the same key. Then it evaluates the AND gate by computing:
\begin{equation}
c^{'} = (0, - \frac{1}{8}) + c_1 + c_2
\end{equation}
homomorphically on the encrypted bits. Then this algorithm evaluates the decryption circuit of the ciphertext $c^{'}$ to bootstrap it. Finally, it runs the $\mathsf{MKTFHE.MKKeySwitch}(c, \{KS_i\}_{i \in [k]})$ algorithm and outputs a TLWE ciphertext encrypting $m = m_1 \wedge m_2$.
\item $\mathsf{MKBootsOR}(c_1, c_2, \{ pk_i \}_{i \in [k]})$: Given ciphertexts $c_1$ and $c_2$, this algorithm first extends two input ciphertexts into $c_1^{'}$ and $c_2^{'}$ and evaluates the OR gate by computing:
\begin{equation}
c^{'} = (0, \frac{1}{8}) + c_1 + c_2
\end{equation}
homomorphically on the encrypted bits. Then this algorithm bootstraps it, runs the $\mathsf{MKTFHE.MKKeySwitch}(c, \{KS_i\}_{i \in [k]})$ algorithm and outputs a TLWE ciphertext encrypting $m = m_1 \vee m_2$.
\item $\mathsf{MKNOT}(c, mkparams)$: Take ciphertext $c$ as input, this algorithm evaluates the NOT gate by computing:
\begin{equation}
c^{'} = (0, \frac{1}{4}) - c
\end{equation}
homomorphically. This computation will not increase the noise in ciphertext or change the key so that no bootstrapping and key-switching process is needed.
\item $\mathsf{MKBootsNOR}(c_1, c_2, \{ pk_i \}_{i \in [k]})$: Given ciphertexts $c_1$ and $c_2$, this algorithm first extends the two input ciphertexts into $c_1^{'}$ and $c_2^{'}$ and evaluates the NOR gate by computing:
\begin{equation}
c^{'} = (0, \frac{1}{8}) - c_1 - c_2
\end{equation}
homomorphically on the encrypted bits. Then this algorithm bootstraps it, runs the $\mathsf{MKTFHE.MKKeySwitch}(c, \{KS_i\}_{i \in [k]})$ algorithm and outputs a TLWE ciphertext encrypting $m = m_1 \bar{\vee} m_2$.
\item $\mathsf{MKBootsXOR}(c_1, c_2, \{ pk_i \}_{i \in [k]})$: Given ciphertexts $c_1$ and $c_2$, this algorithm first extends two input ciphertexts and evaluates the XOR gate by computing:
\begin{equation}
c^{'} = 2 \cdot (c_1 - c_2)
\end{equation}
homomorphically. Then it runs the bootstrapping and key-switching algorithms, and outputs a TLWE ciphertext encrypting $m = m_1 \bigoplus m_2$.
\item $\mathsf{MKBootsXNOR}(c_1, c_2, \{ pk_i \}_{i \in [k]})$: Given ciphertexts $c_1$ and $c_2$, this algorithm first extends the two input ciphertexts and evaluates the XNOR gate by computing:
\begin{equation}
c^{'} = \frac{1}{4} - 2 \cdot (c_1 - c_2)
\end{equation}
homomorphically. Then it runs the bootstrapping and key-switching algorithms, and outputs a TLWE ciphertext encrypting $m = m_1 \bigodot m_2$.
\end{itemize}
First of all, a mapping function $ModToT$ is required to transform a message bit $m$ into an element on the torus, like $(0, \frac{1}{8})$ in Equation (6), so that all of the following computations are executed on the torus. The goal of Equation (6) is to compute $m_1 \wedge m_2$; after mapping the message to a torus element, $LweSub$ is called to calculate $c^{'}$ by subtraction. Similarly in Equation (7), after the mapping, $LweAdd$ is called to calculate $c^{'}$. As shown in Equation (10), $LweSub$ and $LweMul$ are called for XOR (and likewise for XNOR in Equation (11)). $NOT$ is the simplest operation, needing only the subtraction of $LweSub$. Luckily, $ModToT$ and $LweSub$ are already provided in the MKTFHE library. We have to design and implement the two algorithms $LweAdd$ and $LweMul$ ourselves, to achieve addition and scalar multiplication of TLWE ciphertexts.
\subsection{Two important components used in bootstrapped gates}
This subsection illustrates two components used in the bootstrapped gates, $LweAdd$ and $LweMul$, which are missing in~\cite{MKTFHE} (a short sketch in code follows Algorithm 2). The idea of $LweAdd$ is to execute addition on the multi-key TLWE ciphertext vectors $c_1$ and $c_2$ component by component, refreshing the current variance by adding the variance of ciphertext $c_1$ and the variance of ciphertext $c_2$ to obtain the variance of ciphertext $c$, as shown in Algorithm 1. As shown in Equation (10), $LweMul$ is only required for XOR and XNOR. The multiplication is a scalar multiplication, expressed as the addend $\{ a_1 \}_{i, j}$ plus $k$ times $\{ a_2 \}_{i, j}$; here the current variance is refreshed by adding the variance of ciphertext $c_1$ and $k^2$ times the variance of ciphertext $c_2$, as shown in Algorithm 2.
\begin{table*}[htb]
\centering
\label{Algorithm 1}
\resizebox{\linewidth}{!}{
\begin{tabular}{rl}
\hline
\multicolumn{2}{l}{\textbf{Algorithm 1} Multi-key TLWE ciphertexts addition ($LweAdd$)}\\
\hline
\textbf{Input}:&MKTFHE parameter set, number of the participants $p$ and two MKTLWE ciphertexts\\
&$c_1 = (a_1, b_1)$ and $c_2 = (a_2, b_2)$\\
\textbf{Output}:&A MKTLWE ciphertext $c = (a, b) = c_1 + c_2$\\
1.&Extract the dimension of the lattice $n$ in parameter set.\\
2.&For $i = 1$ to $p$ do\\
3.&$\quad$For $j = 1$ to $n$ do\\
4.&$\quad\quad$Compute $\{ a \}_{i, j} = \{ a_1 \}_{i, j} + \{ a_2 \}_{i, j}$\\
5.&$\quad$End for\\
6.&End for\\
7.&Compute $b = b_1 + b_2$\\
8.&Refresh the current variance\\
\hline
\end{tabular}}
\end{table*}
\begin{table*}[htb]
\centering
\label{Algorithm 2}
\resizebox{\linewidth}{!}{
\begin{tabular}{rl}
\hline
\multicolumn{2}{l}{\textbf{Algorithm 2} Multi-key TLWE ciphertexts multiplication ($LweMul$)}\\
\hline
\textbf{Input}:&MKTFHE parameter set, number of the participants $p$, the coefficient $k$ and two MKTLWE \\
&ciphertexts $c_1 = (a_1, b_1)$ and $c_2 = (a_2, b_2)$\\
\textbf{Output}:&A MKTLWE ciphertext $c = (a, b) = c_1 + k \cdot c_2$\\
1.&Extract the dimension of the lattice $n$ in parameter set.\\
2.&For $i = 1$ to $p$ do\\
3.&$\quad$For $j = 1$ to $n$ do\\
4.&$\quad\quad$Compute $\{ a \}_{i, j} = \{ a_1 \}_{i, j} + k \cdot \{ a_2 \}_{i, j}$\\
5.&$\quad$End for\\
6.&End for\\
7.&Compute $b = b_1 + k \cdot b_2$\\
8.&Refresh the current variance\\
\hline
\end{tabular}}
\end{table*}
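A compact sketch of Algorithms 1 and 2 follows, modeling a multi-key TLWE ciphertext as a triple \texttt{(a, b, var)} where \texttt{a} is a $p \times n$ matrix of torus elements, \texttt{b} a torus element and \texttt{var} the tracked noise variance (the field layout is illustrative, not the library's actual data structure):
\begin{verbatim}
def lwe_add(c1, c2):                     # Algorithm 1: c = c1 + c2
    (a1, b1, v1), (a2, b2, v2) = c1, c2
    a = [[(x + y) % 1.0 for x, y in zip(r1, r2)]
         for r1, r2 in zip(a1, a2)]
    return (a, (b1 + b2) % 1.0, v1 + v2)              # variances add

def lwe_mul(c1, c2, k):                  # Algorithm 2: c = c1 + k*c2
    (a1, b1, v1), (a2, b2, v2) = c1, c2
    a = [[(x + k * y) % 1.0 for x, y in zip(r1, r2)]
         for r1, r2 in zip(a1, a2)]
    return (a, (b1 + k * b2) % 1.0, v1 + k * k * v2)  # noise scales by k^2
\end{verbatim}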
We summarize the operations between two LWE ciphertexts used in each basic binary gate in Table 1.
\begin{table}[htb]
\centering
\label{tab1}
\caption{Operations used in each basic gate in our scheme}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Gate&$BS$&$Add$&$Sub$&$Mul$&$ModToT$\\
\hline
AND&$\surd$&$\surd$&&&$\surd$\\
OR&$\surd$&$\surd$&&&$\surd$\\
NOT&&&$\surd$&&$\surd$\\
NAND&$\surd$&&$\surd$&&$\surd$\\
NOR&$\surd$&&$\surd$&&$\surd$\\
XOR&$\surd$&&$\surd$&$\surd$&\\
XNOR&$\surd$&&$\surd$&$\surd$&$\surd$\\
\hline
\end{tabular}
\end{table}
\subsection{
\texorpdfstring{$k$-bit homomorphic adder}{k-bit homomorphic adder}
}
After achieving the basic gate operations in subsection 4.1, there is still a gap between such gates and complicated machine learning functions or private computing. In this subsection, we construct a $k$-bit complement array integer adder based on the MKTFHE scheme. We choose the two's-complement representation instead of sign-magnitude or ones'-complement, since complement arithmetic handles both positive and negative numbers in the same way. A $k$-bit complement array integer adder can be constructed in the following four steps. The main idea was described in subsection 3.3; the only difference is that a bootstrapping $BS$ is necessary after each gate to decrease the noise. The $k$-bit complement array integer adder is shown in Algorithm 3.
\begin{table*}[htb]
\centering
\label{Algorithm 3}
\resizebox{\linewidth}{!}{
\begin{tabular}{rl}
\hline
\multicolumn{2}{l}{\textbf{Algorithm 3} $k$-bit complement array integer adder}\\
\hline
\textbf{Input}:&MKTFHE parameter set, two ciphertexts $c_1 = \mathsf{MKTFHE.SymEnc}(\mu_1)$ and \\
&$c_2 = \mathsf{MKTFHE.SymEnc}(\mu_2)$ and the public keys of all participants\\
\textbf{Output}:&Ciphertext $c = \mathsf{MKTFHE.SymEnc}(\mu_1 + \mu_2)$\\
1.&Set the first carry $c_c[0] = 0$\\
2.&Encrypt addends and carry with public keys\\
3.&For $i = 0$ to $k - 1$ do\\
4.&$\quad$Compute $out[i]$ and $c_c[i+1]$\\
5.&End for\\
\hline
\end{tabular}}
\end{table*}
\begin{enumerate}
\item Construct a homomorphic semi-adder with two bootstrapped XOR gates: Input two ciphertexts $c_a$ and $c_b$ and the encrypted carry $c_c$, return $out = BS(c_c \bigoplus(BS(c_a \bigoplus c_b)))$, where $BS$ stands for bootstrapping process.
\item Construct a homomorphic carrier with three bootstrapped AND gates and two bootstrapped OR gates: Input two ciphertexts $c_a$ and $c_b$ and the encrypted carry $c_c$, return $cout = BS( BS( BS( c_a \wedge c_b) \vee BS(c_a \wedge c_c) ) \vee BS(c_b \wedge c_c) )$.
\item Construct a homomorphic $1$-bit adder by merging a homomorphic semi-adder and a homomorphic carrier: Input two ciphertexts $c_a$ and $c_b$ and the encrypted carry $c_c$, return $out = BS(c_c \bigoplus(BS(c_a \bigoplus c_b)))$ and $cout = BS( BS( BS( c_a \wedge c_b) \vee BS(c_a \wedge c_c) ) \vee BS(c_b \wedge c_c) )$.
\item Construct a homomorphic $k$-bit complement array integer adder with $k$ homomorphic $1$-bit adders: Input two ciphertexts $c_a$ and $c_b$, set the encrypted carry $c_c[0] = 0$, compute $out[i] = BS(c_c[i] \bigoplus(BS(c_a[i] \bigoplus c_b[i])))$ and $c_c[i+1] = BS( BS( BS( c_a[i] \wedge c_b[i]) \vee BS(c_a[i] \wedge c_c[i]) ) \vee BS(c_b[i] \wedge c_c[i]) )$, where $i \in \{ 0, 1, ..., k-1\}$. Return the result $out$.
\end{enumerate}
In the $k$-bit complement array integer adder, the output carry of the last $1$-bit adder is discarded, as is standard in two's-complement addition.
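Note that composing the semi-adder and carrier literally costs seven bootstrapped gates per bit (two XOR, three AND, two OR). The five-gate count used in the experiments of Section 5 can be reached by sharing the first XOR between the sum and the carry, a standard full-adder identity; the sketch below (reusing the gate functions of the plaintext sketch in subsection 3.2) is our assumption of such an optimized layout, not a description of the MKTFHE internals.
\begin{verbatim}
def hom_full_adder(ca, cb, cc):
    axb = XOR(ca, cb)                      # shared between sum and carry
    s = XOR(cc, axb)
    carry = OR(AND(ca, cb), AND(cc, axb))  # majority via the shared XOR
    return s, carry                        # five bootstrapped gates total
\end{verbatim}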
\begin{figure}[htb]
\centering
\label{fig4}
\includegraphics[width=1.0\linewidth]{fig4-adder}
\caption{The construction of $k$-bit homomorphic complement array integer adder. GB stands for gate-bootstrapping.}
\end{figure}
\subsection{
\texorpdfstring{$k$-bit homomorphic multiplier}{k-bit homomorphic multiplier}
}
We also construct a $k$-bit complement array integer multiplier based on the MKTFHE scheme, following the steps below. The main idea was described in subsection 3.3; the only difference is that a bootstrapping $BS$ is necessary after each gate to decrease the noise. The $k$-bit complement array integer multiplier is shown in Algorithm 4.
\begin{table*}[htb]
\centering
\label{Algorithm 4}
\resizebox{\linewidth}{!}{
\begin{tabular}{rl}
\hline
\multicolumn{2}{l}{\textbf{Algorithm 4} $k$-bit complement array integer multiplier}\\
\hline
\textbf{Input}:&MKTFHE parameter set, two ciphertexts $c_1 = \mathsf{MKTFHE.SymEnc}(\mu_1)$ and \\
&$c_2 = \mathsf{MKTFHE.SymEnc}(\mu_2)$ and the public keys of all participants.\\
\textbf{Output}:&A ciphertext $c = \mathsf{MKTFHE.SymEnc}(\mu_1 \times \mu_2)$\\
1.&Set the first carry $c_c[0] = 0$\\
2.&Encrypt numbers and carry with public keys\\
3.&For $i = 1$ to $k - 2$ do\\
4.&$\quad$For $j = 0$ to $k - 2$ do\\
5.&$\quad$$\quad$Compute the AND gate.\\
6.&$\quad$$\quad$If $j = 0$, compute the 2-homadder\\
7.&$\quad$$\quad$Else if $i + j \geq k - 1$, compute the $1$-homadder\\
8.&$\quad$$\quad$Else compute the 0-homadder\\
9.&$\quad$End for\\
10.&$\quad$Get the $(j + k)$th bit of the result\\
11.&End for\\
12.&For $j = 0$ to $k - 1$ do \\
13.&$\quad$Compute the $j$th bit of the output by 2-homadder.\\
14.&End for\\
\hline
\end{tabular}}
\end{table*}
\begin{enumerate}
\item Prepare three kinds of homomorphic $1$-bit adder:
\begin{enumerate}
\item 0-homadder: The same as the homomorphic $1$-bit adder above.
\item $1$-homadder: Input two ciphertexts $c_a$ and $c_b$ and the encrypted carry $c_c$, return $out = \overline{BS(c_c \bigoplus BS(\overline{c_a} \bigoplus c_b))}$ and $c_{out} = BS( BS( BS( \overline{c_a} \wedge c_b) \vee BS(\overline{c_a} \wedge c_c) ) \vee BS(c_b \wedge c_c) )$.
\item 2-homadder: Input two ciphertexts $c_a$ and $c_b$ and the encrypted carry $c_c$, return $out = BS(c_c \bigoplus BS(\overline{c_a} \bigoplus \overline{c_b}))$ and $c_{out}=\overline{BS( BS( BS( \overline{c_a} \wedge \overline{c_b}) \vee BS(\overline{c_a} \wedge c_c) ) \vee BS(\overline{c_b} \wedge c_c) )}$.
\end{enumerate}
\item Construct a $k$-bit homomorphic complement array integer multiplier according to the rules in subsection 3.3, but using the homomorphic adders instead of the plain adders.
\end{enumerate}
We use Algorithm 4 to evaluate the $k$-bit complement array integer multiplier homomorphically, where $k$ is the number of bits of the operands. In the $k$-bit complement array integer multiplier, the output carry of the last $1$-bit adder is taken as the highest bit of the result.
\subsection{
\texorpdfstring{$k$-bit homomorphic subtractor and divider}{k-bit homomorphic subtractor and divider}
}
We construct a $k$-bit complement array integer subtractor by using a $k$-bit complement array integer adder and a $k$-bit complement array integer multiplier. A $k$-bit complement array integer subtractor is shown in Algorithm 5.
We also construct a $k$-bit complement array integer divider based on the MKTFHE scheme. The main idea was described in subsection 3.3. The only difference is that bootstrapping $BS$ is necessary after each computation to decrease noise. A $k$-bit complement array integer divider can be constructed in the following steps:
\begin{enumerate}
\item Prepare the homomorphic CAS for the divider. We use three bootstrapped XOR gates, two bootstrapped OR gates and two bootstrapped AND gates to construct the homomorphic CAS. The homomorphic CAS takes ciphertexts $c_a[i]$, $c_b[i]$, $c_c[i]$ and $c_p$ as input, and outputs $c_{out}[i]$ and the carry $c_{c}[i + 1]$ satisfying $c_{out}[i] = BS(c_a[i] \bigoplus BS( BS( c_b[i] \bigoplus c_p ) \bigoplus c_c[i] ) )$ and $c_{c}[i + 1] = BS( BS( BS(c_a[i] \vee c_c[i]) \wedge BS(c_b[i] \bigoplus c_p) ) \vee BS(c_a[i] \wedge c_c[i]) )$. The homomorphic CAS performs addition if the message of $c_p$ is $0$ and subtraction if the message of $c_p$ is $1$.
\item Then design the homomorphic absolute value array divider with the homomorphic CAS above. It takes the $2k$-bit dividend $c_a$ and the $k$-bit divisor $c_b$ as input, and output the quotient $c_q$ and the remainder $c_r$.
\item Homomorphic XOR gates are used to decide the sign bit of the ciphertext of quotient to realize the homomorphic division of both positive and negative integers.
\item A homomorphic compensation device is also designed to realize the complement division.
\item Construct the $k$-bit homomorphic complement array divider with a homomorphic absolute-value array divider, a homomorphic XOR gate and two homomorphic compensation devices. The $k$-bit homomorphic complement array divider takes the ciphertexts of the $2k$-bit complement dividend $a_{in}$ and the $k$-bit complement divisor $b_{in}$ as input, and outputs the ciphertexts of the complement quotient $q$ and the complement remainder $r$.
\end{enumerate}
\begin{table*}[htb]
\centering
\label{Algorithm 5}
\resizebox{\linewidth}{!}{
\begin{tabular}{rl}
\hline
\multicolumn{2}{l}{\textbf{Algorithm 5} $k$-bit complement array integer subtractor}\\
\hline
\textbf{Input}:&MKTFHE parameter set, two ciphertexts $c_1 = \mathsf{MKTFHE.SymEnc}(\mu_1)$ and \\
&$c_2 = \mathsf{MKTFHE.SymEnc}(\mu_2)$ and the public keys of all participants.\\
\textbf{Output}:&A ciphertext $c = \mathsf{MKTFHE.SymEnc}(\mu_1 - \mu_2)$\\
1.&Flip every bit of $c_2$ and add $1$ to compute $-c_2$\\
2.&Set the first carry $c_c[0] = 0$\\
3.&Encrypt addends and carry with public keys\\
4.&For $i = 0$ to $k - 1$ do\\
5.&$\quad$Compute $out[i]$ and $c_c[i+1]$\\
6.&End for\\
\hline
\end{tabular}}
\end{table*}
\begin{table*}[htb]
\centering
\label{Algorithm 6}
\resizebox{\linewidth}{!}{
\begin{tabular}{rl}
\hline
\multicolumn{2}{l}{\textbf{Algorithm 6} $k$-bit complement array integer divider}\\
\hline
\textbf{Input}:&MKTFHE parameter set, two ciphertexts $c_1 = \mathsf{MKTFHE.SymEnc}(\mu_1)$ and \\
&$c_2 = \mathsf{MKTFHE.SymEnc}(\mu_2)$ and the public keys of all participants.\\
\textbf{Output}:&Two ciphertexts $q = \mathsf{MKTFHE.SymEnc}(\mu_1 / \mu_2)$ and $r = \mathsf{MKTFHE.SymEnc}(\mu_1 \bmod \mu_2)$\\
1.&Encrypt the $p_0$ with public keys\\
2.&Change the inputs into complement format with compensation device.\\
3.&Calculate the first layer of the divider.\\
4.&For $i = 1$ to $k - 1$ do\\
5.&$\quad$For $j = 0$ to $k - 1$ do\\
6.&$\quad$$\quad$Compute the CAS unit.\\
7.&$\quad$End for\\
8.&End for\\
\hline
\end{tabular}}
\end{table*}
We use Algorithm 6 to evaluate $k$-bit complement array integer divider homomorphically, where $k$ is the number of bits of the divisor.
In summary, this section first expands the NAND gate to six other basic gates: AND, OR, NOT, NOR, XOR and XNOR. To compute these gates, two important components, $LweAdd$ and $LweMul$, are proposed. Finally, we construct the adder, subtractor, multiplier and divider. With these homomorphic operators, we can evaluate arbitrary polynomials without decryption.
\subsection{Multi-key homomorphic distributed linear regression scheme}
We first implement a multi-key fully homomorphic encryption scheme in which the secret key is distributed among the parties, while the corresponding collective public key $pk$ is known to all of them. Thus, each party can independently compute on ciphertexts encrypted under $pk$, but all parties have to collaborate to decrypt a ciphertext. This enables the participants to train a collectively encrypted model that cannot be decrypted as long as at least one participant is honest and refuses to participate in the decryption. We then implement two training methods for the linear regression model: the calculation formula method and the GD method.
Calculation formula method: The advantage of this method is that the global optimal solution can be obtained without multiple iterations, which reduces the computational cost in both time and space. We first estimate the size of the input data, and then select homomorphic operators with the appropriate bit width to calculate the above formula. Since our operators are integer operators, we directly round the input data while preserving the required accuracy.
We use Algorithm 7 to train the linear regression model under multi-key homomorphic encryption.
GD method: Considering that our homomorphic operators only support integer calculation, while the learning rate $\alpha$ is usually a floating-point number less than 1, we rewrite the GD iterative formula of subsection 3.4 to scale $\alpha$ up to an integer. We substitute the linear regression model into the original GD iterative formula; with zooming multiple $n$, the practical integer GD iterative formulas are:
\begin{equation}
b' = b \cdot n - \alpha \cdot n \cdot \frac{2}{m} \cdot \sum_{i = 1}^{m} [y_i \cdot n - ( \frac{\omega}{n} \cdot x_i - \frac{b}{n} )]
\end{equation}
\begin{equation}
\omega' = \omega \cdot n - \alpha \cdot n \cdot \frac{2}{m} \cdot \sum_{i = 1}^{m} [y_i \cdot n - ( \frac{\omega}{n} \cdot x_i - \frac{b}{n} )]
\end{equation}
\begin{equation}
loss = \frac{1}{m} \cdot \sum_{i = 1}^{m} [y_i - (\omega \cdot x_i + b) / n]^2
\end{equation}
After obtaining the rewritten iterative formulas, we estimate the size of the input data to select the zooming multiple, the number of iterations, and homomorphic operators with the appropriate bit width.
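The sketch below illustrates the zooming trick in plain integer arithmetic: $\omega$ and $b$ are stored scaled by $n$, and $\alpha$ is replaced by the integer $\alpha n$, so that every update uses only the integer operations our homomorphic operators support. The rounding choices here are illustrative assumptions and simplify Equations (12)--(14) rather than following them to the letter.
\begin{verbatim}
def gd_step_int(w, b, data, alpha_n, n):
    # w, b are n-scaled integers; alpha_n = round(alpha * n)
    m = len(data)
    gw = 2 * sum((w * x + b - n * y) * x for x, y in data)
    gb = 2 * sum((w * x + b - n * y) for x, y in data)
    return w - alpha_n * gw // (m * n), b - alpha_n * gb // (m * n)

data = [(0, 1), (1, 3), (2, 5), (3, 7)]        # y = 2x + 1
w = b = 0
for _ in range(200):
    w, b = gd_step_int(w, b, data, alpha_n=50, n=1000)
# w/n and b/n now approximate the slope 2 and the intercept 1
\end{verbatim}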
We use Algorithm 8 to train the linear regression model under multi-key homomorphic encryption.
\begin{table*}[htb]
\centering
\label{Algorithm 7}
\resizebox{\linewidth}{!}{
\begin{tabular}{rl}
\hline
\multicolumn{2}{l}{\textbf{Algorithm 7} Calculation formula method in multi-key fully homomorphic linear regression}\\
\hline
\textbf{Input}:&MKTFHE parameter set, two ciphertexts $c_{x_n} = \mathsf{MKTFHE.SymEnc}(x_n)$ and \\
&$c_{y_n} = \mathsf{MKTFHE.SymEnc}(y_n)$ and the public keys of all participants\\
\textbf{Output}:&Two ciphertext $c_{\omega} = \mathsf{MKTFHE.SymEnc}(\omega)$ and $c_b = \mathsf{MKTFHE.SymEnc}(b)$\\
1.&Reconstruct the public keys to bootstrapping key\\
2.&Extend the single-key ciphertexts to multi-key ciphertexts\\
3.&Calculate the average value $c_{\bar{x}}$ of $c_{x}$, following Equation (2)\\
4.&Calculate the $c_{\omega}$ by $c_{\bar{x}}$, following the Equation (2)\\
5.&Calculate the $c_{b}$ by $c_{\omega}$, following the Equation (3)\\
\hline
\end{tabular}}
\end{table*}
\begin{table*}[htb]
\centering
\label{Algorithm 8}
\resizebox{\linewidth}{!}{
\begin{tabular}{rl}
\hline
\multicolumn{2}{l}{\textbf{Algorithm 8} GD method in multi-key fully homomorphic linear regression}\\
\hline
\textbf{Input}:&MKTFHE parameter set, zooming multiple $n$, iterative times $d$, two sets of ciphertexts\\
&$c_{x_n} = \mathsf{MKTFHE.SymEnc}(x_n)$ and $c_{y_n} = \mathsf{MKTFHE.SymEnc}(y_n)$ and the public keys of all\\
&participants\\
\textbf{Output}:&Two ciphertext $c_{\omega} = \mathsf{MKTFHE.SymEnc}(\omega)$ and $c_b = \mathsf{MKTFHE.SymEnc}(b)$\\
1.&Reconstruct the public keys to bootstrapping key\\
2.&Extend the single-key ciphertexts to multi-key ciphertexts\\
3.&Given iteration initial value to $\omega$ and $b$\\
4.&For $i = 1$ to $d$ do\\
5.&$\quad$Calculate the GD iterative Equations (12) and (13)\\
6.&$\quad$Set $\omega = \omega '$ and $b = b'$\\
7.&End for\\
\hline
\end{tabular}}
\end{table*}
\section{Implementation and experiments}
\label{4Imp}
The test environment is the Ubuntu 18.04 operating system, with an Intel Xeon Gold 5220 CPU and 46512 MiB of memory.
\subsection{Experiments on homomorphic basic gates}
The MKTFHE scheme provides only homomorphic bootstrapped NAND gates. Our scheme constructs the other basic homomorphic bootstrapped gates in the same way as the bootstrapped NAND gate in MKTFHE. We compared our basic bootstrapped gates with gates built from NAND gates in the same test environment. Experiments show that our basic bootstrapped gates are as efficient as the bootstrapped NAND gate in the MKTFHE scheme, and much more efficient than gates with the same function constructed from bootstrapped NAND gates alone.
\begin{table}[htb]
\centering
\label{tab2}
\caption{The average time of each basic bootstrapped gate in the naïve scheme and in our scheme.}
\begin{tabular}{|c|c|c|}
\hline
Gate&Naïve scheme(s)&Our scheme(s)\\
\hline
AND&0.238671&0.238008\\
OR&0.584701&0.23558\\
NOT&0.236541&2e-06\\
NAND&0.235327&0.239638\\
NOR&0.470561&0.236153\\
XOR&0.71246&0.2354\\
XNOR&0.710777&0.23588\\
\hline
\end{tabular}
\end{table}
The results in Table 2 show that our scheme is more efficient than the naïve scheme, which chains bootstrapped NAND gates to realize the operations. The improvement is especially large for the bootstrapped OR, NOR, XOR and XNOR gates and the NOT gate: the time cost of a single bootstrapped XOR gate is reduced by about 67\%, while the NOT gate is evaluated almost instantaneously since no bootstrapping is needed.
\subsection{
\texorpdfstring{Experiments on $k$-bit homomorphic adder}{Experiments on k-bit homomorphic adder}
}
We construct a $k$-bit homomorphic complement integer adder based on the basic bootstrapped gates designed above. We create two participants for our experiment, each of which creates its addend and secret key individually. For simplicity, we set $k = 1, 2, ..., 8$.
The server takes as input the multi-key parameters, the ciphertexts encrypted under different keys by the two participants, and the public keys of the two participants. Then the server extends the ciphertexts, evaluates the $k$-bit adder homomorphically, and returns the ciphertext to the participants. The participants decrypt the ciphertext and obtain the result.
According to the structure of the $k$-bit homomorphic adder, computing the addition of two $k$-bit addends requires $k$ $1$-bit adders. As shown in subsection 4.3, it takes $5$ bootstrapped gates to evaluate a $1$-bit adder. That is to say, $5k$ basic bootstrapped gates are needed for a $k$-bit homomorphic adder. Obviously, the cost of the homomorphic evaluation grows linearly with the number of bootstrapped gates, so the cost of the adder grows linearly with the bit width of the inputs.
\begin{table}[htb]
\centering
\label{tab3}
\caption{The average time of $k$-bit homomorphic adder.}
\begin{tabular}{|c|c|c|c|}
\hline
Bits&Time(s)&Num of gates&Time/gate(s)\\
\hline
$1$-bit&1.19515&5&0.23903\\
$2$-bit&2.39738&10&0.239738\\
$3$-bit&3.60485&15&0.240323\\
$4$-bit&4.80614&20&0.240307\\
$5$-bit&6.01884&25&0.240754\\
$6$-bit&7.25483&30&0.241828\\
$7$-bit&8.43608&35&0.241031\\
$8$-bit&9.69468&40&0.242367\\
\hline
\end{tabular}
\end{table}
\begin{figure}[htb]
\centering
\label{fig5}
\includegraphics[width=1.0\linewidth]{fig5-cost_of_adder}
\caption{The cost of $k$-bit homomorphic complement array integer adder}
\end{figure}
From Table 3 and Figure 5, we can see that the number of binary bootstrapped gates grows linearly with the number of bits of the addends. As evaluation is the most expensive step in our scheme, the cost of the $k$-bit homomorphic complement adder grows linearly with the bit width of the inputs.
\subsection{
\texorpdfstring{Experiments on $k$-bit homomorphic multiplier}{Experiments on k-bit homomorphic multiplier}
}
To construct a $k$-bit homomorphic multiplier, we construct three kinds of homomorphic adders: the 0-homadder, $1$-homadder and 2-homadder. We then build the $k$-bit homomorphic complement integer multiplier from these adders and the basic bootstrapped gates designed above. The structure of the three homadders in subsection 4.4 shows that a homomorphic adder requires $7$ basic bootstrapped gates: $5$ gates in the $1$-bit adder and two AND gates to compute its inputs. From the structure of the $k$-bit multiplier in subsection 3.3, $k \times (k - 1)$ homomorphic adders are needed, i.e. $7 \times k \times (k - 1)$ basic bootstrapped gates for a $k$-bit homomorphic multiplier. So the cost of the multiplier grows quadratically with the bit width of the inputs. Note that the multiplier is optimized to run in parallel, so the average time consumed per gate is much less than the time of a single gate.
We create two participants for our experiment, each of which creates its complement number and secret key individually. For simplicity, we set $k = 2, 3, ..., 8$. The server takes as input the multi-key parameters, the ciphertexts encrypted under different keys by the two participants, and the public keys of the two participants. Then the server extends the ciphertexts, evaluates the $k$-bit multiplier homomorphically, and returns the ciphertext to the participants. The participants decrypt the ciphertext and obtain the result.
\begin{table}[htb]
\centering
\label{tab4}
\caption{The average time of the $k$-bit homomorphic multiplier, as well as the number of gates used in the multiplier.}
\begin{tabular}{|c|c|c|c|}
\hline
bit of mul&Time(s)&Gates&Time/gate(s)\\
\hline
2-bit mul&1.14368&14&0.081691429\\
3-bit mul&2.66844&42&0.063534286\\
4-bit mul&4.83101&84&0.057512024\\
5-bit mul&7.43843&140&0.053131643\\
6-bit mul&10.762&210&0.051247619\\
7-bit mul&14.6522&294&0.049837415\\
8-bit mul&19.1052&392&0.048737755\\
\hline
\end{tabular}
\end{table}
\begin{figure}[htb]
\centering
\label{fig6}
\includegraphics[width=1.0\linewidth]{fig6-cost_of_mul}
\caption{The cost of $k$-bit homomorphic complement array integer multiplier}
\end{figure}
From Table 4 and Figure 6, we can see that the number of binary bootstrapped gates grows quadratically with the number of bits of the operands. Accordingly, the cost of the $k$-bit homomorphic complement multiplier grows quadratically with the bit width of the inputs.
\subsection{
\texorpdfstring{Experiments on $k$-bit homomorphic subtractor and divider}{Experiments on k-bit homomorphic subtractor and divider}
}
We construct a $k$-bit homomorphic complement integer subtractor based on the basic bootstrapped gates designed above. We create two participants for our experiment, each of which creates its operand and secret key individually. For simplicity, we set $k = 1, 2, ..., 8$.
According to the structure of the $k$-bit homomorphic subtractor, computing a $k$-bit subtraction requires $k$ $1$-bit adders. As shown in subsection 4.3, it takes $5$ bootstrapped gates to evaluate a $1$-bit adder; in addition, a bootstrapped XOR gate per bit is used to compute the negation. That is to say, $6k$ basic bootstrapped gates are needed for a $k$-bit homomorphic subtractor. Obviously, the cost of the homomorphic evaluation grows linearly with the number of bootstrapped gates, so the cost of the subtractor grows linearly with the bit width of the inputs.
\begin{table}[htb]
\centering
\label{tab5}
\caption{The average time of experiments on $k$-bit homomorphic subtractor}
\begin{tabular}{|c|c|c|c|}
\hline
bit of sub&Time(s)&Gates&Time/gate(s)\\
\hline
1&1.99209&6&0.332015\\
2&2.90969&12&0.242474167\\
3&4.45103&18&0.247279444\\
4&5.84264&24&0.243443333\\
5&7.42889&30&0.247629667\\
6&8.93639&36&0.248233056\\
7&10.35&42&0.246428571\\
8&11.9013&48&0.24794375\\
\hline
\end{tabular}
\end{table}
\begin{figure}[htb]
\centering
\label{fig7}
\includegraphics[width=1.0\linewidth]{fig7-cost_of_sub}
\caption{The cost of $k$-bit homomorphic complement array integer subtractor}
\end{figure}
Experiments show that the cost of the $k$-bit homomorphic subtractor grows linearly with the bit width of the minuend.
We also construct a $k$-bit homomorphic complement integer divider in a similar way. We create two participants for our experiment, each of which creates its operand and secret key individually. For simplicity, we set $k = 1, 2, ..., 8$.
According to the structure of the $k$-bit homomorphic divider, a $k$-bit division requires a homomorphic absolute-value array divider, a homomorphic XOR gate and two homomorphic compensation devices. There are $7$ homomorphic gates in each hom-CAS and $k^2$ hom-CASs in the homomorphic absolute-value array divider, and it takes $2k$ homomorphic gates to construct a compensation device. As a result, $7k^2 + 4k + 1$ basic bootstrapped gates are needed for a $k$-bit homomorphic divider. Note that the divider is optimized more aggressively than the multiplier, and the CASs run almost in parallel, so a $k$-bit divider is faster than a $k$-bit multiplier, and the cost of the divider grows almost linearly with the number of layers.
\begin{table}[htb]
\centering
\label{tab6}
\caption{The average time of experiments on $k$-bit homomorphic divider}
\begin{tabular}{|c|c|c|c|}
\hline
bit of div&Time(s)&Layers&Time/layer(s)\\
\hline
1&1.483&1&1.483\\
2&4.09858&2&2.04929\\
3&7.08719&3&2.362396667\\
4&10.0022&4&2.50055\\
5&13.1937&5&2.63874\\
6&15.965&6&2.660833333\\
7&19.1003&7&2.728614286\\
8&22.2938&8&2.786725\\
\hline
\end{tabular}
\end{table}
\begin{figure}[htb]
\centering
\label{fig8}
\includegraphics[width=1.0\linewidth]{fig8-cost_of_div}
\caption{The cost of $k$-bit homomorphic complement array integer divider}
\end{figure}
\subsection{Experiments on multi-key homomorphic distributed linear regression}
As described above, the secret key is distributed among the parties, while the corresponding collective public key $pk$ is known to all of them; each party can compute on ciphertexts encrypted under $pk$ independently, but all parties must collaborate to decrypt, so the collectively encrypted model cannot be decrypted as long as at least one participant is honest and refuses to participate in the decryption.
Our multi-key fully homomorphic linear regression scheme involves three types of entities: participants, a cloud server, and a decryption party; we take two participants as an example. The whole scheme is shown in Figure 9, and the steps are as follows:
\begin{enumerate}
\item Participants wish to outsource computing, and each participant offers its own part of the data for model training. During the data encryption step, all participants call $\mathsf{MKTFHE.KeyGen}(mkparams)$ to independently generate their symmetric key, public key, bootstrapping key and key-switching key, then call $\mathsf{MKTFHE.SymEnc}(\mu)$ to encrypt the data. Finally, they upload the ciphertexts of the input data and the public keys to the cloud server.
\item The cloud server usually consists of one or more high-performance servers and holds no data of its own. After receiving the ciphertext data from the participants, the cloud server uses the participants' keys to generate a new bootstrapping key and extends the single-key ciphertexts of the input data to multi-key ciphertexts. Then the cloud server uses our homomorphic operators to train the linear regression model. After the training is finished, the encrypted model is sent to the decryption party.
\item When the decryption party receives the ciphertext of the trained model, it unites all the participants to call $\mathsf{MKTFHE.SymDec}(c, \{sk_i\})$ to decrypt the result.
\end{enumerate}
\begin{figure}[htb]
\centering
\label{fig9}
\includegraphics[width=1.0\linewidth]{fig9-protocal}
\caption{Multi-key fully homomorphic linear regression scheme}
\end{figure}
We show our parameter settings in Table 7, following the notation of the MKTFHE library. The estimated security level of our scheme is 110 bits, while the dimension of the RLWE problem is $k = 1$ (here $k$ follows the MKTFHE notation, not the number of parties).
\begin{table*}[htb]
\centering
\label{tab7}
\caption{Parameter sets of multi-key homomorphic linear regression}
\resizebox{\linewidth}{!}{
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
LWE-$n$&LWE-$\alpha$&LWE-$B'$&LWE-$d'$&RLWE-$N$&RLWE-$\beta$&RLWE-$B$&RLWE-$d$\\
\hline
560&$3.05 \times 10^{-5}$&$2^2$&8&1024&$3.72 \times 10^{-9}$&$2^9$&3\\
\hline
\end{tabular}}
\end{table*}
We train a multi-key fully homomorphic linear regression model with the two methods, using our proposed homomorphic operators. The input data is composed of several sets of linear data plus some random noise. Considering the size of the data, we chose 8-bit homomorphic operators to train the model with the formula method and 16-bit homomorphic operators with the GD method. The learning rate and the zooming multiple of the GD method are 0.001 and 10,000, respectively. We ran several experiments and found that the model converges within 10 iterations. We create $k$ participants in our experiment, each of which generates its own secret key to encrypt its own data. For simplicity, we set $k = 2, 4, 8$; columns 3--6 of Table 8 record the time of each step.
\begin{table*}[htb]
\centering
\label{tab8}
\caption{The result of experiments on multi-key fully homomorphic linear regression}
\resizebox{\linewidth}{!}{
\begin{tabular}{|c|c|c|c|c|c|}
\hline
method&Num of Parties&KeyGen(s)&Ciphertext extension(s)&Training(s)&Evaluation(s)\\
\hline
Formula&$k=2$&$1.984$&$0.0008$&$227.467$&$37.131$\\
Formula&$k=4$&$4.002$&$0.0016$&$445.593$&$68.968$\\
Formula&$k=8$&$8.916$&$0.0035$&$869.148$&$138.259$\\
GD&$k=2$&$2.025$&$0.0008$&$1133.33$/iter&$190.644$\\
GD&$k=4$&$4.008$&$0.0015$&$2092.17$/iter&$372.353$\\
\hline
\end{tabular}}
\end{table*}
The results in Table 8 show that the running times (key reconstruction, ciphertext extension, training, and evaluation) all grow linearly with the number of participants. Due to the large zooming multiple of the GD method, homomorphic operators with a larger bit width are selected; in addition, the GD method needs multiple iterations and its calculation process is more complex, so its training and evaluation times are much longer than those of the formula method.
Note that our linear regression scheme admits further optimization, such as more parallel execution. Moreover, with our homomorphic operators, more complex multi-key fully homomorphic machine learning schemes can be implemented.
\section{Conclusion and Discussion}
\label{5Con}
\subsection{Conclusion}
This paper proposes a series of $k$-bit homomorphic complement operators based on MKTFHE, which narrows the gap between the original NAND gate and complicated machine learning functions. Experiments show that the cost of the adder and the subtractor grows linearly with the bit width of the inputs, the cost of the multiplier grows quadratically, and the cost of the divider grows almost linearly with the number of layers in it.
To narrow the gap between multi-key bootstrapped NAND gates and multi-key homomorphic mathematical operators, we construct the other basic bootstrapped gates (AND, OR, NOT, NOR, XOR, and XNOR) in the same way as the bootstrapped NAND gate in MKTFHE, achieving the same efficiency as the NAND gate. Experiments show that constructing basic binary gates in this way is much more efficient than building them by directly joining bootstrapped NAND gates together, especially for the XOR and NOT gates. We then construct a $k$-bit complement adder, subtractor, multiplier and divider based on these basic binary bootstrapped gates. Finally, we train a distributed linear regression model utilizing our proposed multi-key homomorphic operators. The operators we designed can be directly used to build privacy-preserving distributed machine learning schemes in distributed communication systems.
\subsection{Discussion}
The plaintext space of the MKTFHE scheme is $\{0,1\}$, which is smaller than the plaintext space of other homomorphic encryption schemes such as BGV~\cite{BrakerskiGV14} or CKKS~\cite{CheonKKS17, CheonHKKS18}. Extending the message space on the torus may be helpful: it has been shown experimentally that integers or fixed-point numbers can also be mapped to a ring on the torus~\cite{BouraGGJ20, BouraGG18} and support homomorphic evaluation. If the message space of MKTFHE were enlarged to integers or fixed-point numbers, the time and space overhead of MKTFHE might be reduced.
A common reference string (CRS) is needed in this scheme for all the participants, and the computing server has to know the multi-key parameters in advance. If the CRS could be removed from MKTFHE, the scheme would no longer need a trusted third party to generate it, making the scheme more secure.
We described only the multi-key homomorphic linear regression model in our scheme, but more machine learning schemes (such as logistic regression, SVM, etc.) can be evaluated in the same way, since the basic mathematical operators are provided by our scheme.
\section{Acknowledgement}
This work is supported by National Natural Science Foundation of China (No. 61872109), Shenzhen Basic Research Project, China (No. JCYJ20180507183624136), Shenzhen Basic Research Project, China (No. JCYJ20200109113405927) and National Science and Technology Major Project Carried on by Shenzhen, China (No. CJGJZD20200617103000001).
\section{Introduction}\label{sec:intro}
Rather high values of surface temperatures
measured for two millisecond pulsars (MSPs),
PSR J0437-4715 (\citealt{kargaltsev04,durant12,bogdanov19})
and PSR J2124-3358 (\citealt{rangelov17}),
imply that some reheating mechanism
should operate in these objects.
A number of such mechanisms exist in the literature; the most promising are vortex creep (\citealt*{alpar84}), rotochemical heating in the crust (\citealt*{gkr15}), and rotochemical heating in the core (\citealt*{reisenegger95}).
Here we focus on the latter mechanism.
Calculations by
\cite*{pr10} and \cite*{reisenegger15}
suggest that rotochemical heating can likely explain the observed temperatures of
PSR~J0437$-$4715 and PSR J2124-3358 if
either the proton or neutron superfluid energy gaps
(or both)
are sufficiently large
in the whole stellar core.
However,
the available
analysis of the rotochemical heating
did not account for the possible
presence of the magnetic field in the core.
Moreover, the effect of the neutron-star (NS) matter compression during the preceding accretion at the low-mass X-ray binary (LMXB) stage
has never been studied.
Our aim here is to fill these gaps.
The paper is organized as follows.
Section \ref{st} discusses available observations of MSP surface temperatures.
In Section \ref{approach}
we present general equations that govern
the combined NS thermal and chemical evolution
under compression.
The adopted physics input is described in Section \ref{input}.
In Section \ref{SC} we analyze the effect of the core magnetic field on the chemical heating.
Section \ref{acc} examines the effect of the
accretion during the LMXB stage
on the thermal states of MSPs.
Section \ref{assumptions} contains discussion of the sensitivity of our results
to various approximations made in the paper.
We conclude in Section \ref{conc}.
\section{Observed surface temperatures of millisecond pulsars}
\label{st}
A millisecond pulsar with the best constraint on the surface temperature
is PSR J0437-4715 (hereafter, `J0437'; \citealt{kargaltsev04,durant12}).
The mass of J0437 is
measured to be
$M=(1.3-1.58)M_\odot$
with $3\sigma$ significance (\citealt{reardon16}).
The rotation
rate
and spin-down rate
(accounting for the Shklovskii effect)
equal, respectively, $\nu=173.7\,\rm Hz$ and $\dot{\nu} = -4.14\times 10^{-16}\,\rm Hz \,s^{-1}$.
The effective redshifted surface temperature (as seen by a distant observer) is estimated as $T^\infty_{\rm s}=(1.0-1.5)\times 10^5\,\rm K$ (with about a $15\%$ uncertainty dominated by the uncertainty in the extinction) for an apparent radius $R_\infty=(18-13)\,\rm km$ (\citealt{durant12}).
While \cite{durant12} obtained the surface temperature of J0437
by interpreting its thermal spectrum as blackbody emission,
\cite{ggr19} fitted the same spectrum with atmospheric models and found that the spectral fits favour a hydrogen atmosphere with the
surface redshifted temperature
$T^\infty_{\rm s}=(2.3\pm 0.1)\times 10^5\,\rm K$
and stellar coordinate radius $R=13.6^{+0.9}_{-0.8}\,\rm km$.
The lower/upper uncertainties provide 68\% credible intervals.
Later,
by fitting new observational data from NICER with the hydrogen atmosphere model,
\cite{bogdanov19}
reported the local effective surface temperature
$T_{\rm s}=(1.8^{+0.2}_{-0.6})\times 10^5\,\rm K$
and stellar coordinate radius $R=15.3^{+2.0}_{-1.6}\,\rm km$ (90\% credible intervals).
Another pulsar with the measured surface temperature is PSR J2124-3358.
The latter is estimated to be $(0.5-2.1)\times 10^5\,\rm K$
assuming the NS radius, as seen by a distant observer,
$R_\infty=12\,\rm km$ (\citealt{rangelov17}).
The rotation rate of PSR J2124-3358, $\nu=202.8\,\rm Hz$, and its spin-down energy loss rate, $\dot{E} = 6.8\times 10^{33}\,\rm erg \,s^{-1}$, make this pulsar very similar to J0437.
There is also a set of
upper limits on the surface temperatures of a number of
millisecond pulsars (see, e.g., \citealt{schwenzer17,chugunov17,hhc19,boztepe19}),
which are, however, all rather high and thus not very restrictive for
the rotochemical heating theory.
In view of the above mentioned facts,
in what follows we
only confront
our results
with the observations of the pulsar J0437,
assuming that
its effective redshifted surface temperature is
$T^\infty_{\rm s}=(1.0-1.5)\times 10^5\,\rm K$, in accordance with \cite{durant12,bogdanov19}.
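For orientation, the local temperature reported by \cite{bogdanov19} can be redshifted and compared with this adopted range. A minimal sketch, assuming the Schwarzschild surface redshift and a representative mass $M\approx 1.44M_\odot$ (the midpoint of the quoted range); both assumptions are ours, not values used in the spectral fits:
```python
import numpy as np

# Consistency check: redshift the local effective temperature of
# bogdanov19 and compare with the adopted range (1.0-1.5)e5 K.
# Assumed (not from the fits): M = 1.44 Msun, Schwarzschild exterior metric.
G, c, Msun = 6.674e-8, 2.998e10, 1.989e33   # CGS units
M, R = 1.44 * Msun, 15.3e5                  # radius from bogdanov19 [cm]
T_local = 1.8e5                             # local T_s [K] (bogdanov19)

redshift = np.sqrt(1.0 - 2.0 * G * M / (R * c**2))  # e^{nu_s/2}
print(f"T_s^inf ~ {T_local * redshift:.2e} K")      # ~1.5e5 K
```
The result, $T_{\rm s}^\infty\approx 1.5\times 10^5$~K, is consistent with the upper end of the adopted range.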
\section{General equations and our approach}
\label{approach}
To describe NS evolution under the combined action of compression
caused by accretion and spin-down,
we follow the approach developed by \cite{fr05}.
Initially, this approach was applied to study
the departure from beta-equilibrium of a
spinning-down neutron star.
However, an NS may accrete $\sim 0.1M_\odot$
during its evolution in an LMXB (\citealt{ozel12,antoniadis16}).
Such a substantial amount of the accreted material
compresses NS matter and could lead to strong deviations from
chemical equilibrium.
Thus, here we generalize the framework of \cite{fr05} to account, in addition, for
the compression of matter due to accretion from the low-mass companion in an LMXB.
Following \cite{fr05,rjfk06},
we assume that the redshifted chemical potentials $\mu_{\rm n}^\infty$, $\mu_{\rm \mu}^\infty+\mu_{\rm p}^\infty$, and $\mu_{\rm e}^\infty+\mu_{\rm p}^\infty$ (as seen by a distant observer)
are uniform
because of the
very efficient diffusion (\citealt{reisenegger97, dgs20}),
while the redshifted internal temperature, $T^\infty$,
is uniform due to high thermal conductivity (\citealt{ykgh01}).
Here and below the subscripts $\rm b,n,p,e,\mu$
refer to baryons, neutrons, protons, electrons, and muons, respectively.
The temperature $T^\infty$ is driven by the thermal balance equation (\citealt{yls99,fr05})
\begin{eqnarray}
C\dot{T}^\infty=L^\infty_{\rm acc}-L^\infty_{\gamma}+\int_V \left(Q^\infty_{\rm heat}-Q^\infty_{\nu}\right)dV, \label{Teq}
\end{eqnarray}
where dot stands for the time derivative;
$dV=4{\rm \pi}\,r^2\, {\rm e}^{\lambda/2}dr$ is the proper volume element,
with ${\rm e}^{\lambda}$ being the
radial component of the metric of a non-rotating reference star
(the effect of rotation on the metric is assumed to be small and is neglected in this equation).
$C$ in equation (\ref{Teq}) is the heat capacity of an NS
and
$Q^\infty_{\nu}$ is the neutrino emissivity.
In what follows, we assume that the direct Urca (DUrca) processes
(\citealt{gs41})
are closed (their effect is discussed in Section \ref{assumptions})
and account for the two main contributions to $Q^\infty_{\nu}$.
The first one comes from the non-equilibrium modified Urca (MUrca) processes
in the core:
\begin{eqnarray}
B_i+{\rm n} \rightarrow B_i+{\rm p}+{\rm l} + \overline{\nu}_{\rm l}, \;\;\;\; B_i+{\rm p}+{\rm l} \rightarrow B_i+{\rm n} + \nu_{\rm l}.
\end{eqnarray}
Here $B_i=\rm n,p$ stands for, respectively,
neutron and proton branches of MUrca reactions, $\rm l={\rm e,\mu}$.
The second contribution to $Q^\infty_{\nu}$ comes from the
baryon-baryon bremsstrahlung:
\begin{eqnarray}
B_i+B_k \rightarrow B_i+B_k + \nu+ \overline{\nu},
\label{brems}
\end{eqnarray}
where $B_i,B_k$ stand for $\rm n$ or $\rm p$; $\nu,\overline{\nu}$ are the neutrino and antineutrino
of one of the three possible flavors.
Generally, baryon-baryon bremsstrahlung is much weaker than MUrca processes.
Still, if superfluidity (superconductivity) of neutrons (protons) strongly suppresses
MUrca
reactions,
then bremsstrahlung with protons (neutrons) becomes the main contributor to $Q^\infty_{\nu}$.
We emphasize, however, that the process (\ref{brems})
does not change the chemical composition of the stellar matter.
Note that we do not account for the explicit contribution to $Q^\infty_\nu$
of the Cooper pairing neutrino emission process (e.g., \citealt{ykgh01}),
but discuss its possible role in Section \ref{assumptions}.
Further, $Q^\infty_{\rm heat}$ in equation (\ref{Teq}) represents
the heat release
in the non-equilibrium reactions
\begin{eqnarray}
Q^\infty_{\rm heat}=\sum_{\rm l=e,\mu}{\eta_{\rm l}^\infty \Delta \Gamma_{\rm l}\, {\rm e}^{\nu/2}}, \label{heat}
\end{eqnarray}
where $\eta_{\rm l}^\infty\equiv \mu_{\rm n}^\infty-\mu_{\rm l}^\infty-\mu_{\rm p}^\infty$
is the redshifted imbalance of chemical potentials (uniform throughout the core and vanishing in equilibrium);
$\Delta \Gamma_{\rm l}$ is the net
production
rate
of particle species $\rm l=\rm{e,\mu}$ due to reactions per unit volume (\citealt{ykgh01});
$-{\rm e}^{\nu}$ is the
time component of the metric of a nonrotating reference star.
Further,
\begin{eqnarray}
L^\infty_{\gamma}=4{\rm \pi}\sigma R^2 T_{\rm s}^4 {\rm e}^{\nu_{\rm s}}
\end{eqnarray}
accounts for the photon emission from the surface. Here $R$ is the stellar
coordinate radius, $\nu_{\rm s}=\nu(R)$.
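For orientation, a minimal numerical sketch of this luminosity for a surface temperature typical of J0437; the radius $R\approx 12.6$~km and the redshift ${\rm e}^{\nu_{\rm s}}\approx 0.72$ below are rough assumed values for a $1.4M_\odot$ star, not outputs of our models:
```python
import numpy as np

# Order-of-magnitude photon luminosity for a J0437-like surface temperature.
# R ~ 12.6 km and e^{nu_s} ~ 0.72 are rough assumptions for a 1.4 Msun star.
sigma_SB = 5.6704e-5          # Stefan-Boltzmann constant [erg cm^-2 s^-1 K^-4]
R = 12.6e5                    # coordinate radius [cm] (assumed)
Ts = 1.2e5 / 0.85             # local T_s giving T_s^inf ~ 1.2e5 K (assumed)
exp_nus = 0.72                # e^{nu_s} (assumed)

L_gamma_inf = 4 * np.pi * sigma_SB * R**2 * Ts**4 * exp_nus
print(f"L_gamma^inf ~ {L_gamma_inf:.1e} erg/s")   # ~3e29 erg/s
```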
We relate the effective surface temperature in quiescence,
$T_{\rm s}$,
to the internal temperature, $T$, using the fitting formula (A8)
from \cite{pcy97}, relevant
to a fully accreted envelope.
Finally, $L_{\rm acc}^\infty$ in equation (\ref{Teq}) is the heating rate due to accretion
\begin{eqnarray}
L_{\rm acc}^\infty\approx\frac{\dot{M}}{m_{\rm u}}\,q\,{\rm e}^{\nu_{\rm s}/2},
\end{eqnarray}
where ${m_{\rm u}}$ is the nucleon mass unit, $\dot{M}$ is the average accretion rate
(mass accreted per unit time of a distant observer),
$q$ is the heat released per one accreted baryon
because of non-equilibrium nuclear reactions
in the crust (deep crustal heating).
We approximate the redshift in the crust
by
the surface redshift, $\nu\approx \nu_{\rm s}$.
The values of $\dot{M}$ and $q$ are rather uncertain.
In our analysis we choose $\dot{M}=10^{-10}M_\odot/\rm yr$
($M_\odot$ is the solar mass) and $q=0.5\,\rm MeV/{\rm baryon}$,
in accordance with \cite{gc20,gc21},
who found that $q\; \raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 0.5\,\rm MeV/{\rm baryon}$. Note, however, that our results are not sensitive to the choice of these values (see Section \ref{SF}).
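A minimal sketch of this heating rate for the fiducial values above (the redshift factor $\approx 0.85$ is our assumption):
```python
# Rough estimate of the deep-crustal-heating luminosity L_acc^inf for the
# fiducial values adopted above; the redshift factor 0.85 is assumed.
Msun, yr = 1.989e33, 3.156e7
m_u = 1.661e-24               # nucleon mass unit [g]
MeV = 1.602e-6                # [erg]

Mdot = 1e-10 * Msun / yr      # [g/s]
q = 0.5 * MeV                 # heat per accreted baryon
print(f"L_acc^inf ~ {(Mdot / m_u) * q * 0.85:.1e} erg/s")   # ~2.6e33 erg/s
```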
In addition to the thermal balance equation (\ref{Teq}),
one should also
formulate evolution equations
for the redshifted chemical potential imbalances, $\eta_{\rm l}^\infty$.
To establish these equations, let us
introduce the vectors $\pmb \eta\equiv (\delta \mu_{\rm n},\eta_{\rm e}, \eta_{\rm \mu})$
(where $\delta \mu_{\rm n}$ is the departure of the neutron chemical potential
from its equilibrium value;
we introduce $\delta \mu_{\rm n}$ purely for mathematical reasons,
in order to invert the matrix $\textbf{\textsf{A}}_{\rm (\mu)}$
appearing below)
and $\pmb {\delta n}\equiv (\delta n_{\rm b},\delta n_{\rm e}, \delta n_{\rm \mu})$.
In the inner core, where muons are present, these vectors are related as
\begin{eqnarray}
\pmb \eta= \textbf{\textsf{A}}_{(\mu)\,ji} \pmb {\delta n}, \label{eqA}
\end{eqnarray}
where the elements of the first
row
in the matrix $\textbf{\textsf{A}}_{(\mu)}$ equal
$\textbf{\textsf{A}}_{(\mu)\,{\rm n}i}=\partial \delta \mu_{\rm n}/\partial n_i$,
while the elements of the second and third
rows
equal $\textbf{\textsf{A}}_{(\mu)\,li}=\partial \eta_{\rm l}/\partial n_i$;
indices $l$ and $i$ run over $\rm{e,\mu}$ and $\rm{b,e,\mu}$, respectively
(note that, due to the quasineutrality condition,
$n_{\rm p}=n_{\rm e}+n_{\mu}$,
any thermodynamic quantity can be presented as a function of three number densities in the $npe\mu$ core).
We multiply equation (\ref{eqA}) by the matrix
$\textbf{\textsf{A}}^{-1}_{(\mu)\,ij}$,
which is the inverse
to $\textbf{\textsf{A}}_{(\mu)\,ji}$,
and integrate the result over the volume $V_\mu$,
where muons are present:
\begin{eqnarray}
\left(\int_{V_\mu}{\rm e}^{-\nu/2}\textbf{\textsf{A}}^{-1}_{(\mu)\,ij}\,dV\right) {\rm e}^{\nu/2}\pmb \eta = \int_{V_\mu} \pmb {\delta n} \,dV. \label{eqmu}
\end{eqnarray}
Here we factored out
the uniform vector ${\rm e}^{\nu/2} \pmb \eta$.
Then we follow the analogous procedure in the outer core, where muons are absent, and find:
\begin{eqnarray}
\left(\int_{V_{\rm e}}{\rm e}^{-\nu/2}\textbf{\textsf{A}}^{-1}_{({\rm e})\,ij}\,dV\right) {\rm e}^{\nu/2} \pmb \eta = \int_{V_{\rm e}} \pmb {\delta n}\, dV, \label{eqe1}
\end{eqnarray}
where $\textbf{\textsf{A}}^{-1}_{({\rm e})\,ij}=0$ for the last
row
and the last column;
integration goes over the outer core, where muons are absent.
Then we sum up equations (\ref{eqmu}) and (\ref{eqe1})
to obtain
\begin{eqnarray}
\left(\int_{V_\mu}{\rm e}^{-\nu/2}\textbf{\textsf{A}}^{-1}_{(\mu)\,ij}\,dV+\int_{V_{\rm e}}{\rm e}^{-\nu/2}\textbf{\textsf{A}}^{-1}_{({\rm e})\,ij}\,dV\right) {\rm e}^{\nu/2} \pmb \eta = \nonumber \\
=(\delta N_{\rm b},\delta N_{\rm e},\delta N_{\rm \mu}),
\label{sum}
\end{eqnarray}
where $\delta N_j=N_j-N_j^{\rm eq}$
is the difference between the actual particle number $N_j$ of species $j$ in the core and its equilibrium value,
$N_j^{\rm eq}$.
In equilibrium both $\pmb \eta$ and $\delta N_j$
must
vanish.
Multiplying equation (\ref{sum}) by the matrix $\textbf{\textsf{G}}_{ji}$,
which is the inverse
to the matrix
$\int_{V_\mu}{\rm e}^{-\nu/2}\textbf{\textsf{A}}^{-1}_{(\mu)\,ij}\,dV+\int_{V_{\rm e}}{\rm e}^{-\nu/2}\textbf{\textsf{A}}^{-1}_{({\rm e})\,ij}\,dV$,
we find (see \citealt{fr05}):
\begin{eqnarray}
\eta_{\rm l}^\infty=\textbf{\textsf{G}}_{\rm l b} \delta N_{\rm b}+ \textbf{\textsf{G}}_{\rm l e} \delta N_{\rm e}+\textbf{\textsf{G}}_{\rm l \mu} \delta N_{\rm \mu},
\label{eta1}
\end{eqnarray}
where $\rm{l=e,\mu}$.
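Schematically, equation (\ref{eta1}) can be assembled on a radial grid as in the following sketch; the matrix entries, metric function, and volume weights below are placeholders, not the actual BSk24 microphysics:
```python
import numpy as np

# Schematic assembly of the matrix G of equation (eta1) on a toy radial
# grid.  toy_A stands in for the EOS derivative matrix A_ij; the metric
# function nu(x) = 0.2 x and unit volume weights are placeholders.
n_mu, n_e = 40, 20                        # shells with / without muons

def toy_A(x):                             # placeholder, invertible 3x3
    return np.diag([1.0, 2.0, 3.0]) + 0.1 * x * np.eye(3)

acc = np.zeros((3, 3))
for k in range(n_mu):                     # inner core: full 3x3 inverse
    x = k / n_mu
    acc += np.exp(-0.2 * x / 2) * np.linalg.inv(toy_A(x))
for k in range(n_e):                      # outer core: last row/column zero
    x = 1.0 + k / n_e
    Ainv = np.zeros((3, 3))
    Ainv[:2, :2] = np.linalg.inv(toy_A(x)[:2, :2])
    acc += np.exp(-0.2 * x / 2) * Ainv

G = np.linalg.inv(acc)                    # G_ji of equation (eta1)
dN = np.array([0.0, 1e-4, 5e-5])          # toy (dN_b, dN_e, dN_mu)
print(G @ dN)                             # (delta mu_n, eta_e, eta_mu)^inf
```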
Taking the time derivative of equation (\ref{eta1}), we obtain
\begin{eqnarray}
\dot{\eta}_{\rm l}^\infty=\textbf{\textsf{G}}_{\rm l b} \delta \dot{N}_{\rm b}+\textbf{\textsf{G}}_{\rm l e} \delta \dot{N}_{\rm e}+\textbf{\textsf{G}}_{\rm l \mu} \delta \dot{N}_{\rm \mu}, \label{etadot}
\end{eqnarray}
where $\delta \dot{N}_i=\dot{N}_i- \dot{N}_i^{\rm eq}$.
The equilibrium number of particle species $i$ in the core,
$N_i^{\rm eq}$,
changes due to NS compression in the course of the accretion and/or NS spin-down.
$\dot{N}_i^{\rm eq}$ is responsible
for
building up
the imbalances in the core.
We determine $\dot{N}_i^{\rm eq}$ following \cite{hartle67,ht68,fr05}
with all the necessary modifications caused by accretion.
At the same time,
the actual (non-equilibrium) number of particle species $i$ in the core, $N_i$,
varies because of two processes.
The first process is particle mutual transformations
driven by non-equilibrium MUrca reactions.
These transformations tend to relax the imbalances $\eta_{\rm l}$ generated by the accretion
and/or spin-down of the star.
The second one is the transformation of the bottom layers of the crust into the homogeneous matter of the core
under compression.
Since this transformation is a rather model-dependent process,
in what follows we make
a simplifying assumption
that the number of baryons in the crust does not change with time.
In other words, the accretion of $\delta A$ baryons increases the number
of baryons in the core by the same value, $\delta A$.
Obviously, $\delta N_{\rm b}=0$ in such a formulation of the problem.
Moreover, we follow \cite{gc20,gc21} who suggested that the transformation of the crust matter into the core matter under compression is ensured by a specific instability that disintegrates nuclei in the inner layers of the crust into
neutrons (with no admixture of protons and electrons).
In such a paradigm, we have
\begin{eqnarray}
\dot{N}_{\rm l}=\int_{\rm core} {\rm e}^{\nu/2}\Delta\Gamma_{\rm l}\, dV. \label{Ndot}
\end{eqnarray}
Note that our conclusions are not really sensitive to the above assumptions about the physics near the crust-core boundary.
We checked this by considering two other limiting cases:
(i) the accretion does not change the baryon number in the core;
(ii) the accretion of $\delta A$ baryons increases the number of baryons in the core by $\delta A$,
but the nuclei in the inner layers of the crust
disintegrate to beta-equilibrated homogeneous
npe-matter of the core
(not just neutrons).
The cases (i) and (ii) lead
to equivalent values of $\delta \dot{N}_i$ ($i=\rm b,e,\mu$), and thus to
a similar
evolution of $\eta_{\rm l}$, $T^\infty$, and $T_{\rm s}^\infty$
that only
slightly
differs from that presented below in this paper
(see Figs.\ \ref{Fig:2}, \ref{Fig:3}, \ref{Fig:TscSC}, and \ref{Fig:TmutSC}).
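The resulting system, equations (\ref{Teq}) and (\ref{etadot}), can be advanced in time with a standard stepper. A schematic (forward-Euler) sketch, in which every attribute of the `phys' object is a placeholder for the microphysics described above:
```python
# Schematic explicit time step coupling equations (Teq) and (etadot).
# Every attribute of `phys` (heat capacity, luminosities, reaction rates,
# the matrix G) is a placeholder for the microphysics described above.
def step(T, eta, t, dt, phys):
    # thermal balance, equation (Teq)
    dT = (phys.L_acc(t) - phys.L_gamma(T)
          + phys.Q_heat_int(eta, T) - phys.Q_nu_int(eta, T)) / phys.C(T)
    # chemical evolution, equation (etadot): eta_dot = G (Ndot - Ndot_eq)
    deta = phys.G @ (phys.Ndot(eta, T) - phys.Ndot_eq(t))
    return T + dt * dT, eta + dt * deta
```
In practice the system is stiff, since the reaction rates depend strongly on temperature and on the imbalances, so an implicit or adaptive integrator would normally be preferred.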
\section{Physics input}
\label{input}
To illustrate our results, we employ
the BSk24 EOS of the BSk (Brussels-Skyrme) family
(\citealt{gcp13, pfcpg13,pcp18,pcp19}).
This EOS allows for muons in the inner layers of an NS.
In what follows,
we consider
a $1.4M_\odot$ NS model, for which DUrca processes
are closed in the whole NS core
(for BSk24 EOS DUrca processes operate in stars with $M>1.595M_\odot$,
where they accelerate the relaxation of the imbalances, see Section \ref{assumptions}).
To calculate the neutrino emissivity, $Q_{\nu}$, due to baryon-baryon bremsstrahlung and non-equilibrium MUrca processes,
as well as the net reaction rates, $\Delta \Gamma_{\rm l}$, which appear in the
evolution equations of
Section \ref{approach}, we follow the
review article by \cite{ykgh01},
but account,
in addition, for the enhancement of MUrca reaction rates reported by \cite{sbh18}.
More precisely,
we use the formulas from \cite{ykgh01}
with the equilibrium emissivities for MUrca processes
enhanced by a factor given by equation (14) of \cite{sbh18}.
To calculate the heat capacity appearing in equation (\ref{Teq}), we follow \cite{yls99}.
To simplify the analysis, in the paper
we assume that neutrons are nonsuperfluid,
and discuss the effect of neutron superfluidity in Section \ref{assumptions}, arguing that it does not change the conclusions of the paper qualitatively, since the neutron energy gap is expected to be noticeably lower than the
proton gap.
\begin{figure}
\begin{center}
\leavevmode
\center{\includegraphics[width=0.77\linewidth]{Tc.pdf}}
\end{center}
\caption{
Local proton critical temperature $T_{\rm cp}$ in the NS core as a function of $n_{\rm b}$ for six models of Ding et al. (2016) (see their table I and notations there). Vertical dotted lines indicate
central baryon number densities for NSs with $M=1.4M_\odot,1.6M_\odot$, and $1.8M_\odot$.
}
\label{Fig:Tc}
\end{figure}
We allow
for the superconductivity of protons in the NS core and assume that, if present, proton superconductivity occupies the whole core. Moreover, in Section \ref{SC},
when calculating the thermal states of an MSP with vanishing magnetic field,
we, for simplicity,
assume that the redshifted proton critical temperature is constant throughout the core
and equals $T_{\rm cp}^\infty=2\times 10^9\rm \, K$.
Note that microscopic calculations predict
rather wide profiles of $T_{\rm cp}$, see, e.g.,
Fig.\ \ref{Fig:Tc}, where we demonstrate six critical temperature profiles
from the paper by
\cite{ding16}, as functions of the baryon number density, $n_{\rm b}$.%
\footnote{Actually, \cite{ding16} presented $T_{\rm cp}$ as a function of the proton Fermi momentum, $k_{\rm Fp}$.
To obtain $T_{\rm cp}$ as a function of $n_{\rm b}$ we make use of the BSk24 EOS.}
Fig.\ \ref{Fig:Tc} shows {\it local} (unredshifted) critical temperatures,
which are independent of the stellar model.
One can see that proton superconductivity extends over the whole core for NSs of moderate masses
and $T_{\rm cp}$ is, generally, larger than $2\times 10^9\rm \, K$.
Thus, $T_{\rm cp}^\infty=2\times 10^9\rm \, K$ can be considered as a lower limit
on
the real $T_{\rm cp}^\infty$ for a chosen model of an NS with $M= 1.4 M_\odot$.
Below we comment on how our results are modified if the real minimum of $T_{\rm cp}^\infty$ differs from $2\times 10^9\rm \, K$.
Proton superconductivity
with $T_{\rm cp}\; \raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 2\times 10^9$~K
almost completely
suppresses the proton heat capacity and proton-baryon bremsstrahlung processes (\citealt{yls99}).
Thus, we assume that, if protons are superconducting,
they do not contribute to the heat capacity $C$;
the contribution to $C$ from other particle species
remains unaffected, while proton-baryon bremsstrahlung is completely
suppressed.
In turn, the reduction factors describing suppression of the non-equilibrium MUrca reactions by proton superconductivity
are, generally, rather cumbersome and have been analyzed in detail by \cite{vh05}.
Fortunately, in the limit $T\ll T_{\rm cp}$ they can be substantially simplified.
First, in this limit the nonequilibrium MUrca reactions only proceed
if $|\eta_{\rm l}|>\Delta_{\rm p}$ in the case of neutron branch of MUrca and
if $|\eta_{\rm l}|>3\Delta_{\rm p}$ in the case of proton branch,
where $\Delta_{\rm p}$ is the proton energy gap (\citealt{reisenegger97,pr10}).
\cite{pr10} proposed simple analytic expressions
for the reduction factors of MUrca non-equilibrium reactions
(see their formulas 20 and 21, and note that the corresponding expression B.15 in their appendix, which is equivalent to equations 20 and 21, contains a typo).
These expressions are valid for $\Delta_{\rm p}\; \raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 30k_{\rm B}T$
($k_{\rm B}$ is the Boltzmann constant).
The above inequality is satisfied in the parameter range of our interest
(e.g., for $T=10^8\,\rm K$ and $T_{\rm cp}=2\times 10^9\,\rm K$ one has $\Delta_{\rm p}\approx 35k_{\rm B}T$).
We use these analytic expressions in our calculations.
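The quoted estimate $\Delta_{\rm p}\approx 35k_{\rm B}T$ can be recovered from the zero-temperature BCS relation $\Delta(0)\approx 1.764\,k_{\rm B}T_{\rm c}$ for singlet pairing, which is our assumption for the gap model in this sketch:
```python
# Recovering the quoted Delta_p ~ 35 k_B T from the zero-temperature BCS
# relation Delta(0) ~ 1.764 k_B T_c (assumed singlet-pairing gap model).
T, Tcp = 1e8, 2e9                     # [K]
print(f"Delta_p/(k_B T) ~ {1.764 * Tcp / T:.0f}")   # ~35
```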
When considering superconducting NS cores we,
for the first time in this context, account for the magnetic field existing there.
Depending on the density and microphysics model,
protons in the NS core form a superconductor
of either type I or type II (\citealt{gas11,hs18}).%
\footnote{Other, more exotic phases are also possible, see an interesting recent work (\citealt*{wgn20}) in this direction.}
If protons form a type II superconductor,
the magnetic field is confined to the flux tubes (Abrikosov vortices) in the core;
in the case of type I superconductor the field is contained in the
macroscopic (but small) domains
surrounded by the magnetic-field-free superconducting matter (\citealt{degennes67}).
Protons are normal (nonsuperconducting) both inside the flux tubes
(in the case of type-II superconductor) and inside the domains (in case of type-I superconductor)
and we shall treat particle mutual transformations as fully unsuppressed there
(see, e.g., \citealt{sww98},
who used this approximation to describe MUrca reactions with localized proton excitations
in the cores of flux tubes).
We expect that this approximation gives correct order-of-magnitude estimates
for the nonequilibrium MUrca reaction rates
because the wavelengths of neutron, proton, and lepton quasiparticles in NS interiors
are generally smaller than the
typical sizes of the flux-tube cores and normal domains (e.g., \citealt{gusakov19,dg17}).
Summarizing, we model the NS matter inside the flux tubes/normal domains
as a uniform non-superconducting liquid.
Note that, for simplicity, we neglect the enhancement of the reaction rates
by the magnetic field
studied
by \cite{by99},
because this enhancement is quite moderate for the magnetic field strengths
reached inside the flux tubes/normal domains.
Clearly, the volume fraction occupied by
the
nonsuperconducting matter depends on the magnetic field strength in the core, $B$, and can be estimated as $\sim B/H_{\rm crit}$, where $H_{\rm crit}$ is the magnetic field value inside the flux tube/normal domain.
In the case of type-II superconductor, $H_{\rm crit}$ corresponds
to the upper critical magnetic field, $H_{\rm c2}$,
while in type-I superconductor $H_{\rm crit}=H_{\rm c}$
(\citealt{degennes67}).
Both critical fields
$H_{\rm c2}$ and $H_{\rm c}$ vary throughout the core
by a factor of few
(\citealt{gas11,lp80}), see Fig.\ \ref{Fig:Hcrit},
and we adopt their typical value, $\sim 2\times 10^{15}\,\rm G$, as $H_{\rm crit}$ in our calculations.
In what follows,
both type-I and type-II superconductors are treated as described above,
i.e., not discriminating
between these two phases.
Since the value of the magnetic field in the core is highly uncertain
(see, e.g., \citealt*{crt19} and references therein)
and can strongly deviate from the value of the NS dipole magnetic field, $B_{\rm dip}$,
we consider $B$ as a free parameter in the calculations below.
\begin{figure}
\begin{center}
\leavevmode
\includegraphics[width=0.77\linewidth]{Hcrit.pdf}
\end{center}
\caption{
$H_{\rm c}$ and $H_{\rm c2}$ in the NS core as functions of $n_{\rm b}$ for $T_{\rm cp}=2\times 10^9\,\rm K$. Vertical dots show $n_{\rm b}$ corresponding to the interface between type I and type II superconductors. Dashes show the value of $H_{\rm crit}$ adopted in the paper.
}
\label{Fig:Hcrit}
\end{figure}
\section{Thermal states of superconducting magnetized MSPs}
\label{SC}
Here our aim is to analyze the role of the core
magnetic field
in the rotochemical heating of superconducting MSPs.
But first, let us assume for a moment
that the magnetic field is absent,
protons are strongly superconducting in the whole NS core
($T^\infty\ll T_{\rm cp}^\infty$), $T_{\rm cp}$-profile is flat ($ T_{\rm cp}^\infty=\rm const$), while neutrons are normal.
Since the chemical imbalances $\eta_{\rm l}$
relax through the MUrca processes only,
$\eta_{\rm l}$
will grow until $|\eta_{\rm l}^\infty|>\Delta_{\rm p}^\infty$,
at which point the reactions of the MUrca neutron branch open.
After this condition is fulfilled for one of the imbalances,
the reactions will prevent the subsequent growth of the corresponding $\eta_{\rm l}^\infty$
by compensating the effect of compression (\citealt{reisenegger97}).
As a result, $|\eta_{\rm l}^\infty|$ freezes
at some equilibrium value slightly exceeding $\Delta_{\rm p}^\infty$
(see the dashed line in the last panel of Fig.\ \ref{Fig:SpinDownB} at ${\rm log}_{10}t>9.5$).
Even a small variation of $\eta_l^\infty$ results in a strong variation
of the reaction rates tending to bring $\eta_{\rm l}^\infty$ back to its equilibrium value.
In what follows, we shall call such quasiequilibrium the `steady state'.
Higher values of $\Delta_{\rm p}^\infty$ correspond to higher values
of $|\eta_{\rm l}^\infty|$ in the steady state
and higher energy release in the non-equilibrium reactions.
Indeed, we can write:
$\int_V Q_{\rm heat}^\infty dV=\sum_l \eta_{\rm l}^\infty \int_V \Delta \Gamma_{\rm l} {\rm e}^{\nu/2} dV=\sum_{\rm l} \eta_{\rm l}^\infty \dot{N}_{\rm l}$ (see equations \ref{heat} and \ref{Ndot}).
To calculate $\dot{N}_{\rm l}$ let us consider, for example,
a situation in which $\eta_{\rm \mu}^\infty$
has reached the steady state,
while $|\eta_{\rm e}^\infty|<\Delta_{\rm p}^\infty$
(exactly this situation is realized in the fourth panel
of Fig.\ \ref{Fig:SpinDownB}).
Then $\dot{N}_e=0$, since reactions with electrons are locked
(we do not account for the lepton decay here, and discuss its role in Section \ref{assumptions}),
while the quasiequilibrium condition
$\dot{\eta}_\mu^\infty=0$ prescribes
$\dot{N}_\mu=\dot{N}_\mu^{\rm eq}+\textbf{\textsf{G}}_{\rm \mu e}/\textbf{\textsf{G}}_{\rm \mu \mu} \dot{N}_{\rm e}^{\rm eq}$
(see equation \ref{etadot}).
Thus, in the steady state $\dot{N}_{\rm l}$
are driven by the compression rate and do not depend on the imbalances,
while the heating rate appears to be proportional to the value
of the imbalance in the steady state,
or (approximately)
to $\Delta_{\rm p}^\infty$.
This property was used by \cite{reisenegger97,pr10,reisenegger15}
to explain the high thermal luminosity of J0437.
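A back-of-the-envelope sketch of this steady-state balance: neglecting neutrino losses and setting $\eta\approx\Delta_{\rm p}^\infty$ (the BCS value for $T_{\rm cp}^\infty=2\times 10^9$~K) are our simplifications here, as is the adopted J0437-like thermal output:
```python
# Back-of-the-envelope steady-state balance L_gamma ~ sum_l eta_l Ndot_l.
# Neutrino losses are neglected and eta ~ Delta_p (BCS value for
# T_cp = 2e9 K) is assumed; L_gamma is a rough J0437-like thermal output.
MeV = 1.602e-6                          # [erg]
kB_eV = 8.617e-5                        # [eV/K]
eta = 1.764 * kB_eV * 2e9 * 1e-6 * MeV  # ~0.3 MeV, in erg
L_gamma = 3e29                          # [erg/s] (assumed)
print(f"Ndot ~ {L_gamma / eta:.0e} reactions/s")   # ~6e35 s^-1
```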
\begin{figure*}
\begin{center}
\leavevmode
\center{\includegraphics[width=0.9\linewidth]{TmutSCSD.pdf}}
\end{center}
\caption{
Internal stellar temperature $T^\infty$ (solid lines)
and imbalances $\eta_{\rm e}^\infty$ and $\eta_{\rm \mu}^\infty$ (dotted and dashed lines)
versus time. Black lines account for the enhancement of MUrca reactions
as discussed in
Section \ref{input},
red lines (in the second panel from the left) disregard this effect. Vertical dots mark
the time when $\nu\approx 173\,\rm Hz$ (the spin frequency of J0437).
Horizontal dot-dashed lines show the redshifted proton superfluid gap, corresponding to $T_{\rm cp}^\infty=2\times 10^9\,\rm K$. At $t=0$ the star is cold and in chemical equilibrium; $\nu(t=0)=300\,\rm Hz$; $B_{\rm dip}=1.6\times 10^8\,\rm G$.
We do not show the plot at $t<10^8\,\rm yrs$;
it corresponds to the gradual growth of the imbalances.
}
\label{Fig:SpinDownB}
\end{figure*}
\begin{figure*}
\begin{center}
\leavevmode
\center{\includegraphics[width=0.9\linewidth]{TstSCSD.pdf}}
\end{center}
\caption{
Surface stellar temperature $T_{\rm s}^\infty$ versus time for the same parameters as in Fig.\ \ref{Fig:SpinDownB}. Notations are the same.
Error bars show $T_{\rm s}^\infty$ for the pulsar J0437.
}
\label{Fig:Ts}
\end{figure*}
Such an analysis, however, is valid only in the absence of the magnetic field.
As we already discussed above, the magnetic field makes
part of the stellar core
with the volume fraction $\sim B/H_{\rm crit}$
nonsuperconducting and allows for unsuppressed nonequilibrium reactions there.
These reactions may efficiently relax the imbalances
and prevent $|\eta_{\rm l}^\infty|$ from growing to $\Delta_{\rm p}^\infty$.
To analyze the role of the magnetic field,
we model the rotochemical heating of an MSP
for different values of the core magnetic field, $B$.
We assume that
MUrca
reactions
are not suppressed inside the flux tubes (or normal domains),
and are \underline{completely forbidden outside} (\citealt{sww98}).
This is a good approximation, even for small $B$, as long as
$|\eta_{\rm l}^\infty|<\Delta_{\rm p}^\infty$ and $T^\infty \ll \Delta_{\rm p}^\infty$.
For example, the reduction factor for
the neutron branch of MUrca process
is $\sim 2\times 10^{-11}$ if we take
$T^\infty=5\times 10^7\,\rm K$,
$T_{\rm cp}^\infty=2\times 10^9\,\rm K$, and
$\eta_{\rm l}^\infty=0.8 \Delta_{\rm p}^\infty$.
This value is much smaller than
the nonsuperconducting volume fraction,
$B/H_{\rm crit}$, for the values of
$B$
considered in the numerical examples below.
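The comparison can be made explicit; the following sketch evaluates $B/H_{\rm crit}$ for the field values used below:
```python
# Nonsuperconducting volume fraction ~B/H_crit versus the quoted
# superconducting MUrca reduction factor ~2e-11.
H_crit = 2e15                          # [G], value adopted above
for B in (1e6, 1e9, 1e12):
    print(f"B = {B:.0e} G:  B/H_crit ~ {B / H_crit:.0e}")
# 5e-10, 5e-7, 5e-4 -- all well above ~2e-11, so the flux tubes / normal
# domains dominate the net reaction rates.
```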
In this Section,
we ignore the prehistory of the pulsar,
namely, its compression at the LMXB stage.
We assume that, initially
(at the moment of time $t=0$),
the NS
was in chemical equilibrium,
and rotated at a spin frequency
$\nu(t=0)=\Omega(t=0)/(2{\rm \pi})=300\,\rm Hz$;
the NS temperature $T^\infty(t=0)$
is assumed to be low.
We also assume that the pulsar spins down due to magneto-dipole losses
with the spin-down rate $\dot{\Omega}$ equal to
\begin{eqnarray}
\dot{\Omega}=-\frac{2B_{\rm dip}^2 R^6\Omega^3}{3c^3I}, \label{sd}
\end{eqnarray}
where $I$ is the NS moment of inertia and $B_{\rm dip}=1.6\times 10^8\,\rm G$.%
\footnote{Such a value of $B_{\rm dip}$
corresponds to the actual (i.e., accounting for the Shklovskii effect)
spin-down rate of J0437.}
While equation (\ref{sd}) describes the energy losses of a rotating dipole in vacuum,
which is a rather crude emission model,
its accuracy is quite sufficient for our analysis.
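Equation (\ref{sd}) integrates analytically to $\Omega(t)=\Omega_0/\sqrt{1+2K\Omega_0^2 t}$ with $K=2B_{\rm dip}^2R^6/(3c^3I)$. A minimal sketch, with $R=12.6$~km and $I=1.5\times 10^{45}\rm\,g\,cm^2$ as assumed fiducial values:
```python
import numpy as np

# Magneto-dipole spin-down, equation (sd): dOmega/dt = -K Omega^3,
# K = 2 B_dip^2 R^6 / (3 c^3 I), with the analytic solution
# Omega(t) = Omega_0 / sqrt(1 + 2 K Omega_0^2 t).
# R = 12.6 km and I = 1.5e45 g cm^2 are assumed fiducial values.
c, yr = 2.998e10, 3.156e7
B_dip, R, I = 1.6e8, 12.6e5, 1.5e45
K = 2 * B_dip**2 * R**6 / (3 * c**3 * I)

Omega0 = 2 * np.pi * 300.0
tau = 1.0 / (2 * K * Omega0**2) / yr
print(f"spin-down timescale ~ {tau:.1e} yr")            # ~2.6e9 yr

Om = 2 * np.pi * 173.7                                  # J0437 spin frequency
print(f"nu_dot ~ {-K * Om**3 / (2 * np.pi):.1e} Hz/s")  # ~ -3.5e-16 Hz/s
```
The implied $\dot{\nu}$ at $\nu=173.7$~Hz is close to the measured spin-down rate of J0437, and the spin-down timescale is of order Gyr, consistent with the evolution shown below.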
Figure \ref{Fig:SpinDownB} shows the evolution of $T^\infty$, $\eta_{\rm e}^\infty$,
and $\eta_{\rm \mu}^\infty$ for four values
of the core magnetic field $B$ ($B=10^{12},10^9,10^6,0\,\rm G$). In case of $B=0$ we assume $T_{\rm cp}^\infty=2\times 10^9\,\rm K$.
One can see that, in the beginning, in all the panels
the chemical imbalances grow at the same rate.
This rate is set
by the compression rate of the stellar matter;
non-equilibrium reactions, which depend on the value of the imbalances,
are still too
slow
to contribute to the evolution of the imbalances.
Growth of the imbalances $\eta_{\rm l}$ leads to the growth of the non-equilibrium reaction rates.
As a result, at some moment non-equilibrium reactions
come into play and start to compensate the effect of compression.
The imbalances reach the steady state when this compensation becomes exact.
After that, the imbalances stay in the steady state
(see the arrow in the left panel).
In the absence of the evolution of the compression rate (${\Omega\dot{\Omega}}=\rm const$),
they would stay exactly constant.
However, the compression rate decreases
with time ($\Omega\dot{\Omega}$ decreases, see equation \ref{sd})
and the imbalances trace this decrease.
Higher values of $B$ result
in the lower steady-state imbalances and lower $T^\infty$.
This is not surprising,
since the higher $B$,
the larger is the volume fraction where nonequilibrium reactions are unsuppressed
and can effectively relax $\eta_{\rm l}^\infty$.
Note that the above consideration with non-zero $B$
is valid only
as long as
$\eta_{\rm l}^\infty<\Delta_{\rm p}^\infty$.
Once $\eta_{\rm l}^\infty$ reaches $\Delta_{\rm p}^\infty$,
the reactions start to operate in the whole core
and this
stops further growth of $\eta_{\rm l}^\infty$.
For reference, we indicate $\Delta_{\rm p}^\infty$
for $T_{\rm cp}^\infty=2\times 10^9\,\rm K$ by dot-dashed line in Fig.\ \ref{Fig:SpinDownB}.
One sees that imbalances do not reach
$\Delta_{\rm p}^\infty$ for $B\; \raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 10^6\,\rm G$
and assumed spin-down parameters.
In other words, the magnetic field $B\; \raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 10^6\,\rm G$
reduces the efficiency of rotochemical mechanism.
Figure \ref{Fig:Ts} illustrates this point,
explicitly showing the corresponding redshifted
effective
surface temperature, $T_{\rm s}^\infty$,
for the same spin-down parameters and the same set of $B$ values as in Fig.\ \ref{Fig:SpinDownB}.
In Fig.\ \ref{Fig:Ts}
we also show the error bars for the measured
redshifted effective surface temperature of J0437.
We choose the horizontal coordinate corresponding
to the actual spin rate of
this MSP.
The time coordinate is also in agreement with the estimated age of J0437,
see \cite{durant12,gr10}.
One can see that at $B\; \raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 10^{12}\,\rm G$ the rotochemical mechanism becomes unable to explain the observed temperature of J0437 for the adopted stellar model.
In the second panels of Figs.\ \ref{Fig:SpinDownB} and \ref{Fig:Ts}
we illustrate the effect of the enhancement of MUrca processes
discussed in \cite{sbh18}.
For comparison, by red lines we show the results
obtained neglecting the enhancement of MUrca reactions.
One can see that, while the quantitative effect is obvious
(red dashed and dotted lines
pass higher in the steady state,
because the reactions are not so effective),
the picture does not change qualitatively,
and thus our conclusions are not really sensitive
to the inclusion of
the enhancement factors.
We should emphasize that,
if proton superconductivity does not extend over the whole NS core,
then unsuppressed reactions will proceed in the normal part of the core,
which makes the
role of the magnetic field in establishing the equilibrium
negligible:
even at $B=0$ the imbalances will relax efficiently.
\section{Effect of preceding accretion on the thermal states of MSPs}
\label{acc}
Let us now analyze how the prehistory of an MSP,
namely the accretion at the LMXB stage, affects its subsequent thermal
evolution.
We assume that the accretion proceeds with constant
average rate $\dot{M}$ during some period of time $t_{\rm acc}$
(in our numerical examples $\dot{M}=10^{-10}M_\odot/\rm yr$,
$t_{\rm acc}=10^9\,\rm yrs$)
and then smoothly
(on a timescale $\sim 3\times 10^7\,\rm yrs$, \citealt{tauris12})
switches off.
We also assume that during almost the whole LMXB phase (except for the beginning and final stages, see below for details),
when accretion is active, an NS has an equilibrium frequency
$\nu_{\rm eq}$,
defined by the condition that the
accretion spin-up is balanced by the magneto-dipole spin-down.
In our numerical calculations we choose
$\nu_{\rm eq}=300\, \rm Hz$.
When the accretion
smoothly switches off, the star starts to spin down.
The spin-down rate increases
gradually
on the time-scale $\sim 3\times 10^7\,\rm yrs$ from zero to the value given by
equation (\ref{sd}).
In the beginning of the accretion phase we take into account
an initial spin-up of the accreting NS to the equilibrium frequency $\nu_{\rm eq}$.
We choose the duration of this spin-up stage
to be $\sim 2\times 10^8\,\rm yrs$,
in accordance with the observed $\dot{\nu}$ for some NSs in LMXBs during outbursts
(see, e.g., \citealt{papitto08,patruno10,papitto11})
and in accordance with the accretion torque modeling
(\citealt{pr72,gl79}).
Figure \ref{Fig:1} shows the behavior of $\nu$ and $\dot{\Omega}$ with time.
Then, using equations of Section \ref{approach},
we model the joint thermal and chemical evolution of an NS.
\begin{figure}
\begin{center}
\leavevmode
\includegraphics[width=0.9\linewidth]{AOmega.pdf}
\end{center}
\caption{
$\nu$ (left panel) and $\dot\Omega$ (right panel) versus time.
}
\label{Fig:1}
\end{figure}
\subsection{Nonsuperconducting NS matter}
\label{nonSF}
\begin{figure}
\begin{center}
\leavevmode
\includegraphics[width=0.77\linewidth]{Tmut.pdf}
\end{center}
\caption{
$T^\infty$ (solid line) and imbalances $\eta_{\rm e}^\infty$ (dotted line) and $\eta_\mu^\infty$ (dashed line) versus time.
To highlight the accretion effect we neglect the
effect of $\Omega$-variation on $T^\infty$, $\eta_{\rm e}^\infty$, and $\eta_{\mu}^\infty$.
We do not show the plot at ${\rm log}_{10} t<5.3$, it corresponds to the gradual growth of the imbalances with time.
}
\label{Fig:2}
\end{figure}
First, we consider a normal (nonsuperconducting) NS.
Figure \ref{Fig:2} shows temporal evolution of $T^\infty$, $\eta_{\rm e}^\infty$, and $\eta_\mu^\infty$.
In this figure we neglect $\Omega$
evolution to highlight the accretion effect
(see Fig.\ \ref{Fig:3} for the combined effect of accretion and
$\Omega$-variation).
The compression of NS material in the course of accretion drives an NS
out of chemical equilibrium.
As a result, $\eta_{\rm e}^\infty$ and $\eta_\mu^\infty$ grow
until they reach quasi-equilibrium values
at which particle transformations become fast enough
to compensate for the subsequent growth of $\eta_{\rm e}^\infty$ and $\eta_\mu^\infty$.
At the LMXB stage the value of $T^\infty$
corresponds to the equilibrium one,
defined by the balance between the NS cooling
and the deep crustal heating
(as we already mentioned in Section \ref{approach},
we assume $q=0.5\,\rm MeV$ per accreted nucleon, see \citealt{gc20,gc21}).
NS heating due to non-equilibrium particle transformations
proceeding to restore the chemical equilibrium in the core also takes place,
however, the energy released in these reactions (chemical heating)
is much smaller than the deep crustal heating.
When accretion ceases at $t\sim 10^9\,\rm yrs$,
the NS internal temperature $T^\infty$
drops on a typical NS cooling timescale.
However, at some value of $T^\infty$,
NS cooling becomes approximately balanced by the chemical heating in the core.
Then the temperature fall
slows down and subsequently (at $t \gtrsim 10^{9.15}$~yrs)
$T^\infty$ evolves
on the timescale of chemical evolution.
Generally, after the accretion ceases,
the imbalances are driven by two trends.
First, $\eta_{\rm e}^\infty$ and $\eta_\mu^\infty$
tend to relax to zero by means of particle transformations.
Second, the NS matter continues to be compressed
due to NS spin-down that maintains the non-zero imbalances.
Note, however, that Fig.\ \ref{Fig:2} neglects the spin-down
($\Omega$ is artificially kept constant) so that in this figure
the imbalances at the MSP stage are driven by the relaxation through non-equilibrium reactions only.
The reader may notice a sharp increase of $\eta_{\rm e}^\infty$ and $\eta_\mu^\infty$
at $t\sim 10^9\,\rm yrs$, when accretion ceases
(see also the inset in Fig.\ \ref{Fig:2}).
It is related to the fact that when the accretion rate starts to decrease
(we recall that this process lasts $\sim 3\times 10^7$~yrs),
the NS temperature rapidly falls.
Driven by their strong temperature dependence, the reaction rates decrease as well,
making the relaxation of the imbalances inefficient.
As a result, $\eta_{\rm e}^\infty$ and $\eta_\mu^\infty$ get an opportunity to grow.
However, at some moment the growth of the imbalances stops.
The main reason is that by then
the accretion rate becomes too low to provide sufficient compression of NS matter.
\begin{figure}
\begin{center}
\leavevmode
\includegraphics[width=0.77\linewidth]{Tst.pdf}
\end{center}
\caption{
$T_{\rm s}^\infty$ normalized to $10^5$~K
versus time.
Dotted line is plotted assuming that there is no spin-down ($\Omega$ is kept constant),
solid line accounts for both compression due to accretion (at the LMXB stage) and NS spin-down.
The evolution exclusively due to spin-down follows solid line very closely.
Error bar shows $T_{\rm s}^\infty$ for the pulsar J0437.
}
\label{Fig:3}
\end{figure}
Fig.\ \ref{Fig:3} shows the dependence $T_{\rm s}^\infty(t)$.
The surface temperature traces $T^\infty$ behavior.
Solid line accounts for the compression
of NS material in the course of both accretion and spin-down ($\Omega$ is now allowed to vary);
dotted line shows the effect of accretion only ($\dot{\Omega}=0$).
If we account for NS spin-down only, neglecting
compression of the star
at the LMXB stage
($t\lesssim 10^9\,\rm yrs$)
by the accreted material,
we obtain the line that practically coincides with the solid line.
Figure \ref{Fig:3} implies that in normal NSs compression due to accretion alone (dots) does not generate imbalances sufficient to maintain, after the accretion ceases, relatively high NS surface temperatures
comparable to those observed in some MSPs.
The reason is that
the particle transformation reactions are fast enough to
keep the NS core close
to equilibrium ($\eta_{\rm e}^\infty\approx \eta_{\rm \mu}^\infty\approx 0$)
despite the strong compression caused by accretion.
Varying $q$ and $\dot{M}$, as well as accounting for the
temporal dependence of $\dot{M}$,
does not change the situation
qualitatively.
\subsection{Superfluid/superconducting NS matter}
\label{SF}
\begin{figure*}
\begin{center}
\leavevmode
\includegraphics[width=0.9\linewidth]{TstSC.pdf}
\end{center}
\caption{
$T_{\rm s}^\infty$ normalized to $10^5$~K
versus time.
Protons are superconducting in the whole NS core. Dotted lines are plotted assuming
that there is no spin-down ($\dot{\Omega}=0$),
dashed lines -- no compression due to accretion at the LMXB stage,
solid lines account for both compression due to accretion and NS spin-down.
Three panels correspond to three values of the magnetic field (from left to right):
$B=10^{12}$, $10^{9}$, and $10^{6}$~G.
Error bars show $T_{\rm s}^\infty$ for the pulsar J0437.
}
\label{Fig:TscSC}
\end{figure*}
\begin{figure*}
\begin{center}
\leavevmode
\includegraphics[width=0.9\linewidth]{TmutSC.pdf}
\end{center}
\caption{
The same as in Fig.\ \ref{Fig:2},
but protons are superconducting.
It is assumed that $\dot{\Omega}=0$.
Three panels correspond to three values of the magnetic field in the core.
}
\label{Fig:TmutSC}
\end{figure*}
Baryon superfluidity/superconductivity suppresses the nonequilibrium reaction rates.
Thus,
in superfluid/superconducting NSs,
we can expect that higher values of the imbalances will be reached
at the LMXB stage,
which can affect the NS surface temperature
at the subsequent MSP stage.
To illustrate this point,
let us assume that protons are strongly superconducting,
while neutrons are normal.
We allow for non-zero magnetic field in the core (characterized by the parameter $B$)
and model the NS evolution assuming that the proton energy gap
is high enough, so that the inequality $\eta_{\rm l}^\infty<\Delta_{\rm p}^\infty$ is always satisfied, and MUrca reactions can be treated as fully suppressed outside the flux tubes/normal domains.
Figure \ref{Fig:TscSC} shows $T_{\rm s}^\infty$ for
the three values of $B$.
Lower magnetic fields result in higher $T_{\rm s}^\infty$
after
cessation of accretion
(at $t>10^9\,\rm yrs$).
This is not surprising,
since the imbalances
generated in the star during its evolution
are higher for lower $B$,
see Fig.\ \ref{Fig:TmutSC}.
\footnote{
\label{foot4}
Note that, in the case of $B=10^6\,\rm G$
(third panel in Figs.\ \ref{Fig:TscSC} and \ref{Fig:TmutSC})
the condition $\eta_{\rm l}^\infty<\Delta_{\rm p}^\infty$ assumed in our calculations
is only
fulfilled for $T_{\rm cp}$-profiles with $T_{\rm cp}^\infty\; \raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 5\times 10^9\,\rm K$,
while in the cases $B=10^9\,\rm G$ and $B=10^{12}\,\rm G$ the condition $T_{\rm cp}^\infty\; \raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 2\times 10^9\,\rm K$ is sufficient.}
At the same time, for
sufficiently large magnetic fields such as $B=10^{12}\,\rm G$
the generated imbalances are too low to explain the surface temperature of J0437.
Another interesting feature is that, even for MSPs with a vanishing spin-down rate,
$T_{\rm s}^\infty$ can be relatively high
during some period of time after accretion switches off
and the NS becomes an MSP.
For example, for $B=10^9\,\rm G$ the surface temperature remains
$T_{\rm s}^\infty\; \raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 10^5\,\rm K$ during approximately $10^9\,\rm yrs$
after accretion ceases, even at a vanishing
spin-down rate (see dotted line in the middle panel of Fig.\ \ref{Fig:TscSC}).
In turn, for $B=10^6\,\rm G$ this period lasts approximately $4.5\times 10^9\,\rm yrs$.
During this time $T_{\rm s}^\infty$ is
supported by relaxation of the imbalances generated during compression of matter by accretion
at the LMXB stage.
In the case of $B=10^6\,\rm G$ this heating could support the observed temperature of J0437 even in the absence of the pulsar spin-down.
Thus, our results imply that
the temperature of an MSP may be directly related to
the evolution of the
star at the LMXB stage.
Millisecond pulsars with low $\Omega \dot{\Omega}$ may nevertheless have rather high surface temperatures ($T_{\rm s}^\infty\; \raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 10^5\,\rm K$).
As in the case of normal NSs, we
checked the sensitivity of our results to the choice of $\dot{M}$ and $q$.
We found that, while
variation of these parameters
has a certain effect on the stellar temperature
at the LMXB stage,
the built-up imbalances,
and hence the NS temperature at the MSP stage,
are quite insensitive to the choice of $\dot{M}$ and $q$.
Note that Fig.\ \ref{Fig:TmutSC}
implies that when we consider superconducting NSs,
$\eta_{\rm l}^\infty\gg T^\infty$ both at the MSP and LMXB stages
(i.e., we are in the so-called `suprathermal' regime).
In this regime, the relaxation rate of the imbalances,
determined by $\Delta \Gamma_{\rm l}$,
does not depend on temperature,
but strongly depends on the imbalance values,
$\Delta \Gamma_{\rm l}\propto \eta_{\rm l}^7$ (\citealt{ykgh01}).
As a result,
although $T^\infty$ is sensitive to $\dot{M}$ and $q$ values,
the imbalances and NS temperature at the MSP stage are not affected by these parameters.
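The longevity of this chemical energy reservoir follows directly from the $\eta^7$ scaling: with $\dot{\eta}=-k\eta^7$ the imbalance decays only as $t^{-1/6}$ at late times. A toy sketch in arbitrary units (the values of $\eta_0$ and $k$ are not fitted to our models):
```python
import numpy as np

# Toy illustration: with dGamma ~ eta^7, the imbalance obeys
# deta/dt = -k eta^7, whose solution eta(t) = eta0 (1 + 6 k eta0^6 t)^(-1/6)
# decays only as t^(-1/6) at late times (arbitrary units, toy values).
eta0, k = 1.0, 1.0
for t in np.logspace(0, 6, 4):
    eta = eta0 * (1 + 6 * k * eta0**6 * t) ** (-1.0 / 6.0)
    print(f"t = {t:8.1e}   eta = {eta:.3f}")
# the chemical energy reservoir built up by accretion is thus exhausted
# very slowly, sustaining the heating for Gyr-long timescales.
```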
\section{Discussion}
\label{assumptions}
In this paper we worked under a number of simplifying assumptions.
Here we discuss how
they may affect our conclusions.
First, we considered the NS model with forbidden DUrca processes.
However, for many
admissible EOSs these processes may operate in the
inner cores of massive NSs.
If open, DUrca processes dramatically accelerate particle mutual transformations
and hence relaxation of the imbalances.
As long as
the magnetic field in NS cores
does not vanish,
the unsuppressed DUrca reactions proceed in flux tubes/normal domains.
As a result, the steady states reached in NSs with open DUrca correspond
to much lower values of the imbalances
compared to NSs with closed DUrca,
so that the chemical heating in
the former stars is not significant.
Note also that the central regions in high-mass NSs
may be non-superconducting
(see, e.g., Fig.\ \ref{Fig:Tc}).
In these regions particle mutual transformations
are not suppressed by proton superconductivity, which
also leads to a more rapid relaxation of the imbalances.
All this suggests that chemical heating in massive NSs may be inefficient.
Further, in this work we did not account for the non-equilibrium lepton decay process (\citealt{ag10}):
\begin{eqnarray}
{\rm e}+{\rm l} \rightarrow {\rm \mu} +{\rm l} +\nu_{\rm e} +\overline{\nu}_{\rm \mu},\,\,\, {\rm \mu}+{\rm l} \rightarrow {\rm e} +{\rm l} +\nu_{\rm \mu} +\overline{\nu}_{\rm e}. \label{lepton}
\end{eqnarray}
Generally, this process is much weaker than MUrca processes
(as follows from figure 4 of \citealt{ag10}, by about eight orders of magnitude).
However, when superfluidity/superconductivity
strongly
suppresses MUrca reactions,
lepton decay becomes the main process of particle mutual transformations.
Note that \cite{ag10} calculated the non-equilibrium reaction rates for (\ref{lepton})
in the subthermal regime,
when $\mu_{\rm e}-\mu_{\rm \mu}\ll T$.
In our problem we are interested in the opposite (suprathermal) regime,
when $\mu_{\rm e}-\mu_{\rm \mu}\gg T$.
Unfortunately, to the best of our knowledge, lepton decay in this limit has not been discussed in the literature,
so we did not include this process
in this work.
However, we believe that lepton decay cannot qualitatively affect our results
since it relaxes $\mu_{\rm e}-\mu_{\rm \mu}$ only,
tending to equalize $\eta_{\rm e}$ and $\eta_{\rm \mu}$,
but it is unable
to make $\eta_{\rm l}$ vanish.
Note that, in all our calculations $\eta_{\rm e}$ and $\eta_{\rm \mu}$
are rather close to each other,
which means that even if the lepton decay
were efficient,
it would equalize $\eta_{\rm e}$ and $\eta_{\rm \mu}$ at some average value,
but would not significantly affect the evolution of stellar temperature.
Next, for simplicity we assumed that neutrons in the star are nonsuperfluid (normal).
Neutron superfluidity may affect the chemical heating in two ways.
First, it introduces the Cooper pairing neutrino emission process,
a strong cooling agent in NSs with internal temperatures
comparable to, but smaller than,
the neutron critical temperature (\citealt{ykgh01}).
Note that, due to rather low temperatures of MSPs,
the Cooper pairing neutrino emission process is negligible in MSPs.
This process
may, however, increase the NS cooling rate at the LMXB stage,
when an NS is noticeably hotter.
As a result, the equilibrium temperature at the LMXB stage
may appear to be a bit smaller than in our simulations.
The second effect of neutron superfluidity concerns the suppression of
particle mutual transformations in MUrca reactions.%
\footnote{
Bremsstrahlung processes with neutrons are also suppressed by neutron superfluidity.}
It directly reduces the cooling/heating rate (depending on the ratio of $\eta_{\rm l}/T$; see \citealt{fr05})
of an NS due to MUrca processes and,
in addition, reduces the relaxation rates of the imbalances,
allowing
$\eta_{\rm e}$ and $\eta_{\rm \mu}$
to reach higher values (\citealt*{ynh20}).
However, such a suppression of MUrca reactions is only efficient
as long as $\eta_{\rm l}\; \raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; \Delta_{\rm n}$, where $\Delta_{\rm n}$ is the neutron superfluid energy gap.
Once $\eta_{\rm l}$ have reached $\Delta_{\rm n}$,
the subsequent growth of the imbalances
leads to less and less efficient suppression
of the reaction rates by neutron superfluidity.
In other words, neutron superfluidity
has a significant
impact
on the MUrca reaction rates only
if
$\eta_{\rm l}\; \raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; \Delta_{\rm n}$
(or $\eta_{\rm l}^\infty \; \raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; \Delta_{\rm n}^\infty$) in a substantial fraction of the NS core.
However, observations of cooling NSs
and NSs in LMXBs
seem to indicate
(e.g., \citealt{gkyg04,plps04,gkyg05,syhhp11,page11,CasA13,CasA15,bhsp18,kgd20,kgd21,sow21})
that
the maximum value of the redshifted neutron energy gap in the core satisfies
$\Delta_{\rm n}^\infty\; \raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 0.07\,\rm MeV$, the gap being even lower away from the maximum.
This constraint also
does not contradict microscopic calculations
(e.g., \citealt{ls01, yls99,gps14,dlz14, drddwcp16, sc19}).
At the same time,
our results imply that the chemical heating
is capable of maintaining the surface temperature $T_{\rm s}^\infty\approx 10^5\rm\, K$
for the parameters of PSR J0437-4715 as an example,
only if the imbalances are rather large,
$\eta_{\rm l}^\infty> 0.1\,\rm MeV$,
i.e.,
$\eta_{\rm l}^\infty> \Delta_{\rm n}^\infty$.
This suggests that neutron superfluidity should not affect our results qualitatively,
although some quantitative effect is
of course expected.
Now, let us comment on
our model of proton superconductivity.
When considering superconducting NSs,
we assumed that proton superconductivity
extends over the whole NS core.
Only in this case our conclusions
about the thermal states of superconducting MSPs are valid.
If $\Delta_{\rm p}\; \raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; T$ in some part of the core,
the imbalances effectively relax
through unsuppressed MUrca reactions there,
and no effective chemical heating is possible
(see \citealt{reisenegger15}).
Moreover,
to find
the thermal state of a superconducting spinning down MSP
with $B=0$,
we assumed that the redshifted proton critical temperature
is constant throughout the core.
\footnote{
We have to specify the $T_{\rm cp}^\infty$-profile
to calculate the reduction factors for MUrca processes
by proton superconductivity. In the case of non-vanishing magnetic field,
$B\neq 0$, we do not need to calculate these factors, because
we treat MUrca reactions outside the flux tubes/normal domains as fully suppressed in this case
(i.e., we neglect a contribution of MUrca reactions outside the flux tubes/normal domains in comparison
to that inside the flux tubes/normal domains).
This approach is justified for typical LMXB/MSP temperatures and magnetic fields $B \gtrsim 10^6$~G
as long as $\eta_{\rm l}^\infty<\Delta_{\rm p}^\infty$ in the whole stellar core.
The latter condition is fulfilled for most of our numerical examples
for the critical temperature profiles with $T_{\rm cp}^\infty\; \raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 2\times 10^9\, \rm K$,
see footnote \ref{foot4} for details.}
In contrast, \cite{reisenegger15} considered a more realistic bell-shaped profile
of $T_{\rm cp}$.
They found that the redshifted imbalances
first grow up to the lowest value of $\Delta_{\rm p}^\infty$ in the core,
then nonequilibrium MUrca reactions switch on
in the volume fraction in which $\eta_{\rm l}^\infty>\Delta_{\rm p}^\infty$
and prevent subsequent growth of the imbalances in the NS core.
This means that our results for the flat profile
with $T_{\rm cp}^\infty=2\times 10^9\, \rm K$
are equally valid for {\it any}
bell-shaped profile with the minimum value of
$T_{\rm cp}^\infty$ equal to $2\times 10^9\, \rm K$.
Note that microscopic calculations
(see, e.g., \citealt{drddwcp16} and Fig.\ \ref{Fig:Tc})
imply that in our stellar model the chosen value of
$T_{\rm cp}^\infty=2\times 10^9\, \rm K$ can be considered
as a lower limit
on the real minimum of $T_{\rm cp}^\infty$.
Thus, the derived surface temperature
(see the last panel in Fig.\ \ref{Fig:Ts})
is a lower estimate of the real one.
Finally, modeling of the Roche-lobe decoupling phase
predicts (\citealt{tauris12})
that at the end of the LMXB phase
an NS experiences a spin-down torque,
which
can
substantially reduce the NS spin frequency.
Such a spin-down would additionally compress the NS matter.
However, as we checked,
this effect
does not significantly affect our results,
since the matter compression in the course of accretion is much more efficient.
\section{Conclusions}
\label{conc}
This work
studies two aspects of the chemical heating of MSPs.
First, we analyze the effect of stellar core magnetic field on the heating
of superconducting MSPs.
Second, we look into the role of the preceding accretion at the LMXB stage
in building up the chemical potential imbalances and
their effect on the subsequent thermal evolution of an MSP.
In the previous studies of
\cite{pr10} and \cite{reisenegger15} it was found that an MSP
should be effectively heated by the rotochemical
mechanism if
proton and/or neutron energy gaps
are sufficiently large
in the whole stellar core.
This is required for an efficient heating since
the redshifted chemical potential imbalances $\eta_{\rm l}^\infty$ tend to grow
to the lowest value of the corresponding redshifted gap
in the course of NS compression.
Higher values of the gaps
lead to larger
imbalances, when an NS reaches the steady state.
Since the heating rate due to the non-equilibrium
MUrca processes
is proportional to $\eta_{\rm l}^\infty$ in
this state, larger
gaps result in hotter NSs in
the steady state.
Adopting the flat critical temperature profile
throughout the core, $T_{\rm cp}^\infty=\rm const$,
we confirmed that in the absence
of the magnetic field the
imbalances tend to grow
to the value of the proton superfluid gap ($\eta_{\rm l}^\infty\approx \Delta_{\rm p}^\infty$).
However, we found that
even a small magnetic field in the core
(in our numerical examples, $B\; \raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 10^6\,\rm G$) may terminate this growth at some value of $\eta_{\rm l}^\infty<\Delta_{\rm p}^\infty$.
This happens because the core magnetic field destroys
superconductivity in some volume fraction
proportional to the field value,
so that
unsuppressed nonequilibrium reactions may
proceed there
and relax the imbalances.
As a result,
the steady state corresponds to lower $\eta_{\rm l}^\infty$
and lower surface temperatures at higher $B$.
Thus, the core magnetic field reduces the
efficiency of the rotochemical heating
even if proton superconductivity extends over the whole NS core.
In particular,
sufficiently high values of the core magnetic field (such as $B=10^{12}\,\rm G$)
significantly complicate explanation of
the surface temperature of J0437 within the rotochemical heating scenario.
On the contrary,
the preceding accretion at the LMXB stage
may help to bring the imbalances
to higher values
even at non-zero magnetic field in the core.
This becomes possible
because of the strong compression of NS interiors caused by the accreted matter.
As a result, after accretion ceases,
nonequilibrium processes heat an MSP,
and the latter may stay warm ($T_{\rm s}^\infty\; \raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\; 10^5\,\rm K$) for about
a billion years even in the absence of any spin-down (see Fig.\ \ref{Fig:TscSC}).
In this respect it is worth noting
that all the known effective reheating mechanisms
operate by transforming the rotational energy
to the thermal energy (\citealt{alpar84,reisenegger95,gkr15}).
Thus, currently, it is believed that the higher the rotational energy losses of an MSP,
the larger is its surface temperature.
Here we find that this is not necessarily the case,
since preceding accretion may `charge' the MSP core with the chemical energy,
thus providing the long-lasting energy source for the MSP's subsequent thermal luminosity.
This means that \underline{MSPs with low spin-down rates may still be warm}.
Finally, it is worth noting that the magnetic field
in superconducting cores of ordinary pulsars
may also reduce the efficiency of rotochemical heating
in these objects.
We leave a detailed discussion of the related effects for a future work.
\section*{Acknowledgments}
We thank Andreas Reisenegger for valuable comments on the draft version of the manuscript.
EK was supported by the Russian Science Foundation
(grant number 19-12-00133).
\section*{Data availability}
The data underlying this article are available in the article.
|
2,869,038,154,978 | arxiv | \section{Introduction}
In this paper, we say that a compact complex manifold $X$ is a {\it Calabi-Yau manifold} if its canonical bundle $\omega_X \simeq \ensuremath{\mathcal{O}}_X$ and
$H^i(X, \ensuremath{\mathcal{O}}_X) = H^0(X, \Omega_X^i)=0$ for $0<i < \dim X$. Moreover, we say that $X$ is a {\it Calabi-Yau $n$-fold} if its dimension is $n$.
Projective Calabi-Yau manifolds form an important class of algebraic varieties.
Non-K\"{a}hler Calabi-Yau manifolds are well investigated in complex differential geometry including those without the condition on the cohomology groups (cf. \cite{MR2891478}, \cite{MR3372471}).
Reid's fantasy \cite{MR909231} suggests the possible importance of non-K\"{a}hler Calabi-Yau manifolds in the classification of projective ones.
One of the important problems on projective Calabi-Yau manifolds is whether their topological types are finite or not.
Inspired by this background, we construct infinitely many non-K\"{a}hler Calabi-Yau manifolds with the following properties.
\begin{thm}\label{thm:main}
For positive integers $m$ and $N \ge 4$, there exists a simply connected non-K\"{a}hler Calabi-Yau $N$-fold $X=X(m)$ such that
\[
b_2(X) = \begin{cases}m+10 & N=4 \\
m+2 & N \ge 5
\end{cases}, \ \ a(X)=N-2,
\]
where $b_2(X)$ is the 2nd Betti number
and $a(X)$ is the algebraic dimension of $X$. The topological Euler number $e(X)$ can also be computed (Proposition \ref{prop:pi1Euler}).
\end{thm}
Since the 2nd Betti number of $X$ can be arbitrarily large, the examples give infinitely many topological types of non-K\"{a}hler Calabi-Yau manifolds of dimension $N \ge 4$.
As far as we know, these are the first examples exhibiting infinitely many topological types of Calabi-Yau manifolds (in our sense) in
a fixed dimension $N \ge 4$.
The examples suggest that
there is a great abundance of non-K\"{a}hler Calabi-Yau manifolds which are close to projective ones even in dimension $>3$.
Note that direct products of known lower-dimensional Calabi-Yau manifolds have non-zero holomorphic forms in intermediate degrees and so are not Calabi-Yau in our sense.
Calabi-Yau 3-folds with arbitrarily large $b_3$ and $b_2 =0$ are constructed by Clemens and Friedman (cf.\cite{MR1141199}, \cite{MR1410077}).
Hashimoto and the author constructed non-K\"{a}hler Calabi-Yau 3-folds with arbitrarily large $b_2$ by smoothing normal crossing varieties in \cite{Hashimoto:aa}.
The idea of constructing projective Calabi-Yau manifolds by smoothing SNC varieties goes back to the papers \cite{MR1296351} and \cite{MR2658406}. In this paper, we construct examples by the same method as \cite{Hashimoto:aa}.
\subsection{Comments on the construction}
The idea of the construction in \cite{Hashimoto:aa} was to consider distinct SNC varieties by gluing smooth varieties along their anticanonical divisors through an automorphism of infinite order. The point is that we use smooth varieties which are blow-ups of other varieties and the number of blow-up centers increases when we change the gluing isomorphisms.
In this paper, we consider an SNC variety of the form $X_0 = X_1 \cup X_2$, where $X_1=\ensuremath{\mathbb{P}}^2 \times T$ and $X_2$ is a blow-up of $X_1$ and $T \subset \ensuremath{\mathbb{P}}^1 \times \ensuremath{\mathbb{P}}^n$ is a hypersurface of bi-degree $(1, n+1)$.
The essential ingredient in this paper is the Schoen type Calabi-Yau manifold together with its automorphisms of infinite order (cf. \cite{MOF2020}, \cite{MR923487}, \cite{MR1672045}).
When $n=2$, the intersection of two irreducible components is the Schoen Calabi-Yau 3-fold which is a fiber product of two rational elliptic surfaces over $\ensuremath{\mathbb{P}}^{1}$.
We actually use isomorphisms of such Calabi-Yau ($N-1$)-folds which come from quadratic transformations on rational elliptic surfaces.
We effectively use the description of general rational elliptic surfaces
as hypersurfaces of $\ensuremath{\mathbb{P}}^2 \times \ensuremath{\mathbb{P}}^1$ of bidegree $(3,1)$.
\begin{rem}
In \cite{Hashimoto:aa}, examples of SNC Calabi-Yau 3-folds are constructed by using automorphisms of a $(2,2,2)$-hypersurface $X_{(2,2,2)} \subset \ensuremath{\mathbb{P}}^1 \times \ensuremath{\mathbb{P}}^1 \times \ensuremath{\mathbb{P}}^1$ induced by the covering involutions of the double covers $ X \rightarrow \ensuremath{\mathbb{P}}^1 \times \ensuremath{\mathbb{P}}^1$.
Note that, for $X=X_{(2, \ldots, 2)} \subset (\ensuremath{\mathbb{P}}^1)^n$, the covering involutions
of projections $X \rightarrow (\ensuremath{\mathbb{P}}^1)^{n-1}$ are birational maps with indeterminacies when $n \ge 4$,
thus we need new gluing isomorphisms to construct examples in higher dimensions.
It is also not clear that the construction of Clemens--Friedman can be generalized to higher dimensions.
\end{rem}
After finishing the manuscript, the author received an e-mail from Nam-Hoon Lee and he constructed a non-K\"{a}hler Calabi-Yau 4-fold by smoothing SNC varieties \cite{Lee:2021aa}.
\section{Preliminaries}
We can construct an SNC variety by gluing two smooth proper varieties along their smooth divisors as follows.
\begin{prop}\label{prop:gluingSNC}
Let $X_1$ and $X_2$ be smooth proper varieties and let $D_i \subset X_i$ be a smooth divisor for $i=1,2$
with an isomorphism $\phi \colon D_1 \rightarrow D_2$.
Then there exists an SNC variety $X_0$ with a closed immersion $\iota_i \colon X_i \hookrightarrow X_0$ for $i=1,2$ in the Cartesian diagram
\[
\xymatrix{
D_1 \ar[d]_{i_2 \circ \phi} \ar[r]^{i_1} & X_1 \ar[d]^{\iota_1} \\
X_2 \ar[r]^{\iota_2} & X_0
},
\]
and we write $X_0=: X_1 \cup^{\phi} X_2$.
Moreover, if $D_1$ is connected and $D_i \in |{-}K_{X_i}|$ for $i=1,2$, then we have $\omega_{X_0} \simeq \ensuremath{\mathcal{O}}_{X_0}$.
\end{prop}
\begin{proof}
See \cite[Proposition 2.1, Corollary 2.2]{Hashimoto:aa} and references therein.
\end{proof}
\begin{defn}\label{defn:d-semistable}
Let $X$ be an SNC variety and $X= \bigcup_{i=1}^N X_i$ be the decomposition into its irreducible components. Let $D:= \Sing X = \bigcup_{i \neq j} (X_i \cap X_j)$ be the double locus and let $I_{X_i}, I_D \subset \ensuremath{\mathcal{O}}_X$ be the ideal sheaves of
$X_i$ and $D$ on $X$. Let
\[
\ensuremath{\mathcal{O}}_D(X):= (\bigotimes_{i=1}^N I_{X_i} / I_{X_i} I_D)^{\ast} \in \Pic D
\]
be the {\it infinitesimal normal bundle} as in \cite[Definition 1.9]{MR707162}.
We say that $X$ is {\it $d$-semistable} if $\ensuremath{\mathcal{O}}_D(X) \simeq \ensuremath{\mathcal{O}}_D$.
If $X= X_1 \cup X_2$ for smooth varieties $X_1$ and $X_2$, then $\ensuremath{\mathcal{O}}_D(X) \simeq \ensuremath{\mathcal{N}}_{D/X_1} \otimes \ensuremath{\mathcal{N}}_{D/X_2}$, where $\ensuremath{\mathcal{N}}_{D/X_i}$ is the normal bundle of $D \subset X_i$ for $i=1,2$.
\end{defn}
The following result on smoothings of an SNC variety is essential for the construction.
\begin{thm}\label{thm:smoothingSNCCY}
Let $X$ be an $n$-dimensional proper SNC variety whose dualizing sheaf $\omega_X$ is trivial.
Assume that $X$ is d-semistable.
Then there exists a semistable smoothing $\phi \colon \ensuremath{\mathcal{X}} \rightarrow \Delta^1$ of $X$ over a unit disk, that is, a proper surjective morphism such that $\ensuremath{\mathcal{X}}$ is smooth, $X \simeq \phi^{-1}(0)$ and $\ensuremath{\mathcal{X}}_t := \phi^{-1}(t)$ is smooth for $t \neq 0$.
\end{thm}
\begin{proof}
This follows from \cite[Theorem 4.2]{MR1296351}, \cite[Corollary 7.4]{Chan:2019vv}.
\end{proof}
\begin{rem}
For the construction of our examples, \cite[Theorem 4.2]{MR1296351} is enough although they assume that $H^{n-1}(X, \ensuremath{\mathcal{O}}_X) =0$ and $H^{n-2}(X_i, \ensuremath{\mathcal{O}}_{X_i}) =0$ for every irreducible component $X_i$ of $X$.
Indeed, we can check that the SNC variety $X_0(m)$ as in Example \ref{eg:construction} satisfies the conditions since it is a union of two rational manifolds glued along a Calabi-Yau manifold.
In \cite{Felten:2019ve} and \cite{Felten:2020vm}, generalizations of Theorem \ref{thm:smoothingSNCCY} are studied.
\end{rem}
We shall use the following description of general rational elliptic surfaces as hypersurfaces in $\ensuremath{\mathbb{P}}^2 \times \ensuremath{\mathbb{P}}^1$.
\begin{prop}\label{prop:hypersurfaceP2xP1}
Let $S \subset \ensuremath{\mathbb{P}}^2 \times \ensuremath{\mathbb{P}}^1$ be a general hypersurface of bidegree $(3,1)$, that is, a member of $|p_1^*\ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{P}}^2}(3) \otimes p_2^*\ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{P}}^1}(1)|$, where $p_1 \colon S \rightarrow \ensuremath{\mathbb{P}}^2$ and $p_2 \colon S \rightarrow \ensuremath{\mathbb{P}}^1$ denote the (restrictions of the) projections.
\begin{enumerate}
\item[(i)] $S$ is a rational elliptic surface with no $(-2)$-curve. Moreover, $p_2$ induces an anticanonical elliptic fibration
and $p_1$ is the blow-up of $\ensuremath{\mathbb{P}}^2$ at the 9 points which appear as the intersection of two cubic curves.
\item[(ii)] Let $C \subset S$ be an irreducible curve which is not a $(-1)$-curve.
Then $C^2 \ge 0$ and $|C|$ is base point free.
\end{enumerate}
\end{prop}
\begin{proof}
(i)
Note that $S$ is defined as
$$(s F + t G =0) \subset \ensuremath{\mathbb{P}}^2 \times \ensuremath{\mathbb{P}}^1,$$
where $[s:t] \in \ensuremath{\mathbb{P}}^1$ is the coordinates
and $F, G \in \ensuremath{\mathbb{C}}[x_0, x_1, x_2]$ are general homogeneous polynomials of degree $3$.
By this description, we see that $S$ is smooth and $p_1$ is the blow-up at the 9 points $(F= G=0) \subset \ensuremath{\mathbb{P}}^2$.
We see that $- K_S = p_2^* \ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{P}}^1}(1)$ and it induces an elliptic fibration, thus $S$ is a rational elliptic surface with a section.
Since $-K_S$ is nef, an irreducible curve $C$ such that $C^2 <0$ is either a $(-1)$-curve or $(-2)$-curve.
If $C \subset S$ is a $(-2)$-curve, then $C$ is $p_2$-vertical by $-K_S \cdot C =0$ and contained in a singular fiber.
It is well known that a generic pencil of cubic curves contains only irreducible members, whose singularities moreover lie outside the base locus (cf. \cite[Lemma 3.1]{MR2457523});
hence all the members of $|{-}K_S|$ are irreducible, and $S$ cannot contain a $(-2)$-curve.
\vspace{2mm}
\noindent(ii)
We see that $C^2 \ge 0$ since $K_S \cdot C \le 0$ and $C$ is neither a $(-1)$-curve nor a $(-2)$-curve.
If $p_2(C)$ is a point, then $C$ is a fiber of $p_2$ since all fibers are irreducible and reduced.
Thus $|C| = |{-}K_S|$ is a free linear system.
If $p_2(C) = \ensuremath{\mathbb{P}}^1$, then we see that $C^2 >0$ since $-K_S \cdot C >0$ and $C$ is not a $(-1)$-curve.
We see that $$
h^0(S,\ensuremath{\mathcal{O}}_S(C)) \ge \chi(S,\ensuremath{\mathcal{O}}_S(C)) = \chi(S, \ensuremath{\mathcal{O}}_S) + \frac{C(C-K_S)}{2} > 1+ \frac{C^2}{2} >1.
$$
Thus $|C|$ has no fixed part and $C$ is nef and big.
We also check that $|C|$ is free. Indeed, we have an exact sequence
\[
H^0(S, \ensuremath{\mathcal{O}}_S(C)) \rightarrow H^0(f, \ensuremath{\mathcal{O}}_f(C)) \rightarrow H^1(S, \ensuremath{\mathcal{O}}_S(C-f))
\]
for a smooth fiber $f \in |{-}K_S|$, $H^1(S, \ensuremath{\mathcal{O}}_S(C-f))= H^1(S, K_S+C) =0$ by the vanishing theorem and $f \cdot C \ge 2$ since $C$ is not a section.
\end{proof}
We also need the following isomorphism of a rational elliptic surface $S$ induced by a quadratic transformation of $\ensuremath{\mathbb{P}}^2$.
(ii) and (iii) are technical, but essential in the construction of our Calabi-Yau manifolds.
\begin{prop}\label{prop:quadraticTrans}
Let $S \subset \ensuremath{\mathbb{P}}^2 \times \ensuremath{\mathbb{P}}^1$ be a general $(3,1)$-hypersurface.
Let $p_1, \ldots, p_9 \in \ensuremath{\mathbb{P}}^2$ be the points on which the birational morphism $\mu=p_1 \colon S \rightarrow \ensuremath{\mathbb{P}}^2$ induced by the 1st projection
is not an isomorphism.
Let $E_i:= \mu^{-1}(p_i)$ for $i=1, \ldots, 9$ be $(-1)$-curves and $H:= \mu^*\ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{P}}^2}(1)$. Then we have the following.
\begin{enumerate}
\item[(i)] For $1\le i<j<k \le 9$, there exist a hypersurface $S_{ijk} \subset \ensuremath{\mathbb{P}}^2 \times \ensuremath{\mathbb{P}}^1$ and an isomorphism $\phi_{ijk} \colon S \rightarrow S_{ijk}$ over $\ensuremath{\mathbb{P}}^1$
such that $$\phi_{ijk}^* (H_{ijk}) = 2H - E_i - E_j - E_k, $$ where $H_{ijk}$ is the pull-back of $\ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{P}}^2}(1)$ to $S_{ijk}$ by the 1st projection.
\item[(ii)] For a positive integer $m$, there exist a hypersurface $S_m \subset \ensuremath{\mathbb{P}}^2 \times \ensuremath{\mathbb{P}}^1$ and an isomorphism $\phi_m \colon S \rightarrow S_m$ over $\ensuremath{\mathbb{P}}^1$ such that
\begin{equation}\label{eq:phi-mpullback}
\phi_m^* H_{S_m} = (27m^2+1)H - (9m^2-3m)F_1 - 9m^2 F_2 - (9m^2+3m) F_3,
\end{equation}
where $H_{S_m}$ is the pull-back of $\ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{P}}^2}(1)$ to $S_m$ and $F_i := E_{3i-2} + E_{3i-1} + E_{3i}$ for $i=1,2,3$.
\item[(iii)] In (ii), the divisor $L_m:= H + \phi_m^* H_{S_m} + mK_S$ is ample and free on $S$.
\end{enumerate}
\end{prop}
\begin{rem}
In (iii), the essential point used later is that the divisor is effective and can be written as a sum of smooth curves.
It is nevertheless worthwhile to establish freeness, since then the linear system contains a smooth irreducible member.
This property lets us determine the 2nd Betti number of our Calabi-Yau manifolds obtained as smoothings.
\end{rem}
\begin{proof}
\noindent(i) We see that $p_1, \ldots, p_9 \in \ensuremath{\mathbb{P}}^2$ are in ``Cremona general position'' in the sense of \cite[pp.1178]{MR2457523},
thus we can consider the quadratic transformation $q_{ijk}\colon \ensuremath{\mathbb{P}}^2 \dashrightarrow \ensuremath{\mathbb{P}}^2$ at $p_i, p_j, p_k$.
This $q_{ijk}$ induces an isomorphism $\phi_{ijk} \colon S \rightarrow S_{ijk}$ onto a hypersurface $S_{ijk} \subset \ensuremath{\mathbb{P}}^2 \times \ensuremath{\mathbb{P}}^1$.
By the construction, we see that $\phi_{ijk}$ is induced by the birational transformation $q_{ijk} \times \id \colon \ensuremath{\mathbb{P}}^2 \times \ensuremath{\mathbb{P}}^1 \dashrightarrow \ensuremath{\mathbb{P}}^2 \times \ensuremath{\mathbb{P}}^1$, thus $\phi_{ijk}$ is an isomorphism over $\ensuremath{\mathbb{P}}^1$.
Let $E'_i, E'_j, E'_k \in \Pic S_{ijk}$ be the images of $H-E_j-E_k, H- E_i - E_k, H- E_i - E_j \in \Pic S$ by $\phi_{ijk}$.
For $l \neq i,j,k$, let $E'_l:= \phi_{ijk}(E_l) \subset S_{ijk}$.
We identify $\Pic S \simeq \ensuremath{\mathbb{Z}}^{10}$ and $\Pic S_{ijk} \simeq \ensuremath{\mathbb{Z}}^{10}$ by the basis $(H, E_1, \ldots, E_9)$ and $(H_{ijk}, E'_1, \ldots, E'_9)$. Here $\ensuremath{\mathbb{Z}}^{10}$ is the lattice with a bilinear form $$(a, b_1, \ldots, b_9) \cdot (a', b'_1, \ldots, b'_9):= aa' - \sum_{l=1}^9 b_l b'_l. $$
Let $h, e_i, e_j, e_k \in \ensuremath{\mathbb{Z}}^{10}$ be the images of $H, E_i, E_j, E_k$ via the identification $\Pic S \rightarrow \ensuremath{\mathbb{Z}}^{10}$, that is,
\[
h:= (1,0, \ldots ,0), \quad
e_1:= (0,1, 0, \ldots, 0), \ \ldots, \ e_9:= (0,0,\ldots, 0,1) \in \ensuremath{\mathbb{Z}}^{10}.
\]
Then we see that $\phi_{ijk}^* \colon \Pic S_{ijk} \rightarrow \Pic S$ is the reflection orthogonal to $H- E_i - E_j - E_k$, that is, it induces $\phi_{ijk}^* \colon \ensuremath{\mathbb{Z}}^{10} \rightarrow \ensuremath{\mathbb{Z}}^{10}$ determined by
$$\phi_{ijk}^*(x) = x+ (x \cdot \alpha_{ijk}) \alpha_{ijk}, $$
where $\alpha_{ijk}:=h-e_i - e_j - e_k \in \ensuremath{\mathbb{Z}}^{10}$.
Hence we obtain the required equality by substituting $x=h$.
\vspace{2mm}
\noindent(ii)
Let $\psi_S:= \phi_{789} \circ \phi_{456} \circ \phi_{123} \colon S \rightarrow S'$ be a composite of three isomorphisms as in (i), that is,
$\phi_{123}$ is the quadratic transformation at $\{ p_1, p_2, p_3 \}$, and $\phi_{456}$ and $\phi_{789}$ are the quadratic transformations at
the images of $\{ p_4,p_5,p_6\}$ and $\{p_7, p_8, p_9 \}$, respectively.
Then let $$\psi_1:=\psi_{S'} \circ \psi_S \colon S \rightarrow S'', $$ where $\psi_{S'}$ is the same operation on $S'$ and let $S_1:= S''$ (and $S_0:=S$).
We can perform this operation for any positive integer $i$ and construct an isomorphism
$\psi_i \colon S_{i-1} \rightarrow S_i$ as a composite of six quadratic transformations.
Now let $\phi_m:= \psi_m \circ \cdots \circ \psi_1 \colon S \rightarrow S_m$.
On each surface $T=S_i$, we have an isomorphism
$\Pic T \rightarrow \ensuremath{\mathbb{Z}}^{10}$ determined by the basis $(H_{T}, E_{T,1}, \ldots, E_{T, 9})$, where $H_T$ is the pull-back of $\ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{P}}^2}(1)$ and $E_{T,i}$'s are the exceptional divisors.
Note that the labelling of the exceptional divisors is determined as in (i).
We also use the same symbol for isomorphisms of the Picard group and $\ensuremath{\mathbb{Z}}^{10}$, that is,
for $\psi_1 \colon S \rightarrow S'$, we write $\psi_1^* \colon \Pic S' \rightarrow \Pic S$ and $\psi_1^* \colon \ensuremath{\mathbb{Z}}^{10} \rightarrow \ensuremath{\mathbb{Z}}^{10}$, for example.
To obtain the equality (\ref{eq:phi-mpullback}), we are reduced to showing the following claim.
\begin{claim} Let $h, e_1, \ldots, e_9 \in \ensuremath{\mathbb{Z}}^{10}$ be the elements corresponding to
$(H, E_1, \ldots, E_9)$ or $(H_{S_m}, E_{S_m, 1}, \ldots, E_{S_m, 9})$ as before. Let $f_i := e_{3i-2} + e_{3i-1} + e_{3i}$ for $i=1,2,3$. Then we have
\begin{equation}\label{eq:pullbackclaim}
\phi_m^*(h) = (27m^2+1)h - (9m^2-3m)f_1 - 9m^2 f_2 - (9m^2+3m) f_3.
\end{equation}
\end{claim}
\begin{proof}[Proof of Claim]
We check the required equality by induction on $m$.
Recall that $\phi_{ijk}^* \colon \ensuremath{\mathbb{Z}}^{10} \rightarrow \ensuremath{\mathbb{Z}}^{10}$ is the reflection for $\alpha_{ijk} = h-e_i -e_j -e_k \in \ensuremath{\mathbb{Z}}^{10}$.
Then we compute
\begin{multline*}
\psi_S^*(h) = \phi_{789}^*(\phi_{456}^*(\phi_{123}^*(h)) ) =\phi_{789}^*(\phi_{456}^*(2h-f_1) ) \\
=\phi_{789}^*(4h-f_1-2f_2) = 8h-f_1-2f_2 -4f_3.
\end{multline*}
Then we compute similarly
\begin{equation*}
\psi_1^*(h) = \phi_{789}^*(\phi_{456}^*(\phi_{123}^*(8h-f_1 -2f_2 -4f_3)) ) = 28h-6f_1-9f_2 -12f_3,
\end{equation*}
thus obtain the equality for $\phi_1 = \psi_1$.
Suppose that we have the equality (\ref{eq:pullbackclaim}) for $\phi_m$.
By a similar computation, we obtain
\begin{multline*}
\phi_{m+1}^*(h) = \psi_{m+1}^* \phi_m^*(h) \\
= \psi_{m+1}^*((27m^2+1)h - (9m^2-3m)f_1 - 9m^2 f_2 - (9m^2+3m) f_3) \\
= (27(m+1)^2+1)h - (9(m+1)^2-3(m+1)) f_1 - 9(m+1)^2 f_2 - (9(m+1)^2 + 3(m+1)) f_3.
\end{multline*}
Thus we obtain the claim by induction.
\end{proof}
This finishes the proof of (ii).
\vspace{2mm}
\noindent(iii)
We have $L_m \cdot (-K_S) = 6 >0$ and
\begin{multline*}
L_m^2 = (H + \phi_m^*H_{S_m})^2 + 2m (K_S \cdot (H+ \phi_m^*H_{S_m}))\\
= (2+2(27m^2+1)) -12m
= 54m^2 -12m +4 >0.
\end{multline*}
By these and the Riemann-Roch formula,
we see that $L_m$ is effective.
Since $S$ is general, it is enough to show $L_m \cdot C >0$ for every $(-1)$-curve $C$.
We write $C= \alpha H - \sum_{i=1}^9 \beta_i E_i$ for some integers $\alpha, \beta_1, \ldots, \beta_9$
such that $\alpha^2 - \sum_{i=1}^9 \beta_i^2 =-1$ and $3\alpha - \sum_{i=1}^9 \beta_i =1$. Let $\gamma_i:= \beta_{3i-2}+\beta_{3i-1} + \beta_{3i}$
for $i=1,2,3$.
We may assume that $\alpha \le m$ since, if $\alpha >m$, then we have
$$L_m \cdot C \ge (H+mK_S)\cdot C = \alpha-m >0. $$
We may also assume that $\beta_i \ge 0$ for all $i$: indeed, $\beta_i = C \cdot E_i$, so $\beta_i<0$ is possible only for $C=E_i$, and this case is excluded since
\[
L_m \cdot E_i = \phi_m^*(H_{S_m}) \cdot E_i + mK_S \cdot E_i \ge
(9m^2 -3m) -m >0.
\]
Then we compute
\begin{multline*}
L_m \cdot C = \alpha(27m^2+2) -(9m^2-3m)\gamma_1 - 9m^2 \gamma_2 - (9m^2+3m)\gamma_3 -m \\
= \alpha(27m^2+2) -(9m^2-3m)(3\alpha-1) - 3m(\gamma_2 + 2 \gamma_3) -m \\
=(9m+2) \alpha +(9m^2-3m) -3m(\gamma_2 + 2 \gamma_3)-m \\
\ge (9m+2) \alpha +(9m^2-3m) -3m(2(3\alpha-1)) -m \\
= -9m\alpha + 2 \alpha +9m^2 +2m = 9m(m - \alpha)+ 2m + 2 \alpha \ge 2m + 2 \alpha >0,
\end{multline*}
where we used $\gamma_2 + 2\gamma_3 \le 2(3\alpha-1)$ for the 1st inequality and $m \ge \alpha$ for the 2nd inequality.
Thus we see that $L_m$ is ample by the Nakai--Moishezon criterion.
Since $L_m \cdot (-K_S) = 6$, we see that $L_m|_{F}$ is free for a general smooth element $F \in |{-}K_S|$.
By this and the exact sequence $$0 \rightarrow \ensuremath{\mathcal{O}}_S(K_S +L_m) \rightarrow \ensuremath{\mathcal{O}}_S(L_m) \rightarrow \ensuremath{\mathcal{O}}_F(L_m) \rightarrow 0,$$
we check that $|L_m|$ is free as in the proof of Proposition \ref{prop:hypersurfaceP2xP1}(ii).
\end{proof}
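\begin{rem}
The lattice arithmetic in the proof above is elementary but error-prone, and it can also be checked mechanically. The following Python/SymPy sketch (the function and variable names are ours and purely illustrative) verifies the displayed value of $\phi_1^*(h)$, the induction step for the equality (\ref{eq:pullbackclaim}), and the self-intersection number $L_m^2=54m^2-12m+4$ from the proof of (iii); it uses only the bilinear form on $\ensuremath{\mathbb{Z}}^{10}$ and the reflection formula.
\begin{verbatim}
import sympy as sp

m = sp.symbols('m', integer=True, positive=True)
h = sp.Matrix([1] + [0]*9)                       # class of H
e = [sp.Matrix([0]*(i+1) + [1] + [0]*(8-i)) for i in range(9)]
f = [e[0]+e[1]+e[2], e[3]+e[4]+e[5], e[6]+e[7]+e[8]]

def dot(x, y):                 # (a,b).(a',b') = a a' - sum_l b_l b_l'
    return sp.expand(x[0]*y[0] - sum(x[i]*y[i] for i in range(1, 10)))

def refl(x, alpha):            # reflection orthogonal to alpha
    return (x + dot(x, alpha)*alpha).applyfunc(sp.expand)

def psi(x):                    # psi_S^*: phi_123^*, then phi_456^*, phi_789^*
    for fi in f:
        x = refl(x, h - fi)
    return x

assert psi(psi(h)) == 28*h - 6*f[0] - 9*f[1] - 12*f[2]   # phi_1^*(h)

phi_m = ((27*m**2 + 1)*h - (9*m**2 - 3*m)*f[0]
         - 9*m**2*f[1] - (9*m**2 + 3*m)*f[2]).applyfunc(sp.expand)
assert psi(psi(phi_m)) == phi_m.subs(m, m + 1).applyfunc(sp.expand)

K = -3*h + sum(e, sp.zeros(10, 1))               # canonical class K_S
L = h + phi_m + m*K            # L_m = H + phi_m^* H_{S_m} + m K_S
assert dot(L, L) == sp.expand(54*m**2 - 12*m + 4)
\end{verbatim}
\end{rem}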
We have the following description of Calabi-Yau manifolds of ``Schoen type'' arising from general rational elliptic surfaces (\cite{MR923487}).
The author learned (i) of the following proposition from \cite{MOF2020} in the case $n=2$ (cf. \cite[Section 2]{MR1672045}).
\begin{prop}\label{prop:SchoenCY3}
Let $S \subset \ensuremath{\mathbb{P}}^2 \times \ensuremath{\mathbb{P}}^1$ be a general $(3,1)$-hypersurface and $T \subset \ensuremath{\mathbb{P}}^1 \times \ensuremath{\mathbb{P}}^n$ a
general $(1,n+1)$-hypersurface for some $n \ge 2$.
Let $X_1:= \ensuremath{\mathbb{P}}^2 \times T$ and $X_2:= S \times \ensuremath{\mathbb{P}}^n$ be divisors in $\ensuremath{\mathbb{P}}^2 \times \ensuremath{\mathbb{P}}^1 \times \ensuremath{\mathbb{P}}^n$
and $X_{12}:= X_1 \cap X_2$. Let $p_S \colon X_{12} \rightarrow S$ and $p_T \colon X_{12} \rightarrow T$ be
the surjective morphisms induced by the projections.
\begin{enumerate}
\item[(i)] $X_{12}$ is a Calabi-Yau $(n+1)$-fold and there is a natural isomorphism $$\varphi \colon S \times_{\ensuremath{\mathbb{P}}^1} T \rightarrow X_{12}, $$ where the fiber product is defined by the projections
$\phi_S \colon S \rightarrow \ensuremath{\mathbb{P}}^1$ and $\phi_T \colon T \rightarrow \ensuremath{\mathbb{P}}^1$.
\item[(ii)](cf. \cite[Corollary 3.2]{MR1240604}) We have $$\Pic X_{12} \simeq (\Pic S \oplus \Pic T)/ \ensuremath{\mathbb{Z}}(-K_S, K_T). $$
Thus $\Pic X_{12} \simeq \ensuremath{\mathbb{Z}}^{19}$ when $ n=2$ and $\Pic X_{12} \simeq \ensuremath{\mathbb{Z}}^{11}$ when $n \ge 3$.
\end{enumerate}
\end{prop}
\begin{proof}
\noindent(i) This follows from properties of fiber products.
\vspace{2mm}
\noindent(ii) By the same argument as the 1st paragraph of the proof of \cite[Proposition 1.1]{MR1093334}, we see that the naturally induced homomorphism
$$p_S^* \oplus p_T^* \colon \Pic S \oplus \Pic T \rightarrow \Pic X_{12}$$ is surjective.
By the same argument as the proof of \cite[Proposition 3.1]{MR1240604}, we can write $(L_1, L_2) \in \Ker (p_S^* \oplus p_T^*)$ as
$$(L_1, L_2) = (A_1, A_2) + m(-K_S, K_T)$$ for some numerically trivial
$A_1 \in \Pic S$, $A_2 \in \Pic T$ and $m \in \ensuremath{\mathbb{Z}}$. Since $H^1(S, \ensuremath{\mathcal{O}}_S) = H^1(T, \ensuremath{\mathcal{O}}_T)=0$, we see that
$A_1$ and $A_2$ are trivial. Thus we see that $\Ker (p_S^* \oplus p_T^*) = \ensuremath{\mathbb{Z}}(K_S, -K_T)$ and obtain the required isomorphism.
Since the projection morphism $T \rightarrow \ensuremath{\mathbb{P}}^n$ is the blow-up along the intersection $(F=G=0) \subset \ensuremath{\mathbb{P}}^n$ of general divisors $(F=0), (G=0)$ of degree $n+1$, we see that $\Pic T \simeq \ensuremath{\mathbb{Z}}^{10}$ if $n=2$ and $\Pic T \simeq \ensuremath{\mathbb{Z}}^2$ if $n \ge 3$.
Thus we obtain the latter statement.
\end{proof}
\section{Construction of examples}
We first explain the construction of the examples $X(m)$ by smoothing the SNC varieties $X_{0}(m)$.
\begin{eg}\label{eg:construction}
Let $m$ be a positive integer and let $n \ge 2$ be an integer.
Let $S \subset \ensuremath{\mathbb{P}}^2 \times \ensuremath{\mathbb{P}}^1$ be a general $(3,1)$-hypersurface
and $S_m \subset \ensuremath{\mathbb{P}}^2 \times \ensuremath{\mathbb{P}}^1$ be the hypersurface in Proposition \ref{prop:quadraticTrans}(ii)
with the isomorphism $\phi_m \colon S \rightarrow S_m$ over $\ensuremath{\mathbb{P}}^1$.
Let $T \subset \ensuremath{\mathbb{P}}^1 \times \ensuremath{\mathbb{P}}^n$ be a general $(1,n+1)$-hypersurface and
$$Y_1:= \ensuremath{\mathbb{P}}^2 \times T \subset \ensuremath{\mathbb{P}}^2 \times \ensuremath{\mathbb{P}}^1 \times \ensuremath{\mathbb{P}}^n.$$
Let $D_1:= Y_1 \cap (S \times \ensuremath{\mathbb{P}}^n) \subset Y_1$.
Then there is a natural isomorphism $\varphi_1 \colon D_1 \rightarrow S \times_{\ensuremath{\mathbb{P}}^1} T$ as in Proposition \ref{prop:SchoenCY3}(i).
Note that $D_1 \in |{-}K_{Y_1}|$ and the normal bundle $\ensuremath{\mathcal{N}}_{D_1/Y_1} \simeq p_S^* \ensuremath{\mathcal{O}}_S(3h+f)$,
where $p_S \colon D_1 \rightarrow S$ is the projection, $f \in |{-}K_S|$ and $h:= \mu_S^* \ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{P}}^2}(1)$ for the birational morphism $\mu_S \colon S \rightarrow \ensuremath{\mathbb{P}}^2$
induced by the projection.
Let $Y_2:= \ensuremath{\mathbb{P}}^2 \times T$ and $D_2 := Y_2 \cap (S_m \times \ensuremath{\mathbb{P}}^n) \subset Y_2$.
Then there is a natural isomorphism $\varphi_2 \colon D_2 \rightarrow S_m \times_{\ensuremath{\mathbb{P}}^1} T$ as above.
Let $\Phi_m \colon D_1 \rightarrow D_2$ be the isomorphism which fits in the following diagram:
\[
\xymatrix{
D_1 \ar[d]^{\varphi_1} \ar[r]^{\Phi_m} & D_2 \ar[d]^{\varphi_2} \\
S \times_{\ensuremath{\mathbb{P}}^1} T \ar[r]^{\phi_m \times \id} & S_m \times_{\ensuremath{\mathbb{P}}^1} T.
}
\]
For $i=1, \ldots, m$, let $F_i:= p_S^{-1}(f_i) \subset D_1$ for a smooth general member $f_i \in |{-}K_S|$.
Let $\Gamma_m:= p_S^{-1}(C_m)$ for a smooth member
$$C_m \in |3H + \phi_m^*(3H_{S_m}) + (m-2)K_S| = |L_m +2(H+\phi_m^*H_{S_m}) + (-2K_S)|$$
of an ample and free linear system guaranteed by Proposition \ref{prop:quadraticTrans}(iii).
Now let $\nu_1 \colon \tilde{Y}_1 \rightarrow \ensuremath{\mathbb{P}}^2 \times T =Y_1$ be the blow-up along $F_1, \ldots, F_m$, and $\nu_2 \colon X_1 \rightarrow \tilde{Y}_1$ be the blow-up along the strict transform $\tilde{\Gamma}_m \subset \tilde{Y}_1$ of $\Gamma_m \subset D_1$.
Thus we have a composition $\mu := \nu_1 \circ \nu_2 \colon X_1 \rightarrow Y_1$.
Let $X_2:= Y_2 = \ensuremath{\mathbb{P}}^2 \times T$.
Let $\tilde{D}_1 \subset X_1$ be the strict transform of $D_1$ and $\mu_{D_1} \colon \tilde{D}_1 \rightarrow D_1$ be the induced isomorphism.
Then we can glue $X_1$ and $X_2$ along the composition isomorphism
$$\Psi_m \colon \tilde{D}_1 \xrightarrow{\mu_{D_1}} D_1 \xrightarrow{\Phi_m} D_2$$
and construct an SNC variety $$X_0(m):= X_0:= X_1 \cup^{\Psi_m} X_2$$ by Proposition \ref{prop:gluingSNC}.
We check that $X_0$ is d-semistable: since $\phi_m$ is an isomorphism over $\ensuremath{\mathbb{P}}^1$ and $f = -K_S$, we have
\[
\ensuremath{\mathcal{N}}_{D_1/Y_1} \otimes \Phi_m^*\ensuremath{\mathcal{N}}_{D_2/Y_2} \simeq \ensuremath{\mathcal{O}}_{D_1}(p_S^*(3(H+\phi_m^*H_{S_m}) +2f)) \simeq \ensuremath{\mathcal{O}}_{D_1}(F_1+ \cdots + F_m + \Gamma_m),
\]
where the second isomorphism holds since $F_1+ \cdots + F_m + \Gamma_m = p_S^*(mf + C_m)$ and $mf + C_m \sim 3(H+\phi_m^*H_{S_m}) + 2f$;
the blow-up centers of $\mu$ are chosen exactly so that this line bundle becomes trivial after passing to $\tilde{D}_1$.
We also see that $\omega_{X_0} \simeq \ensuremath{\mathcal{O}}_{X_0}$ since $\tilde{D}_1 \in |{-}K_{X_1}|$ and $D_2 \in |{-}K_{X_2}|$.
Thus we can apply Theorem \ref{thm:smoothingSNCCY} and construct a semistable smoothing $\ensuremath{\mathcal{X}}(m) \rightarrow \Delta^1$.
Let $X(m)$ be its general smooth fiber. Thus we obtain a compact complex manifold $X(m)$. We also write $X:= X(m)$ for short in the following.
\end{eg}
\subsection{Properties of the smoothings}
We have the following basic properties of $X(m)$ in Example \ref{eg:construction}.
\begin{prop}\label{prop:X_mproperty} The above $X = X(m)$ satisfies the following.
\begin{enumerate}
\item[(i)] The Hodge to de Rham spectral sequence $H^q(X, \Omega_X^p) \Rightarrow H^{p+q}(X, \ensuremath{\mathbb{C}})$ degenerates at $E_1$.
\item[(ii)] We have $H^i(X, \ensuremath{\mathcal{O}}_X)=0$ and $H^0(X, \Omega_X^i) =0$ for $0 < i < \dim X$.
We also have $\omega_X \simeq \ensuremath{\mathcal{O}}_X$, thus $X$ is a Calabi-Yau manifold.
\item[(iii)] The 2nd Betti number is $b_2(X) = m+\rho_T$, where $\rho_T:= \rk \Pic T$.
\end{enumerate}
\end{prop}
\begin{proof}
\noindent(i) This is \cite[Corollary 11.24]{MR2393625}.
\vspace{2mm}
\noindent(ii) Let $X_{12}:= X_1 \cap X_2$. We compute $H^i(X_0, \ensuremath{\mathcal{O}}_{X_0}) =0$ for $0<i < \dim X$ by the exact sequence
\[
\cdots \rightarrow H^{i-1}(X_{12}, \ensuremath{\mathcal{O}}_{X_{12}}) \rightarrow H^i(X_0, \ensuremath{\mathcal{O}}_{X_0}) \rightarrow \bigoplus_{j=1}^2 H^i(X_j, \ensuremath{\mathcal{O}}_{X_j}) \rightarrow \cdots.
\]
By this and the upper semi-continuity theorem, we obtain $H^i(X, \ensuremath{\mathcal{O}}_X) =0$ for $0< i < \dim X$.
Since we have an exact sequence
$$
H^0(X, \ensuremath{\mathcal{O}}_X) \rightarrow H^0(X, \ensuremath{\mathcal{O}}_X^*) \rightarrow H^1(X, \ensuremath{\mathbb{Z}}) \rightarrow H^1(X, \ensuremath{\mathcal{O}}_X)
$$
from the exponential exact sequence, we see that $H^1(X, \ensuremath{\mathbb{Z}}) =0$. By this and (i), we obtain $H^0(X, \Omega_X^1) =0$.
\begin{claim}
We have $H^0(X, \Omega^i_X) =0$ for $2 \le i \le \dim X-1$.
\end{claim}
\begin{proof}[Proof of Claim]
For the semistable smoothing $\ensuremath{\mathcal{X}} \rightarrow \Delta^1$ and $i \ge 0$, we have the locally free sheaf
\[
\Lambda_{X_0}^i := \Omega^i_{\ensuremath{\mathcal{X}}/ \Delta^1}(\log X_0)|_{X_0}
\]
which is defined in \cite[pp.92]{MR707162}.
It is enough to show $H^0(\Lambda^i_{X_0}) =0$ for $2 \le i \le \dim X-1$ since the dimension $h^0(\Omega^i_{\ensuremath{\mathcal{X}}/ \Delta^1}(\log X_0)|_{X_t})$ is upper semicontinuous in $t \in \Delta^1$.
By applying \cite[Proposition 3.5]{MR707162} to $\Lambda_{X_0}^i$ for $X_0 = X_1 \cup X_2$, we have an exact sequence
\[
0 \rightarrow V_0 \rightarrow \Lambda_{X_0}^i \rightarrow V_1/V_0 \rightarrow 0,
\]
where $V_0$ and $V_1/V_0$ are described as
\[
V_0 \simeq \Ker (\Omega_{X_1}^i \oplus \Omega_{X_2}^i \xrightarrow{(\iota_1^*, -\iota_2^*)} \Omega_{X_{12}}^i), \ \ V_1/V_0 \simeq \Omega_{X_{12}}^{i-1}.
\]
By this and $H^0(X_j, \Omega_{X_j}^i)=0=H^0(X_{12}, \Omega_{X_{12}}^{i-1})$ for $i=2, \ldots, \dim X -1$ and $j=1,2$, we obtain $H^0(V_0) =0$ and $H^0(V_1/V_0) =0$.
Thus we obtain $H^0(\Lambda_{X_0}^i)=0$ for $2 \le i \le \dim X -1$ by the above exact sequence.
\end{proof}
We also see that $\omega_X \simeq \ensuremath{\mathcal{O}}_X$ as in the proof of \cite[Theorem 3.4]{Hashimoto:aa}. Hence we see that $X$ is a Calabi-Yau manifold in our sense.
\vspace{2mm}
\noindent(iii) Note that $\Pic X_0 \simeq H^2(X_0, \ensuremath{\mathbb{Z}})$ and $\Pic X \simeq H^2 (X, \ensuremath{\mathbb{Z}})$ by $H^i(X_0, \ensuremath{\mathcal{O}}_{X_0}) =0$ and $H^i(X, \ensuremath{\mathcal{O}}_X) =0$ for $i=1,2$.
We compute $b_2(X_0) = m+\rho_T +1$ as follows.
We have an exact sequence
\[
0 \rightarrow \Pic X_0 \rightarrow \Pic X_1 \oplus \Pic X_2 \xrightarrow{R} \Pic X_{12},
\]
where $R= (\iota_1^*, -\iota_2^*)$ for the closed immersion $\iota_i \colon X_{12} \hookrightarrow X_i$ for $i=1,2$.
By using the isomorphism
\[
\Pic X_{12} \simeq (\Pic S \oplus \Pic T)/ \ensuremath{\mathbb{Z}}(K_S, -K_T)
\] as in Proposition \ref{prop:SchoenCY3}, we see that the image $\Image R \subset \Pic X_{12}$ is generated by the image of $\Pic T$ and $p_S^*(H), p_S^*(\phi_m^*(H_{S_m}))$.
Thus we see that $$\Image R \simeq \ensuremath{\mathbb{Z}}^{\rho_T+2}.$$
Since $\Pic X_1 \simeq \ensuremath{\mathbb{Z}}^{1+ \rho_T+m+1}= \ensuremath{\mathbb{Z}}^{m+\rho_T+2}$ and $\Pic X_2 \simeq \ensuremath{\mathbb{Z}}^{\rho_T+1}$,
we see that $$\Pic X_0 \simeq \ensuremath{\mathbb{Z}}^{m+\rho_T +1}$$ by the above exact sequence, thus obtain $b_2(X_0) = m+ \rho_T +1$.
Then, by the Clemens map $\gamma \colon X \rightarrow X_0$ (cf. \cite{MR444662}, \cite[Theorem 2.9]{Hashimoto:aa}), we have the exact sequence
\[
\ensuremath{\mathbb{Z}} \simeq H^0(X_0, R^1 \gamma_* \ensuremath{\mathbb{Z}}) \rightarrow H^2(X_0, \ensuremath{\mathbb{Z}}) \rightarrow H^2(X, \ensuremath{\mathbb{Z}}) \rightarrow H^1(X_0, R^1 \gamma_* \ensuremath{\mathbb{Z}}) =0.
\]
Thus we see that $H^2(X, \ensuremath{\mathbb{Z}}) \simeq \ensuremath{\mathbb{Z}}^{m+\rho_T}$ as in \cite[Claim 3.6(ii)]{Hashimoto:aa}.
\end{proof}
The following lemma is useful for seeing the non-projectivity of $X$ and for computing the algebraic dimension of $X$.
\begin{lem}\label{lem:X0linebundle}
Let $X_0=X_0(m)$ be the SNC Calabi-Yau variety as in Example \ref{eg:construction}. Let $N:= \dim X_0$.
Let $\ensuremath{\mathcal{L}}_0 \in \Pic X_0$ be a line bundle such that $\ensuremath{\mathcal{L}}_i:= \ensuremath{\mathcal{L}}_0|_{X_i}$ is effective for $i=1,2$.
We may write
$$
\ensuremath{\mathcal{L}}_1= \mu^* (\ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{P}}^2}(a) \boxtimes \ensuremath{\mathcal{O}}_T(H_1) ) \otimes \ensuremath{\mathcal{O}}_{X_1}(\sum_{j=1}^{m} b_j E_j + c F),$$
$$
\ensuremath{\mathcal{L}}_2= \ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{P}}^2}(a') \boxtimes \ensuremath{\mathcal{O}}_{T}(H_2),
$$
where $E_j:= \mu^{-1}(F_j) \subset X_1$ for $j=1, \ldots , m$ and $F:= \mu^{-1}(\Gamma_m)$ are the exceptional divisors of $\mu \colon X_1 \rightarrow \ensuremath{\mathbb{P}}^2 \times T$ and
$H_1, H_2 \in \Pic T$.
Then we have $a=a'=0$ and $\kappa(\ensuremath{\mathcal{L}}_0) \le N -2$.
\end{lem}
\begin{proof}
Note that $\ensuremath{\mathcal{L}}_1|_{\tilde{D}_1} \simeq \Psi_{m}^* \ensuremath{\mathcal{L}}_2|_{D_2}$, where $\Psi_m \colon \tilde{D}_1 \rightarrow D_2$ is the isomorphism used to construct $X_0$.
We have a natural surjection
$$
\pi_S \colon \Pic \tilde{D}_1 \xrightarrow{\simeq} (\Pic S \oplus \Pic T)/ \ensuremath{\mathbb{Z}}(K_S, -K_T) \xrightarrow{\pr} \Pic S/\ensuremath{\mathbb{Z}}(K_S)
$$
by Proposition \ref{prop:SchoenCY3}(ii), where $\pr$ is the projection.
We see that
\[
\pi_S(\ensuremath{\mathcal{L}}_1|_{\tilde{D}_1}) = [a H + c(3(H+ H'))] = (a+3c)[H] + 3c[H'] \in \Pic S/\ensuremath{\mathbb{Z}} (K_S),
\]
where $[\cdot]$ means the image of $\Pic S \rightarrow \Pic S /\ensuremath{\mathbb{Z}} K_S$ and $H':= \phi_m^*(H_{S_m})$.
We also see that
\[
\pi_S(\Psi_m^* \ensuremath{\mathcal{L}}_2|_{D_2}) = a' [H'].
\]
By comparing the above two terms, we see that $a' = 3c$ and $a+3c =0$ since $[H]$ and $[H']$ are linearly independent.
Since $a, a' \ge 0$, we obtain $a=a'=0$. This implies that $\kappa(\ensuremath{\mathcal{L}}_i) \le \dim T$ for $i=1,2$, thus $\kappa(\ensuremath{\mathcal{L}}_0) \le \dim T=N-2$.
\end{proof}
\begin{prop}
For $m>0$, $X=X(m)$ is not projective.
Moreover, we have $$a(X) =n = \dim X -2$$ for the fiber $X=\ensuremath{\mathcal{X}}_t$ over a very general $t \in \Delta^1$, where $a(X)$ is the algebraic dimension.
\end{prop}
\begin{proof}
Let $\pr_T \colon \ensuremath{\mathbb{P}}^2 \times T \rightarrow T$ be the projection.
We see that, for a very ample $H_T \in \Pic T$,
the line bundles $H_1:= \mu^*(\pr_T^*\ensuremath{\mathcal{O}}_T(H_T)) \in \Pic X_1$ and $H_2 := \pr_T^* \ensuremath{\mathcal{O}}_T(H_T)$ induce a line bundle $H_0 \in \Pic X_0$
such that $H_0|_{X_i} \simeq H_i$. Since we have an isomorphism $\Pic X_0 \simeq \Pic \ensuremath{\mathcal{X}}$ as in \cite{Hashimoto:aa}, $H_0$ induces a line bundle $H_t$ and this induces
a fiber space $X \rightarrow T$. Its general fiber is a K3 surface since the general fiber of $X_0 \rightarrow T$ is an SNC surface which is a union of $\ensuremath{\mathbb{P}}^2$ and
its blow-up at $18$ points. Thus the fiber space $X \rightarrow T$ is a K3 fibration.
We see that there is no line bundle $L_t$ on $X$ such that $\kappa(L_t) \ge \dim T +1$ by the same argument as \cite[Proposition 3.19(iii)]{Hashimoto:aa}.
Indeed, if such a line bundle exists, then there exists $L_0 \in \Pic X_0$ such that $L_0|_{X_i}$ is effective for $i=1,2$ and $\kappa(L_0) \ge \dim T +1$.
This contradicts Lemma \ref{lem:X0linebundle}.
\end{proof}
We can also compute the following topological invariants of $X$.
\begin{prop}\label{prop:pi1Euler}
\begin{enumerate}
\item[(i)] $X$ is simply connected.
\item[(ii)] The topological Euler number of $X$ is
\begin{equation}\label{eq:Euler}
e(X)= (\gamma_m-12) \frac{(-n)^{n+1}+n^2 +2n}{n+1} +24(n+1)(-n)^n,
\end{equation}
where $\gamma_m:= -18(27m^2-2m+5)$.
\end{enumerate}
\end{prop}
\begin{proof}
\noindent(i) We show this by following the proof of \cite[Proposition 3.10]{Hashimoto:aa}.
As in \cite[Proposition 3.10]{Hashimoto:aa}, we see that
\begin{equation}\label{eq:pi1Xformula}
\pi_1(X) \simeq \pi_1(X_1') \ast_{\pi_1(\tilde{X}_{12})} \pi_1(X_2'),
\end{equation}
where $X_i':= X_i \setminus X_{12}$ for $i=1,2$ and $\tilde{X}_{12}:= \gamma^{-1}(X_{12})$ for the Clemens map $\gamma \colon X \rightarrow X_0$.
We see that $\pi_1(X_2') = \{1 \}$ by the Gysin exact sequence
\[
H_2(X_2, \ensuremath{\mathbb{Z}}) \rightarrow H_0(X_{12}, \ensuremath{\mathbb{Z}}) \rightarrow H_1(X_2', \ensuremath{\mathbb{Z}}) \rightarrow H_1(X_2, \ensuremath{\mathbb{Z}}).
\]
Indeed, for a section $C \subset T$ of the fiber space $T \rightarrow \ensuremath{\mathbb{P}}^1$, the class $[\{p\} \times C] \in H_2(X_2, \ensuremath{\mathbb{Z}})$ is sent to a generator of $H_0(X_{12})$. For example, a fiber of $T \rightarrow \ensuremath{\mathbb{P}}^n$ over a point of the blow-up center can be taken as $C$.
We also see that $\pi_1(X_1') = \{ 1\}$ as in \cite{Hashimoto:aa} (In fact, the argument is easier since $\pi_1(X_2')= \{1 \}$).
By these and (\ref{eq:pi1Xformula}), we see that $\pi_1(X) = \{ 1\}$.
\vspace{2mm}
\noindent(ii) As in the proof of \cite[Claim 3.7]{Hashimoto:aa},
we see that $$e(X) = e(X_1) + e(X_2) - 2e(X_{12})$$ by the Mayer-Vietoris exact sequence and the Clemens map.
Note that $T \rightarrow \ensuremath{\mathbb{P}}^1$ has singular fibers with only one node over points $q_1, \ldots , q_{d_n} \in \ensuremath{\mathbb{P}}^1$,
where
$$d_n:= (n+1)n^n$$ is the degree of the discriminant hypersurface in $|\ensuremath{\mathcal{O}}_{\ensuremath{\mathbb{P}}^n}(n+1)|$ (cf. \cite[Lemma 2.1]{MR2569617}).
Note also that a smooth hypersurface $H_{n+1} \subset \ensuremath{\mathbb{P}}^n$ has the Euler number
\[
e(H_{n+1}) = (n+1) + \frac{(-n)^{n+1} -1}{n+1}= \frac{(-n)^{n+1}+n^2 + 2n}{n+1} =: \sigma_n.
\]
Thus we compute
\[
e(T)= e(\ensuremath{\mathbb{P}}^1) e(H_{n+1}) + d_n(-1)^n = 2 \sigma_n + \delta_{n},
\]
where we put $\delta_{n}:= (-1)^{n} d_{n}$.
We compute that $$e(X_2) = e(\ensuremath{\mathbb{P}}^2 \times T) = e(\ensuremath{\mathbb{P}}^2) e(T)=3 (2\sigma_n+\delta_n) = 6\sigma_n + 3\delta_n. $$
To compute $e(X_1)$, note that $X_1 \rightarrow \ensuremath{\mathbb{P}}^2 \times T$ is the blow-up along $F_1, \ldots, F_m$ and (the strict transform of) $\Gamma_m$.
Note that $e(F_i) =0$ since $F_i$ is a product of an elliptic curve and a Calabi-Yau hypersurface.
Thus we see that
$$
e(X_1) = e(\ensuremath{\mathbb{P}}^2 \times T) + e(\Gamma_m) = 3 (2\sigma_n + \delta_n) + e(\Gamma_m).
$$
Note also that $\Gamma_m \rightarrow C_m$ is a Calabi-Yau fibration and the discriminant locus consists of $18 d_n$ points since
$$
C_m \cdot (-d_n K_S) = d_n(-K_S \cdot (3H+3\phi_m^*H_{S_m})) =18 d_n.
$$
Indeed, $\Gamma_m \rightarrow C_m$ is singular exactly over the intersection of $C_m$ with the discriminant locus $D_S \subset S$ of $X_{12} \rightarrow S$,
and $D_S$ consists of the (smooth) fibers of $S \rightarrow \ensuremath{\mathbb{P}}^1$ over the $d_n$ points forming the discriminant locus of $T \rightarrow \ensuremath{\mathbb{P}}^1$. We also check
\[
e(C_m)=-(K_S+C_m)\cdot C_m = -18(27m^2 -2m +5)=: \gamma_m.
\]
Thus we compute
$$e(\Gamma_m) = e(C_m) \cdot e(H_{n+1}) + 18d_n (-1)^n= \gamma_m \sigma_n +18\delta_n$$ and
\[
e(X_1) = (6 \sigma_n + 3 \delta_n) + (\gamma_m \sigma_n +18 \delta_n) = (\gamma_m +6) \sigma_n + 21 \delta_n.
\]
To compute $e(X_{12})$, note that $X_{12} \rightarrow \ensuremath{\mathbb{P}}^1$ has fibers with non-zero Euler number only over the discriminant locus
$p_1, \ldots, p_{12}$ of $S \rightarrow \ensuremath{\mathbb{P}}^1$.
Then we compute
\[
e(X_{12})= 12 (1 \cdot e(H_{n+1})) = 12 \sigma_n.
\]
By these, we obtain
$$e(X) = ((\gamma_m+6)\sigma_n + 21 \delta_n)+ (6 \sigma_n + 3 \delta_n) - 24 \sigma_n = (\gamma_m-12) \sigma_n + 24 \delta_n.$$
\end{proof}
\begin{rem}
The above implies that the Euler number can be arbitrarily negative when $n$ is odd and can be arbitrarily positive when $n$ is even (except $n=2$).
If $n=2$, then we compute $e(X)= 288$. We check that the formula (\ref{eq:Euler}) also holds when $n=1$.
\end{rem}
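\begin{rem}
The Euler number bookkeeping above can likewise be checked mechanically. The following SymPy sketch (again with ad hoc names) recomputes $e(X)$ from the pieces used in the proof of Proposition \ref{prop:pi1Euler}(ii) and confirms the value $e(X)=288$ for $n=2$.
\begin{verbatim}
import sympy as sp

n, m = sp.symbols('n m', integer=True, positive=True)
sigma = ((-n)**(n + 1) + n**2 + 2*n) / (n + 1)  # sigma_n = e(H_{n+1})
delta = (-1)**n * (n + 1) * n**n                # delta_n = (-1)^n d_n
gamma = -18*(27*m**2 - 2*m + 5)                 # gamma_m = e(C_m)

eT     = 2*sigma + delta           # e(T): d_n one-nodal fibers over P^1
eY     = 3*eT                      # e(P^2 x T)
eGamma = gamma*sigma + 18*delta    # e(Gamma_m): 18 d_n one-nodal fibers
eX1    = eY + eGamma               # blow-ups add e(F_i) = 0, e(Gamma_m)
eX2    = eY
eX12   = 12*sigma                  # only the 12 nodal fibers of S count
eX     = sp.expand(eX1 + eX2 - 2*eX12)

assert sp.expand(eX - ((gamma - 12)*sigma + 24*delta)) == 0
assert sp.simplify(eX.subs(n, 2)) == 288        # independent of m
\end{verbatim}
\end{rem}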
\begin{rem}
We see that the Hodge to de Rham spectral sequence on $X$ degenerates at $E_1$ (cf. \cite[Section 4]{MR894379}, \cite[Corollary 11.24]{MR2393625}).
It would be interesting to know whether our examples $X(m)$ satisfy the $\partial \bar{\partial}$-lemma and the hard Lefschetz property (cf.\ \cite{MR4085665}, \cite{MR3784517}). We hope to investigate these questions elsewhere.
\end{rem}
\section*{Acknowledgement}
The author is grateful to Kenji Hashimoto for useful discussions.
He is also grateful to Nam-Hoon Lee for sending his manuscript and useful information.
He thanks Jim Bryan, Robert Friedman and the anonymous referees for valuable comments.
This work was partially supported by JSPS KAKENHI Grant Numbers JP17H06127, JP19K14509.
\bibliographystyle{amsalpha}
\section*{Introduction}
\subsection*{History}
A set $X$ of reals\footnote{In this paper, we use $2^\omega$ as
the set of reals. ($\omega=\{0,1,2,\ldots\}$.)
By well-known results both the definition and the theorem
also work for the unit interval~$ [0,1]$ or the torus $\mathbb R/\mathbb Z$.
Occasionally we also write ``$x$ is a real'' for ``$x\in \omega^\omega$''.}
is called ``strong measure zero'' (smz), if for
all functions $f:\omega\to\omega$ there are intervals $I_n$ of measure $\leq
1/f(n)$ covering $X$.
Obviously, a smz set is a null set (i.e., has Lebesgue measure zero), and it
is easy to see that the family of smz sets forms a $\sigma$-ideal and that
perfect sets (and therefore uncountable Borel or analytic sets) are not smz.
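For instance, every countable set $X=\{x_n:\, n\in\omega\}$ is smz: given $f:\omega\to\omega$,
choose $k_n$ with $2^{-k_n}\le 1/f(n)$ and let $I_n$ be the basic clopen set $[x_n\mathord\restriction k_n]$;
then each $I_n$ has measure $\le 1/f(n)$ and the sets $I_n$ cover~$X$.
So BC says that these trivial examples are the only smz sets.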
At the beginning of the 20th century, Borel~\cite[p.~123]{MR1504785}
conjectured:
\proofclaimnl{Every smz set is countable.}
This statement is known as the ``Borel Conjecture'' (BC). In the 1970s it was proved
that BC is \emph{independent}, i.e., neither provable nor refutable.
Let us very briefly comment on the notion of independence: A sentence $\varphi$
is called independent of a set $T$ of axioms, if neither $\varphi$ nor $\lnot\varphi$
follows from $T$. (As a trivial example, $(\forall x)(\forall y) x\cdot y= y\cdot x$ is
independent of the group axioms.) The set theoretic (first order) axiom system
ZFC (Zermelo Fraenkel with the axiom of choice) is considered to be the
standard axiomatization of all of mathematics: A mathematical proof is
generally accepted as valid iff it can be formalized in ZFC. Therefore we just
say ``$\varphi$ is independent'' if $\varphi$ is independent of ZFC.
Several mathematical statements are independent, the earliest and most
prominent example is Hilbert's first problem, the Continuum Hypothesis
(CH).
BC is independent as well: Sierpi\'nski~\cite{sierpinskiCH} showed
that CH implies $\lnot$BC (and, since G\"odel showed the consistency of CH, this
gives us the consistency of $\lnot$BC). Using the method of forcing,
Laver~\cite{MR0422027} showed that BC is consistent.
Galvin, Mycielski and Solovay~\cite{GMS} proved the following conjecture
of Prikry:
\proofclaimnl{$X \subseteq 2^\omega $ is smz
if and only if every comeager (dense $G_\delta$) set
contains a translate of $X$.}
Prikry also defined the following dual notion:
\proofclaimnl{$X \subseteq 2^\omega$ is called ``strongly meager'' (sm) if every set
of Lebesgue measure 1 contains a translate of $X$.}
The dual Borel Conjecture (dBC) states:
\proofclaimnl{Every sm set is countable.}
Prikry noted that CH implies $\lnot$dBC and conjectured dBC to be consistent
(and therefore independent), which was later proved by
Carlson~\cite{MR1139474}.
Numerous additional results regarding BC and dBC have been proved: The
consistency of variants of BC or of dBC, the consistency of BC or dBC together
with certain assumptions on cardinal characteristics, etc.
See~\cite[Ch.~8]{MR1350295} for several of these results. In this paper, we
prove the consistency (and therefore independence) of BC+dBC (i.e.,
consistently BC and dBC hold simultaneously).
\subsection*{The problem}
The obvious first attempt to force BC+dBC is to somehow combine Laver's and
Carlson's constructions. However, there are strong obstacles:
Laver's construction is a countable support iteration of Laver forcing. The
crucial points are:
\begin{itemize}
\item Adding a Laver real makes every old uncountable set $X$ non-smz.
\item And this set $X$ remains non-smz after another forcing $P$, provided
that $P$ has the ``Laver property''.
\end{itemize}
So we can start with CH and use a countable support iteration of Laver forcing of
length $\om2$. In the final model, every set $X$ of reals of size~$\al1$
already appeared at some stage $\alpha<\om2$ of the iteration; the next Laver real
makes $X$ non-smz, and the rest of the iteration (as it is a countable support
iteration of proper forcings with the Laver property) has the Laver property,
and therefore $X$ is still non-smz in the final model.
Carlson's construction on the other hand adds $\omega_2$ many
Cohen reals in a finite support iteration (or
equivalently: finite support product). The crucial points are:
\begin{itemize}
\item A Cohen real makes every old uncountable set $X$ non-sm.
\item And this set $X$ remains non-sm after another forcing $P$, provided
that $P$ has precaliber~$\al1$.
\end{itemize}
So we can start with CH, and use more or less the same argument as above:
Assume that $X$ appears at $\alpha<\om2$. Then the next Cohen makes $X$ non-sm. It
is enough to show that $X$ remains non-sm at all subsequent stages
$\beta<\om2$. This is guaranteed by the fact that a finite support iteration
of Cohen reals of length~$<\om2$ has precaliber~$\al1$.
So it is unclear how to combine the two proofs: A Cohen real makes all old sets
smz, and it is easy to see that whenever we add Cohen reals cofinally often in
an iteration of length, say, $\om2$, all sets of any intermediate extension
will be smz, thus violating BC. So we have to avoid Cohen
reals,\footnote{An iteration that forces dBC without adding Cohen reals was given in
\cite{MR2767969}, using non-Cohen oracle-cc.}
which also
implies that we cannot use finite support limits in our iterations.
So we have
a problem even if we find a replacement for Cohen forcing in Carlson's proof
that makes all old uncountable sets $X$ non-sm and that does not add Cohen
reals: Since we cannot use finite support, it seems hopeless to get
precaliber~$\aleph_1$,
an essential requirement to keep $X$ non-sm.
Note that it is the \emph{proofs} of BC and dBC that are seemingly
irreconcilable; this is not clear for the models. Of course Carlson's model,
i.e., the Cohen model, cannot satisfy BC, but it is not clear whether maybe
already the Laver model could satisfy dBC. (It is even still open whether a
single Laver forcing makes every old uncountable set non-sm.) Actually,
Bartoszy\'nski and Shelah~\cite{MR2020043} proved that the Laver model does
satisfy the following weaker variant of dBC (note that the continuum has size
$\al2$ in the Laver model):
\begin{quote}
Every sm set has size less than the continuum.
\end{quote}
In any case, it turns out that one \emph{can} reconcile Laver's and Carlson's proof,
by ``mixing'' them ``generically'', resulting in the following theorem:
\begin{Thmstar}
If ZFC is consistent, then ZFC+BC+dBC is consistent.
\end{Thmstar}
\subsection*{Prerequisites}
To understand anything of this paper, the reader
\begin{itemize}
\item should have some experience with finite and countable support
iteration, proper forcing, $\al2$-cc, $\sigma$-closed, etc.,
\item should know what a quotient forcing is,
\item should have seen some preservation theorem for proper countable support iteration,
\item should have seen some tree forcings (such as Laver forcing).
\end{itemize}
To understand everything, additionally the following is required:
\begin{itemize}
\item The ``case A'' preservation theorem from~\cite{MR1623206},
more specifically we build on
the proof
of~\cite{MR1234283} (or~\cite{MR2214624}).
\item In particular, some familiarity with the property ``preservation of randoms''
is recommended. We will use the fact that random and Laver forcing
have this property.
\item We make some claims about (a rather special case of)
ord-transitive models in Section~\ref{subsec:ordtrans}.
The readers can either believe these claims, or check them themselves
(by some rather straightforward proofs),
or look up the proofs (of more general
settings) in~\cite{MR2115943} or~\cite{kellnernep}.
\end{itemize}
{}From the theory of strong measure zero and strongly meager, we only need the
following two results (which are essential for our proofs of BC and dBC,
respectively):
\begin{itemize}
\item Pawlikowski's result from~\cite{MR1380640} (which we quote as Theorem~\ref{thm:pawlikowski} below), and
\item Theorem 8 of Bartoszy\'nski and Shelah's~\cite{MR2767969} (which we
quote as Lemma~\ref{lem:tomek}).
\end{itemize}
We do not need any other results of Bartoszy\'nski and Shelah's
paper~\cite{MR2767969}; in particular we do not use the notion of non-Cohen
oracle-cc (introduced in~\cite{MR2243849}); and the reader does not have to know the original proofs of Con(BC)
and Con(dBC), by Laver and Carlson, respectively.
The third author claims that our construction is more or less the same as a
non-Cohen oracle-cc construction, and that the
extended version presented in~\cite{MR2610747} is even closer to our
preparatory forcing.
\subsection*{Notation and some basic facts on forcing, strongly meager (sm) and
strong measure zero (smz) sets}
We call a lemma ``Fact'' if we think that no proof is necessary --- either because
it is trivial, or because it is well known (even without a reference),
or because we give an explicit reference to the literature.
Stronger conditions in forcing notions are smaller, i.e., $q\leq p$ means that
$q$ is stronger than $p$.
Let $P\subseteq Q$ be forcing notions. (As usual, we abuse notation by not
distinguishing between the underlying set and the quasiorder on it.)
\begin{itemize}
\item For $p_1,p_2\in P$ we write $p_1\perp_P p_2$ for
``$p_1$ and $p_2$ are incompatible''. Otherwise we write $p_1 \parallel_P p_2$.
(We may just write $\perp$ or $\parallel$ if $P$ is understood.)
\item\label{def:starorder} $q\leq^* p$ (or: $q\leq^*_P p$) means that $q$ forces that $p$ is in the generic
filter, or equivalently that
every $q'\leq q$ is compatible with $p$.
And $q=^* p$ means $q\leq^* p\ \wedge\ p\leq^* q$.
\item\label{def:separative} $P$ is separative, if $\leq$ is the same as $\leq^*$,
or equivalently, if for all $q\leq p$ with $q\neq p$ there is an $r\leq p$
incompatible with $q$. Given any $P$, we can define its ``separative
quotient'' $Q$ by first replacing (in $P$) $\leq$ by $\leq^*$ and then
identifying elements $p,q$ whenever $p=^*q$. Then $Q$ is separative
and forcing equivalent to $P$.
\item \qemph{$P$ is a subforcing of $Q$} means that the relation $\le_P$
is the restriction of $\le_Q$ to~$P$.
\item \qemph{$P$ is an incompatibility-preserving subforcing of $Q$}
means that $P$ is a subforcing of
$Q$ and that
$p_1\perp_P p_2$ iff $p_1\perp_Q p_2$ for all $p_1,p_2\in P$.
\end{itemize}
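As a purely illustrative aside, the definitions of $\leq^*$ and of the separative quotient can be tested mechanically on finite posets (which are of course trivial as forcing notions). The following Python sketch, with ad hoc names, checks a three-element poset in which two incomparable elements $a,b$ satisfy $a=^*b$, so that the separative quotient identifies them:
\begin{verbatim}
def leq(P, q, p):                # q <= p in the relation P
    return (q, p) in P

def compat(P, E, p, q):          # p and q have a common lower bound
    return any(leq(P, r, p) and leq(P, r, q) for r in E)

def star_leq(P, E, q, p):        # q <=* p
    return all(compat(P, E, r, p) for r in E if leq(P, r, q))

E = {'a', 'b', 'c'}              # c <= a, c <= b; a, b incomparable
P = {(x, x) for x in E} | {('c', 'a'), ('c', 'b')}

assert star_leq(P, E, 'a', 'b') and star_leq(P, E, 'b', 'a')  # a =* b
assert not leq(P, 'a', 'b')      # hence P is not separative; the
                                 # separative quotient identifies a, b
\end{verbatim}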
Let additionally $M$ be a countable
transitive\footnote{We will also use so-called
ord-transitive models, as defined in Section~\ref{subsec:ordtrans}.}
model (of a sufficiently large subset of ZFC) containing~$P$.
\begin{itemize}
\item ``$P$ is an $M$-complete subforcing of
$Q$'' (or: $P\lessdot_M Q$)
means that $P$ is a subforcing of $Q$ and: if $A\subseteq P$ is in
$M$ a maximal antichain, then it is a maximal antichain of~$Q$ as
well. (Or equivalently: $P$ is an incompatibility-preserving
subforcing of $Q$ and every predense subset of
$P$~in $M$ is predense in~$Q$.)
Note that this means that every $Q$-generic filter $G$ over~$V$
induces a $P$-generic filter over~$M$, namely $G^M\coloneqq G\cap P$
(i.e., every maximal antichain of
$P$ in~$M$ meets $G\cap P$ in exactly one point).
In particular, we can interpret a $P$-name $\tau$ in~$M$ as a $Q$-name.
More exactly, there is a $Q$-name $\tau'$ such that $\tau'[G]=\tau[G^M]$
for all $Q$-generic filters $G$. We will usually just identify $\tau$
and $\tau'$.
\item Analogously, if $P\in M$ and $i:P\to Q$ is a function, then
$i$ is called an $M$-complete embedding if it preserves $\leq$
(or at least $\leq^*$)
and $\perp$ and moreover: If $A\in M$ is predense in~$P$, then
$i[A]$ is predense in $Q$.
\end{itemize}
There are several possible characterizations of sm (``strongly
meager'') and smz (``strong measure zero'') sets; we will use the following
as definitions:
A set $X$ is not sm if there is a measure $1$ set into
which $X$ cannot be translated; i.e., if there is a null set $Z$
such that $(X+t)\cap Z\neq\emptyset$ for all reals $t$, or, in other words,
$Z+X=2^\omega$. To summarize:
\proofclaim{eq:notsm}{$X$ is {\em not} sm iff there is a
Lebesgue null set $Z$ such that $Z+X=2^\omega$.}
We will call such a $Z$ a ``witness'' for the fact that $X$ is not sm (or say
that $Z$ witnesses that $X$ is not sm).
The following theorem of Pawlikowski~\cite{MR1380640} is central for our
proof\footnote{We thank Tomek Bartoszy\'nski for pointing out Pawlikowski's
result to us, and for suggesting that it might be useful for our proof.}
that BC holds in our model:
\begin{Thm}\label{thm:pawlikowski}
$X\subseteq 2^\omega$ is smz iff $X+F$ is null for every closed null set $F$.
\\
Moreover, for every dense $G_\delta$ set $H$ we can \emph{construct} (in
an absolute way) a closed null set $F$ such that for every $X
\subseteq 2^\omega$ with $X+F$ null there is $t\in 2^\omega$ with
$t+X\subseteq H$.
\end{Thm}
In particular, we get:
\proofclaim{eq:notsmz}{$X$ is \emph{not} smz iff there is a closed null
set $F$ such that $X+F$ has positive outer Lebesgue measure.}
Again, we will say that the closed null set $F$ ``witnesses'' that
$X$ is not smz (or call $F$ a witness for this fact).
\subsection*{Annotated contents}
\begin{list}{}{\setlength{\leftmargin}{0.5cm}\addtolength{\leftmargin}{\labelwidth}}
\item[Section~\ref{sec:ultralaver}, p. \pageref{sec:ultralaver}:]
We introduce the family of ultralaver forcing notions
and prove some properties.
\item[Section~\ref{sec:janus}, p. \pageref{sec:janus}:]
We introduce the family of Janus forcing notions
and prove some properties.
\item[Section~\ref{sec:iterations}, p. \pageref{sec:iterations}:]
We define ord-transitive models and mention some basic properties.
We define the ``almost finite'' and ``almost countable'' support
iteration over a model.
We show that in many respects they behave like finite
and countable support, respectively.
\item[Section~\ref{sec:construction}, p. \pageref{sec:construction}:]
We introduce the
preparatory forcing notion $\mathbb{R}$ which adds a generic forcing
iteration~$\bar \mathbf{P}$.
\item[Section~\ref{sec:proof}, p. \pageref{sec:proof}:] Putting
everything together, we show that $\mathbb{R}*\mathbf{P}_{\om2}$
forces BC+dBC, i.e., that an uncountable $X$ is neither
smz nor sm. We show this under the assumption $X\in V$,
and then introduce a
factorization
of $\mathbb{R}*\bar \mathbf{P}$ that this assumption does not result in loss
of generality.
\item[Section~\ref{sec:alternativedefs}, p. \pageref{sec:alternativedefs}:] We briefly comment on alternative ways some notions could be defined.
\end{list}
An informal overview of the proof, including two illustrations, can be found
at~\url{http://arxiv.org/abs/1112.4424/}.
\section{Ultralaver forcing}\label{sec:ultralaver}
In this section, we define the family of \emph{ultralaver forcings} $\mathbb{L}_{\bar
D}$, variants of Laver forcing which depend on a system $\bar D$ of
ultrafilters.
In the rest of the paper, we will use the following properties of $\mathbb{L}_{\bar
D}$.
(And we will use \emph{only} these properties. So readers who are willing to
take these properties for granted could skip to Section~\ref{sec:janus}.)
\begin{enumerate}
\item
$\mathbb{L}_{\bar D}$ is $\sigma$-centered, hence ccc.\label{item:sigmacentered}
\\
(This is Lemma~\ref{lem:newscentered}.)
\item $\mathbb{L}_{\bar D}$ is separative. \\
(This is Lemma~\ref{lem:LDMsep}.)
\item\label{item:absolutepositive}
\emph{Ultralaver kills smz:}
There is a canonical $\mathbb{L}_{\bar D}$-name $\bar{\n\ell}$ for a
fast growing real in~$\omega^\omega$ called the ultralaver real. From this
real, we can define (in an absolute way) a closed null set $F$
such that
$X+F$ is positive for all uncountable $X$ in~$V$ (and therefore
$F$ witnesses that $X$ is not smz, according to Theorem~\ref{thm:pawlikowski}).
\\
(This is Corollary~\ref{cor:absolutepositive}.)
\item
Whenever $X$ is uncountable, then $\mathbb{L}_{\bar D} $ forces that
$X$ is not ``thin''.
\\
(This is Corollary~\ref{cor:LDnotthin}.)
\item
If $(M,\in)$ is a countable model of ZFC$^*$ and if $\mathbb{L}_{\bar D^M}$ is an
ultralaver forcing in $M$,
then for any ultrafilter system $\bar D$ extending $\bar D^M$,
$\mathbb{L}_{\bar D^M} $ is an $M$-complete subforcing of the ultralaver forcing
$\mathbb{L}_{\bar D}$.
\\
(This is Lemma~\ref{lem:LDMcomplete}.)
\\
Moreover, the real $\bar{\n\ell}$ of
item~(\ref{item:absolutepositive}) is so ``canonical'' that we get:
If (in $M$) $\bar{\n\ell}^M$ is the $\mathbb{L}_{\bar D^M}$-name for the
$\mathbb{L}_{\bar D^M}$-generic real,
and if (in $V$) $\bar{\n\ell}$ is the $\mathbb{L}_{\bar D}$-name for the
$\mathbb{L}_{\bar D}$-generic real, and if $H$ is $\mathbb{L}_{\bar D}$-generic over
$V$ and thus $H^M\coloneqq H\cap \mathbb{L}_{\bar D^M}$ is the induced
$\mathbb{L}_{\bar D^M}$-generic filter over $M$, then
$\bar{\n\ell}[H]$ is equal to
$ \bar{\n \ell}^M[H^M]$.
\\
Since the closed null set $F$
is constructed from $\bar{\n\ell}$
in an absolute way, the same holds for $F$, i.e., the
Borel codes $F[H]$ and $F[H^M]$ are the same.
\item
Moreover, given $M$ and $\mathbb{L}_{\bar D^M}$ as above, and a random real
$r$ over~$M$, we can choose $\bar D$ extending $\bar D^M$
such that $\mathbb{L}_{\bar D}$
forces that randomness of~$r$ is
preserved (in a strong way that can be preserved in a
countable support iteration).
\\
(This is Lemma~\ref{lem:extendLDtopreserverandom}.)
\end{enumerate}
\subsection{Definition of ultralaver}
\begin{Notation} We use the following fairly standard notation:
A \emph{tree} is a nonempty
set $p \subseteq \omega^{<\omega}$ which is closed under initial segments
and has no maximal elements.\footnote{Except for the
proof of Lemma~\ref{lem:LDMcomplete},
where we also allow trees with maximal elements, and even empty trees.}
The elements (``nodes'') of a
tree are partially ordered by $\subseteq$.
For each sequence $s\in \omega^{<\omega}$ we write $\lh(s)$ for the
length of $s$.
For any tree $p \subseteq \omega^{<\omega}$ and any $s\in p$ we write
$\suc_p(s)$ for one of the following two sets:
\[ \{ k\in \omega: s^\frown k \in p \} \text{ \ \ or \ \ } \{ t\in p:
(\exists k\in \omega)\;\, t=s^\frown k \} \]
and we rely on the context to help the reader decide which set we
mean.
A \emph{branch} of $p$ is either of the following:
\begin{itemize}
\item A function $f:\omega\to \omega$ with $f\mathord\restriction n\in p$ for all
$n\in \omega$.
\item A maximal chain in the partial order $(p,\subseteq)$. (As
our trees do not have maximal elements, each
such chain $C$ determines a branch $\bigcup C$ in the first sense,
and conversely.)
\end{itemize}
We write $[p]$ for the set of all branches of~$p$.
For any tree $p\subseteq
\omega^{<\omega}$ and any $s\in p$ we write $p^{[s]}$ for the set
$\{t\in p: t \supseteq s \text{ or } t \subseteq s\}$, and we write
$[s]$ for either of the following sets:
\[ \{ t\in p: s \subseteq t \} \text{ \ \ or \ \ } \{ x \in [p]: s
\subseteq x \}. \]
The stem of a tree $p$ is the
shortest $s\in p $ with $|\suc_p(s)|>1$. (The trees we
consider will never be branches, i.e., will always have finite stems.)
\end{Notation}
\begin{Def}\label{def:LD}
\begin{itemize}
\item
For trees $q,p$ we write $q\le p$ if $q \subseteq p$ (``$q$ is stronger
than~$p$''), and
we say that \qemph{$q$ is a pure extension of~$p$} ($q\mathrel{\le_0} p$) if
$q\le p$ and $\stem(q)=\stem(p)$.
\item
A filter system $\bar D$ is a family $(D_s)_{s\in
\omega^{<\omega}}$ of filters on~$\omega$. (All our filters will
contain the Fr\'echet filter of cofinite sets.) We write $D_s^+$
for the collection of $D_s$-positive sets (i.e., sets whose complement
is not in $D_s$).
\item We define $\mathbb{L}_{\bar D} $
to be the set of all trees $p$ such that
$\suc_p(t)\in D_t^+$ for all $t\in p$ above the stem.
\item
The generic filter is determined by the generic branch $ \bar\ell
= (\ell_i)_{i\in \omega}\in \omega^\omega$, called the \emph{generic real}:
$\{\bar\ell\} = \bigcap_{p\in G} [p]$ or equivalently, $ \bar\ell =
\bigcup_{p\in G} \stem(p)$.
\item An ultrafilter system is a filter system consisting of
ultrafilters. (Since all our filters contain the Fr\'echet
filter, we only consider nonprincipal ultrafilters.)
\item
An \emph{ultralaver forcing} is a forcing $\mathbb{L}_{\bar D}$ defined from an
ultrafilter system. The generic real for an ultralaver forcing is
also called the \emph{ultralaver real}.
\end{itemize}
\end{Def}
Recall that a forcing notion $(P,\le)$ is \emph{$\sigma$-centered} if
$P = \bigcup_n P_n$, where for all $n,k\in \omega$ and for all
$p_1,\ldots, p_k\in P_n$ there is $q\le p_1,\ldots, p_k$.
\begin{Lem}\label{lem:newscentered}
All ultralaver forcings $\mathbb{L}_{\bar D}$ are $\sigma$-centered (hence ccc).
\end{Lem}
\begin{proof}
Conditions with the same stem are centered: since each $D_t$ is an
ultrafilter, $D_t^+=D_t$ is closed under finite intersections, so finitely
many conditions with common stem~$s$ have their intersection as a common
lower bound (again a condition with stem~$s$). As there are only countably
many possible stems, $\mathbb{L}_{\bar D}$ is $\sigma$-centered.
\end{proof}
\begin{Lem}\label{lem:LDMsep}
$\mathbb{L}_{\bar D}$ is separative.\footnote{See page~\pageref{def:separative} for
the definition.}
\end{Lem}
\begin{proof} If $q\le p$, and $q\not=p$, then there is $s\in p\setminus q$.
Now $p^{[s]} \perp q$.
\end{proof}
If each $D_s$ is the Fr\'echet filter, then $\mathbb{L}_{\bar D}$ is Laver forcing
(often just written $\mathbb{L}$).
\subsection{$M$-complete embeddings}
Note that for all ultrafilter systems
$\bar D$ we have:
\proofclaim{eq:compatible}{
Two conditions in $\mathbb{L}_{\bar D}$ are compatible
if and only if their stems are comparable and moreover, the longer
stem is an element of the condition with the shorter stem.
}
\begin{Lem}\label{lem:LDMcomplete}
Let $M$ be countable.\footnote{Here,
we can assume that $M$ is a
countable transitive model of a sufficiently large finite
subset ZFC$^*$ of ZFC. Later, we will also use ord-transitive models
instead of transitive ones, which does not make any difference
as far as properties of $\mathbb{L}_{\bar D}$ are concerned, as our arguments
take place in transitive parts of such models.}
In~$M$, let $\mathbb{L}_{\bar D^M}$ be an ultralaver forcing. Let $\bar D$
be (in $V$) a filter system extending\footnote{I.e.,
$D_s^M \subseteq D_s$ for all $s\in \omega^{<\omega}$.}
$\bar D^M$.
Then $\mathbb{L}_{\bar D^M} $ is an $M$-complete subforcing of $\mathbb{L}_{\bar D}$.
\end{Lem}
\begin{proof}
For any tree\footnote{Here we also allow empty trees, and trees
with maximal nodes.}~$T$, any filter system
$\bar E = (E_s)_{s\in \omega^{<\omega}}$,
and any ${s_0}\in T$ we define a sequence
$(T_{\bar E,{s_0}}^\alpha)_{\alpha\in \omega_1}$
of ``derivatives''
(where we may abbreviate $T_{\bar E,{s_0}}^\alpha$ to $T^\alpha$)
as follows:
\begin{itemize}
\item $T^0\coloneqq T^{[{s_0}]}$.
\item Given $T^\alpha$, we let
$T^{\alpha+1}\coloneqq T^\alpha \setminus
\bigcup \{ [s] : s\in T^\alpha , {s_0}\subseteq s,
\suc_{T^\alpha}(s)\notin E_s^+ \}$,
where $[s]\coloneqq \{t: s\subseteq t\}$.
\item For limit ordinals $\delta>0$
we let $T^\delta\coloneqq \bigcap_{\alpha<\delta} T^\alpha$.
\end{itemize}
Then we have
\begin{itemize}
\item [(a)] Each $T^\alpha$ is closed under initial segments.
Also: $\alpha < \beta$ implies $ T^\alpha \supseteq T^\beta$.
\item [(b)] There is an $\alpha_0<\omega_1$ such that $T^{\alpha_0} =
T^{\alpha_0+1} = T^\beta$ for all $\beta>\alpha_0$. We write
$T^\infty$ or $T^\infty_{\bar E,{s_0}}$ for $T^{\alpha_0}$.
\item[(c)] If ${s_0}\in T_{\bar E,{s_0}}^\infty$, then $T_{\bar
E,{s_0}}^\infty\in \mathbb{L}_{\bar E}$ with stem~${s_0}$. \\
Conversely, if $\stem(T)={s_0}$, and $T\in \mathbb{L}_{\bar E}$, then
$T^\infty=T$.
\item[(d)] If $T$ contains a tree $q\in \mathbb{L}_{\bar E}$ with
$\stem(q)={s_0}$,
then $T^\infty$ contains $q^\infty=q$,
so in particular ${s_0}\in T^\infty$.
\item[(e)] Thus: $T$ contains a condition in $\mathbb{L}_{\bar E}$ with stem ${s_0}$
iff ${s_0}\in T^\infty_{\bar E,{s_0}}$.
\item[(f)] The computation of $T^\infty$ is absolute between any two models
containing $T$ and $\bar E$. (In particular, any transitive ZFC$^*$-model
containing $T$ and $\bar E$ will also contain
$\alpha_0$.)
\item[(g)] Moreover: Let $T\in M$, $\bar E\in M$, and let $\bar E'$ be a filter system
extending $\bar E$ such that for all ${s_0}$ and all
$A\in {\mathscr P}(\omega)\cap M$ we have: $A\in (E_{s_0})^+$ iff $A\in
(E_{s_0}')^+$.
(In particular, this will be true for any $\bar E'$ extending $\bar E$,
provided that each $E_{s_0}$ is an
$M$-ultrafilter.)
\\
Then
for each $\alpha\in M$ we have $T^\alpha_{\bar E,{s_0}}= T^\alpha_{\bar
E',{s_0}}$ (and hence $T^\alpha_{\bar E',{s_0}}\in M$).
(Proved by induction on~$\alpha$.)
\end{itemize}
Now let $A = (p_i:i\in I)\in M$ be a maximal antichain in
$\mathbb{L}_{\bar D^M}$, and assume (in $V$) that $q\in \mathbb{L}_{\bar D}$. Let
${s_0}\coloneqq \stem(q)$.
We will show that $q$ is compatible with some~$p_i$ (in $\mathbb{L}_{\bar D}$).
This is clear
if there is some $i$ with ${s_0}\in p_i$ and $\stem(p_i)\subseteq
{s_0}$, by~\eqref{eq:compatible}. (In this case, $p_i \cap q$
is a condition in $\mathbb{L}_{\bar D}$ with stem $s_0$.)
So for the rest of the proof
we assume that this is not the case, i.e.:
\proofclaim{eq:not.the.case}{
There is no $i$ with $s_0 \in p_i $ and $\stem(p_i)\subseteq s_0$.
}
Let $J\coloneqq \{ i\in I: {s_0}
\subseteq \stem(p_i)\}$. We claim that there is $j\in J$ with
$\stem(p_j)\in q$ (which as above implies that $q$ and $p_j$ are compatible).
Assume towards a contradiction that this is not the case.
Then $q$ is contained in the following tree $T$:
\begin{align}\label{def:T}
T \coloneqq
(\omega^{<\omega})^{[{{s_0}}]}\setminus
\bigcup _{j\in J} [\stem(p_j)].
\end{align}
Note that $T\in M$. In $V$ we have:
\proofclaim{eq:T.contains.q}{ The tree $T$ contains a condition
$q$ with stem ${s_0}$.}
So by (e) (applied in $V$), followed by (g), and again by (e) (now in $M$) we get:
\proofclaim{eq:T.contains.p}{
The tree $T$ also
contains a condition $p\in M$ with stem ${s_0}$.}
Now $p$ has to be
compatible with some~$p_i$. The sequences ${s_0}=\stem(p)$ and
$\stem(p_i)$ have to be comparable, so by~\eqref{eq:compatible}
there are two possibilities:
\begin{enumerate}
\item $\stem(p_i)\subseteq \stem(p) = s_0 \in p_i$. We have excluded this
case in our assumption \eqref{eq:not.the.case}.
\item $s_0 = \stem(p) \subseteq \stem(p_i)\in p$. So $i\in J$.
By construction of~$T$ (see~\eqref{def:T}),
we conclude $\stem(p_i)\notin T$, contradicting
$\stem(p_i)\in p\subseteq T$ (see~\eqref{eq:T.contains.p}).
\qedhere
\end{enumerate}
\end{proof}
\subsection{Ultralaver kills strong measure zero}
The following lemma appears already in \cite[Theorem 9]{MR942525}. We will give a
proof below in Lemma~\ref{lem:pure}.
\begin{Lem}\label{lem:pure.finite}
If $A$ is a finite set, $\n \alpha$ an $\mathbb{L}_{\bar D}$-name, $p\in
\mathbb{L}_{\bar D}$, and $p\Vdash\n \alpha\in A$, then there is
$\beta\in A$ and a pure extension $q\mathrel{\le_0} p $ such that $q\Vdash
\n \alpha=\beta$.
\end{Lem}
\begin{Def}
Let $\bar\ell$ be an increasing sequence of natural numbers.
We say that $X\subseteq 2^\omega$ is
\emph{smz with respect to~$\bar\ell$}, if there
exists a sequence $(I_k)_{k\in\omega}$ of basic intervals
of $2^\omega$ of measure $\leq 2^{-\ell_k}$
(i.e., each $I_k$ is of the form $[s_k]$ for some
$s_k\in 2^{\ell_k }$) such that
$X\subseteq\bigcap_{m\in \omega} \bigcup_{k\ge m} I_k$.
\end{Def}
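The following well-known observation is recorded only for orientation: every
countable set $X=\{x_n:\, n\in\omega\}\subseteq 2^\omega$ is smz with respect to
every~$\bar\ell$. Indeed, fix a surjection $k\mapsto n_k$ from $\omega$ onto
$\omega$ with infinite fibers and set
\[
I_k \coloneqq [x_{n_k}\mathord\restriction \ell_k].
\]
Then each $I_k$ has measure $2^{-\ell_k}$, and each $x_n$ lies in $I_k$ for the
infinitely many $k$ with $n_k=n$, so $X\subseteq \bigcap_{m\in\omega}\bigcup_{k\ge m} I_k$.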
\begin{Rem}
It is well known and easy to see that the properties
\begin{itemize}
\item For all $\bar\ell$ there exists
a sequence $(I_k)_{k\in\omega}$ of basic intervals
of $2^\omega$ of measure $\leq 2^{-\ell_k}$
such that $X\subseteq \bigcup_{k\in\omega} I_k$.
\item For all $\bar\ell$ there exists
a sequence $(I_k)_{k\in\omega}$ of basic intervals
of $2^\omega$ of measure $\leq 2^{-\ell_k}$
such that $X\subseteq\bigcap_{m\in \omega} \bigcup_{k\ge m } I_k$.
\end{itemize}
are equivalent. (For the nontrivial implication, partition $\omega$ into
infinitely many infinite sets and apply the first property to the
corresponding subsequences of~$\bar\ell$.)
Hence, a set $X$ is smz iff $X$ is smz with respect to all
$\bar\ell\in \omega^\omega$.
\end{Rem}
The following lemma is a variant of the corresponding lemma
(and proof)
for Laver forcing (see for example \cite[Lemma~28.20]{MR1940513}):
Ultralaver makes old uncountable sets non-smz.
\begin{Lem}\label{lem:LDdestroysSMZ}
Let $\bar D$ be a system of ultrafilters, and let $\bar{\n\ell}$ be the
$\mathbb{L}_{\bar D}$-name for the ultralaver real. Then each uncountable set $X \in V$ is forced to be non-smz (witnessed by the ultralaver real $\bar{\n\ell}$).
More precisely, the following holds:
\begin{equation}\label{eq:my_non_smz}
\Vdash_{\mathbb{L}_{\bar D}} \forall X \in V \cap [2^\omega]^{\aleph_1}\;\; \forall (x_k)_{ k \in \omega} \subseteq 2^\omega \;\; X \not\subseteq \bigcap_{m \in \omega} \bigcup_{k \geq m} [x_k \mathord\restriction \n\ell_k].
\end{equation}
\end{Lem}
We first give two technical lemmas:
\begin{Lem}\label{lem:first_technical}
Let $p \in \mathbb{L}_{\bar D}$ with stem $s \in \omega^{<\omega}$, and let $\n x$ be an $\mathbb{L}_{\bar D}$-name for a real in $2^\omega$. Then there exists a pure extension $q \leq_0 p$ and a real $\tau \in 2^\omega$ such that for every $n \in \omega$,
\begin{equation}\label{eq:first_technical}
\{ i \in\suc_q(s):\; q^{[s^\frown i]} \Vdash \n x \mathord\restriction n = \tau \mathord\restriction n \} \in D_s.
\end{equation}
\end{Lem}
\begin{proof}
For each $i \in \suc_p(s)$, let $q_i \leq_0 p^{[s^\frown i]}$ be such that $q_i$ decides
$\n x \mathord\restriction i$,
i.e.,
there is a $t_i$ of length $i$ such that
$q_i \Vdash \n x \mathord\restriction i = t_i$ (this is possible by Lemma~\ref{lem:pure.finite}).
Now we define the real $\tau \in 2^\omega$ as the $D_s$-limit of the $t_i$'s. In more detail: For each $n \in \omega$ there is a (unique) $\tau_n \in 2^n$ such that $\{ i:\; t_i \mathord\restriction n = \tau_n \} \in D_s$ (such a $\tau_n$ exists since $D_s$ is an ultrafilter); since $D_s$ is a filter, there is a real $\tau \in 2^\omega$ with $\tau \mathord\restriction n = \tau_n$ for each $n$. Finally, let
$q \coloneqq \bigcup_i q_i$. Then $q\le_0 p$, and \eqref{eq:first_technical} holds: the set there contains $\{ i\in\suc_q(s):\; i\ge n,\ t_i \mathord\restriction n = \tau_n \}$, which is in $D_s$ because $D_s$ is an ultrafilter containing the Fr\'echet filter and $\suc_q(s)=\suc_p(s)\in D_s^+=D_s$.
\end{proof}
\begin{Lem}\label{lem:second_technical}
Let $p \in \mathbb{L}_{\bar D}$ with stem $s$, and let $(\n x_k)_{ k \in \omega}$ be a
sequence of $\mathbb{L}_{\bar D}$-names for reals in $2^\omega$. Then there exists a
pure extension $q \leq_0 p$ and a family of reals $(\tau_\eta)_{ \eta \in q,\,
\eta \supseteq s} \subseteq 2^\omega$ such that for each $\eta \in q$
above~$s$, and every $n \in \omega$,
\begin{equation}\label{eq:second_technical}
\{ i \in \suc_q(\eta):\; q^{[\eta^\frown i]} \Vdash \n x_{|\eta|} \mathord\restriction n = \tau_\eta \mathord\restriction n \} \in D_\eta.
\end{equation}
\end{Lem}
\begin{proof}
We apply Lemma~\ref{lem:first_technical} to each node $\eta$ in $p$ above $s$ (and
to $\n x_{|\eta|}$) separately:
We first get a $p_1 \leq_0 p$ and a $\tau_s \in 2^\omega$;
for every immediate successor $\eta \in \suc_{p_1}(s)$, we get $q_\eta \leq_0
p_1^{[\eta]}$ and a $\tau_\eta \in 2^\omega$, and let $p_2 \coloneqq \bigcup_\eta
q_\eta$; in this way, we get a (fusion) sequence $(p,p_1,p_2,\ldots)$, and let
$q \coloneqq \bigcap_k p_k$.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem:LDdestroysSMZ}]
We want to prove~\eqref{eq:my_non_smz}. Assume towards a contradiction that $X$ is an uncountable set in $V$,
and that $(\n x_k)_{ k \in \omega}$ is a sequence of names for reals in $2^\omega$ and $p
\in \mathbb{L}_{\bar D}$ such that
\begin{equation}\label{eq:towards_smz_contra}
p \Vdash X \subseteq \bigcap_{m \in \omega} \bigcup_{k \geq m} [\n x_k \mathord\restriction \n\ell_k].
\end{equation}
Let $s \in \omega^{<\omega}$ be the stem of $p$.
By Lemma~\ref{lem:second_technical}, we can fix a pure extension $q \leq_0 p$ and a family $(\tau_\eta)_{\eta \in q,\, \eta \supseteq s} \subseteq 2^\omega$ such that for each $\eta \in q$ above the stem $s$ and every $n \in \omega$, condition~\eqref{eq:second_technical} holds.
Since $X$ is (in $V$ and) uncountable, we can find a real $x^* \in X$ which is
different from each real in the countable family $(\tau_\eta)_{\eta \in q,\, \eta
\supseteq s}$; more specifically, we can pick a family of natural numbers
$(n_\eta)_{\eta \in q,\, \eta \supseteq s}$ such that $x^* \mathord\restriction n_\eta \neq \tau_\eta
\mathord\restriction n_\eta$ for each $\eta$.
We can now find $r\le_0 q$ such that:
\begin{itemize}
\item For all $\eta\in r$ above $s$ and all $i\in \suc_r(\eta)$
we have $i > n_\eta$.
\item For all $\eta\in r$ above $s$ and all $i\in \suc_r(\eta)$
we have $r^{[\eta^\frown i]} \Vdash \n x_{|\eta|} \mathord\restriction n_\eta =
\tau_\eta\mathord\restriction n_\eta \not= x^*\mathord\restriction n_\eta$.
\end{itemize}
So for all $\eta\in r$ above $s$ and all $i\in\suc_r(\eta)$ we have, writing $k$ for $|\eta|$, that
$r^{[\eta^\frown i]} $ forces $x^*\notin [ \n x_k \mathord\restriction n_\eta] \supseteq
[\n x_k \mathord\restriction \n\ell_k ] $ (note that $r^{[\eta^\frown i]}$ forces $\n\ell_k = i > n_\eta$).
We conclude that
$r$ forces $x^* \notin \bigcup_{k \ge |s|} [\n x_k \mathord\restriction \n\ell_k] $,
contradicting
\eqref{eq:towards_smz_contra}.
\end{proof}
\begin{Cor}\label{cor:LDdestroysSMZ}
Let $(t_k)_{k\in \omega}$ enumerate a dense subset of $2^{\omega}$.
Let $\bar D$ be a system of ultrafilters, and let $\bar{\n\ell}$ be the
$\mathbb{L}_{\bar D}$-name for the ultralaver real. Then the set $$ \n H\coloneqq
\bigcap_{m\in\omega} \bigcup _{k\ge m} [ t_k \mathord\restriction {\n \ell_k}] $$
is forced to be a
comeager set with the property that $\n H$ does not contain any
translate of any old uncountable set.
\end{Cor}
Pawlikowski's theorem~\ref{thm:pawlikowski} gives us:
\begin{Cor}\label{cor:absolutepositive}
There is a canonical name $F$ for a closed null set such that
$X+F$ is positive for all uncountable $X$ in~$V$.
In particular, no uncountable ground model set is smz in the
ultralaver extension.
\end{Cor}
\subsection{Thin sets and strong measure zero}
\label{sec:thin}
For the notion of ``(very) thin'' set, we use an increasing
function $B^*(k)$ (the specific function will be given in
Corollary~\ref{cor:tomek}). We will
assume that $\bar\ell^*=(\ell^*_k)_{k\in\omega}$ is an
increasing sequence of natural numbers with $\ell^*_{k+1} \gg B^*(k)$.
(We will later use a subsequence
of the ultralaver real~$\bar\ell$ as~$\bar\ell^*$, see Lemma~\ref{lem:subsequence}).
\begin{Def}\label{def:thin}
For $X \subseteq 2^\omega$ and $k\in
\omega$ we write $X\mathord\restriction [\ell^*_k,\ell^*_{k+1}) $ for the set
$\{x\mathord\restriction [\ell^*_k,\ell^*_{k+1}) : x\in X\}$. We say that
\begin{itemize}
\item $X \subseteq 2^\omega$ is
\qemph{very thin with respect to $\bar
\ell^*$ and~$B^*$}, \ if there are infinitely many $k$ with $|X\mathord\restriction
[\ell^*_k,\ell^*_{k+1})|\le B^*(k) $.
\item $X\subseteq 2^\omega$ is \qemph{thin with respect to $\bar \ell^*$ and~$B^*$}, \ if $X$ is the union
of countably many very thin sets.
\end{itemize}
\end{Def}
Note that the family of thin sets is a $\sigma$-ideal, while the family of
very thin sets is not even an ideal. Also, every very thin set is covered by a
closed very thin (in particular nowhere dense) set. In particular, every thin
set is meager and the ideal of thin sets is a proper ideal.
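For orientation: every countable set is thin with respect to any $\bar\ell^*$
and~$B^*$, since each singleton $\{x\}$ is very thin:
\[
|\{x\}\mathord\restriction [\ell^*_k,\ell^*_{k+1})| = 1 \le B^*(k) \quad \text{for all } k.
\]
Together with Corollary~\ref{cor:LDnotthin} below, this shows that in the
ultralaver extension the ground model sets that are thin with respect to
$\bar{\n\ell}^*$ and~$B^*$ are exactly the countable ones.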
\begin{Lem}\label{lem:subsequence}
Let $B^*$ be an increasing function.
Let $\bar\ell$ be an increasing sequence of natural numbers.
We define
a subsequence $\bar\ell^*$ of $\bar\ell$ in the following way:
$\ell^*_k=\ell_{n_k}$, where $n_0=0$ and $n_{k+1}-n_k=B^*({k})\cdot 2^{\ell^*_k}$.
\\ Then we get: If $X$ is thin with respect to $\bar\ell^*$ and $B^*$,
then $X$ is smz with respect to~$\bar\ell$.
\end{Lem}
\begin{proof}
Assume that $X=\bigcup_{i\in\omega} Y_i$, each $Y_i$ very thin
with respect to~$\bar\ell^*$ and $B^*$.
Let $(X_j)_{j\in \omega}$ be an enumeration of $\{Y_i:i\in \omega\}$
where each $Y_i$ appears infinitely often. So $X \subseteq
\bigcap_{m\in \omega} \bigcup_{j\ge m} X_j$.
By induction on~$j\in\omega$, we find for all $j>0$ some $k_j>k_{j-1}$
such that
\[
|X_j\mathord\restriction [\ell^*_{k_j},\ell^*_{k_j+1}) |\leq B^*({k_j})
\quad\text{hence}\quad
|X_j\mathord\restriction [0,\ell^*_{k_j+1}) |\leq B^*({k_j})\cdot 2^{\ell^*_{k_j}}
= n_{k_j+1}-n_{k_j}
\]
(for the second inequality, note that each element of $X_j\mathord\restriction [0,\ell^*_{k_j+1})$ is determined by its restrictions to $[0,\ell^*_{k_j})$ and to $[\ell^*_{k_j},\ell^*_{k_j+1})$, which leaves at most $2^{\ell^*_{k_j}}\cdot B^*(k_j)$ possibilities).
So we can enumerate $X_j\mathord\restriction [0,\ell^*_{k_j+1}) $ as $(s_i)_{n_{k_j}\leq
i<n_{k_{j}+1}}$. Hence $X_j$ is a subset of $\bigcup_{n_{k_j}\leq
i<n_{k_{j}+1}} [s_i]$;
and each $s_i $ has length $\ell^*_{k_j+1}\geq \ell_i$,
since $\ell^*_{k_j+1}=\ell_{n_{k_j+1}}$ and $i<n_{k_j+1}$.
This implies
\[ X \subseteq
\bigcap_{m\in \omega} \bigcup_{j\ge m} X_j \subseteq
\bigcap_{m\in \omega} \bigcup_{i\ge m} [s_i]. \]
Hence $X$ is smz with respect to~$\bar\ell$.
\end{proof}
Lemma~\ref{lem:LDdestroysSMZ}
and Lemma~\ref{lem:subsequence} yield:
\begin{Cor}\label{cor:LDnotthin}
Let $B^*$ be an increasing function.
Let $\bar D$ be a system
of ultrafilters, and $\n{\bar\ell}$ the name for the ultralaver real.
Let $\n{\bar\ell}^*$ be constructed from $B^*$ and $\n{\bar\ell}$ as in
Lemma~\ref{lem:subsequence}.
\\
Then $\mathbb{L}_{\bar D}$ forces that for every uncountable~$X\subseteq 2^\omega$:
\begin{itemize}
\item $X$ is not smz with respect to~$\n{\bar \ell}$.
\item $X$ is not thin with respect
to~$\n{\bar\ell}^*$ and~$B^*$.\label{item:LDnotthin}
\end{itemize}
\end{Cor}
\subsection{Ultralaver and preservation of Lebesgue positivity}\label{ss:ultralaverpositivity}
It is well known that both Laver forcing and random forcing preserve
Lebesgue positivity; in fact they satisfy a stronger property that is preserved
under countable support iterations.
(So in particular, a countable support iteration of Laver
and random also preserves positivity.)
Ultralaver forcing $\mathbb{L}_{\bar D}$ will in general not preserve
positivity. Indeed, if all ultrafilters $D_s$ are equal to the same
ultrafilter $D^*$,
then the range $L\coloneqq \{\ell_0, \ell_1, \ldots \} \subseteq \omega $
of the ultralaver real $\bar \ell$ will diagonalize $D^*$, so every
ground model real $x\in 2^\omega$ (viewed as a subset of $\omega$) will either
almost contain $L$ or be almost disjoint from $L$, which implies that
the set $2^\omega\cap V$
of old reals is covered by a null set in the
extension.
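(A standard computation shows that this set is indeed null: for any infinite $L\subseteq\omega$,
\[
\{x\in 2^\omega:\ L\subseteq^* x\} = \bigcup_{m\in\omega}\,\{x:\ x(n)=1 \text{ for all } n\in L \text{ with } n\ge m\},
\]
and each set in this union has measure $\prod_{n\in L,\, n\ge m}\tfrac12 = 0$; the set of $x$ almost disjoint from $L$ is null by the symmetric computation.)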
However, later in this paper it will become clear that if we choose the
ultrafilters $D_s$ in a sufficiently generic way, then many old positive sets
will stay positive.
More specifically, in this section we will show
(Lemma~\ref{lem:extendLDtopreserverandom}): If $\bar D^M$ is an ultrafilter
system in a countable model $M$ and $r$ a random real over $M$,
then we can find an extension $\bar D$ such that $\mathbb{L}_{\bar D}$ forces that
$r$ remains random over $M[H^M]$
(where $H^M$ denotes the $\mathbb{L}_{\bar D}$-name for the restriction of the
$\mathbb{L}_{\bar D}$-generic filter $H$ to $\mathbb{L}_{\bar D^M}\cap M$).
Additionally, some ``side conditions''
are met, which are necessary to preserve the property in forcing iterations.
In Section~\ref{subsec:almostCS} we will see how to use this property to
preserve randoms in limits.
The setup we use for preservation of randomness is basically the notation of
``Case A'' preservation introduced in~\cite[Ch.XVIII]{MR1623206}, see also
\cite{MR1234283,MR2214624} or the textbook~\cite[6.1.B]{MR1350295}:
\begin{Def}\label{def:nullset}
We write $\textsc{clopen}$ for the collection of clopen sets on $2^\omega$.
We say that the function $Z:\omega\to \textsc{clopen}$
is a code for a null set, if
the measure of $Z(n)$ is at most $ 2^{-n}$ for each~$n\in \omega $.
For such a code $Z$,
the set $\nullset(Z)$ coded by $Z$ is
\[
\nullset(Z)\coloneqq \bigcap_n \bigcup_{k\ge n} Z(k).
\]
\end{Def}
The set $\nullset(Z)$ obviously is a null set, and it is well known
that every null set is contained in such a set $\nullset(Z)$.
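Indeed, $\Leb\bigl(\bigcup_{k\ge n} Z(k)\bigr) \le \sum_{k\ge n} 2^{-k} = 2^{-n+1}$
for each $n$, so $\nullset(Z)$ has measure at most $\inf_{n} 2^{-n+1}=0$.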
\begin{Def}\label{def:sqsubset}
For a real $r$ and any code $Z$, we define $Z \sqsubset_n
r$ by:
\[
(\forall k\geq n) \ r\notin Z(k).
\]
We write $Z \sqsubset r$ if $Z \sqsubset_n r$ holds for
some~$n$;
i.e., if $r\notin \nullset(Z)$.
\end{Def}
For later reference, we record the following trivial fact:
\proofclaim{eq:sq.n}{
$p \Vdash \n Z \sqsubset r$ iff
there is a name
$\n n $ for an element of $\omega$
such that $p\Vdash \n Z \sqsubset_{\n n} r$.
}
Let $P$ be a forcing notion, and $\n Z$ a $P$-name of a code for a null set. An
interpretation of $\n Z$ below $p$ is a code $Z^*$ for which there is a
sequence $p=p_0\geq p_1\geq p_2\geq \dots$ with $p_m$ forcing $\n Z \mathord\restriction m=
Z^*\mathord\restriction m$ for each~$m$. Usually we demand (which allows a simpler proof of the
preservation theorem at limit stages) that the sequence $(p_0,p_1,\dots)$ is
inconsistent, i.e., $p$ forces that there is an $m$ such that $p_m\notin G$.
Note that whenever $P$ adds a new $\omega$-sequence of ordinals, we can find
such an interpretation for any~$\n Z$: letting $\n r$ be a name for a new
$\omega$-sequence of ordinals, choose $p_{m+1}\le p_m$ deciding both
$\n Z\mathord\restriction (m+1)$ and $\n r\mathord\restriction(m+1)$; this sequence is even
inconsistent, since if all $p_m$ were in some generic filter~$G$, then
$\n r[G]$ would be the union of the decided values and hence lie in~$V$.
If $\n{\bar Z}=(\n Z_1,\ldots, \n Z_m)$ is a tuple of names of codes for null sets, then
an interpretation of $\bar{\n Z}$ below $p$ is some tuple $(Z_1^*,\ldots, Z_m^*)$ such that there is a
single sequence $p=p_0\geq p_1\geq p_2\geq \dots$ interpreting each $\n Z_i$ as $Z_i^*$.
We now turn to preservation of Lebesgue positivity:
\begin{Def} \label{def:random.random.random}
\begin{enumerate}
\item
A forcing notion $P$ \emph{preserves Borel outer measure}, if $P$
forces $\Leb^*(A^V)=\Leb(A^{V[G_P]})$ for every code $A$ for a Borel
set. ($\Leb^*$ denotes the outer Lebesgue measure, and for a
Borel code $A$ and a set-theoretic universe~$V$, $A^V$ denotes the
Borel set coded by $A$ in~$V$.)
\item
$P$ \emph{strongly preserves randoms}, if the following holds: Let
$N\esm H(\chi^*)$ be countable for a sufficiently large regular cardinal
$\chi^*$,
let $P,p, \bar {\n Z} = (\n Z_1,\ldots, \n Z_m)\in N$ with $p\in P$,
and let $r$ be random
over~$N$. Assume that in~$N$, $\bar Z^* $ is an interpretation of $\n
{\bar Z}$, and assume $Z_i^*\sqsubset_{k_i} r$ for each~$i$. Then there is an $N$-generic
$q\le p$ forcing that $r$ is still random over~$N[G]$ and
moreover, $\n Z_i\sqsubset_{k_i} r$ for each~$i$.
(In particular, $P$ has to be proper.)
\item Assume that $P$ is absolutely definable. $P$ \emph{strongly
preserves randoms over countable models} if (2) holds for all
countable (transitive\footnote{Later we will introduce
ord-transitive models, and it is easy to see that it does not make
any difference whether we demand transitive or not; this can be seen
using a transitive collapse.}) models $N$ of~ZFC$^*$.
\end{enumerate}
\end{Def}
It is easy to see that these properties are increasing in strength.
(Of course (3)$\Rightarrow$(2) works only if ZFC$^*$ is satisfied in~$H(\chi^*)$.)
In~\cite{MR2155272} it is shown that (1) implies (3), provided that $P$ is nep
(``non-elementary proper'',
i.e., nicely definable and proper with respect to countable models). In
particular, every Suslin ccc forcing notion such as random forcing, and also
many tree forcing notions including Laver forcing, are nep. However $\mathbb{L}_{\bar
D}$ is not nicely definable in this sense, as its definition uses ultrafilters
as parameters.
\begin{Lem}\label{lem:random.laver}
Both Laver forcing and random forcing strongly preserve randoms over
countable models.
\end{Lem}
\begin{proof}
For random forcing, this is easy and well known (see, e.g.,
\cite[6.3.12]{MR1350295}).
For Laver forcing: By the above, it is enough to show (1).
This was done by
Woodin (unpublished) and Judah-Shelah~\cite{MR1071305}. A nicer proof
(including a variant of (2)) is given by Pawlikowski~\cite{MR1367136}.
\end{proof}
Ultralaver will generally not preserve Lebesgue positivity, let alone
randomness. However, we get the following ``local'' variant of strong
preservation of randoms (which will be used in the preservation
theorem~\ref{lem:iterate.random}).
The rest of this section will be devoted to the proof of the following lemma.
\begin{Lem}\label{lem:extendLDtopreserverandom}
Assume that $M$ is a countable model,
$\bar D^M$ an
ultrafilter system in $M$ and
$r$ a random real over $M$. Then there is (in $V$) an
ultrafilter system $\bar D$ extending%
\footnote{This implies, by
Lemma~\ref{lem:LDMcomplete}, that the $\mathbb{L}_{\bar D}$-generic
filter~$G$ induces an $\mathbb{L}_{\bar D^M}$-generic filter over~$M$,
which we call~$G^M$.}
$\bar D^M$, such that the following holds:
\\
\textbf{If}
\begin{itemize}
\item $p\in \mathbb{L}_{\bar D^M}$,
\item in $M$, $\n {\bar Z} = ( \n Z_1, \ldots , \n Z_m) $
is a sequence of $\mathbb{L}_{\bar D^M}$-names for
codes for null sets,\footnote{Recall that $\nullset(\n Z)= \bigcap_n
\bigcup_{k\ge n} \n Z(k)$ is a null set in the
extension.} and $Z_1^*,\dots , Z_m^*$ are interpretations below~$p$,
witnessed by a sequence $(p_n)_{n\in \omega}$ with
strictly increasing\footnote{It is enough to assume that the lengths of the
stems diverge to infinity; any thin enough subsequence
will then have strictly increasing stems and will still
interpret each $\n Z_i$ as $Z_i^*$.} stems,
\item $Z^*_i \sqsubset_{k_i} r$ for $i=1,\dots, m$,
\end{itemize}
\textbf{then} there is a $q\leq p$ in $\mathbb{L}_{\bar D}$ forcing that
\begin{itemize}
\item $r$ is random over $M[G^M]$,
\item $\n Z_i \sqsubset_{k_i} r$ for $i=1,\dots, m$.
\end{itemize}
\end{Lem}
For the proof of this lemma, we will use the following concepts:
\begin{Def}
Let $p\subseteq \omega^{< \omega} $ be a tree.
A \qemph{front name below $p$}
is
a function\footnote{Instead of $\textsc{clopen}$
we may also consider other ranges of front names,
such as the class of all ordinals, or the set $\omega$.}
$h:F\to \textsc{clopen}$, where $F\subseteq p$ is a front (a set that
meets every branch of~$p$ in a unique point). (For notational simplicity
we also allow $h$ to be defined on elements $\notin p$; this way,
every front name below $p$ is also a front name below $q$ whenever
$q\le p$.)
If $h$ is a front name and $\bar D$ is any filter system with $p\in \mathbb{L}_{\bar D}$,
we define the corresponding $\mathbb{L}_{\bar D}$-name (in the sense of forcing)
$\n z^h $ by
\begin{align}\label{def:n.alpha}
\n z^h\coloneqq \{ ( \check y, p^{[s]}): s\in F,\ y \in h(s)\}.
\end{align}
(This does not depend on the $\bar D$ we use, since we set
$\check y\coloneqq \{(\check x, \omega^{<\omega} ): x \in y \}$.)
Up to forced equality, the name $\n z^h$ is characterized
by the fact that $p ^{[s]} $ forces (in any ${\mathbb{L}_{\bar D}}$) that
$ \n z ^h = h(s)$, for every $s$ in the domain of $h$.
\end{Def}
Note that the same object~$h$ can be viewed as a front name below $p$ with respect
to different forcings $\mathbb{L}_{\bar D_1}$, $ \mathbb{L}_{\bar D_2}$, as long
as $p\in \mathbb{L}_{\bar D_1}\cap \mathbb{L}_{\bar D_2}$.
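As a simple example (recorded only for illustration): the set of immediate
successors of the stem, $F=\{\stem(p)^\frown k:\ k\in \suc_p(\stem(p))\}$, is a
front, and for any $h:F\to\textsc{clopen}$ the condition $p^{[s]}$ decides
$\n z^h$ to be $h(s)$ for each $s\in F$; so $\n z^h$ is determined as soon as
the generic branch passes the first level above the stem.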
\begin{Def}
Let $p \subseteq \omega^{<\omega}$ be a tree.
A \qemph{continuous name below $p$}
is either of the following:
\begin{itemize}
\item An $\omega$-sequence of front names below $p$.
\item A $\subseteq$-increasing function $g:p\to \textsc{clopen}^{<\omega}$
such that $\lim_{n\to \infty } \lh(g(c\mathord\restriction n)) =\infty$
for every branch $c\in [p]$.
\end{itemize}
For each $n$, the set of minimal elements
in $\{ s\in p: \lh(g(s)) > n \}$ is a front, so each
continuous name in the second sense naturally
defines a name
in the first sense, and conversely.
Being a continuous name below $p$ does not involve the notion of $\Vdash$
nor does it depend on the filter system~$\bar D$.
If $g$ is a continuous name and $\bar D$ is any filter system,
we can again
define the corresponding $\mathbb{L}_{\bar D}$-name $\n Z^g $
(in the sense of forcing);
we leave a formal definition of $\n Z^g$ to the reader
and content ourselves with this characterization:
\begin{align}\label{def:z.g}
(\forall s\in p): p^{[s]} \Vdash_{\mathbb{L}_{\bar D}} g(s) \subseteq \n Z^g .
\end{align}
\end{Def}
Note that a continuous name below $p$ naturally corresponds
to a continuous function $F:[p] \to \textsc{clopen}^\omega$, and $ \n Z^g$ is forced
(by~$p$) to be the value of $F$ at the generic real $\n {\bar \ell}$.
\begin{Lem}\label{lem:pure}
$\mathbb{L}_{\bar D}$ has the following ``pure decision properties'':
\begin{enumerate}
\item\label{item:pure.one} Whenever ${\n y}$ is a name for an
element of $\textsc{clopen}$,
$p\in \mathbb{L}_{\bar D}$, then there is a pure extension $p_1\mathrel{\le_0} p$
such that $\n{y} = \n z^h $ (is forced)
for a front name $h$ below~$p_1$.
\item\label{item:pure.omega} Whenever ${\n Y }$ is a name for a sequence of
elements of $\textsc{clopen}$, $p\in \mathbb{L}_{\bar D}$, then there is a pure extension
$q\mathrel{\le_0} p$ such that ${\n Y } = \n Z ^g$ (is forced) for some
continuous name $g$ below~$q$.
\item\label{item:pure.finite} (This is Lemma~\ref{lem:pure.finite}.)
If $A$ is a finite set, $\n \alpha$ a name, $p\in
\mathbb{L}_{\bar D}$, and $p$ forces $\n \alpha\in A$, then there is
$\beta\in A$ and a pure extension $q\mathrel{\le_0} p $ such that $q\Vdash
\n \alpha=\beta$.
\end{enumerate}
\end{Lem}
\begin{proof}
Let $p\in \mathbb{L}_{\bar D}$, $s_0\coloneqq \stem(p)$, $\n y$ a name for an
element of $\textsc{clopen}$.
We call $t\in p$ a ``good node in $p$'' if $\n y$ is a front name
below~$p^{[t]}$ (more formally: forced to be equal to
$\n z^h$ for a front name $h$).
We can find $p_1\mathrel{\le_0} p$
such that for all $t\in p_1$ above $s_0$: If there is $q\mathrel{\le_0} p_1^{[t]}$
such that $t$ is good in~$q$, then $t$ is already good in~$p_1$.
We claim that $s_0$ is now good (in~$p_1$). Note that for any bad
node $s$ the set $\{\,t\in \suc_{p_1}(s): \ t \text{ bad}\,\}$ is
in~$D_s^+$. Hence, if $s_0$ is bad, we can inductively construct
$p_2\mathrel{\le_0} p_1$ such that all nodes of $p_2$ are bad nodes in~$p_1$.
Now let $q\le p_2$ decide $\n y$, $s\coloneqq \stem(q)$.
Then $q \mathrel{\le_0} p_1^{[s]}$, so $s$ is good in~$p_1$, contradiction.
This finishes the proof of (\ref{item:pure.one}).
To prove (\ref{item:pure.omega}), we first construct $p_1$ as in (\ref{item:pure.one}) with respect to $\n
y_0$. This gives a front $F_1\subseteq p_1$ deciding $\n
y_0$. Above each node in $F_1$ we now repeat the construction
from (\ref{item:pure.one}) with respect to $\n y_1$, yielding $p_2$, etc.
Finally, $q\coloneqq \bigcap_ n p_n$.
To prove (\ref{item:pure.finite}): Similar to (\ref{item:pure.one}), we can
find $p_1\mathrel{\le_0} p$ such that
for each $t\in p_1$: If there is a pure extension of $p_1^{[t]}$
deciding $\n\alpha$, then $p_1^{[t]}$ decides $\n \alpha$; in
this case we again call $t$ good. Since there are only finitely many
possibilities for the value of $\n \alpha$, any bad node $t$ has
$D_t^+$ many bad successors. So if the stem of $p_1$ is bad, we
can again reach a contradiction as in (\ref{item:pure.one}).
\end{proof}
\begin{Cor}\label{cor:obda.continuous}
Let $\bar D$ be a filter system, and let $G\subseteq \mathbb{L}_{\bar D}$
be generic. Then every $Y \in \textsc{clopen}^\omega$ in $V[G]$
is the evaluation of a continuous name $\n Z^g$ by $G$.
\end{Cor}
\begin{proof} In $V$, fix a $p\in \mathbb{L}_{\bar D}$ and a name $\n Y $ for an
element of $ \textsc{clopen}^\omega$.
We can find $q\le_0 p$
and a continuous name $g$ below $q$ such that $q \Vdash \n Y = \n Z^g$.
\end{proof}
We will need the following modification of the concept of ``continuous
names''.
\begin{Def}
Let $p \subseteq \omega^{<\omega}$ be a tree, $b\in [p]$ a branch.
An \qemph{almost continuous name below~$p$ (with respect to~$b$)}
is
a $\subseteq$-increasing function $g:p\to \textsc{clopen}^{<\omega}$
such that $\lim_{n\to \infty } \lh(g(c\mathord\restriction n)) =\infty$
for every branch $c\in [p]$, except possibly for $c=b$.
\end{Def}
Note that ``except possibly for $c=b$'' is the only difference between this definition and the definition of a continuous name.
Since for any $\bar D$ it is forced\footnote{
This follows from our assumption
that all our filters contain the Fr\'echet filter.}
that the generic real (for
$\mathbb{L}_{\bar D}$) is not equal to the exceptional branch $b$, we again get
a name $\n Z^g$ of a function in $\textsc{clopen}^\omega$ satisfying:
\[ (\forall s\in p): p^{[s]} \Vdash_{\mathbb{L}_{\bar D}} g(s) \subseteq \n Z^g. \]
An almost continuous name naturally corresponds to a
continuous function $F$ from $[p] \setminus \{b\}$ into
$\textsc{clopen}^\omega$.
Note that being an almost
continuous name is a very simple combinatorial
property of $g$ which does not depend on $\bar D$, nor does it
involve the notion $\Vdash$.
Thus, the same function $g$
can be viewed as an almost continuous name for two different
forcing notions $\mathbb{L}_{\bar D_1}$, $\mathbb{L}_{\bar D_2}$ simultaneously.
\begin{Lem} \label{lem:nicefy}
Let $\bar D$ be a system of filters
(not necessarily ultrafilters).
Assume that $\bar p = (p_n)_{n\in \omega}$ witnesses that $Y^*$
is an interpretation of~$\n Y$, and that the lengths of the stems of the $p_n$
are strictly increasing.\footnote{It is easy to see that for every
$\mathbb{L}_{\bar D}$-name $\n Y$ we can find such $\bar p$ and~$Y^*$:
First find $\bar p$ which interprets both $\n Y$ and $\bar{\n\ell}$,
and then thin out to get a strictly increasing sequence of stems.}
Then there exists a sequence $\bar q = (q_n)_{n\in \omega}$ such
that
\begin{enumerate}
\item $q_0\ge q_1\ge \cdots $.
\item $q_n\le p_n$ for all~$n$.
\item $\bar q$ also interprets $\n Y $ as~$Y^*$.
(This follows from the previous two statements.)
\item $\n Y$ is almost continuous below~$q_0$, i.e., there is
an almost continuous name $g$ such that $q_0$ forces
$\n Y = \n Z^g $.
\item $\n Y$ is almost continuous below~$q_n$, for all~$n$.
(This follows from the previous statement.)
\end{enumerate}
\end{Lem}
\begin{proof}
Let $b$ be the branch described by the stems of the conditions
$p_n$: \[b\coloneqq \{ s: (\exists n)\, s \subseteq \stem(p_n)\}.\]
We now construct
a condition~$q_0$.
For every $s\in b$ satisfying $\stem(p_n) \subseteq s \subsetneq
\stem(p_{n+1})$ we set $\suc _{q_0}(s) = \suc_{p_n}(s)$, and for all
$t\in \suc_{q_0}(s)$ except for the one in~$b$ we let $q_0^{[t]}
\mathrel{\le_0} p_n^{[t]} $ be such that $\n Y$ is continuous below
$q_0^{[t]}$. We can do this by Lemma~\ref{lem:pure}(\ref{item:pure.omega}).
Now we set
\[ q_n\coloneqq p_n \cap q_0 = q_0^{[\stem(p_n)]} \le p_n. \]
This takes care of~(1) and~(2). Now we show~(4):
Any branch $c$ of $q_0$ not equal to $b$ must contain a
node $s^\frown k\notin b$ with $s\in b$, so $c$ is a branch in
$q_0^{[s^\frown k]}$, below which $\n Y $ was continuous.
\end{proof}
The following lemmas and corollaries
are the motivation for considering continuous and
almost continuous names.
\begin{Lem} Let $\bar D$ be a system of filters
(not necessarily ultrafilters).
Let $p\in \mathbb{L}_{\bar D}$, let $b$ be a branch, and let
$g:p\to \textsc{clopen}^{<\omega}$ be an
almost continuous name
below~$p$ with respect to~$b$; write
$\n Z^g$ for the associated $\mathbb{L}_{\bar D}$-name.
Let $r\in 2^ \omega$ be a real, $n_0\in \omega$.
Then the following are equivalent:
\begin{enumerate}
\item $p\Vdash_{\mathbb{L}_{\bar D}} r \notin \bigcup_{n\ge n_0} \n Z^g(n)$,
i.e., $ \n Z^g \sqsubset_{n_0} r$.
\item For all $n\ge n_0$ and for all $s\in p $ for which
$g(s)$ has length $>n$ we have $r \notin g(s)(n)$.
\end{enumerate}
\end{Lem}
Note that (2) does not mention the notion $\Vdash$ and
does not depend on $\bar D$.
\begin{proof}
$\lnot$(2) $\Rightarrow$ $\lnot$(1):
Assume that there is $s\in p $ for which $g(s)= (C_0,\ldots,
C_n, \ldots, C_k)$ and $r\in C_n$. Then $p^{[s]}$ forces that the
generic sequence $\n Z^g = ( \n Z(0), \n Z(1), \ldots)$ starts with
$C_0,\ldots, C_n$, so $p^{[s]}$ forces $r\in \n Z^g(n)$.
$\lnot$(1) $\Rightarrow$ $\lnot$(2): Assume that $p$ does not force
$r \notin \bigcup_{n\ge n_0} \n Z^g(n)$. So there is a condition
$q\le p$ and some $n\ge n_0$ such that $q \Vdash r\in \n Z^g(n)$. By
increasing the stem of~$q$, if necessary, we may assume that
$s\coloneqq \stem(q)$ is not on $b$ (the ``exceptional'' branch), and
that $g(s)$ has already length~$>n$. Let $C_n\coloneqq g(s)(n)$ be the
$n$-th entry of~$g(s)$. So $p^{[s]}$ already forces $\n Z^g(n) =
C_n$; now $q^{[s]}\le p^{[s]}$, and $q^{[s]}$ forces the
following statements: $r\in \n Z^g(n) $, $\n Z^g(n) = C_n$. Hence $r\in
C_n$, so (2) fails.
\end{proof}
\begin{Cor}\label{cor:z.absolute}
Let $\bar D_1$ and $\bar D_2$ be systems of filters, and assume that
$p$ is in $\mathbb{L}_{\bar D_1} \cap \mathbb{L}_{\bar D_2}$. Let $g:p \to
\textsc{clopen}^{<\omega}$ be an
almost continuous name of a sequence of clopen sets, and
let $\n Z^g_1$ and $\n Z^g_2$ be the associated $\mathbb{L}_{\bar D_1}$-name
and $\mathbb{L}_{\bar D_2}$-name, respectively.
Then for any real $r$ and $n\in \omega$ we have
\[ p \Vdash_{\mathbb{L}_{\bar D_1}} \n Z^g_1 \sqsubset_n r
\ \ \Leftrightarrow \ \
p \Vdash_{\mathbb{L}_{\bar D_2}} \n Z^g_2 \sqsubset_n r.
\]
\end{Cor}
(We will use this corollary for the special case that $\mathbb{L}_{\bar D_1}$
is an ultralaver forcing, and $\mathbb{L}_{\bar D_2}$ is Laver forcing.)
\begin{Lem}
Let $\bar D_1$ and $\bar D_2$ be systems of filters, and assume that
$p$ is in $\mathbb{L}_{\bar D_1} \cap \mathbb{L}_{\bar D_2}$. Let $g:p \to
\textsc{clopen}^{<\omega}$ be a continuous
name of a
sequence of clopen sets, let $F \subseteq p$ be a front
and let $h:F\to \omega$ be a front name.
Again we will write $\n Z^g_1, \n Z^g_2$ for the associated names
of codes for null sets,
and we will write $\n n_1$ and $\n n_2$ for the associated
$\mathbb{L}_{\bar D_1}$- and
$\mathbb{L}_{\bar D_2}$-names, respectively, of natural numbers.
Then for any real $r$ we have:
\[p \Vdash_{\mathbb{L}_{\bar D_1}} \n Z^g_1 \sqsubset_{\n n_1} r
\ \ \Leftrightarrow \ \
p \Vdash_{\mathbb{L}_{\bar D_2}} \n Z^g_2 \sqsubset_{\n n_2} r.\]
\end{Lem}
\begin{proof}
Assume $p \Vdash_{\mathbb{L}_{\bar D_1}} \n Z^g_1 \sqsubset_{\n n_1} r$.
So for each $s\in F$ we have:
$p^{[s]}\Vdash_{\mathbb{L}_{\bar D_1}} \n Z^g_1 \sqsubset_{h(s) } r$.
By
Corollary~\ref{cor:z.absolute}, we also have
$p^{[s]}\Vdash_{\mathbb{L}_{\bar D_2}} \n Z^g_2 \sqsubset_{h(s)} r$.
So also
$p^{[s]}\Vdash_{\mathbb{L}_{\bar D_2}} \n Z^g_2 \sqsubset_{\n n_2} r$ for each $s\in F$.
Hence
$p\Vdash_{\mathbb{L}_{\bar D_2}} \n Z^g_2 \sqsubset_{\n n_2} r$.
The converse implication follows by symmetry.
\end{proof}
\begin{Cor}\label{cor:stays.random}
Assume $q\in \mathbb{L}$ forces in Laver forcing that
$ \n Z^{g_k} \sqsubset r$ for $k=1,2,\ldots$,
where each $g_k$ is a continuous name of a code for a
null set.
Then there is a Laver condition $q'\mathrel{\le_0} q$ such that for all
filter systems $\bar D$ we have:
\begin{quote} If $q'\in \mathbb{L}_{\bar D}$, then $q'$
forces (in ultralaver forcing ${\mathbb{L}_{\bar D}}$) that $
\n Z^{g_k} \sqsubset r$ for all $k$.
\end{quote}
\end{Cor}
\begin{proof} By \eqref{eq:sq.n} we can find a sequence $(\n n _k)_{k=1}^\infty$ of
$\mathbb{L}$-names such that $q\Vdash \n Z^{g_k} \sqsubset_{\n n_k} r$ for
each $k$. By Lemma~\ref{lem:pure}(\ref{item:pure.omega})
we can find $q'\mathrel{\le_0} q$ such that this sequence is continuous
below $q'$. Since each $\n n_k$ is now a front name below $q'$, we
can apply the previous lemma.
\end{proof}
\begin{Lem}\label{lem:continuous.is.enough}
Let $M$ be a countable model, $r\in 2^\omega$, $\bar D^M\in M$
an ultrafilter system, $\bar D $ a filter system extending $\bar D^M$, $q\in \mathbb{L}_{\bar D}$.
For any $V$-generic filter $G\subseteq \mathbb{L}_{\bar D}$ we write
$G^M$ for the ($M$-generic, by Lemma~\ref{lem:LDMcomplete}) filter on
$\mathbb{L}_{\bar D^M}$.
The following are equivalent:
\begin{enumerate}
\item $q\Vdash _{\mathbb{L}_{\bar D}} r $ is random over $M[G^M]$.
\item For all names $\n Z\in M$ of codes for null sets: $q\Vdash_{\mathbb{L}_{\bar D}} \n Z \sqsubset r $.
\item For all continuous names $g\in M$:
$q\Vdash_{\mathbb{L}_{\bar D}} \n Z^g \sqsubset r $.
\end{enumerate}
\end{Lem}
\begin{proof} (1)$\Leftrightarrow$(2) holds because every null set is contained in a set of the form $\nullset(Z)$, for some code $Z$.
(2)$\Leftrightarrow$(3): Every code for a null set in $M[G^M]$
is equal to~$\n Z^g[G^M]$, for some $g\in M$, by
Corollary~\ref{cor:obda.continuous}.
\end{proof}
The following lemma may be folklore. Nevertheless, we prove it for the convenience
of the reader.
\begin{Lem} \label{lem:random.over.mprime}
Let $r $ be random over a countable model $M$ and $A\in M$. Then there is a
countable model $M'\supseteq M$
such that $A$ is
countable in~$M'$, but $r$ is still random over~$M'$.
\end{Lem}
\begin{proof}
\def\namematrix{
\xymatrix@C=15mm{ M \ar[r]^C \ar[d]_{B_1} &
M^C \ar[d]^{\n B_2} \\
M^{B_1} \ar[r]_{\n P = C*\n B_2/ B_1} &
M^ {C*\n B_2} \\
}
}
\def\modelmatrix{
\xymatrix@C=15mm{ M \ar[r]^J \ar[d]_{r} &
M[J] \ar[d]^{K} \\
M[r] \ar[r]_H &
M[r][H] \\
}
}
We will need the following forcing notions, all defined in $M$:
\[\namematrix \]
\begin{itemize}
\item Let $C$ be the forcing that collapses the cardinality of~$A$ to
$\omega$ with finite conditions.
\item Let $B_1$ be random forcing (trees $T \subseteq 2^{<\omega}$
of positive measure).
\item Let $\n B_2$ be the $C$-name of random forcing.
\item Let $i:B_1\to C*\n B_2$ be the natural complete embedding $T\mapsto
(1_C,T)$.
\item Let $\n P$ be a $B_1$-name for the forcing $C*\n B_2/i[G_{B_1}]$, the
quotient of $C*\n B_2$ by the complete subforcing $i[B_1]$.
\end{itemize}
The random real $r$ is $B_1$-generic over~$M$. In $M[r]$ we let
$P\coloneqq \n P[r]$. Now let $H \subseteq P$ be generic over~$M[r]$.
Then $r*H \subseteq B_1*\n P \simeq C*\n B_2$ induces
an $M$-generic filter $J \subseteq C$ and an $M[J]$-generic filter
$K \subseteq \n B_2[J]$; it is easy to check
that $K$ interprets the $\n B_2$-name of the
canonical random real as the given random real~$r$.
Hence $r$ is random over the countable model
$M'\coloneqq M[J]$, and $A$ is countable
in~$M' $.
\[ \modelmatrix \]
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem:extendLDtopreserverandom}]
We will first describe a construction that deals with a single
triple $ ( \bar p, \bar {\n Z}, \bar Z^ *)$ (where $\bar p$
is a sequence of conditions with strictly increasing stems which interprets
$ \bar {\n Z} $ as $ \bar Z^ *$); this construction will
yield a condition $q' = q'( \bar p, \bar {\n Z}, \bar Z^ *)$. We
will then show how to deal with all possible triples.
So let $p$ be a condition, and let
$\bar p = (p_k)_{k\in \omega}$ be a sequence interpreting
$\bar {\n Z}$ as $\bar Z^*$, where the lengths of the
stems of $p_n$ are strictly increasing and $p_0=p$.
It is easy to see that it is enough to deal with a single null set, i.e.,
$m=1$, and with $k_1=0$. We write $\n Z$ and $Z^*$ instead
of $\n Z_1$ and $Z_1^*$.
Using Lemma~\ref{lem:nicefy} we may (strengthening the conditions
in our interpretation) assume (in $M$) that the sequence
$(\n Z(k))_{k\in \omega}$ is almost continuous, witnessed by~$g:p\to
\textsc{clopen}^{<\omega}$.
By Lemma~\ref{lem:random.over.mprime}, we can find a model
$M'\supseteq M$ such that $(2^\omega)^M$ is
countable in~$M'$, but $r$ is still random over~$M'$.
We now work in~$M'$. Note that $g$ still defines an almost
continuous name, which we again call~$\n Z$.
Each filter $D_s^M$ is now countably
generated; for each $s$, let $A_s$ be a pseudo-intersection of $D_s^M$,
chosen so that additionally $A_s \subseteq \suc_p(s)$ for all $s\in p$
above the stem.
Let $D'_s$ be the Fr\'echet filter on $A_s$. Let $p'\in \mathbb{L}_{\bar D'}$
be the tree with the same stem as $p$ which satisfies
$\suc_{p'}(s)= A_s$ for all $s\in p'$ above the stem.
By Lemma~\ref{lem:LDMcomplete}, we know that $\mathbb{L}_{\bar D^M}$ is an
$M$-complete subforcing of $\mathbb{L}_{\bar D'}$ (in $M'$ as well as in $V$). We
write $G^M$ for
the induced filter on $\mathbb{L}_{\bar D^M}$.
We now work in $V$. Note that below the condition $p'$, the forcing
$\mathbb{L}_{\bar D'}$ is just Laver forcing $\mathbb{L}$, and that $p'\le_{\mathbb{L}} p$.
Using Lemma~\ref{lem:random.laver} we can find a condition $q\le p'$
(in Laver forcing $\mathbb{L}$) such that:
\begin{align}
& q \text{ is $M'$-generic}. \\
&q\Vdash_\mathbb{L} \text{ $r$ is random over $M'[G_{\mathbb{L}}]$
(hence also over $M[G^M]$)}\label{eq:r.random}.\\
& \text{Moreover, }q \Vdash_\mathbb{L} \n Z \sqsubset_0 r . \label{eq:z0r}
\end{align}
Enumerate all continuous
$\mathbb{L}_{\bar D^M}$-names of codes for null sets from $M$ as
$\n Z^{g_1}, \n Z^{g_2}, \ldots$.
Applying
Corollary~\ref{cor:stays.random} yields a condition $q'\le q$
such that
for all filter systems $\bar E$ satisfying $q'\in \mathbb{L}_{\bar E}$,
we have $q'\Vdash_{\mathbb{L}_{\bar E}} \n Z^{g_i} \sqsubset r$ for all $i$.
Corollary~\ref{cor:z.absolute} and Lemma~\ref{lem:continuous.is.enough} now imply:
\proofclaim{claim:p.prime}{ For every
filter system $\bar E$ satisfying $q'\in \mathbb{L}_{\bar E}$,
$q' $ forces in ${\mathbb{L}_{\bar E}}$ that $r$ is random over $M[G^M]$ and
that $\n Z \sqsubset_0 r$. }
By thinning out $q'$ we may assume that
\proofclaim{eq:basdf}{For each $\nu\in \omega^\omega\cap M$ there
is $k$ such that $\nu\mathord\restriction k\notin q'$. }
(This is possible since $\omega^\omega\cap M$ is countable: in a simple fusion
argument, whenever some $\nu\in\omega^\omega\cap M$ is a branch of the current
tree, prune away at a sufficiently high node along~$\nu$ the single successor
lying on~$\nu$; all successor sets remain infinite.)
We have now described a construction of $q'= q'(\bar p, \n Z, Z^*)$.
Let $(\bar p^n , \n Z^n, Z^{*n})$ enumerate all triples
$(\bar p , \n Z, Z^{*})\in M$ where $\bar p$ interprets $\n Z$ as $Z^*$
(and consists of conditions with strictly increasing stems). For each
$n$ write $\nu^n$ for $\bigcup_k \stem (p^n_k)$,
the branch determined by the stems of the sequence $\bar p^ n$. We now
define by induction a sequence $q^n$ of conditions:
\begin{itemize}
\item $q^0 \coloneqq q'( \bar p^0 , \n Z^0, Z^{*0}) $.
\item Given $q^{n-1}$ and $(\bar p^n , \n Z^n, Z^{*n})$, we find $k_0$ such
that $\nu^n\mathord\restriction k_0 \notin q^0 \cup \cdots \cup q^{n-1}$
(using~\eqref{eq:basdf}). Let $k_1$ be such that
$\stem(p^n_{k_1})$ has length $>k_0$. We replace $\bar
p^n$ by $\bar p'\coloneqq (p^n_{k})_{k\ge k_1}$.
(Obviously, $\bar p'$
still interprets $\n Z^n$ as $Z^{*n}$.)
Now let $q^n\coloneqq q' (\bar p', \n Z^n, Z^{*n})$.
\end{itemize}
Note that the stem of $q^n$ is at least as long as
the stem of $p^n_{k_1}$, and is therefore not in $q^0 \cup
\cdots\cup q^{n-1}$, so $\stem(q^i)$ and $\stem(q^j)$ are
incompatible for all $i\not=j$. Therefore we can
choose for each $s$ an ultrafilter $D_s$ extending
$D^M_s$ such that
$\stem(q^i) \subseteq s $ implies $\suc_{q^i}(s)
\in D_s$.
Note that all $q^i$ are in $\mathbb{L}_{\bar D}$. Therefore,
we can use~\eqref{claim:p.prime}: each $q^i$ forces that $r$ is random over
$M[G^M]$ and that $\n Z^i \sqsubset_0 r$. Moreover, $q^i\le p^i_0$, as required.
\end{proof}
Below, in Lemma~\ref{lem:iterate.random}, we will prove a preservation theorem
using the following ``local'' variant of ``random preservation'':
\begin{Def}\label{def:locally.random}
Fix a countable model $M$, a real $r\in 2^\omega$ and a
forcing notion $Q^M\in M$.
Let $Q^M$ be an $M$-complete subforcing of $Q$.
We say that \qemph{$Q$ locally preserves randomness of $r$ over $M$},
if
there is in $M$ a
sequence $(D^{Q^M}_n)_{n\in\omega}$ of
open dense subsets of $Q^M$ such that
the following holds:\\
{\bf Assume that }
\begin{itemize}
\item $M$ thinks that
$\bar p\coloneqq (p^n)_{n\in\omega}$ interprets
$(\n Z_1, \ldots, \n Z_m) $ as $(Z_1^*, \ldots, Z_m^*) $
(so each $\n Z_i$ is a $Q^M$-name of a code for a null set
and each $Z_i^*$ is a code for a null set, both in $M$);
\item moreover, each $p^n$ is in $D^{Q^M}_n$
(we call such a sequence $(p^n)_{n\in\omega}$, or the corresponding interpretation, \qemph{quick});
\item $r$ is random over $M$;
\item $Z^*_i \sqsubset_{k_i} r$ for $i=1,\dots, m$.
\end{itemize}
{\bf Then}
there is a $q\leq_Q p^0$ forcing that
\begin{itemize}
\item
$r$ is random over $M[G^M]$;
\item $\n Z_i \sqsubset_{k_i} r$ for $i=1,\dots, m$.
\end{itemize}
\end{Def}
Note that this is trivially satisfied if $r$ is not random over $M$.
For a variant of this
definition, see Section~\ref{sec:alternativedefs}.
Setting
$D^{Q^M}_n$ to be the set of conditions with stem of length at least $n$,
Lemma~\ref{lem:extendLDtopreserverandom} gives us:
\begin{Cor}\label{cor:ultralaverlocalpreserving}
If $Q^M$ is an ultralaver forcing in $M$ and $r$ a real,
then there is an ultralaver forcing $Q$ over\footnote{``$Q$ over $Q^M$''
just means that $Q^M$ is an $M$-complete subforcing of $Q$.} $Q^M$ locally
preserving randomness of $r$ over~$M$.
\end{Cor}
\section{Janus forcing}\label{sec:janus}
In this section, we define a family of forcing notions that has two faces
(hence the name \qemph{Janus forcing}): Elements of this family may be countable (and therefore
equivalent to Cohen), and they may also be essentially random.
In the rest of the paper, we will use the following properties of Janus forcing
notions $\mathbb{J}$.
(And we will use \emph{only} these properties. So readers who are willing to
take these properties for granted could skip to Section~\ref{sec:iterations}.)
Throughout the whole paper we fix a function $B^*:\omega\to \omega$
given by Corollary~\ref{cor:tomek}.
The Janus forcings will depend on a real parameter
$\bar \ell^* = (\ell^*_m)_{m\in \omega}\in \omega^\omega$ which grows
fast with respect to~$B^*$. (In our application, $\bar \ell^*$
will be given by a subsequence of an ultralaver real.)
The sequence $\bar \ell^*$ and the function $B^*$ together define
a notion of a ``thin set'' (see Definition~\ref{def:thin}).
\begin{enumerate}
\item \label{item:canonical.null.set}
There is a canonical $\mathbb{J}$-name for a (code for a) null set~$\n
Z_\nabla$.
\\
Whenever $X \subseteq 2^\omega$ is not thin, and $\mathbb{J}$
is countable, then $\mathbb{J}$ forces that $X$ is not strongly meager,
witnessed\footnote{in the sense of~\eqref{eq:notsm}} by~$\nullset(\n Z_\nabla)$ (the set we get when we
evaluate the code $\n Z_\nabla$).
Moreover, for any $\mathbb{J}$-name~$\n Q$ of a $\sigma$-centered forcing,
also $\mathbb{J}*\n Q$ forces that $X$ is not strongly meager, again
witnessed by~$\nullset(\n Z_\nabla)$.
\\
(This is Lemma~\ref{lem:janusnotmeager}; ``thin'' is defined in Definition~\ref{def:thin}.)
\item
Let $M$ be a countable transitive model and $\mathbb{J}^M$ a Janus
forcing in~$M$. Then $\mathbb{J}^M$ is a Janus forcing in $V$ as well
(and of course countable in $V$). (Also note that trivially the forcing
$\mathbb{J}^M$ is an $M$-complete subforcing of itself.)
\\
(This is Fact~\ref{fact:janus.ctblunion}.)
\item
Whenever $M$ is a countable transitive model and $\mathbb{J}^M$ is a
Janus forcing in $M$,
then
there is a Janus forcing $\mathbb{J}$
such that
\begin{itemize}
\item
$\mathbb{J}^M$ is an $M$-complete subforcing of $\mathbb{J}$.
\item
$\mathbb{J}$ is (in $V$) equivalent to random forcing
(actually we just need that $\mathbb{J}$ preserves Lebesgue positivity
in a strong and iterable way).
\end{itemize}
(This is Lemma~\ref{lem:janusmayberandom} and Lemma~\ref{lem:janusrandompreservation}.)
\item
Moreover, the name $\n Z_\nabla$ referred to
in~(\ref{item:canonical.null.set}) is so ``canonical'' that
it evaluates to the same code in the $\mathbb{J}$-generic extension over $V$
as in the $\mathbb{J}^M$-generic extension over $M$.
\\
(This is Fact~\ref{fact:Znablaabsolute}.)
\end{enumerate}
\subsection{Definition of Janus}
A Janus forcing $\mathbb{J}$ will consist of:%
\footnote{We thank Andreas Blass and Jind\v{r}ich Zapletal for their comments
that led to an improved presentation of Janus forcing.}
\begin{itemize}
\item
A countable ``core'' (or: backbone) $\nabla$ which is defined in a
combinatorial way from a parameter~$\bar\ell^*$. (In our
application, we will use a Janus forcing immediately after an
ultralaver forcing, and $\bar\ell^*$ will be a subsequence of the
ultralaver real.) This core is of course equivalent to Cohen
forcing.
\item
Some additional ``stuffing'' $\mathbb{J}\setminus \nabla$ (countable\footnote{Also
the trivial case $\mathbb{J}=\nabla $ is allowed.} or uncountable). We allow
great freedom for this stuffing; we just
require that the core $\nabla$ is a ``sufficiently'' complete subforcing (in a
specific combinatorial sense, see Definition~\ref{def:Janus}(\ref{item:fat})).
\end{itemize}
We will use the following
combinatorial theorem
from~\cite{MR2767969}:
\begin{Lem}[{\cite[Theorem 8]{MR2767969}\footnotemark}]
\footnotetext{The theorem
in~\cite{MR2767969} actually says ``for a sufficiently large
$I$'', but the proof shows that this should be read as ``for \emph{all}
sufficiently large $I$''. Also, the quoted theorem only claims that ${\mathcal A}_I$ will
be nonempty, but for $\varepsilon\le\frac12$ and $|I|> N_{\varepsilon,\delta}$
it is easy to see that ${\mathcal A}_I$ cannot be a singleton $\{A\}$: The set $X\coloneqq 2^I\setminus A$ has size $\ge 2^{|I|-1}\ge N_{\varepsilon,\delta}$
but satisfies $X+A\not=2^I$, as the constant sequence $\bar 0$ is not in $X+A$.}
\label{lem:tomek}
For every $\varepsilon,\delta>0$ there exists
$N_{\varepsilon,\delta}\in \omega$ such that for all sufficiently
large finite sets $I\subseteq \omega$ there is a family
${{\mathcal A}}_I $ with $|{\mathcal A}_I|\ge 2$ consisting of sets $A \subseteq 2^I$ with
$\dfrac{|A|}{2^{|I|}} \leq \varepsilon$ such that if $X \subseteq
2^I$, $|X| \geq N_{\varepsilon,\delta}$ then
\[
\frac{|\{ A \in {{\mathcal A}}_I: X+A=2^I\}|}{|{{\mathcal A}}_I|} \geq 1-\delta.
\]
(Recall that $X+A\coloneqq \{x+a: x\in X, a\in A\}$.)
\end{Lem}
Rephrasing and specializing to $\delta=\frac14$ and
$\varepsilon = \frac{1}{2^{i}}$ we get:
\begin{Cor}\label{cor:tomek}
For every
$i \in \omega$ there exists $B^*(i)$ such that for all finite
sets $I$
with $|I| \geq B^*(i)$
there is a nonempty
family ${{\mathcal A}}_I$
with $|{\mathcal A}_I| \geq 2$
satisfying the following:
\begin{itemize}
\item ${{\mathcal A}}_I$ consists of sets $A \subseteq 2^I$ with
$\dfrac{|A|}{2^{|I|}} \leq \dfrac{1}{2^{i}}$.
\item
For every $X \subseteq 2^I$ satisfying $|X| \geq B^*(i) $, the
set $\{ A \in {{{\mathcal A}}_I}: X+A=2^I \}$ has at least $\frac34 |{{\mathcal A}}_I|$ elements.
\end{itemize}
\end{Cor}
\begin{Asm}
We fix a sufficiently fast increasing sequence
$\bar\ell^*=(\ell^*_i)_{i\in\omega}$ of natural numbers; more precisely, the sequence $\bar\ell^*$ will be a subsequence of
an ultralaver real $\bar\ell$, defined as in Lemma~\ref{lem:subsequence} using the function $B^*$ from Corollary~\ref{cor:tomek}.
Note that in this case $\ell^*_{i+1}-\ell^*_i \geq B^*(i)$;
so we can fix for each $i$ a family ${\mathcal A}_i \subseteq
{\mathscr P}(2^{L_i})$ on
the interval $L_i \coloneqq [\ell^*_i,\ell^*_{i+1})$ according to Corollary~\ref{cor:tomek}.
\end{Asm}
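For later orientation, let us verify this inequality (using that the sequence
$\bar\ell$ in Lemma~\ref{lem:subsequence} is strictly increasing):
\[
\ell^*_{i+1} = \ell_{n_{i+1}} \ \ge\ \ell_{n_i} + (n_{i+1}-n_i)
\ =\ \ell^*_i + B^*(i)\cdot 2^{\ell^*_i} \ \ge\ \ell^*_i + B^*(i).
\]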
\begin{Def}\label{def:Janus.nabla}
First we define the ``core'' $\nabla= \nabla_{\bar \ell^*}$ of our
forcing:
\[ \nabla = \bigcup_{i\in \omega} \prod_{j<i} {\mathcal A}_j .\]
In other words,
$\sigma\in \nabla$ iff
$\sigma= (A_0,\ldots, A_{i-1})$
for some $i\in \omega$, $A_0\in {\mathcal A}_0$, \dots, $A_{i-1}\in {\mathcal A}_{i-1}$.
We will denote the number $i$ by $\height(\sigma)$.
The forcing notion $\nabla$ is ordered by reverse inclusion (i.e., end extension): $\tau \leq \sigma$
if
$\tau \supseteq \sigma$.
\end{Def}
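As a sketch of why $\nabla$ is equivalent to Cohen forcing (as claimed at the beginning of this subsection): each ${\mathcal A}_j$ is a family of subsets of the finite set $2^{L_j}$, so each level of $\nabla$ is finite; more precisely,
\[
|\{\sigma\in \nabla:\, \height(\sigma)=i\}| \ =\ \prod_{j<i}|{\mathcal A}_j| \ <\ \omega,
\]
and hence $\nabla$ is countable. Moreover, each $\sigma$ of height $i$ has $|{\mathcal A}_i|\ge 2$ pairwise incompatible immediate extensions $\sigma^\frown A$, so $\nabla$ is atomless; and every countable atomless forcing is equivalent to Cohen forcing.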
\begin{Def}\label{def:Janus}
Let $\bar \ell^* = (\ell^*_i)_{i\in \omega}$ be as in the assumption above.
We say that $\mathbb{J}$ is a Janus forcing based on $\bar \ell^*$ if:
\begin{enumerate}
\item\label{item:ic} $(\nabla, \supseteq)$ is an incompatibility-preserving
subforcing of $\mathbb{J}$.
\item\label{item:heightsarepredense} For each $i\in \omega$ the set $\{\sigma\in \nabla:\,
\height(\sigma)=i\}$ is predense in~$\mathbb{J}$.
So in particular,
$\mathbb{J}$ adds a
branch through $\nabla$. The union of this branch
is called $\n C^\nabla = (\n C^\nabla_0,\n C^\nabla_1,\n C^\nabla_2,\ldots)$, where $\n C^\nabla_i \subseteq 2^{L_i}$ with $\n C^\nabla_i \in {\mathcal A}_i$.
\item\label{item:fat} ``Fatness'':\footnote{This is the crucial combinatorial
property of Janus forcing. Actually, \eqref{item:fat}~implies~\eqref{item:heightsarepredense}.}
For all $p \in \mathbb{J}$ and all real numbers $\varepsilon>0$
there are arbitrarily large $i \in \omega$ such that there is a core condition $\sigma = (A_0,\ldots,A_{i-1}) \in \nabla$ (of length $i$) with
\[ \frac
{| \{ A \in {\mathcal A}_i: \, \sigma^\frown A \parallel_\mathbb{J} p\, \}|}
{| { {\mathcal A}_i } |}
\geq 1-\varepsilon.
\]
(Recall that $p \parallel_\mathbb{J} q$ means that $p$ and $q$ are compatible in $\mathbb{J}$.)
\item \label{item:janus.ccc} $\mathbb{J}$ is ccc.
\item \label{item:janus.sep} $\mathbb{J}$ is separative.\footnote{Separative is defined on page~\pageref{def:separative}.}
\item\label{item:janus.hc} (To simplify some technicalities:) $\mathbb{J} \subseteq H(\aleph_1)$.
\end{enumerate}
\end{Def}
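To illustrate the definition, one can check that the trivial case $\mathbb{J}=\nabla$ mentioned in the footnote above is indeed a Janus forcing: items (\ref{item:ic}) and (\ref{item:heightsarepredense}) are immediate, ccc and $\nabla\subseteq H(\aleph_1)$ hold since $\nabla$ is a countable set of (hereditarily) finite objects, and separativity follows from $|{\mathcal A}_j|\ge 2$. Fatness holds in the strongest possible form: given $p=\sigma'\in\nabla$ and $i\ge \height(\sigma')$, pick any $\sigma\in\nabla$ of height $i$ extending~$\sigma'$; then every $\sigma^\frown A$ (for $A\in{\mathcal A}_i$) is stronger than~$\sigma'$, so
\[
\frac{|\{ A \in {\mathcal A}_i: \, \sigma^\frown A \parallel_\nabla p\, \}|}{|{\mathcal A}_i|} \ =\ 1 \ \ge\ 1-\varepsilon.
\]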
We now define
$\n Z_\nabla$, which will be a canonical $\mathbb{J}$-name of (a code for) a null set. We will use the sequence $\n C^\nabla$ added by $\mathbb{J}$ (see Definition~\ref{def:Janus}(\ref{item:heightsarepredense})).
\begin{Def}\label{def:Znabla}
Each $\n C^\nabla_i$ defines a clopen set $\n Z^\nabla_i =
\{ x \in 2^\omega:\, x \mathord\restriction L_i \in \n C^\nabla_i \}$ of measure at most $\frac{1}{2^{i}}$.
The sequence
$\n Z_\nabla = (\n Z^\nabla_0,\n Z^\nabla_1,\n Z^\nabla_2,\ldots)$
is (a name for) a code for the null set
\[
\nullset(\n Z_\nabla) = \bigcap_{n < \omega} \bigcup_{i \geq n} \n Z^\nabla_i.
\]
\end{Def}
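Note that $\nullset(\n Z_\nabla)$ is indeed (a name for) a null set: since each $\n Z^\nabla_i$ has measure at most $\frac{1}{2^{i}}$, for every $n$ the union $\bigcup_{i\ge n}\n Z^\nabla_i$ has measure at most
\[
\sum_{i\ge n}\frac{1}{2^{i}} \ =\ \frac{1}{2^{n-1}},
\]
so the intersection over all $n$ has measure zero (the easy direction of the Borel--Cantelli lemma).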
Since $\n C^\nabla$ is defined ``canonically'' (see in particular Definition~\ref{def:Janus}(\ref{item:ic}),(\ref{item:heightsarepredense})),
and $\n Z_\nabla$ is constructed in an absolute way from $\n C^\nabla$, we get:
\begin{Fact}\label{fact:Znablaabsolute}
Let $\mathbb{J}$ be a Janus forcing, $M$ a countable model, and $\mathbb{J}^M$ a Janus forcing in $M$ which is an $M$-complete subset of $\mathbb{J}$. If $H$ is $\mathbb{J}$-generic over $V$ and
$H^M$ is the induced $\mathbb{J}^M$-generic filter over $M$, then $\n C^\nabla$
evaluates to the same real in $M[H^M]$ as in $V[H]$, and therefore
$\n Z_\nabla$ evaluates to the same code (but of course not to the same set of reals).
\end{Fact}
For later reference, we record the following trivial fact:
\begin{Fact} \label{fact:janus.ctblunion}
Being a Janus forcing is absolute: if $V\subseteq W$
are set-theoretic universes and $\mathbb{J}$ is a Janus forcing in $V$,
then $\mathbb{J}$ is a Janus forcing in $W$. In particular, if $M$
is a countable model in $V$ and $\mathbb{J}\in M$ is a Janus forcing in $M$, then $\mathbb{J}$ is also a Janus forcing in $V$.
\\
Let $(M^n)_{n\in \omega}$ be an increasing sequence of countable
models, and let $\mathbb{J}^n \in M^n$ be Janus forcings.
Assume that $\mathbb{J}^n$ is $M^n$-complete in~$\mathbb{J}^{n+1}$.
Then $\bigcup_n \mathbb{J}^n$ is a Janus forcing, and an
$M^n$-complete extension of $\mathbb{J}^n$ for all~$n$.
\end{Fact}
\subsection{Janus and strongly meager}
Carlson~\cite{MR1139474} showed that Cohen reals make every uncountable set $X$
of the ground model not strongly meager in the extension (and that not being
strongly meager is preserved in a subsequent forcing with precaliber~$\al1$).
We show that a {\em countable} Janus forcing $\mathbb{J}$ does the same
(for a subsequent forcing that is even $\sigma$-centered,
not just precaliber~$\al1$).
This sounds
trivial, since any (nontrivial) countable forcing is equivalent to Cohen
forcing anyway. However, we show (and will later use) that the canonical
null set $\n Z_\nabla$ defined above witnesses that $X$ is not strongly meager
(and not just some
null set that we get out of the isomorphism between $\mathbb{J}$ and Cohen forcing).
The point is that while $\nabla$ is
not a complete subforcing of~$\mathbb{J}$,
condition~(\ref{item:fat})
of Definition~\ref{def:Janus} guarantees that Carlson's argument still
works, provided we assume that $X$ is non-thin (not just uncountable).
This is enough for us, since by Corollary~\ref{cor:LDnotthin} ultralaver forcing
makes any uncountable set non-thin.
Recall that we fixed the increasing sequence $\bar \ell^* = (\ell^*_i)_{i\in
\omega}$ and~$B^*$. In the following, whenever we say ``(very) thin'' we mean
``(very) thin with respect to $\bar \ell^*$ and $B^*$'' (see Definition~\ref{def:thin}).
\begin{Lem}\label{lem:janusnotmeager}
If $X$ is not thin, $\mathbb{J}$
is a countable Janus forcing based on $\bar \ell^*$,
and
$\n R$ is a $\mathbb{J}$-name
for a $\sigma$-centered forcing notion,
then
$\mathbb{J}*\n R$ forces that $X$ is not strongly meager witnessed by
the null set $\n Z_\nabla$.
\end{Lem}
\begin{proof}
Let $\n c$ be a $\mathbb{J}$-name for a function $\n c:\n R\to \omega$
witnessing that $\n R$ is $\sigma$-centered.
Recall that ``$\n Z_\nabla$ witnesses that $X$ is not strongly meager''
means that $X+\n Z_\nabla = 2^\omega$.
Assume towards a contradiction that $(p,r) \in \mathbb{J}*\n R$ forces that $X+\n Z_\nabla \neq 2^\omega$. Then we can fix a $(\mathbb{J}*\n R)$-name $\n \xi $ such that
$(p,r) \Vdash \n \xi \notin X + \n Z_\nabla$, i.e.,
$(p,r) \Vdash (\forall x \in X)\,\, \n \xi \notin x + \n Z_\nabla$.
By definition of $\n Z_\nabla$, we get
\[
(p,r) \Vdash (\forall x \in X)\, (\exists n \in \omega)\, (\forall i \geq n) \,\, \n \xi \mathord\restriction L_i \notin x \mathord\restriction L_i + \n C^\nabla_i.
\]
For each $x\in X$ we can find $(p_x,r_x) \leq (p,r)$ and
natural numbers
$n_x \in \omega$ and $m_x \in \omega$ such that
$p_x $ forces that $\n c(r_x) = m_x$
and
\[
(p_x,r_x) \Vdash (\forall i \geq n_x) \,\, \n \xi \mathord\restriction L_i \notin x \mathord\restriction L_i + \n C^\nabla_i.
\]
So $X = \bigcup_{p \in \mathbb{J}, m \in \omega, n \in \omega} X_{p,m,n}$,
where $X_{p,m,n}$ is the set of all $x$ with~$p_x=p$, $m_x=m$, $n_x=n$.
(Note that $\mathbb{J}$ is countable, so the union is countable.)
As $X$ is not thin, there are $p^*, m^*, n^*$ such that $X^*\coloneqq X_{p^*,m^*,n^*}$ is not very thin.
So we get for all $x\in X^*$:
\begin{equation}\label{eq:prx}
(p^*,r_x) \Vdash (\forall i \geq n^*) \,\, \n \xi \mathord\restriction L_i \notin x \mathord\restriction L_i + \n C^\nabla_i.
\end{equation}
Since $X^*$ is not very thin,
there is some $i_0 \in \omega$ such that for all $i \geq i_0$
\begin{equation}\label{eq:star}
\textrm{the (finite) set } X^* \mathord\restriction L_i \textrm{ has more than } B^*(i)
\textrm{ elements.}
\end{equation}
Due to the fact that $\mathbb{J}$ is a Janus forcing (see Definition~\ref{def:Janus}~\eqref{item:fat}),
there are arbitrarily large $i \in \omega$ such that there is a core condition $\sigma = (A_0,\ldots,A_{i-1}) \in \nabla$ with
\begin{equation} \label{eq:sizeS}
\frac{ | \{ A \in {\mathcal A}_i: \, \sigma^\frown A \parallel_{\mathbb{J}} p^* \} | }
{ | {\mathcal A}_i | }
\geq \frac{2}{3}.
\end{equation}
Fix such an $i$
larger than both $i_0$ and $n^*$, and fix a condition $\sigma$ satisfying~\eqref{eq:sizeS}.
We now consider the following two subsets of ${\mathcal A}_i$:
\begin{equation}\label{eq:two_sets}
\{ A \in {\mathcal A}_i: \, \sigma^\frown A \parallel_{\mathbb{J}} p^* \}
\,\,
\textrm{ and }
\,\,
\{ A \in {\mathcal A}_i: \, X^* \mathord\restriction L_i + A = 2^{L_i} \}.
\end{equation}
By~\eqref{eq:sizeS}, the relative measure (in ${\mathcal A}_i$) of the left one is at least $\frac{2}{3}$;
due to~\eqref{eq:star} and the definition of ${\mathcal A}_i$ according to Corollary~\ref{cor:tomek}, the relative measure of the right one is at least $\frac{3}{4}$;
so the two sets in~\eqref{eq:two_sets} are not disjoint, and we can pick an $A$ belonging to both.
Clearly, $\sigma^\frown A$ forces (in $\mathbb{J}$) that
$\n C^\nabla_i$ is equal to~$A$. Fix $q \in \mathbb{J}$ witnessing $\sigma^\frown A \parallel_{\mathbb{J}} p^*$. Then
\begin{equation}\label{eq:wo_for_contradiction}
q \Vdash_\mathbb{J} X^* \mathord\restriction L_i + \n C^\nabla_i = X^* \mathord\restriction L_i + A = 2^{L_i}.
\end{equation}
Since $p^*$ forces that for each $x \in X^*$ the color $\n c(r_x) = m^*$, we can find an $r^*$
which is (forced by $q \leq p^*$ to be) a lower bound of the \emph{finite} set $\{r_x : \, x \in X^{**} \}$, where $X^{**} \subseteq X^*$ is any finite set with $X^{**} \mathord\restriction L_i = X^* \mathord\restriction L_i$.
By~\eqref{eq:prx},
\[
(q,r^*) \Vdash \n \xi \mathord\restriction L_i \notin X^{**} \mathord\restriction L_i + \n C^\nabla_i = X^* \mathord\restriction L_i + \n C^\nabla_i,
\]
contradicting~\eqref{eq:wo_for_contradiction}.
\end{proof}
Recall that by Corollary~\ref{cor:LDnotthin}, every uncountable set $X$ in $V$ will
not be thin in the $\mathbb{L}_{\bar D}$-extension. Hence we get:
\begin{Cor}\label{cor:ultraplusjanus}
Let $X$ be uncountable. If $\mathbb{L}_{\bar D}$ is any ultralaver forcing
adding an ultralaver real $\bar \ell$, if $\bar \ell^*$ is defined from
$\bar \ell$ as in Lemma~\ref{lem:subsequence}, if $\n\mathbb{J}$ is a
countable Janus forcing based on $\bar \ell^*$, and if $\n Q$ is any $\sigma$-centered
forcing, then $\mathbb{L}_{\bar D}*\n\mathbb{J}*\n Q$ forces that $X$ is not strongly meager.
\end{Cor}
\subsection{Janus forcing and preservation of Lebesgue positivity}
We show that every Janus forcing in a countable model $M$ can be extended to
locally preserve a given random real over $M$. (We showed the same for ultralaver
forcing in Section~\ref{ss:ultralaverpositivity}.)
We start by proving that every countable Janus forcing can be embedded into a
Janus forcing which is equivalent to random forcing, preserving the maximality
of countably many maximal antichains. (In the following lemma, the letter
$M$ is just a label to distinguish $\mathbb{J}^M$ from $\mathbb{J}$,
and does not necessarily refer to a model.)
\newcommand{{\bJ^M}}{{\mathbb{J}^M}}
\newcommand{\bJ}{\mathbb{J}}
\begin{Lem}\label{lem:janusmayberandom}
Let $\mathbb{J}^M$ be a countable Janus
forcing (based on $\bar \ell^*$) and let
$\{D_k:\, k\in \omega\}$ be a countable family of open dense subsets
of $\mathbb{J}^M$.
Then there is a Janus forcing $\mathbb{J}$
(based on the same $\bar \ell^*$) such that
\begin{itemize}
\item $\mathbb{J}^M$ is an incompatibility-preserving subforcing of $\mathbb{J}$.
\item Each $D_k$ is still predense in $\mathbb{J}$.
\item $\mathbb{J}$ is forcing equivalent to random forcing.
\end{itemize}
\end{Lem}
\begin{proof}
Without loss of generality assume $D_0=\mathbb{J}^M$.
Recall that $\nabla = \nabla^{\mathbb{J}^M}$ was defined in
Definition~\ref{def:Janus.nabla}.
Note that for each $j$ the set
$\{ \sigma\in \nabla: \, \height(\sigma)=j\}$ is predense in ${\bJ^M}$,
so the set
\begin{align}\label{align:E.k}
E_j\coloneqq \{ p \in {\bJ^M}: \exists \sigma\in \nabla: \, \height(\sigma)=j, \ p \le \sigma\}
\end{align}
is dense open in ${\bJ^M}$; hence without loss of generality each $E_j$ appears in our list of $D_k$'s.
Let $\{r^n:\, n\in \omega\} $ be an enumeration of~$\mathbb{J}^M$.
We now fix $n$ for a while (up to \eqref{def:this.is.random}).
We will construct
a finitely splitting tree
$S^n \subseteq \omega^{<\omega}$ and a family $(\sigma^n_s,p^n_s,
\tau^{*n}_s)_{ s\in S^n}$
satisfying the following (suppressing the superscript~$n$):
\begin{enumerate}[(a)]
\item $\sigma_s\in \nabla$, $\sigma_{\langle\rangle} = \langle\rangle$,
$s\subseteq t$ implies $\sigma_s\subseteq \sigma_t$,
and $s\perp_{S^n} t$ implies $\sigma_s\perp_\nabla \sigma_t$.
\\ (So in particular the set $\{\sigma_t:\, t\in\suc_{S^n}(s)\}$
is a (finite) antichain above $\sigma_s$ in~$\nabla$.)
\item $p_s\in {\bJ^M}$, $p_{\langle\rangle} = r^n$;
if $s\subseteq t$ then
$p_t\leq_{\bJ^M} p_s$ (hence $p_t\leq r^n$);
$s\perp_{S^n} t$ implies $p_s\perp_{\bJ^M} p_t$.
\item $p_s\leq_{\bJ^M} \sigma_s$.
\item $\sigma_s \subseteq \tau^*_s \in \nabla$, and
$\{\sigma_t:\, t\in\suc_{S^n}(s)\}$
is the set of all $ \tau \in \suc_\nabla(\tau^*_s)$ which are compatible
with $p_s$.
\item The set $\{\sigma_t:\, t\in\suc_{S^n}(s)\}$
is a subset of $ \suc_\nabla(\tau^*_s)$
of relative size at least
$1-\frac 1 {\lh(s)+10}$.
\label{item:size.a}
\item Each $s\in {S^n}$ has at least 2 successors (in ${S^n}$).
\item If $k=\lh(s)$, then $p_s\in D_k$ (and therefore also
in all $D_l$ for $l<k$).
\end{enumerate}
Set $\sigma_{\langle\rangle}=\langle\rangle$ and $p_{\langle\rangle}=r^n$.
Given $s,\sigma_s$ and~$p_s$, we construct $\suc_{S^n}(s)$ and
$(\sigma_t,p_t)_{t\in\suc_{S^n}(s)}$:
We apply fatness~\ref{def:Janus}(\ref{item:fat}) to $p_s$ with
$\varepsilon=\frac 1 {\lh(s)+10}$. So we get some $\tau_s^*\in\nabla$
of height bigger than the height of $\sigma_s$ such that
the set $B$ of elements of $\suc_\nabla(\tau_s^*)$ which are compatible
with $p_s$ has relative size at least $1-\varepsilon$.
Since $p_s\leq_{\bJ^M} \sigma_s$ we get that $\tau^*_s$ is compatible
with (and therefore stronger than) $\sigma_s$.
Enumerate $B$ as $\{\tau_0,\dots,\tau_{l-1}\}$.
Set $\suc_{S^n}(s)=\{s^\frown i:\, i<l\}$ and $\sigma_{s^\frown i}=\tau_i$.
For $t\in\suc_{S^n}(s)$, choose $p_t\in {\bJ^M}$ stronger than both $\sigma_t$ and $p_s$
(which is obviously possible since $\sigma_t$ and $p_s$ are compatible),
and moreover $p_t\in D_{\lh(t)}$.
This concludes the construction of the family
$(\sigma^n_s,p^n_s, \tau^{*n}_s)_{ s\in S^n}$.
So $(S^n,\subseteq)$ is a finitely splitting nonempty
tree of height $\omega$ with no maximal nodes and no isolated
branches.
$[S^n]$ is the (compact) set of branches of~$S^n$.
The closed subsets of $[S^n]$ are exactly the sets of the form~$[T]$,
where $T\subseteq S^n$ is a subtree of $S^n$
with no maximal nodes. $[S^n]$
carries a natural (``uniform'') probability measure~$\mu_n$,
which is characterized by
\[
\mu_n((S^n)^{[t]}) = \frac{1}{|{\suc_{S^n}(s)}|}\cdot \mu_n((S^n)^{[s]})
\]
for all $s\in S^n$ and all $t\in \suc_{S^n}(s)$. (We just write $\mu_n(T)$
instead of $\mu_n([T])$ to increase readability.)
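Explicitly, together with $\mu_n([S^n])=1$ this characterization gives, for every $s\in S^n$,
\[
\mu_n\bigl((S^n)^{[s]}\bigr) \ =\ \prod_{l<\lh(s)}\frac{1}{|{\suc_{S^n}(s\mathord\restriction l)}|}\,;
\]
so $\mu_n$ is just the distribution of the branch obtained by choosing at each node a successor uniformly at random.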
We call $T\subseteq S^n$ positive if $\mu_n(T)>0$, and we call $T$
pruned if $\mu_n(T^{[s]})>0$ for all $s\in T$.
(Clearly every positive tree $T$ contains a pruned tree $T'$ of the same
measure, which can be obtained from $T$ by removing all nodes $s$ with
$\mu_n(T^{[s]})=0$.)
Let $T\subseteq S^n$ be a positive pruned tree and $\varepsilon>0$.
Then on all but finitely many levels $k$ there is an $s\in T$ with $\lh(s)=k$ such that
\begin{equation}\label{eq:lebdense}
\suc_T(s)\subseteq \suc_{S^n}(s)\text{ has relative size }\geq
1-\varepsilon.
\end{equation}
(This follows from Lebesgue's density theorem, or can easily be seen directly:
Set $C_m=\bigcup_{t\in T,\,\lh(t)=m}{(S^n)}^{[t]}$. Then
$C_m$ is a decreasing sequence of closed sets, each containing~$[T]$.
If the claim fails, then $\mu_n(C_{m+1})\leq \mu_n(C_m)\cdot (1-\varepsilon)$
infinitely often; so $\mu_n(T) \le \mu_n( \bigcap_m C_m ) =0$, contradicting the positivity of~$T$.)
It is well known that the set of positive, pruned subtrees of~$S^n$, ordered
by inclusion, is forcing equivalent to random forcing (which can
be defined as the set of positive, pruned subtrees of $2^{<\omega}$).
We have now constructed $S^n$ for all $n$. Define
\begin{align}\label{def:this.is.random}
\bJ = {\bJ^M} \cup \bigcup_n \, \bigl\{\, (n,T) : \, T \subseteq S^n
\mbox{ is a positive pruned tree}\,\bigr \}
\end{align}
with the following partial order:
\begin{itemize}
\item The order on $\bJ$ extends the order on~${\bJ^M}$.
\item $(n',T')\le(n,T)$ if $n=n'$ and $T' \subseteq T$.
\item For $p\in {\bJ^M}$: $(n,T) \le p$ if there is a $k$ such that
$p^n_t\leq p$ for all $t\in T$ of length~$k$.
(Note that this will then be true for all bigger $k$ as well.)
\item $p \le (n,T)$ never holds (for $p\in {\bJ^M}$).
\end{itemize}
The lemma now easily follows from the following properties:
\begin{enumerate}
\item The order on $\bJ$ is transitive.
\item ${\bJ^M}$ is an incompatibility-preserving subforcing of $\bJ$.
\\
In particular, $\bJ$ satisfies item~\eqref{item:ic}
of Definition~\ref{def:Janus} of Janus forcing.
\item
For all $k$: the set $\{(n,T^{[t]}):\, t\in T,\ \lh(t)=k\}$
is a (finite) predense antichain below~$(n,T)$.
\item\label{item:compat} $(n,T^{[t]})$ is stronger than $p^n_t$ for each $t\in T$
(witnessed, e.g., by $k=\lh(t)$).
Of course, $(n,T^{[t]})$ is stronger than $(n,T)$ as well.
\item
Since $p^n_t\in D_k$ for $k=\lh(t)$, each
$D_k$ is predense below each
$(n,S^n)$ and therefore in~$\bJ$.
\\
Also, since each set $E_j$ appeared in our list of open dense subsets
(see \eqref{align:E.k}), the set
$\{\sigma\in \nabla: \, \height(\sigma)=j\}$ is still
predense in $\bJ$, i.e.,
item~\eqref{item:heightsarepredense}
of the Definition~\ref{def:Janus} of Janus forcing is satisfied.
\item The condition $(n,S^n)$ is stronger than~$r^n$, so
$\{(n,S^n):n\in \omega \}$ is predense in $\bJ$ and
$\bJ\setminus {\bJ^M}$ is dense in~$\bJ$.
\\
Below each $(n,S^n)$, the forcing $\bJ$ is isomorphic to random
forcing.
\\
Therefore, $\bJ$ itself is forcing equivalent to random
forcing. (In fact,
the complete Boolean algebra generated by $\bJ$ is
isomorphic to the standard random algebra, Borel sets modulo null sets.)
This proves in particular that $\mathbb{J}$ is ccc, i.e., satisfies
property \ref{def:Janus}(\ref{item:janus.ccc}).
\item
It is easy (but not even necessary) to check that $\bJ$ is separative, i.e.,
property~\ref{def:Janus}(\ref{item:janus.sep}). In any case,
we could replace $\le_\bJ$ by $\le^*_\bJ$, thus making $\bJ$ separative without changing
$\le_{\bJ^M}$, since ${\bJ^M}$ was already separative.
\item Property \ref{def:Janus}(\ref{item:janus.hc}), i.e., $\mathbb{J}\subseteq H(\aleph_1)$,
is obvious.
\item \label{item:the.last}
The remaining item
of the definition
of Janus forcing, fatness~\ref{def:Janus}(\ref{item:fat}),
is satisfied. (For conditions $p\in {\bJ^M}$ this follows from fatness of~${\bJ^M}$, since compatibility in ${\bJ^M}$ implies compatibility in~$\bJ$; so it suffices to consider conditions of the form $(n,T)$.)\\
I.e., given $(n,T)\in \bJ$ and $\varepsilon>0$ there are
arbitrarily high $\tau^*\in\nabla$ such that the relative
size of the set $\{\tau\in\suc_\nabla(\tau^*):\, \tau\parallel (n,T)\}$
is at least $1-\varepsilon$. (We will show $\ge (1-\varepsilon)^2$ instead,
to simplify the notation; this clearly suffices, as $\varepsilon>0$ was arbitrary.)
\end{enumerate}
We show~(\ref{item:the.last}): Given $(n,T)\in \bJ$ and $\varepsilon>0$,
we use~\eqref{eq:lebdense} to get an arbitrarily high
$s\in T$ such that $\suc_T(s)$ is of relative size
$\geq 1-\varepsilon$ in~$\suc_{S^n}(s)$. We may choose $s$ of length $>\frac 1 \varepsilon$.
We claim that $\tau^*_s$ is as required:
\begin{itemize}
\item Let
$B\coloneqq
\{\sigma_t:\, t\in\suc_{S^n}(s)\}
$.
Note that $B =
\{\tau\in\suc_\nabla(\tau^*_s):\, \tau\parallel p_s\}
$.
$B$ has relative size $\ge 1-\frac{1}{\lh(s)}\ge 1-\varepsilon$
in $\suc_\nabla(\tau^*_s)$
(according to property~(\ref{item:size.a}) of $S^n$).
\item $C\coloneqq \{\sigma_t:\, t\in\suc_T(s)\}$ is a subset of $B$
of relative size $\ge 1-\varepsilon$ according to our choice of~$s$.
\item So $C$ is of relative size $\ge (1-\varepsilon)^2$
in~$\suc_\nabla(\tau^*_s)$.
\item Each $\sigma_t\in C$ is compatible with~$(n,T)$, as $(n,T^{[t]})
\le p_t \le \sigma_t$ (see~(\ref{item:compat})). \qedhere
\end{itemize}
\end{proof}
So in particular if $\mathbb{J}^M$ is a Janus forcing in a countable model $M$, then
we can extend it to a Janus forcing $\mathbb{J}$ which is in fact random forcing.
Since random forcing strongly preserves randoms over countable models (see
Lemma~\ref{lem:random.laver}), it is not surprising that we get local
preservation of randoms for Janus forcing, i.e., the analogues of
Lemma~\ref{lem:extendLDtopreserverandom} and
Corollary~\ref{cor:ultralaverlocalpreserving}. (Still, some additional argument
is needed, since the fact that~$\mathbb{J}$ (which is now random forcing) ``strongly preserves
randoms'' just means that a random real $r$ over $M$ is preserved with respect
to random forcing in $M$, not with respect to $\mathbb{J}^M$.)
\begin{Lem}\label{lem:janusrandompreservation}
If $\mathbb{J}^M$ is a Janus forcing in a countable model $M$ and
$r$ a random real over $M$, then there is a Janus forcing $\mathbb{J}$
such that $\mathbb{J}^M$ is an $M$-complete subforcing of $\mathbb{J}$ and the
following holds:
\\
\textbf{If}
\begin{itemize}
\item $p\in \mathbb{J}^M$,
\item in $M$, $\n {\bar Z} = ( \n Z_1, \ldots , \n Z_m) $
is a sequence of $\mathbb{J}^M$-names for
codes for null sets,
and $Z_1^*,\dots , Z_m^*$ are interpretations under~$p$,
witnessed by a sequence $(p_n)_{n\in \omega}$,
\item $Z^*_i \sqsubset_{k_i} r$ for $i=1,\dots, m$,
\end{itemize}
\textbf{then} there is a $q\leq p$ in $\mathbb{J}$ forcing that
\begin{itemize}
\item $r$ is random over $M[H^M]$,
\item $\n Z_i \sqsubset_{k_i} r$ for $i=1,\dots, m$.
\end{itemize}
\end{Lem}
\begin{Rem}
In the version for ultralaver forcings, i.e.,
Lemma~\ref{lem:extendLDtopreserverandom}, we had to assume that the stems of
the witnessing sequence are strictly increasing. In the Janus version, we do
not have any requirement of that kind.
\end{Rem}
\begin{proof}
Let $\mathcal D$ be the set of dense subsets of $\mathbb{J}^M$ in $M$.
According to Lemma~\ref{lem:random.over.mprime}, we
can first find some countable $M'$ such that $r$ is still random over $M'$
and such that in $M'$ both $\mathbb{J}^M$ and $\mathcal D$
are countable. According to Fact~\ref{fact:janus.ctblunion},
$\mathbb{J}^M$ is a (countable) Janus forcing in $M'$, so we can apply
Lemma~\ref{lem:janusmayberandom} to the set $\mathcal D$
to construct a Janus forcing $\mathbb{J}^{M'}$
which is equivalent to random forcing
such that (from the point of view of $V$) $\mathbb{J}^{M}\lessdot_M \mathbb{J}^{M'}$.
In $V$, let\footnote{More precisely:
Densely embed $\mathbb{J}^{M'}$ into
(Borel/null)$^{M'}$, the complete Boolean algebra associated with
random forcing in $M'$, and let $\mathbb{J}:=$ (Borel/null)$^V$. Using the
embedding, $\mathbb{J}^{M'}$ can now be viewed as an $M'$-complete
subset of $\mathbb{J}$.}
$\mathbb{J}$ be random forcing.
$\mathbb{J}^{M'}$ is an $M'$-complete subforcing of $\mathbb{J}$
and therefore $\mathbb{J}^{M}\lessdot_M \mathbb{J}$. Moreover,
as was noted in Lemma~\ref{lem:random.laver}, we even
know that random forcing strongly preserves randoms over $M'$
(see Definition~\ref{def:locally.random}). To show that
$\mathbb{J}$ is indeed a Janus forcing, we have to check the
fatness condition~\ref{def:Janus}(\ref{item:fat});
this follows easily from $\Pi^1_1$-absoluteness (recall that
incompatibility of random conditions is Borel).
So assume that (in $M$) the sequence $(p_n)_{n\in\omega}$
of $\mathbb{J}^M$-conditions interprets $\n {\bar Z}$ as $\bar Z^*$.
In $M'$, $\mathbb{J}^M$-names can be reinterpreted as $\mathbb{J}^{M'}$-names,
and the $\mathbb{J}^{M'}$-name $\n {\bar Z}$ is interpreted as $\bar Z^*$
by the same sequence $(p_n)_{n\in\omega}$.
Let $k_1,\ldots, k_m$ be such that $Z_i^*\sqsubset_{k_i} r$
for $i=1,\ldots, m$.
So by strong preservation of randoms, we can in $V$ find some
$q\leq p_0$ forcing that $r$ is random over $M'[H^{M'}]$ (and therefore
also over the smaller model $M[H^M]$), and that $\n Z_i\sqsubset_{k_i} r$ (where
$\n Z_i$ can be evaluated in
$M'[H^{M'}]$ or equivalently in $M[H^M]$).
\end{proof}
So Janus forcing locally preserves randoms (just like ultralaver forcing):
\begin{Cor}\label{cor:januslocallypreserves}
If $Q^M$ is a Janus forcing in $M$ and $r$ a real, then there is a
Janus forcing $Q$ over~$Q^M$ (which is in fact equivalent to random
forcing) locally preserving randomness of~$r$ over~$M$.
\end{Cor}
\begin{proof}
In this case, the notion of ``quick'' interpretations is
trivial, i.e., $D^{Q^M}_k = Q^M$ for all~$k$, and the claim follows from
the previous lemma.
\end{proof}
\section{Almost finite and almost countable support iterations}\label{sec:iterations}
A main tool to construct the forcing for BC+dBC will be ``partial countable
support iterations'', more particularly ``almost finite support'' and ``almost
countable support'' iterations. A partial countable support iteration is a
forcing iteration $(P_\alpha,Q_\alpha)_{\alpha<\om2}$ such that for each limit
ordinal $\delta$ the forcing notion $P_\delta$ is a subset of the countable
support limit of $(P_\alpha,Q_\alpha)_{\alpha<\delta}$ which satisfies some
natural properties (see Definition~\ref{partial_CS}).
Instead of transitive models, we will use ord-transitive models (which are
transitive when ordinals are considered as urelements). Why do we do that? We
want to ``approximate'' the generic iteration $\bar\mathbf{P}$ of length $\omega_2$
with countable models; this can be done more naturally with ord-transitive
models (since obviously countable transitive models only see countable
ordinals). We call such an ord-transitive model a ``candidate'' (provided it
satisfies some nice properties, see Definition~\ref{def:candidate}). A basic
point is that forcing extensions work naturally with candidates.
In the next few paragraphs (and also in Section~\ref{sec:construction}),
$x=(M^x,\bar P^x)$ will denote a pair such that $M^x$ is a
candidate and $\bar P^x$ is (in $M^x$) a partial countable support iteration; similarly
we write, e.g., $y= (M^y, \bar P^ y) $ or $x_n=(M^{x_n},\bar P^{x_n})$.
We will need the following results to prove BC+dBC. (However, as opposed to the
case of the ultralaver and Janus sections, the reader will probably have to read
this section to understand the construction in the next section, and not just
the following list of properties.)
Given $x=(M^x,\bar P^x)$, we can construct
by induction on $\alpha$ a
partial countable support iteration $\bar P = (P_\alpha, Q_\alpha)_{\alpha
< \om2}$ satisfying:
\begin{quote}
There is a canonical $M^x$-complete embedding from $\bar P^x$ to
$\bar P$.
\end{quote}
In this construction, we can use at each stage $\beta$
any desired $Q_\beta$, as long as
$P_\beta$ forces that $Q^x_\beta$ is (evaluated as)
an $M^x[H^x_\beta]$-complete subforcing of~$Q_\beta$ (where
$H^x_\beta\subseteq P^x_\beta$
is the $M^x$-generic filter induced by the generic filter~$H_\beta\subseteq P_\beta$).
\\
Moreover, we can demand either of
the following two additional properties\footnote{The $\sigma$-centered version is central for the proof of dBC; the random preserving version
for BC.}
of the limit of this iteration~$\bar P$:
\begin{enumerate}
\item
If all $Q_\beta$ are forced to be $\sigma$-centered,
and $Q_\beta$ is trivial for all $\beta\notin M^x$,
then $P_{\omega_2}$ is $\sigma$-centered.
\item
If $r$ is random over
$M^x$, and all $Q_\beta$ locally preserve randomness of $r$
over $M^x[H^x_\beta]$ (see
Definition~\ref{def:locally.random}), then also $P_{\om2}$
locally preserves the randomness of $r$.
\end{enumerate}
Actually, we need the following variant: Assume that we already
have $P_{\alpha_0}$
for some $\alpha_0\in M^x$, and that $P^x_{\alpha_0}$ canonically
embeds into~$P_{\alpha_0}$, and that
the respective assumption on $Q_\beta$ holds for all $\beta\ge
\alpha_0$. Then we get that $P_{\alpha_0}$ forces that the quotient
$P_{\omega_2}/P_{\alpha_0}$ satisfies the respective conclusion.
We also need:\footnote{This will give $\sigma$-closure
and $\al2$-cc for the preparatory forcing $\mathbb{R}$.}
\begin{enumerate}\setcounter{enumi}{2}
\item
If instead of a single $x$ we have a sequence $x_n$
such that each $P^{x_n}$ canonically (and $M^{x_n}$-completely)
embeds into~$P^{x_{n+1}}$, then
we can find a partial countable support
iteration $\bar P$ into which all $P^{x_n}$ embed canonically (and we can again use
any desired $Q_\beta$, assuming that $Q^{x_n}_\beta$
is an $M^{x_n}[H^{x_n}_\beta]$-complete subforcing of $Q_\beta$ for all $n\in\omega$).
\item
(A fact that is easy to prove but awkward to formulate.)
If a $\Delta$-system argument produces two $x_1$, $x_2$ as in Lemma~\ref{lem:prep.is.sigma.preparation}(\ref{item:karotte6}),
then we can find a partial countable support iteration $\bar P$
such that $\bar P^{x_i}$ canonically (and $M^{x_i}$-completely)
embeds into~$\bar P$ for $i=1,2$.
\end{enumerate}
\subsection{Ord-transitive models}\label{subsec:ordtrans}
We will use ``ord-transitive'' models, as introduced in~\cite{MR2115943} (see
also the presentation in~\cite{kellnernep}). We briefly summarize the basic
definitions and properties (restricted to the rather simple case needed in this
paper):
\begin{Def}\label{def:candidate} Fix a suitable finite subset ZFC$^*$ of ZFC (that is
satisfied by $H(\chi^*)$ for sufficiently large regular $\chi^*$).
\begin{enumerate}
\item A set $M$ is called a \emph{candidate}, if
\begin{itemize}
\item $M$ is countable,
\item $(M,\in)$ is a model of ZFC$^*$,
\item $M$ is ord-absolute:
$M \models \alpha\in \ON$ iff $\alpha\in \ON$,
for all $\alpha\in M$,
\item $M$ is ord-transitive: if $x\in M\setminus \ON$, then
$x\subseteq M$,
\item $\omega+1\subseteq M$.
\item ``$\alpha$ is a limit ordinal'' and ``$\alpha=\beta+1$''
are both
absolute between $M$ and $V$.
\end{itemize}
\item A candidate $M$ is called \emph{nice}, if
``$\alpha$ has countable cofinality''
and ``the countable set $A$ is cofinal in $\alpha$'' both are
absolute between $M$ and $V$. (So
if $\alpha\in M$ has countable cofinality, then
$\alpha\cap M$ is cofinal in $\alpha$.)
Moreover, we assume $\om1\in M$ (which implies
$\om1^M=\om1$) and $\om2\in M$ (but we do not require
$\om2^M= \om2$).
\item
Let $P^M$ be a forcing notion in a candidate $M$.
(To simplify notation, we can assume without loss of generality that
$P^M\cap \ON=\emptyset$ (or at least $\subseteq \omega$)
and that therefore $P^M\subseteq M$ and also $A\subseteq
M$ whenever $M$ thinks that $A$ is a subset of $P^M$.)
Recall that a subset $H^M$ of $P^M$ is $M$-generic (or: $P^M$-generic over $M$),
if $|A\cap H^M|=1$ for all maximal antichains $A$ in $M$.
\item Let $H^M$ be $P^M$-generic over $M$ and $\n\tau$ a $P^M$-name in $M$.
We define the evaluation $\n\tau[H^M]^M$ to be
$x$ if
$M$ thinks that $p\Vdash_{P^M} \n\tau=\std x$
for some $p\in H^M$ and $x\in M$ (or equivalently just for $x\in M\cap \ON$),
and $\{\n\sigma[H^M]^M:\, (\n\sigma,p)\in\n\tau,\,
p\in H^M\}$ otherwise.
Abusing notation we write $\n\tau[H^M]$ instead of $\n\tau[H^M]^M$,
and we write $M[H^M]$ for $\{\n\tau[H^M]:\, \n\tau\text{ is a $P^M$-name
in }M\}$.
\item For any set $N$ (typically, an elementary submodel of some $H(\chi)$),
the ord-collapse $k$ (or $k^N$) is a recursively defined function with domain $N$:
$k(x)=x$ if $x\in \ON$, and $k(x)=\{k(y):\,
y\in x\cap N\}$ otherwise.
\item We define $\ordclos(\alpha):=\emptyset$ for all ordinals $\alpha$.
The ord-transitive closure of a non-ordinal $x$ is defined
inductively on the rank:
\[
\ordclos(x)=x\cup\bigcup \{\ordclos(y):y\in x\setminus \ON\}
=x\cup\bigcup \{\ordclos(y):y\in x\}.
\]
So for $x\notin \ON$, the set
$\ordclos(x)$ is the smallest ord-transitive set
containing $x$ as a subset.
HCON is the collection of all sets $x$ such that
the ord-transitive closure of $x$ is countable.
$x$ is in HCON iff $x$ is an element of some candidate.
In particular, all reals and all ordinals are HCON.
We write HCON$_\alpha$ for the family of all sets $x$ in HCON whose transitive
closure
only contains ordinals $<\alpha$.
\end{enumerate}
\end{Def}
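To illustrate the last two notions: for $x=\{\om1,\{\om1\}\}$ we get $\ordclos(x)=\{\om1,\{\om1\}\}$ (ordinals contribute nothing, and $\ordclos(\{\om1\})=\{\om1\}$), which is countable; so $x$ is in HCON, even though the \emph{transitive} closure of $x$ is uncountable. On the other hand, the transitive closure of $x$ contains the ordinal~$\om1$, so $x$ is in HCON$_\alpha$ exactly for $\alpha>\om1$.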
The following facts can be found in~\cite{MR2115943} or~\cite{kellnernep} (they
can be proven by rather straightforward, if tedious, inductions on the ranks of
the objects in question).
\begin{Fact}\label{fact:hcon}
\begin{enumerate}
\item The ord-collapse of a countable elementary submodel of $H(\chi^*)$ is a nice
candidate.
\item Unions, intersections etc.\ are generally not absolute for
candidates. For example, let $x\in M\setminus \ON$. In $M$ we can
construct a set $y$ such that $M\models y=\om1\cup\{x\}$.
Then $y$ is not an ordinal and therefore a subset of $M$, and
in particular $y$ is countable and $y\not=\om1\cup \{x\}$.
\item Let $j:M\to M'$ be the transitive collapse of a candidate $M$,
and $f:\om1\cap M'\to \ON$ the inverse (restricted to the ordinals).
Obviously $M'$ is a countable transitive model of ZFC$^*$;
moreover $M$ is characterized by the pair $(M',f)$
(we call such a pair a ``labeled transitive model'').
Note that $f$ satisfies $f(\alpha+1)=f(\alpha)+1$,
$f(\alpha)=\alpha$ for $\alpha\in \omega\cup\{\omega\}$.
$M\models (\alpha\text{ is a limit})$ iff $f(\alpha)$ is a limit.
$M\models \cf(\alpha)=\omega$ iff $\cf(f(\alpha))=\omega$, and
in that case $f[\alpha]$ is cofinal in $f(\alpha)$.
On the other hand, given a transitive countable model $M'$ of ZFC$^*$
and an $f$ as above, then we can construct a (unique) candidate $M$
corresponding to $(M',f)$.
\item All candidates $M$ with $M\cap \ON \subseteq \omega_1$ are hereditarily countable, so their number is at most $2^{\al0}$. Similarly, the cardinality of HCON$_\alpha$ is at most continuum
whenever $\alpha < \omega_2$.
\item If $M$ is a candidate, and if $H^M$ is $P^M$-generic over $M$, then $M[H^M]$ is a candidate
as well and an end-extension
of $M$ such that $M\cap\ON=M[H^M]\cap \ON$. If $M$ is nice and
($M$ thinks that) $P^M$ is proper, then
$M[H^M]$ is nice as well.
\item Forcing extensions commute with the transitive collapse $j$:
If $M$ corresponds to $(M',f)$, then
$H^M\subseteq P^M$ is $P^M$-generic over $M$ iff
$H'\coloneqq {j}[H^M]$ is $P'\coloneqq j(P^M)$-generic over $M'$,
and in that case $M[H^M]$ corresponds to $(M'[H'],f)$.
In particular, the forcing extension $M[H^M]$ of $M$ satisfies
the forcing theorem (everything that is forced is true, and
everything true is forced).
\item For elementary submodels, forcing extensions commute
with ord-collapses:
Let $N$ be a countable elementary submodel of $H(\chi^*)$,
$P\in N$, $k:N\to M$ the ord-collapse (so $M$ is a candidate),
and let $H$ be $P$-generic
over $V$.
Then $H$ is $P$-generic over $N$
iff $H^M\coloneqq k[H]$
is $P^M\coloneqq k(P)$-generic over $M$;
and in that case the ord-collapse of $N[H]$
is $M[H^M]$.
\end{enumerate}
\end{Fact}
Assume that a nice candidate $M$ thinks that $(\bar P^M,\bar Q^M)$ is a forcing
iteration of length $\om2^V$ (we will usually write $\om2$ for the length of
the iteration, by this we will always mean $\om2^V$ and not the possibly
different $\om2^M$). In this section, we will construct an iteration $(\bar
P,\bar Q)$ in $V$, also of length $\om2$, such that each $P^M_\alpha$
canonically and $M$-completely embeds into $P_\alpha$ for all $\alpha\in
\om2\cap M$. Once we know (by induction) that $P^M_\alpha$ $M$-completely
embeds into $P_\alpha$, we know that a $P_\alpha$-generic filter $H_\alpha$
induces a $P^M_\alpha$-generic (over $M$) filter which we call $H^M_\alpha$.
Then $M[H^M_\alpha]$ is a candidate, but nice only if $P^M_\alpha$ is proper.
We will not need that $M[H^M_\alpha]$ is nice; in fact, we will only
investigate sets of reals (or elements of $H(\al1)$) in $M[H^M_\alpha]$, so it
does not make any difference whether we use $M[H^M_\alpha]$ or its transitive
collapse.
\begin{Rem}\label{rem:fine.print}
In the discussion so far we omitted some details regarding the theory ZFC$^*$
(that a candidate has to satisfy). The following ``fine print'' hopefully
absolves us from any liability. (It is entirely irrelevant for the
understanding of the paper.)
We have to guarantee that each $M[H^M_\alpha]$ that we consider satisfies
enough of ZFC to make our arguments work (for example, the definitions and
basic properties of ultralaver and Janus forcings should work). This turns out
to be easy, since (as usual) we do not need the full power set axiom for these
arguments (just the existence of, say, $\beth_5$). So it is enough that each
$M[H^M_\alpha]$ satisfies some fixed finite subset of ZFC minus power set, which
we call ZFC$^*$.
Of course we can also find a bigger (still finite) set ZFC$^{**}$ that implies:
$\beth_{10}$ exists, and each forcing extension of the universe with a forcing
of size $\le \beth_4$ satisfies ZFC$^*$. And it is provable (in ZFC) that each
$H(\chi)$ satisfies ZFC$^{**}$ for sufficiently large regular $\chi$.
We define ``candidate'' using the weaker theory ZFC$^*$, and require that nice
candidates satisfy the stronger theory ZFC$^{**}$. This guarantees that all
forcing extensions (by small forcings) of nice candidates will be candidates
(in particular, satisfy enough of ZFC such that our arguments about Janus or
ultralaver forcings work). Also, every ord-collapse of a countable elementary
submodel $N$ of $H(\chi)$ will be a nice candidate.
\end{Rem}
\subsection{Partial countable support iterations}\label{subsec:partialCS}
We introduce the notion of ``partial countable support limit'': a
subset of the countable support (CS) limit containing the union (i.e.,
the direct limit) and satisfying some natural requirements.
Let us first describe what we mean by a ``forcing iteration''; our iterations have to satisfy the
following requirements:
\begin{itemize}
\item A \qemph{topless forcing iteration} $(P_\alpha,Q_\alpha)_{\alpha<\varepsilon}$
is a sequence of forcing notions $P_\alpha$ and $P_\alpha$-names
$Q_\alpha$ of quasiorders with a weakest element~$1_{Q_\alpha}$.
A \qemph{topped iteration} additionally has a final limit~$P_\varepsilon$.
Each $ P_\alpha$ is a set of partial functions
on $\alpha$
(as, e.g., in~\cite{MR1234283}). More specifically,
if $\alpha<\beta\le \varepsilon$ and $p\in P_\beta$, then
$p\mathord\restriction\alpha\in P_\alpha$. Also,
$p\mathord\restriction\beta\Vdash_{P_\beta} p(\beta)\in Q_\beta$ for all $\beta\in\dom(p)$.
The order on $P_\beta$ will always be the ``natural'' one:
$q\leq p$ iff
$q\mathord\restriction\alpha$ forces (in $P_\alpha$) that
$q^{\textrm{\textup{tot}}}(\alpha)\leq p^{\textrm{\textup{tot}}}(\alpha)$ for all $\alpha < \beta$,
where $r^{\textrm{\textup{tot}}}(\alpha)=r(\alpha)$
for all $\alpha\in\dom(r)$ and $1_{Q_\alpha}$ otherwise. $P_{\alpha+1}$ consists
of \emph{all} $p$ with $p\mathord\restriction\alpha\in P_\alpha$ and $p\mathord\restriction \alpha\Vdash p^{\textrm{\textup{tot}}}(\alpha)\in Q_\alpha$, so it is forcing equivalent to $P_\alpha*Q_\alpha$.
\item $P_\alpha \subseteq P_\beta$ whenever $\alpha < \beta\le
\varepsilon$. (In particular, the empty condition is an element of each
$P_\beta$.)
\item For any $p \in P_\varepsilon$ and any $q \in P_\alpha$ ($\alpha < \varepsilon$) with $q \leq p \mathord\restriction \alpha$, the partial function
$q \land p\coloneqq q\cup p\mathord\restriction[\alpha,\varepsilon)$ is a condition in $P_\varepsilon$ as well (so in particular, $p \mathord\restriction \alpha$ is a reduction of $p$, hence $P_\alpha$ is a complete subforcing of $P_\varepsilon$; and $q\land p$ is the weakest condition
in $P_\varepsilon$ stronger than both $q$ and $p$).
\item Abusing notation, we usually just write $\bar P$
for an iteration (be it topless or topped).
\item We usually write $H_\beta$ for the
generic filter on $P_\beta$ (which induces
$P_\alpha$-generic filters called $H_\alpha$ for $\alpha\le\beta$).
For topped iterations we call the filter on the final limit sometimes just $H$
instead of $H_\varepsilon$.
\end{itemize}
We use the following notation for quotients of iterations:
\begin{itemize}
\item For $\alpha<\beta$, in the $P_\alpha$-extension
$V[H_\alpha]$, we let $P_\beta/H_\alpha$ be the set
of all $p\in P_\beta$ with $p\mathord\restriction\alpha\in H_\alpha$
(ordered as in~$P_\beta$). We may occasionally write $P_\beta/P_\alpha$ for the $P_\alpha$-name of
$P_\beta/H_\alpha$.
\item Since $P_\alpha$ is a complete subforcing of $P_\beta$,
this is a quotient with the usual properties, in particular
$P_\beta$ is equivalent to $P_\alpha*(P_\beta/H_\alpha)$.
\end{itemize}
\begin{Rem}
It is well known that quotients of proper countable support iterations are
naturally equivalent to (names of) countable support iterations. In this paper,
we can restrict our attention to proper forcings, but we do not really have
countable support iterations. It turns out that it is not necessary to
investigate whether our quotients can naturally be seen as iterations of any
kind, so to avoid the subtle problems involved we will not consider the
quotient as an iteration by itself.
\end{Rem}
\begin{Def}\label{def:fullCS}
Let $\bar P$ be a (topless) iteration of limit length $\varepsilon$.
We define three limits of $\bar P$:
\begin{itemize}
\item The \qemph{direct limit} is the union of the $P_\alpha$ (for $\alpha<\varepsilon$).
So this is the smallest possible limit of the iteration.
\item The \qemph{inverse limit} consists of \emph{all} partial functions $p$
with domain $\subseteq \varepsilon$ such that $p\mathord\restriction\alpha\in P_\alpha$
for all $\alpha<\varepsilon$.
This is the largest possible limit of the iteration.
\item
The \qemph{full countable support limit $P^{\textrm{\textup{CS}}}_\varepsilon$} of $\bar P$
is the inverse limit if $\cf(\varepsilon)=\omega$ and the direct limit otherwise.
\end{itemize}
We say that $P_\varepsilon$ is a \qemph{partial CS limit}, if
$P_\varepsilon$ is a subset of the full CS limit and
the sequence $(P_\alpha)_{\alpha\le \varepsilon}$ is a topped iteration.
In particular, this means that $P_\varepsilon$
contains the direct limit,
and satisfies the following for each $\alpha<\varepsilon$:
$P_\varepsilon$ is closed under $p\mapsto p\mathord\restriction \alpha$, and
whenever $p\in P_\varepsilon$, $q\in P_\alpha$, $q\le p\mathord\restriction\alpha$,
then also the partial function $q\land p$ is in~$P_\varepsilon$.
\end{Def}
So for a given topless $\bar P$ there is a well-defined inverse, direct and full
CS limit. If $\cf(\varepsilon)>\omega$, then the direct and the full CS limit
coincide. If $\cf(\varepsilon)=\omega$,
then the direct limit and the full CS limit (=inverse limit) differ. Both of them are partial CS limits,
but there are many more
possibilities for partial CS limits. By definition, all of them will yield
iterations.
Note that the name ``CS limit'' is slightly inappropriate, as the size of
supports of conditions is not part of the definition. To give a
more specific example:
Consider a topped iteration $\bar P $ of length $\omega+\omega$ where
$P_\omega$ is the direct limit and $P_{\omega+\omega} $ is the full
CS limit. Let $p$ be any element of the full CS limit of $\bar P \mathord\restriction \omega$ which
is not in~$P_\omega$; then $p$ is not in $P_{\omega+\omega}$ either. So not
every countable subset of $\omega+\omega$ can appear as the support of a
condition.
\begin{Def}\label{partial_CS}
A forcing iteration $\bar P$ is called a \qemph{partial CS iteration}, if
\begin{itemize}
\item every limit is a partial CS limit, and
\item every $Q_\alpha$ is (forced to be) separative.\footnote{The reason for this requirement is briefly discussed in Section~\ref{sec:alternativedefs}. Separativity, as well as the relations $\leq^*$ and $=^*$, are defined on page~\pageref{def:separative}.}
\end{itemize}
\end{Def}
The following fact can easily be proved by transfinite induction:
\begin{Fact}\label{fact:eq.eqstar}
Let $\bar P$ be a partial CS iteration. Then for all $\alpha$ the forcing
notion $P_\alpha$ is separative.
\end{Fact}
{}From now on, all iterations we consider will be partial CS
iterations. In this paper, we will only be interested in proper
partial CS iterations, but properness is not part of the definition of
partial CS iteration. (The reader may safely assume that all
iterations are proper.)
Note that separativity of the $Q_\alpha $ implies that all partial CS
iterations satisfy the following (trivially equivalent) properties:
\begin{Fact}\label{fact:suitable.equivalent}
Let $\bar P$ be a topped partial CS iteration of length $\varepsilon$. Then:
\begin{enumerate}
\item Let $H$ be $ P_\varepsilon$-generic. Then $p\in H$ iff $p\mathord\restriction\alpha\in H_\alpha$ for all $\alpha < \varepsilon$.
\item For all $q,p \in P_\varepsilon$: If $q\mathord\restriction \alpha \leq^\ast p\mathord\restriction \alpha$ for each $\alpha < \varepsilon$, then $q \leq^\ast p$.
\item \label{item:3}
For all $q,p \in P_\varepsilon$: If $q\mathord\restriction \alpha \leq^\ast p\mathord\restriction \alpha$ for each $\alpha < \varepsilon$, then $q \parallel p$.
\end{enumerate}
\end{Fact}
We will be concerned with the following situation:
Assume that $M$ is a nice candidate, $\bar P^M$ is (in~$M$) a topped partial CS
iteration of length $\varepsilon$ (a limit ordinal in~$M$), and $\bar P$ is (in
$V$) a topless partial CS iteration of length
$\varepsilon'\coloneqq \sup(\varepsilon\cap M)$. (Recall that
``$\cf(\varepsilon)=\omega$'' is absolute between $M$ and $V$, and that
$\cf(\varepsilon)=\omega$ implies $\varepsilon'=\varepsilon$.) Moreover, assume
that we already have a system of $M$-complete coherent\footnote{I.e., they
commute with the restriction maps: $i_\alpha(p \mathord\restriction \alpha) = i_\beta(p) \mathord\restriction
\alpha$ for $\alpha < \beta$ and $p\in P^M_\beta$.} embeddings
$i_\beta:P^M_\beta\to P_\beta$ for $\beta\in \varepsilon'\cap M=\varepsilon\cap M$.
(Recall that any potential partial CS limit of $\bar P$ is a subforcing of
the full CS limit~$P^{\textrm{\textup{CS}}}_{\varepsilon'}$.)
It is easy to see that there is only one possibility for an embedding $j:
P^M_\varepsilon\to P^{\textrm{\textup{CS}}}_{\varepsilon'} $ (in fact, into any potential partial
CS limit of~$\bar P$) that extends the $i_\beta$'s naturally:
\begin{Def}\label{def:canonicalextension}
For a topped partial CS iteration $\bar P^M$ in $M$ of length $\varepsilon$
and a topless one $\bar P$ in $V$ of length $\varepsilon'\coloneqq \sup(\varepsilon\cap M)$
together with coherent embeddings $i_\beta$, we define
$j: P^M_\varepsilon\to P^{\textrm{\textup{CS}}}_{\varepsilon'} $, the \qemph{canonical extension},
in the obvious way:
Given $p \in P_\varepsilon^M$, take
the sequence of restrictions to $M$-ordinals,
apply the functions $i_\beta$, and let
$j(p)$ be the union of the resulting coherent sequence.
\end{Def}
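In symbols,
\[
j(p) \ =\ \bigcup_{\beta\in\varepsilon\cap M} i_\beta(p\mathord\restriction\beta)
\qquad\text{for } p\in P^M_\varepsilon,
\]
which is a well-defined element of $P^{\textrm{\textup{CS}}}_{\varepsilon'}$, since the $i_\beta$ are coherent and restrictions of conditions are conditions.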
We do not claim that $j: P^M_\varepsilon\to P^{\textrm{\textup{CS}}}_{\varepsilon'} $ is
$M$-complete.\footnote{\newcounter{myfootnote}\setcounter{myfootnote}{\value{footnote}}
\label{cs.fs.footnote}
For example, if $\varepsilon=\varepsilon'=\omega$ and if $P^M_\omega$ is the
finite support limit of a nontrivial iteration, then $j:P^M_\omega\to P^{\textrm{\textup{CS}}}_\omega$ is not
complete: For notational simplicity, assume that all $Q^M_n$ are (forced to be) Boolean
algebras. In $M$, let $c_n$ be (a $P^M_n$-name for) a nontrivial
element of $Q^M_n$ (so $\lnot c_n$, the Boolean complement, is also nontrivial).
Let $p_n$ be the $P^M_n$-condition $(c_0, \ldots, c_{n-1})$, i.e.,
the truth value of ``$c_m\in H(m)$ for all $m<n$''. Let $q_n$ be the
$P^M_{n+1}$-condition $(c_0, \ldots, c_{n-1}, \lnot c_n)$, i.e., the truth
value of
``$n$ is minimal with $c_n\notin H(n)$''. In $M$, the set $A=\{q_n:\, n\in\omega\}$
is a maximal antichain in $P^M_\omega$. Moreover, the sequence
$(p_n)_{n\in\omega}$ is a decreasing coherent sequence, therefore $i_n(p_n)$
defines an element $p_\omega$ in $P^{\textrm{\textup{CS}}}_\omega$, which is clearly incompatible
with all $j(q_n)$, hence $j[A] $ is not maximal.}
In the following, we will construct partial CS limits $P_{\varepsilon'}$ such
that $j:P^M_\varepsilon \to P_{\varepsilon'}$ is $M$-complete. (Obviously, one
requirement for such a limit is that $j[P^M_\varepsilon]\subseteq
P_{\varepsilon'}$.) We will actually define two versions: The almost FS (``almost
finite support'') and the
almost CS (``almost countable support'') limit.
Note that there is only one effect that the ``top'' of $\bar P^M$ (i.e., the
forcing $P^M_\varepsilon$) has on the canonical extension~$j$: It determines the
domain of $j$. In particular it will generally depend on $P^M_\varepsilon$
whether $j$ is complete or not. Apart from that, the value of any given $j(p)$
does not depend on $P^M_\varepsilon$.
Instead of arbitrary systems of embeddings $i_\alpha$, we will only be interested in
``canonical'' ones.
We assume for notational convenience that
$Q^M_\alpha$ is a subset of $Q_\alpha$ (this will naturally be the case in our application anyway).
\begin{Def}[The canonical embedding]\label{def:canonicalembedding}
Let $\bar P$ be a partial CS iteration in~$V$
and $\bar P^M$ a partial CS iteration in~$M$, both topped and of length~$\varepsilon\in M$.
We construct by induction on $\alpha\in (\varepsilon+1) \cap M$
the canonical $M$-complete embeddings $i_\alpha:P^M_\alpha\to P_\alpha$.
More precisely: We try to construct them, but it is possible that the
construction fails. If the construction succeeds, then
we say that \qemph{$\bar P^M$ (canonically) embeds into $\bar P$}, or
\qemph{the canonical embeddings work}, or
just:
\qemph{$\bar P$ is over $\bar P^M$}, or \qemph{over $P^M_\varepsilon$}.
\begin{itemize}
\item Let $\alpha=\beta+1$. By induction hypothesis,
$i_\beta$ is $M$-complete, so a $V$-generic
filter $H_\beta\subseteq P_\beta$ induces an $M$-generic filter $H^M_\beta
\coloneqq i_{\beta}^{-1}[H_\beta]\subseteq P^M_\beta$.
We require that (in the $H_\beta$ extension) the set $Q^M_\beta[H^M_\beta]$
is an $M[H^M_\beta]$-complete
subforcing of $Q_\beta[H_\beta]$. In this case, we define $i_\alpha$ in the
obvious way.
\item For $\alpha$ limit, let $i_\alpha$ be the canonical
extension of the family $(i_\beta)_{\beta\in \alpha\cap M}$.
We require that $P_\alpha$ contains the
range of $i_\alpha$, and that $i_\alpha$ is $M$-complete; otherwise the
construction fails.
(If $\alpha'\coloneqq \sup(\alpha\cap M) < \alpha$, then $i_\alpha $
will actually be an $M$-complete map into $P_{\alpha'}$, assuming that the
requirement is fulfilled.)
\end{itemize}
\end{Def}
In this section we try to construct a partial CS iteration $\bar P$
(over a given $\bar P^M$) satisfying additional properties.
\begin{Rem}
What is the role of $\varepsilon'\coloneqq \sup( \varepsilon\cap M)$? When our
inductive construction of $\bar P$ arrives at~$P_\varepsilon$ where $\varepsilon'<
\varepsilon$, it would be too late\footnote{%
\label{fn:too.late}
For example:
Let $\varepsilon=\om1$ and $\varepsilon'=\om1\cap M$.
Assume that $P^M_{\om1}$ is (in $M$) a (or: the unique)
partial CS limit of a nontrivial iteration.
Assume that we have a topless iteration $\bar P$ of length $\varepsilon'$
in $V$ such that the canonical embeddings work for all $\alpha\in\om1\cap M$.
If we set $P_{\varepsilon'}$ to be the full CS limit, then
we cannot further extend it to any iteration of length $\om1$
such that the canonical embedding $i_{\om1}$ works: Let
$p_\alpha$ and $q_\alpha$ be as in footnote~\ref{cs.fs.footnote}. In $M$,
the set
$A=\{q_\alpha:\alpha\in\om1\}$ is a maximal antichain, and the sequence
$(p_\alpha)_{\alpha\in\om1}$ is a decreasing coherent
sequence. But in $V$ there is an element $p_{\varepsilon'}\in P^{\textrm{\textup{CS}}}_{\varepsilon'}$
with $p_{\varepsilon'}\mathord\restriction \alpha = j( p_\alpha)$ for all $\alpha\in \varepsilon\cap M$. This
condition $p_{\varepsilon'}$ is clearly incompatible
with all elements of $j[A] = \{ j(q_\alpha): \alpha \in \varepsilon\cap M\}$.
Hence $j[A] $ is not maximal.}
to take care of $M$-completeness of
$i_\varepsilon$ at this stage, even if all $i_\alpha$ work nicely for $\alpha\in
\varepsilon \cap M$. Note that $\varepsilon'<\varepsilon$ implies
that $\varepsilon$ is uncountable in $M$, and that therefore
$P^M_\varepsilon = \bigcup_{\alpha\in \varepsilon\cap M} P^M_\alpha$.
So the natural extension $j$ of the embeddings $(i_\alpha)_{\alpha\in
\varepsilon \cap M}$ has range in $ P_{\varepsilon'}$, which will be a complete
subforcing of $P_\varepsilon$. So we have to ensure
$M$-completeness already in the construction of $P_{\varepsilon'}$.
\end{Rem}
For now we just record:
\begin{Lem}\label{lem:wolfgang}
Assume that we have topped iterations
$\bar P^M$ (in~$M$) of length $\varepsilon$ and
$\bar P$ (in~$V$) of length $\varepsilon'\coloneqq \sup(\varepsilon\cap M)$,
and that for all $\alpha\in\varepsilon\cap M$
the canonical embedding $i_\alpha: P^M_\alpha\to P_\alpha$ works.
Let $i_\varepsilon: P^M_\varepsilon\to P^{\textrm{\textup{CS}}}_{\varepsilon'}$
be the canonical extension.
\begin{enumerate}
\item\label{item:pathetic099}
If $P^M_\varepsilon$ is (in $M$) a direct limit (which
is always the case if $\varepsilon$ has uncountable cofinality)
then $i_\varepsilon$
(might not work, but at least) has range in $P_{\varepsilon'}$
and preserves incompatibility.
\item\label{item:suitable_implies_filter}
If $i_\varepsilon$ has a range contained in $P_{\varepsilon'}$
and maps predense sets $D\subseteq P^M_\varepsilon$ in $M$
to predense sets $i_\varepsilon[D]\subseteq P_{\varepsilon'}$,
then $i_\varepsilon$ preserves incompatibility (and therefore works).
\end{enumerate}
\end{Lem}
\begin{proof} (1)
Since $P^M_\varepsilon$ is a direct limit, the canonical extension
$i_\varepsilon $ has range in $\bigcup_{\alpha<\varepsilon'}
P_\alpha$, which is subset of any partial CS limit $P_{\varepsilon'}$.
Incompatibility in $P^M_\varepsilon$ is the same as incompatibility in
$P^M_\alpha$ for sufficiently large $\alpha\in \varepsilon\cap M$, so
by assumption it is preserved by $i_\alpha$ and hence also by~$i_\varepsilon$.
(2)
Fix $p_1,p_2\in P^M_\varepsilon$, and assume that their images are
compatible in $P_{\varepsilon'}$; we have to show that they
are compatible in $P^M_\varepsilon$. So fix
a generic filter $H\subseteq P_{\varepsilon'} $ containing $i_\varepsilon(p_1)$
and $i_\varepsilon(p_2)$.
In $M$, we define the following set $D$:
\[
D \coloneqq \{ q \in P^M_\varepsilon:
(q \leq p_1 \land q \leq p_2) \textrm{ or }
(\exists \alpha < \varepsilon: q\mathord\restriction \alpha \perp_{P^M_\alpha} p_1\mathord\restriction \alpha) \textrm{ or }
(\exists \alpha < \varepsilon: q\mathord\restriction \alpha \perp_{P^M_\alpha} p_2\mathord\restriction \alpha)
\}.
\]
Using Fact~\ref{fact:suitable.equivalent}(\ref{item:3}) it is easy to check that $D$ is dense.
Since $i_\varepsilon$ preserves predensity,
there is $q\in D$ such that $i_\varepsilon(q)\in H$. We claim that
$q$ is stronger than $p_1$ and~$p_2$. Otherwise we would have without loss
of generality
$ q\mathord\restriction \alpha \perp_{P^M_\alpha} p_1\mathord\restriction \alpha$ for some
$\alpha<\varepsilon$. But the filter $H\mathord\restriction \alpha$ contains both $i_\alpha(q\mathord\restriction\alpha)$
and $i_\alpha(p_1\mathord\restriction\alpha)$, contradicting the assumption that $i_\alpha$
preserves incompatibility.
\end{proof}
\subsection{Almost finite support iterations}
Recall Definition~\ref{def:canonicalextension} (of the canonical extension)
and
the setup that was described there: We have to find a
subset $P_{\varepsilon'}$ of $P^{\textrm{\textup{CS}}}_{\varepsilon'}$ such that the canonical extension~$j:P^M_\varepsilon\to P_{\varepsilon'}$ is $M$-complete.
We now define the almost finite support limit. (The direct limit will in
general not do, as it may not contain the range $j[P^M_\varepsilon]$. The
almost finite support limit is the obvious modification of the direct limit;
it is the smallest partial CS limit $P _{\varepsilon'}$ such that $j[P^M_\varepsilon]\subseteq
P_{\varepsilon'}$, and $j$ indeed turns out to be $M$-complete.)
\begin{Def}\label{def:almostfs}
Let $\varepsilon$ be a limit ordinal in~$M$, and let
${\varepsilon'}\coloneqq \sup(\varepsilon\cap M)$.
Let $\bar P^M$ be a topped iteration in~$M$ of length $\varepsilon$,
and let $\bar P$ be a topless iteration in~$V$ of length $\varepsilon'$.
Assume that the canonical embeddings $i_\alpha$ work for all $\alpha
\in \varepsilon \cap M = \varepsilon' \cap M$. Let $i_\varepsilon$
be the canonical extension.
We define the \emph{almost finite support limit of $\bar P$ over
${\bar P^M}$} (or: almost FS limit) as the following subforcing
$P_{\varepsilon'}$ of $P^{\textrm{\textup{CS}}}_{\varepsilon'}$:
\[P_{\varepsilon'}\coloneqq \{\,
q \land i_\varepsilon(p) \in P^{\textrm{\textup{CS}}}_{\varepsilon'} :\ p\in P^M_\varepsilon \text{ and }
q\in P_\alpha\text{ for some } \alpha\in \varepsilon\cap M\text{ such that }
q\le _{P_\alpha} i_\alpha(p\mathord\restriction \alpha) \, \}.\]
\end{Def}
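To unwind the notation (a sketch of the intended reading, using the meet $q\land p$ for $q\le p\mathord\restriction\alpha$ as in partial CS iterations): such a condition $q\land i_\varepsilon(p)$ is the element $r$ of $P^{\textrm{\textup{CS}}}_{\varepsilon'}$ with
\[
r(\gamma)=
\begin{cases}
q(\gamma) & \text{for } \gamma<\alpha,\\
i_\varepsilon(p)(\gamma) & \text{for } \alpha\le \gamma<\varepsilon'.
\end{cases}
\]
So each condition consists of a single ``tail'' $i_\varepsilon(p)$ coming from $M$, modified on a bounded initial segment by a side condition $q\in P_\alpha$.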
Note that for $\cf(\varepsilon)>\omega$, the almost FS limit
is equal to the direct limit, as each $p\in P^M_\varepsilon$ is in fact
in $P^M_\alpha$ for some $\alpha\in \varepsilon\cap M$, so $i_\varepsilon(p) =
i_\alpha(p)\in P_\alpha$.
\begin{Lem}\label{lem:afs.complete}
Assume that $\bar P$ and $\bar P^M$ are as above and let $P_{\varepsilon'}$ be
the almost FS limit. Then $\bar P^\frown P_{\varepsilon'}$ is a partial CS
iteration, and $i_\varepsilon$ works, i.e., $i_\varepsilon$ is an
$M$-complete embedding from $P^M_\varepsilon$ to $P_{\varepsilon'}$. (As
$P_{\varepsilon'} $ is a complete subforcing of~$P_\varepsilon$, this also
implies that $i_\varepsilon$ is $M$-complete from $P^M_\varepsilon$
to $P_\varepsilon$.)
\end{Lem}
\begin{proof}
It is easy to see that $P_{\varepsilon'}$ is a partial CS limit and contains
the range $i_\varepsilon[P^M_\varepsilon]$.
We now show preservation of
predensity; this implies $M$-completeness by Lemma~\ref{lem:wolfgang}.
Let $(p_j)_{j\in J} \in M$ be a maximal antichain in
$P^M_\varepsilon$. (Since $P^M_\varepsilon$ does not have to be ccc in $M$,
$J$ can have any cardinality in $M$.) Let $q\land {i_{\varepsilon}}(p)$ be a
condition in $P_{\varepsilon'} $. (If ${\varepsilon'}<\varepsilon$, i.e., if
$\cf(\varepsilon)>\omega$, then we can choose $p$ to be the empty condition.)
Fix $\alpha \in \varepsilon \cap M $ such
that $q\in P_\alpha$.
Let $H_\alpha$ be $P_\alpha$-generic and contain $q$;
since $q\le i_\alpha(p\mathord\restriction\alpha)$, the restriction $p\mathord\restriction\alpha$ is in $H^M_\alpha$.
Now in $M[H^M_\alpha]$ the set $\{p_j: j\in J, p_j\in
P_\varepsilon^M/H_\alpha^M\}$ is predense in
$P^M_\varepsilon/H^M_\alpha$ (since this is forced by the empty condition
in $P^M_\alpha$). In particular,
$p$ is compatible with some $p_j$, witnessed
by $p'\le p,p_j$ in $P^M_\varepsilon /H^M_\alpha$.
We can find $q'\le_{P_\alpha} q$ deciding $j$ and $p'$; since certainly
$q'\le^* {i_{\alpha}}(p'\mathord\restriction \alpha) $, we may without loss of generality assume even $q'\le {i_{\alpha}}(p'\mathord\restriction \alpha)$. Now $q'\land {i_{\varepsilon}} (p') \le q\land
{i_\varepsilon}(p)$ (since $q'\leq q$ and $p'\leq p$),
and $q'\land {i_\varepsilon} (p') \le
{i_\varepsilon}(p_j)$ (since $p'\le p_j$).
\end{proof}
\begin{DefandClaim}\label{lem:kellnertheorem}
Let $\bar P^M$ be a topped partial CS iteration in $M$
of length $\varepsilon$. We can construct by induction on
$\beta\in\varepsilon+1$ an
\emph{almost finite support iteration $\bar P$ over $\bar P^M$}
(or: almost FS iteration) as follows:
\begin{enumerate}
\item As induction hypothesis we assume that the canonical
embedding $i_\alpha$
works for all $\alpha\in \beta \cap M$. (So the notation
$M [H^M_\alpha]$ makes sense.)
\item\label{item:qwrrqw} Let $\beta=\alpha+1$. If $\alpha\in M$, then we can use any
$Q_\alpha$ provided that (it is forced that) $Q^M_\alpha$ is an
$M[H^M_\alpha]$-complete subforcing of $Q_\alpha$.
(If $\alpha\notin M$, then there is no restriction on $Q_\alpha$.)
\item Let $\beta\in M$ and $\cf(\beta)=\omega$.
Then $P_\beta$ is the almost FS limit of
$(P_\alpha,Q_\alpha)_{\alpha<\beta}$ over $P^M_\beta$.
\item Let $\beta\in M$ and $\cf(\beta)>\omega$. Then $P_\beta$ is
again the almost FS limit of
$(P_\alpha,Q_\alpha)_{\alpha<\beta}$ over $P^M_\beta$ (which
also happens to be the direct limit).
\item For limit ordinals not in $M$,
$P_\beta$ is the direct limit.
\end{enumerate}
\end{DefandClaim}
So the claim includes that the resulting $\bar P$ is a (topped) partial CS
iteration of length $\varepsilon$ over $\bar P^M$ (i.e., the canonical embeddings
$i_\alpha$ work for all $\alpha\in (\varepsilon+1) \cap M$), where we only assume
that the $Q_\alpha$ satisfy the obvious requirement given in~(\ref{item:qwrrqw}).
(Note that we can always find some suitable $Q_\alpha$ for $\alpha\in M$,
for example we can just take $Q^M_\alpha$ itself.)
\begin{proof}
We have to show (by induction) that the resulting sequence $\bar P$
is a partial CS iteration, and that $\bar P^M$ embeds into $\bar P$.
For successor cases, there is nothing to do. So assume that $\alpha$
is a limit. If $P_\alpha$ is a direct limit, it is trivially a
partial CS limit; if $P_\alpha$ is an almost FS limit, then the
easy part of Lemma~\ref{lem:afs.complete} shows that it is a partial
CS limit.
So it remains to show that for a limit $\alpha\in M$, the (naturally defined)
embedding $i_\alpha:P^M_\alpha\to P_\alpha$ is $M$-complete.
This was the main claim in Lemma~\ref{lem:afs.complete}.
\end{proof}
The following lemma is natural and easy.
\begin{Lem}
Assume that we construct an almost FS
iteration $\bar P$ over $\bar P^M$ where
each $Q_\alpha$ is (forced to be) ccc.
Then $P_\varepsilon$ is ccc (and in particular proper).
\end{Lem}
\begin{proof}
We show that $P_\alpha$ is ccc by induction on
$\alpha\leq\varepsilon$. For successors, we use that $Q_\alpha$ is
ccc. For $\alpha$ of uncountable cofinality, we know that we took
the direct limit coboundedly often (and all $P_\beta$ are ccc for
$\beta<\alpha$), so by a result of Solovay $P_\alpha$ is again ccc.
For $\alpha$ a limit of countable cofinality not in $M$, just use
that all $P_\beta$ are ccc for $\beta<\alpha$, and the fact that
$P_\alpha$ is the direct limit. This leaves the case that
$\alpha\in M$ has countable cofinality, i.e., $P_\alpha$ is the
almost FS limit. Let $A\subseteq P_\alpha$ be uncountable. Each
$a\in A$ has the form $q\land i_\alpha(p)$ for $p\in P^M_\alpha$ and
$q\in \bigcup_{\gamma<\alpha} P_\gamma$. Since $M$ (and hence $P^M_\alpha$) is countable, we can thin out the set $A$ so that
all the $p$ are the same and all the $q$ lie in the same $P_\gamma$. As $P_\gamma$ is ccc,
two of the $q$ have a common extension $q'\in P_\gamma$, and then $q'\land i_\alpha(p)$
is a common extension of the corresponding elements of $A$.
\end{proof}
All almost FS iterations that we consider in this paper will satisfy the
countable chain condition (and hence in particular be proper).
We will need a variant of this lemma for $\sigma$-centered forcing notions.
\begin{Lem}\label{lem:4.17}
Assume that we construct an almost FS
iteration $\bar P$ over $\bar P^M$
where only countably many $Q_\alpha$
are nontrivial (e.g., only those with $\alpha\in M$) and where
each $Q_\alpha$ is (forced to be) $\sigma$-centered.
Then $P_\varepsilon$ is $\sigma$-centered as well.
\end{Lem}
\begin{proof}
By induction: The direct limit of countably many
$\sigma$-centered forcings is $\sigma$-centered,
as is the almost FS limit of $\sigma$-centered forcings
(to color $q\land i_\alpha( p)$, use $p$ itself together with the color of $q$).
\end{proof}
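To spell out the coloring for the almost FS limit (a sketch, in the notation of Definition~\ref{def:almostfs}; the functions $c_\gamma$ are not from the text): by induction fix for each $\gamma\in\varepsilon\cap M$ a function $c_\gamma:P_\gamma\to\omega$ such that any finitely many conditions with the same $c_\gamma$-value have a common extension, choose for each condition of the almost FS limit one representation $q\land i_\varepsilon(p)$ with $q\in P_\alpha$, and set
\[
c\bigl(q\land i_\varepsilon(p)\bigr)\coloneqq \bigl(\alpha,\,p,\,c_\alpha(q)\bigr).
\]
Since the candidate $M$ is countable, there are only countably many pairs $(\alpha,p)$ with $\alpha\in\varepsilon\cap M$ and $p\in P^M_\varepsilon$, so $c$ has countably many values; and finitely many conditions $q_1\land i_\varepsilon(p),\dots,q_k\land i_\varepsilon(p)$ with the same color have a common extension $q'\land i_\varepsilon(p)$, where $q'\le q_1,\dots,q_k$ in $P_\alpha$ (note that $q'\le i_\alpha(p\mathord\restriction\alpha)$).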
We will actually need two variants of the almost FS construction:
Countably many models $M^n$; and starting the almost FS iteration with some $\alpha_0$.
Firstly, we can construct an almost FS iteration not just over one iteration
$\bar P^M$, but over an increasing chain of iterations.
Analogously to Definition~\ref{def:almostfs} and
Lemma~\ref{lem:afs.complete}, we can show:
\begin{Lem}\label{lem:418}
For each $n\in \omega$, let $M^n$ be a nice candidate, and let $\bar P^n$
be a topped partial CS iteration in $M^n$ of
length\footnote{Or only: $\varepsilon\in M^{n_0}$ for some $n_0$.} $\varepsilon\in M^0$ of countable
cofinality,
such that $M^m\in M^n $ and
$M^{n}$ thinks that $\bar P^m$ canonically embeds into $\bar P^{n}$,
for all $m<n$. Let $\bar P$ be a topless iteration of length $\varepsilon$ into
which all $\bar P^n$ canonically embed.
Then we can define the almost FS limit $P_\varepsilon$ over $(\bar
P^n)_{n\in \omega}$ as follows:
Conditions in $P_\varepsilon$ are of the form $q\land i^n_\varepsilon(p)$ where
$n\in\omega$, $p\in P^n_\varepsilon$,
and $q\in P_\alpha$ for some $\alpha\in M^n\cap\varepsilon$ with $q\le i^n_\alpha (p\mathord\restriction \alpha)$.
Then $P_\varepsilon$
is a partial CS limit over each $\bar P^n$.
\end{Lem}
As before, we get the following corollary:
\begin{Cor}\label{cor:ctblmanycandidates}
Given $M^n$ and $\bar P^n$ as above, we can construct a topped partial CS
iteration $\bar P$ such that each $\bar P^n$ embeds $M^n$-completely into it;
we can choose $Q_\alpha$ as we wish (subject to the obvious restriction that
each $Q^n_\alpha$ is an $M^n[H^n_\alpha]$-complete subforcing). If we always choose
$Q_\alpha$ to be ccc, then $\bar P$ is ccc; this is the case if we set
$Q_\alpha$ to be the union of the (countable) sets $Q^n_\alpha$.
\end{Cor}
\begin{proof}
We can define $P_\alpha$ by induction. If $\alpha\in \bigcup_{n\in
\omega} M^n$ has countable cofinality, then we use the almost
FS limit as in Lemma~\ref{lem:418}. Otherwise we use the direct
limit. If $\alpha\in M^n$ has uncountable cofinality, then
$\alpha'\coloneqq \sup(\alpha\cap M^n)$ is an element of $M^{n+1}$. In our
induction we have already considered $\alpha'$ and have defined
$P_{\alpha'}$ by Lemma~\ref{lem:418} (applied to the sequence
$(\bar P^{n+1}, \bar P^{n+2},\ldots)$).
This is sufficient to show that
$i^n_\alpha:P^n_\alpha\to P_{\alpha'} \lessdot P_\alpha$ is
$M^n$-complete.
\end{proof}
Secondly, we can start the almost FS iteration after some $\alpha_0$
(i.e., $\bar P$ is already given up to $\alpha_0$, and we can continue
it as an almost FS iteration up to $\varepsilon$), and get the same
properties that we previously showed for the almost FS iteration, but
this time for the quotient $P_\varepsilon/P_{\alpha_0}$. In
more detail:
\begin{Lem}\label{lem:almostfsstartatalpha}
Assume that $\bar P^M$ is in $M$ a (topped) partial CS iteration of length
$\varepsilon$, and that $\bar P$ is in $V$ a topped partial CS iteration
of length $\alpha_0$ over $\bar P^M\mathord\restriction {\alpha_0} $
for some $\alpha_0\in \varepsilon \cap M$.
Then we can extend $\bar P$ to a (topped) partial CS iteration of length
$\varepsilon$ over $\bar P^M$, as in the almost FS iteration (i.e.,
using the almost FS limit at limit points $\beta>\alpha_0$ with $\beta\in M$
of countable cofinality; and the direct limit everywhere else).
We can use
any $Q_\alpha$ for $\alpha\geq\alpha_0$ (provided $Q^M_\alpha$ is an
$M[H^M_\alpha]$-complete subforcing of $Q_\alpha$). If all $Q_\alpha$ are ccc,
then $P_{\alpha_0}$ forces that $P_{\varepsilon}/H_{\alpha_0}$ is ccc (in
particular proper); if moreover all $Q_\alpha$ are $\sigma$-centered and only
countably many are nontrivial, then $P_{\alpha_0}$ forces that
$P_{\varepsilon}/H_{\alpha_0}$ is $\sigma$-centered.
\end{Lem}
\subsection{Almost countable support iterations}\label{subsec:almostCS}
``Almost countable support iterations $\bar P$'' (over a given iteration $\bar P^M$ in a candidate~$M$)
will have the following two crucial properties: There is a canonical
$M$-complete embedding of~$\bar P^M $ into~$\bar P$,
and $\bar P$ preserves a given random real (similar to the usual countable support iterations).
\begin{DefandClaim}\label{def:almost_CS_iteration_wolfgang}
Let $\bar P^M$ be a topped partial CS iteration in $M$ of length $\varepsilon$.
We can construct by induction on $\beta\in\varepsilon+1$
the \emph{almost countable support iteration $\bar P$ over $\bar P^M$}
(or: almost CS iteration):
\begin{enumerate}
\item As induction hypothesis, we assume that the canonical embedding
$i_\alpha$ works for every $\alpha\in \beta\cap M$.
We set\footnote{
So for successors $\beta\in M$, we have $\delta'=\beta=\delta$.
For $\beta\in M$ limit, $\beta= \delta$ and $\delta'$ is as
in Definition~\ref{def:canonicalextension}.}
\begin{equation}
\label{eq:delta.prime}
\delta\coloneqq \min(M\setminus \beta),
\quad
\delta'\coloneqq \sup\{\alpha+1:\, \alpha\in \delta \cap M \}.
\end{equation}
Note that $\delta'\le \beta \le \delta$.
\item Let $\beta=\alpha+1$. We can choose any desired forcing $Q_\alpha$;
if $\beta\in M$ we of course require that
\begin{equation}\label{eq:jehrwetewt}
\text{$Q^M_\alpha$ is an $M[H^M_\alpha]$-complete subforcing of $Q_\alpha$.}
\end{equation}
This defines $P_\beta$.
\item Let $\cf(\beta)> \omega$. Then
$P_\beta$ is the direct limit.
\item Let $\cf(\beta) = \omega$ and assume that $\beta\in M$ (so $M\cap \beta$ is cofinal
in $\beta$ and $\delta' = \beta=\delta$).
We define $P_\beta=P_\delta$ as the union of the following two sets:
\begin{itemize}
\item The almost FS limit of $(P_\alpha,Q_\alpha)_{\alpha<\delta}$, see
Definition~\ref{def:almostfs}.
\item The set $P_{\delta}^\textup{\textrm{gen}}$ of $M$-generic conditions $q \in P^{\textrm{\textup{CS}}}_{\delta}$, i.e., those which satisfy
\begin{displaymath}
q \Vdash_{P^{\textrm{\textup{CS}}}_{\delta}} i^{-1}_\delta[H_{P^{\textrm{\textup{CS}}}_{\delta}}]\subseteq P^M_\delta \textrm{ is } M\textrm{-generic.}
\end{displaymath}
\end{itemize}
\item Let $\cf(\beta) = \omega$ and assume that $\beta\notin M$ but $M\cap \beta$ is cofinal
in $\beta$, so $\delta' = \beta< \delta$.
We define $P_\beta = P_{\delta'}$ as the union of the following two sets:
\begin{itemize}
\item The direct limit of $(P_\alpha,Q_\alpha)_{\alpha<\delta'}$.
\item The set $P_{\delta'}^\textup{\textrm{gen}}$ of $M$-generic conditions $q \in P^{\textrm{\textup{CS}}}_{\delta'}$, i.e., those which satisfy
\begin{displaymath}
q \Vdash_{P^{\textrm{\textup{CS}}}_{\delta'}} i^{-1}_\delta[H_{P^{\textrm{\textup{CS}}}_{\delta'}}]\subseteq
P^M_\delta
\textrm{ is } M\textrm{-generic.}
\end{displaymath}
\end{itemize}
(Note that the $M$-generic conditions form an open subset of $P^{\textrm{\textup{CS}}}_\beta=P^{\textrm{\textup{CS}}}_{\delta'}$.)
\item Let $\cf(\beta) = \omega$ and $M \cap \beta$
not cofinal in~$\beta$ (so $\beta\notin M$).
Then $P_\beta$ is the full CS limit of
$(P_\alpha,Q_\alpha)_{\alpha<\beta}$ (see Definition~\ref{def:fullCS}).
\end{enumerate}
\end{DefandClaim}
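A worked instance of \eqref{eq:delta.prime} may be helpful (illustration only): if $\beta\notin M$ and $M\cap\beta$ has a maximum $\gamma$, then
\[
\delta'=\gamma+1\ \le\ \beta\ \le\ \delta=\min(M\setminus\beta),
\]
and $\delta$ may be much larger than $\beta$; if $M\cap\beta$ is cofinal in $\beta$, then $\delta'=\beta$; and if $\beta\in M$, then $\delta=\beta$.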
So the claim is that for every choice of $Q_\alpha$ (with the obvious
restriction~\eqref{eq:jehrwetewt}), this construction always results in a partial CS iteration
$\bar P$ over $\bar P^M$.
The proof is a bit cumbersome; it is a variant of the usual proof that
properness is preserved in countable support iterations (see
e.g.~\cite{MR1234283}).
We will use the following fact in $M$ (for the iteration $\bar P^M$):
\proofclaim{eq:prelim}{Let $\bar P$ be a topped iteration of length $\varepsilon$.
Let $\alpha_1\le\alpha_2\le\beta\le\varepsilon$. Let
$p_1$ be a $P_{\alpha_1}$-name for a condition in $P_\varepsilon$, and let
$D$ be an open dense set of $P_\beta$. Then there is a $P_{\alpha_2}$-name
$p_2$ for a condition in $D$ such that the empty condition of
$P_{\alpha_2}$ forces: $p_2\leq p_1\mathord\restriction\beta$ and: if $p_1$ is in
$P_\varepsilon/H_{\alpha_2}$, then the condition $p_2$ is as well.}
(Proof: Work in the $P_{\alpha_2}$-extension. We know that $p'\coloneqq
p_1\restriction \beta$ is a $P_\beta$-condition. We now define $p_2$ as
follows: If $p'\notin P_\beta/H_{\alpha_2}$ (which is equivalent to $p_1\notin
P_\varepsilon/H_{\alpha_2}$), then we choose any $p_2\leq p'$ in $D$ (which is
dense in $P_\beta$). Otherwise (using that $D\cap P_\beta/H_{\alpha_2}$ is
dense in $P_\beta/H_{\alpha_2}$) we can choose $p_2\leq p'$ in $D\cap
P_\beta/H_{\alpha_2}$.)
The following easy fact will also be useful:
\proofclaim{eq:forces.forces}{Let $P$ be a subforcing of $Q$. We define
$P\mathord\restriction p\coloneqq \{ r\in P: r\le p\}$.
Assume that $p\in P$ and $P\mathord\restriction p = Q\mathord\restriction p$.
\\Then for any $P$-name $\n x$ and any formula $\varphi(x)$
we have:
$p\Vdash_{P} \varphi(\n x) $ iff $p\Vdash_{Q} \varphi(\n x)$.
}
We now prove by induction on $\beta\le\varepsilon$ the following statement
(which includes that the Definition and
Claim~\ref{def:almost_CS_iteration_wolfgang} works up to $\beta$). Let
$ \delta, \delta'$ be as in \eqref{eq:delta.prime}.
\begin{Lem}\label{lem:inductionA}
\begin{enumerate}[(a)]
\item
The topped iteration $\bar P$ of length $\beta$ is a partial CS iteration.
\item
The canonical embedding $i_\delta: P^M_\delta\to P_{\delta'}$ works, hence
also $i_\delta: P^M_\delta\to P_{\delta}$ works.
\item
Moreover, assume that
\begin{itemize}
\item $\alpha\in M\cap \delta $,
\item $\n p\in M$ is a $P^M_\alpha$-name
of a $P^M_\delta$-condition,
\item
$q\in P_\alpha$ forces (in $P_\alpha$)
that $\n p\mathord\restriction\alpha[H^M_\alpha]$ is in $H^M_\alpha$.
\end{itemize}
Then there is a $q^+\in P_{\delta'}$
(and therefore in $P_\beta$) extending $q$ and forcing that
$\n p[H^M_\alpha]$ is in $H^M_\delta$.
\end{enumerate}
\end{Lem}
\begin{proof}
First let us deal with the trivial cases.
It is clear that we always get a partial CS iteration.
\begin{itemize}
\item
Assume that $\beta=\beta_0+1\in M$, i.e., $\delta=\delta'=\beta$.
It is clear that $i_\beta$ works.
To get $q^+$, first extend $q$ to some $q'\in P_{\beta_0}$ (by induction
hypothesis), then define $q^+$ extending $q'$ by $q^+(\beta_0)\coloneqq \n
p(\beta_0)$.
\item
If $\beta=\beta_0+1\notin M$, there is nothing to do.
\item
Assume that $\cf(\beta)>\omega$ (whether $\beta\in M$ or not).
Then $\delta'<\beta$.
So $i_\delta: P^M_\delta\to P_{\delta'}$ works by induction, and
similarly (c) follows from the inductive assumption. (Use
the inductive assumption for $\beta=\delta'$; the $\delta$ that we got at that
stage is the same as the current~$\delta$, and the $q^+$ we obtained at
that stage will still satisfy all requirements at the current stage.)
\item
Assume that $\cf(\beta)=\omega$ and that $M\cap\beta$ is bounded in~$\beta$.
Then the proof is the same as in the previous case.
\end{itemize}
We are left with the cases corresponding to (4) and (5) of
Definition~\ref{def:almost_CS_iteration_wolfgang}: $\cf(\beta)=\omega$ and
$M\cap\beta$ is cofinal in~$\beta$. So either $\beta\in M$, then
$\delta'=\beta=\delta$, or $\beta\notin M$, then $\delta'=\beta<\delta$ and
$\cf(\delta)>\omega$.
We leave it to the reader to check that $P_\beta$ is indeed a partial CS limit.
The main point is to see that for all $p,q\in P_\beta$ the
condition $q\wedge p$ is in $P_\beta$ as well, provided $q\in P_\alpha$ and $q\le p\mathord\restriction \alpha$
for some $\alpha<\beta$.
If $p\in P^\textup{\textrm{gen}}_\beta$, then this follows because $P^\textup{\textrm{gen}}_\beta$ is open in $P^{\textrm{\textup{CS}}}_\beta$;
the other cases are immediate from the definition (by induction).
We now turn to claim (c). Assume $q\in P_\alpha$ and $\n p\in M$ are
given, $\alpha\in M\cap \delta$.
Let $(D_n)_{n \in \omega}$ enumerate all dense sets of~$P^M_\delta$
which lie in~$M$, and let $(\alpha_n)_{n \in \omega} $ be a sequence
of ordinals in $M$ which is cofinal in $\beta$, where $\alpha_0=\alpha$.
Using \eqref{eq:prelim} in~$M$, we can find a sequence $(\n p_n)_{n
\in \omega}$ satisfying the following in~$M$, for all $n>0$:
\begin{itemize}
\item
$\n p_0 = \n p$.
\item
$\n p_n\in M$ is a $P^M_{\alpha_n}$-name of a
$P^M_\delta$-condition in~$D_n$.
\item
$\Vdash_{P^M_{\alpha_{n}}} \n p_{n}\le_{P_\delta^M} \n p_{n-1}$.
\item
$\Vdash_{P^M_{\alpha_{n}}} $ If $
\n p_{n-1}\mathord\restriction\alpha_{n}\in H^M_{\alpha_{n}}$, then
$\n p_{n}\mathord\restriction\alpha_{n}\in H^M_{\alpha_{n}}$ as well.
\end{itemize}
Using the inductive assumption for the $\alpha_n$'s, we can now find
a sequence $(q_n)_{n \in \omega}$ of conditions satisfying the following:
\begin{itemize}
\item $q_0 = q$, $q_n\in P_{\alpha_n}$.
\item $q_{n}\mathord\restriction \alpha_{n-1} = q_{n-1}$.
\item $q_n \Vdash_{P_{\alpha_n}} \n p_{n-1}\mathord\restriction \alpha_n \in H^M_{\alpha_n}$,
so also $ \n p_{n }\mathord\restriction \alpha_n \in H^M_{\alpha_n}$.
\end{itemize}
Let $q^+\in P^{\textrm{\textup{CS}}}_\beta$ be the union of the $q_n$. Then for all $n$:
\begin{enumerate}
\item
$q_n \Vdash_{ P^{\textrm{\textup{CS}}}_\beta} \n p_n\mathord\restriction \alpha_n \in H^M_{\alpha_n}$, so also $q^+$ forces this.
\\(Using induction on $n$.)
\item For all $n$ and all $m\ge n$:
$q^+ \Vdash_{ P^{\textrm{\textup{CS}}}_\beta} \n p_m\mathord\restriction \alpha_m \in H^M_{\alpha_m}$,
so also $ \n p_n\mathord\restriction \alpha_m \in H^M_{\alpha_m}$.
\\(As $\n p_m \le \n p_n$.)
\item
$q^+ \Vdash_{ P^{\textrm{\textup{CS}}}_\beta} \n p_n\in H^M_{\delta}$.
\\ (Recall that $P^{\textrm{\textup{CS}}}_\beta$ is separative, see Fact~\ref{fact:eq.eqstar}.
So $i_\delta(\n p_n)\in H_\delta$ iff $i_{\alpha_m}(\n p_n\mathord\restriction \alpha_m)\in
H_{\alpha_m}$ for all large~$m$.)
\end{enumerate}
As $q^+\Vdash_{P^{\textrm{\textup{CS}}}_\beta } \n p_n \in D_n\cap H^M_\delta$, we conclude
that $q^+\in P^\textup{\textrm{gen}}_\beta$ (using Lemma~\ref{lem:wolfgang},
applied to ${P^{\textrm{\textup{CS}}}_\beta }$). In particular,
$P^\textup{\textrm{gen}}_\beta$ is dense in $P_\beta$: Let $q\wedge i_\delta(p)$
be an element of the almost FS limit; so $q\in P_\alpha$ for some
$\alpha < \beta$. Now find a generic $q^+$ extending $q$ and stronger than $i_\delta(p)$,
then $q^+\le q\wedge i_\delta(p)$.
It remains to show that $i_\delta$ is $M$-complete.
Let $A\in M$ be a maximal antichain of $P^M_\delta$, and $p\in P_\beta$.
Assume towards a contradiction that $p$ forces in $P_\beta$
that $ i^{-1}_{\delta}[H_\beta ]$ does not intersect $A$ in exactly one point.
Since $P^\textup{\textrm{gen}}_\beta$ is dense in $P_\beta$, we can find some $q\leq p$ in $P^\textup{\textrm{gen}}_\beta$.
Let \[P'\coloneqq \{r\in P^{\textrm{\textup{CS}}}_\beta: r\le
q\}=\{r\in P_\beta: r\le q\}, \]
where the equality holds because $P^\textup{\textrm{gen}}_\beta$ is open in $P^{\textrm{\textup{CS}}}_\beta$.
Let $\Gamma $ be the canonical name for a $P'$-generic filter,
i.e.:
$\Gamma\coloneqq \{(\check r, r): r\in P' \}$.
Let $R$ be either
$P^{\textrm{\textup{CS}}}_\beta$ or $ P_\beta$. We write
$\langle \Gamma\rangle _R$ for the filter generated
by~$\Gamma$ in~$R$, i.e., $\langle \Gamma\rangle _R \coloneqq \{r\in R: (\exists r'\in \Gamma ) \
r'\le r\}$. So
\begin{equation}\label{gehtsnochduemmer}
q\Vdash_R H_R =\langle \Gamma\rangle_R.
\end{equation}
We now see that the following hold:
\begin{enumerate}
\item[--] $ q\Vdash_{P_\beta} i^{-1}_{\delta}[ H _{P_\beta} ] $
does not intersect $A$ in exactly one point. (By
assumption.)
\item[--] $ q\Vdash_{P_\beta} i^{-1}_{\delta}[ \langle
\Gamma \rangle_{P_\beta}] $
does not intersect $A$ in exactly one point. (By
\eqref{gehtsnochduemmer}.)
\item[--] $ q\Vdash_{P_\beta^{\textrm{\textup{CS}}} } i^{-1}_{\delta}[ \langle \Gamma \rangle_{P_\beta}] $
does not intersect $A$ in exactly one point. (By \eqref{eq:forces.forces}.)
\item[--] $ q\Vdash_{P_\beta^{\textrm{\textup{CS}}} } i^{-1}_{\delta}[ \langle \Gamma
\rangle_{P_\beta^{\textrm{\textup{CS}}}}] $ does not intersect $A$ in exactly one point.
(Because $i_\delta$ maps $A$ into $P_\beta\subseteq P_\beta^{\textrm{\textup{CS}}}$, so
$A\cap i^{-1}_\delta[\langle Y\rangle_{P_\beta}] = A \cap i^{-1}_\delta[\langle
Y\rangle_{P_\beta^{\textrm{\textup{CS}}}}]$ for all~$Y$.)
\item[--] $ q\Vdash_{P_\beta^{\textrm{\textup{CS}}} } i^{-1}_{\delta}[ H_{P_\beta^{\textrm{\textup{CS}}}}]$
does not intersect $A$ in exactly one point. (Again by \eqref{gehtsnochduemmer}.)
\end{enumerate}
But this, according to
the definition of $P^\textup{\textrm{gen}}_\beta$, implies $q\notin
P^\textup{\textrm{gen}}_\beta$, a contradiction.
\end{proof}
We can also show that the almost CS iteration
of proper forcings $Q_\alpha$ is proper.
(We do not really need this fact, as we could allow non-proper iterations in
our preparatory forcing, see Section~\ref{sec:7a}(\ref{item:nonproper}). In some sense, $M$-completeness
replaces properness, so the proof of $M$-completeness was similar to the
``usual'' proof of properness.)
\begin{Lem}
Assume that in Definition~\ref{def:almost_CS_iteration_wolfgang},
every $Q_\alpha$ is (forced to be) proper. Then also each $P_\delta$
is proper.
\end{Lem}
\begin{proof}
By induction on $\delta\le \varepsilon$ we prove that for all $\alpha<\delta$
the quotient $P_\delta/H_ \alpha$ is (forced to be) proper. We use the following
facts about properness:
\proofclaim{claim:proper.successor}{
If $P$ is proper and $P$ forces that $Q$ is proper, then $P*Q$
is proper.}
\proofclaim{claim:proper.omega}{
If $\bar P$ is an iteration of length $\omega$
and if each $Q_n$ is forced to be proper, then
the inverse limit $P_\omega$ is proper, as are all quotients
$P_\omega/H_n$.}
\proofclaim{claim:proper.omega.1}{
If $\bar P$ is an iteration of length $\delta$ with
$\cf(\delta)>\omega$,
and if all quotients $P_\beta/H_\alpha$ (for $\alpha < \beta < \delta$)
are forced to be proper, then
the direct limit $P_\delta$ is proper, as are all quotients
$P_\delta/H_\alpha$.}
If $\delta$ is a successor, then our inductive claim easily follows from
the inductive assumption together with~\eqref{claim:proper.successor}.
Let $\delta$ be a limit of countable cofinality, say $\delta = \sup_n
\delta_n$. Define an iteration $\bar P'$ of length $\omega$
with $Q'_n\coloneqq P_{\delta_{n+1}} / H_{\delta_n}$. (Each $Q'_n$ is proper,
by inductive assumption.) There is a natural forcing
equivalence between $ P^{\textrm{\textup{CS}}}_\delta$ and $P^{\prime{\textrm{\textup{CS}}}}_\omega$, the full CS limit of $\bar
P'$.
Let $N \esm H(\chi^*)$ contain $\bar P, P_\delta, \bar P', M, \bar P^M$.
Let $p\in P_\delta\cap N$. Without loss of generality $p\in P_\delta^\textup{\textrm{gen}}$.
So below $p$ we can identify $P_\delta$ with $P^{\textrm{\textup{CS}}}_\delta$ and hence
with $P^{\prime{\textrm{\textup{CS}}}}_ \omega$; now apply~\eqref{claim:proper.omega}.
The case of uncountable cofinality is similar, using~\eqref{claim:proper.omega.1} instead.
\end{proof}
Recall the definition of $\sqsubset_n$ and $\sqsubset$ from Definition~\ref{def:sqsubset}, the notion of
(quick) interpretation $Z^*$ (of a name $\n Z$ of a code for a null set)
and the definition of local preservation
of randoms from Definition~\ref{def:locally.random}.
Recall that we have seen in Corollaries~\ref{cor:ultralaverlocalpreserving} and~\ref{cor:januslocallypreserves}:
\begin{Lem}\label{lem:4.28}
\begin{itemize}
\item If $Q^M$ is an ultralaver forcing in $M$ and $r$ a real,
then there is an ultralaver forcing $Q$ over $Q^M$ locally
preserving randomness of $r$ over~$M$.
\item If $Q^M$ is a Janus forcing in $M$ and $r$ a real, then there is a
Janus forcing $Q$ over~$Q^M$ locally preserving randomness
of~$r$ over~$M$.
\end{itemize}
\end{Lem}
We will prove the following preservation theorem:
\begin{Lem}\label{lem:iterate.random}
Let $\bar P$ be an almost CS iteration (of length $\varepsilon$) over~$\bar P^M$,
$r$ random over~$M$, and $p\in P^M_\varepsilon$. Assume
that each $P_\alpha$ forces
that $Q_\alpha$ locally preserves randomness of $r$ over
$M[H^M_\alpha]$.
Then there is some $q\leq p$ in $P_\varepsilon$ forcing that
$r$ is random over~$M[H^M_\varepsilon]$.
\end{Lem}
What we will actually need is the following variant:
\begin{Lem}\label{lem:preservation.variant}
Assume that $\bar P^M$ is in $M$ a topped partial CS iteration of length
$\varepsilon$, and we already have some topped partial CS iteration $\bar P$
over~$\bar P^M\mathord\restriction\alpha_0$ of length $\alpha_0\in M\cap\varepsilon$. Let $\n
r$ be a $ P_{\alpha_0}$-name of a random real over~$M[H^M_{\alpha_0}]$.
Assume that we extend $\bar P$ to length $\varepsilon$ as an almost CS
iteration\footnote{Of course our official definition of almost CS iteration assumes
that we start the construction at $0$, so we modify this definition in the
obvious way.} using forcings $Q_\alpha$ which
locally preserve the randomness of $\n r$ over~$M[H^M_\alpha]$,
witnessed by a sequence $(D_k^{Q_\alpha^M})_{k\in \omega}$.
Let $p\in P^M_\varepsilon$.
Then we can find a $q\leq p$ in $P_{\varepsilon}$
forcing that $\n r$ is random over
$M[H^M_\varepsilon]$.
\end{Lem}
Actually, we will only prove the two previous lemmas under the following
additional assumption (which is enough for our application, and saves some
unpleasant work). This additional assumption is not really necessary; without
it, we could use the method of~\cite{MR2214624} for the proof.
\begin{Asm}\label{asm:quick}
\begin{itemize}
\item For each $\alpha\in M\cap \varepsilon$,
($P^M_\alpha$ forces that) $Q^M_\alpha$ is either trivial\footnote{More specifically, $Q^M_\alpha=\{\emptyset\}$.}
or adds a new $\omega$-sequence of ordinals.
Note that in the latter case we can assume without loss of generality
that $\bigcap_{n\in\omega}D^{Q_\alpha^M}_n=\emptyset$ (and, of course, that
the $D^{Q_\alpha^M}_n$ are decreasing).
\item Moreover, we assume that already in $M$ there is a set $T\subseteq
\varepsilon$ such that $P_\alpha^M$ forces: $Q_\alpha^M$ is trivial iff
$\alpha\in T$. (So whether $Q_\alpha^M$ is trivial or not does not depend
on the generic filter below $\alpha$, it is already decided in the
ground model.)
\end{itemize}
\end{Asm}
The result will follow as a special case of the following lemma, which we prove
by induction on~$\beta$. (Note that this is a refined version of the proof of
Lemma~\ref{lem:inductionA} and similar to the proof of the preservation theorem
in~\cite[5.13]{MR1234283}.)
\begin{Def}\label{def:quick}
Under the assumptions of
Lemma~\ref{lem:preservation.variant} and
Assumption~\ref{asm:quick},
let $\n Z$ be a $P_\delta$-name, $\alpha_0\le \alpha <
\delta$, and let $\bar p = (p^k)_{k\in \omega}$ be a
sequence of $P_\alpha$-names of conditions in
$P_\delta/H_\alpha$. Let $Z^*$ be a $P_\alpha$-name.
We say that $(\bar p, Z^ *)$ is a \emph{quick} interpretation
of $\n Z$ if $\bar p$ interprets $\n Z$ as $Z^*$ (i.e.,
$P_\alpha$ forces that $p^k$ forces $\n Z \mathord\restriction k = Z^*\mathord\restriction
k$ for all $k$), and moreover:
\begin{quote}
Letting $\beta\ge \alpha$ be minimal with $Q^M_\beta$
nontrivial (if such $\beta$ exists):
$P_\beta$ forces that the sequence $(p^k(\beta))_{k\in \omega}$ is
quick in $Q^M_\beta$, i.e., $p^k(\beta)\in D^{Q_\beta^M}_k$ for all~$k$.
\end{quote}
\end{Def}
It is easy to see that:
\proofclaim{eq:find.quick}{For every name $\n Z$ there is a
quick interpretation $(\bar p, Z^*)$.}
\begin{Lem} \label{lem:induktion.wirklich}
Under the same assumptions as above,
let $\beta$, $\delta$, $\delta'$ be as in \eqref{eq:delta.prime}
(so in particular we have $\delta'\le\beta\le\delta\le\varepsilon$).
\\
{\bf Assume that }
\begin{itemize}
\item $\alpha\in M\cap \delta$ ($=M\cap \beta $) and $\alpha\ge \alpha_0$ (so $\alpha<\delta'$),
\item $ p\in M$ is a $P^M_\alpha$-name of a
$P^M_\delta$-condition,
\item $\n Z\in M$ is a $P^M_\delta$-name of a code for a null set,
\item $Z^*\in M$ is a $P^M_\alpha$-name of a code for a null set,
\item $P^M_\alpha$ forces:
$\bar p = (p^k)_{k\in \omega}\in M$ is a quick
sequence in $P^M_\delta/H^M_\alpha$ interpreting $\n
Z $ as~$Z^*$ (as in Definition~\ref{def:quick}),
\item $P^M_\alpha$ forces:
if $p\mathord\restriction
\alpha \in H^M_\alpha$, then $p^0\le p$,
\item $q\in P_\alpha$ forces $p\mathord\restriction \alpha \in H^M_\alpha$,
\item
$q$ forces that $r$ is random over~$M[H^M_\alpha]$, so in particular
there is (in $V$) a $P_\alpha$-name $ \n c_0$ below $q$ for the minimal~$c$ with $Z^*\sqsubset_{c} r$.
\end{itemize}
{\bf Then} there is a condition $q^+\in P_{\delta'}$, extending
$q$, and forcing the following:
\begin{itemize}
\item
$p\in H^M_\delta$,
\item
$r$ is random over~$M[H^M_\delta]$,
\item
$\n Z\sqsubset_{\n c_0} r$.
\end{itemize}
\end{Lem}
We actually claim a slightly stronger version, where instead of
$Z^*$ and $\n Z$ we have finitely many codes for null sets and names of
codes for null sets, respectively. We will use this stronger claim as
inductive assumption, but for notational simplicity we only prove the weaker
version; it is easy to see that the weaker version implies the stronger
version.
\begin{proof}
\emph{\textbf{The nontrivial successor case:}} ${\beta=\gamma+1\in M}$.
If $Q^M_\gamma$ is trivial, there is nothing to do.
Now let $\gamma_0\ge \alpha $ be minimal with $Q^M_{\gamma_0}$ nontrivial. We
will distinguish two cases: $\gamma=\gamma_0$ and $\gamma>\gamma_0$.
Consider first the case that $\gamma=\gamma_0$.
Work in $V[H_\gamma]$ where $q\in H_\gamma$. Note that $M [H^M_\gamma] =
M[H^M_\alpha]$. So $r$ is random over~$ M [H^M_\gamma]$, and
$(p^k(\gamma))_{k\in \omega}$ quickly interprets $\n Z$ as $Z^*$ in $Q^M
_\gamma$.
Now let $q^+\mathord\restriction \gamma= q$, and use the fact that
$Q_\gamma$ locally preserves randomness to find $q^+(\gamma)\le p^0(\gamma)$.
Next consider the case that $Q^M_\gamma$ is nontrivial and $\gamma\ge \gamma_0+1$.
Again work in $V[H_\gamma]$.
Let $k^*$ be maximal with $p^{k^*}\mathord\restriction \gamma\in H^M_\gamma$. (This $k^*$
exists, since the sequence $(p^{k})_{k\in \omega}$ was quick, so there is even
a $k$ with $p^{k}\mathord\restriction ( {\gamma_0+1}) \notin H^M_{\gamma_0+1}$.)
Consider $\n Z$ as a $Q^M_\gamma$-name, and (using~\eqref{eq:find.quick})
find a quick interpretation $Z'$ of $\n Z$ witnessed by a
sequence starting with $p^{k^*}(\gamma)$. In $M[H^M_\alpha]$, $Z'$
is now a $P^M_\gamma/H^M_\alpha$-name. Clearly, the sequence
$(p^k\mathord\restriction \gamma)_{k\in \omega}$ is a quick sequence interpreting $Z'$ as
$Z^*$. (Use the fact that $p^k\mathord\restriction \gamma$ forces $k^*\ge k$.)
\\
Using the induction hypothesis, we can first extend $q$ to a condition $q'\in
P_\gamma$ and then (again by our assumption that $Q_\gamma$ locally preserves
randomness) to a condition $q ^+\in P_{\gamma+1}$.
\emph{\textbf{The nontrivial limit case:}}
${M\cap \beta}$ unbounded in ${\beta}$, i.e., $\delta'=\beta$.
(This deals with cases~(4) and~(5) in
Definition~\ref{def:almost_CS_iteration_wolfgang}.
In case (4) we have $\beta\in M$, i.e., $\beta=\delta$;
in case (5) we have $\beta\notin M$ and $\beta< \delta$.)
Let $\alpha=\delta_0 < \delta_1 < \cdots$ be a sequence of $M$-ordinals
cofinal in $M\cap \delta' = M\cap \delta$. We may assume\footnote{If from some $\gamma$ on
all $Q^M_\zeta$ are trivial, then $P^M_\delta=P^M_\gamma$, so by
induction there is nothing to do. If $Q^M_\alpha$ itself is
trivial, then we let $\delta_0\coloneqq \min\{\zeta\geq\alpha: Q^M_\zeta \text{ nontrivial}\}$ instead.}
that each $Q^M_{\delta_n}$ is nontrivial.
Let $(\n Z_n)_{n\in\omega}$ be a list of all $P^ M_\delta$-names in $M$
of codes for null sets (starting with our given null set
$\n Z = \n Z_0$).
Let $(E_n)_{n\in\omega}$ enumerate all open dense sets of $P^M_\delta$ from $M$, without
loss of generality\footnote{well, if we just enumerate a basis of the open
sets instead of all of them\dots}
we can assume that:
\proofclaim{claim:E.n.decides}{
$E_n$ decides $\n Z_0\mathord\restriction n $, \dots, $\n Z_n\mathord\restriction n$.
}
We write $p^k_0$ for $p^k$, and $Z_{0,0}$ for $Z^*$; as mentioned
above, $\n Z=\n Z_0$.
By induction on $n$ we can now find a sequence $\bar p_n = (p^k_n)_{k\in
\omega}$ and $P^M_{\delta_n }$-names $Z_{i,n}$ for $i\in \{0,\dots, n\}$
satisfying the following:
\begin{enumerate}
\item $P^M_{\delta_n}$ forces that
$p^0_n \le p^{k}_{n-1}$ whenever
$p^k_{n-1}\in P^M_{\delta}/H^M_{\delta_n}$.
\item $P_{\delta_n}^M$ forces that $p^0_n\in E_n$. (Clearly $E_n\cap
P^M_\delta/H^M_{\delta_n}$ is a dense set.)
\item $\bar p_n\in M $ is a $P^M_{\delta_n}$-name for a quick sequence
interpreting $(\n Z_0,\ldots, \n Z_n)$ as
$(Z_{0,n},\ldots, Z_{n,n})$
(in $P^M_\delta/H^M_{\delta_n}$),
so $Z_{i,n}$ is a $P^M_{\delta_n}$-name of a code for a null set, for
$0\le i \le n$.
\end{enumerate}
Note that this implies that the sequence $(p^k_{n-1}\mathord\restriction \delta_{n})$ is
(forced to be) a quick sequence interpreting
$(Z_{0,{n}},\ldots, Z_{n-1, {n}})$ as
$(Z_{0,{n-1}},\ldots, Z_{n-1, {n-1}})$.
Using the induction hypothesis, we now
define a sequence $(q_n)_{n\in \omega}$ of conditions $q_n\in P_{\delta_n}$
and a sequence $(c_n)_{n\in\omega}$ (where $c_n$ is a $P_{\delta_n}$-name)
such that $q_0\leq q$ and (for $n>0$) $q_n $ extends $q_{n-1}$ and forces the following:
\begin{itemize}
\item $ p_{n-1}^0\mathord\restriction \delta_n \in H^M_{\delta_n}$.
\item Therefore, $p_n^0 \le p_{n-1}^0$.
\item $r$ is random over~$M[H^M_{\delta_n}]$.
\item Let $c_n$ be the least $c$ such that $Z_{n,n}\sqsubset_c r$.
\item
$ Z_{i,n} \sqsubset_{c_i} r$ for $i=0,\ldots, n-1$.
\end{itemize}
Now let $q^+ \coloneqq \bigcup_n q_n\in P^{\textrm{\textup{CS}}}_{\delta'}$; note that $q^+$ extends $q$ (as $q_0\leq q$).
As in Lemma~\ref{lem:inductionA} it is easy to see that $q^+\in P^\textup{\textrm{gen}}_{\delta'}
\subseteq P_{\delta'}$.
Moreover, by~\eqref{claim:E.n.decides} we get that
$q^+$ forces that $\n Z_i = \lim_n Z_{i,n}$.
Since each set $C_{c,r}\coloneqq \{x:x\sqsubset_{c} r\}$ is closed, this implies that
$q^+$ forces
$\n Z_i \sqsubset_{c_i} r$, in particular $ \n Z= \n Z _0 \sqsubset_{c_0} r$.
\emph{\textbf{The trivial cases:}} In all other cases,
$M \cap \beta $ is bounded in $\beta$, so we already dealt with
everything at stage $ \beta_0\coloneqq \sup(\beta \cap M)$.
Note that $\delta_0'$ and $\delta_0$ used at stage $\beta_0$
are the same as the current $\delta'$ and $\delta$.
\end{proof}
\section{The forcing construction}\label{sec:construction}
In this section we describe a $\sigma$-closed ``preparatory'' forcing notion
$\mathbb{R}$; the generic filter will define a ``generic'' forcing iteration
$\bar \mathbf{P}$, so elements of $\mathbb{R}$ will be approximations to such an
iteration. In Section~\ref{sec:proof} we will show that the forcing
$\mathbb{R}*\mathbf{P}_\om2$ forces BC and dBC.
{} From now on, we assume CH in the ground model.
\subsection{Alternating iterations, canonical embeddings and the preparatory forcing $\mathbb{R}$}
The preparatory forcing $\mathbb{R}$ will consist of pairs $(M,\bar P)$, where $M$
is a countable model and $\bar P\in M$ is an iteration of ultralaver and
Janus forcings.
\begin{Def}\label{def:alternating}
An alternating iteration\footnote{See Section~\ref{sec:alternativedefs} for possible
variants of this definition.} is a topped partial CS iteration $\bar P$
of length $\om2$ satisfying the following:
\begin{itemize}
\item Each $P_\alpha$ is proper.\footnote{This does not seem to be
necessary, see Section~\ref{sec:alternativedefs}, but it is easy to ensure and
might be comforting to some of the readers and/or authors.}
\item For $\alpha$ even, either both $Q_{\alpha}$ and
$Q_{\alpha+1}$ are (forced by the empty condition to be) trivial,\footnote{%
For definiteness, let us agree that the trivial forcing is the
singleton $\{\emptyset\}$.}
or $P_\alpha$ forces that $Q_\alpha$ is an ultralaver forcing adding the
generic real $\bar \ell_\alpha$, and $P_{\alpha+1}$ forces that
$Q_{\alpha+1}$ is a Janus forcing based on $\bar \ell^*_\alpha$ (where $ \bar
\ell^*$ is defined from $ \bar \ell $ as in
Lemma~\ref{lem:subsequence}).
\end{itemize}
\end{Def}
We will call an even index an ``ultralaver position'' and an odd one a ``Janus
position''.
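Schematically (an illustration only; we write $\mathbb{L}_{\bar D_\alpha}$ for the ultralaver forcing used at stage $\alpha$), a nontrivial block of the iteration looks like
\[
\underbrace{Q_\alpha=\mathbb{L}_{\bar D_\alpha}}_{\text{adds }\bar\ell_\alpha}
\ *\
\underbrace{Q_{\alpha+1}}_{\text{Janus based on }\bar\ell^*_\alpha}
\qquad (\alpha \text{ even}),
\]
while at all other even $\alpha$ both $Q_\alpha$ and $Q_{\alpha+1}$ are trivial.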
As in any partial CS iteration, each $P_\delta$ for $\cf(\delta)>\omega$
(and in particular $P_\om2$) is a direct limit.
Recall that in Definition~\ref{def:canonicalembedding} we have defined the notion
``$\bar P^M$ canonically embeds into $\bar P$'' for nice candidates $M$ and
iterations $\bar P\in V$ and $\bar P^M\in M$.
Since our iterations now have length $\omega_2$, this
means that the canonical embedding works up to and
including\footnote{This is stronger than to require that the canonical
embedding works for every $\alpha\in\om2\cap M$, even though both $P_\om2$ and
$P^M_{\om2}$ are just direct limits; see footnote~\ref{fn:too.late}.}
$\om2$.
In the following, we will use pairs $x=(M^x,\bar P^x)$ as conditions in a
forcing, where $\bar P^x $ is an alternating iteration in the nice candidate $M^x$.
We will adapt our notation accordingly: Instead of writing $M$,
$\bar P^M$, $P^M_\alpha$
$H_\alpha^M$ (the induced filter), $Q_\alpha^M$, etc.,
we will write $M^x$, $\bar P^x$, $P^x_\alpha$, $H_\alpha^x$, $Q^x_\alpha$, etc.
Instead of ``$\bar P^x$ canonically embeds into $\bar P$'' we
will say%
\footnote{Note the linguistic asymmetry here: A symmetric and more verbose variant
would say ``$x=(M^x,\bar P^x)$ canonically embeds into $(V,\bar P)$''.}
``$x$ canonically embeds into $\bar P$''
or ``$(M^x, \bar P^x)$ canonically embeds into $\bar P$''
(which is a more exact
notation anyway, since the test whether the embedding is $M^x$-complete uses
both $M^x$ and $\bar P^x$,
not just $\bar P^x$).
The following rephrases Definition~\ref{def:canonicalembedding} of
a canonical embedding in our new notation, taking into account that:
\begin{quote}
$\mathbb{L}_{{\bar D}^x}$ is an $M^x$-complete subforcing
of~$\mathbb{L}_{\bar D}$ \ \ iff \ \ $\bar D$
extends $\bar D^x$
\end{quote} (see Lemma~\ref{lem:LDMcomplete}).
\begin{Fact}\label{fact:canonical}
$x=(M^x,\bar P^x)$ canonically embeds into $\bar P$,
if (inductively) for all $\beta\in \om2\cap
M^x\cup\{\om2\}$ the following holds:
\begin{itemize}
\item Let $\beta=\alpha+1$ for $\alpha$ even (i.e., an ultralaver position).
Then either $Q^x_\alpha$ is trivial (and $Q_\alpha$ can be trivial or not),
or we require that
($P_\alpha$ forces that) the $V[H_\alpha]$-ultrafilter system $\bar D$ used for $Q_\alpha$
extends the $M^x[H^x_\alpha]$-ultrafilter system $\bar D^x$
used for $Q^x_\alpha$.
\item Let $\beta=\alpha+1$ for $\alpha$ odd (i.e., a Janus position).
Then either $Q^x_\alpha$ is trivial,
or we require that ($P_\alpha$ forces that)
the Janus forcing $Q^x_\alpha$ is an $M^x[H^x_\alpha]$-complete
subforcing of the Janus forcing $Q_\alpha$.
\item Let $\beta$ be a limit. Then the canonical extension
$i_\beta:P^x_\beta\to P_\beta$ is $M^x$-complete. (The canonical extension
was defined in Definition~\ref{def:canonicalextension}.)
\end{itemize}
\end{Fact}
Fix a sufficiently large regular cardinal $\chi^*$ (see Remark~\ref{rem:fine.print}).
\begin{Def}\label{def:prep}
The \qemph{preparatory forcing} $\mathbb{R}$ consists of
pairs $x=(M^x,\bar P^x)$ such that
$M^x\in H(\chi^*)$ is a nice candidate (containing $\om2$), and
$\bar P^x$ is in $M^x$ an alternating iteration (in particular topped and
of length $\om2$).
\\
We define $y$ to be stronger than $x$ (in symbols: $y\leq_{\mathbb{R}} x$),
if the following holds: either $x=y$, or:
\begin{itemize}
\item $M^x\in M^y$ and $M^x$ is countable in $M^y$.
\item $M^y$ thinks that $(M^x,\bar P^x)$ canonically embeds into $\bar P^y$.
\end{itemize}
\end{Def}
Note that this order on $\mathbb{R}$ is transitive.
We will sometimes write $i_{x,y}$ for
the canonical embedding (in $M^y$) from $P^x_{\om2}$ to $P^y_{\om2}$.
There are several variants of this definition which result in equivalent
forcing notions. We will briefly come back to this in
Section~\ref{sec:alternativedefs}.
The following is trivial by elementarity:
\begin{Fact}\label{fact:esmV}
Assume that $\bar P$ is an alternating iteration (in $V$), that $x=(M^x,\bar P^x) \in
\mathbb{R}$ canonically embeds into $\bar P$, and that $N \esm H( \chi^*)$
contains $x$ and $\bar P$. Let $y=(M^y, \bar P^y)$ be the ord-collapse of
$(N, \bar P)$. Then $y\in\mathbb{R}$ and $y\le x$.
\end{Fact}
This fact will be used, for example, to get from the following Lemma~\ref{lem:trivialexample}
to Corollary~\ref{cor:gurke3}.
\begin{Lem}\label{lem:trivialexample}
Given $x\in\mathbb{R}$, there is an alternating iteration $\bar P$
such that $x$ canonically embeds into $\bar P$.
\end{Lem}
\begin{proof}
For the proof, we use either of the partial CS constructions
introduced in the previous section (i.e., an almost CS iteration or an almost
FS iteration over $\bar P^x$). The only
thing we have to check is that we can indeed choose $Q_\alpha$ that satisfy the
definition of an alternating iteration (i.e., as ultralaver or Janus forcings) and
such that $Q^x_\alpha$ is $M^x$-complete in $Q_\alpha$.
In the ultralaver case we arbitrarily extend $\bar D^x$ to an ultrafilter
system $\bar D$, which is justified by Lemma~\ref{lem:LDMcomplete}.
In the Janus case, we take $Q_\alpha\coloneqq Q_\alpha^x$ (this works by Fact~\ref{fact:janus.ctblunion}). Alternatively, we could
extend $Q_\alpha^x$ to a random forcing (using Lemma~\ref{lem:janusrandompreservation}).
\end{proof}
\begin{Cor}\label{cor:gurke3}
Given $x\in\mathbb{R}$ and an HCON object $b\in H(\chi^*)$ (e.g., a real or an ordinal), there is a
$y\leq x$ such that $b\in M^y$.
\end{Cor}
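(Sketch, combining Lemma~\ref{lem:trivialexample} and Fact~\ref{fact:esmV}: Lemma~\ref{lem:trivialexample} yields an alternating iteration $\bar P$ into which $x$ canonically embeds; pick a countable $N\esm H(\chi^*)$ containing $x$, $\bar P$ and $b$, and let $y$ be the ord-collapse of $(N,\bar P)$. Then $y\le x$, and $b\in M^y$ since the ord-collapse does not move HCON objects.)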
What we will actually need are the following three variants:
\begin{Lem}\label{lem:prep.is.sigma.preparation}
\begin{enumerate}
\item
Given $x\in\mathbb{R}$ there is a $\sigma$-centered
alternating iteration $\bar P$ above $x$.
\item\label{item:karotte2}
Given a decreasing sequence $\bar x=(x_n)_{n\in \omega}$ in
$\mathbb{R}$, there is an alternating iteration $\bar P$
such that each $x_n$ embeds into $\bar P$. Moreover,
we can assume that for all Janus positions $\beta$, the
Janus \footnote{If all $Q^{x_n}_\beta$ are trivial, then we
may also set $Q_\beta$ to be the trivial forcing, which is formally
not a Janus forcing.}
forcing
$Q_\beta$ is (forced to be) the union of the
$Q^{x_n}_\beta$, and that for all limits $\alpha$, the forcing
$P_\alpha$ is the almost FS limit
over~$(x_n)_{n\in\omega}$
(as in Corollary~\ref{cor:ctblmanycandidates}).
\item\label{item:karotte6}
Let $x,y\in \mathbb{R}$. Let $j^x$ be the transitive collapse of
$M^x$, and define $j^y$ analogously.
Assume that $j^x[M^x]=j^y[M^y]$, that $j^x(\bar P^x)=j^y(\bar P^y)$ and
that there are $\alpha_0\leq\alpha_1<\om2$ such that:
\begin{itemize}
\item $M^x\cap \alpha_0=M^y\cap \alpha_0$ (and thus $j^x\mathord\restriction \alpha_0=j^y\mathord\restriction\alpha_0$).
\item $M^x\cap [\alpha_0, \omega_2) \subseteq [\alpha_0, \alpha_1)$.
\item $M^y\cap [\alpha_0, \omega_2) \subseteq [\alpha_1, \om2)$.
\end{itemize}
Then there is an alternating iteration $\bar P$
such that both $x$ and $y$ canonically embed into it.
\end{enumerate}
\end{Lem}
\begin{proof}
For (1), use an almost FS iteration.
We only use the coordinates in $M^x$, and use the (countable!)
Janus forcings $Q_\alpha\coloneqq Q^x_\alpha$ for all Janus positions
$\alpha\in M^x$ (see Fact~\ref{fact:janus.ctblunion}).
Ultralaver forcings are $\sigma$-centered anyway, so
$P_\varepsilon$ will be $\sigma$-centered, by Lemma~\ref{lem:4.17}.
For (2), use the almost FS iteration over the sequence $(x_n)_{n\in \omega}$
as in Corollary~\ref{cor:ctblmanycandidates},
and at Janus positions $\alpha$ set $Q_\alpha$ to be the
union of the $Q^{x_n}_\alpha$. (By Fact~\ref{fact:janus.ctblunion},
$Q^{x_n}_\alpha$ is $M^{x_n}$-complete in $Q_\alpha$, so
Corollary~\ref{cor:ctblmanycandidates} can be applied here.)
For (3), we again use an almost FS construction. This
time we start with an almost FS construction over $x$ up
to $\alpha_1$, and then continue with an almost FS
construction over $y$.
\end{proof}
As above, Fact~\ref{fact:esmV} gives us the following consequences:
\begin{Cor}\label{cor:bigcor}
\begin{enumerate}
\item\label{item:gurke0}
$\mathbb{R}$ is $\sigma$-closed. Hence $\mathbb{R}$ does not add new HCON objects (and in particular: no new reals).
\item\label{item:gurke1} $\mathbb{R}$ forces that the generic filter $G\subseteq \mathbb{R}$ is
$\sigma$-directed, i.e., for every countable subset $B$
of $G$ there is a $y\in G$ stronger than each element of $B$.
\item\label{item:gurke2}
$\mathbb{R}$ forces CH. (Since we assume CH in $V$.)
\item\label{item:martin}
Given a decreasing sequence $\bar x=(x_n)_{n\in \omega}$ in
$\mathbb{R}$ and any HCON object $b\in H(\chi^*)$, there is a
$y\in \mathbb{R}$ such that
\begin{itemize}
\item $y\leq x_n$ for all $n$,
\item $M^y$ contains $b$ and the sequence $\bar x$,
\item for all Janus positions $\beta$, $M^y $ thinks that the
Janus forcing $Q^y_\beta$ is (forced to be) the union of
the~$Q^{x_n}_\beta$,
\item for all limits $\alpha$, $M^y $ thinks that $P^y_\alpha$
is the almost FS limit\footnote{constructed in Lemma~\ref{lem:418}}
over $(x_n)_{n\in\omega}$
(of~$(P^y_\beta)_{\beta<\alpha}$).
\end{itemize}
\end{enumerate}
\end{Cor}
\begin{proof}
Item~(\ref{item:martin}) directly follows from
Lemma~\ref{lem:prep.is.sigma.preparation}(\ref{item:karotte2}) and
Fact~\ref{fact:esmV}.
Item~(\ref{item:gurke0}) is a special case
of~(\ref{item:martin}), and~(\ref{item:gurke1}) and~(\ref{item:gurke2}) are
trivial consequences of~(\ref{item:gurke0}).
\end{proof}
Another consequence of Lemma~\ref{lem:prep.is.sigma.preparation} is:
\begin{Lem}\label{lem:al2cc}
The forcing notion $\mathbb{R}$ is $\al2$-cc.
\end{Lem}
\begin{proof}
Recall that we assume that $V$ (and hence $V[G]$)
satisfies CH.
Assume towards a contradiction that $(x_i:i< \omega_2)$ is an antichain.
Using CH we may without loss of generality assume that for each
$i\in\omega_2$ the transitive collapse of $(M^{x_i},\bar P^{x_i})$ is the
same. Set $L_i\coloneqq M^{x_i}\cap\om2$. Using the $\Delta$-lemma we
find some uncountable $I\subseteq \om2$ such
that the $L_i$ for $i\in I$ form a $\Delta$-system with root~$L$.
Set $\alpha_0=\sup(L)+ 3$.
Moreover, we may assume $\sup(L_i)<\min(L_j\setminus \alpha_0)$ for all
$i<j$.
Now take any $i,j\in I$, set $x\coloneqq x_i$ and $y\coloneqq x_j$, and use
Lemma~\ref{lem:prep.is.sigma.preparation}(\ref{item:karotte6}).
Finally, use Fact~\ref{fact:esmV} to find $z\le x_i, x_j$, contradicting the assumption that the $x_i$ form an antichain.
\end{proof}
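In a picture (illustration only), the thinning is designed to produce, for $i<j$ in $I$,
\[
L_i\cap\alpha_0=L_j\cap\alpha_0=L,\qquad
L_i\setminus\alpha_0\subseteq[\alpha_0,\alpha_1),\qquad
L_j\setminus\alpha_0\subseteq[\alpha_1,\om2)
\]
for a suitable $\alpha_0\le\alpha_1<\om2$, which is exactly the configuration required in Lemma~\ref{lem:prep.is.sigma.preparation}(\ref{item:karotte6}).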
\subsection{The generic forcing $\mathbf{P}'$}
Let $G$ be $\mathbb{R}$-generic. Obviously $G$ is a $\le_{\mathbb{R}}$-directed system.
Using the canonical embeddings, we can construct in $V[G]$ a direct
limit $\mathbf{P}'_{\om2} $ of the directed system~$G$:
Formally, we set
\[\mathbf{P}'_{\om2}\coloneqq \{(x,p):\, x\in G \text{ and } p\in P^x_{\om2}\},\]
and we
set $(y,q)\leq (x,p)$ if $y\leq_{\mathbb{R}} x$ and $q$ is (in $y$) stronger than
$i_{x,y}(p)$ (where $i_{x,y}:P^x_{\om2} \to P^y_{\om2} $ is the canonical embedding).
Similarly, we define for each $\alpha$
\[\mathbf{P}'_{\alpha}\coloneqq \{(x,p):\, x\in G,\, \alpha\in
M^x\text{ and } p\in P^x_\alpha\}\] with the same order.
To summarize:
\begin{Def}\label{def:BPstrich} For $\alpha\leq\om2$,
the direct limit of the $P^x_\alpha$ with $x\in G$ is called
$\mathbf{P}'_{\alpha}$.
\end{Def}
Formally, elements of $\mathbf{P}'_{\om2}$ are defined as pairs $(x,p)$. However, the $x$
does not really contribute any information.
In particular:
\begin{Fact}\label{facts:trivial66}
\begin{enumerate}
\item Assume that $(x,p^x)$ and $(y,p^y)$ are in $\mathbf{P}'_{\om2}$,
that $y\leq x$, and that the canonical embedding $i_{x,y}$ witnessing~$y\le x$
maps $p^x$ to $p^y$. Then $(x,p^x)=^*(y,p^y)$.
\item $(y,q)$ is in $\mathbf{P}'_{\om2}$ stronger than $(x,p)$ iff
for some (or equivalently: for any)
$z\leq x,y$ in $G$ the canonically embedded $q$
is in $P^z_{\om2}$ stronger than the canonically embedded $p$.
The same holds if ``stronger than'' is replaced by ``compatible
with'' or by ``incompatible with''.
\item\label{item:bla3} If $(x,p)\in\mathbf{P}'_\alpha$, and if $y$ is such that
$M^y=M^x$ and $\bar P^y\mathord\restriction\alpha=\bar P^x\mathord\restriction\alpha$,
then $(y,p)=^*(x,p)$.
\end{enumerate}
\end{Fact}
In the following, we will therefore often abuse notation and just write $p$ instead of
$(x,p)$ for an element of~$\mathbf{P}'_\alpha$.
We can define a natural restriction map from $\mathbf{P}'_{\om2}$ to $\mathbf{P}'_\alpha$, by
mapping $(x,p)$ to $(x,p\mathord\restriction \alpha)$. Note that by the fact above,
we can assume without loss of generality that $\alpha\in M^x$.
More exactly: There is a $y\leq x$ in~$G$ such that $\alpha\in M^y$ (according
to Corollary~\ref{cor:gurke3}). Then in $\mathbf{P}'_{\om2}$ we have $(x,p)=^*(y,p)$.
\begin{Fact} \label{fact:5.12}
The following is forced by $\mathbb{R}$:
\begin{itemize}
\item $\mathbf{P}'_\beta$ is completely embedded into $ \mathbf{P}'_\alpha$
for $\beta< \alpha\le \om2$
(witnessed by the natural restriction map).
\item If $x\in G$, then $P^x_\alpha$ is $M^x$-completely embedded
into $\mathbf{P}'_\alpha$ for $\alpha\leq\om2$
(by the identity map
$p\mapsto (x,p)$).
\item If $\cf(\alpha)>\omega$, then $\mathbf{P}'_\alpha$ is the union of the
$\mathbf{P}'_\beta$ for $\beta<\alpha$.
\item By definition, $\mathbf{P}'_{\om2}$ is a subset of $V$.
\end{itemize}
\end{Fact}
$G$ will always denote an $\mathbb{R}$-generic filter, while the $\mathbf{P}'_{\om2}$-generic
filter over $V[G]$ will be denoted by $H'_{\om2}$ (and the induced
$\mathbf{P}'_\alpha$-generic by $H'_\alpha$). Recall that for each $x\in G$, the map
$p\mapsto (x,p)$ is an $M^x$-complete embedding of $P^x_{\om2}$ into $\mathbf{P}'_{\om2}$ (and of
$P^x_\alpha$ into $\mathbf{P}'_\alpha$). This way $H'_\alpha\subseteq \mathbf{P}'_\alpha$ induces an $M^x$-generic
filter $H^x_\alpha \subseteq P^x_\alpha$.
So $x\in \mathbb{R}$ forces that $\mathbf{P}'_\alpha$ is approximated by
$P^x_\alpha$. In particular we get:
\begin{Lem}\label{lem:pathetic0}
Assume that $x\in \mathbb{R}$, that $\alpha\leq \om2$ is in $M^x$, that $p\in P^x_\alpha$, that $\varphi(t)$ is a first order formula of the language $\{\in\}$ with
one free variable $t$
and that $\dot \tau$ is a $P^x_\alpha$-name in $M^x$.
Then $M^x\models p\Vdash_{P^x_\alpha} \varphi(\dot\tau)$ iff
$x\Vdash_\mathbb{R} (x,p)\Vdash_{\mathbf{P}'_\alpha} M^x[H^x_\alpha]\models \varphi(\dot\tau[H^x_\alpha])$.
\end{Lem}
\begin{proof}
``$\Rightarrow$'' is clear. So assume that $\varphi(\dot\tau)$ is not forced
in $M^x$. Then some $q\leq_{P^x_\alpha} p$ forces the negation.
Now $x$ forces that $(x,q)\leq (x,p)$ in $\mathbf{P}'_\alpha$; but
the conditions $(x,p)$ and $(x,q)$ force contradictory statements.
\end{proof}
\subsection{The inductive proof of ccc} \label{sec:ccc}
We will now prove by induction on~$\alpha$
that $\mathbf{P}'_\alpha$ is (forced to be) ccc and
(equivalent to) an alternating iteration. Once we know this, we can prove
Lemma~\ref{lem:elemsub}, which easily implies all the lemmas in this section. So
in particular these lemmas will only be needed to prove
ccc and not for anything else (and they will probably not
aid the understanding of the construction).
In this section, we try to stick to the following notation: $\mathbb{R}$-names are
denoted with a tilde underneath (e.g., $\n \tau$), while $P^x_\alpha$-names or
$\mathbf{P}'_\alpha$-names (for any $\alpha\le\om2$) are denoted with a dot accent
(e.g., $\dot\tau$). We use both accents when we deal with $\mathbb{R}$-names for
$\mathbf{P}'_\alpha$-names (e.g., $\nd\tau$).
We first prove a few lemmas that are easy generalizations of the following
straightforward observation:
Assume that $x \Vdash_\mathbb{R}(\n z,\n p)\in\mathbf{P}'_\alpha$. In particular,
$x\Vdash \n z\in G$. We first strengthen $x$ to some $x_1$ that decides $\n
z$ and $\n p$ to be $z^*$ and $p^*$. Then $x_1\leq^* z^*$ (the order $\leq^*$
is defined on page~\pageref{def:starorder}), so we can further strengthen
$x_1$ to some $y\leq z^*$. By definition, this means that $z^*$ is canonically
embedded into $\bar P^y$; so (by Fact~\ref{facts:trivial66})
the $P^{z^*}_\alpha$-condition $p^*$ can be interpreted
as a $P^y_\alpha$-condition as well. So we end up with some $y\leq x$ and a
$P^y_\alpha$-condition $p^*$ such that $y\Vdash_\mathbb{R} (\n z,\n p)=^*(y, p^*)$.
Since $\mathbb{R}$ is $\sigma$-closed, we can immediately generalize this to countably
many ($\mathbb{R}$-names for) $\mathbf{P}'_{\alpha}$-conditions:
\begin{Fact}\label{fact:pathetic1}
Assume that $x\Vdash_{\mathbb{R}} \n p_n\in \mathbf{P}'_\alpha$ for all $n\in\omega$.
Then there is a $y\leq x$ and there are $p_n^*\in P^y_\alpha$ such that
$y\Vdash_{\mathbb{R}} \n p_n=^*p_n^*$ for all $n\in\omega$.
\end{Fact}
Recall that more formally we should write: $x\Vdash_{\mathbb{R}} (\n z_n,\n p_n)\in
\mathbf{P}'_\alpha$; and $y\Vdash_{\mathbb{R}} (\n z_n,\n p_n)=^*(y,p_n^*)$.
We will need a
variant of the previous fact:
\begin{Lem}\label{lem:pathetic2}
Assume that $\mathbf{P}'_\beta$ is forced to be ccc, and assume that
$x$ forces (in ${\mathbb{R}}$) that $\nd r_n$ is a $\mathbf{P}'_\beta$-name
for a real (or an HCON object) for every $n\in\omega$.
Then there is a $y\leq x$ and there are $P^y_\beta$-names $\dot{r}^*_n$
in $M^y$ such that
$y\Vdash_{\mathbb{R}} ( \Vdash _{\mathbf{P}'_\beta} \nd r_n=\dot{r}^*_n)$ for all $n$.
\end{Lem}
(Of course, we mean: $\nd r_n$ is evaluated by $G*H'_\beta$, while $\dot{r}^*_n$
is evaluated by $H_\beta^y$.)
\begin{proof}
The proof is an obvious consequence of the previous fact,
since names of reals in a
ccc forcing can be viewed as a countable sequence of conditions.
In more detail:
For notational simplicity assume all $\nd r_n$ are names for
elements
of $2^\omega$.
Working in $V$, we can find for each $n,m\in\omega$
names for a maximal antichain $\n A_{n,m}$ and for a function $\n f_{n,m}:\n A_{n,m}\to 2$
such that $x$ forces that ($\mathbf{P}'_\beta$ forces that) $\nd r_n(m)=\n f_{n,m}(a)$
for the unique $a\in \n A_{n,m}\cap H'_\beta$.
Since $\mathbf{P}'_\beta$ is ccc, each $\n A_{n,m}$ is countable, and
since ${\mathbb{R}}$ is $\sigma$-closed, it is forced
that the sequence $\n\Xi=(\n A_{n,m},\n
f_{n,m})_{n,m\in\omega}$ is in $V$.
In $V$, we strengthen $x$ to $x_1$ to decide $\n\Xi$
to be some $\Xi^*$. We can also assume that
$\Xi^*\in M^{x_1}$ (see Corollary~\ref{cor:gurke3}).
Each $A^*_{n,m}$ consists of countably many $a$ such that
$x_1$ forces $a\in\mathbf{P}'_\beta$. Using Fact~\ref{fact:pathetic1}
iteratively (and again the fact that ${\mathbb{R}}$ is $\sigma$-closed)
we get some $y\leq x_1$ such that each such $a$ is actually
an element of $P^y_\beta$. So in $M^y$, we can use
$( A^*_{n,m}, f^*_{n,m})_{n,m\in \omega}$ to construct $P^y_\beta$-names $\dot{r}^*_n$
in the obvious way.
Now assume that $y\in G$ and that $H'_\beta$ is $\mathbf{P}'_\beta$-generic
over $V[G]$. Fix any $a\in A^*_{n,m}=\n A_{n,m}$.
Since $a\in P^y_\beta$, we get $a \in H^y_\beta$ iff $a\in H'_\beta$.
So there is a unique element $a$ of $A^*_{n,m}\cap H^y_\beta$, and
$\dot{r}^*_n(m)=f^*_{n,m}(a)=\n f_{n,m}(a)=\nd r_n(m)$.
\end{proof}
We will also need the following modification:
\begin{Lem}\label{lem:pathetic3}
(Same assumptions as in the previous lemma.)
In $V[G][H'_\beta]$, let $\mathbf{Q}_\beta$ be the union of $Q^z_\beta[H^z_\beta]$
for all $z\in G$.
In $V$, assume that $x$ forces that each $\nd r_n$ is a name for
an element of $\mathbf{Q}_\beta$. Then there is a $y\leq x$ and there is in $M^y$
a sequence $(\dot r^*_n)_{n\in\omega}$ of
$P^y_\beta$-names for elements of $Q^y_\beta$ such that
$y$ forces $\nd r_n=\dot r^*_n$ for all $n$.
\end{Lem}
So the difference to the previous lemma is: We additionally assume that $\nd
r_n$ is in $\bigcup_{z\in G}Q^z_\beta$, and we additionally get that $\dot r^*_n$ is
a name for an element of $Q^y_\beta$.
\begin{proof}
Assume $x\in G$ and work in $V[G]$. Fix $n$.
$\mathbf{P}'_\beta$ forces that there is some
$y_n\in G$ and some
$P^{y_n}_\beta$-name $\tau_n\in M^{y_n}$ of an element of $Q^{y_n} _\beta$ such that
$\nd r_n$ (evaluated by $H'_\beta$) is the same as
$\tau_n$ (evaluated by $H^{y_n} _\beta$).
Since we assume that $\mathbf{P}'_\beta$ is ccc, we can
find a countable set $Y_n\subseteq G$ of the possible $y_n$,
i.e., the empty condition of $\mathbf{P}'_\beta$ forces $y_n\in Y_n$.
(As $\mathbb{R}$ is $\sigma$-closed and $Y_n\subseteq \mathbb{R} \subseteq V$,
we must have $Y_n\in V$.)
So in $V$, there is (for each $n$) an $\mathbb{R}$-name $\n Y_n$ for this countable
set. Since $\mathbb{R}$ is $\sigma$-closed, we can find some $z_0 \leq x$ deciding
each $\n Y_n$ to be some countable set $Y_n^* \subseteq \mathbb{R} $.
In particular,
for each $y\in Y_n^*$ we know that $z_0 \Vdash_\mathbb{R} y\in G$, i.e.,
$z_0 \leq^* y$;
so using once again that ${\mathbb{R}}$ is $\sigma$-closed we can find
some $z$ stronger
than $z_0 $ and all the $y\in \bigcup_{n\in\omega} Y^*_n$.
Let $X$ be the set of all $\tau$ such that for some $y\in \bigcup_{n\in\omega} Y^*_n$,
$\tau\in M^y$ is a $P^y_\beta$-name for a $Q^y_\beta$-element.
Since $z\leq y$,
each $\tau\in X$ is actually\footnote{%
Here we use two consequences of
$z\leq y$: Every $P^y_\beta$-name in $M^y$ can be canonically interpreted
as a $P^z_\beta$-name in $M^z$, and $Q^y_\beta$ is (forced to be) a subset
of $Q^z_\beta$.}
a $P^z_\beta$-name
for an element of $Q^z_\beta$.
So $X$ is a set of $P^z_\beta$-names for $Q^z_\beta$-elements;
we can assume that $X\in M^z$.
Also, $z$ forces that $\nd r_n\in X$ for all~$n$.
Using Lemma~\ref{lem:pathetic2}, we can additionally assume that there
are $P^z_\beta$-names $\dot r^*_n$ in $M^z$ such that
$z$ forces that $\nd r_n = \dot r^*_n$ is forced (by $\mathbf{P}'_\beta$) for each $n$.
By Lemma~\ref{lem:pathetic0}, we know that $M^z$
thinks that $P^z_\beta$ forces that $\dot r^*_n\in X$. Therefore
$\dot r^*_n$ is a $P^z_\beta$-name for a $Q^z_\beta$-element.
\end{proof}
We now prove by induction on $\alpha$ that $\mathbf{P}'_\alpha$ is equivalent to a ccc
alternating iteration:
\begin{Lem}\label{lem:halbfett}
The following holds in $V[G]$ for $\alpha<\om2$:
\begin{enumerate}
\item\label{item:iteration}
$\mathbf{P}'_\alpha$ is equivalent to
an alternating iteration. More formally:
There is an
iteration
$(\mathbf{P}_\beta,\mathbf{Q}_\beta)_{\beta<\alpha}$ with limit $\mathbf{P}_\alpha$
that satisfies the definition of alternating iteration
(up to $\alpha$), and there is
a naturally defined
dense embedding
$j_\alpha:\mathbf{P}'_\alpha\to \mathbf{P}_\alpha$,
such that
for $\beta < \alpha$ we have $j_\beta \subseteq j_\alpha$, and
the embeddings commute with the restrictions.\footnote{I.e.,
$j_\beta(x,p\mathord\restriction \beta) = j_\alpha(x,p\mathord\restriction \beta) = j_\alpha(x,p) \mathord\restriction
\beta$.}
Each $\mathbf{Q}_\alpha$ is the union of all $Q^x_\alpha$
with~$x\in G$.
For $x\in G$ with $\alpha\in M^x$,
the function $i_{x,\alpha}: P^x_\alpha\to \mathbf{P}_\alpha$
that maps $p$ to $j_\alpha(x,p)$ is the canonical $M^x$-complete embedding.
\item In particular, a $\mathbf{P}'_\alpha$-generic filter $H'_\alpha$
can be translated into a $\mathbf{P}_\alpha$-generic filter which we call
$H_\alpha$ (and vice versa).
\item\label{item:a1}
$\mathbf{P}_\alpha$ has a dense subset of size~$\al1$.
\item\label{item:ccc}
$\mathbf{P}_\alpha$ is ccc.
\item\label{item:ch}
$\mathbf{P}_\alpha$ forces CH.
\end{enumerate}
\end{Lem}
\begin{proof}
$\alpha=0$ is trivial (since $\mathbf{P}_0 $ and $\mathbf{P}'_0$ both
are trivial: $\mathbf{P}_0$ is a singleton, and $\mathbf{P}'_0$ consists of
pairwise compatible elements).
So assume that all items hold for all $\beta<\alpha$.
\proofsection{Proof of (\ref{item:iteration})}
\emph{\textbf{Ultralaver successor case:}} Let $\alpha=\beta+1$ with $\beta$
an ultralaver position.
Let $H_\beta$ be $\mathbf{P}_\beta$-generic over $V[G]$. Work in $V[G][H_\beta]$.
By induction, for every $x\in G$ the canonical embedding $i_{x,\beta}$
defines a $P^x_\beta$-generic filter over~$M^x$ called~$H^x_\beta$.
\emph{Definition of $\mathbf{Q}_\beta$ (and thus of $\mathbf{P}_{\alpha}$):}
In $M^x[H^x_\beta]$, the forcing notion $Q^x_\beta$ is defined as $\mathbb{L}_{\bar D^x}$
for some system of ultrafilters $\bar D^x$ in $M^x[H^x_\beta]$.
Fix some $s\in\omega^{{<}\omega}$.
If $y\leq x$ in $G$, then $D_s^y$ extends $D_s^x$.
Let $D_s$ be the union of all $D_s^x$ with $x\in G$.
So $D_s$ is a proper filter. It is even an ultrafilter:
Let $r$ be a $\mathbf{P}_\beta$-name for a real.
Using Lemma~\ref{lem:pathetic2}, we know that there is
some $y\in G$ and some
$P^y_\beta$-name $\n r^y\in M^y$
such that (in $V[G][H_\beta]$) we have
$\n r^y[H^y_\beta]=r$.
So $r\in M^y[H^y_\beta]$, hence
either $r$ or its complement is in $D_s^y$ and therefore in $D_s$.
So all filters in the family $\bar D = (D_s)_{s\in\omega^{{<}\omega}}$
are ultrafilters.
Now work again in $V[G]$. We set $\mathbf{Q}_\beta$ to be the
$\mathbf{P}_\beta$-name for $\mathbb{L}_{\bar D}$.
(Note that $\mathbf{P}_\beta$ forces that $\mathbf{Q}_\beta$ literally is the
union of the $Q^x_\beta[H^x_\beta]$ for $x\in G$,
again by Lemma~\ref{lem:pathetic2}.)
\emph{Definition of $j_\alpha$:}
Let $(x,p)$ be in $\mathbf{P}'_\alpha$.
If $p\in P^x_\beta$, then we set $j_\alpha(x,p)=j_\beta(x,p)$,
i.e., $j_\alpha$ will extend $j_\beta$. If $p=(p\mathord\restriction\beta,p(\beta))$
is in $P^x_\alpha$ but not in $P^x_\beta$, we set
$j_\alpha(x,p)=(r,s)\in \mathbf{P}_\beta*\mathbf{Q}_\beta$ where
$r=j_\beta(x,p\mathord\restriction\beta)$ and
$s$ is the ($\mathbf{P}_\alpha$-name for) $p(\beta)$ as evaluated in
$M^x[H^x_\beta]$. From $\mathbf{Q}_\beta = \bigcup_{x\in G} Q^x_\beta[H^x_\beta]$
we conclude that
this embedding is dense.
\emph{The canonical embedding:}
By induction we know that $i_{x,\beta}$ which
maps $p\in P^x_\beta$ to $j_\beta(x,p)$ is
(the restriction to $P^x_\beta$ of) the canonical
embedding of $x$ into $\mathbf{P}_{\om2}$. So we have to extend the
canonical embedding to $i_{x,\alpha}:P^x_\alpha\to \mathbf{P}_\alpha$.
By definition of ``canonical embedding'', $i_{x,\alpha}$ maps
$p\in P^x_\alpha$ to the pair
$(i_{x,\beta}(p\mathord\restriction\beta), p(\beta))$.
This is the same as $j_\alpha(x,p)$.
We already know that $D^x_s$ is (forced to be)
an $M^x[H^x_\beta]$-ultrafilter that is extended by~$D_s$.
\emph{\textbf{Janus successor case:}} This is similar, but simpler than
the previous case: Here, $\mathbf{Q}_\beta$ is just defined as the union of
all $Q^x_\beta[H^x_\beta]$ for~$x\in G$.
We will show below that this union satisfies the ccc;
just as in Fact~\ref{fact:janus.ctblunion},
it is then easy to see that this union is again a Janus forcing.
In particular, $\mathbf{Q}_\beta$ consists of hereditarily countable objects
(since it is the union of Janus forcings, which by definition
consist of hereditarily countable objects).
So since $\mathbf{P}_\beta$ forces CH,
$\mathbf{Q}_\beta$ is forced to have size~$\al1$.
Also note that since all Janus forcings involved are separative,
the union (which is a limit of an in\-com\-patibility-preserving
directed system)
is trivially separative as well.
\emph{\textbf{Limit case:}} Let $\alpha$ be a limit ordinal.
\emph{Definition of $\mathbf{P}_\alpha$ and $j_\alpha$:}
First we define $j_\alpha: \mathbf{P}_\alpha' \to \mathbf{P}^{\textrm{\textup{CS}}}_\alpha$: For each $(x,p)\in \mathbf{P}'_\alpha$,
let $j_\alpha(x,p)\in \mathbf{P}^{\textrm{\textup{CS}}}_\alpha$
be the union of all $j_\beta(x,p\mathord\restriction \beta)$ (for $\beta\in \alpha\cap M^x$).
(Note that $\beta_1<\beta_2$ implies that $j_{\beta_1}(x,p\mathord\restriction \beta_1)$ is
a restriction of $ j_{\beta_2}(x,p\mathord\restriction \beta_2)$, so this union is indeed
an element of $\mathbf{P}^{\textrm{\textup{CS}}}_\alpha$.)
$\mathbf{P}_\alpha$ is the set of all $q\wedge p$, where $p\in j_\alpha[\mathbf{P}'_\alpha]$,
$q\in \mathbf{P}_\beta$ for some $\beta < \alpha$, and $q \le p\mathord\restriction \beta$.
It is easy
to check that $\mathbf{P}_\alpha$ actually is a partial countable
support limit, and that $j_\alpha$ is dense.
We will show below that $\mathbf{P}_\alpha$ satisfies the ccc, so in
particular it is proper.
\emph{The canonical embedding:}
To see that $i_{x,\alpha}$ is the (restriction of the) canonical embedding,
we just have to check that $i_{x,\alpha}$ is $M^x$-complete.
This is the case since
$\mathbf{P}'_\alpha$ is the direct limit of all $P^y_\alpha$
for $y\in G$ (without loss of generality $y\le x$),
and each $i_{x,y}$ is $M^x$-complete (see Fact~\ref{fact:5.12}).
\proofsection{Proof of (\ref{item:a1})}
Recall that we assume CH in the ground model.
The successor case, $\alpha=\beta+1$, follows easily from
(\ref{item:a1})--(\ref{item:ch}) for $\mathbf{P}_\beta$
(since $\mathbf{P}_\beta$
forces that $\mathbf{Q}_\beta$ has size $2^{\al0}=\aleph_1 = \aleph_1^V$).
If $\cf(\alpha)>\omega$,
then $\mathbf{P}_\alpha=\bigcup_{\beta<\alpha} \mathbf{P}_\beta$, so the proof is easy.
So let $\cf(\alpha)= \omega $. The following straightforward argument works for
any ccc partial CS iteration where all iterands $\mathbf{Q}_\beta$ are of size $\le \aleph_1$.
For notational simplicity we assume $\Vdash_{\mathbf{P}_\beta } \mathbf{Q}_\beta \subseteq
\omega_1$ for all $\beta<\alpha$ (this is justified by inductive assumption~(\ref{item:ch})).
By induction, we can assume that for all $\beta<\alpha$ there is a dense
$\mathbf{P}^*_\beta\subseteq \mathbf{P}_\beta$ of size~$\al1$ and that every $\mathbf{P}^*_\beta$ is ccc.
For each $p\in \mathbf{P}_\alpha$ and
all $\beta\in \dom(p)$
we can find a maximal antichain
$A^p_\beta\subseteq \mathbf{P}_\beta^*$ such that each element $a\in A^p_\beta$
decides the value of $p(\beta)$, say $a \Vdash_{\mathbf{P}_\beta}
p(\beta)=\gamma^p_\beta(a)$. Writing\footnote{Since $\le $ is separative, $p\sim q$ iff $p=^*q$, but
this fact is not used here.} $p\sim q$ if $p\le q $ and $q\le p$,
the map $p\mapsto (A^p_\beta, \gamma^p_\beta)_{\beta\in\dom(p)}$ is 1-1
modulo $\sim$. Since each $A^p_\beta$ is countable,
there are only $\al1$ many possible values, therefore there are only $\al1$
many $\sim $-equivalence classes. Any set of representatives will be dense.
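Spelling out the count (a sketch; we use CH, $|\mathbf{P}^*_\beta|\le\al1$, and the
fact that each $\dom(p)$ is countable): the number of $\sim$-classes is at most
\[
\underbrace{\al1^{\al0}}_{\text{choices of }\dom(p)}\cdot
\underbrace{\bigl(\al1^{\al0}\bigr)^{\al0}}_{\text{choices of }(A^p_\beta,\gamma^p_\beta)_{\beta\in\dom(p)}}
=\al1^{\al0}=2^{\al0}=\al1.
\]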
Alternatively, we can prove~(\ref{item:a1}) directly for $\mathbf{P}'_\alpha$.
I.e.,
we can find a $\le^*$-dense subset $\mathbf{P}'' \subseteq \mathbf{P}'_\alpha$ of
cardinality~$\aleph_1$. Note that a condition $(x,p)\in \mathbf{P}'_\alpha$
essentially depends only on $p$ (cf.~Fact~\ref{facts:trivial66}). More
specifically, given $(x,p)$ we can ``transitively\footnote{%
In more detail: We define a function $f:M^x\to V$ by induction as follows: If
$\beta\in M^x\cap (\alpha+1)$ or if $\beta=\om2$, then $f(\beta)=\beta$.
Otherwise, if $\beta\in M^x\cap \ON$, then $f(\beta)$ is the smallest ordinal
above $f[\beta]$. If $a\in M^x\setminus\ON$, then $f(a)=\{f(b):\, b\in a\cap
M^x\}$. It is easy to see that $f$ is an isomorphism from $M^x$ to
$M^{x'}\coloneqq f[M^x]$ and that $M^{x'}$ is a candidate. Moreover,
the ordinals that occur in $M^{x'}$ form a subset of $(\alpha+\om1)\cup
[\om2,\om2+\om1]$; i.e., there are $\al1$ many
ordinals that can possibly occur in $M^{x'}$, and therefore there are
$2^\al0$ many possible such candidates. Moreover, setting $p'\coloneqq f(p)$,
it is easy to check that $(x,p)=^*(x',p')$ (similarly to Fact~\ref{facts:trivial66}).
}
collapse $x$ above
$\alpha$'', resulting in a $=^*$-equivalent condition $(x',p')$. Since
$|\alpha|=\al1$, there are only $\al1^{\al0}=2^{\al0}$ many such candidates
$x'$ and since each $x'$ is countable and $p'\in x'$, there are only
$2^{\al0}$ many pairs $(x',p')$.
\proofsection{Proof of (\ref{item:ccc})}
\emph{\textbf{Ultralaver successor case:}} Let $\alpha=\beta+1$ with $\beta$ an ultralaver
position.
We already know that $\mathbf{P}_\alpha=\mathbf{P}_\beta*\mathbf{Q}_\beta$ where $\mathbf{Q}_\beta$ is an ultralaver
forcing, which in particular is ccc, so by induction $\mathbf{P}_\alpha$
is ccc.
\emph{\textbf{Janus successor case:}}
As above it suffices to show that $\mathbf{Q}_\beta$,
the union of the Janus forcings
$Q^x_\beta[H^x_\beta]$ for $x\in G$, is (forced to be) ccc.
Assume towards a contradiction that this is not the case, i.e., that we have
an uncountable antichain in~$\mathbf{Q}_\beta$. We already know that $\mathbf{Q}_\beta$
has size $\al1$ and therefore the uncountable antichain has size $\al1$. So,
working in $V$, we assume towards a contradiction that
\begin{equation}\label{eq:ijqprjqr0999}
x_0\Vdash_{\mathbb{R}} p_0\Vdash_{\mathbf{P}_\beta}
\{ \nd a_i:i\in \omega_1\}\text{ is a maximal (uncountable) antichain in }\mathbf{Q}_\beta.
\end{equation}
We construct by induction on $n\in\omega$ a
decreasing sequence of conditions such that $x_{n+1}$ satisfies the
following:
\begin{enumerate}
\item[(i)] For all $i\in\om1\cap M^{x_n}$ there is (in $M^{x_{n+1}}$)
a $P^{x_{n+1}}_\beta$-name $\dot{a}_i^*$ for
a $Q^{x_{n+1}}_\beta$-condition
such that
\[
x_{n+1}\Vdash_{\mathbb{R}} p_0\Vdash_{\mathbf{P}_\beta}\nd a_i=\dot{a}_i^*.
\]
Why can we get that? Just use Lemma~\ref{lem:pathetic3}.
\item[(ii)] If $\tau$ is in $M^{x_n}$ a $P^{x_n}_\beta$-name for
an element of $Q^{x_n}_\beta$, then there is $k^*(\tau)\in\om1$
such that
\[
x_{n+1}\Vdash_{\mathbb{R}} p_0 \Vdash_{\mathbf{P}_\beta}\,
(\exists i<k^*(\tau))\ \nd a_i \parallel_{\mathbf{Q}_\beta} \tau.
\]
Also, all these $k^*(\tau)$ are in $M^{x_{n+1}}$.
\\
Why can we get that?
First note that $x_n\Vdash p_0\Vdash (\exists i\in\om1) \ \nd a_i
\parallel \tau $. Since
$\mathbf{P}_\beta$ is ccc, $x_n$ forces that there is some bound $\n k(\tau)$
for $i$. So it suffices that $x_{n+1}$ determines $\n k(\tau)$ to be
$k^*(\tau)$ (for all the countably many $\tau$).
\end{enumerate}
Set $\delta^*\coloneqq \om1\cap\bigcup_{n\in\omega} M^{x_n}$.
By Corollary~\ref{cor:bigcor}(\ref{item:martin}), there is some $y$ such that
\begin{itemize}
\item $y \le x_n$ for all $n\in\omega$,
\item $(x_n)_{n\in\omega}$ and $(\dot{a}_i^*)_{i\in\delta^*}$ are in $M^y$,
\item ($M^y$ thinks that) $P^y_\beta$ forces that
$Q^y_\beta$ is the union of $Q^{x_n}_\beta$, i.e., as a formula:
$M^y\models P^y_\beta\Vdash Q^y_\beta=\bigcup_{n\in\omega} Q^{x_n}_\beta$.
\end{itemize}
Let $G$ be ${\mathbb{R}}$-generic (over $V$) containing $y$, and let $H_\beta$
be $\mathbf{P}_\beta$-generic (over $V[G]$) containing $p_0$.
Set $A^*\coloneqq \{\dot{a}^*_i[H^y_\beta]:\, i<\delta^*\}$.
Note that $A^*$ is in $M^y[H^y_\beta]$. We claim
\begin{equation}\label{eq:pijqr9}
A^*\subseteq Q^y_\beta[H^y_\beta]\text{ is predense.}
\end{equation}
Pick any $q_0\in Q^y_\beta$.
So there is some $n\in\omega$ and
some $\tau$ which is in
$M^{x_n}$ a $P^{x_n}_\beta$-name of a $Q^{x_n}_\beta$-condition,
such that $q_0=\tau[H^{x_n}_\beta]$. By (ii) above,
$x_{n+1}$ and therefore $y$ forces (in $\mathbb{R}$) that for some $i<k^*(\tau)$
(and therefore some $i< \delta^*$) the condition $p_0$ forces
the following (in $\mathbf{P}_\beta$):
\begin{quote}
The conditions
$\nd a_i$ and $\tau$ are compatible in~$\mathbf{Q}_\beta$. Also,
$\nd a_i=\dot a^*_i$ and $\tau $
both are in $Q^y_\beta$,
and
$Q^y_\beta$ is an incompatibility-preserving subforcing of $\mathbf{Q}_\beta$.
Therefore $M^y[H^y_\beta]$
thinks that $\dot a^*_i$ and $\tau$ are compatible.
\end{quote}
This proves~\eqref{eq:pijqr9}.
Since $Q^y_\beta[H^y_\beta]$ is $M^y[H^y_\beta]$-complete in $\mathbf{Q}_\beta[H_\beta]$,
and since $A^*\in M^y[H^y_\beta]$,
this implies
(as $\dot{a}^*_i[H^y_\beta]=\nd a_i[G*H_\beta]$ for all
$i<\delta^*$)
that $\{\nd a_i[G*H_\beta]:\, i<\delta^*\}$
already is predense, a contradiction to~\eqref{eq:ijqprjqr0999}.
\emph{\textbf{Limit case:}}
We work with $\mathbf{P}'_\alpha$, which by definition
only contains HCON objects.
Assume towards a contradiction that $\mathbf{P}'_\alpha$ has an
uncountable antichain. We already know that $\mathbf{P}'_\alpha$ has
a dense subset of size~$\al1$ (modulo $=^*$), so the antichain has size $\al1$.
Again, work in $V$. We assume towards a contradiction that
\begin{equation}\label{eq:lkjwtoi}
x_0\Vdash_{\mathbb{R}} \{\n a_i:\, i\in\om1\} \text{ is a maximal (uncountable)
antichain in }\mathbf{P}'_\alpha.
\end{equation}
So each $\n a_i$ is an ${\mathbb{R}}$-name for an HCON object $(x,p)$ in $V$.
To lighten the notation we will abbreviate elements $(x,p)\in \mathbf{P}'_\alpha$
by~$p$; this is justified by Fact~\ref{facts:trivial66}.
Fix any HCON object $p$ and $\beta<\alpha$.
We will now define
the $({\mathbb{R}}*\mathbf{P}' _\beta)$-names $\nd\iota(\beta,p)$ and $\nd r(\beta,p)$:
Let $G$ be ${\mathbb{R}}$-generic and containing $x_0$, and
$H'_\beta$ be $\mathbf{P}'_\beta$-generic.
Let $R$ be the quotient $\mathbf{P}'_\alpha / H'_\beta $.
If $p$ is not in $R$, set $\nd\iota(\beta, p)=\nd r(\beta,p)=0$.
Otherwise, let $\nd\iota(\beta, p)$ be the minimal $i$ such that
$\n a_i\in R$ and $\n a_i$ and $p$ are compatible (in $R$),
and set $\nd r(\beta, p)\in R$ to be a witness of this compatibility.
Since $\mathbf{P}'_\beta$ is (forced to be) ccc, we can find
(in~$V[G]$) a countable set $\n X^\iota(\beta, p)\subseteq \omega_1$
containing all possibilities for $\nd\iota(\beta, p)$
and similarly $\n X^r(\beta, p)$ consisting of HCON objects for $\nd r(\beta, p)$.
To summarize:
For every $\beta<\alpha$
and every HCON object $p$, we can define (in~$V$) the ${\mathbb{R}}$-names
$\n X^\iota(\beta, p)$ and $\n X^r(\beta, p)$ such that
\begin{equation}
x_0\Vdash_{\mathbb{R}} \ \Vdash_{\mathbf{P}'_\beta} \biggl(p\in \mathbf{P}'_\alpha/H'_\beta \ \rightarrow
\ (\exists i\in \n X^\iota(\beta,p))\,
(\exists r\in \n X^r(\beta, p))\ r\leq_{\mathbf{P}'_\alpha/H'_\beta}
p,\n a_i\biggr).
\end{equation}
Similarly to the Janus successor case, we define by induction on
$n\in\omega$ a decreasing sequence of conditions such that
$x_{n+1}$ satisfies the following:
For all $\beta\in\alpha\cap M^{x_n}$ and $p\in P^{x_n}_\alpha$,
$x_{n+1}$ decides $\n X^{\iota}(\beta,p)$
and $\n X^{r}(\beta,p)$ to be some $X^{\iota*}(\beta,p)$ and $X^{r*}(\beta,p)$.
For all $i\in\om1\cap M^{x_{n}}$,
$x_{n+1}$ decides $\n a_i$ to be some $a^*_i\in P^{x_{n+1}}_\alpha$.
Moreover, each
such $X^{\iota*}$ and $X^{r*}$ is in $M^{x_{n+1}}$,
and every $r\in X^{r*}(\beta,p)$ is in
$P^{x_{n+1}}_\alpha$.
(For this, we just use Fact~\ref{fact:pathetic1}
and Lemma~\ref{lem:pathetic2}.)
Set $\delta^*\coloneqq \om1\cap\bigcup_{n\in\omega} M^{x_{n}}$,
and set $A^*\coloneqq \{a^*_i:\, i\in\delta^*\}$.
By Corollary~\ref{cor:bigcor}(\ref{item:martin}), there is some $y$ such that
\begin{gather}
\text{$y\leq x_n$ for all $n\in \omega$},\\
\text{$\bar x\coloneqq (x_n)_{n\in\omega}$ and $A^*$ are in $M^y$},\\
\label{eq:gqetwet}\text{($M^y$ thinks that) $P^y_\alpha$ is defined as
the almost FS limit over $\bar x$}.
\end{gather}
We claim that $y$ forces
\begin{equation}\label{eq:khweqt}
A^*\text{ is predense in } P^y_\alpha.
\end{equation}
Since
$P^y_\alpha$ is $M^y$-completely embedded into $\mathbf{P}'_\alpha$,
and since $A^*\in M^y $ (and since
$\n a_i=a^*_i$ for all $i\in\delta^*$) we get that
$\{\n a_i:\, i\in\delta^*\}$ is predense, a contradiction
to~\eqref{eq:lkjwtoi}.
So it remains to show~\eqref{eq:khweqt}. Let $G$ be ${\mathbb{R}}$-generic containing $y$.
Let
$r$ be a condition in $P^y_\alpha$; we will find $i<\delta^*$ such that $r$
is compatible with $a^*_i$.
Since $P^y_\alpha$ is the almost FS limit over $\bar x$, there
is some $n\in\omega$ and $\beta\in \alpha\cap
M^{x_n}$ such that $r$
has the form $q\land p$ with
$p$ in $P^{x_n}_\alpha$, $q\in P^y_\beta$ and $q\le p\mathord\restriction \beta$.
Now let $H'_\beta$
be $\mathbf{P}'_\beta$-generic containing $q$.
Work in $V[G][H'_\beta]$. Since $q\leq p\mathord\restriction\beta$,
we get $p\in \mathbf{P}'_\alpha/H'_\beta$.
Let $\iota^*$ be the evaluation by $G*H'_\beta$ of $\nd\iota(\beta,p)$,
and let $r^*$ be the evaluation of $\nd r(\beta,p)$.
Note that $\iota^* < \delta^*$ and $r^*\in P^y_\alpha$.
So we know that
$a^*_{\iota^*}$ and $p$ are compatible in $\mathbf{P}'_\alpha/H'_\beta$
witnessed by $r^*$.
Find $q'\in H'_\beta$ forcing
$r^*\le_{\mathbf{P}'_\alpha/H'_\beta} p, a^*_{\iota^*}$; we may additionally assume that $q'\le q$.
Now
$q'\land r^*$ witnesses that $q\land p$ and $
a^*_{\iota^*}$ are compatible in~$P^y_\alpha$.
To summarize: The crucial point in proving the ccc is that ``densely'' we
choose (a variant of) a finite support iteration, see~\eqref{eq:gqetwet}.
Still, it is a bit surprising that we get the ccc, since we can also argue
that densely we use (a variant of) a countable support iteration.
But this does not prevent the ccc, it only prevents
the generic iteration from having direct limits in
stages
of countable cofinality.%
\footnote{Assume that $x$ forces that $\mathbf{P}'_\alpha$ is the union
of the $\mathbf{P}'_\beta$ for $\beta<\alpha$; then we can find a stronger
$y$ that uses an almost CS iteration over~$x$. This almost CS
iteration contains a condition $p$ with unbounded support. (Take
any condition in the generic part of the almost CS limit; if this
condition has bounded domain, we can extend it to have unbounded domain, see
Definition~\ref{def:almost_CS_iteration_wolfgang}.) Now $p$ will be
in $\mathbf{P}'_\alpha$ and have unbounded domain.}
\proofsection{Proof of (\ref{item:ch})}
This follows from (\ref{item:a1}) and (\ref{item:ccc}).
\end{proof}
\subsection{The generic alternating iteration $\bar{\mathbf{P}}$}
In Lemma~\ref{lem:halbfett} we have seen:
\begin{Cor}\label{cor:summary}
Let $G$ be ${\mathbb{R}}$-generic. Then we can construct\footnote{in an ``absolute
way'': Given $G$, we first define $\mathbf{P}'_\om2$ to be the direct limit of $G$,
and then inductively construct the $\mathbf{P}_\alpha$'s from $\mathbf{P}'_\om2$.}
(in $V[G]$)
an alternating iteration $\bar \mathbf{P}$ such that the following holds:
\begin{itemize}
\item $\bar \mathbf{P}$ is ccc.
\item If $x\in G$, then $x$ canonically
embeds into $\bar \mathbf{P}$.
(In particular, a $\mathbf{P}_\om2$-generic filter $H_\om2$
induces a $P^x_\om2$-generic filter over $M^x$, called $H^x_\om2$.)
\item Each $\mathbf{Q}_\alpha$ is the union of all $Q^x_\alpha[H^x_\alpha]$
with~$x\in G$.
\item $ \mathbf{P}_\om2$ is equivalent to the direct limit $\mathbf{P}'_\om2$ of $G$:
There is a dense embedding $j:\mathbf{P}'_\om2\to \mathbf{P}_\om2$, and
for each $x\in G$ the function
$p\mapsto j(x,p)$ is the canonical embedding.
\end{itemize}
\end{Cor}
\begin{Lem}\label{lem:weiothowet}
Let $x\in {\mathbb{R}}$. Then ${\mathbb{R}}$ forces the following:
$x\in G$ iff $x$ canonically embeds into $\bar \mathbf{P}$.
\end{Lem}
\begin{proof}
If $x\in G$, then we already know that $x$ canonically embeds into
$\bar \mathbf{P}$.
So assume (towards a contradiction)
that $y$ forces that $x$ embeds, but $y\Vdash x\notin G$.
Work in $V[G]$ where $y\in G$.
Both $x$ (by assumption) and $y\in G$ canonically embed into $\bar \mathbf{P}$.
Let $N$ be an elementary submodel
of $H^{V[G]}(\chi^*)$ containing $x,y,\bar \mathbf{P}$; let $z = (M^z, \bar P^z)$ be the
ord-collapse of $(N, \bar \mathbf{P})$. Then $z\in V$ (as $\mathbb{R}$ is $\sigma$-closed)
and $z\in \mathbb{R}$,
and (by elementarity) $z\leq x,y$. This shows that $x\parallel_\mathbb{R} y$,
i.e., $y$ cannot force $x\notin G$, a contradiction.
\end{proof}
Using ccc, we can now prove a lemma that is in fact stronger than
the lemmas in the previous Section~\ref{sec:ccc}:
\begin{Lem}\label{lem:elemsub}
The following is forced by ${\mathbb{R}}$: Let $N \esm H^{V[G]}(\chi^*)$ be
countable, and let $y$ be the ord-collapse of $(N,\bar\mathbf{P})$.
Then $y\in G$. Moreover, if $x\in G\cap N$, then $y \le x$.
\end{Lem}
\begin{proof}
Work in $V[G]$ with $x\in G$. Pick an elementary
submodel $N$ containing $x$ and $\bar \mathbf{P}$. Let $y$ be the
ord-collapse of $(N,\bar \mathbf{P})$ via a collapsing map $k$.
As above, it is clear that $y\in{\mathbb{R}}$
and $y\leq x$.
To show $y\in G$, it is (by the previous lemma)
enough to show that $y$ canonically embeds.
We claim that $k^{-1}$ is the canonical embedding of $y$ into $\bar\mathbf{P}$.
The crucial point is to show $M^y$-completeness. Let $B\in M^y$ be
a maximal antichain of $P^y_{\om2}$, say $B=k(A)$ where $A\in N$ is a maximal
antichain of $\mathbf{P}_{\om2}$. So (by ccc) $A$ is countable, hence $A\subseteq N$.
So not only $A=k^{-1}(B)$ but even
$A=k^{-1}[B]$.
Hence $k^{-1}$ is an $M^y$-complete embedding.
\end{proof}
\begin{Rem} We used the ccc of $\mathbf{P}_{\om2}$ to prove Lemma~\ref{lem:elemsub};
this use was essential in the sense that we can in turn easily prove the ccc of
$\mathbf{P}_{\om2}$ if we assume that Lemma~\ref{lem:elemsub} holds. In fact
Lemma~\ref{lem:elemsub} easily implies all other lemmas in
Section~\ref{sec:ccc} as well.
\end{Rem}
\section{The proof of \textup{BC+dBC}}\label{sec:proof}
We first%
\footnote{Note that for this weak version, it would be enough to
produce a generic iteration of length 2 only, i.e., $\mathbf{Q}_0*\mathbf{Q}_1$, where
$\mathbf{Q}_0$ is an ultralaver forcing and $\mathbf{Q}_1$ a corresponding Janus forcing.}
prove that no uncountable $X$ in $V$ will be smz or sm in the
final extension $V[G*H]$.
Then we show how to modify the argument to work for all uncountable sets
in $V[G*H]$.
\subsection{\textup{BC+dBC} for ground model sets.}\label{sec:groundmodel}
\begin{Lem}\label{lem:6.1}
Let $X \in V$ be an uncountable set of reals.
Then $\mathbb{R}*\mathbf{P}_\om2$ forces that $X$ is not smz.
\end{Lem}
\begin{proof}\
\begin{enumerate}
\item
Fix any even $\alpha< \om2$ (i.e., an
ultralaver position) in our iteration.
The ultralaver forcing $\mathbf{Q}_\alpha$ adds a
(canonically defined code for a)
closed null set $\dot F$ constructed from the
ultralaver real $\bar \ell_\alpha$.
(Recall Corollary~\ref{cor:absolutepositive}.)
In the following, when
we consider various ultralaver forcings $\mathbf{Q}_\alpha$, $Q_\alpha$, $Q^x_\alpha$,
we treat
$\dot F$ not as an actual name, but rather as a definition
which depends on the forcing used.
\item
According to Theorem~\ref{thm:pawlikowski}, it is enough to show that $X+\dot
F$ is non-null in the $\mathbb{R}*\mathbf{P}_{\om2}$-extension, or equivalently, in every
$\mathbb{R}*\mathbf{P}_{\beta}$-extension ($\alpha < \beta<\om2$).
So assume towards a contradiction that there is a $\beta > \alpha$
and an $\mathbb{R}*\mathbf{P}_{\beta}$-name
$\nd Z$ of a (code for a) Borel null set such that some
$(x,p)\in \mathbb{R}*\mathbf{P}_\om2$ forces that
$X + \dot F \subseteq \nd Z$.
\item Using the dense embedding $j_\om2:\mathbf{P}'_\om2\to \mathbf{P}_\om2$, we may
replace $(x,p)$ by a condition $(x,p')\in \mathbb{R}*\mathbf{P}'_\om2$.
According to
Fact~\ref{fact:pathetic1} (recall that we now know that $\mathbf{P}_\om2$
satisfies ccc)
and Lemma~\ref{lem:pathetic2}
we can assume
that $p'$ is already a $P^x_\beta$-condition $p^x$
and that
$\nd Z $ is (forced by $x$ to be the same as)
a $P^x_\beta$-name $\dot Z^x$ in $M^x$.
\item
We construct (in $V$) an iteration $\bar P$ in the following way:
\begin{enumerate}
\item[(a)]
Up to $\alpha$, we take an arbitrary alternating iteration
into which $x$ embeds.
In particular, $P_\alpha$ will be proper and hence
force that $X$ is still uncountable.
\item[(b)] Let $Q_\alpha$ be any ultralaver forcing (over $Q^x_\alpha$
in case $\alpha\in M^x$).
So according
to Corollary~\ref{cor:absolutepositive}, we know that
$Q_\alpha$ forces that $X+\dot F$ is not null.
Therefore we can pick (in $V[H_{\alpha+1}]$)
some $\dot r$ in $X+\dot F$ which is random over
(the countable model)
$M^x[H^x_{\alpha+1}]$, where $H^x_{\alpha+1}$ is induced
by $H_{\alpha+1}$.
\item[(c)] In the rest of the construction, we preserve
randomness of $\dot r$ over $M^x[H^x_{\zeta}]$ for each $\zeta\le \om2$.
We can do this using an almost CS iteration
over~$x$
where at each Janus position we use a random version of Janus forcing and
at each ultralaver position we use a suitable ultralaver forcing;
this is possible by Lemma~\ref{lem:4.28}.
By Lemma~\ref{lem:preservation.variant}, this iteration
will preserve the randomness of $\dot r$.
\item[(d)] So we get $\bar P$ over $x$
(with canonical embedding $i_x$) and $q\leq_{P_\om2} i_x(p^x)$
such that $q\mathord\restriction\beta$ forces (in $P_\beta$)
that $\dot r$ is random over $M^x[H^x_{\beta}]$, in particular that
$\dot r\notin \dot Z^x$.
\end{enumerate}
We now pick a countable $N\esm H(\chi^*)$ containing
everything and ord-collapse $(N,\bar P)$ to $y\leq x$. (See Fact~\ref{fact:esmV}.)
Set $X^y\coloneqq X\cap M^y$ (the image of $X$ under the collapse).
By elementarity, $M^y$ thinks that (a)--(d) above holds for $\bar P^y$
and that $X^y$ is uncountable. Note that $X^y\subseteq X$.
\item
This gives a contradiction in the obvious way:
Let $G$ be $\mathbb{R}$-generic over $V$ and contain $y$,
and let $H_\beta$ be $\mathbf{P}_\beta$-generic over $V[G]$ and contain $q\mathord\restriction\beta$.
So $M^y[H^y_\beta]$ thinks that $r\notin \dot Z^x$ (which is
absolute) and that $r=\xi+f$ for some $\xi\in X^y\subseteq X$
and $f\in F$ (actually even in $F$ as evaluated in $M^y[H^y_{\alpha+1}]$).
So in $V[G][H_\beta]$, $r$ is the sum of an element of $X$
and an element of $F$. So $(y,q)\leq (x,p')$ forces that $\dot r\in (X+\dot
F)\setminus \nd Z$, a contradiction to~(2).
\qedhere
\end{enumerate}
\end{proof}
Of course, we need this result not just for ground model sets $X$, but for
$\mathbb{R}*\mathbf{P}_{\om2}$-names $\nd X=(\nd \xi_i:i\in\om1)$ of uncountable sets. It is
easy to see that it is enough to deal with $\mathbb{R}*\mathbf{P}_{\beta}$-names for (all)
$\beta<\om2$. So given $\nd X$, we can (in the proof) pick $\alpha$ such that
$\nd X$ is actually an $\mathbb{R}*\mathbf{P}_{\alpha}$-name. We can try to repeat the same
proof; however, the problem is the following: When constructing $\bar P$
in~(4), it is not clear how to simultaneously make all the uncountably many
names $(\nd \xi_i)$ into $\bar P$-names in a sufficiently ``absolute'' way. In
other words: It is not clear how to end up with some $M^y$ and $\dot X^y$
uncountable in $M^y$ such that it is guaranteed that $\dot X^y$ (evaluated in
$M^y[H^y_{\alpha}]$) will be a subset of $\nd X$ (evaluated in
$V[G][H_\alpha]$). We will solve this problem in the next section by factoring
$\mathbb{R}$.
Let us now give the proof of the corresponding weak version of dBC:
\begin{Lem}\label{lem:6.2}
Let $X \in V$ be an uncountable set of reals.
Then $\mathbb{R}*\mathbf{P}_\om2$ forces that $X$ is not strongly meager.
\end{Lem}
\begin{proof}
The proof is parallel to the previous one:
\begin{enumerate}
\item
Fix any even $\alpha< \om2$ (i.e., an
ultralaver position) in our iteration.
The Janus forcing $\mathbf{Q}_{\alpha+1}$
adds a (canonically defined code for a)
null set $\dot Z_\nabla$.
(See Definition~\ref{def:Znabla} and Fact~\ref{fact:Znablaabsolute}.)
\item
According to~\eqref{eq:notsm}, it is enough to show that
$X+\dot Z_\nabla=2^\omega$ in the $\mathbb{R}*\mathbf{P}_{\om2}$-extension, or equivalently, in every
$\mathbb{R}*\mathbf{P}_{\beta}$-extension ($\alpha<\beta<\om2$).
(For every real $r$, the statement
$r\in X+\dot Z_\nabla$, i.e., $(\exists \xi\in X)\ \xi+r\in\dot Z_\nabla$, is absolute.)
So assume towards a contradiction that there is a $\beta > \alpha$
and an $\mathbb{R}*\mathbf{P}_{\beta}$-name
$\nd r$ of a real such that some
$(x,p)\in \mathbb{R}*\mathbf{P}_\om2$ forces that
$\nd r\notin X + \dot Z_\nabla$.
\item Again, we can assume that $\nd r $ is a $P^x_\beta$-name $\dot r^x$ in $M^x$.
\item
We construct (in $V$) an iteration $\bar P$ in the following way:
\begin{enumerate}
\item[(a)]
Up to $\alpha$, we take an arbitrary alternating
iteration into which $x$ embeds.
In particular, $P_\alpha$ again forces that $X$ is still uncountable.
\item[(b1)] Let $Q_\alpha$ be any ultralaver forcing (over $Q_\alpha^x$). Then
$Q_\alpha$ forces that $X$ is not thin
(see Corollary~\ref{cor:LDnotthin}).
\item[(b2)] Let $Q_{\alpha+1}$ be a countable Janus forcing.
So $Q_{\alpha+1}$ forces $X+\dot Z_\nabla=2^\omega$. (See Lemma~\ref{lem:janusnotmeager}.)
\item[(c)] We continue the iteration in a $\sigma$-centered way.
I.e., we use an almost FS iteration over $x$ of
ultralaver forcings and countable Janus forcings,
using trivial $Q_\zeta$ for all $\zeta\notin M^x$; see
Lemma~\ref{lem:4.17}.
\item[(d)] So $P_\beta$ still
forces that $X+\dot Z_\nabla=2^\omega$, and in particular that $\dot
r^x\in X+\dot Z_\nabla$.
(Again by Lemma~\ref{lem:janusnotmeager}.)
\end{enumerate}
Again, by collapsing some $N$ as in the previous proof,
we get $y\le x $ and $X^y\subseteq X$.
\item
This again gives the obvious contradiction:
Let $G$ be $\mathbb{R}$-generic over $V$ and contain $y$,
and let $H_\beta$ be $\mathbf{P}_\beta$-generic over $V[G]$ and contain $p$.
So $M^y[H^y_\beta]$ thinks that
$r=\xi+ z$ for some $\xi\in X^y\subseteq X$
and $z \in Z_\nabla$ (this time, $\dot Z_\nabla$ is evaluated in $M^y[H^y_{\beta}]$),
contradicting~(2).
\qedhere
\end{enumerate}
\end{proof}
\subsection{A factor lemma}\label{sec:factor}
We can restrict $\mathbb{R}$ to any ${\alpha^*}<\om2$
in the obvious way: Conditions are pairs
$x=(M^x,\bar P^x)$ of nice candidates $M^x$ (containing ${\alpha^*}$) and alternating
iterations $\bar P^x$, but now $M^x$ thinks that $\bar P^x$ has length ${\alpha^*}$
(and not $\om2$). We call this variant $\mathbb{R}\mathord\restriction{\alpha^*}$.
Note that all results of Section~\ref{sec:construction} about $\mathbb{R}$ are still true for
$\mathbb{R}\mathord\restriction{\alpha^*}$. In particular, whenever $G\subseteq \mathbb{R}\mathord\restriction\alpha^*$ is generic,
it will define a direct limit (which we call $\mathbf{P}^{\prime*}$), and an
alternating iteration of length $\alpha^*$ (called $\bar \mathbf{P}^*$); again
we will have that $x\in G$ iff $x$ canonically embeds into $\bar
\mathbf{P}^*$.
There is a natural projection map from $\mathbb{R}$ (more exactly: from the dense
subset of those $x$ which satisfy ${\alpha^*}\in M^x$) into $\mathbb{R}\mathord\restriction\alpha^*$,
mapping $x=(M^x,\bar P^x)$ to
$x\mathord\restriction{\alpha^*}\coloneqq (M^x,\bar P^x\mathord\restriction{\alpha^*})$. (It is obvious that this projection is
dense and preserves $\leq$.)
There is also a natural embedding $\varphi$ from $\mathbb{R}\mathord\restriction{\alpha^*}$ to $\mathbb{R}$: We
can just continue an alternating iteration of length ${\alpha^*}$ by appending
trivial forcings.
$\varphi$ is complete: It preserves $\leq$ and $\perp$. (Assume that $z\leq
\varphi(x),\varphi(y)$. Then $z\mathord\restriction{\alpha^*}\leq x,y$.) Also, the projection is a
reduction: If $y\leq x\mathord\restriction{\alpha^*}$ in $\mathbb{R}\mathord\restriction{\alpha^*}$, then let $M^z$ be a model
containing both $x$ and $y$. In $M^z$, we can first construct
an alternating iteration of length $\alpha^*$ over $y$
(using almost FS over $y$, or almost CS ---
this does not matter here). We then continue this iteration $\bar P^z$
using almost FS or almost CS over $x$.
So $x$ and $y$ both embed into $\bar P^z$, hence
$z=(M^z,\bar P^z)\leq x,y$.
So according to the general factor lemma of forcing theory, we know that
$\mathbb{R}$ is forcing equivalent to $\mathbb{R}\mathord\restriction{\alpha^*} * (\mathbb{R}/\mathbb{R}\mathord\restriction{\alpha^*})$,
where $\mathbb{R}/\mathbb{R}\mathord\restriction{\alpha^*}$ is the quotient of $\mathbb{R}$ and $\mathbb{R}\mathord\restriction{\alpha^*}$,
i.e.,
the ($\mathbb{R}\mathord\restriction{\alpha^*}$-name for the)
set of $x\in\mathbb{R}$ which are compatible (in $\mathbb{R}$) with all $\varphi(y)$ for
$y\in G\mathord\restriction{\alpha^*}$ (the generic filter for $\mathbb{R}\mathord\restriction{\alpha^*}$),
or equivalently, the set of $x\in\mathbb{R}$ such that $x\mathord\restriction{\alpha^*}\in
G\mathord\restriction{\alpha^*}$. So Lemma~\ref{lem:weiothowet} (relativized to
$\mathbb{R}\mathord\restriction\alpha^*$) implies:
\proofclaim{eq:oetwji}{$\mathbb{R}/\mathbb{R}\mathord\restriction{\alpha^*}$ is the set of $x\in\mathbb{R}$ that
canonically embed (up to ${\alpha^*}$) into $\mathbf{P}_{\alpha^*}$.}
\begin{Setup}
Fix some $\alpha^*<\om2$ of uncountable cofinality.\footnote{Probably the cofinality is completely irrelevant, but the picture is clearer this way.}
Let $G\mathord\restriction{\alpha^*}$ be
$\mathbb{R}\mathord\restriction{\alpha^*}$-generic over $V$ and work in $V^*\coloneqq V[G\mathord\restriction{\alpha^*}]$.
Set $\bar\mathbf{P}^*=(\mathbf{P}^*_\beta)_{\beta<\alpha^*}$, the generic alternating
iteration added by $\mathbb{R}\mathord\restriction{\alpha^*}$.
Let $\mathbb{R}^*$ be the quotient
$\mathbb{R}/\mathbb{R}\mathord\restriction\alpha^*$.
\end{Setup}
We claim that $\mathbb{R}^*$ satisfies (in $V^*$) all the properties
that we proved in Section~\ref{sec:construction} for $\mathbb{R}$ (in $V$), with
the obvious modifications. In particular:
\begin{enumerate}[(A)$_{\alpha^*}$]
\item $\mathbb{R}^*$ is $ \al2$-cc, since it is
the quotient of an $\al2$-cc forcing.
\item $\mathbb{R}^*$ does not add new reals (and more generally, no new
HCON objects), since it is the quotient of a $\sigma$-closed forcing.\footnote{It is
easy to see that $\mathbb{R}^*$ is even $\sigma$-closed, by ``relativizing'' the
proof for $\mathbb{R}$, but we will not need this.}
\item
Let $G^*$ be $\mathbb{R}^*$-generic over $V^*$. Then $G^*$ is $\mathbb{R}$-generic
over $V$, and therefore
Corollary~\ref{cor:summary} holds for~$G^*$.
(Note that $\mathbf{P}'_\om2$ and then $\mathbf{P}_\om2$ is constructed from~$G^*$.)
Moreover, it is easy to see%
\footnote{%
For $\beta \le \alpha^*$, let
$\mathbf{P}^{\prime*}_\beta$ be the direct limit of $(G\mathord\restriction\alpha^*)\mathord\restriction \beta$
and $\mathbf{P}^{\prime}_\beta$ the direct limit of $G^*\mathord\restriction\beta$.
The function $k_\beta: \mathbf{P}^{\prime*}_\beta\to \mathbf{P}^{\prime}_\beta$
that maps $(x,p)$ to $(\varphi(x),p)$
preserves $\leq$ and $\perp$ and is surjective
modulo $=^*$,
see Fact~\ref{facts:trivial66}(\ref{item:bla3}).
So it is clear that
defining $\bar\mathbf{P}^*\mathord\restriction\beta$ by induction
from $\mathbf{P}^{\prime*}_\beta$ yields the same result as
defining $\bar\mathbf{P}\mathord\restriction\beta$ from $\mathbf{P}_{\beta}'$.
}
that $\bar\mathbf{P}$ starts with $\bar\mathbf{P}^*$.
\item In particular, we get a variant of Lemma~\ref{lem:elemsub}:
The following is forced by ${\mathbb{R}^*}$: Let $N \esm H^{V[G^*]}(\chi^*)$ be
countable, and let $y$ be the ord-collapse of $(N,\bar\mathbf{P})$.
Then $y\in G^*$. Moreover: If $x\in G^*\cap N$, then $y \le x$.
\end{enumerate}
We can use the last item to prove the $\mathbb{R}^*$-version of
Fact~\ref{fact:pathetic1}:
\begin{Cor}\label{cor:slkjte}
In $V^*$, the following holds:
\begin{enumerate}
\item \label{item:fangen.a}
Assume that $x\in\mathbb{R}^*$ forces that
$p\in \mathbf{P}_\om2$. Then there is a $y\leq x$
and a $p^y\in P^y_\om2$ such that $y$ forces
$p^y=^*p$.
\item \label{item:fangen.b}
Assume that $x\in\mathbb{R}^*$ forces that
$\nd r$ is a $\mathbf{P}_{\om2}$-name of a real. Then there is a $y\leq x$
and a $P^y_\om2$-name $\dot r^y$ such that $y$ forces
that $\dot r^y $ and $\nd r $ are equivalent as $\mathbf{P}_{\om2}$-names.
\end{enumerate}
\end{Cor}
\begin{proof} We only prove (\ref{item:fangen.a}), the proof of (\ref{item:fangen.b})
is similar.
Let $G^*$ contain $x$. In $V[G^*]$, pick an elementary
submodel $N$ containing $x,p,\bar\mathbf{P}$ and let $(M^{z},\bar P^{z},p^{z})$
be the ord-collapse of $(N,\bar\mathbf{P},p)$.
Then $z\in G^*$.
This whole situation is forced by some $y\leq z\leq x\in G^*$.
So $y$ and $p^y$ are as required, where
$p^y\in P^y_\om2$ is the canonical image of $p^z$.
\end{proof}
We also get the following analogue of Fact~\ref{fact:esmV}:
\proofclaim{claim:44}{
In $V^*$ we have: Let $x\in \mathbb{R}^*$.
Assume that $\bar P$ is an alternating iteration that extends
$ \bar \mathbf{P}\mathord\restriction \alpha^*$ and
that $x=(M^x,\bar P^x) \in
\mathbb{R}$ canonically embeds into $\bar P$, and that $N \esm H( \chi^*)$
contains $x$ and $\bar P$. Let $y=(M^y, \bar P^y)$ be the ord-collapse of
$(N, \bar P)$. Then $y\in\mathbb{R}^*$ and $y\le x$.
}
We now claim that $\mathbb{R}*\mathbf{P}_\om2$ forces BC+dBC.
We know that $\mathbb{R}$ is forcing equivalent to $\mathbb{R}\mathord\restriction{\alpha^*} *
\mathbb{R}^*$. Obviously we have
\[
\mathbb{R}*\mathbf{P}_\om2=\mathbb{R}\mathord\restriction{\alpha^*}* \mathbb{R}^**\mathbf{P}_{\alpha^*} * \mathbf{P}_{{\alpha^*},\,\om2}
\] (where $\mathbf{P}_{{\alpha^*},\,\om2}$ is the quotient of
$\mathbf{P}_\om2$ and $\mathbf{P}_{\alpha^*}$).
Note that $\mathbf{P}_{\alpha^*}$ is already determined by $\mathbb{R}\mathord\restriction{\alpha^*}$,
so $\mathbb{R}^**\mathbf{P}_{\alpha^*}$ is (forced by $\mathbb{R}\mathord\restriction{\alpha^*}$ to be)
a product $\mathbb{R}^*\times \mathbf{P}_{\alpha^*}=\mathbf{P}_{\alpha^*}\times \mathbb{R}^*$.
But note that this is not the same as $\mathbf{P}_{\alpha^*} * \mathbb{R}^*$, where we
evaluate the definition of~$\mathbb{R}^*$ in the $\mathbf{P}_{\alpha^*}$-extension of
$V[G\mathord\restriction{\alpha^*}]$: We would get new candidates and therefore new conditions
in~$\mathbb{R}^*$
after forcing with~$\mathbf{P}_{\alpha^*}$. In other words, we can
\emph{not} just argue as follows:
\begin{wrongproof}
$\mathbb{R}*\mathbf{P}_\om2$ is the same as
$(\mathbb{R}\mathord\restriction{\alpha^*}* \mathbf{P}_{\alpha^*})* (\mathbb{R}^**\mathbf{P}_{{\alpha^*},\om2})$;
so given an $\mathbb{R}*\mathbf{P}_\om2$-name $X$ of a set of reals of size~$\al1$,
we can choose $\alpha^*$ large enough so that
$X$ is an $(\mathbb{R}\mathord\restriction{\alpha^*}* \mathbf{P}_{\alpha^*})$-name. Then,
working in the $(\mathbb{R}\mathord\restriction{\alpha^*}* \mathbf{P}_{\alpha^*})$-extension,
we just apply Lemmas~\ref{lem:6.1} and~\ref{lem:6.2}.
\end{wrongproof}
So what do we do instead?
Assume that $\nd X=\{\nd\xi_i:\, i\in\om1\}$ is an $\mathbb{R}*\mathbf{P}_{\om2}$-name for
a set of reals of size~$\aleph_1$. So there is a $\beta<\om2$ such that
$\nd X$ is added by $\mathbb{R}*\mathbf{P}_\beta$.
In the $\mathbb{R}$-extension, $\mathbf{P}_{\beta}$ is ccc, therefore we can assume
that each $\nd \xi_i$ is a system of countably many countable
antichains $\n A^m_i$ of~$\mathbf{P}_\beta$,
together with functions $\n f^m_i:\n A^m_i\to\{0,1\}$.
For the following argument,
we prefer to work with the equivalent $\mathbf{P}_\beta'$ instead of~$\mathbf{P}_\beta$.
We can assume that each of the sequences $B_i\coloneqq (\n
A^m_i,\n f^m_i)_{m\in\omega}$ is an element of~$V$ (since $\mathbf{P}'_\beta$ is a
subset of~$V$ and since $\mathbb{R}$ is $\sigma$-closed).
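To recall how such a name is evaluated (as in the proof of
Lemma~\ref{lem:pathetic2}), it is forced that
\[
\nd \xi_i(m)=\n f^m_i(a)\quad\text{for the unique } a\in \n A^m_i\cap H'_\beta.
\]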
So each $B_i$ is decided by a maximal antichain~$Z_i$ of~$\mathbb{R}$.
Since $\mathbb{R}$ is $\al2$-cc, these $\al1$ many antichains all
are contained in some $\mathbb{R}\mathord\restriction {\alpha^*}$ with ${\alpha^*}\geq \beta$.
So in the $\mathbb{R}\mathord\restriction {\alpha^*}$-extension $V^*$ we
have the following situation:
Each $\xi_i$ is a very ``absolute\footnote{or: ``nice'' in the sense of~\cite[5.11]{MR597342}}''
$\mathbb{R}^* * \mathbf{P}_{\alpha^*}$-name
(or equivalently, $\mathbb{R}^* \times \mathbf{P}_{\alpha^*}$-name),
in fact they are already determined by antichains that are in $\mathbf{P}_{\alpha^*}$
and do not depend on $\mathbb{R}^*$. So we can interpret them as
$\mathbf{P}_{\alpha^*}$-names.
Note that:
\proofclaim{claim:xi.i}{ The $\xi_i$ are forced
(by $\mathbb{R}^**\mathbf{P}_{\alpha^*}$)
to be pairwise different,
and therefore already by $\mathbf{P}_{\alpha^*}$.}
Now we are finally ready to prove that $\mathbb{R}* \mathbf{P}_{\om2}$ forces that every uncountable $X$
is neither smz nor sm. It is enough to show that for every name $\nd X$
of an uncountable set of reals of size $\al1$ the forcing $\mathbb{R}* \mathbf{P}_{\om2}$
forces that $\nd X$ is neither smz nor sm. For the rest of the proof
we fix such a name $\nd X$, the corresponding $\nd \xi_i$'s (for $i\in \omega_1$),
and the appropriate $\alpha^*$ as above. From now on, we
work in the $\mathbb{R}\mathord\restriction\alpha^*$-extension~$V^*$.
So we have to show that $\mathbb{R}^* * \mathbf{P}_{\om2}$ forces that $\nd X$
is neither smz nor sm.
After all our preparations, we can just repeat the proofs of BC (Lemma~\ref{lem:6.1})
and dBC (Lemma~\ref{lem:6.2})
of Section~\ref{sec:groundmodel}, with the
following modifications. The modifications are the same for both proofs;
for better readability we describe the results of the change only for the proof of dBC.
\begin{enumerate}
\item Change:
Instead of an arbitrary ultralaver position $\alpha<\om2$, we obviously
have to choose $\alpha\geq \alpha^*$.
\\ For the dBC: We choose $\alpha\ge \alpha^*$ an arbitrary ultralaver position.
The Janus forcing $\mathbf{Q}_{\alpha+1}$
adds a (canonically defined code for a)
null set $\dot Z_\nabla$.
\item
Change: No change here.
(Of course we now have an $\mathbb{R}^**\mathbf{P}_{\alpha^*}$-name $\nd X$ instead
of a ground model set.)\\
For the dBC:
It is enough to show that
$\nd X+\dot Z_\nabla=2^\omega$ in the $\mathbb{R}^**\mathbf{P}_{\om2}$-extension of~$V^*$,
or equivalently, in every
$\mathbb{R}^**\mathbf{P}_{\beta}$-extension ($\alpha<\beta<\om2$).
So assume towards a contradiction that there is a $\beta > \alpha$
and an $\mathbb{R}^**\mathbf{P}_{\beta}$-name
$\nd r$ of a real such that some
$(x,p)\in \mathbb{R}^**\mathbf{P}_\om2$ forces that
$\nd r\notin \nd X + \dot Z_\nabla$.
\item Change: no change. (But we use Corollary~\ref{cor:slkjte}
instead of Lemma~\ref{lem:pathetic2}.)\\
For dBC:
Using Corollary~\ref{cor:slkjte}(\ref{item:fangen.b}), without loss of generality $x$ forces $p^x=^* p $
and there is a $P^x_\beta$-name $\dot r^x$ in $M^x$ such that $\dot r^x=\nd r$ is forced.
\item Change:
The iteration obviously has to start with the $\mathbb{R}\mathord\restriction\alpha^*$-generic iteration
$\bar\mathbf{P}^*$ (which is ccc),
the rest is the same.
\\ For dBC:
In $V^*$ we construct an iteration $\bar P$ in the following way:
\begin{enumerate}
\item[(a1)]
Up to $\alpha^*$, we use the iteration $\bar\mathbf{P}^* $ (which
already lives in our current universe $V^*$). As explained
above in the paragraph preceding~\eqref{claim:xi.i},
$\nd X$ can be interpreted as a $\mathbf{P}_{\alpha^*}$-name $\dot X$, and by
\eqref{claim:xi.i}, $\dot X $ is forced to be uncountable.
\item[(a2)]
We continue the iteration from $\alpha^*$ to $\alpha$ in a way that
embeds $x$ and such that $P_\alpha$ is proper.
So $P_{\alpha}$ will force
that $\dot X$ is still uncountable.
\item[(b1)] Let $Q_\alpha$ be any ultralaver forcing (over $Q_\alpha^x$). Then
$Q_\alpha$ forces that $\dot X$ is not thin.
\item[(b2)] Let $Q_{\alpha+1}$ be a countable Janus forcing.
So $Q_{\alpha+1}$ forces $\dot X+\dot Z_\nabla=2^\omega$.
\item[(c)] We continue the iteration in a $\sigma$-centered way.
I.e., we use an almost FS iteration over $x$ of
ultralaver forcings and countable Janus forcings,
using trivial $Q_\zeta$ for all $\zeta\notin M^x$.
\item[(d)] So $P_\beta$ still
forces that $\dot X+\dot Z_\nabla=2^\omega$, and in particular that $\dot
r^x\in \dot X+\dot Z_\nabla$.
\end{enumerate}
We now pick (in $V^*$) a countable $N\esm H(\chi^*)$ containing
everything and ord-collapse $(N,\bar P)$ to $y\leq x$, by \eqref{claim:44}.
The HCON object $y$ is of course in $V$ (and even in $\mathbb{R}$), but we can say more: Since the iteration $\bar P$ starts with the $(\mathbb{R}\mathord\restriction\alpha^*)$-generic iteration $\bar\mathbf{P}^*$, the condition $y$ will be in the quotient forcing $\mathbb{R}^*$.\\
Set $\dot X^y\coloneqq \dot X\cap M^y$ (which is
the image of $\dot X$ under the collapse, since we view $\dot X$ as a set of
HCON-names).
By elementarity, $M^y$ thinks that (a)--(d) above holds for $\bar P^y$
and that $\dot X^y$ is forced to be uncountable.
Note that $\dot X^y\subseteq \dot X$ in the following sense:
Whenever $G^**H$ is $\mathbb{R}^**\mathbf{P}_{\om2}$-generic over $V^*$, and $y\in G^*$, then the evaluation of $\dot X^y $ in $M^y[H^y]$ is a subset of the evaluation of $\dot X$ in $V^*[G^**H]$.
\item Change: No change here.\\
For dBC: We get our desired contradiction as follows:\\
Let $G^*$ be $\mathbb{R}^*$-generic over $V^*$ and contain $y$.
Let $H_\beta$ be $\mathbf{P}_\beta$-generic over $V^*[G^*]$ and contain $p$.
So $M^y[H^y_\beta]$ thinks that
$r=\xi+ z$ for some $\xi\in \dot X^y\subseteq \dot X$ (evaluated as explained above)
and\footnote{Note
that we get the same Borel code, whether we evaluate $\dot Z_\nabla$ in $M^y[H^y_{\beta}]$ or in $V^*[G^**H_\beta]$. Accordingly,
the actual Borel set of reals coded by $Z_\nabla$ in the smaller universe
is a subset of the corresponding Borel set in the larger universe.}
$z \in Z_\nabla$, contradicting~(2).
\end{enumerate}
\section{A word on variants of the definitions}\label{sec:alternativedefs}
The following is not needed for understanding the paper; we just
briefly comment on alternative ways in which some notions could be defined.
\subsection{Regarding \qemph{alternating iterations}} \label{sec:7a}
We call the set of $\alpha\in\om2$ such that $Q_\alpha$ is (forced to be) nontrivial
the \qemph{true domain} of $\bar P$ (we use this notation in this
remark only). Obviously $\bar P$ is naturally isomorphic
to an iteration whose length is the order type of its true domain.
In Definitions~\ref{def:alternating} and~\ref{def:prep}, we could have
imposed
the following additional requirements. All these variants lead
to equivalent forcing notions.
\begin{enumerate}
\item $M^x$ is (an ord-collapse of) an
{\em elementary} submodel of $H(\chi^*)$.
\\
This is equivalent, as conditions coming from
elementary submodels are dense in our $\mathbb{R}$, by
Fact~\ref{fact:esmV}.
\\
While this definition looks much simpler and therefore
nicer (we could replace ord-transitive models
by the better understood elementary models), it would not make
things easier and just ``hides'' the point of the
construction: For example, we use models $M^x$ that
are (an ord-collapse of) an
elementary submodel of~$H^{V'}(\chi^*)$ for some forcing extension
$V'$ of~$V$.
\item Require that ($M^x$ thinks that) the true domain of $\bar P^x$
is $\om2$.
\\
This is equivalent for the same reason as (1) (and this
requirement is compatible with (1)).
\\
This definition would allow to drop the ``trivial'' option
from the definition. The whole proof would still work with
minor modifications --- in particular, because of the following
fact:
\footnote{We are grateful to Stefan
Geschke and Andreas Blass for pointing out
this fact. The only references we are aware
of are \cite[proof of Lemma 2]{MR1179593} and \cite{MO84129}.}
\proofclaim{eq:blass}{The finite support iteration of
$\sigma$-centered forcing notions of
length $<(2^{\aleph_0})^+$ is again
$\sigma$-centered.}
We chose our version for two reasons: first, it seems
more flexible, and second, we were initially not aware of
\eqref{eq:blass}.
\item Alternatively, require that ($M^x$ thinks that) the true
domain of $\bar P^x$ is countable.
\\
Again, equivalence can be seen as in~(1); as before, (3) is compatible with~(1)
but obviously not with~(2).
\\
This requirement would not make the definition easier,
so there is no reason to adopt it. It would have the slight
inconvenience
that instead of using ord-collapses as in Fact~\ref{fact:esmV},
we would have to put another model on top to make the iteration
countable. Also, it would have the (purely aesthetic) disadvantage that
the generic iteration itself does not satisfy this
requirement.
\item \label{item:nonproper}
Also, we could have dropped the requirement that the iteration
is proper. It is never directly used, and ``densely''
$\bar P$ is proper anyway. (E.g., in Lemma~\ref{lem:6.1}(4)(a),
we would just construct $\bar P$ up to $\alpha$ to
be proper or even ccc, so that $X$ remains uncountable.)
\end{enumerate}
\subsection{Regarding \qemph{almost CS iterations and separative iterands}}
\label{sec:7b}
Recall that in Definition~\ref{partial_CS} we
required that each iterand $Q_\alpha$ in a partial CS iteration
is separative. This implies the property (actually: the three equivalent
properties) from Fact~\ref{fact:suitable.equivalent}.
Let us call this property \qemph{suitability} for now.
Suitability is a property of the limit
$P_\varepsilon$ of $\bar P$. Suitability always holds for finite support
iterations and for countable support iterations. However, if we do not assume
that each $Q_\alpha$ is separative, then suitability may fail
for partial CS iterations.
We could drop the separativity assumption, and instead
add suitability as an additional natural requirement to the definition of partial CS limit.
The disadvantage of this approach is that we would have to check in all
constructions of partial CS iterations that suitability is
indeed satisfied
(which we found to be straightforward but rather cumbersome, in
particular in the case of the almost CS iteration).
In contrast, the disadvantage of assuming that $Q_\alpha$ is separative
is minimal and purely cosmetic: It is well known that every quasiorder $Q$
can be made into a separative one
which is forcing equivalent to the original~$Q$ (e.g., by just
redefining the order to be $\leq^*_Q$).
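For reference, here is a sketch of the standard separative modification we
have in mind (for an arbitrary quasiorder $(Q,\le)$):
\[
q\le^*_Q p \quad\Longleftrightarrow\quad (\forall q'\le q)\ q'\parallel p.
\]
Then $(Q,\le^*_Q)$ is separative, and since $\le^*_Q$ induces the same
compatibility relation as $\le$, the two quasiorders are forcing equivalent.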
\subsection{Regarding \qemph{preservation of random and quick sequences}}
Recall Definition~\ref{def:locally.random} of local preservation of random reals
and Lemma~\ref{lem:4.28}.
In some respect the dense sets $D_n$ are unnecessary. For
ultralaver forcing $\mathbb{L}_{\bar D}$, the notion of a ``quick'' sequence
refers to the sets $D_n$ of conditions with stem of length at
least~$n$.
We could define a new partial order on $\mathbb{L}_{\bar D}$ as
follows:
\[ q\le' p \ \Leftrightarrow \ (q=p) \ \text{or} \ (q\le p \text{ and the
stem of $q$ is strictly longer than the stem of~$p$}).
\]
Then $(\mathbb{L}_{\bar D}, \le)$ and
$(\mathbb{L}_{\bar D}, \le')$ are forcing equivalent, and any
$\le'$-interpretation of a new real will automatically be quick.
Note that $(\mathbb{L}_{\bar D}, \le')$ is, however, no longer separative.
Therefore we chose not to take this approach, since losing separativity
causes technical inconvenience, as described in \ref{sec:7b}.
\bibliographystyle{alpha}
The fast rotating regime for a Bose Einstein condensate in a harmonic trap,
observed experimentally in \cite{Ketterle1,Boulder04,Boulder04bis},
displays
analogies with type II superconductors behaviors and Quantum Hall
Physics. However, some different features have emerged and are of
interest, in particular due to the existence of a potential
trapping the atoms.
A quantum fluid described by a macroscopic wave function rotates
through the nucleation of quantized vortices \cite{Donnelly91}.
For a condensate confined in a harmonic potential with cylindrical
symmetry around the rotation axis, a limiting regime occurs when
the rotational frequency $\Omega$ approaches the transverse trapping
frequency: the centrifugal force nearly balances the trapping
force so that the size of the condensate increases and the number
of vortices diverges. The visible vortices arrange themselves in a
triangular Abrikosov lattice. The system is strongly confined
along the axis of rotation, and it is customary to restrict to a
two-dimensional analysis in the $x$-$y$ plane. We write
$z=x+iy$.
The Hamiltonian is
similar to that of a charged particle in a magnetic field: for
rotational angular velocities just below the transverse trap
frequency, the wave function of the condensate can be described
using only components in the Lowest Landau Level
(LLL):\begin{equation}\label{Psi}\Psi(z)=\Phi_0 \prod_{i=1}^N (z-z_i)
e^{-|z|^2/2}\end{equation}
where $\Phi_0$ is a normalization factor and the $z_i$ are the
location of the vortices.
In rescaled units, the reduced energy in the LLL is
\cite{H,WBP,ABD} \begin{equation}\label{elll}{\cal E}_{LLL}(\Psi)=\int \Bigl [
(1-\Omega)|z|^2|\Psi|^2+\frac G2 |\Psi|^4\Bigr ] d^2r\end{equation} under
$\int\! d^2r|\Psi|^2=1$, where $\Omega$ is the rotational velocity,
the transverse trap frequency is scaled to 1, and $G$ models the
interaction term: $G=Ng/(d\sqrt{2\pi})$, where $g$ is the two body
interaction strength and $d$ is the characteristic size of the
harmonic oscillator in the direction of the rotation.
In the
absence of a confining potential, the problem is reduced to the
one studied by Abrikosov \cite{A} for a type II superconductor and
the minimizer is a wave function with a uniform triangular lattice
\cite{K}; its modulus vanishes once in each cell and is periodic
over the lattice. The presence of the confining potential is at
the origin of a slow varying density profile, which can be
described as the mean of the modulus of the wave function on many
cells.
Ho \cite{H}
predicted that for a uniform lattice, the smoothed density profile
is a Gaussian. Various contributions \cite{WBP,CKR,ABD} then
pointed out that the energy can be lowered if this smoothed
density distribution is an inverted parabola rather than a
Gaussian. This type of density profile can be achieved either by
taking wave functions with a uniform lattice but with components
outside the LLL \cite{WBP} or by remaining in the LLL and
distorting the lattice. The study of the distortion has been the
focus of recent papers \cite{CKR,ABD,Macdo} and raises the issue
of the optimal vortex distribution. In the LLL description, there
are two kinds of vortices: the ``visible vortices", which lie in
the region where the wave function is significant (for instance
inside the Thomas Fermi region in the case of the inverted
parabola), and the ``invisible vortices", which are in the region
where the modulus of the wave function is small.
The visible vortices form a regular triangular lattice, while the
invisible ones seem to have a strongly distorted arrangement, whose
distribution is essential to recreate the inverted parabola
profile within the LLL approximation.
The latter are not within reach of experimental observation, but
can be computed numerically \cite{ABD,Macdo}. An important
theoretical question is the distribution of these invisible
vortices, their number, or an estimate of how many of them are
necessary to approximate the inverted parabola properly inside the
LLL.
Our main result is to prove that any minimizer of the energy
in the LLL must have an infinite number of vortices.
The main tool that we use is an explicit expression of the
projector onto the LLL. This projector also allows us to
approximate any slowly varying density profile by LLL wave
functions.
{\bf Projection onto the LLL and infinite number of zeroes} We
define a small parameter $\varepsilon=\sqrt{1-\Omega}$ and make the change of
variables $\psi(z)=\sqrt\varepsilon\Psi(\sqrt \varepsilon z)$, so that the
condensate is of size of order 1 and the lattice spacing is
expected to be of order $\sqrt\varepsilon$. The energy gets rescaled as
${\cal E}_{LLL}(\Psi)=\varepsilon E_{LLL}(\psi)$ where
\begin{equation}\label{ELLL}E_{LLL}(\psi)=\int \Bigl [ |z|^2|\psi|^2+\frac G2
|\psi|^4\Bigr ]d^2r.\end{equation} Moreover, $\psi$ belongs to the LLL so
that $f(z)=\psi(z)e^{|z|^2/2\varepsilon}$ is a holomorphic function and
thus belongs to the so called Fock-Bargman space
\begin{equation}\label{fock}{\cal F}=\left\{ f \hbox{ is holomorphic }, \ \int
|f|^2e^{-|z|^2/\varepsilon}d^2r<\infty\right \}.\end{equation} Let us point out that
such a function $f$ is not only determined by its zeroes and
normalization factor as in (\ref{Psi}), but also by a globally
defined phase, which is a holomorphic function.
The space ${\cal F}$ is a Hilbert space endowed with the scalar
product $\left <f,g\right> =\int f(z)\overline{g(z)}e^{-|z|^2/\varepsilon}d^2r.$
The point of considering this space is that the projection of a
general function $g(z,\bar z)$ onto ${\cal F}$ is explicit; it is
called the Szego projector \cite{M,F}:
\begin{equation}\label{pi}\Pi(g)=\frac
1{\pi\varepsilon}\int
e^{\frac{z\overline{z'}}{\varepsilon}}e^{-\frac{|z'|^{2}}{\varepsilon}}g(z',\bar{z'})d^2r'.\end{equation}
If $g$ is a holomorphic function, then integration by parts yields $\Pi(g)=g$.
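As a quick check of this reproducing property (a standard computation, recorded here for later use), one can evaluate $\Pi$ on a monomial $g(z')=z'^n$: expanding the kernel in powers of $z\overline{z'}/\varepsilon$ and using the orthogonality of monomials in polar coordinates,
$$\Pi(z'^n)(z)=\frac 1{\pi\varepsilon}\sum_{k\geq 0}\frac{z^k}{\varepsilon^k k!}\int \overline{z'}^{\,k}\,z'^n\,e^{-|z'|^2/\varepsilon}d^2r'=\frac 1{\pi\varepsilon}\,\frac{z^n}{\varepsilon^n n!}\,\pi\varepsilon^{n+1}n!=z^n,$$
since only the term $k=n$ survives and $\int|z'|^{2n}e^{-|z'|^2/\varepsilon}d^2r'=\pi\varepsilon^{n+1}n!$.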
If one considers the minimization of $E_{LLL}(\psi)$ without the
holomorphic constraint on $f$, then the minimization process
yields that
$|z|^2+G|\psi|^2-\mu=0$, where $\mu$ is the chemical potential due
to the constraint $\int|\psi|^2=1$, so that
$|\psi|^2$ is the inverted parabola \begin{equation}\label{ip}
\left|\psi\right|^{2}(z)=\frac{2}{\pi
R^{2}}\left (1-\frac{|z|^{2}}{R^{2}}\right )\!1_{\left\{|z|\leq
R\right\}},
R=\sqrt{\mu}=\left(\frac{2G}{\pi}
\right)^{1/4}\!\!\!\!.\end{equation} The restriction to the LLL prevents
one from achieving this specific inverted parabola,
since the corresponding $\psi e^{|z|^2/2\varepsilon}$ cannot be a holomorphic
function.
The advantage of the explicit formulation of the projector
$\Pi$ is that it allows us to derive an equation satisfied by
$\psi$ or rather $f$ when minimizing the energy in the LLL. A
proper distribution of zeroes can approximate an inverted parabola
profile but is going to modify the radius $R$ by a coefficient
$b^{1/4}$ coming from the contribution of the vortex lattice to
the energy.
\begin{theo}\label{el}If $f\in{\cal F}$ minimizes \begin{equation}
E(f)=\int \Bigl [ |z|^2|f|^2e^{-|z|^2/\varepsilon}+\frac G2
|f|^4e^{-2|z|^2/\varepsilon}\Bigr ]d^2r\end{equation} under $\int
|f|^2e^{-|z|^2/\varepsilon}d^2r=1,$ then $f$ is a solution of the
following equation\begin{equation}\label{eqf}\Pi\Bigl (
(|z|^2+G|f|^2e^{-|z|^2/\varepsilon}-\mu)f\Bigr )=0\end{equation} where $\mu$ is the
chemical potential coming from the mass constraint.
\end{theo} Note that given the relation between $f$ and $\psi$,
$E(f)$ and $E_{LLL}(\psi)$ are identical. Equation (\ref{eqf})
comes from the fact that if $f$ minimizes $E$, then for any $g$ in
${\cal F}$ with $\left<f,g\right>=0$ we have
\begin{equation}\label{fgeq}\int \Bigl [ |z|^2\bar
g f e^{-|z|^2/\varepsilon}+\frac G2 |f|^2\bar g fe^{-2|z|^2/\varepsilon}\Bigr
]d^2r=0,\end{equation} and we use the scalar product in ${\cal F}$ and the
definition of the projector to conclude.
The equation for the minimizer allows us to derive that this
minimizer cannot be a polynomial:
\begin{theo}If $f\in{\cal F}$ minimizes $E$, then $f$ has an infinite
number of zeroes.\label{nopol}\end{theo}
We are going to argue by contradiction and assume that $f$ is a
polynomial.
1. The proof first requires another formulation of
(\ref{eqf}). The projector $\Pi$ has many properties \cite{M,ABN}:
in particular, one can check, using an integration by parts in the
expression of $\Pi$, that $\Pi(|z|^{2}f)=z\varepsilon\partial_{z}f+\varepsilon
f$. As for the middle term in the equation, one can compute that
if $f$ is a polynomial,
$$\Pi\left(e^{-\frac{|z|^{2}}{\varepsilon}}\left|f\right|^{2} f\right)
=\Pi\left(\overline{f(z)}\,e^{-\frac{|z|^{2}}{\varepsilon}}f^{2}\right)
=\bar{f}(\varepsilon\partial_{z})\,\Pi\left(e^{-\frac{|z|^{2}}{\varepsilon}}f^{2}\right).$$
A simple change of variable yields
$$\Pi\left(e^{-\frac{|z|^{2}}{\varepsilon}}f^{2}\right)(z)=\frac 1{\pi\varepsilon}\int
e^{\frac{z\overline{z'}-2|z'|^{2}}{\varepsilon}}f^{2}(z')d^2r'
=\frac 12\,\Pi\left(f^2 \Bigl(\frac {\cdot}{\sqrt 2}\Bigr)\right )\Bigl(\frac z
{\sqrt{2}}\Bigr)=\frac 12 f^2\Bigl(\frac z{2}\Bigr).$$
Thus, we find the
following simplification of (\ref{eqf}):
\begin{equation}\label{eqf2}z\varepsilon\partial_{z}f + \frac G2
\bar{f}(\varepsilon\partial_{z})[f^2(z/2)]-(\mu-\varepsilon)
f=0.\end{equation}
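Both identities rest on the same elementary kernel computation, which we sketch since it is used repeatedly: differentiating the kernel of (\ref{pi}) gives $\varepsilon\partial_z e^{z\overline{z'}/\varepsilon}=\overline{z'}\,e^{z\overline{z'}/\varepsilon}$, hence
$$\Pi(\bar z g)=\varepsilon\partial_z\Pi(g)$$
for any admissible $g$. Applied to $g=zf$ with $f$ holomorphic (so that $\Pi(zf)=zf$), this yields $\Pi(|z|^2f)=\varepsilon\partial_z(zf)=z\varepsilon\partial_z f+\varepsilon f$; iterating it coefficient by coefficient on the polynomial $\overline{f(z)}$ produces the operator $\bar f(\varepsilon\partial_z)$ above.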
2. Now we assume that $f$ is polynomial of degree $n$ and a
solution of (\ref{eqf2}). We are going to show that there is a
contradiction due to the term of highest degree in the equation.
Indeed, if $f$ is a polynomial of degree $n$, then
$(\varepsilon\partial_{z})^k[f^2(z/2)]$ is of degree $2n-k$. But (\ref{eqf2})
implies that
$\bar{f}(\varepsilon\partial_{z})[f^2(z/2)]$ is of degree $n$, hence $f$ must
be equal to $cz^n$. This is indeed a solution of (\ref{eqf2}) if
$n\varepsilon+G|c|^2\varepsilon^n(2n)!/(2^{2n+1}n!)-\mu+\varepsilon=0$. Using
that $\int |f|^2 e^{-|z|^2/\varepsilon}
=1$, we find that $|c|^2\pi\varepsilon^{n+1}n!=1$.
The Stirling formula provides the existence of a constant $c_0$
such that $n\varepsilon + {c_0 G}/({2\pi \varepsilon \sqrt n}) \leq
\mu$.
For the minimizer, $\mu$ is of the same order
as the energy, thus of order 1, so that if
$\varepsilon$ is too small, no $n$ can satisfy this last inequality; hence
the minimizer is not a polynomial. A
similar argument can be used to check that, if $f$ is more
generally a holomorphic function in ${\cal F}$,
then it cannot have a finite
number of zeroes. The detailed proof will be given in
\cite{ABN}.
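To make the polynomial case quantitative (a short computation under the normalization above): inserting $|c|^2=(\pi\varepsilon^{n+1}n!)^{-1}$ into the nonlinear term gives
$$\frac{G}{2\pi\varepsilon}\,\frac{(2n)!}{4^n(n!)^2}\sim\frac{G}{2\pi\varepsilon\sqrt{\pi n}}$$
by Stirling, and minimizing over $n$,
$$\min_{n}\Bigl(n\varepsilon+\frac{c_0G}{2\pi\varepsilon\sqrt n}\Bigr)\sim C(G)\,\varepsilon^{-1/3}\longrightarrow+\infty \quad (\varepsilon\to 0),$$
which is incompatible with $\mu$ of order 1.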
{\bf Approximation of a slowly varying profile by the LLL} {\em The
Abrikosov solution} The Abrikosov problem \cite{A} consists in
minimizing the ratio $\left < |u|^4\right >/\left < |u|^2\right>^2$
over periodic functions, where $\left < \cdot \right>$ denotes the
average value over a cell, for functions $u$ obtained as limits of
LLL functions. The minimum is achieved for
$u=u_\varepsilon(z,e^{2i\pi/3})$ where \cite{Tk}
\begin{equation}
\label{eq.Abriko}u_\varepsilon(z,\tau)=e^{-{|z|^{2}/2\varepsilon}}f_\varepsilon(z,\tau),\
f_\varepsilon(z,\tau)=e^{ z^{2}/2\varepsilon}\Theta(\sqrt{\frac{\tau_{I}}{\pi
\varepsilon}}z, \tau)\end{equation}
and for any complex number $\tau=\tau_R+i\tau_I$,
\begin{equation}\label{theta}
\Theta(v,\tau)=\frac{1}{i}\sum_{n=-\infty}^{+\infty}(-1)^{n}e^{i\pi\tau(n+1/2)^{2}}
e^{(2n+1)\pi iv}. \end{equation} The $\Theta$ function has the following
properties
\begin{equation}\label{thetaprop}\Theta(v+k+l\tau,\tau)=(-1)^{k+l}e^{-2i\pi
lv} e^{-i\pi l\tau}\Theta(v,\tau)\end{equation} so that $|u_\varepsilon(z,\tau)|$ is
periodic over the lattice $\sqrt{\frac{\pi\varepsilon}{\tau_I}}{\bf Z}\oplus
\sqrt{\frac{\pi\varepsilon}{\tau_I}}{\bf Z}\tau$, and vanishes at each point
of the lattice. Without loss of generality, one can restrict
$\tau$ to vary in $|\tau|\geq 1$, $-1/2\leq \tau_R<1/2$: this is
equivalent to requiring that the smallest period for $\Theta$ is 1
and lies along the $x$ axis (see \cite{KC}); any lattice in the
plane can be obtained from one of these by similarity.
For any $\tau$, $f_\varepsilon$ given by (\ref{eq.Abriko}) is a solution
of \begin{equation}\label{eqfabri}\Pi(|f_\varepsilon|^2e^{-|z|^2/\varepsilon}f_\varepsilon)=\lambda_\tau
f_\varepsilon,\hbox{ with } \lambda_\tau=\left < |u_\varepsilon|^2\right >
b(\tau),\end{equation} and \begin{equation}\label{btau}b(\tau)=\frac {\left <
|u_\varepsilon|^4\right >}{\left < |u_\varepsilon|^2\right>^2}=\sum_{k,l\in {\bf Z}} e^{-\pi |k\tau -l|^2/\tau_I}.\end{equation}
This expression can be obtained using arguments in \cite{T}.
The minimal value of $b(\tau)\sim 1.16$ is achieved for
$\tau=e^{2i\pi/3}$, that is for the triangular lattice \cite{K}:
in \cite{K}, it is argued that one can restrict to $\tau_R=-1/2$,
and vary $\tau_I$ in $(1/2,\sqrt3/2)$. Accepting this restriction, they
compute the variations of $b$ which depends on a single parameter
and is indeed minimal for the triangular lattice. In \cite{ABN},
we prove that this restriction is rigorous using the description
of these lattices by varying $\tau$ for $|\tau|=1$ and $\tau_R\in
(-1/2,0)$.
If one compares (\ref{eqfabri}) and (\ref{eqf}), one notices that
they differ only by the term $\Pi(|z|^2f)=\varepsilon z\partial_z f+\varepsilon
f$, which is negligible on the scale of the lattice, but plays a role
in the shape of the density profile.
{\em The role of the confining potential} A natural candidate to
approximate any slowly varying profile $\alpha (z,\bar z)$ is to take
$\alpha(z,\bar z)u_\varepsilon(z,\tau)$, where $u_\varepsilon$ is the periodic
function defined in (\ref{eq.Abriko}). Of course, such a function
is not in the LLL, but can be well approximated in the LLL by
$f^\alpha e^{-{|z|^{2}/2\varepsilon}}$ where $f^\alpha=\Pi(\alpha f_\varepsilon)$, $\Pi$ is
the projector onto the LLL (\ref{pi}) and $f_\varepsilon$ comes from
(\ref{eq.Abriko}).
Estimating the energy of $f^\alpha$ yields
$$E(f^\alpha)-\int \Bigl[
|z|^2|\alpha|^2 \left < |u_\varepsilon|^2\right>
+\frac {Gb(\tau)}2 |\alpha|^4 \left < |u_\varepsilon|^2\right >^2 \Bigr]d^2r
\sim C\varepsilon^{1/4}.$$
This computation uses the pseudodifferential calculus for $\Pi$ \cite{ABN} and
the fact that $u_\varepsilon$ and $\alpha$ do not vary on the same
scale, so that the integrals can be decoupled. The contribution of $u_\varepsilon$
to the energy is through the coefficient $b(\tau)$, which is minimal
for $\tau=e^{2i\pi/3}$.
Using pseudodifferential calculus, one can show \cite{ABN} that,
when $\varepsilon$ is small, $f^\alpha$ is
very close to $\alpha u_\varepsilon$: the error is at most of order
$\varepsilon^{1/4}$ if $\alpha$ is not more singular than an inverted
parabola.
In particular, when $\alpha$ is an inverted parabola,
this implies that in the
Thomas Fermi region, the distribution of visible vortices is almost that
of the triangular lattice since $\alpha u_\varepsilon$ is a good
approximation. Outside the support of the inverted parabola,
where $f^\alpha$
is very small,
one can check that the density of the distribution of zeroes of
$f^\alpha$ decreases like $1/|z|$ for large $|z|$. Contrary to what
was explained in \cite{WBP,S}, it is not a small distortion of
the lattice which results in large changes in the density
distribution, but a very specific and far from uniform
distribution of the invisible
vortices (outside the Thomas Fermi region)
which allows one to approximate an inverted parabola.
The special shape of the inverted parabola comes out if one
wants to approximate the equation of the minimizer of the energy:
for any $\lambda$, we can prove that
\begin{align}\nonumber&\Pi\Bigl (
(|z|^2+G|f^\alpha|^2e^{-|z|^2/\varepsilon}-\lambda)f^\alpha\Bigr
)+C\varepsilon^{1/4}\\&\sim\Pi\Bigl ( (|z|^2+Gb(\tau)\left <
|u_\varepsilon|^2\right
> |\alpha|^2-\lambda)\alpha f_\varepsilon\Bigr )\label{simp}\end{align}
where $C$ only depends on bounds on $\alpha$. In other words, in the
equation for $f^\alpha$, one can separate in the term
$|f^\alpha|^2e^{-|z|^2/\varepsilon}$
the contributions due to the lattice and to the profile. The
right hand side of
(\ref{simp}) is zero if $\alpha$ is the
inverted parabola
$$ \alpha(z)= \sqrt{\frac{2}{\pi
R_0^{2}\left < |u_\varepsilon|^2\right
>}\Bigl (1-\frac{|z|^{2}}{R_0^{2}}\Bigr)},\
R_0=\left(\frac{2Gb(\tau)}{\pi}
\right)^{1/4}$$ and $\lambda=R_0^2$, so that
$f^\alpha$ is almost a solution of (\ref{eqf}), up to an error in $\varepsilon^{1/4}$.
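Indeed, with this choice the bracket on the right hand side of (\ref{simp}) vanishes on the support of $\alpha$:
$$|z|^2+Gb(\tau)\left< |u_\varepsilon|^2\right> |\alpha|^2-\lambda=|z|^2+\frac{2Gb(\tau)}{\pi R_0^2}\Bigl(1-\frac{|z|^2}{R_0^2}\Bigr)-R_0^2=0,$$
using $R_0^4=2Gb(\tau)/\pi$.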
{\em Variations of the lattice} This approach can be used to study
the variations in energy
due to deformations of the lattice. The triangular lattice,
corresponding to $\tau^1=e^{2i\pi/3}$, is such that the Hessian of
$b(\tau)$ is isotropic ($\sim 0.68 Id$).
Two
lattices
close to each other can be described by two close complex numbers
$\tau^1$ and
$\tau^2$; the difference in
energy between $E(f^\alpha(.,\tau^1))$ and $E(f^\alpha(.,\tau^2))$ is
at leading order
$$\frac G4 \frac{\partial^2 b}{\partial \tau_R^2}
|\tau^1-\tau^2|^2\int |\alpha|^4 \left < |u_\varepsilon|^2\right >^2d^2r\sim
\frac {0.68G}{3\pi R_0^2}|\tau^1-\tau^2|^2$$
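The numerical prefactor follows from an elementary integral over the inverted parabola: since $|\alpha|^2\left< |u_\varepsilon|^2\right>=\frac{2}{\pi R_0^2}(1-|z|^2/R_0^2)$ for $|z|\leq R_0$,
$$\int |\alpha|^4\left< |u_\varepsilon|^2\right>^2 d^2r=\frac{4}{\pi^2 R_0^4}\,2\pi\int_0^{R_0}\Bigl(1-\frac{r^2}{R_0^2}\Bigr)^2 r\,dr=\frac{4}{3\pi R_0^2},$$
and $\partial^2 b/\partial\tau_R^2\simeq 0.68$ at the triangular point.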
This computation justifies the approach
which consists in decoupling the lattice contribution from the
profile contribution in the energy
\cite{S} but, given the definition of $f^\alpha$ using $\Pi$, it
relies on strong deformations of the lattice for
points far away from the Thomas Fermi region.
For a shear deformation for which
$u_{ij}$ are the components of the deformation tensor,
$\tau^2-\tau^1=i\sqrt 3 u_{xy}$.
The elastic coefficient $C_2$ is defined by the fact that the
difference in energy should be $4C_2 u_{xy}^2$. This separation of
scales allows one to compute $C_2\sim 0.68 G/(4 \pi R_{0}^{2})$ (see
also \cite{S,Sinova02}) and to relate the BEC coefficient to the one
computed for the Abrikosov solution.
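Explicitly, substituting $|\tau^2-\tau^1|^2=3u_{xy}^2$ into the energy difference above gives
$$\frac{0.68\,G}{3\pi R_0^2}\cdot 3u_{xy}^2=\frac{0.68\,G}{\pi R_0^2}\,u_{xy}^2\equiv 4C_2\,u_{xy}^2,\qquad\hbox{hence}\quad C_2=\frac{0.68\,G}{4\pi R_0^2}.$$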
{\bf Approximation by polynomials and modes} An interesting
issue, especially for the computations of modes, is to get
an estimate of the degree of the polynomial which could approximate $f^\alpha$,
since this function has an infinite number of zeroes. We can prove
\cite{ABN} that
as the degree of the polynomial gets large, the minimum of the energy for
the problem restricted to polynomials (and the computation of modes) is a
good approximation of the full problem. The convergence rates that we
obtain are not satisfactory yet. We believe that a good degree should be
$\kappa /\varepsilon$, where $\kappa>R_0^2$ and $R_0$ is the radius
of the inverted parabola. Given
that the area of a cell is $\pi \varepsilon$, $\kappa=R_0^2$ would correspond
to having only the visible vortices. Numerical simulations indicate that a
sufficient number of invisible vortices is needed to recreate the inverted
parabola profile \cite{ABD}. There are two types of invisible vortices:
those close to the boundary of the inverted parabola which contribute to
the bulk modes and those sufficiently far away which produce single
particle excitations as explained in
\cite{Macdo}. An open issue is to understand the location of these latter
invisible vortices; some simulations suggest
that they lie on concentric circles, but then the density of these
circles should be very low to match our predicted global vortex density
far away which behaves like $1/|z|$.
\begin{figure}
\centerline{\includegraphics[width=6.5cm]{essai.ps}}
\caption{Number of modes $n/n_{tot}$ having a lower energy
than a given energy $e$ for $G=3$, $\Omega=0.999$}
\label{fig}
\end{figure}
We have performed numerical simulations with
$\Omega=0.999$ and $G=3$: this fixes the number
of visible vortices to 30, and we vary the number of
total vortices $N$. One needs at
least $N=52$ (that is 22 invisible vortices)
to properly approximate the inverted parabola, the energy minimizer and the bulk
modes.
The distortion
of the lattice is small at the edges but large at large
distances. For $N$ too small, some modes do not appear
(see Figure \ref{fig}), while
for $N$ very large, one expects higher modes that \cite{Macdo,Macdo2}
interpret as single particle modes.
{\bf Conclusion}
We have shown that for the minimizer of the Gross Pitaevskii
energy in the LLL,
the lattice of vortices is infinite, but not uniform. Any slow
varying profile can be approximated in the LLL by distorting the
lattice. This is proved using an explicit expression for the
projection onto the LLL. Our results also give insight into the
elastic coefficient $C_2$ and the approximation of the minimizer
and modes by polynomials.
{\em Acknowledgements: } We are very indebted to Jean Dalibard and
Allan MacDonald for stimulating discussions. Part of them took
place at the "Fondation des Treilles" in Tourtour which hosted a
very fruitful interdisciplinary maths-physics meeting on these
topics. We thank James Anglin, Sandy Fetter and Sandro Stringari
for interesting comments.
|
2,869,038,154,981 | arxiv | \section{The ANNIE Experiment}
The primary physics goal of ANNIE in Phase II is to study the multiplicity of final state neutrons from neutrino-nucleus interactions in water. These measurements will improve our understanding of the many-body dynamics of neutrino-nucleus interactions and will allow us to reduce the systematic uncertainties of the neutrino energy reconstruction in oscillation
experiments and the signal-background separation for neutrino experiments. Efficient detection of neutrons in ANNIE will be made possible by searching for a delayed signal from their capture on gadolinium (Gd) dissolved in water. Gd nuclei have high neutron capture cross-sections and produce 8 MeV gammas within several tens of microseconds after the initial event, which provide a detectable signal in water Cherenkov detectors.
The ANNIE detector consists of a Gd-doped water detector deployed on the Booster Neutrino Beam at Fermilab \cite{BNB}. This beam is about 93\% pure $\nu_{\mu}$ (when running in neutrino mode) and has a spectrum that peaks at about 700 MeV. A Front Veto to reject entering backgrounds produced in the upstream rock and an external Muon Range Detector (MRD) downstream from the neutrino target are also utilised. The experiment is designed to proceed in two stages: a partially-instrumented test-beam run using only photomultiplier tubes (PMTs) (Phase I) for the purpose of measuring critical neutron backgrounds to the experiment \cite{ANNIE} and a physics run with a fully-instrumented detector (Phase II).
Phase I is now completed and the analysis of the collected data is ongoing. ANNIE Phase II will be the first experiment to use fast-timing and position-precise LAPPDs \cite{LAPPD}, which are now being produced by Incom Inc., to perform these measurements. To realise the physics goals for Phase II, the ANNIE collaboration has developed several track and kinematic reconstruction techniques.
\section{Track Reconstruction}
To reconstruct a beam neutrino interaction vertex, we employ an algorithm based on the techniques used in previous water Cherenkov (WCh) detectors, such as Super-Kamiokande \cite{SK}. A track of a charged particle produced from a neutrino interaction can be characterized by six parameters: three spatial parameters specify the vertex position, one time parameter reflects when the interaction took place and two angular parameters specify the direction of the primary lepton track. A relativistic particle traveling in water will emit Cherenkov light, which is collected by the photodetectors mounted on the inner surface of the tank. The photon hit timing and the Cherenkov cone pattern are used to reconstruct the six parameters, which are determined from a maximum-likelihood fit. These six parameters are varied in the fit to maximize the overall figure-of-merit (FOM), which is used to estimate the goodness of the fit. For the cone-edge fit, we build an analytical probability density function to describe the expected angular distribution of all digits. We then vary the track parameters and calculate an angular distribution from them, comparing with the PDF to give us the cone-edge component of the FOM. The best-fit vertex position and track parameters are those which maximize the overall FOM \cite{ANNIE}.
A sample of muons that are produced within the expected fiducial volume and stopped inside the MRD is selected for the event reconstruction. The vertex radial displacement ($\Delta$r) is defined as the distance between the reconstructed and the true vertex positions, and the track angular displacement ($\Delta\phi$) is defined as the angle between the reconstructed and the true track directions. Both displacements are calculated on an event-by-event basis. We define the vertex (angular) resolution as the value of $\Delta$r ($\Delta\phi$) at the 68th percentile of all successfully reconstructed events from the sample. The vertex and angular resolutions are investigated for a PMT-only configuration including 128 8-inch PMTs (about 20\% coverage of the inner surface of the tank) and for a combined configuration including 128 PMTs and 5 LAPPDs on the downstream wall of the tank.
Figure \ref{fig:R_phi} shows the cumulative distributions, expressed as a percentage of successfully reconstructed events, for the vertex and direction reconstruction and compares the performance achieved in the two configurations. The vertex resolution from 20\% coverage with 128 conventional PMTs is about 38 cm. A configuration with 5 LAPPDs and 128 PMTs achieves a much improved resolution of 12 cm. Using the LAPPDs and PMTs combined configuration, the track angle can be reconstructed with a resolution of about 5 degrees, which is a factor of two improvement over the PMT-only configuration.
\begin{figure}[htb]
\centering
\includegraphics[height=1.8in]{DeltaR}
\centering
\includegraphics[height=1.8in]{DeltaPhi}
\caption{Cumulative distributions of vertex (left) and direction (right) resolutions, for reconstructed events by the ANNIE detector with 128 PMTs only (blue) and 5 LAPPDs and 128 PMTs (red).}
\label{fig:R_phi}
\end{figure}
\section{Energy Reconstruction}
To estimate the muon and neutrino energies, quasi-elastic events with muons stopped inside the MRD are selected. The track length in the MRD is calculated from a fit to the recorded hits. The track length in the water tank is reconstructed using a Deep Learning Neural Network algorithm (from TensorFlow 1.3.0). This is trained on multiple parameters including the photodetector hit times, the number of hits and the Cherenkov photon emission points based on the reconstructed interaction vertex and direction. The track length is then passed to a Boosted Decision Tree (BDT) together with the MRD track length, the reconstructed vertex position and angle, the distances of the reconstructed vertex from the detector walls and the number of hits in LAPPDs and PMTs for each event. The BDT (from Scikit-Learn 0.18.2), similar to that in \cite{EnerReco}, is used to reconstruct the muon and neutrino energy. Figure \ref{fig:ener} shows the muon and neutrino energy resolutions achieved for the ANNIE configuration with 5 LAPPDs and 128 PMTs. The energy resolution is defined as the percentage $|E^{true} - E^{reco}|/E^{true}$. The muon (neutrino) energy resolution achieved at the 68th percentile of all reconstructed events from the sample is 10\% (14\%).
\begin{figure}[htb]
\centering
\includegraphics[height=1.7in]{Ereco}
\caption{The cumulative distribution of the muon (red) and neutrino (green) energy resolution for reconstructed events by the ANNIE detector with 5 LAPPDs and 128 PMTs.}
\label{fig:ener}
\end{figure}
\section{Kinematic Reconstruction}
The reconstruction of event kinematics requires the estimation of the energy and direction of all particles in the final state. Present results have focused on quasi-elastic events, which are the primary interaction channel in ANNIE and are completely described by the energy of the incoming neutrino and the energy and momentum of the outgoing muon. Stopped muon events are selected for which the muon energy is measurable. The muon and neutrino energies from the BDT are used together with the reconstructed muon angle to calculate the momentum transferred, Q$^2$. Figure \ref{fig:Q} compares the accuracy of momentum transfer calculated for detector simulations using 128 PMTs alone, and with 128 PMTs and 5 LAPPDs. The addition of LAPPDs considerably improves the Q$^2$ resolution.
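For charged-current quasi-elastic kinematics, Q$^2$ is presumably obtained from the standard expression (our notation; the exact implementation may differ)
\[ Q^2 = 2E_\nu\left(E_\mu - p_\mu\cos\theta_\mu\right) - m_\mu^2, \]
where $E_\nu$ is the reconstructed neutrino energy, $E_\mu$ and $p_\mu$ are the reconstructed muon energy and momentum, and $\theta_\mu$ is the reconstructed muon angle with respect to the beam direction.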
\begin{figure}[htb]
\centering
\centering
\includegraphics[height=1.8in]{Q_2}
\caption{The 1-$\sigma$ Q$^2$ resolution for four bins in true Q$^2$, reconstructed by the ANNIE detector with 128 PMTs only (blue) and 5 LAPPDs and 128 PMTs (red).}
\label{fig:Q}
\end{figure}
\section{Conclusions}
In Phase I, the ANNIE Collaboration performed measurements of the neutron backgrounds. In Phase II, the Collaboration plans to measure the neutron yield from $\nu_{\mu}$ interactions as a function of Q$^2$. The key technological component of Phase II, LAPPDs, are now being produced by Incom Inc. and the
simulation and reconstruction tools for Phase II show good performance. The ANNIE Phase II data taking is foreseen in late 2018.
|
2,869,038,154,982 | arxiv | \section*{ACKNOWLEDGMENT}
This work is supported by the Robotics and Internet-of-Things Lab of Prince Sultan University.
\bibliographystyle{ieeetr}
\section{INTRODUCTION}
Tree counting from aerial images is a challenging problem with many applications such as forest inventory, crop estimation, and farm management. Nevertheless, counting palm trees in large farms has remained difficult for agriculture authorities due to the massive number of trees and the inefficiency of manual counting approaches. The problem becomes even more laborious and tedious when we also need to identify the GPS location of palm trees for governance purposes. The inefficiency of traditional methods leads to inconsistent data collection about the number of palms, as reported by agriculture experts.
For this purpose, we present a deep learning framework for building an inventory of individual palm trees by automatically counting and geolocating them using aerial color images collected by unmanned aerial vehicles.
The remainder of the paper is organized as follows. Section II discusses the related works that dealt with palm tree detection and aerial image analysis using CNNs, and some comparative studies applied to other object detectors. Then, Section III describes the datasets and the obtained results.
\section{RELATED WORKS}
There have been several research studies on aerial image processing using deep learning in general, and on palm detection and counting more specifically. Before the era of deep learning, in 2011, Shafri et al. proposed in \cite{Helmi2011IJRS} a detection technique for oil-palm trees combining several methods, namely edge enhancement, spectral and blob analysis, and segmentation. Reference \cite{Li2016RemoteSensing}, dating from 2016, represents the first work to detect and count palm trees from multispectral QuickBird satellite images using deep learning. The spatial resolution is 2.4m, but images were processed with the panchromatic band to achieve a 60 cm resolution. The authors developed a convolutional neural network detector with a sliding window approach to localize and classify palm trees in Malaysia with an accuracy of 96\%. It was shown that the proposed CNN detector provides better performance as compared to local maximum filter and template matching. In \cite{Li2017IGARSS}, the same authors proposed a classification technique based on AlexNet for the detection of palm trees from high-resolution satellite images. The classification accuracy achieved was 92\% to 97\% for the study area of palm tree farms of Malaysia. Our work differs from \cite{Li2017IGARSS} in several aspects. First, we consider high-resolution aerial images rather than satellite images, which provides a higher resolution of 2 cm/pixel and more apparent features of palm trees. Besides, aerial images provide more up-to-date data as compared to their satellite counterparts. Second, we addressed an object detection problem rather than a classification problem, which requires both the localization and the classification of palm tree instances in images. Third, we do not only detect palm trees, but we also determine their geolocations from geotagged images.
In \cite{Zortea2018IGARSS}, the objective was to devise a deep learning algorithm for automatically building an inventory of palm trees from aerial images collected by drones. Their contribution was to combine the outputs of two CNN algorithms, where the first is applied to 10 cm/pixel images to learn fine-grain features, and the second to 20 cm/pixel images to focus on more coarse-grain features. The authors achieved detection accuracy values between 91.2 and 98.8\% using the orthomosaic of decimeter spatial resolution. Our work differs in several aspects. First, we consider higher-resolution UAV-based aerial images of 2 cm/pixel. We also develop palm detection models based on state-of-the-art algorithms, namely, YoloV3, Faster RCNN, and EfficientDet. Furthermore, we use the metadata of geotagged images to identify the geolocation of detected palms. In our work, we can achieve an accurate inventory of palm tree farms not only in terms of counting but also for the trees' geolocalization.
Some other works considered other types of trees such as \cite{Neupane2019PLOS} that developed a deep learning model for detecting banana trees from aerial images. They have reached the accuracy values of 96.4\%, 85.1\%, and 75.8\% for the altitudes 40, 50, and 60 meters, on the same farm. They have applied deep learning detection algorithm on orthomosaic maps. They have used Faster RCNN with a 42-layered Inception-v2 model feature extractor. Our work improves over \cite{Neupane2019PLOS} in that it can be applied to geotagged images, which enables us to uniquely identify each palm tree by its geolocation, and correctly deal with the issue of overlapping images.
\section{Experiments}
\begin{figure}[ht]
\centering
\includegraphics[width=8 cm]{DJI_0068-north-healthy_result.jpg}
\caption{\small \sl Sample image of the palm counting and geolocation application.}
\label{fig:sample_detection_geolocation}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=7cm,height=6cm]{Distance_correction.png}
\end{center}
\includegraphics[width=8.5cm]{Distance_correction_PSU_sample.png}
\setlength{\belowcaptionskip}{-0.2cm}
\caption{\small \sl Distance correction (d) for the geolocation of palm trees from drone images. H is the drone altitude, h is the (average) palm tree height, D is the distance between the image center (drone's vertical projection on earth) and the position of the palm tree summit in the image (supposedly corresponding to the detected bounding box center).
\label{fig:distance_correction}}
\end{figure}
While the detection of palm infestation is a significant determinant for accurate crop estimation, it needs to be accompanied by a reliable tree counting technique, given the fact that traditional counting methods are inaccurate and inefficient. In order to address this problem, we propose an automated tree counting technique from aerial images, which can also have many other applications such as forest inventory, and farm management. For this aim, we collected a dataset of 217 UAV images taken in a palm tree farm in Kharj region, in Saudi Arabia, with a total of 9,873 instances (8,652 palms, and 1,221 other trees). We manually labeled this dataset using Labelbox \cite{Labelbox}. Then, we trained a Faster R-CNN model \cite{Faster_R-CNN_journal}, which is a state-of-the-art two-stage object detection algorithm, on 80\% of the dataset (174 images). We obtained a precision of 94\% and a recall of 84\% for the detection of palm trees, on the testing dataset (43 images). The average precision (AP) at an IoU (Intersection over Union) threshold of 0.5 attains 83\%.
\par Furthermore, we developed an algorithm that tags each detected tree with its GPS location by applying photogrammetry concepts to the metadata extracted from drone images (altitude and GPS location of the drone, image size, calibrated focal length, yaw degree), then applying a distance correction based on the ratio between the drone altitude and the estimated average palm height. This geolocation technique was tested on two different types of drones (DJI Mavic Pro, and Phantom 4 pro), and was assessed to provide an average geolocation accuracy of 2.8m, a maximum of 4.9m, and a standard deviation of 1.2m. Figure \ref{fig:sample_detection_geolocation} shows an example of palm detection and geolocation in a UAV image, displaying on top of each detected bounding box the class of each object (palm tree or other tree), the classification confidence level, and the latitude and longitude of the bounding box center.
The GPS tagging allows us to uniquely identify, track, and count palm trees from a series of drone images, while correctly dealing with the issue of image overlap while the drone is flying. This procedure can be generalized to the geolocation of any other objects in UAV images.
To geolocate a pixel ($x,y$) in a drone image, we first calculate the equivalent distance to the central pixel ($x_c, y_c$) in the image frame:
\[d_x=\frac{(x-x_c)}{F_c}H\]
\[d_y=\frac{(y_c-y)}{F_c}H\]
where $H$ is the drone altitude, and $F_c$ is the calibrated focal length of the camera.
Then, we apply a rotation by the value of the flight yaw, to convert $(d_x,d_y)$ to local tangent plane (LTP) coordinates. Finally, we apply a distance correction (Figure \ref{fig:distance_correction}) based on the average palm tree height to take account of projection issues that induce a difference between the position of the palm tree summit on the image (which corresponds to the center of the detected bounding box) and its footprint.
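For concreteness, here is a minimal sketch of these two steps under our conventions (the exact sign conventions depend on the image-frame and yaw-angle definitions): the rotation by the flight yaw $\psi$ reads
\[\left(\begin{array}{c} d_E \\ d_N \end{array}\right)=\left(\begin{array}{cc} \cos\psi & \sin\psi \\ -\sin\psi & \cos\psi \end{array}\right)\left(\begin{array}{c} d_x \\ d_y \end{array}\right),\]
and, by the similar triangles of Figure \ref{fig:distance_correction}, the correction is $d = D\,h/H$, so that the corrected ground distance of the palm footprint is
\[ D' = D - d = D\left(1-\frac{h}{H}\right). \]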
|
2,869,038,154,983 | arxiv | \section{Introduction}
In the cores of galaxy clusters, the intracluster medium (ICM) can reach high enough density and low enough temperature that the inferred cooling time is much shorter than the age of the Universe. In these so-called ``cooling flow clusters'', simple models predict that $\sim$100--1000 M$_{\odot}$ yr$^{-1}$ of cool gas should condense out of the hot intracluster plasma and fuel star formation in the central cluster galaxy \citep[for a review, see][]{fabian94}. However, with few notable exceptions \citep{mcnamara06,mcdonald12c}, we do not observe such vigorous starbursts at the centers of galaxy clusters: the typical star formation rate in the core of a cooling flow cluster is only $\sim$1--10 M$_{\odot}$ yr$^{-1}$ \citep[e.g.,][]{hicks05,odea08,mcdonald11b}. This low level of star formation is most likely being fueled by local thermodynamic instabilities in the ICM \citep[e.g.,][]{mccourt12}, with the remaining $\gtrsim90\%$ of the energy lost from cooling being offset by radio-mode feedback from the central AGN \citep[e.g.,][]{churazov01, rafferty06, rafferty08,fabian12,mcnamara12}.
While most cooling flow clusters show evidence of such ``reduced cooling flows'' in the form of ongoing star formation \citep[e.g.,][]{johnstone87,mcnamara89,allen95,odea08,hicks10,mcdonald11b} and cold molecular gas \citep[e.g.,][]{edge01,edge02,salome03,hatch05,salome11,mcdonald12b}, there is still very little evidence for gas between 10$^4$\,K and 10$^7$\,K, which would directly link the hot and cool phases.
High resolution X-ray spectroscopy of individual clusters has, thus far, only been able to put upper limits on the amount of $\sim$10$^6$\,K gas in the cores of galaxy clusters \citep[e.g.,][]{peterson03,peterson06,sanders10}.
Recently, \cite{sanders11b} performed a stacking analysis of X-ray grating spectra, yielding a detection of O\,\textsc{vii} at a level $\sim$4--8 times lower than the expectation from simple cooling flow models.
Using the FUSE satellite, \cite{oegerle01} and \cite{bregman01,bregman06} found evidence for O\,\textsc{vi} emission in the far-UV, probing gas at $\sim10^{5.5}$\,K, in the cores of several galaxy clusters, inferring cooling rates well below the cooling flow expectation. These observations were nearly always centered on the nucleus of the central cluster galaxy, where AGN feedback could in fact be heating the gas, causing excess O\,\textsc{vi} emission.
\begin{figure*}[htb]
\centering
\includegraphics[width=0.99\textwidth]{a1795.eps}
\caption{\emph{Left:} Continuum ($\sim7000$\AA) image of the central galaxy in Abell~1795. \emph{Left-Center:} H$\alpha$ image showing the twin filaments extending $\sim$50\,kpc to the south of the central galaxy \citep{mcdonald09}. \emph{Right-Center:} Far-UV image from HST ACS/SBC \citep{mcdonald09} showing young stellar populations along the H$\alpha$ filaments. \emph{Right:} Zoom-in on the far-UV image, showing the location of the 2.5$^{\prime\prime}$ COS aperture. This aperture, which is shown in all four panels, is centered on the brightest UV clump in the western filament. In the left-most panel, the positions of the two closest galaxies are denoted by A and B. These galaxies are 16 and 21 kpc away in projection from the COS pointing, respectively, implying that any gas/stars in the filaments that originated in the satellite galaxies would have been stripped from these galaxies $>$26 Myr ago (assuming $v = 800$ km s$^{-1}$).
}
\label{fig:image}
\end{figure*}
Here we present new far-UV (FUV) spectroscopy (\S2) of Abell~1795 (A1795), a nearby, strongly-cooling galaxy cluster \citep[see e.g.,][Ehlert et~al.~ 2014]{fabian01,ettori02}. These spectra were obtained along the southwestern filament, which is observed to be cooling rapidly in the X-ray \citep[][Ehlert et~al.~ 2014]{crawford05,mcdonald10}, contains warm ($10^4$\,K) ionized \citep{cowie83,crawford05,mcdonald09} and cold molecular gas \citep{salome04,mcdonald12b}, and is rapidly forming stars \citep[$\sim$1 M$_{\odot}$ yr$^{-1}$;][]{mcdonald09}. With deep FUV spectroscopy, we can estimate the age and metallicity of these newly-formed stars (\S3), allowing us to determine if they are forming in situ or have been tidally stripped, while also measuring the O\,\textsc{vi} emission line flux (\S4) far from the influence of the central AGN ($\sim$30 kpc). This approach will allow us to link the young stars to the cooling X-ray gas, if that is indeed their origin (\S5). We will finish, in \S6, with a discussion of the current state of the cooling flow problem, and how these new data can advance our understanding.
Throughout this paper we assume H$_0$ = 70 km s$^{-1}$ Mpc$^{-1}$, $\Omega_M=0.27$, and $\Omega_{\Lambda}=0.73$.
\section{Data}
\setcounter{footnote}{0}
FUV spectroscopy for this program was acquired using the \emph{Cosmic Origins Spectrograph} (COS) on the \emph{Hubble Space Telescope} using the G140L grating with $\lambda_{center}=1280$\AA, which yields a spectral coverage of 1080--1900\AA. The 2.5$^{\prime\prime}$ aperture was centered at ($\alpha$, $\delta$) = 207$^{\circ}\hspace{-0.04in}.2199$, +26$^{\circ}$\hspace{-0.04in}.5873, which corresponds to the peak of both the FUV and H$\alpha$ emission along the filament (see Figure \ref{fig:image}).
COS spectroscopy is simultaneously obtained in ``blue'' and ``red'' channels, with respective wavelengths spanning $\sim$1080--1200\AA\ and $\sim$1250--1800\AA\ for our setup. The gap between 1200--1250\AA\ spans the geocoronal Ly$\alpha$ line (1216\AA). We observe one other strong geocoronal line due to O\,\textsc{i} at $\sim$1302.2--1306\AA\footnote{\url{http://www.stsci.edu/hst/cos/calibration/airglow\_table.html}}, which is redward of redshifted Ly$\beta$ ($\lambda_{Ly\beta} = 1290.8$\AA). We determine the redshift of the spectrum using a joint fit to the Ly$\beta$ emission line and two absorption features in the red channel (Si\,\textsc{iv}$\lambda$1394 and C\,\textsc{iv}$\lambda$1548). This fit yields $z=0.0619 \pm 0.0005$, which is consistent with our optical redshift of $z=0.0618$ from \cite{mcdonald12a} at the same position along the filament.
\section{FUV Continuum: Young Stellar Populations}
\subsection{UV Absorption Indices}
To constrain the age and metallicity of the stellar population responsible for the observed FUV continuum in A1795, we use predicted UV absorption line strengths from \cite{maraston09} [hereafter M09], which are cast in terms of the \emph{International Ultraviolet Explorer} (IUE) index system established by \cite{fanelli92}.
The use of an index-based approach helps reduce the uncertainty in our results due to reddening effects, which are most severe at UV wavelengths. Since the M09 models do not consider contributions of other hot stellar phases (e.g. blue horizontal branch), we implicitly assume that the observed FUV emission is entirely due to young ($<$ 1 Gyr) stars.
\begin{deluxetable*}{c c c}[h!]
\centering
\tablecaption{Stellar Age and Metallicity from UV Absorption Line Indices}
\tablewidth{12.5 cm}
\tablehead{
\colhead{Indices} &
\colhead{~~A [Myr]~~} &
\colhead{log$_{10}$(Z) [Z$_{\odot}$]}
}
\startdata
BL1302, Si\,\textsc{iv}, BL1425, C\,\textsc{iv}a, C\,\textsc{iv}, C\,\textsc{iv}e, BL1617, BL1664 & 7.5$^{+2.5}_{-2.0}$ & $-0.4^{+0.2}_{-0.1}$ \\
\\
\phantom{BL1302,} Si\,\textsc{iv}, BL1425, C\,\textsc{iv}a, C\,\textsc{iv}, C\,\textsc{iv}e, BL1617, BL1664 & 10.0$^{+10.0}_{-4.5}$ & -0.2$^{+0.2}_{-0.2}$ \\
BL1302, \phantom{Si\,\textsc{iv},} BL1425, C\,\textsc{iv}a, C\,\textsc{iv}, C\,\textsc{iv}e, BL1617, BL1664 & 7.5$^{+2.5}_{-2.0}$ & -0.4$^{+0.2}_{-0.3}$ \\
BL1302, Si\,\textsc{iv}, \phantom{BL1425,} C\,\textsc{iv}a, C\,\textsc{iv}, C\,\textsc{iv}e, BL1617, BL1664 & 7.5$^{+2.0}_{-4.0}$ & -0.4$^{+0.2}_{-0.1}$ \\
BL1302, Si\,\textsc{iv}, BL1425, \phantom{C\,\textsc{iv}a,} C\,\textsc{iv}, C\,\textsc{iv}e, BL1617, BL1664 & 9.5$^{+0.5}_{-2.0}$ & -0.5$^{+0.1}_{-0.1}$ \\
BL1302, Si\,\textsc{iv}, BL1425, C\,\textsc{iv}a, \phantom{C\,\textsc{iv},} C\,\textsc{iv}e, BL1617, BL1664 & 3.0$^{+4.0}_{-1.0}$ & -0.1$^{+0.5}_{-0.3}$ \\
BL1302, Si\,\textsc{iv}, BL1425, C\,\textsc{iv}a, C\,\textsc{iv}, \phantom{C\,\textsc{iv}e,} BL1617, BL1664 & 6.5$^{+3.5}_{-3.0}$ & -0.4$^{+0.2}_{-0.1}$ \\
BL1302, Si\,\textsc{iv}, BL1425, C\,\textsc{iv}a, C\,\textsc{iv}, C\,\textsc{iv}e, \phantom{BL1617,} BL1664 & 7.5$^{+2.5}_{-2.0}$ & -0.4$^{+0.1}_{-0.1}$ \\
BL1302, Si\,\textsc{iv}, BL1425, C\,\textsc{iv}a, C\,\textsc{iv}, C\,\textsc{iv}e, BL1617\phantom{, BL1664} & 7.5$^{+2.5}_{-2.0}$ & -0.4$^{+0.2}_{-0.1}$
\enddata
\tablecomments{All uncertainties are 1$\sigma$. See \cite{maraston09} for a discussion of far-UV absorption indices.
}
\label{table:indices}
\end{deluxetable*}
\begin{figure*}[htb]
\centering
\includegraphics[width=0.99\textwidth]{redcont.eps}\\
\caption{Far-UV spectrum from the red channel of our HST-COS observation. This spectrum has been binned in wavelength by a factor of 27 (2.2\AA) in order to improve signal-to-noise. Vertical grey bands correspond to spectral indices as defined by \cite{fanelli92}. Overplotted are various models generated with Starburst99 \citep{leitherer99} using the latest stellar tracks from \cite{ekstrom12} and \cite{georgy13}, demonstrating the overall quality of these fits to the data. }
\label{fig:redcont}
\end{figure*}
The M09 models come in two flavors: ones based on empirical fitting functions
to IUE spectra of Milky Way and Magellanic Cloud (MC) stars or others based on
theoretical Kurucz spectra. We use the latter here given that they reproduce
the ages of MC globular clusters (from color-magnitude diagrams) to within a
mean residual of 0.02 $\pm$ 0.32 dex (see Appendix C of M09). These models
span a semi-regular grid of ages from 1 Myr-1 Gyr and total metallicities from
-1.00 to +0.35 dex with respect to solar\footnote{The spacing of the age grid changes from 0.5 Myr,
to 5 Myr, to 50 Myr in the intervals 1 Myr-10 Myr, 10 Myr-0.1 Gyr, and 0.1-1.0
Gyr, respectively. On the other hand, the models are spaced at 0.1 dex
intervals in terms of metallicity, except the +0.35 dex models, which are
offset by 0.15 dex from the +0.20 dex models.}. We allow age and metallicity
to vary simultaneously in our fits, which follow a maximum likelihood
approach. Statistical errors in the best-fit quantities are estimated via
Monte Carlo simulations of the measured index errors. Although M09 recommend
fitting to all IUE indices in the 1000-2000\AA\ range at once with their
theoretical models, we have performed fits to a variety of index combinations, specifically a blue set (BL1302, Si\,\textsc{iv}, BL1425) and a red set (C\,\textsc{iv}a, C\,\textsc{iv}, C\,\textsc{iv}e, BL1617, BL1664), in order to test the robustness of our results. These results are summarized in Table \ref{table:indices}. Overall, we find consistently low ages (7.5$^{+2.5}_{-2.0}$ Myr) and metallicities (0.4$^{+0.2}_{-0.1}$ Z$_{\odot}$) regardless of which combination of indices we employ.
\subsection{Full Spectrum Synthesis}
\begin{figure}[htb]
\centering
\includegraphics[width=0.48\textwidth]{chis_sb99.eps}
\caption{Goodness-of-fit ($\chi^2_{dof}$) as a function of age for instantaneous (upper panel) and continuous (lower panel) star formation. For all choices of IMF and metallicity, the data are well-represented by either a young ($\lesssim$10\,Myr) stellar population or ongoing star formation.}
\label{fig:chis}
\end{figure}
A complementary method of constraining the age and metallicity of the UV-emitting stellar population is to model the full spectrum using synthetic stellar population models. For this, we opt to use the latest version of Starburst99 \citep[v7.0.0;][]{leitherer99}\footnote{\url{http://www.stsci.edu/science/starburst99/docs/default.htm}}, which is, to our knowledge, the only publicly-available stellar population synthesis (SPS) code with spectral resolution better than 1\AA\ over 1000\AA\ $< \lambda < 2000$\AA. We restrict this analysis to $>$1100\AA\ -- at bluer wavelengths, Starburst99 uses empirical stellar spectra which provide a qualitatively-poor fit to the data (see \S4). We use the latest Geneva tracks, which are only available for solar \citep{ekstrom12} and 0.14Z$_{\odot}$ \citep{georgy13} metallicities. We explore three different initial mass functions: Salpeter \citep[$\alpha=-2.35$;][]{salpeter55}, with top-heavy ($\alpha=-1.35$) and top-light ($\alpha=-3.35$) variants. For each choice of metallicity and IMF, we generate synthetic spectra assuming either a burst or continuous star formation, over timescales of 50\,Myr. These model spectra are fit to the data, allowing the normalization, redshift, and reddening to vary, with a lower limit of the Galactic value imposed on the reddening \citep{schlegel98}.
Figures \ref{fig:redcont} and \ref{fig:chis} show the results of this exercise. In Figure \ref{fig:redcont} we compare several best-fitting models to the data, demonstrating the overall quality of these fits. These synthetic spectra are able to adequately fit the various absorption features discussed in \S3.1. The goodness-of-fit ($\chi^2_{dof}$) is shown in Figure \ref{fig:chis} as a function of age, metallicity, and IMF, for both instantaneous and continuous star formation.
All starburst models favor a relatively young population, with the majority showing minima at ages of 5 Myr. The continuous star formation models prefer either a Salpeter IMF with ages $>$10~Myr, or a younger top-light stellar population. As is also the case for the burst-like models, the distinguishing power between IMFs comes from the C\,\textsc{iv} feature (see Fig.\ \ref{fig:redcont}), which is likely contaminated by emission. Qualitatively, all three choices of IMF perform equally well in fitting the spectrum over the range $1220 < \lambda_{rest} < 1700$\AA.
This analysis corroborates the results from FUV line indices (\S3.1), demonstrating that the best-fitting stellar population is one that is either currently forming stars, or ceased very recently ($\lesssim$10 Myr ago).
\section{FUV Emission Lines: Probing 10$^{5.5}$K Gas with O\,\textsc{vi}}
In Figure \ref{fig:bluecont} we show the blue side ($<$1200\AA) of the spectrum. These data are significantly noisier than the red channels, and suffer from airglow emission and Galactic H$_2$ absorption to a higher degree. Despite this, there is evidence for emission at redshifted O\,\textsc{vi}\,$\lambda1038$ in the binned spectrum (upper panel of Fig.\ \ref{fig:bluecont}). The spectrum of Galactic H$_2$ absorption from \cite{mccandliss03} shows strong absorption at the same wavelength as redshifted O\,\textsc{vi}\,$\lambda$1032, which likely explains the lack of emission from the bluer line in this doublet, which should have a factor of 2 higher flux. At these wavelengths, Starburst99 relies on empirical stellar spectra (see \S3) which provide a relatively poor fit to the data, so we opt to model the continuum spectrum by performing a median smoothing over a 2.4\AA\ ($\sim$650 km s$^{-1}$) window, which should be significantly wider than any emission lines. The residual spectrum with this model subtracted is shown in the lower panel of Figure \ref{fig:bluecont}.
\begin{figure}[htb]
\centering
\includegraphics[width=0.49\textwidth]{bluecont.eps}
\caption{\emph{Upper panel:} Far-UV spectrum from the blue channel of our HST-COS observation. Here, we show the binned (0.8\AA) data along with stellar population synthesis models from Starburst99 \citep{leitherer99} and a median continuum spectrum computed with a 2.4\AA\ smoothing window. At these wavelengths Starburst99 only provides empirical models, which yield poor fits to the data (note that the red channels are well-fit by a solar-metallicity model; Fig.\ \ref{fig:redcont}). \emph{Lower panel:} Residual spectrum, with the median continuum level subtracted. The expected location of the O\,\textsc{vi}\,$\lambda\lambda$1032,1038 doublet is highlighted. In purple, we plot the spectrum of Galactic H$_2$ absorption from \cite{mccandliss03}, which shows that the bluer O\,\textsc{vi}\,$\lambda$1032 line is most likely being absorbed by molecular gas in our galaxy. The correspondence between a saturated absorption line and the O\,\textsc{vi}$\lambda$1032 line is highlighted by a vertical dashed line. We detect the redder, unabsorbed line at $>$4$\sigma$ significance, as shown in the inset, with a velocity width of $\lesssim$0.7\AA\ ($\lesssim$90 km s$^{-1}$).
}
\label{fig:bluecont}
\end{figure}
Assuming a linewidth of 0.43\AA\ ($\sigma = 50$ km s$^{-1}$), based on the H$\alpha$ velocity dispersion \citep{mcdonald12a}, we measure a flux of $f_{O\,\textsc{vi}} = (4.0 \pm 0.9) \times 10^{-17}$ erg s$^{-1}$ cm$^{-2}$ in the redder line of the O\,\textsc{vi} doublet ($>$4$\sigma$ detection). While the statistical significance of this line is low, we note that it is the most statistically significant deviation over the entire blue spectrum. The fact that this deviation is at the wavelength that we expect to find redshifted O\,\textsc{vi} further strengthens the significance of the detection. The width of this line is $\lesssim$0.7\AA\, or $\lesssim$90 km s$^{-1}$.
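For scale (a rough conversion, using our adopted cosmology and $z=0.0619$, which give a luminosity distance $d_L\approx 280$ Mpc), this flux corresponds to a line luminosity
\[ L_{\rm O\,VI\,\lambda 1038} = 4\pi d_L^2\,f_{\rm O\,VI} \approx 4\times10^{38}~{\rm erg~s^{-1}}. \]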
\section{Interpretation: Ongoing Star Formation in Condensing Filaments of Intracluster Gas}
Given the high ionization energy of the O\,\textsc{vi} transition (138.1 eV), it is not likely to be due to the same process(es) responsible for the low-ionization H$\alpha$ emission (Fig.\ \ref{fig:image}). The original detection of O\,\textsc{vi} in the center of Abell~1795 by \cite{bregman06} was taken as evidence for cooling of the ICM. Alternatively, \cite{sparks12} recently invoked an evaporation scenario to explain the filamentary coronal emission in the core of the Virgo cluster. In the latter scenario, the hypothesis is that cool gas has been stripped from nearby gas-rich galaxies and is being heated via conduction by the hot ICM. The lack of stars in the filaments surrounding M87 was offered as further evidence of this scenario.
The detection of young ($\lesssim$10 Myr) stars and cold molecular gas \citep{mcdonald12b} in the filaments of Abell~1795 favors the scenario in which gas is condensing and stars are forming in situ. If, instead, the cool gas originated in a nearby galaxy, it would have been stripped \emph{at least} 26 Myr ago, given the distance to the nearest satellite galaxy (21~kpc; Fig.\ \ref{fig:image}) and the typical galaxy velocities in the core of Abell~1795 ($\sim$800 km s$^{-1}$).
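The quoted timescale is simply the crossing-time estimate
\[ t \gtrsim \frac{d}{v} = \frac{21~{\rm kpc}}{800~{\rm km~s^{-1}}} \approx 26~{\rm Myr}, \]
much longer than the $\lesssim$10 Myr stellar ages derived in \S3.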
Assuming the stars are forming in situ, it seems unlikely that the filamentary gas is simultaneously condensing (into stars) and evaporating (causing coronal emission) -- the simplest explanation is that the hot intracluster gas is cooling through the O\,\textsc{vi} transition and into molecular gas, before ultimately forming stars. The agreement between the measured metallicity of the stars (0.4$^{+0.2}_{-0.1}$\,Z$_{\odot}$; Table 1) and the cooling ICM at the same position (0.5$^{+0.3}_{-0.2}$\,Z$_{\odot}$; Ehlert et~al.~ 2014) further supports this scenario.
If we assume that the O\,\textsc{vi}\,$\lambda$1038 emission is due to cooling intracluster gas, we can estimate the cooling rate following \cite{edgar86} and \cite{voit94}. The inferred cooling rate can vary by a factor of $\sim$1.6 depending on whether the cooling proceeds isobarically or isochorically. Based on very deep X-ray data of the cooling filament (Ehlert et~al.~ 2014), we estimate the sound-crossing time at the location of the COS aperture to be $\sim$2-5~Myr and the cooling time to be $\sim$2~Myr (assuming $n_e \sim 1$ cm$^{-3}$). Thus, it is unclear exactly how the cooling will proceed. Based on the measured O\,\textsc{vi}\,$\lambda$1038 flux, and following \cite{edgar86}, we estimate a cooling rate of 0.85 $\pm$ 0.15 (stat) $\pm$ 0.20 (sys) M$_{\odot}$ yr$^{-1}$, where the systematic uncertainty includes the expectation for isobaric (1.0 $\pm$ 0.2 M$_{\odot}$ yr$^{-1}$) and isochoric (0.7 $\pm$ 0.2 M$_{\odot}$ yr$^{-1}$) cooling.
This cooling rate, which probes $\sim$10$^{5.5}$\,K gas, can be compared to both the X-ray cooling rate and the star formation rate, which probe the hot ($\sim$10$^7$\,K) and cold ($\sim$10\,K) extremes, respectively. We estimate the classical cooling rate (\.{M}$_X$ = $2L_X\mu m_p/5kT$) within the COS aperture to be $\sim$1 M$_{\odot}$ yr$^{-1}$, assuming that, on small ($<$kpc) scales, the cooling ICM is similar in morphology (clumpiness) to the UV continuum. Based on our stellar population modeling (\S3.2), we estimate an extinction-corrected star formation rate (assuming continuous star formation) within the COS aperture of 0.11 $\pm$ 0.02 M$_{\odot}$ yr$^{-1}$. This suggests that $13^{+3}_{-2}$\% of the gas cooling through 10$^{5.5}$\,K is converted to stars.
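The classical rate quoted above is simply the enthalpy budget for isobaric cooling: each particle radiates $\frac{5}{2}kT$ as it cools out of the hot phase, so
\[ L_X = \frac{\dot{M}_X}{\mu m_p}\,\frac{5}{2}kT \quad\Longrightarrow\quad \dot{M}_X = \frac{2 L_X \mu m_p}{5kT}. \]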
Alternatively, some fraction of the O\,\textsc{vi} emission could come from a mixing layer, where hot electrons from the ICM penetrate the cold filaments, following \cite{fabian11} -- correcting for this would raise the inferred star formation efficiency. However, we do not find an excess of optical line emission above the expectation given the UV continuum level, as is observed in NGC~1275, suggesting that these effects are small in the filaments of Abell~1795.
For comparison, \cite{bregman06} find a total O\,\textsc{vi}-derived cooling rate of 42 $\pm$ 9 M$_{\odot}$ yr$^{-1}$ over the full core of A1795, while the extinction-corrected star formation rate \citep[assuming $\left<E(B-V)\right>$ = 0.1;][]{mcdonald12a} over the same area is only 5.2 M$_{\odot}$ yr$^{-1}$ \citep{mcdonald09}. This corresponds to an efficiency of forming stars out of the 10$^{5.5}$\,K gas of 12$^{+4}_{-2}$\%. This is also comparable to the star formation efficiency found by \cite{mcdonald11b} of 14$^{+18}_{-8}$\% based on X-ray spectroscopy and UV photometry.
\section{Implications for the Cooling Flow Problem}
Given that cooling flows are found to be $\sim$1\% efficient at converting the cooling ICM into stars \citep[e.g.,][]{odea08}, it is important to understand at what temperatures the bulk of the gas is held up. Assuming that the IMF does indeed follow \cite{salpeter55}, we find, for the filaments in Abell~1795 (where the effects of AGN feedback should be minimized), that the cooling efficiency at high temperatures is of order 100\%
($\epsilon_{hot}$~$\equiv$~\.{M}$_{\textrm{\footnotesize O{\tiny VI}}}$/\.{M}$_X $ $\sim$ 0.7--1.0),
while the star formation efficiency is low ($\epsilon_{cold}$~$\equiv$~SFR/\.{M}$_{\textrm{\footnotesize O{\tiny VI}}}$~$\sim$~0.11--0.16). In contrast, \cite{bregman01} found, using O\,\textsc{vi} observations from FUSE, that $\epsilon_{hot}$ was of order 10\%\ \emph{over the full cluster core} for Abell~1795, Abell~2597, and Perseus. This suggests that the cooling flow problem may be divided into two separate inefficiencies:
\begin{itemize}
\item $\epsilon_{hot} \sim 0.1$ : globally inefficient cooling at high temperatures (10$^7$K $\rightarrow$ 10$^5$K), due to some large-scale feedback source (e.g., AGN);
\item $\epsilon_{cold} \sim 0.1$ : locally inefficient cooling at low temperatures (10$^5$K $\rightarrow$ stars), manifesting as inefficient star formation.
\end{itemize}
The latter, which we quantify here and in \cite{mcdonald11b}, may be low due to conduction suppressing cooling at low temperatures. For comparison, star clusters in our Galaxy have typical star formation efficiencies of $\sim$8--30\% \citep{lada03} -- in principle, star formation embedded in a hot plasma should proceed less efficiently.
\section*{Summary}
We present deep far-UV spectroscopy of a cooling filament in Abell~1795, obtained using the \emph{Cosmic Origins Spectrograph} on HST. These data allow us to simultaneously probe the young stellar populations and the intermediate temperature (10$^{5.5}$\,K) gas. We find evidence for ongoing, in situ star formation, which suggests that the cool gas in these filaments was \emph{not} stripped from an infalling galaxy. The detection of O\,\textsc{vi}\ emission suggests that this star formation is being fueled by condensing intracluster gas, and that the cooling is proceeding efficiently at high temperatures, contrary to what is observed on large scales in cluster cores. We propose a scenario where the two orders of magnitude disagreement between luminosity-based X-ray cooling rates and star formation in cluster cores is due to a combination of globally-inefficient cooling at high temperatures ($\epsilon_{hot} \sim 0.1$; e.g., AGN feedback) and locally-inefficient star formation at low temperatures ($\epsilon_{cold} \sim 0.1$).
\section*{Acknowledgements}
M.\,M. acknowledges support provided by NASA through a Hubble Fellowship grant from STScI and through HST GO-12992 contract NAS5-26555. S.\,V. acknowledges support from a Senior NPP Award held at NASA-GSFC. S.\,E. acknowledges support from SAO subcontract SV2-82023 under NASA contract NAS8-03060.
|
2,869,038,154,984 | arxiv | \section{Introduction}
\noindent
In an ancient dusty magic school, where wizards cast spells by moving their hands in complex arcane patterns, a young student, Henri Pother,
hides behind a curtain watching a master wizard perform his most secret spell. Unfortunately for Henri, he can only see through a few small holes in the curtain. To make matters worse, the wizard is invisible! All Henri can see is the motion of the dust in the air around the wizard's invisible hands. \textit{Can Henri reconstruct the movement of the wizard's hands from only limited observations of the air currents?}
The above make-believe scenario is an analogue for the real-world setting of weather modeling: We have equations that model weather, but we do not know how these equations are forced (we don't know the movement of the wizard's hands). On the other hand, we observe parts of the evolution on large-scales (the motion of the dust particles from which one can systematically construct an approximation of the surrounding air current velocity, for instance through Particle Image Velocimetry \cite{WillertGharib1991}). The real question that we want to answer is then: \textit{Can we reconstruct the forcing on the system from only sparse observations of the velocity?} In the present paper, we show that the answer is ``yes,'' at least in the case of the 2D periodic incompressible Navier-Stokes equations. In particular, we propose a new algorithm for forcing reconstruction. We not only show computationally that this algorithm finds the ``true'' forcing, reaching machine-precision accuracy exponentially fast in time, but we prove mathematically that it does so.
Adequate control of fluid flow is a difficult problem that has been extensively studied \cite{gunzburger2002perspectives,gunzburger2012flow} particularly in light of modern advances in computing. Motivation for such studies is found in a number of disciplines in the physical and engineering sciences, where the ability to control either classical Newtonian fluids or complex non-Newtonian fluids is of significant interest.
In \cite{Azouani_Olson_Titi_2014} a feedback control mechanism was exploited to smoothly merge incoming partial observations with a dynamic model given by a system of partial differential equations that is simultaneously evolved forward in time, allowing for a controlled estimate of the true full state of the unobserved system. This is often referred to as the Azouani-Olson-Titi (AOT) algorithm. The AOT approach has been studied in multiple contexts of fluid systems, yielding both numerical evidence of its efficacy and rigorous analysis to justify the convergence of the generated approximating state to its true value. The current investigation exploits this feedback control approach to produce a rigorously justified approach to the determination of a steady, but a priori unknown, external driving force from partial observations of the flow field in two-dimensional turbulence.
A common assumption found in many of the works that incorporate feedback control in fluid systems is the existence of a perfect model for the dynamical evolution of the fluid itself, that is, that the underlying physical model is known exactly beforehand. Notable exceptions to this framework are the studies performed in \cite{FarhatGlattHoltzMartinezMcQuarrieWhitehead2020} and \cite{carlson2020parameter}, where the true dynamic model is known up to an unidentified parameter $\lambda$. In \cite{FarhatGlattHoltzMartinezMcQuarrieWhitehead2020}, the canonical Rayleigh-B\'enard convection setting is investigated where the exact value of the Prandtl number (the nondimensional quantity denoting the ratio of kinematic viscosity to thermal diffusivity) is unknown. Observations extracted from an identical setup, but with a different value of the Prandtl number are then rigorously shown to drive the simulated solution toward the true state up to a constant error that is Prandtl number-dependent. This analysis is then supplemented with a battery of numerical simulations that verify the mathematical conclusions and probe the efficacy of the method by exploring situations outside of the regimes where convergence can mathematically be guaranteed. In \cite{carlson2020parameter}, a similar study is performed on the periodic 2D Navier-Stokes equations with an unknown viscosity, but advanced the procedure further by exploiting a numerically observed phenomenon of relaxation of the state error to being relatively constant in order to propose an approximate value to the true viscosity; this procedure can then be iterated to produce a sequence of approximating values that reliably converge to the true viscosity value up to numerical precision. In a series of recent papers \cite{carlson2022, martinez2022convergence, martinez2022force}, the underlying mechanisms leading to the observed convergence were identified in a mathematically salient way to supply an analytical proof of this convergence.
A variation to the approach for parameter recovery that was originally introduced in \cite{carlson2020parameter} was proposed in \cite{pachev2022concurrent}. There, a more principled perspective based on enforcing a form of null-controllability was taken to develop a continuously-updating parameter algorithm in contrast to the sequentially-updating algorithm in \cite{carlson2020parameter}. This variation was numerically studied in the context of the one-dimensional Kuramoto-Sivashinsky equation and was seen to reliably infer multiple unknown parameters appearing in the system in a concurrent fashion. The algorithm of interest here is primarily based on the one introduced in \cite{pachev2022concurrent}. In contrast to \cite{pachev2022concurrent}, however, the current article negotiates a situation that firstly, possesses richer dynamical behavior in two-dimensional turbulent flow, in addition to one that contains significantly more parameters to infer in having to determine all coefficients of the forcing function, the number of which may potentially be very large, and secondly, we supply the first rigorous proof of convergence of a parameter recovery algorithm that is based on the null-controllability perspective from \cite{pachev2022concurrent}.
Lastly, we remark that the approaches described above for discovery of model parameters while simultaneously reconstructing the true state of the system is readily comparable to other data-driven model recovery techniques such as SINDy (see \cite{brunton2016discovering}, for example, or \cite{djeumou2022learning,mojgani2022discovery} and references therein). One particular benefit of the feedback control mechanism exploited here for the purpose of recovering the unknown forcing function in 2D turbulence, is that the parameter estimation is performed \textit{on-the-fly} in the fashion of continuous data assimilation and without any need for post-processing. Indeed, the data is inserted dynamically into the model in real-time, which is, in turn, incrementally updated to obtain the correct parameters that characterize the external forcing.
The rest of this article is organized as follows: Section \ref{sec:pre} will establish notation and the necessary preliminary results required by the analysis that follows. Section \ref{sec:pde} provides a formal derivation of the forcing update algorithm. Section \ref{sec:analysis} provides the rigorous analysis that justifies the algorithm in 2D. Section \ref{sec:simulations} describes the numerical simulations that were performed that demonstrate how well the algorithm performs in 2D, and Section \ref{sec:conclusions} concludes with a takeaway message and potential for future work. A reader that is less interested in the technical details of the mathematical analysis may be well-served to focus on Sections \ref{sec:pde}, \ref{sec:simulations} and \ref{sec:conclusions}.
\section{The equations of motion and their mathematical setting} \label{sec:pre}
In this section, we provide a short description of the mathematical model of interest and the functional setting in which we carry out our convergence analysis. The development of the underlying algorithm in Section \ref{sec:pde} can be followed without some of the definitions provided here, but these definitions and preliminary results are necessary for the rigorous analysis performed in Section \ref{sec:analysis} that completely justifies the algorithm.
We consider an incompressible fluid in a periodic domain, $\Omega=[0,L]^d$, of length $L>0$ (all of the analysis and simulations presented below assume that $d=2$, but the heuristic derivation of the algorithm is independent of the dimension $d$), subject to a time-independent external body force $\mathbf{f}$, which is mean-free over $\Omega$. The evolution of the velocity field is governed by the Navier-Stokes equations given by
\begin{equation}\label{eq:NS_true}
\partial_t\mathbf{u} + \mathbf{u}\cdot \nabla \mathbf{u} =-\nabla p+ \nu \nabla^2 \mathbf{u} + \mathbf{f},\quad \nabla\cdot\mathbf{u} = 0,
\end{equation}
where fluid density has been normalized to unity, the kinematic viscosity, $\nu$, is known, and $\mathbf{u}, p$ are assumed mean-free over $\Omega$. We assume that the body force is unknown and that the velocity field is partially observed. The main objective is to reconstruct the non-conservative component of $\mathbf{f}$ at \textit{observational scales} by leveraging the model jointly with the observations. For convenience, let us therefore assume that $\nabla\cdotp\mathbf{f}=0$.
An important non-dimensional quantity that serves as a proxy for the Reynolds number when the Navier-Stokes equations is externally forced is the Grashof number, which we will denote by $G$. In two-dimensions, $G$ is defined by
\begin{align}\label{Grashof}
G = \frac{\norm{\bf f}_{2}}{(\kappa_0\nu)^2},\quad \text{where}\ \kappa_0:=\frac{2\pi}{L},
\end{align}
where $\|\hspace{1pt}\cdotp\|_2$ is the $L^2$ norm over $\Omega$. Note that $\kappa_0$ is smallest eigenvalue of $-\Delta$, while in the context of three-dimensional turbulence, the Grashof number is known to be related to the Reynolds number, $\text{Re}$, of the flow as $G\sim\text{Re}^2$ (see \cite{DascaliucFoiasJolly2009}).
For our purposes, we will assume that the body force is twice weakly differentiable over $\Omega$, all of whose weak partial derivatives are square-integrable over $\Omega$. It is known that the corresponding initial value problem for the 2D NSE \eqref{eq:NS_true} is well-posed in this setting and, moreover, the dynamics for the system possesses a finite-dimensional global attractor when ${\bf f}$ is time-independent, see e.g., \cite{ConstantinFoias88, Robinson2003, Temam_1997}; additional properties of the solution which are relevant to the analysis performed later in the article is provided in Section \ref{sec:appendix}.
\section{Derivation of the algorithm and Heuristics for convergence}\label{sec:pde}
The method for parameter recovery studied in this paper makes crucial use of a state-recovery algorithm originally introduced by Azouani, Olson, and Titi \cite{Azouani_Olson_Titi_2014} in the context of continuous data assimilation for the 2D NSE equations; the system associated to this method is introduced below in \eqref{eq:NS_nudge}. In this seminal work, a {\it feedback control paradigm} is introduced for obtaining an approximation of a solution through having only observed \textit{sufficiently large, but finitely many parameters} of the solution, collected continuously in time, that asymptotically converges to the true solution corresponding to the observed data at an exponential rate. This particular algorithm differed from previously studied algorithms through the manner in which observations were used; in \cite{Azouani_Olson_Titi_2014}, observations were inserted into the dynamical model as an external forcing term that enforced relaxation to the true state of the system, whereas previous algorithms such as that studied in \cite{OlsonTiti03} had inserted observations into the model by directly replacing dynamical terms. The use of such schemes for numerical weather prediction has been and continues to be extensive (see \cite{AbarbanelShirmanBreenKadakiaReyArmstrongMargoliash2017, Baek2007, SzendroRodriguezLopez2009, YangBakerLiCordesHuffNagpalOkerekeVillafaneKalnayDuane2006} for instance).
Since its introduction to the mathematical fluid dynamics community, this algorithm has been a topic of intense activity (see \cite{AlbanezBenvenutti18, AlbanezLopesTiti2016, AltafTitiGebraelKnioZhaoMcCabeHoteit2017, BessaihOlsonTiti2015, BiswasFoiasMondainiTiti2019, BiswasHudsonLariosPei2018, BiswasMartinez2017, BlocherMartinezOlson2018, BlomkerLawStuartZygalakis2013, BrettLamLawMcCormickScottStuart2013, carlson2020parameter, DesamsettiDasariLangodanTitiKnioHoteit2019, FarhatJollyTiti2015, FarhatLunasinTiti2016a, FarhatLunasinTiti2016b, FarhatLunasinTiti2016c, FarhatLunasinTiti2017, FarhatLunasinTiti17, FarhatJohnstonJollyTiti2018, FoiasMondainiTiti2016, GeshoOlsonTiti2016, JollySadigovTiti2015, JollySadigovTiti2017, IbdahMondainiTiti2020, LeoniMazzinoBiferale2018,LeoniMazzinoBiferale2020, JollyMartinezTiti2017, JollyMartinezOlsonTiti2019, MarkowichTitiTrabelsi2016, MondainiTiti18}).
To describe our algorithm for parameter recovery, we shall denote the observable state of the velocity by $I_h(\mathbf{u})$, where $I_h$ is an (autonomous) bounded linear projection operator whose output is a suitable interpolation of the observed data into the phase space of the system \eqref{eq:NS_true}, which consists only of divergence-free vector fields. In general, $I_h$ is often referred to as an \textit{interpolant observable operator}. In the context of incompressible flows considered here, $I_h$ is understood to perform an interpolation of the observed data, then orthogonally projects the result onto solenoidal vector fields. Hence, $I_h^2=I_h$, $I_h\nabla r=0$, and $\nabla\cdotp I_h\mathbf{v}=0$, for any sufficiently smooth scalar field $r$ and vector field $\mathbf{v}$. Here, $h>0$ quantifies the resolution of the observational field in such a way that $\lim_{h\rightarrow 0}I_h(\mathbf{u})=\mathbf{u}$.
Specific examples of the manner of interpolation encoded in $I_h$ include lagrangian interpolation of nodal values or local averages which are distributed uniformly across the domain at a mesh-size $h$, but also include large-scale filtering such as projection onto finitely many Galerkin modes up to a cut-off frequency $1/h$.
We will assume that $I_h$ and its complementary projection $J_h=I-I_h$ satisfy the following boundedness properties:
\begin{align}\label{eq:boundedness}
\|I_h(\partial^\alpha\phi)\|_2^2\leq {c_m^2}{h^{-2m}}\|\phi\|_2^2,\quad \|J_h(\phi)\|_2^2\leq C_m^2h^{2m}\sum_{|\alpha|=k}\|\partial^\alpha\phi\|_2^2,
\end{align}
for any multi-index $|\alpha|=m$ with $m\geq0$, for some constants $c_m, C_m>0$, independent of $h$; such inequalities are satisfied in the case where $I_h$ is given by spectral projection. In this special case, $I_h$ also satisfies $I_h\partial^\alpha=\partial^\alpha I_h$, so that $I_h(\nabla^2J_h(\mathbf{w}))=0$. For the remainder of the manuscript, $I_h$ is therefore assumed to represent spectral projection, that is, projection onto Fourier modes $|k|\leq h^{-1}$.
The unobserved scales of the flow are then dynamically approximated by directly inserting the observations, $I_h(\mathbf{u})$, into \eqref{eq:NS_true} as a feedback-control term, resulting in the system
\begin{equation}\label{eq:NS_nudge}
\partial_t\mathbf{v} + \mathbf{v}\cdot\nabla\mathbf{v}=- \nabla q + \nu \nabla^2 \mathbf{v} + \mathbf{g} - \mu I_h(\mathbf{v})+\mu I_h(\mathbf{u}),\quad \nabla\cdotp\mathbf{v}=0,
\end{equation}
where $\mu$ denotes a tuning parameter, often referred to as the \textit{nudging coefficient}, and $\mathbf{g}$ is a divergence-free vector field chosen as a putative approximation to the unknown forcing $\mathbf{f}$. A fundamental property of the system is that in the absence of model error, i.e., $\mathbf{g}=\mathbf{f}$, then the approximating flow field, $\mathbf{v}$, asymptotically synchronizes with the true flow field, $\mathbf{u}$, provided that the true flow field is observed through sufficiently small scales, i.e., $h\ll1$, and $\mu$ is appropriately tuned \cite{azouani2014continuous}.
In the presence of model error, this fundamental property can be leveraged to develop an ansatz for $\mathbf{f}$ by enforcing relaxation of the state error, $\mathbf{w}=\mathbf{v}-\mathbf{u}$, on the observed scales. Indeed, the evolution of $\mathbf{w}$ on the observed scales is governed by
\begin{align}\notag
\partial_tI_h(\mathbf{w})
=I_h\left(-\mathbf{v}\cdotp\nabla\mathbf{v}+\nu\nabla^2\mathbf{v}+\mathbf{g}-\partial_t\mathbf{u}\right) -\mu I_h(\mathbf{w}).\notag
\end{align}
Assuming that $\mathbf{g}$ can be chosen instantaneously such that
\begin{align}\label{eq:ansatz}
\mathbf{g}=\partial_tI_h(\mathbf{u})+I_h\left(\mathbf{v}\cdotp\nabla\mathbf{v}-\nu\nabla^2I_h\mathbf{u}-\nu\nabla^2J_h\mathbf{v}\right),
\end{align}
where $J_h=I-I_h$ denotes the complementary projection, it follows that $\partial_tI_h(\mathbf{w})+\mu I_h(\mathbf{w})-\nu I_h\nabla^2J_h(\mathbf{w})=0$. Note that this choice critically relies on replacing the nudged velocity field $\mathbf{v}$ in the Laplacian term ($\nabla^2$) directly with the observed data $I_h\mathbf{u}$ combined with the unobserved data from the nudged solution $J_h\mathbf{v}$. This is in direct contrast to simply identifying the forcing update as a function of $\mathbf{v}$. This `direct replacement' strategy employed here is necessary for the rigorous convergence of the forcing to the true value, but as noted below does not appear to be necessary in practice.
By orthogonality of $I_h(\mathbf{w})$ and $J_h(\mathbf{w})$ it follows that the energy balance at observational scales one step forward in time satisfies
\begin{align}\label{eq:converge:low:modes}
\|I_h(\mathbf{w})(t+\Delta t)\|_2=e^{-\mu\Delta t}\|I_h(\mathbf{w}(t))\|_2.
\end{align}
In particular, exponential relaxation of $I_h(\mathbf{w})$ is enforced. The utility of this choice is readily seen; setting $\mathbf{h}=\mathbf{g}-\mathbf{f}$ we find that
\begin{align}\label{eq:error:formal}
I_h(\mathbf{h}
&=I_h\left[J_h(\mathbf{w})\cdotp\nabla J_h(\mathbf{w})+J_h(\mathbf{w})\cdotp\nabla\mathbf{u}+\mathbf{u}\cdotp\nabla J_h(\mathbf{w})\right]\notag\\
&\quad -\nu I_h(\nabla^2J_h\mathbf{w})+O(I_h(\mathbf{w})).
\end{align}
Under the assumption that $\mu$ is sufficiently large relative to the observational density, $h$, and time-step, $\Delta t$, the resulting state error one step forward, $\mathbf{w}(t+\Delta t)$ will satisfy the following estimates (see the Appendix for details):
\begin{align}\label{eq:error:sync}
\|\mathbf{w}(t+\Delta t)\|_2\leq O \left(\frac{\|\mathbf{h}(t)\|_2}{\mu}\right),\quad \|\nabla\mathbf{w}(t+\Delta t)\|_2\leq O \left(\frac{\|\mathbf{h}(t)\|_2}{\sqrt{\mu\nu}}\right).
\end{align}
Further assuming that $I_h\mathbf{f}=\mathbf{f}$ (the forcing function is completely observable), it follows that $I_h\mathbf{h}=\mathbf{h}$.
Then the model error in \eqref{eq:error:sync} is essentially controlled by
\begin{align}
\|I_h\mathbf{h}(t+\Delta t)\|_2&
\leq O(\|I_h(\mathbf{w}(t+\Delta t))\|_2)+O\left(\frac{\|\mathbf{h}(t)\|_2}{\sqrt{\mu\nu}}\right)\notag.
\end{align}
Hence, upon invoking \eqref{eq:converge:low:modes} for a sufficiently large time step $\Delta t$ and the fact that $I_h(\mathbf{h})=\mathbf{h}$, then for $\mu$ chosen appropriately large,
\begin{align}\label{eq:heuristic}
\|\mathbf{h}(t+\Delta t)\|_2\leq\frac{1}2\|\mathbf{h}(t)\|_2.
\end{align}
Owing to \eqref{eq:converge:low:modes}, we see that \eqref{eq:heuristic} implies exponential convergence of $\mathbf{h}$ to zero upon further iteration.
\section{Definition of the algorithm}
The discussion above lends itself to an implementable algorithm:
At stage $1$, the algorithm is initialized with a pair $(\mathbf{v}_0^{-1}, \mathbf{g}^0)$ that prescribes an initial velocity and a force for \eqref{eq:NS_nudge}. Integration of \eqref{eq:NS_nudge} forward-in-time produces $\mathbf{v}^0(t)=\mathbf{v}^0(t;\mathbf{v}_0^{-1},\mathbf{g}^0)$, for all times $t\in I_0=[0,\infty)$. After a transient period that allows the state error, $\mathbf{w}^0=\mathbf{v}^0-\mathbf{u}$ to establish a balance with the model error, $\Delta \mathbf{g}^0=\mathbf{g}^0-\mathbf{f}$, a new estimate of the forcing function modified from \eqref{eq:ansatz} is calculated at time $t_1\gg 0$:
\begin{align}\label{eq:ansatz:mod:stage1}
\mathbf{g}^1
&=\partial_tI_h(\mathbf{u})+I_h\left(\mathbf{v}^0\cdotp\nabla\mathbf{v}^0-\nu\nabla^2(I_h\mathbf{u}+J_h\mathbf{v}^0)\right).
\end{align}
This yields a new approximation, $\mathbf{g}^1$, to the force. Notably, $\mathbf{g}^1=I_h(\mathbf{g}^1)$. This concludes the initial cycle of the algorithm. The process is then repeated: suppose that $\mathbf{g}^\ell$ have been produced in this way at stages $\ell=1,\dots, k$ at times $t_\ell\gg t_{\ell-1}$, respectively, and that $\mathbf{g}^\ell=I_h\mathbf{g}^\ell$. At stage $k$, \eqref{eq:NS_nudge} is re-initialized at time $t=t_k$, with the pair $(\mathbf{v}^{k-1}(t_k),\mathbf{g}^k)$; this produces an approximating velocity, $\mathbf{v}^{k}(t)=\mathbf{v}^k(t;\mathbf{v}^{k-1}(t_k),\mathbf{g}^k)$, over the time interval $I_k=[t_k,\infty)$. After a sufficiently long interval of time for $\mathbf{w}^k=\mathbf{v}^k-\mathbf{u}$ to achieve balance with the model error, $\Delta\mathbf{g}^k=\mathbf{g}^k-\mathbf{f}$, the forcing function estimate is updated according to
\begin{align}\label{eq:ansatz:mod:stagekp1}
&\mathbf{g}^{k+1}=\partial_tI_h(\mathbf{u})+I_h\left(\mathbf{v}^k\cdotp\nabla\mathbf{v}^k-\nu\nabla^2(I_h\mathbf{u}+J_h\mathbf{v}^k)\right),
\end{align}
at time $t_{k+1}\gg t_{k}$ to produce $\mathbf{g}^{k+1}$; again $\mathbf{g}^{k+1}=I_h\mathbf{g}^{k+1}$. In this way a sequence of approximating forces is generated, $\mathbf{g}^1,\mathbf{g}^2,\mathbf{g}^3,\dots$, as observations are assimilated. At this point, we remark that the recent work \cite{martinez2022force} analytically demonstrated convergence of a similar algorithm for inferring $\mathbf{f}$ in \eqref{eq:NS_true}. In contrast, the algorithm there is defined by making use of a spectral filtering method introduced by \cite{CelikOlsonTiti2019}, while the algorithm proposed in the current article is defined by enforcing exponential convergence of the state error but only at the observational scale, an idea that was initially introduced in \cite{pachev2022concurrent} in the context of the one-dimensional Kuramoto-Sivashinsky equation. In essence, the algorithm proposed here is a marriage of those studied in \cite{martinez2022force, pachev2022concurrent}.
\section{Rigorous Convergence Analysis}\label{sec:analysis}
The precise version of \eqref{eq:error:sync} asserts that the stage $k$ state error satisfies
\begin{align}\label{eq:error:sync:true}
\|\mathbf{w}^k(t)\|_2\leq O\left(\frac{\|\Delta\mathbf{g}^{k}\|_2}{\mu}\right),\quad
\|\nabla\mathbf{w}^k(t)\|_2\leq O\left(\frac{\|\Delta\mathbf{g}^{k}\|_2}{\sqrt{\mu\nu}}\right)
\end{align}
for all $t\geq t_{k+1}$, where $t_{k+1}\gg t_k$, provided that $\mu h^2\leq \nu$ and $\mu$ is chosen sufficiently large, depending only on the maximum magnitude of the energy, enstrophy, and palenstrophy of $\mathbf{u}$; a proof of \eqref{eq:error:sync:true} can be found in \cite{martinez2022force}.
Now, from \eqref{eq:NS_true}, \eqref{eq:NS_nudge}, and \eqref{eq:ansatz}, the model error at stage $k+1$ can be expanded as:
\begin{align}\label{eq:error:rep}
\Delta\mathbf{g}^{k+1}
&=\left[I_h\left(\mathbf{w}^{k}\cdotp\nabla\mathbf{w}^{k}\right)+I_h\left(\mathbf{u}\cdotp\nabla\mathbf{w}^{k}\right)+I_h\left(\mathbf{w}^{k}\cdotp\nabla\mathbf{u}\right)\right]_{t=t_{k+1}}.
\end{align}
Note that the fact that $\mathbf{f}$ is confined to the subspace spanned by the observations, that is, $\mathbf{f}=I_h\mathbf{f}$, has been applied. It is then clear from \eqref{eq:error:rep} that $I_h\Delta \mathbf{g}^{k+1}=\mathbf{g}^{k+1}$.
To assess the size of $\Delta\mathbf{g}^{k+1}$, observe that integration by parts and the properties $I_h^2=I_h$, $I_h\mathbf{g}^k=\mathbf{g}^k$, for all $k$, yield
\begin{align}
\langle I_h\left(\mathbf{w}^{k}\cdotp\nabla\mathbf{w}^{k}\right),\Delta\mathbf{g}^{k+1}\rangle&=-\langle \mathbf{w}^{k}\cdotp\nabla \Delta\mathbf{g}^{k+1},\mathbf{w}^{k}\rangle, \notag\\
\langle I_h(\mathbf{u}\cdotp\nabla\mathbf{w}^k),\Delta\mathbf{g}^{k+1}\rangle&=-\langle \mathbf{u}\cdotp\nabla \Delta\mathbf{g}^{k+1},\mathbf{w}^k\rangle,\notag\\
\langle I_h(\mathbf{w}^k\cdotp\nabla\mathbf{u}),\Delta\mathbf{g}^{k+1}\rangle&=-\langle \mathbf{w}^k\cdotp\nabla\Delta\mathbf{g}^{k+1},\mathbf{u}\rangle\notag,
\end{align}
where brackets denote the $L^2$--inner product over $\Omega$. Now, upon taking the $L^2$--inner product of \eqref{eq:error:rep} with $\Delta\mathbf{g}^{k+1}$, integrating by parts, invoking the fact that $\Delta\mathbf{g}^{k+1}$ is solenoidal, then applying the Cauchy-Schwarz inequality, H\"older's inequality (see the Appendix), and \eqref{eq:boundedness}, yields
\begin{align}\notag
\|\Delta\mathbf{g}^{k+1}\|_2^2&\leq \left(\|\mathbf{w}^{k}\|_{4}^2
+2\|\mathbf{u}\|_{\infty}\|\mathbf{w}^{k}\|_2\right)\|\nabla\Delta\mathbf{g}^{k+1}\|_2.
\end{align}
Let $R_k$ denote the supremum in time of $\||\nabla|^k\mathbf{u}(t)\|_2$. By interpolation, the Cauchy-Schwarz inequality, and \eqref{eq:boundedness} it follows that
\begin{align}
\|\Delta\mathbf{g}^{k+1}\|_2\leq \frac{c_0}{h}\left(\|\nabla\mathbf{w}^k\|_2\|\mathbf{w}^k\|_2+\sqrt{R_2R_0}\|\mathbf{w}^k\|_2\right),\notag
\end{align}
for some universal non-dimensional constant $c_0>0$ that depends on $c_1, c_2$ from \eqref{eq:boundedness} and the constants of interpolation; we refer to the Appendix for additional details. Finally, \eqref{eq:error:sync:true} implies
\begin{align}\label{eq:pre:convergence}
\|\Delta\mathbf{g}^{k+1}\|_2\leq c_0\frac{\sqrt{R_2R_0}}{\mu h}\left(\frac{\|\Delta\mathbf{g}^{k}\|_2}{\sqrt{\mu\nu R_2R_0}}+1\right)\|\Delta\mathbf{g}^{k}\|_2,
\end{align}
where $c_0$ now denotes a larger, but still non-dimensional, constant. Conditions ensuring the convergence of $\mathbf{g}^k$ to the true force, $\mathbf{f}$, may then be given by
\begin{align}\label{eq:conditions}
2c_0(R_2R_0)^{1/2}h\leq \mu h^2\leq\nu,\quad \mu\nu \geq \frac{1}{2}\frac{\|\Delta\mathbf{g}^0\|_2}{\sqrt{R_2R_0}}.
\end{align}
Indeed, under \eqref{eq:conditions}, one may then deduce from \eqref{eq:pre:convergence} that
\begin{align}
\|\Delta\mathbf{g}^{k+1}\|_2\leq \frac{1}2\|\Delta\mathbf{g}^{k}\|_2,\quad\text{for all}\ k\geq0.
\end{align}
Observe that \eqref{eq:conditions} constitutes a non-trivial set of pairs $(\mu,h)$ whenever the observational density, $h^{-1}$, is sufficiently large in relation to the viscosity, energy ($R_0$), and palenstrophy ($R_2$), of the flow, and the nudging parameter, $\mu$, is tuned appropriately large. Note that the choice of $\mu$ ultimately depends on the true forcing, $\mathbf{f}$, through the values of $R_0,~R_2$ and the initial error. In practice, this is often approximately known, for instance, in terms of the Reynolds number of the flow; we refer the reader to the Appendix for an additional comment on this point.
\begin{figure}[ht]
\includegraphics[width=.9\linewidth,trim = 0mm 0mm 0mm 0mm, clip]{halfhalf_Centered.png}
\caption{\label{fig:a} Force (bottom left cut-away, left color bar) and initial vorticity (top right cut-away, right color bar).}
\end{figure}
\section{Simulation Results}\label{sec:simulations}
To demonstrate the utility of this algorithm in practice, we computationally recover the forcing function for 2D forced incompressible Navier-Stokes.
\begin{figure}[ht]
\includegraphics[width=.9\linewidth,trim = 0mm 0mm 0mm 0mm, clip]{L2errors_trim.png}
\caption{\label{fig:b} (log-linear plot) $L^2$ errors vs. time. ``DR'' refers to the direct replacement algorithm, while ``EX'' refers to the exact Laplacian algorithm. ``-c'' indicates that the force was updated continuously (i.e., at each time step), while ''-d'' indicates discrete force updates, every $0.25$ time units.}
\end{figure}
\begin{itemize}
\item The simulation was carried out in Matlab R2021a using a pseudo-spectral method with explicit Euler time-stepping, respecting the CFL constraint and 2/3-dealiasing rule. The spatial resolution was $2048^2$ on the domain $[-\pi,\pi)^2$ with time-step
$\Delta t =0.0025$.
The equation was simulated at the stream-function level using the Basdevant formula \cite{basdevant1983technical} to compute the nonlinearity efficiently.
\item The unobserved forcing function $\mathbf{f}$ was determined by randomly (normal, $N(0,1)$, distribution-seeded in Matlab with \texttt{rng(0,\textquotesingle twister\textquotesingle)}) picking amplitudes for wave-modes in Fourier space on an annulus with inner radius 16 and outer radius 64 (see Figure \ref{fig:a} for an illustration of the forcing function). Specifically, the forcing recovery algorithm was tasked to recover 11,672 (real-valued) unknowns of the forcing.
\item The viscosity was $\nu=10^{-4}$, and the Grashof number was $G=2.5\times 10^6$, leading to a solution with an energy spectrum resolved to machine precision before the dealiasing cut-off.
\item Initial data for $\mathbf{u}$ was given by a simulation spun up from zero data and run with the same forcing until energy and enstropy were roughly statistically stabilized; namely, time $t=210$ (see FIG. \ref{fig:a}).
Initial data and forcing for the $\mathbf{v}$ system were identically zero.
\item The interpolation operator, $I_h$, was taken to be a projection onto the ``observed'' Fourier modes of the stream function, that is, those on wave-modes $k$, $0<|k|\leq 64$. In particular, only $\approx0.893\%$ of the unknowns in the solution were observed. The nudging parameter was taken to be $\mu=1.9/\Delta t$.
\item The time-derivative that appears in the ansatz forcing function \eqref{eq:ansatz:mod:stagekp1} was computed via a forward-Euler finite difference approximation of $\partial_t I_h(\mathbf{v})$.
\item The nonlinearity of the nudged equation was computed the same way as in the solution of the `truth' system (using $\mathbf{v}$ instead of $\mathbf{u}$), and the same can be said for the forcing estimate in \eqref{eq:ansatz:mod:stagekp1}.
\end{itemize}
\begin{figure}[ht]
\includegraphics[width=.9\linewidth,trim = 0mm 0mm 0mm 0mm, clip]{psi_error_spec_DR_trim.png}
\caption{\label{fig:psi_err_spec.png} (log-log plot) Spectrum of the error in the stream function at times $t=0.0,0.5\ldots,40.0$ using direct replacement scheme. Colors move from blue ($t=0.0$) to red ($t=40.0)$. Vertical dashed line is the observational wave-number cut-off at $|k|=64$. Horizontal solid line is machine precision ($\epsilon\approx2.2\times10^{-16}$).}
\end{figure}
The forcing estimation was implemented in two different ways:
\begin{enumerate}
\item As outlined here, a direct replacement strategy was used for the Laplacian term, i.e. \eqref{eq:ansatz:mod:stagekp1} was used. This is the method justified by the rigorous analysis.
\item Rather than replace the Laplacian term with a combination of the low modes from the observed truth $I_h\mathbf{u}$ and the high modes of the nudged system $J_h\mathbf{v}$, we also simply used the low modes of the nudged system, i.e. replacing \eqref{eq:ansatz:mod:stagekp1} with
\begin{equation}\label{eq:ansatz:exact}
\mathbf{g}^{k+1} = \partial_t I_h(\mathbf{u}) + I_h(\mathbf{v}^k\cdot \nabla \mathbf{v}^k - \nu \nabla^2 I_h \mathbf{v}^k).
\end{equation}
\end{enumerate}
Although only the first case is rigorously justified, we found no qualitative computational difference between these two schemes, i.e. convergence of both the state and approximated forcing function were nearly identical between the two approaches. This is demonstrated in FIG \ref{fig:b}. In this Figure, the direct replacement (DR) simulations are those where the forcing update is given by \eqref{eq:ansatz:mod:stagekp1} whereas the exact Laplacian refers to \eqref{eq:ansatz:exact}. Note that there is very minimal difference between the convergence rates in either case.
\begin{figure}[ht]
\includegraphics[width=.9\linewidth,trim = 0mm 0mm 0mm 0mm, clip]{force_error_spec_DR_trim.png}
\caption{\label{fig:force_err_spec.png} (log-log plot) Spectrum of the error in the stream function forcing at times $t=0.0,0.5\ldots,40.0$ using direct replacement scheme. Colors move from blue ($t=0.0$) to red ($t=40.0)$. Vertical dashed line is the observational wave-number cut-off at $|k|=64$. Horizontal solid line is machine precision ($\epsilon\approx2.2\times10^{-16}$). The difference in the horizontal scale from the previous Figure is due to the lack of information in the higher modes for the true forcing function $\mathbf{f}$.}
\end{figure}
In addition to variations in the forcing update, we also updated the forcing continuously (or at every discrete time step in the simulation) or at uniformly-spaced time intervals. In all cases, the force and the solution converged exponentially fast with very little distinguishing features amongst the different simulations, as shown in FIG. \ref{fig:b}. We anticipate this is because the relaxation time scale required between forcing updates is akin to $1/\mu$ which for the selected values of $\mu$ is on the same order as the actual time step of the numerical algorithm so that `continuous' updating of the forcing is on the requisite time scale anyway. We also observed the time-evolution of the convergence of the energy spectrum of the errors of the stream function and the forcing function (FIG. \ref{fig:psi_err_spec.png} and FIG. \ref{fig:force_err_spec.png}). These Figures demonstrate that the state (streamfunction in this case) converges on the observable scales (below the cutoff of $|k| = 64$) almost instantaneously, but the unobserved scales of the state, and the full forcing function (which is everything in the observed range) converge much slower.
\section{Conclusions}\label{sec:conclusions}
In conclusion, we have developed a novel algorithm for recovering an unknown large scale forcing function in the 2D Navier-Stokes equations that can be implemented on the fly. The algorithm is both rigorously and numerically justified, is simple to implement, and does not require an ensemble of simulations to generate the desired forcing function.
We emphasize that the rigorous result provided here is restricted to two dimensions only due to the current lack of a global-in-time bound on solutions for the gradient of the velocity field for the three dimensional Navier-Stokes equations. Under suitable assumptions on the regularity of solutions to the 3D equations, we are confident that the same algorithm presented here will recover the forcing in that setting. Hence, we do not anticipate that the recovery of the forcing critically relies on the inverse cascade in 2D turbulence which tends to aggregate large scale structures \cite{boffetta2012two}. Further investigations (both numerical and analytical) are required to confirm these hypotheses however.
\begin{acknowledgments}
We wish to acknowledge the ADAPT group and J. Murri and B. Pachev in particular who invigorated discussions surrounding these results developed here.
AF was partially supported by NSF grant DMS-2206493. AL was supported by NSF grants CMMI-1953346 and DMS-2206741. VM was partially supported by NSF grants DMS-2206491 and DMS-2213363. JPW was partially supported by NSF grant DMS-2206762, as well as the Simons Foundation travel grant under 586788.
\end{acknowledgments}
\section{Appendix}\label{sec:appendix}
We provide some additional supporting details for the convergence analysis performed after \eqref{eq:error:rep}.
In particular we utilize several inequalities in the rigorous analysis which are described below. First we note that we will make use of the $L^p$ norm on functions defined for functions $\phi:\Omega \rightarrow \mathbb{R}$ on the domain $\Omega = [0,L]^2$ as
\begin{equation*}
\|\phi\|_p = \left(\int_\Omega |\phi(\mathbf{x})|^p d\mathbf{x}\right)^{1/p},
\end{equation*}
with $p=\infty$ corresponding to the supremum norm over the entire domain.
This allows us to describe the following inequalities defined for all functions $\phi,\psi,\chi:\Omega \rightarrow\mathbb{R}$ (extension to vector-valued functions is immediate):
\begin{itemize}
\item Cauchy-Schwarz inequality:
\begin{equation}
\int_\Omega |\phi(\mathbf{x})\psi(\mathbf{x})| d\mathbf{x} \leq \|\phi\|_2\|\psi\|_2.
\end{equation}
\item Agmon interpolation inequality:
\begin{equation}
\|\phi\|_\infty \leq c \|\nabla \phi\|_2^{1/2}\|\nabla^2\phi\|_2^{1/2},
\end{equation}
where $c$ is a universal constant.
\item Ladyzhenskaya interpolation inequality:
\begin{equation}
\|\phi\|_4 \leq c \|\nabla \phi\|_2^{1/2}\|\phi\|_2^{1/2},
\end{equation}
where once again $c$ is a universal constant.
\item Generalized H\"older inequality:
\begin{equation}
\|\phi \psi \chi\|_r \leq \|\phi\|_p \|\psi\|_q \|\chi\|_s,
\end{equation}
where $\frac{1}{s}+\frac{1}{q}+\frac{1}{p} = \frac{1}{r}$.
\item Gronwall inequality:
Suppose that $f(t),g(t):[0,T]\rightarrow\mathbb{R}$ are continuously differentiable, and satisfy
\begin{equation}
\frac{df}{dt} \leq \alpha f(t) + g(t),
\end{equation}
for some constant $\alpha$. Then
\begin{equation}
f(t) \leq f(0) e^{\alpha t} + \int_0^t e^{\alpha (t-s)}g(s)ds.
\end{equation}
\end{itemize}
The rigorous analysis also makes use of a priori bounds on the velocity field corresponding to \eqref{eq:NS_true}. To state these bounds, suppose $\mathbf{f},\nabla\mathbf{f}\in L^2(\Omega)$ such that $\mathbf{f}$ is divergence-free. Let $\kappa_0$ and $G$ be defined by \eqref{Grashof} and denote a \textit{shape factor} of the force by
\[
\sigma_1:=\frac{\kappa_0\|\nabla\mathbf{f}\|_{2}}{\|\mathbf{f}\|_{2}}.
\]
Strong solutions of \eqref{eq:NS_true} satisfy
\begin{align}\label{est:apriori}
\begin{split}
\|\mathbf{u}(t)\|_{2}^2&\leq \|\mathbf{u}_0\|_{2}^2e^{-\kappa_0^2\nu t}+\kappa_0^2\nu^2G^2 < R_0\\%(1-e^{-\kappa_0^2\nu t})\\
\|\nabla\mathbf{u}(t)\|_{2}^2&\leq \|\nabla\mathbf{u}_0\|_{2}^2e^{-\kappa_0^2\nu t}+\nu^2G^2 < R_1\\
\|\Delta\mathbf{u}(t)\|_{2}^2&\leq \|\Delta\mathbf{u}_0\|_{2}^2e^{-\kappa_0^2\nu t}+c\kappa_0\nu^2 (\sigma_1+G)^2G^2 < R_2
\end{split}
\end{align}
for all $t\geq0$. The first two bounds are classical and may be found in \cite{ConstantinFoias1988, Temam1997, FoiasManleyRosaTemamBook2001}, whereas the third can be found in, for instance, \cite{martinez2022force}; we have taken the $R_j$ to be strict upper bounds on these estimates.
Note that when $t$ is sufficiently large, one may assume that $R_0, R_1, R_2$ are independent of the initial velocity; this is a safe assumption in the analysis performed above since we must always wait long enough before the first update to ensure that $\mathbf{u}(t), \Delta\mathbf{u}(t)$ henceforth remain bounded by $R_0$ and $R_2$, respectively.
Lastly, the crucial ingredient used to close all of the estimates and arrive at \eqref{eq:pre:convergence} is provided by the state error estimates \eqref{eq:error:sync:true}. The bound stated for $\nabla\mathbf{w}^k$ was originally proven in \cite{martinez2022force}. We state its precise form here: Let $\mathbf{u}_0,\mathbf{v}_0$ be divergence-free velocity fields, which are mean-free over $\Omega$, and whose derivatives up to second-order are square-integrable over $\Omega$. Then there exist universal constants $\overline{c}_0, \underline{c}_0\geq1$ such that if $\mu, N$ satisfy
\begin{align
\underline{c}_0\nu\left(\sigma_1+G\right)G^2\leq {\mu h^2}\leq \overline{c}_0,\notag
\end{align}
then for each $k\geq1$, there exists $t_k>t_{k-1}>0$ (where $t_0:=0$), such that
\begin{align
\|\nabla\mathbf{w}^k(t)\|_{2}
&\leq \left(\frac{2C_1\kappa_0^2\nu}{\mu}\right)^{1/2}\frac{\|\Delta\mathbf{g}^{k}(t_k)\|_{2}}{\kappa_0\nu},\notag
\end{align}
for all $t\in [t_k,\infty)$, for some universal constant $C_1\geq1$ independent of $k$.
On the other hand, the bound for $\mathbf{w}^k$ was not required in \cite{martinez2022force}. For the sake of completeness, we provide the details of this estimate here. Observe that the system governing the evolution of $\mathbf{w}^k=\mathbf{v}^k-\mathbf{u}$ is given by
\begin{align}
\partial_t\mathbf{w}^k-&\nu\nabla^2\mathbf{w}^k+\mathbf{w}^k\cdotp\nabla\mathbf{w}^k+\mathbf{w}^k\cdotp\nabla\mathbf{u}+\mathbf{u}\cdotp\nabla\mathbf{w}^k=-\nabla r^k+\Delta\mathbf{g}^k-\mu I_h\mathbf{w}^k,\quad\nabla\cdotp\mathbf{w}^k=0\notag
\end{align}
where $r^k=q^k-p$, where $q^k, p$ denote the pressures corresponding to $\mathbf{v}^k,\mathbf{u}$, respectively. Then the corresponding energy balance is given by
\begin{align}
\frac{1}2&\frac{d}{dt}\|\mathbf{w}^k\|_{2}^2+\nu\|\nabla\mathbf{w}^k\|_{2}^2+\mu\|\mathbf{w}^k\|_{2}^2=-\langle \mathbf{w}^k\cdotp\nabla\mathbf{u},\mathbf{w}^k\rangle+\langle\Delta\mathbf{g}^k,\mathbf{w}^k\rangle+\mu\|J_h\mathbf{w}^k\|_{2}^2\notag.
\end{align}
The first of these terms on the right-hand side can be estimated with H\"older's inequality, Ladyzhenskaya's inequality, the Cauchy-Schwarz inequality, and \eqref{est:apriori} to obtain
\begin{align}
|\langle \mathbf{w}^k\cdotp\nabla\mathbf{u},\mathbf{w}^k\rangle|\leq\|\nabla\mathbf{u}\|_2\|\mathbf{w}^k\|_{4}^2\leq cR_1\|\nabla\mathbf{w}^k\|_2\|\mathbf{w}^k\|_2\leq c\frac{R_1^2}{\mu}\|\nabla\mathbf{w}^k\|_2^2+\frac{\mu}{4}\|\mathbf{w}^k\|_2^2\notag.
\end{align}
The second term can be estimated with the Cauchy-Schwarz inequality to obtain
\begin{align}
|\langle\Delta\mathbf{g}^k,\mathbf{w}^k\rangle|\leq\|\Delta\mathbf{g}^k\|_2\|\mathbf{w}^k\|_2\leq\frac{1}{\mu}\|\Delta\mathbf{g}^k\|_2^2+\frac{\mu}4\|\mathbf{w}^k\|_2^2.
\end{align}
Lastly, the third term can be estimated using the Cauchy-Schwarz inequality and \eqref{eq:boundedness} to obtain
\begin{align}
\mu\|J_h\mathbf{w}^k\|_2^2\leq C_1^2\mu h^2\|\nabla\mathbf{w}^k\|_2^2\notag.
\end{align}
Now let us assume that $\mu,h$ satisfy
\[
2C_1^2\mu h^2\leq \nu,\quad c\mu\nu\geq R_1^2.
\]
We may then combine the above estimates to arrive at
\begin{align}
\frac{d}{dt}\|\mathbf{w}^k\|_2^2+\mu\|\mathbf{w}^k\|_2^2\leq \frac{1}{\mu}\|\Delta\mathbf{g}^k\|_2^2,\notag
\end{align}
from which \eqref{eq:error:sync:true} follows upon taking $t$ sufficiently large to evaluate $\mathbf{g}^k$, and finally an application of Gronwall's inequality.
\nocite{*}
|
2,869,038,154,985 | arxiv |
\subsection*{Results}
\begin{figure}[t]
\centering
\labelphantom{fig:charger_scale_stations}
\labelphantom{fig:charger_scale_power}
\includegraphics[width=\linewidth]{img/charger_scale.pdf}
\caption{\subref*{fig:charger_scale_stations})~Power Law Scaling for Gas Station and Electric Vehicle Supply Equipment for United State Counties (n = 3143), showing novel super-linear behavior for EVSE stations and expected sub-linear behavior for gas stations. Super-linear behavior suggests EVSE infrastructure has been consolidated in larger population centers. 95\% confidence intervals for the scaling exponent \(\beta\) are shown in the legend. \subref*{fig:charger_scale_power})~Comparison of the power delivery of existing gas stations (Assuming 12 pumps per station and improved efficiency of EVs) to existing EVSE infrastructure. While the EVSE infrastructure of some counties has reached parity, in quantity with gasoline stations, no counties have reached parity in terms of power delivery.\label{fig:charger_scale}
}
\end{figure}
We curated a dataset of county-level EVSE charger, gasoline station counts, and the corresponding county population.
We fit several Generalized Linear Models (GLM) to the data using maximum likelihood estimation (MLE) to predict the number of EVSE (\(Y_{EVSE}\)) and gasoline (\(Y_{GS}\)) stations for all counties (\(n = 3143\)) in the United States from their population (Figure~\ref{fig:charger_scale_stations}).
Using a likelihood ratio test, we compared all models to their null counterparts and have tabulated the test statistic \(\lambda_{LR}\) for each model in Table~\ref{tab:model_zoo}.
All models were found to be highly significant with a p-value of less than \(10^{-99}\), as such, we have only reported the test statistic.
For the power-law scaling model, we initially fit models assuming a Poisson distribution, \(Pois(\mu)\), but found the assumption that the variance is equal to the mean to be a poor fit for our data.
We relaxed this by fitting a Negative Binomial (NB) distribution, \(NB(r, \mu)\) parameterized by a dispersion parameter \(r\) and its expected value \(\mu \).
Using this model, our fitted values for \(\beta \), with a 95\% confidence interval, was \(\beta = 1.17 \pm 0.051\) for the EVSE stations and \(\beta = 0.77 \pm 0.0092\) for the gasoline stations.
We do see a reduction in McFadden's \(R^2_{McF}\) for the NB models~(Table~\ref{tab:model_zoo}) driven by a higher likelihood for their null counterpart, and not a reduction in the model's fit.
We performed a Wald test to determine if \(\beta \neq 1\) was statistically significant for the NB power law models;
and found \(\beta \) to be significantly different from 1 for both the EVSE (\(\mbox{SE} = 0.026\mbox{, W} = 6.4\mbox{, p} < 10^{-9}\)) and gas stations (\(\mbox{SE} = 0.0047\mbox{, W} = -50\mbox{, p} < 10^{-99}\)).
As all models achieved similar Root-Mean-Squared-Deviations (RMSD) and are statically significant (\(\lambda_{LR}\)), we used the Bayesian Information Criterion (BIC) to compare models.
Using the criteria that \(\Delta BIC > 6\) indicates a significant difference in model performance~\cite{leitaoThisScalingNonlinear2016}, we found strong evidence for the NB power scaling models over both linear and quadratic models.
An in-depth discussion of the above statistical tests can be found in the SI appendix.
We performed an ordinary least squares (OLS) regression on the log-log transformed data in line with prior scaling works~\cite{bettencourtOriginsScalingCities2013,bettencourtGrowthInnovationScaling2007,taylor2019scalability,kempes2019scales,leitaoThisScalingNonlinear2016,marquetScalingPowerlawsEcological2005}.
However, as this method does not support zero-count data, it excludes a significant portion of our dataset (43.8\% for EVSE and 1\% for gasoline stations).
We did not find the OLS models compelling and excluded the fits from this analysis; see the SI Appendix for additional details.
\subsubsection*{Gas Station to EVSE Scaling}
Driven by an analogous utility, as EV adoption increases, EVSE infrastructure should tend towards the same coarse-grained regularities as gasoline stations.
With the number of stations in an area to be proportional to the vehicle miles travel \(M\), average efficiency \(E\), and power delivery of a station \(P\): \(Y \propto \eta M / P\).
Assuming no change in consumer driving behavior, the number of EVSE stations to replace a single gasoline station is:
\begin{equation}\label{eq:ev_power_parity}
\frac{Y_{EVSE}}{Y_{GS}} = \frac{\eta_{EV} P_{GS}}{\eta_{ICE} P_{EVSE}}
\end{equation}
Using the regulated gasoline flow rate of 10 \si{gpm} for a consumer pump in the United States~(40 CFR \S 1090.1550) and the EPA's 33.705\si{kWh/gallon} of gasoline equivalency (40 CFR \S 600.002), the max power delivery of a consumer gasoline pump is \(P_{EVSE} = 20.2 \si{MW}\).
We assume \( \eta_{EV} / \eta_{ICE} \approx 3 \), or that on average EV's consume 1/3 the energy per mile traveled compared to ICE vehicles.
Assuming \(P_{EVSE} = 400\si{kW}\) or Extreme Fast Charging~\cite{ahmedEnablingFastCharging2017}, gives \( Y_{EVSE} / Y_{GS} = 17 \), or to replace one gasoline pump, 17 ports are required to reach power parity.
Reducing \(P_{EVSE}\) to 11.5kW, or the upper end of available home chargers, we find \( Y_{EVSE} / Y_{GS} \approx 586 \).
From this, holding the average number of pumps/ports constant, the number of EVSE stations needed in an area is given by Eq.~\ref{eq:ev_charger_prediction}.
\begin{equation}\label{eq:ev_charger_prediction}
\hat{Y}_{EVSE} = 3 \frac{P_{GS}}{P_{EV}} \times Y_{0,GS} N^{\beta_{GS}}
\end{equation}
\begin{table}[h]
\caption{Model Statistics for Fitted Generalized Linear Models\hspace*{\fill}}\label{tab:model_zoo}
\centering
\input{img/model_zoo.tex}
\end{table}
\subsubsection*{Mean-field Model Relating Charging Power and Vehicle Speed}
In this section, we develop a mean-field model to estimate the maximum average speed of an EV for a given charger power.
We assume that the EV charges at \(P_{EVSE}\) for \(\alpha \) percent of the time, then drives at a speed of \(v\) for the remaining \(1 - \alpha \) percent of the time.
We further make the simplifying assumption that aerodynamic forces dominate the overall power draw of the EV.\@
For a continuous drive cycle, the energy delivered during charging must match the energy demand of the drive cycle:
\begin{equation}
P_D (1 - \alpha) = \alpha P_{EVSE} \to
\frac{1}{2} \rho C_d A v^3 = \frac{\alpha}{1 - \alpha} P_{EVSE}
\end{equation}
From this, we can relate the power of the charger \(P_{EVSE}\) to the vehicle's speed while driving \(v\):
\begin{equation}\label{eq:home_charging_speed}
v = \sqrt[3]{ \frac{\alpha}{1-\alpha} \frac{2}{\rho C_d A} P_{EVSE} }
\end{equation}
As the vehicle is stationary while charging, the average vehicle speed is \(\bar{v} = v (1-\alpha) \propto \alpha^{1/3} {\left( 1-\alpha \right)}^{2/3} \).
The maximum average speed occurs when the vehicle spends \(\alpha = 1/3\) of its time charging at 2x the power it consumes while driving~(Figure~\ref{fig:home_chargers_avg_speed}).
For our analysis we assume \( \rho = 1.225 \si{kg/m^3}\) and \(C_d A = 0.75 \si{m^2} \); or a boxy sedan at International Standard Metric Conditions.
In the case of \(P_{EVSE} = 1.92 \si{kW}\), this translates to a driving speed of \(v = 28.6 \si{mph}\), and an average speed of \( \bar{v} = 19 \si{mph} \), speeds well suited for residential driving.
Higher driving speeds are possible with 1.92\si{kW}, but only at the expense of increased charging times, which may be tenable for sort trips if charging can occur before the next trip.
However, operating above \(\alpha > 1/3\) will increase the overall trip duration for long-range trips.
At the other extreme, 400 \si{kW} delivers a max driving speed (\textasciitilde170mph) far beyond highway speed limits.
Instead, the higher charging rates enable shorter charging times for a given driving speed (Figure~\ref{fig:home_chargers_driving_speed}).
Our assumption that aerodynamic forces dominate the overall power consumption breaks down at low speeds.
More complex models are possible\cite{guttenbergINCEPTSSoftwareHighfidelity2021,weiPersonalVehicleElectrification2021}, but do not provide a closed-form solution for velocity as a function of charger power.
Additionally, as a lower bound on the vehicle's power consumption, Eq.~\ref{eq:home_charging_speed} provides an upper bound on the vehicle's speed for a given charger power.
Thus the qualitative conclusion that charger power limits the types of trips that are feasible remains valid.
\subsubsection*{Home Charging}
At present, consumer-grade chargers range from 1.92kW (120V@12A) to 11.5kW (240V@48A), significantly slower than commercial or a ``gasoline-equivalent'' charger.
As shown in Figure~\ref{fig:home_chargers_avg_speed}, home chargers are insufficient for a continuous operating cycle at highway speeds.
The mean-field analysis provides a framework to understand the scaling relationship between EVSE power and vehicle speed.
A more detailed analysis is needed to account for real-world factors such as mixed power level charging stations and idle time for the vehicle.
Home charging may be sufficient for many consumers' local commutes~\cite{weiPersonalVehicleElectrification2021}; however, it will be insufficient for predominantly long-distance highway travel.
\begin{figure}[t]
\centering
\labelphantom{fig:home_chargers_avg_speed}
\labelphantom{fig:home_chargers_driving_speed}
\includegraphics[width=\linewidth]{img/home_chargers.pdf}
\caption{\subref*{fig:home_chargers_avg_speed})~Average vehicle speed vs.~time charging for varying charger power outputs. The solid black lines of constant driving speed (\(v\)) reflect the impact of increasing charge time, when the vehicle is stationary, on the average speed (\(\bar{v}\)). The maximum \(\bar{v}\) for a given charger power occurs at \(\alpha = 1/3\), indicated by the dashed vertical bar. \subref*{fig:home_chargers_driving_speed})~Trade-off between driving speed \(v\) and charger power for various charging times, with vertical lines at notable charger power levels. For example, a vehicle driving at 30mph would spend \textasciitilde10\% of it's time charging at 11.5kW vs.\ \textasciitilde0.2\% at 400kW.\label{fig:home_chargers}}
\end{figure}
\subsection*{EVSE Infrastructure Gap}
Using Eq.~\ref{eq:ev_charger_prediction}, we estimated the EVSE Station Gap: \(\Delta Y_{EVSE} = \hat{Y}_{EVSE} - Y_{EVSE}\), or the number of additional stations needed for each county, assuming all current and future chargers are 400kW~(Figure~\ref{fig:us_charger_gap}).
The gap between the existing EVSE infrastructure and the existing ICE infrastructure is large (Figure~\ref{fig:charger_scale_power}).
Our model assumes a fixed average pumps/ports ratio, ignores consumer behavior regarding longer charging times, the role of home chargers, and variations in consumer behavior over time.
Our treatment of existing gasoline stations neglects factors, such as gasoline purchases outside of on-highway transportation, that may inflate the number of stations in a region.
Finally, as a coarse-grained model, scaling laws cannot provide precise EVSE placement or sizing information.
Further refinement is left to optimization-based methods~\cite{shareefReviewStageoftheartCharging2016} or complex systems analysis~\cite{guttenbergINCEPTSSoftwareHighfidelity2021}.
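To make the pipeline concrete, the sketch below runs the whole chain on synthetic data in Python: fit a count GLM of stations against log-population, then subtract observed from predicted counts. The Poisson/log-link choice and all coefficients here are illustrative stand-ins, not the fitted model behind Eq.~\ref{eq:ev_charger_prediction} (the original fits used Julia's GLM.jl; see Materials and Methods and the SI Appendix).
\begin{verbatim}
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
pop = rng.uniform(1e4, 5e6, size=500)                  # synthetic county populations
evse = rng.poisson(np.exp(-6.0 + 0.9 * np.log(pop)))   # synthetic station counts

# Count GLM with log link: the slope on log-population plays the role of
# the scaling exponent.
X = sm.add_constant(np.log(pop))
fit = sm.GLM(evse, X, family=sm.families.Poisson()).fit(maxiter=60)
print(fit.params)                                      # intercept, exponent (~0.9)

evse_hat = fit.predict(X)                              # predicted stations per county
station_gap = evse_hat - evse                          # Delta Y_EVSE
print(int((station_gap > 0).sum()), "counties below the predicted count")
\end{verbatim}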
\subsection*{Temporal Evolution of Scaling Relationships}
Beyond the planning and policy implications of the scaling relationships presented here, these findings also provide new insights into scaling phenomena in general.
The central question in scaling theory is how to connect scaling exponents with fundamental mechanisms.
For many biological systems and human infrastructure, the optimal solution to dominant physical constraints directs the scaling exponent~\cite{kempes2019scales,bettencourtOriginsScalingCities2013,westScaleUniversalLaws2017,marquetScalingPowerlawsEcological2005}.
At the same time, recent studies show that human institutions can adjust their exponent values based on their distinct goals and missions~\cite{taylor2019scalability}.
However, in biology, we have not observed the evolution of scaling exponent values in time towards the equilibrium optimum.
Such temporal changes would have occurred during much deeper histories than we can observe.
EVSE infrastructure is an example of an out-of-equilibrium scaling relationship, where predicting the equilibrium scaling exponent provides unique planning forecasts.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{img/us_charger_gap.pdf}
\caption{The EVSE Station Gap between the number of existing EVSE stations and the number predicted with Eq.~\ref{eq:ev_charger_prediction}, assuming all current and future chargers are capable of 400kW.
At present, no counties have sufficient EVSE stations to meet power parity, even when assuming all existing stations have been upgraded to 400 kW.
}\label{fig:us_charger_gap}
\end{figure}
\matmethods{
County-level population estimates were obtained from the United States Census Bureau's Population Estimates Program using the 2020 Vintage estimates of the 2010 Census; at the time of writing, this was the most up-to-date estimate available~\cite{CountyPopulationTotalsC2010V2020}.
We used the Fourth Quarter 2020 counts for ``Gasoline Stations'' from the United States Bureau of Labor Statistics Quarterly Census of Employment and Wages Program~\cite{QuarterlyCensusEmployment2021}.
EVSE charger locations, and other metadata, were obtained from the National Renewable Energy Laboratory's Alternative Fuel Stations Application Programming Interface~\cite{nrelAlternativeFuelStations2021} and then geocoded to counties using shapefiles obtained from the United States Census~\cite{census2020TIGERLine2021}.
We then fit various GLMs to the data to predict station counts, EVSE and Gasoline, for each county using maximum likelihood estimation as implemented in the Julia package GLM.jl.
Due to difficulty fitting some parameters, we increased the maximum iterations from 30 to 60 but otherwise used the default settings.
For an extended discussion of model fitting, see the GLM section of the SI Appendix.
Total EVSE Power for each station was computed by multiplying the counts of Level 1 (1.4\si{kW}), Level 2 (7.2\si{kW}), and DC Fast Chargers (50\si{kW}) by their respective power outputs shown in parentheses~\cite{ahmedEnablingFastCharging2017}.
Individual EVSE station power estimates were summed to produce the county-level EVSE power estimates shown in Figure~\ref{fig:charger_scale_power}.
}
\showmatmethods{}
\acknow{
The authors from CMU acknowledge the support from Mobility21, A United States Department of Transportation National University Transportation Center.
CPK thanks CAF Canada and Toby Shannan for generously supporting this work.
}
\showacknow{}
\section{Introduction and notations}
\subsection{Introduction}
The $\zeta$-function, which is usually introduced via one of the
following series,
\be\label{kj023dndndr3}
\zeta(s)=
\begin{cases}
\displaystyle\sum\limits_{n=1}^\infty \frac{1}{\,n^{s}}\,,\qquad & \Re{s}>1 \\[6mm]
\displaystyle\frac{1}{1-2^{1-s}}\sum\limits_{n=1}^\infty \frac{(-1)^{n-1}}{\,n^{s}}\,,\qquad & \Re{s}>0 \,,\quad s\neq1
\end{cases}
\ee
is of fundamental and long-standing importance in modern analysis, number theory, theory of special
functions and in a variety of other fields. It is well known that $\zeta(s)$ is meromorphic on the entire complex
$s$-plane and that it has one simple pole at $s=1$ with residue 1.
Its expansion in the Laurent series in a neighbourhood of $s=1$ is
usually written in the following form
\be\label{dhd73vj6s1}
\zeta(s)\,=\,\frac{1}{\,s-1\,} + \sum_{m=0}^\infty \frac{(-1)^m (s-1)^m}{m!} \gamma_m\,,
\qquad\qquad \qquad s\neq1.
\ee
where the coefficients $\gamma_m$, appearing in the regular part of
expansion {\eqref{dhd73vj6s1}}, are called
\emph{generalized Euler's constants} or \emph{Stieltjes constants},
both names being in use.\footnote{The definition
of Stieltjes constants accordingly to formula {\eqref{dhd73vj6s1}} is
due to Godfrey H.~Hardy.
Definitions, introduced by Thomas Stieltjes and Charles
Hermite between 1882--1884, did not contain the coefficients $(-1)^m$ and $m!$.
In fact, the use of these factors is not well justified; notwithstanding,
Hardy's form {\eqref{dhd73vj6s1}} is largely accepted and is more
frequently encountered in modern literature.
For more details, see \cite[vol.~I, letter 71 and following]{stieltjes_01},
\cite[p.~562]{lagarias_01}, \cite[pp.~538--539]{iaroslav_07}.}\up{,}\footnote{Some authors use the name \emph{generalized Euler's constants}
for other constants,
which were conceptually introduced and studied by Briggs in 1961 \cite{briggs_01} and Lehmer in 1975 \cite{lehmer_01}. They were subsequently
rediscovered in various (usually slightly different) forms by several
authors, see e.g.~\cite{tasaka_01,pilehrood_01,xia_01}.
Further generalization of both, generalized Euler's constants defined
accordingly to {\eqref{dhd73vj6s1}} and generalized Euler's constants introduced
by Briggs and Lehmer, was done by Dilcher in \cite{dilcher_01}.}
Series {\eqref{dhd73vj6s1}} is the standard definition for $\gamma_m$.
Alternatively, these constants may be also defined via the following limit
\begin{eqnarray}
\label{k98y9g87fcfcf}
\gamma_m = \lim_{n\to\infty} \left\{
\sum_{k=1}^n \frac{\ln^m k}{k} -
\frac{\ln^{m+1} n}{m+1} \right\} , \quad m=0, 1, 2,\ldots
\end{eqnarray}
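Definition {\eqref{k98y9g87fcfcf}} is convenient for a quick numerical check. The short Python sketch below (a verification aid only, using the \texttt{mpmath} library for the reference value) illustrates the slow convergence of the limit for $m=1$.
\begin{verbatim}
import math
from mpmath import stieltjes   # reference values for the Stieltjes constants

def gamma_m_partial(m, n):
    # Partial sum of the limit definition of the m-th Stieltjes constant
    s = sum(math.log(k) ** m / k for k in range(1, n + 1))
    return s - math.log(n) ** (m + 1) / (m + 1)

for n in (10**4, 10**6):
    print(gamma_m_partial(1, n))   # slowly approaches gamma_1
print(float(stieltjes(1)))         # -0.07281584548367672...
\end{verbatim}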
The equivalence between definitions {\eqref{dhd73vj6s1}} and {\eqref{k98y9g87fcfcf}}
was demonstrated by various authors, including Adolf Pilz \cite{gram_01}, Thomas Stieltjes,
Charles Hermite \cite[vol.~I, letter 71 and following]{stieltjes_01}, Johan Jensen \cite{jensen_02,jensen_03},
J\'er\^ome Franel \cite{franel_01},
J{\o}rgen P.~Gram \cite{gram_01},
Godfrey H.~Hardy \cite{hardy_03}, Srinivasa Ramanujan
\cite{ramanujan_01}, William E.~Briggs, S.~Chowla
\cite{briggs_02} and many others, see e.g.~\cite{berndt_02,todd_01,israilov_01,zhang_01}.
It is well known that $\gamma_0=\gamma$, Euler's constant, see
e.g.~\cite{zhang_01}, \cite[Eq.~(14)]{iaroslav_07}.
Higher generalized Euler's constants are not known to be related to the
``standard'' mathematical constants,
nor to the ``classic'' functions of analysis.
In our recent work \cite{iaroslav_08}, we obtained two interesting
series representations for the logarithm of the $\Gamma$-function
containing Stirling numbers of the first kind $S_1(n,k)$
\begin{eqnarray}
\displaystyle\notag
\ln\Gamma(z)\, =&& \left(z-\frac{1}{\,2\,}\right)\!\ln z -z +\frac{1}{\,2\,}\ln2\pi + \\[1mm]
\displaystyle&&\displaystyle\label{kj20ejcn2dnd}
\qquad\qquad
+ \frac{1}{\,\pi\,}\!
\sum_{n=1}^\infty\!\frac{ 1}{\,n\cdot n!\,}\sum_{l=0}^{\lfloor\!\frac{1}{2}n\!\rfloor} (-1)^{l}
\frac{\, (2l)!\cdot|S_1(n,2l+1)|\,}{(2\pi z)^{2l+1}} \\[5mm]
\displaystyle
\ln\Gamma(z) = &&\displaystyle\left(z-\frac{1}{\,2\,}\right)\!\ln\!{\left(z-\frac{1}{\,2\,}\right)}
- z +\frac{1}{\,2\,}+\frac{1}{\,2\,}\ln2\pi - \notag\\[1mm]
\displaystyle&&\displaystyle\label{lj023od230dend}
\qquad\qquad
- \frac{1}{\,\pi\,}\!
\sum_{n=1}^\infty\!\frac{ 1}{\,n\cdot n!\,}\sum_{l=0}^{\lfloor\!\frac{1}{2}n\!\rfloor} (-1)^{l}
\frac{\, (2l)!\cdot(2^{2l+1}-1)\cdot|S_1(n,2l+1)|\,}{(4\pi)^{2l+1}\cdot\big(z-\frac{1}{2}\big)^{2l+1}}
\end{eqnarray}
as well as their analogs for the polygamma functions $\Psi_k(z)$.\footnote{Both series converge in a part of the
right half--plane \cite[Fig.~2]{iaroslav_08} at the
same rate as $\sum \big(n\ln^{m}\!
n\big)^{-2}\,$, where $m=1$ for $\ln\Gamma(z)$
and $\Psi(z)$,
$m=2$ for $\Psi_1(z)$ and $\Psi_2(z)$,
$m=3$ for $\Psi_3(z)$ and $\Psi_4(z)$, \emph{etc.} \label{gtf1a}}
The present paper is a continuation of this previous work, in which we show that
the use of a similar technique permits the derivation of two new series
expansions for
generalized Euler's constants $\gamma_m$, both series involving
Stirling numbers of the first kind.
The first series is convergent and contains polynomials in $\pi^{-2}$
with rational coefficients (the latter involving Stirling numbers of the
first kind). From this series,
by a formal procedure, we deduce the second expansion, which is
semi-convergent and contains rational terms only.
This expansion is particularly simple and involves only Bernoulli
numbers and a non-linear
combination of generalized harmonic numbers.
A convergence analysis of the discovered series shows that the
former converges slightly better than Euler's series
$\sum n^{-2}$, roughly at the same rate as
\be
\nonumber
\sum_{n=3}^\infty\frac{\ln^m \!\ln n}{\,n^2\ln^2 \! n\,}\,,
\qquad\quad m=0, 1, 2,\ldots
\ee
The latter series diverges very quickly, approximately as
\be
\nonumber
\sum_{n=2}^\infty(-1)^{n-1} \frac{\,\ln^m \!n\,}{\sqrt{n\,}}\left
(\frac{n}{\pi e}\right)^{2n}\,,
\qquad\quad m=0, 1, 2,\ldots
\ee
\subsection{Notations and some definitions}\label{notations}
Throughout the manuscript, the following abbreviated notations are used: $
\gamma=0.5772156649\ldots$ for
Euler's constant, $\gamma_m$ for $m$th generalized Euler's constant
(Stieltjes constant)
accordingly to their definition {\eqref{dhd73vj6s1}},\footnote{In
particular $\gamma_1=-0.07281584548\ldots{}$,
$\gamma_2=-0.009690363192\ldots{}$, $\gamma_3=+0.002053834420\ldots{}$.}
$\binom{k}{n}$ denotes the binomial coefficient $C^n_k$,
${B}_n$~stands for the $n$th Bernoulli number,\footnote{In particular
${B}_0=+1$, ${B}_1=-\frac{1}{2}$, ${B}_2=+\frac{1}{6}$,
${B}_3=0$, ${B}_4=-\frac{1}{30}$, ${B}_5=0$, ${B}_6=+\frac{1}{42}$, ${B}_7=0$,
${B}_8=-\frac{1}{30}$, ${B}_9=0$, ${B}_{10}=+\frac{5}{66}$,
${B}_{11}=0$, ${B}_{12}=-\frac{691}{2730}$, \emph{etc}.,
see \cite[Tab.~23.2, p.~810]{abramowitz_01}, \cite[p.~5]{krylov_01}
or \cite[p.~258]{gelfond_01} for further values. Note also
that some authors may use slightly different definitions for the
Bernoulli numbers, see e.g.~\cite[p.~91]{hagen_01},
\cite[pp.~32, 71]{lindelof_01}, \cite[p.~19, \no138]{gunter_03_eng}
or \cite[pp.~3--6]{arakawa_01}.}
$H_n$ and $H^{(s)}_n$ denote the $n$th harmonic number and the $n$th
generalized harmonic number of order $s$
\begin{eqnarray}
\nonumber
H_n \,\equiv\sum_{k=1}^n \frac{1}{k}\,,\qquad\qquad\qquad
H^{(s)}_n \,\equiv\sum_{k=1}^n \frac{1}{k^s}\,,
\end{eqnarray}
respectively.
The writing $\lfloor x\rfloor$ stands for the integer part of $x$,
$\operatorname{tg}z$ for the tangent of $z$,
$\operatorname{ctg}z$ for the cotangent of $z$, $\operatorname{ch}z$
for the hyperbolic cosine of $z$, $\operatorname{sh}z$ for the
hyperbolic sine of $z$,
${\operatorname{th}}z$ for the hyperbolic tangent of $z$,
$\operatorname{cth}z$ for the hyperbolic cotangent of $z$.
In order to avoid any confusion between compositional inverse and
multiplicative inverse,
inverse trigonometric and hyperbolic functions are denoted
as $\arccos$, $\arcsin$, $\operatorname{arctg}, \ldots$ and not as
$\cos^{-1}$,
$\sin^{-1}$, $\operatorname{tg}^{-1}, \ldots{}$.
Writings $\Gamma(z)$ and $\zeta(z)$ denote respectively the gamma and
the zeta functions of argument $z$.
The Pochhammer symbol~$(z)_n$, which is also known as the generalized
factorial function, is defined as the rising factorial
$(z)_n\equiv z(z+1)(z+2)\cdots(z+n-1)=\Gamma(z+n)/\Gamma
(z)$.\footnote{For nonpositive and complex $n$, only the latter
definition $(z)_n\equiv\Gamma(z+n)/\Gamma(z)$ holds.}\up{,}\footnote{Note that some writers (mostly German-speaking)
call such a function \emph{facult\'e analytique} or \emph{Facult\"at}, see e.g.~\cite{schlomilch_04}, \cite[p.~186]{schlomilch_05},
\cite[vol.~II, p.~12]{schlomilch_06}, \cite[p.~119]{hagen_01}, \cite{kramp_01}. Other names and notations
for $(z)_n$ are briefly discussed in \cite[pp.~45--47]{jordan_01} and
in \cite[pp.~47--48]{knuth_01}.\\[-8mm]} For sufficiently large $n$, not
necessarily integer,
the latter can be given by this useful approximation\looseness=1
\be\label{lk2093mffmnjw}
\begin{array}{ll}
\displaystyle
(z)_n \;&\displaystyle =\,\frac{\,n^{n+z-\frac{1}{2}}\sqrt{2\pi} \,}{\Gamma(z)\,e^{n}}
\left\{1+ \frac{\,6 z^2 - 6z + 1\,}{12 n} + \frac{\,36 z^4 - 120 z^3 + 120 z^2 - 36 z + 1}{288 n^2} + O(n^{-3})\right\}\\[8mm]
&\displaystyle
\,=\,\frac{\,n^z\cdot \Gamma(n)\,}{\Gamma(z)}\left\{1+ \frac{\,z(z-1)\,}{2 n}
+ \frac{\,z(z-1)(z-2)(3z-1)\,}{24 n^2} +O(n^{-3})\right\}
\end{array}
\ee
which follows from the Stirling formula for the $\Gamma
$-function.\footnote{A simpler
variant of the above formula may be found in \cite{tricomi_01}.}
Unsigned (or signless) and signed Stirling numbers of the first kind,
which are also known as \emph{factorial coefficients},
are denoted as $|S_1(n,l)|$ and $S_1(n,l)$ respectively (the latter are
related to the former
as $S_1(n,l)=(-1)^{n\pm l}|S_1(n,l)|$).\footnote{There exist more than
50 notations for the Stirling numbers,
see e.g.~\cite{gould_02}, \cite[pp.~vii--viii, 142, 168]{jordan_01},
\cite[pp.~410--422]{knuth_02}, \cite[Sect.~6.1]{knuth_01}, and we do
not insist on our particular notation, which may
seem for certain not properly chosen.} Because in literature various
names, notations and definitions were adopted
for the Stirling numbers of the first kind, we specify that
we use exactly the same definitions and notation as in \cite[Section
2.1]{iaroslav_08},
that is to say $|S_1(n,l)|$ and $S_1(n,l)$ are defined as the coefficients
in the expansion of rising/falling factorial
\be\label{x2l3dkkk03d}
\specialnumber{a,b}
\begin{cases}
\displaystyle
\prod_{k=0}^{n-1} (z+k)
\,=\,(z)_n\,=\,\frac{\Gamma(z+n)}{\Gamma(z)}\,=\,\sum_{l=1}^n |S_1(n,l)|\cdot z^l \,=\, \sum_{l=0}^\infty |S_1(n,l)|\cdot z^l\\[6mm]
\displaystyle
\prod_{k=0}^{n-1} (z-k)
\,=\,(z-n+1)_n\,=\frac{\Gamma(z+1)}{\Gamma(z+1-n)}\,=\,\sum_{l=1}^n S_1(n,l)\cdot z^l \,=\,\sum_{l=0}^\infty S_1(n,l)\cdot z^l
\end{cases}
\ee
respectively, where $z\in\mathbb{C}$ and $n\geqslant1$. Note that
if $l\notin[1,n]$, where $l$ is supposed to be
nonnegative, then $S_1(n,l)=0$, except for $S_1(0,0)$ which is set to 1
by convention.
Alternatively, the same numbers may be equally defined as the
coefficients in the following MacLaurin series
\be\label{ld2jr3mnfdmd}
\specialnumber{a,b}
\begin{cases}
\displaystyle
(-1)^l\frac{\ln^l(1-z)}{l!}\,=\sum_{n=l}^\infty\!\frac{|S_1(n,l)|}{n!}z^n \,=\sum_{n=0}^\infty\!\frac{|S_1(n,l)|}{n!}z^n \,, \qquad & |z|<1\,,\quad l=0, 1, 2, \ldots \\[6mm]
\displaystyle
\frac{\ln^l(1+z)}{l!}\,=\sum_{n=l}^\infty\!\frac{S_1(n,l)}{n!}z^n \,=\sum_{n=0}^\infty\!\frac{S_1(n,l)}{n!}z^n\,, \qquad & |z|<1\,,\quad l=0, 1, 2, \ldots
\end{cases}
\ee
Signed Stirling numbers of the first kind, as we defined them above,
may be also given via the following explicit formula
\be\label{io20323m3e}
S_1(n,l)\,=\,
\frac{(2n-l)!}{(l-1)!}
\sum_{k=0}^{n-l}\frac{1}{(n+k)(n-l-k)!(n-l+k)!}
\sum_{r=0}^{k}\frac{(-1)^{r} r^{n-l+k} }{r!(k-r)!}
\ee
$l\in[1,n]$,
which may be useful for the computation of $S_1(n,l)$ when $n$ is not
very large.\footnote{From the above definitions, it follows that:
$S_1(1,1)=+1$, $S_1(2,1)=-1$, $S_1(2,2)=+1$, $S_1(3,1)=+2$,
$S_1(3,2)=-3$, $S_1(3,3)=+1$, \ldots\,,
$S_1(8,5)=-1960$, \ldots\,, $S_1(9,3)=+118\,124$, \emph{etc.} Note
that there is an error in Stirling's treatise \cite{stirling_01}:
in the last line in the table on p.~11 \cite{stirling_01} the value of
$|S_1(9,3)|=118\,124$ and not 105\,056. This error has been
noted by Jacques Binet \cite[p.~231]{binet_01}, Charles Tweedie \cite[p.~10]{tweedie_01} and some others (it was also corrected
in some translations of \cite{stirling_01}).}
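Formula {\eqref{io20323m3e}} is straightforward to implement in exact rational arithmetic. The following Python sketch (a verification aid only, with SymPy's built-in Stirling numbers used for comparison) confirms, in particular, the corrected value $|S_1(9,3)|=118\,124$ mentioned in the footnote above.
\begin{verbatim}
from fractions import Fraction
from math import factorial
from sympy.functions.combinatorial.numbers import stirling

def S1_explicit(n, l):
    # Signed Stirling numbers of the first kind via the double sum above
    total = Fraction(0)
    for k in range(n - l + 1):
        inner = sum(Fraction((-1) ** r * r ** (n - l + k),
                             factorial(r) * factorial(k - r))
                    for r in range(k + 1))
        total += inner / ((n + k) * factorial(n - l - k) * factorial(n - l + k))
    return int(factorial(2 * n - l) // factorial(l - 1) * total)

assert S1_explicit(9, 3) == stirling(9, 3, kind=1, signed=True) == 118_124
assert S1_explicit(8, 5) == stirling(8, 5, kind=1, signed=True) == -1960
print(S1_explicit(3, 2))   # -3, in agreement with the values quoted above
\end{verbatim}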
All three above definitions agree with those adopted by Jordan \cite[Chapt.~IV]{jordan_01}, \cite{jordan_02,jordan_00}, Riordan
\cite[p.~70 \emph{et seq.}]{riordan_01},
Mitrinovi\'c \cite{mitrinovic_01}, Abramowitz \& Stegun \cite[\no
24.1.3, p.~824]{abramowitz_01} and many others (moreover,
modern CAS, such as \textsl{Maple} or \textsl{Mathematica}, also share
these definitions; in particular \texttt{Stirling1(n,l)} in the former
and \texttt{StirlingS1[n,l]} in the latter
correspond to our $S_1(n,l)$).\footnote{A quick analysis of several
alternative names, notations and definitions may be found in works of
Charles Jordan \cite[pp.~vii--viii, 1 and Chapt.~IV]{jordan_01}, Gould
\cite{gould_02,gould_03}, and Donald E.~Knuth
\cite[Sect.~6.1]{knuth_01}, \cite[pp.~410--422]{knuth_02}.\label{alkjcow2edchb}}
Kronecker symbol (or Kronecker delta) of arguments $l$ and $k$ is denoted
by $\,\delta_{l,k}\,$ ($\,\delta_{l,k}=1\,$
if $l=k$ and $\,\delta_{l,k}=0\,$ if $l\neq k$).
$\operatorname{Re}{z}$ and $\operatorname{Im}{z}$ denote respectively
real and imaginary parts
of $z$.
The letter $i$ is never used as an index and always stands for $\sqrt{-1\,}$. The writing
$\operatorname{res}_{z=a} f(z)$ stands for
the residue of the function $f(z)$
at the point $z=a$. Finally, by the relative error between the quantity $A$
and its approximated value $B$, we mean $(A-B)/A$.
Other notations are standard.\looseness=-1
\section{A convergent series representation for generalized Euler's constants $\gamma_m$
involving Stirling numbers and polynomials in $\pi^{-2}$}
\subsection{Derivation of the series expansion}
In 1893 Johan Jensen \cite{jensen_04,jensen_03} by contour
integration methods
obtained an integral formula for the $\zeta$-function
\be\label{kjd02jddnsa}
\begin{array}{cc}
\displaystyle
\zeta(s) = \frac{1}{s-1} + \frac{1}{2} + 2\!\!\int\limits_0^{\pi/2} \!
\frac{(\cos\theta)^{s-2}\sin s\theta}{e^{2\pi\tg\theta}-1} d\theta \,
=\,
\frac{1}{s-1} + \frac{1}{2} + 2\!
\int\limits_0^\infty \!
\frac{\sin(s \arctg x)\,}{\left(e^{2\pi x}-1\right) \left(x^2+1\right)^{s/2}}\, dx \\[8mm]
\displaystyle
=\,
\frac{1}{s-1} + \frac{1}{2} + \frac{1}{i}\!
\int\limits_0^\infty \!
\frac{(1-ix)^{-s}-(1+ix)^{-s}}{\,e^{2\pi x}-1\,} \, dx \,,\qquad\quad s\neq 1
\end{array}
\ee
which extends {\eqref{kj023dndndr3}} to the entire complex plane except
$s=1$. Expanding the above formula
into the Laurent series about $s=1$ and performing the term-by-term
comparison of the
derived expansion with the Laurent series {\eqref{dhd73vj6s1}} yields
the following representation for the $m$th Stieltjes constant
\be\label{kljc3094jcmfd}
\gamma_m \,=\,\frac{1}{2}\delta_{m,0}+\,\frac{1}{i}\!\int\limits_0^\infty \! \frac{dx}{e^{2\pi x}-1} \left\{
\frac{\ln^m(1-ix)}{1-ix} - \frac{\ln^m(1+ix)}{1+ix}
\right\}\,,
\qquad m=0, 1, 2,\ldots
\ee
which is due to Jensen and Franel.\footnote{In the explicit
form, this integral formula was given by Franel in 1895 \cite{franel_01} (in the above, we corrected Franel's original formula, which was not valid for $m=0$).
However, it was remarked by Jensen
\cite{jensen_03} that it can be derived in an elementary way from {\eqref{kjd02jddnsa}} obtained two years earlier, and it is hard to disagree
with him.
By the way, it is curious that in works of modern authors, see
e.g.~\cite{connon_01,choi_01}, formula {\eqref{kljc3094jcmfd}}
is often attributed to Ainsworth and Howell, who discovered it
independently much later \cite{ainsworth_01}.}
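Formula {\eqref{kljc3094jcmfd}} also lends itself to direct numerical verification: for real $x$, the braced expression equals $2i\operatorname{Im}\big\{\ln^m(1-ix)\,(1-ix)^{-1}\big\}$, so a single real quadrature suffices. A short check in Python with \texttt{mpmath} (an illustration only, not part of the derivations):
\begin{verbatim}
from mpmath import mp, log, exp, pi, quad, im, inf, stieltjes

mp.dps = 25

def gamma_m_quad(m):
    # (1/i){...} in the integrand equals 2*Im[ln^m(1-ix)/(1-ix)] for real x
    f = lambda x: 2 * im(log(1 - 1j * x) ** m / (1 - 1j * x)) / (exp(2 * pi * x) - 1)
    return (mp.mpf(1) / 2 if m == 0 else 0) + quad(f, [0, inf])

print(gamma_m_quad(0))   # 0.5772156649... = Euler's constant
print(gamma_m_quad(1))   # -0.0728158454...
print(stieltjes(1))      # reference value
\end{verbatim}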
Making the change of variable \mbox{$\,x=-\frac{1}{2\pi}\ln(1-u)\,$} in formula {\eqref{kljc3094jcmfd}}, we have
\be\label{jhvc94hfhnf}
\gamma_m \,=\,\frac{1}{2}\delta_{m,0}+\,\frac{1}{\,2\pi i\,}\!\!\!\bigints\limits_{\!\!\!\!\!\!\!\!\!0}^{\;\;\;\;\;\;1} \!\! \left\{
\dfrac{\ln^m\!\left[1-\dfrac{\ln(1-u)}{2\pi i}\right]}{\,1-\dfrac{\ln(1-u)}{2\pi i}\,} \,-\,
\dfrac{\ln^m\!\left[1+\dfrac{\ln(1-u)}{2\pi i}\right]}{\,1+\dfrac{\ln(1-u)}{2\pi i}\,}
\right\}\frac{du}{\,u\,}
\ee
where $\, m=0, 1, 2,\ldots$
Now, in what follows, we will use a number of basic properties of
Stirling numbers, which can be found in an amount sufficient for the
present purpose
in the following literature: \cite{stirling_01,hindenburg_01,kramp_01}, \cite[Book I, part I]{laplace_02}, \cite{ettingshausen_01,schlaffli_01,schlaffli_02,schlomilch_04},
\cite[pp.~186--187]{schlomilch_05}, \cite[vol.~II,
pp.~23--31]{schlomilch_06}, \cite{appel_01,cayley_00,cayley_01,cayley_02,boole_01,glaisher_02}, \cite[p.~129]{carlitz_02},
\cite[Chapt.~IV]{jordan_01}, \cite{jordan_02,jordan_00,nielsen_04}, \cite[pp.~67--78]{nielsen_01}, \cite{nielsen_03,tweedie_01},
\cite[Sect.~6.1]{knuth_01}, \cite[pp.~410--422]{knuth_02}, \cite[Chapt.~V]{comtet_01}, \cite{dingle_01},
\cite[Chapt.~4, \S3, \no196--\no210]{polya_01_eng}, \cite[p.~60 \emph
{et seq.}]{hagen_01}, \cite{netto_01},
\cite[p.~70 \emph{et seq.}]{riordan_01}, \cite[vol.~1]{stanley_01},
\cite{bender_01}, \cite[Chapt.~8]{charalambides_01}, \cite[\no
24.1.3, p.~824]{abramowitz_01}, \cite[Sect.~21.5-1, p.~824]{korn_01},
\cite[vol.~III, p.~257]{bateman_01}, \cite{norlund_02,steffensen_02}, \cite[pp.~91--94]{conway_01}, \cite[pp.~2862--2865]{weisstein_04},
\cite[Chapt.~2]{arakawa_01}, \cite{mitrinovic_01,gould_01,gould_02,gould_03,wachs_01,carlitz_02,carlitz_03}, \cite[p.~642]{olson_01}, \cite{salmieri_01,gessel_01,wilf_01,moser_01,bellavista_01,wilf_02,temme_02,howard_01,butzer_02,butzer_01,hwang_01,adamchik_03,timashev_01,grunberg_01,louchard_01,shen_01,shirai_01,sato_01,rubinstein_01,rubinstein_02,hauss_01,skramer_01,iaroslav_08}. Note that many writers discovered
these numbers independently, without realizing that they were dealing with the
Stirling numbers.
For this reason, in many sources, these numbers may appear under
different names, different notations and
even slightly different definitions.\footnote{Actually, the name
``Stirling numbers'' appeared in the mathematical literature only at the
beginning of the XXth century
(mainly, thanks to Thorvald N.~Thiele
and Niels Nielsen \cite{nielsen_04,tweedie_01}, \cite[p.~416]{knuth_02}).
Other names for these numbers include: \emph{factorial coefficients},
\emph{faculty's coefficients} (\emph{Facult\"atencoefficienten},
\emph{coefficients de la facult\'e analytique}), \emph{differences of
zero} and even \emph{differential coefficients of nothing}.
Moreover, the Stirling numbers are also closely connected to the \emph
{generalized Bernoulli numbers} $B^{(s)}_n$, also known as
\emph{Bernoulli numbers of higher order}, see e.g.~\cite[p.~129]{carlitz_02}, \cite[p.~449]{gould_01}, \cite[p.~116]{gould_02};
many of their properties may be, therefore, deduced from those of $B^{(s)}_n$.}
Consider the generating equation for the unsigned Stirling numbers of
the first kind, formula (\ref{ld2jr3mnfdmd}a).
This power series is uniformly and absolutely convergent inside the
disk $|z|<1$.
Putting $l+m-1$ instead of $l$, multiplying both sides by $(l)_m$ and
summing over $l\in[1,\infty)$,
we obtain for the left side
\be\notag
\begin{array}{ll}
\displaystyle
\sum_{l=1}^\infty (l)_m \cdot \frac{\big[-\ln(1-z)\big]^{l+m-1}}{(l+m-1)!}
\,=\,\sum_{l=1}^\infty \frac{\big[-\ln(1-z)\big]^{l+m-1}}{(l-1)!} =\\[6mm]
\displaystyle\qquad\qquad
=\,\big[-\ln(1-z)\big]^m \!\cdot\underbrace{\sum_{l=1}^\infty \frac{\big[-\ln(1-z)\big]^{l-1}}{(l-1)!}}_{e^{-\ln(1-z)}} \,
=\, (-1)^m\cdot\frac{\ln^m(1-z)}{1-z}
\end{array}
\ee
while the right side of (\ref{ld2jr3mnfdmd}a), in virtue of the
absolute convergence, becomes
\begin{eqnarray*}
\displaystyle
\sum_{l=1}^\infty\,(l)_m\!\cdot \sum_{n=0}^\infty \frac{\big
|S_1(n,l+m-1)\big|}{n!} \, z^n
& =&
\sum_{n=0}^\infty\frac{z^n}{n!} \,\cdot\!\!\!
\sum_{l=1}^{n-m+1}\!\!(l)_m\!\cdot\big|S_1(n,l+m-1)\big| \\[7mm]
&= & m!\cdot\!
\sum_{n=0}^\infty\frac{\,\big|S_1(n+1, m+1)\big|\,}{n!} \,z^n
\end{eqnarray*}
Whence
\begin{eqnarray}
\label{iu2d092n1}
\frac{\ln^m(1-z)}{1-z}\,=\,(-1)^m m!\cdot\!
\sum_{n=0}^\infty\frac{\,\big|S_1(n+1, m+1)\big|\,}{n!} \,z^n\,,
\qquad
\begin{array}{l}
m=0, 1, 2,\ldots \\[6pt]
|z|<1
\end{array}
\end{eqnarray}
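Identity {\eqref{iu2d092n1}} may be checked numerically at any point of the unit disk; for instance, the following Python sketch (a verification aid only) compares both sides at $z=0.4$, $m=2$.
\begin{verbatim}
from mpmath import mp, log, factorial
from sympy.functions.combinatorial.numbers import stirling

mp.dps = 20
m, z = 2, mp.mpf("0.4")

lhs = log(1 - z) ** m / (1 - z)
rhs = (-1) ** m * factorial(m) * sum(
    int(stirling(n + 1, m + 1, kind=1, signed=False)) / factorial(n) * z ** n
    for n in range(m, 60))
print(lhs)   # 0.43490...
print(rhs)   # agrees with the left-hand side
\end{verbatim}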
Writing $-z$ for $z$ in {\eqref{iu2d092n1}},
and then subtracting
one from another yields the following series
\be
\label{djhd9ehdbne}
\frac{\ln^m(1-z)}{1-z}-\frac{\ln^m(1+z)}{1+z}\, =\,2(-1)^m m!\cdot\!
\sum_{k=0}^\infty\frac{\,\big|S_1(2k+2,m+1)\big|\,}{(2k+1)!}
\,z^{2k+1}
\qquad\quad
\ee
$m=0, 1, 2,\ldots\,$,
which is absolutely and uniformly convergent in the unit disk $|z|<1$, and
whose coefficients grow logarithmically with $k$
\begin{eqnarray}
\label{j6s8g64r}
\frac{\,\big|S_1(2k+2,m+1)\big|\,}{(2k+1)!}\sim\frac{\,\ln^m{k}\,
}{m!}\,,\qquad\quad k\to\infty\,,\qquad m=0, 1, 2,\ldots
\end{eqnarray}
in virtue of known asymptotics for the Stirling numbers,
see e.g.~\cite[p.~261]{jordan_00}, \cite[p.~161]{jordan_01}, \cite[\no24.1.3, p.~824]{abramowitz_01}, \cite[p.~348, Eq.~(8)]{wilf_02}.
Using formul{\ae}~from \cite[p.~217]{comtet_01}, \cite[p.~1395]{shen_01}, \cite[p.~425, Eq.~(43)]{kowalenko_01},
the law for the formation of first coefficients may be also written in
a more simple form
\be\label{uf87tfuy89}
\frac{\,\big|S_1(2k+2,m+1)\big|\,}{(2k+1)!}\,=\,
\begin{cases}
\,1 \,, & m=0\\[1mm]
\,H_{2k+1}\,,& m=1 \\[1mm]
\,\frac{1}{2}\big\{H^2_{2k+1} - H^{(2)}_{2k+1}\big\} \,,\qquad\quad& m=2 \\[1mm]
\,\frac{1}{6}\big\{H^3_{2k+1} - 3H_{2k+1} H^{(2)}_{2k+1}+2H^{(3)}_{2k+1}\big\}\,,\qquad\quad& m=3
\end{cases}
\ee
For higher $m$, values of this coefficient may be similarly reduced to
a non-linear combination of the generalized harmonic numbers.
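These reductions are easily confirmed in exact arithmetic; the following Python sketch verifies the case $m=2$ of {\eqref{uf87tfuy89}} at $k=5$.
\begin{verbatim}
from fractions import Fraction
from math import factorial
from sympy.functions.combinatorial.numbers import stirling

def H(n, s=1):
    # Generalized harmonic number H_n^{(s)} in exact arithmetic
    return sum(Fraction(1, j ** s) for j in range(1, n + 1))

k = 5
n = 2 * k + 1
lhs = Fraction(int(stirling(2 * k + 2, 3, kind=1, signed=False)), factorial(n))
rhs = (H(n) ** 2 - H(n, 2)) / 2
assert lhs == rhs
print(float(lhs))   # 3.78081... = |S_1(12,3)|/11!
\end{verbatim}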
Since expansion {\eqref{djhd9ehdbne}} holds only inside the unit circle,
it cannot be directly used for the insertion into Jensen--Franel's
integral formula {\eqref{kljc3094jcmfd}}.
However, if we put in {\eqref{djhd9ehdbne}} $z=\frac{1}{2\pi i}\ln
(1-u)$, we obtain for the right part
\be\label{jwoerivh304}
\begin{array}{ll}
\displaystyle
2\,(-1)^m m!\cdot\!\sum_{k=0}^\infty\frac{\,\big|S_1(2k+2,m+1)\big|\,}{\,(2\pi i)^{2k+1}}\cdot\underbrace{\frac{\ln^{2k+1}(1-u)}{(2k+1)!}
}_{\text{see (\ref{ld2jr3mnfdmd}a)}} = \\[10mm]
\displaystyle\qquad\qquad
\,=\,2i\,(-1)^m m!\cdot\!\sum_{k=0}^\infty\frac{\,(-1)^k \big|S_1(2k+2,m+1)\big|\,}{\,(2\pi)^{2k+1}}\cdot\!\sum_{n=1}^\infty\!\frac{\big|S_1(n,2k+1)\big|}{n!}\,u^n \\[7mm]
\displaystyle\qquad\qquad
=\,2i\,(-1)^m m!\cdot\!\sum_{n=1}^\infty\!\frac{\,u^n\,}{n!} \cdot\!\sum_{k=0}^{\lfloor\!\frac{1}{2}n\!\rfloor}
\frac{\,(-1)^k \big|S_1(2k+2,m+1)\big|\cdot\big|S_1(n,2k+1)\big|\,}{\,(2\pi)^{2k+1}}
\end{array}
\ee
Therefore, for $m=0, 1, 2,\ldots$\,, we have
\be\label{joc20j4cxcs}
\begin{array}{ll}
\displaystyle
\frac{1}{\,2\pi i\,}
\left\{
\dfrac{\ln^m\!\left[1-\dfrac{\ln(1-u)}{2\pi i}\right]}{\,1-\dfrac{\ln(1-u)}{2\pi i}\,} \,-\,
\dfrac{\ln^m\!\left[1+\dfrac{\ln(1-u)}{2\pi i}\right]}{\,1+\dfrac{\ln(1-u)}{2\pi i}\,}
\right\}=\, \\[10mm]
\displaystyle\qquad\qquad\qquad=\,
\frac{\,(-1)^m m!\,}{\pi}\!\sum_{n=1}^\infty\!\frac{\,u^n\,}{n!}
\cdot\!\sum_{k=0}^{\lfloor\!\frac{1}{2}n\!\rfloor}\frac{\,(-1)^k \big|S_1(2k+2,m+1)\big| \cdot\big|S_1(n,2k+1)\big|\,}{\,(2\pi)^{2k+1}}
\end{array}
\ee
which uniformly holds in $|u|<1$ and also is valid for $u=1$.\footnote
{The unit radius of convergence of this series is determined
by the singularity closest to the origin. This singularity is
a branch point located at $u=1$. Note also that since the series
is convergent for $u=1$ as well, in virtue of Abel's theorem on power
series, it is uniformly convergent everywhere on the disc $|u|\leqslant
1-\varepsilon$,
where positive parameter $\varepsilon$ can be made as small as we please.}
Substituting {\eqref{joc20j4cxcs}} into {\eqref{jhvc94hfhnf}} and
performing the term-by-term integration from $u=0$ to $u=1$ yields
the following series representation for the $m$th generalized Euler's constant
\be\label{jkhf3984fhd}
\gamma_m\,=\,\frac{1}{2}\delta_{m,0}+
\frac{\,(-1)^m m!\,}{\pi} \!\sum_{n=1}^\infty\frac{1}{\,n\cdot n!\,} \!
\sum_{k=0}^{\lfloor\!\frac{1}{2}n\!\rfloor}\frac{\,(-1)^{k}\big|S_1(2k+2,m+1)\big| \cdot\big|S_1(n,2k+1)\big|\,}{\,(2\pi)^{2k+1}\,}
\ee
where $m=0, 1, 2,\ldots{}$.
In particular, for Euler's constant and the first Stieltjes constant, we
have the following series expansions
\be\label{jcwio0ecn32}
\begin{array}{rcl}
\displaystyle
\gamma\; &=&\displaystyle\,\frac{1}{2}+
\frac{\,1\,}{\pi}\sum_{n=1}^\infty\frac{1}{\,n\cdot n!\,}
\sum_{k=0}^{\lfloor\!\frac{1}{2}n\!\rfloor}\frac{\,(-1)^{k}\!\cdot (2k+1)!\cdot\big|S_1(n,2k+1)\big|}{\,(2\pi)^{2k+1}\,} \\[5mm]
\displaystyle
& =&\,\displaystyle \frac{1}{2}+ \frac{1}{2\pi^2}+\frac{1}{8\pi^2}+\frac{1}{18}\!\left(\frac{1}{\pi^2}-\frac{3}{4\pi^4}\right)
+\frac{3}{96}\!\left(\frac{1}{\pi^2}-\frac{3}{2\pi^4}\right) \\[6mm]
& &\displaystyle
+\frac{1}{600}\!\left(\frac{12}{\pi^2}-\frac{105}{4\pi^4}+\frac{15}{4\pi^6}\right)
+\frac{1}{4\,320}\!\left(\frac{60}{\pi^2}-\frac{675}{4\pi^4}+\frac{225}{4\pi^6}\right) + \ldots \\[7mm]
\displaystyle
\gamma_1\:& =&\displaystyle\,-\frac{\,1\,}{\pi}\sum_{n=1}^\infty\frac{1}{\,n\cdot n!\,}
\sum_{k=0}^{\lfloor\!\frac{1}{2}n\!\rfloor}\frac{\,(-1)^{k}\!\cdot (2k+1)!\cdot H_{2k+1}\cdot\big|S_1(n,2k+1)\big|}{\,(2\pi)^{2k+1}\,} \\[5mm]
\displaystyle
& =&\,\displaystyle -\frac{1}{2\pi^2}-\frac{1}{8\pi^2}-\frac{1}{18}\!\left(\frac{1}{\pi^2}-\frac{11}{8\pi^4}\right)
-\frac{3}{96}\!\left(\frac{1}{\pi^2}-\frac{11}{4\pi^4}\right) \\[6mm]
& &\displaystyle
-\frac{1}{600}\!\left(\frac{12}{\pi^2}-\frac{385}{8\pi^4}+\frac{137}{16\pi^6}\right)
-\frac{1}{4\,320}\!\left(\frac{60}{\pi^2}-\frac{2\,475}{8\pi^4}+\frac{2055}{16\pi^6}\right) - \ldots
\end{array}
\ee
respectively.
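A direct implementation of {\eqref{jkhf3984fhd}} is sketched below in Python (a numerical illustration only; since the convergence is comparable to that of $\sum n^{-2}$, see the next subsection, only a few correct digits can be expected from the first partial sums).
\begin{verbatim}
from mpmath import mp, pi, factorial, stieltjes
from sympy.functions.combinatorial.numbers import stirling

mp.dps = 20

def S1a(n, l):
    # |S_1(n, l)|, with the convention that it vanishes outside 1 <= l <= n
    return int(stirling(n, l, kind=1, signed=False)) if 1 <= l <= n else 0

def gamma_m_series(m, N):
    total = mp.mpf(0)
    for n in range(1, N + 1):
        inner = sum((-1) ** k * S1a(2 * k + 2, m + 1) * S1a(n, 2 * k + 1)
                    / (2 * pi) ** (2 * k + 1) for k in range(n // 2 + 1))
        total += inner / (n * factorial(n))
    return (mp.mpf(1) / 2 if m == 0 else 0) + (-1) ** m * factorial(m) / pi * total

print(gamma_m_series(0, 60))   # -> 0.577... (gamma = 0.5772156649...)
print(gamma_m_series(1, 60))   # -> -0.072...
print(stieltjes(1))            # reference value
\end{verbatim}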
As one can easily notice, each coefficient of the expansions {\eqref{jcwio0ecn32}} contains
polynomials in $\pi^{-2}$ with rational coefficients.
The rate of convergence of this series, depicted in {Fig.~\ref{kjfc0234nd}}, is relatively slow and depends, at least for a moderate
number of terms, on~$m$:
the greater the order $m$, the slower the convergence.
A more accurate description of this dependence, as well as the exact
value of the rate of convergence,
both require a detailed convergence analysis of {\eqref{jkhf3984fhd}},
which is performed in the next section.
\begin{figure}[!t]
\centering
\includegraphics[width=0.8\textwidth]{error_Lgm_log_scale_maple_v12.eps}\vspace{-14mm}
\caption{Absolute values of relative errors of the series expansion for $\,\gamma_0\,$, $\,\gamma_1\,$ and
$\,\gamma_2\,$ given by \protect\eqref{jkhf3984fhd}--\protect\eqref{jcwio0ecn32}, logarithmic scale.}
\label{kjfc0234nd}
\end{figure}
\subsection{Convergence analysis of the derived series}
The convergence analysis of series {\eqref{jkhf3984fhd}} consists in the
study of its general term, which is given by the finite truncated sum
over index $k$.
This sum has only odd terms, and hence, by elementary transformations,
may be reduced
to that containing both odd and even terms
\be\label{lkh908gb9gi8vityr}
\begin{array}{ll}
& \displaystyle
\sum_{k=0}^{\lfloor\!\frac{1}{2}n\!\rfloor}(-1)^{k}\frac{\,\big|S_1(2k+2,m+1)\big| \cdot\big|S_1(n,2k+1)\big|\,}{\,(2\pi)^{2k+1}\,} \,=\\[6mm]
& \displaystyle \qquad\qquad
= \sum_{k=0}^{\lfloor\!\frac{1}{2}n\!\rfloor} (-1)^{\frac{1}{2}(2k+1)-\frac{1}{2}}
\frac{\,\big|S_1(2k+1+1,m+1)\big| \cdot\big|S_1(n,2k+1)\big|\,}{\,(2\pi)^{2k+1}\,}\\[8mm]
& \displaystyle \qquad\qquad
= \,\frac{1}{2}\!\sum_{l=1}^{n} \big[1-(-1)^{l}\big] \cdot(-1)^{\frac{1}{2}(l-1)}\cdot
\frac{\,\big|S_1(l+1,m+1)\big| \cdot\big|S_1(n,l)\big|\,}{\,(2\pi)^{l}\,}
\,=\,\ldots
\end{array}
\ee
where, in the last sum, we changed the summation index by putting $l=2k+1$.
Now, from the second integral formula for the unsigned Stirling numbers
of the first kind,
see {\eqref{ock2w3jkmd1}}, it follows that
\be\notag
\begin{array}{ll}
\displaystyle
(-1)^{\frac{1}{2}(l-1)}\cdot\frac{\,\big|S_1(l+1,m+1)\big|\,}{\,(2\pi)^{l}\,}\,
=\,\frac{(-1)^m}{\,2\pi \,}\cdot\frac{(l+1)!}{(m+1)!}\!\!
\ointctrclockwise\limits_{|z|=r}\!\!\left[+\frac{i}{2\pi z}\right]^l\frac{\ln^{m+1}(1-z)}{z^2}\, dz\,\\[10mm]
\displaystyle
(-1)^l\cdot(-1)^{\frac{1}{2}(l-1)}\cdot\frac{\,\big|S_1(l+1,m+1)\big|\,}{\,(2\pi)^{l}\,}\,
=\,\frac{(-1)^m}{\,2\pi \,}\cdot\frac{(l+1)!}{(m+1)!}\!\!
\ointctrclockwise\limits_{|z|=r}\!\!\left[-\frac{i}{2\pi z}\right]^l\frac{\ln^{m+1}(1-z)}{z^2}\, dz
\end{array}
\ee
where $0<r<1$. Therefore, since $(l+1)!=\int \! x^{l+1} e^{-x} dx$
taken from $0$ to $\infty$,
the last sum in {\eqref{lkh908gb9gi8vityr}} reduces to the following
integral representation
\be\label{oi23jrn3ds3}
\begin{array}{ll}
\ldots\;&\displaystyle =\,
\frac{(-1)^m}{\,4\pi (m+1)! \,}\sum_{l=1}^{n} \big|S_1(n,l)\big|\cdot(l+1)!\cdot\!\! \ointctrclockwise\limits_{|z|=r}\!\!
\left[\left(\frac{i}{2\pi z}\right)^{\!l} - \left(-\frac{i}{2\pi z}\right)^{\!l}\right]\frac{\ln^{m+1}(1-z)}{z^2}\, dz \\[9mm]
&\displaystyle
=\,\frac{(-1)^m}{\,4\pi (m+1)! \,}\cdot\int\limits_{0}^\infty\left[\sum_{l=1}^{n} \big|S_1(n,l)\big|\!\! \ointctrclockwise\limits_{|z|=r}\!\!
\left[\left(\frac{ix}{2\pi z}\right)^{\!l} - \left(-\frac{ix}{2\pi z}\right)^{\!l}\right]\frac{\ln^{m+1}(1-z)}{z^2}\, dz\right] x\,e^{-x}\,\,dx \\[10mm]
&\displaystyle
=\,\frac{(-1)^m}{\,4\pi (m+1)! \,}\cdot \!\! \ointctrclockwise\limits_{|z|=r}\!\!
\frac{\ln^{m+1}(1-z)}{z^2}\left\{\int\limits_{0}^\infty\left[\left(\frac{ix}{2\pi z}\right)_{\!\!n}
- \left(-\frac{ix}{2\pi z}\right)_{\!\!n}\right]x\,e^{-x}\,dx\right\} dz
\end{array}
\ee
The integral in curly brackets is difficult to evaluate in closed form,
but at large $n$, its asymptotic value may be readily obtained.
The function $1/\Gamma(z)$ is analytic on the entire complex $z$-plane,
and hence, can be expanded
into the MacLaurin series
\be\label{poi2d293dm}
\frac{1}{\Gamma(z)}\,=\,z+\gamma z^2+ \left(\! \frac{\gamma^2}{2}-\frac{\pi^2}{12}\right)\!z^3 +
\ldots\,\equiv\sum_{k=1}^\infty z^k a_k \,,\qquad |z|<\infty\,,\\[6mm]
\ee
where
\be\notag
a_k\,\equiv\,\frac{1}{k!}\cdot \left[\frac{1}{\Gamma(z)}\right]^{(k)}_{z=0} \!=\,
\frac{(-1)^k}{\,\pi \, k!\,}\cdot \Big[\sin\pi x\cdot\Gamma(x)\Big]^{(k)}_{x=1}
\ee
see e.g.~\cite[p.~256, \no6.1.34]{abramowitz_01}, \cite[pp.~344 \&
349]{wilf_02}, \cite{hayman_01}.
Using Stirling's approximation for the Pochhammer symbol {\eqref{lk2093mffmnjw}}, we have for
sufficiently large $n$
\be\label{poi2d293dm2}
\begin{array}{l}
\displaystyle
\left(\frac{ix}{2\pi z}\right)_{\!\!n} - \left(-\frac{ix}{2\pi z}\right)_{\!\!n}\sim
\, \frac{n^{\frac{ix}{2\pi z}}\cdot\Gamma(n)}{\Gamma\left(\frac{ix}{2\pi z}\right)}\,-\,
\frac{n^{-\frac{ix}{2\pi z}}\cdot\Gamma(n)}{\Gamma\left(-\frac{ix}{2\pi z}\right)}\,=\\[8mm]
\displaystyle
=\,(n-1)! \left[\exp\!\left(\frac{ix\ln n}{2\pi z}\right)\!\sum_{k=1}^\infty a_k \!\left(\frac{i\,x}{2\pi z}\right)^{\!k}-
\exp\!\left(-\frac{ix\ln n}{2\pi z}\right)\!\sum_{k=1}^\infty (-1)^k a_k \!\left(\frac{i\,x}{2\pi z}\right)^{\!k}\right]
\end{array}
\ee
Substituting this approximation into the integral in curly brackets
from {\eqref{oi23jrn3ds3}},
performing the term-by-term integration\footnote{Series {\eqref{poi2d293dm}}
and {\eqref{poi2d293dm2}} being
uniformly convergent.} and taking into account that $z^{-s}\Gamma
(s)=\int\! x^{s-1} e^{-zx} dx$ taken over $x\in[0,\infty)$,
yields
\be\label{c293mned}
\begin{array}{ll}
\displaystyle
\int\limits_{0}^\infty\left[\left(\frac{ix}{2\pi z}\right)_{\!n} - \left(-\frac{ix}{2\pi z}\right)_{\!n}\right]x\,e^{-x}\,dx\,\sim \\[7.8mm]
\displaystyle\qquad\qquad
\sim \,(n-1)!\sum_{k=1}^\infty a_k \!\left(\frac{i}{2\pi z}\right)^{\!k}\!\!\cdot (k+1)! \left\{\left[1+\frac{i\ln n}{2\pi z}\right]^{-k-2} \!\!\! -
(-1)^k\left[1-\frac{i\ln n}{2\pi z}\right]^{-k-2} \right\}\\[8mm]
\displaystyle\qquad\qquad
\sim \,(n-1)! \cdot\frac{32\,i\pi^3 z^3 \big(4\pi^2 z^2 - 3\ln^2 n\big)}{\,\big(4\pi^2 z^2 +\ln^2 n\big)^3\,}\,,
\qquad\qquad\quad n\to\infty\,,
\end{array}
\ee
where, at the final stage, we retained only the first significant term
corresponding to factor $k=1$.\footnote{The
second term of this sum, corresponding to $k=2$, is
\begin{eqnarray*}
(n-1)!\cdot \frac{396\,i\gamma\pi^3 z^3
\big(4\pi^2 z^2 - \ln^2 n\big)\ln n}{\big(4\pi^2 z^2 +\ln^2 n\big)^{4}}=(n-1)!
\cdot O\left(\frac{1}{\,\ln^{5} n\,}\right)\,,\qquad\quad n\to\infty\,,
\end{eqnarray*}
and hence, may be neglected at large $n$.}
Now, if $|z|\leqslant1-e^{-1}\approx0.63$, then, for the principal
branch, \mbox{$\big|\ln^{m+1}(1-z)\big|\leqslant 1$}
independently of $m$ and $\arg{z}$. Analogously, for any however small $\varepsilon>0$, one can
always find a sufficiently large~$n_0$ such that
\be\label{29ci23nd3jw}
\left| \frac{32\,i\pi^3 z^3 \big(4\pi^2 z^2 - 3\ln^2 n\big) }{\,\big(4\pi^2 z^2 +\ln^2 n\big)^3\,}\right|<\varepsilon\,,
\qquad n\geqslant n_0\,,
\ee
on the circle $|z|=1-e^{-1}$
(for example, if $\varepsilon=1$, then $n_0=1222$;
if $\varepsilon=0.1$, then $n_0=38\,597$; if $\varepsilon=0.01$, then
$n_0=33\,220\,487$; \emph{etc.}).\footnote{Note that
for fixed $n$, the left-hand side of {\eqref{29ci23nd3jw}} reaches its
maximum when $z$ is purely imaginary.}
Combining all these results and taking into account that $|dz|=|z|\,
d\arg{z}$, we conclude that
\be\label{k039dm3dmedc}
\begin{array}{ll}
\displaystyle
\frac{1}{\,n\cdot n!\,} \left|
\sum_{k=0}^{\lfloor\!\frac{1}{2}n\!\rfloor}\frac{\,(-1)^{k}
\big|S_1(2k+2,m+1)\big| \cdot\big|S_1(n,2k+1)\big|\,}{\,(2\pi)^{2k+1}\,} \right| <
\\[8mm]
\displaystyle\qquad\qquad\qquad\qquad\qquad
< \, \frac{1}{\,n^2\,}\cdot\frac{\varepsilon}{\,2\big(1-e^{-1}\big) (m+1)!\,}
\,<\,\frac{C}{\,n^2\,}\,,
\qquad
\qquad n\geqslant n_0\,,
\end{array}
\ee
Numerical simulations,
\begin{figure}[!t]
\centering
\includegraphics[width=0.8\textwidth]{plot_error_upperboundexact_160_log_maple_v12.eps}\vspace{-3mm}
\caption{Relative error between the upper bound and the left--hand side in \eqref{k039dm3dmedc} as a function of $n$
for four different orders $m$, logarithmic scale (the curve in long dashes corresponds to $m=1$, that in short ones to $m=3$). Results displayed above correspond to $C=1/(2\pi)$.}
\label{cj2394chcned}
\end{figure}
see Fig.~\ref{cj2394chcned}, show that this
simple inequality, valid for all $m$,
may provide a more or less accurate approximation\vadjust{\eject} for the general term of {\eqref{oi23jrn3ds3}}, and this greatly depends on $m$.
Moreover, the joint analysis of {Figs.~\ref{kjfc0234nd} and \ref{cj2394chcned}} also indicates that first partial sums of
series {\eqref{jkhf3984fhd}} may behave quite irregularly. One of the
reasons for such a behaviour is that
for $1\leqslant n\leqslant53$, absolute value {\eqref{29ci23nd3jw}}
increases, and it starts to decrease only
after $n=54$.\footnote{On the circle $|z|=1-e^{-1}$, absolute value {\eqref{29ci23nd3jw}} has
one of its third-order poles at $n=e^{2\pi(1-e^{-1})}\approx53.08$.
Other poles are located either below $n=1$,
e.g.~$n=e^{-2\pi(1-e^{-1})}\approx0.02$, or are complex. More
precisely, all poles of
this expression occur at $n=\big[\cos\big(2\pi(1-e^{-1})\cos
\varphi\big)
\mp i\sin\big(2\pi(1-e^{-1})\cos\varphi\big)\big]e^{\pm2\pi
(1-e^{-1})\sin\varphi}$, where $\varphi\equiv\arg z$.}
Notwithstanding, inequality {\eqref{k039dm3dmedc}} guarantees that in
all cases, the discovered series for $\gamma_m$ given by {\eqref{jkhf3984fhd}}
converges for large $n$ no worse than Euler's series $\sum n^{-2}$.
Asymptotics {\eqref{c293mned}},
as well as the rates of convergence previously obtained in \cite{iaroslav_08},
both suggest that
the exact rate of convergence of series {\eqref{jkhf3984fhd}} may also
involve logarithms.
For instance, from \cite[Sect.~3]{iaroslav_08}, it
straightforwardly follows that the rate of
convergence of this series at $m=0$ is equal to $\sum(n\ln n)^{-2}$, see also footnote \ref{gtf1a}.
Indeed, if we replace the integral in curly brackets from {\eqref{oi23jrn3ds3}} by its first-order
approximation {\eqref{c293mned}},
and then, evaluate the sum of corresponding residues at $
z_{1,2}\equiv\pm\frac{i\ln n}{2\pi}$
\vspace{2mm}
\be\notag
\begin{array}{ll}
\displaystyle
\sum_{l=1}^2\res_{z=z_l}\!\frac{z\big(4\pi^2 z^2 - 3\ln^2 n\big)\ln(1-z)}{\,\big(4\pi^2 z^2 +\ln^2 n\big)^3\,}
\,=\,\frac{\ln^2n-4\pi^2}{8\pi^2 \big(4\pi^2 +\ln^2 n\big)^2}
\,\sim \frac{1}{\,8\pi^2\ln^2 n\,}\\[9mm]
\displaystyle
\sum_{l=1}^2\res_{z=z_l}\!\frac{z\big(4\pi^2 z^2 - 3\ln^2 n\big)\ln^2(1-z)}{\,\big(4\pi^2 z^2 +\ln^2 n\big)^3\,}
\,=\,\frac{\ln^2n \cdot\ln\big(4\pi^2 +\ln^2 n\big)+\ldots}{8\pi^2 \big(4\pi^2 +\ln^2 n\big)^2}
\,\sim \frac{2\cdot\ln\ln n}{\,8\pi^2\ln^2 n\,}\\[9mm]
\displaystyle
\sum_{l=1}^2\res_{z=z_l}\!\frac{z\big(4\pi^2 z^2 - 3\ln^2 n\big)\ln^3(1-z)}{\,\big(4\pi^2 z^2 +\ln^2 n\big)^3\,}
\,=\,\frac{3\ln^2n \cdot\big[\ln^2(2\pi +i\ln n) + \ln^2(2\pi -i\ln n) \big]+\ldots}{16\pi^2 \big(4\pi^2 +\ln^2 n\big)^2} \\[5mm]
\displaystyle \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad
\,\sim \frac{3\cdot\ln^2\!\ln n}{\,8\pi^2\ln^2 n\,}
\end{array}
\ee
and so on, we find that
\be\label{owjh298enc}
\begin{array}{ll}
& \displaystyle
\frac{1}{\,n\cdot n!\,} \sum_{k=0}^{\lfloor\!\frac{1}{2}n\!\rfloor}(-1)^{k}\frac{\,\big|S_1(2k+2,m+1)\big| \cdot\big|S_1(n,2k+1)\big|\,}{\,(2\pi)^{2k+1}\,} \,\sim\\[8mm]
& \displaystyle \qquad\quad
\sim\,\frac{8\,i\,\pi^2 \, (-1)^m}{\,n^2(m+1)! \,}\! \ointctrclockwise\limits_{|z|=r}\!\!
\frac{z\big(4\pi^2 z^2 - 3\ln^2 n\big)\ln^{m+1}(1-z)}{\,\big(4\pi^2 z^2 +\ln^2 n\big)^3\,} \, dz \,
\sim\,(-1)^{m+1}\frac{\,2\pi\,}{\, \,m!\,}\cdot\frac{\ln^m\ln n}{\,n^2\ln^2 n\,}\qquad
\end{array}
\ee
in virtue of the Cauchy residue theorem. Of course, this formula is
only a rough approximation,
because the poles $z_{1,2}$ belong to the disc $|z|=r<1$ only if
$1\leqslant n\leqslant535$, while formula {\eqref{c293mned}} is
a double first-order approximation and is
accurate only for large $n$.
Furthermore, residues were also evaluated only in the first
approximation. However, the obtained expression
gives an idea of what the true rate
of convergence of series {\eqref{jkhf3984fhd}} could be, and it also
explains quite well why series for higher
generalized Euler's constants converge more slowly than those for lower
generalized Euler's constants.
Moreover, this approximation agrees with the fact that {\eqref{jkhf3984fhd}} converges no worse than Euler's series, and is also
consistent with the previously derived rate of convergence for $\gamma
$ from \cite{iaroslav_08}, which was obtained by another method.
\section{Expansion of generalized Euler's constants $\gamma_m$ into
the formal series with rational coefficients}
\subsection{Introduction}
Expansions into series with rational coefficients are an interesting
and challenging subject.
There exist many such representations for Euler's constant $\gamma$
and the first of them date back to
the XVIIIth century. For instance, from the Stirling series for the
digamma function
at $z=1$, it straightforwardly follows that
\be\label{lkce02m}
\gamma\,=\,\frac{1}{2} + \sum_{k=1}^{N}\frac{\,{B}_{2k}}{\,2k\,}
+\, \theta\cdot\frac{\,{B}_{2N+2}\,}{\,2(N+1)\,} \,=\,
\frac{1}{2}+ \frac{1}{12}-\frac{1}{120}+\frac{1}{252}-\frac{1}{240}+\frac{1}{132}-\frac{691}{32\,760}+ \ldots
\ee
where $0<\theta<1$ and $N<\infty$.\footnote{This result should be
attributed to both Stirling and De Moivre, who originated
Stirling series, see \cite[p.~135]{stirling_01} and \cite{demoivre_01} respectively (for more information on Stirling series,
see also \cite[part II, Chapter VI, p.~466]{euler_02}, \cite[p.~33]{gauss_02},
\cite[p.~329]{bromwich_01}, \cite[p.~111]{norlund_02}, \cite[\S12-33]{watson_01},
\cite[\S15-05]{jeffreys_02}, \cite[p.~530]{knopp_01}, \cite[p.~1]{copson_01},
\cite{kratzer_01}, \cite[\S4.1, pp.~293--294]{olver_01}, \cite[pp.~286--288]{gelfond_01},
\cite[\no6.1.40--\no6.1.41]{abramowitz_01}, \cite{murray_01}). Curiously, Srivastava and Choi \cite[p.~6]{srivastava_03},
did not notice the trivial connection between this series and the
Stirling series for the digamma function
and erroneously credited this result to Konrad Knopp, in whose book
\cite{knopp_01} it appears, with a slightly different remainder, on p.~527
(Knopp himself never claimed the authorship of this formula).} A more
general representation of the same kind may be obtained by
Euler--MacLaurin summation
\begin{eqnarray}
\gamma\,=\, \sum_{k=1}^n \frac{\,1\,}{k}-\ln{n}
-\frac{\,1\,}{2n} + \sum_{k=1}^{N} \frac{\,{B}_{2k}\,}{2k\cdot n^{2k}}
+ \theta\cdot\frac{\,{B}_{2N+2}\,}{2(N+1)\cdot n^{2N+2}}\,,
\end{eqnarray}
where $0<\theta<1$, $N<\infty$ and $n$ is positive integer, see
e.g.~\cite[\no377]{gunter_03_eng}.
The two above series are \emph{semi-convergent} (or \emph{divergent
enveloping}), i.e.~they diverge as $N\to\infty$.
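The semi-convergent character of {\eqref{lkce02m}} is easy to visualize: the error of the partial sums first decreases and then grows without bound as further Bernoulli terms are added. A minimal Python illustration with \texttt{mpmath} (a verification aid only):
\begin{verbatim}
from mpmath import mp, bernoulli, euler

mp.dps = 25
for N in (1, 2, 3, 5, 10, 20):
    approx = mp.mpf(1) / 2 + sum(bernoulli(2 * k) / (2 * k)
                                 for k in range(1, N + 1))
    print(N, approx - euler)   # the error shrinks at first, then blows up
\end{verbatim}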
The first known convergent series representation for Euler's constant
with only rational terms, as far as we know, dates back to 1790 and
is due to Gregorio Fontana and Lorenzo Mascheroni
\begin{eqnarray}
\label{njw3uiqch}
\gamma\,=\sum_{n=1}^{\infty} \frac{\,\big|G_n\big|\,}{n}= \frac
{1}{2}+\frac{1}{24}+\frac{1}{72}+\frac{19}{2880}+\frac{3}{800}
+\frac{863}{362\,880}+ \ldots
\end{eqnarray}
where rational coefficients $G_n$, known as \emph{Gregory's
coefficients},\footnote{These coefficients are also called
\emph{(reciprocal) logarithmic numbers}, \emph{Bernoulli numbers of
the second kind},
normalized \emph{generalized Bernoulli numbers} $B_n^{(n-1)}$, \emph
{Cauchy numbers} and normalized \emph{Cauchy numbers
of the first kind} $C_{1,n}$. They were introduced by
James Gregory in 1670 in the context of area's interpolation formula
(which is known nowadays as \emph{Gregory's interpolation formula})
and were subsequently rediscovered in various contexts by many famous
mathematicians, including Gregorio Fontana, Lorenzo Mascheroni,
Pierre--Simon Laplace, Augustin--Louis Cauchy, Jacques Binet,
Ernst Schr\"oder, Oskar Schl\"omilch, Charles Hermite, Jan C.~Kluyver
and Joseph Ser
\cite[vol.~II, pp.~208--209]{rigaud_01},
\cite[vol.~1, p.~46, letter written on November 23, 1670 to John
Collins]{newton_01}, \cite[pp.~266--267, 284]{jeffreys_02},
\cite[pp.~75--78]{goldstine_01},
\cite[pp.~395--396]{chabert_01},
\cite[pp.~21--23]{mascheroni_01}, \cite[T.~IV,
pp.~205--207]{laplace_01}, \cite[pp.~53--55]{boole_01}, \cite{van_veen_01},
\cite[pp.~192--194]{goldstine_01},
\cite{lienard_01,wachs_01,schroder_01,schlomilch_03}, \cite[pp.~65, 69]{hermite_01},
\cite{kluyver_02,ser_01}.
For more information about these important coefficients, see
\cite[pp.~240--251]{norlund_02}, \cite{norlund_01}, \cite[p.~132,
Eq.~(6), p.~138]{jordan_02}, \cite[p.~258, Eq.~(14)]{jordan_00}, \cite[pp.~266--267, 277--280]{jordan_01},
\cite{nielsen_01,nielsen_03,steffensen_01}, \cite[pp.~106--107]{steffensen_02}, \cite{davis_02},
\cite[p.~190]{weisstein_04},
\cite[p.~45, \no370]{gunter_03_eng},
\cite[vol.~III, pp.~257--259]{bateman_01}, \cite{stamper_01}, \cite[p.~229]{krylov_01}, \cite[\no600, p.~87]{proskuriyakov_01_eng},
\cite[p.~216, \no75-a]{knopp_01}
\cite[pp.~293--294, \no13]{comtet_01}, \cite{carlitz_01,howard_02,young_01,adelberg_01,zhao_01,candelpergher_01},
\cite[Eq.~(3)]{mezo_01}, \cite{merlini_01,nemes_01},
\cite[pp.~128--129]{alabdulmohsin_01},
\cite[Chapt.~4]{arakawa_01}, \cite{skramer_01,iaroslav_08}.\label{jpqwcnqasd}} are given either via their
generating function
\begin{eqnarray}
\label{eq32}
\frac{z}{\ln(1+z)} = 1+\sum
_{n=1}^\infty G_n z^n ,\qquad|z|<1\,,
\end{eqnarray}
or explicitly
\begin{eqnarray}
\label{ldhd9ehn}
G_n=\frac{1}{n!} \sum\limits_{l=1}^{n} \frac{S_1(n,l)}{l+1}
=\frac{1}{n!}\!\int\limits_0^1 (x-n+1)_n\, dx=-\frac{B_n^{(n-1)}}{\,
(n-1)\,n!\,}\,=\,\frac{C_{1,n}}{n!}
\end{eqnarray}
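The coefficients $G_n$ and the partial sums of {\eqref{njw3uiqch}} are easily reproduced from {\eqref{ldhd9ehn}}; the Python sketch below (exact rational arithmetic for $G_n$; an illustration only) recovers the values displayed above.
\begin{verbatim}
from fractions import Fraction
from math import factorial
from sympy.functions.combinatorial.numbers import stirling

def G(n):
    # Gregory coefficients via G_n = (1/n!) * sum_{l=1}^{n} S_1(n,l)/(l+1)
    s = sum(Fraction(int(stirling(n, l, kind=1, signed=True)), l + 1)
            for l in range(1, n + 1))
    return s / factorial(n)

print([G(n) for n in range(1, 5)])   # 1/2, -1/12, 1/24, -19/720
print(float(sum(abs(G(n)) / n for n in range(1, 101))))
# -> 0.5767..., slowly approaching gamma (terms decay like 1/(n^2 ln^2 n))
\end{verbatim}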
Series {\eqref{njw3uiqch}} was first studied by Fontana, who, however,
failed to find a constant to which it converges. Mascheroni identified
this \emph{Fontana's constant}
and showed that it equals Euler's constant \cite[pp.~21--23]{mascheroni_01}.
This series was subsequently rediscovered many times, in particular, by
Ernst Schr\"oder in 1879 \cite[p.~115, Eq.~(25a)]{schroder_01},
by Niels E.~N{\o}rlund in 1923 \cite[p.~244]{norlund_02},
by Jan C.~Kluyver in 1924 \cite{kluyver_02}, by Charles Jordan in 1929
\cite[p.~148]{jordan_02},
by Kenter in 1999 \cite{kenter_01}, by Victor Kowalenko in 2008 \cite{kowalenko_01,kowalenko_02}.
An expansion of a similar nature
\be\label{c32c324fv}
\gamma\,=\,1 - \sum_{n=1}^\infty \frac{C_{2,n}}{\,n\cdot(n+1)!\,}
=\,
1-\frac{1}{4}-\frac{5}{72}-\frac{1}{32}-\frac{251}{14\,400}-\frac
{19}{1728} -
\frac{19\,087}{2\,540\,160} - \ldots
\ee
where rational numbers $C_{2,n}$, known as \emph{Cauchy numbers of the
second kind}\footnote{These
numbers, called by some authors signless \emph{generalized Bernoulli numbers}
$|B_n^{(n)}|$ and signless \emph{N{\o}rlund numbers}, are much less
famous than
Gregory's coefficients $G_n$, but their study is also very interesting, see
\cite[pp.~150--151]{norlund_02}, \cite[p.~12]{davis_02}, \cite{norlund_01}, \cite[vol.~III,
pp.~257--259]{bateman_01},
\cite[pp.~293--294, \no13]{comtet_01}, \cite{howard_03,adelberg_01,zhao_01,qi_01,iaroslav_08}.}
\be
\begin{cases}
\displaystyle
\frac{z}{(1+z)\ln(1+z)}\,=\,1+\sum_{n=1}^\infty\!\frac{(-1)^n z^n C_{2,n}}{n!} \\[8mm]
\displaystyle
C_{2,n}\,=\sum\limits_{l=1}^{n} \frac{|S_1(n,l)|}{l+1} = \int\limits_0^1 (x)_n\, dx\,=\,|B_n^{(n)}|
\end{cases}
\ee
follows from a little-known series
for the digamma function given by Jacques Binet in 1839 \cite[p.~257,
Eq.~(81)]{binet_01}
and rediscovered later by Niels E.~N{\o}rlund in his monograph \cite[p.~244]{norlund_02}.\footnote{Strictly speaking, Binet found only
the first four coefficients of the corresponding series for the digamma
function and incorrectly calculated the last coefficient (for $K(5)$ he
took $\frac{245}{3}$
instead of $\frac{245}{6}$ \cite[p.~237]{binet_01}), but otherwise
his method and derivations are correct. It is also notable that
Binet related coefficients $K(n)$ to the Stirling numbers and
provided two different ways for their computation, see \cite[Final remark]{iaroslav_08}.}
The series \break
\begin{eqnarray}
&& \gamma=1-\sum_{n=1}^\infty\sum
_{k=2^{n-1}}^{2^{n}-1} \frac{n}{(2k+1)(2k+2)} \,=\,1-\sum_{n=1}^\infty\sum
_{k=2^{n}+1}^{2^{n+1}} \frac{(-1)^{k+1}n}{\,k\,} \,=\, 1-\frac{1}{12}-\frac{43}{420} \notag\\[3mm]
&& \phantom{\gamma=\,}
-\frac{20\,431}{240\,240}-\frac{2\,150\,797\,323\,119}{36\,100\,888\,223\,400}
- \frac{9\,020\,112\,358\,835\,722\,225\,404\,403}{236\,453\,376\,820\,564\,453\,502\,272\,320} - \ldots\label{lk2eojmwjksd} \\[-3mm]
\notag
\end{eqnarray}
was given in the first form by Niels Nielsen in 1897 \cite[Eq.~(6)]{nielsen_02},
and in the second form by Ernst Jacobsthal in 1906 \cite[Eqs.~(8)]{jacobsthal_01}.
The same series (in various forms) was independently obtained by
Addison in 1967 \cite{addison_01} and
by Gerst in 1969 \cite{gerst_01}. The famous series
\begin{eqnarray}
\label{lk2jd029jde}
\gamma= \sum_{n=2}^\infty
\frac{(-1)^n}{n} \lfloor\log_2{n}\rfloor\,=\,\frac{1}{2}-\frac
{1}{3}+\frac{1}{2}-
\frac{2}{5}+\frac{1}{3}-\frac{2}{7}+\ldots
\end{eqnarray}
was first given by Ernst Jacobsthal in 1906 \cite[Eqs.~(9)]{jacobsthal_01}
and subsequently rediscovered by many writers, including Giovanni Vacca
\cite{vacca_01},
H.F.~Sandham \cite{sandham_01}, D.F.~Barrow, M.S.~Klamkin, N.~Miller
\cite{barrow_01} and
Gerst \cite{gerst_01}.\footnote{It should be remarked that this
series is often incorrectly attributed to Vacca,
who only rediscovered it. This error originates
with Glaisher, Hardy and Kluyver, see e.g.~their works \cite{glaisher_01,hardy_03,kluyver_02}.
It was only much later that Stefan Kr\"amer \cite{skramer_01}
correctly attributed this series to Jacobsthal \cite{jacobsthal_01}.}
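This series lends itself to an immediate numerical check; in the Python sketch below (an illustration only), $\lfloor\log_2 n\rfloor$ is computed exactly via the bit length of $n$.
\begin{verbatim}
def vacca_partial(N):
    # Partial sum of the series above; n.bit_length() - 1 equals floor(log2 n)
    return sum((-1) ** n * (n.bit_length() - 1) / n for n in range(2, N + 1))

print(vacca_partial(10**6))   # 0.5772... (gamma = 0.5772156649...)
\end{verbatim}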
Series
\be\label{f413f14cx45}
\gamma= \sum_{n=m}^\infty
\frac{ \beta_n }{n} \lfloor\log _m{n}\rfloor ,\qquad
\beta_n =
\begin{cases}
m-1 , & n = \mbox{multiple of } m \vspace{3pt}\cr
-1 , & n \neq\mbox{multiple of } m
\end{cases}
\ee
which generalizes foregoing Jacobsthal--Vacca's series {\eqref{lk2jd029jde}},
is due to Jan C.~Kluyver who discovered it in 1924 \cite{kluyver_02}.
In contrast, as concerns the generalized Euler's constants $\gamma_m$, the
results are much more modest.
In 1912 Hardy \cite{hardy_03}, by trying to generalize
Jacobsthal--Vacca's series {\eqref{lk2jd029jde}}
to the first generalized Euler's constant, obtained the following series
\begin{eqnarray}
\label{jh3298uhnjd}
\label{jm9c204dj} \gamma_1 = \frac{\ln2}{2}\sum
_{n=2}^\infty\frac{(-1)^n}{n} \lfloor
\log_2{n}\rfloor\cdot\big(2\log_2{n} - \lfloor
\log_2{2n}\rfloor\big)
\end{eqnarray}
which is, however, not a full generalization of Jacobsthal--Vacca's series
since it also contains irrational coefficients.
In 1924--1927, Kluyver tried, on several occasions \cite{kluyver_03,kluyver_01}, to better Hardy's result and to obtain series for
$\gamma_m$ with
rational terms only, but these attempts were not successful.
There also exist formul{\ae}~similar to Hardy's series.
For instance,
\be\notag
\sum_{n=1}^\infty \frac{\,H^{m}_n - (\gamma+\ln n)^m\,}{n} \,=\,
\begin{cases}
\,-\gamma_1 -\frac{1}{2}\gamma^2+\frac{1}{2}\zeta(2)\,,\qquad & m=1 \\[2mm]
\,-\gamma_2 -\frac{2}{3}\gamma^3 -2\gamma_1\gamma +\frac{5}{3}\zeta(3)\,,\qquad & m=2 \\[2mm]
\,-\gamma_3 -\frac{3}{4}\gamma^4 -3\gamma_2\gamma-3\gamma_1\gamma^2+\frac{43}{8}\zeta(4)\,,\qquad & m=3
\end{cases}
\ee
see e.g.~\cite{furdui_01,mathstack_02}.\footnote{Cases $m=1$
and $m=2$ are discussed in the cited references.
Formula for $m=3$ was kindly
communicated to the author by Roberto Tauraso.} Besides, several
asymptotical representations
similar to Hardy's formula
are also known for $\gamma_m$.
For instance, Nikolai M.~G\"unther and Rodion O.~Kuzmin
gave the following formula
\begin{eqnarray}
\label{d21309dunmd}
\gamma_1\,=\, \sum_{k=1}^n \frac{\,\ln k\,}{k}-\frac{\,\ln^2 \!{n}\,}{2}
-\frac{\,\ln{n}\,}{2\,n} -\frac{\,1-\ln{n}\,}{12\,n^2} + \theta\cdot
\frac{\,11-6\ln{n}\,}{720\,n^4}
\end{eqnarray}
where $0<\theta<1$,
see \cite[\no 388]{gunter_03_eng}.\footnote{In the third edition of
\cite{gunter_03_eng}, published
in 1947, there are two errors in exercise \no388: the value of $\gamma
_1$ is given
as $-0.073927\ldots$ instead of $-0.072815\ldots{}$, and the denominator
of the last term has the value $176$ instead of $720$. These errors
were corrected in the fourth edition of this book, published
in 1951.}
M.~I.~Israilov \cite{israilov_01} generalized expression {\eqref{d21309dunmd}} and showed that the $m$th Stieltjes constant
may be given by a similar semi-convergent asymptotic series
\be\label{jhx2uxhbcxed}
\gamma_m\,=\,\sum_{k=1}^n \frac{\,\ln^m \! k\,}{k} - \frac{\,\ln
^{m+1} \! n\,}{m+1}
- \frac{\,\ln^m \! n\,}{2n}- \sum_{k=1}^{N-1} \frac{\,{B}_{2k}\,
}{(2k)!}\left[\frac{\ln^m \! x}{x}\right]^{(2k-1)}_{x=n}\!\!
- \theta\cdot\frac{\,{B}_{2N}\,}{(2N)!}\left[\frac{\ln^m\!
x}{x}\right]^{(2N-1)}_{x=n}
\ee
where $m=0, 1, 2,\ldots{}$, $0<\theta<1$, and integers $n$ and $N$
may be arbitrarily chosen provided
that $N$ remains finite.\footnote{Note that at fixed
$N$, the greater the number $n$, the more accurate this formula; at $n\to\infty$ it straightforwardly
reduces to {\eqref{k98y9g87fcfcf}}. It seems also appropriate to
note here that, although
G\"unther, Kuzmin and Israilov obtained {\eqref{d21309dunmd}}
and {\eqref{jhx2uxhbcxed}} independently, both these formul\ae~may be readily derived from an old
semi--convergent series for the $\zeta$--function, given, for example, by
J{\o}rgen P.~Gram in 1895 \cite[p.~304, 2nd unnumbered formula]{gram_01} (this series for
$\zeta(s)$ may be, of course, much older since it is a simple application
of the Euler--Maclaurin summation formula; note also that Gram uses a slightly different convention for the Bernoulli numbers).}\up{,}\footnote{In \cite[Eq.~(3)]{israilov_01},
there is a misprint: in the denominator of the second sum $2r$ should
be replaced by $(2r)!$
[this formula appears correctly on p.~101 \cite{israilov_01}, but with
a misprint in Eq.~(3) on p.~98]. This misprint
was later reproduced in \cite[Theorem~0.3]{eddin_02}.} Using various
series representations
for the $\zeta$-function, it is also possible to obtain corresponding
series for the Stieltjes constants.
For instance, from Ser's series for the $\zeta$-function (see
footnote \ref{kw09h2nb}), it follows that
\begin{eqnarray}
\gamma_m\,=\,-\frac{1}{\,m+1\,}\sum_{n=0}^\infty\frac{1}{\,n+2\,
}\sum_{k=0}^{n} (-1)^k \binom{n}{k}\frac{\ln^{m+1}(k+1)}{k+1}\,,
\qquad m=0, 1, 2\ldots
\end{eqnarray}
An equivalent result was given by Donal F.~Connon \cite{connon_06}\footnote{Strictly speaking,
Connon \cite[p.~1]{connon_06} gave this formula for the generalized
Stieltjes constants $\gamma_m(v)$,
of~which ordinary Stieltjes constants are simple particular cases
$\gamma_m=\gamma_m(1)$, see e.g.~\cite[Eqs.~(1)--(2)]{iaroslav_07}.}
\begin{eqnarray}
\gamma_m\,=\,-\frac{1}{\,m+1\,}\sum_{n=0}^\infty\frac{1}{\,n+1\,
}\sum_{k=0}^n (-1)^k \binom{n}{k}\ln^{m+1}(k+1)\,,
\qquad m=0, 1, 2\ldots
\end{eqnarray}
who used Hasse's series for the $\zeta$-function.\footnote{\label{kw09h2nb}The
series representation for $\zeta(s)$,
which is usually attributed to Helmut Hasse, is actually due to Joseph
Ser who derived it in 1926 in a slightly different form.
The series in question is
\be\nonumber
\zeta(s)\,=\,\frac{1}{\,s-1\,}\sum_{n=0}^\infty\frac{1}{\,n+2\,
}\sum_{k=0}^{n} (-1)^k \binom{n}{k}(k+1)^{-s}
\,=\,\frac{1}{\,s-1\,}\sum_{n=0}^\infty\frac{1}{\,n+1\,}\sum
_{k=0}^n (-1)^k \binom{n}{k}(k+1)^{1-s}
\ee
The first variant was given by Ser in 1926 in \cite[p.~1076,
Eq.~(7)]{ser_01}, the second variant was given by Hasse
in 1930 \cite[pp.~460--461]{hasse_01}. The equivalence between two
forms follows from the recurrence relation
for the binomial coefficients.
It is interesting that many writers do not realize that Ser's formula
and Hasse's formula are actually the same
(the problem is also that Ser's paper \cite{ser_01} contains errors,
e.g.~in Eq.~(2), p.~1075, the last term in the second line
should be $(-1)^n (n+1)^{-s}$ instead of $(-1)^n n^{-s}$).
An equivalent series representation was also independently discovered
by Jonathan Sondow
in 1994 \cite{sondow_03}.}
Similarly, using another of Ser's series expansions for $\zeta
(s)$,\footnote{Ser's formula \cite[p.~1076, Eq.~(4)]{ser_01},
corrected (see footnote \ref{kw09h2nb}) and written in our
notations, reads
\begin{eqnarray}
\nonumber
\zeta(s)\,=\,\frac{1}{\,s-1\,}+\sum_{n=0}^\infty\big| G_{n+1}\big
|\sum_{k=0}^{n} (-1)^k \binom{n}{k}(k+1)^{-s}.
\end{eqnarray}
} we conclude that
\begin{eqnarray}
\gamma_m\,=\sum_{n=0}^\infty\big| G_{n+1}\big|\sum_{k=0}^{n}
(-1)^k \binom{n}{k}\frac{\ln^{m}(k+1)}{k+1}\,,
\qquad m=0, 1, 2\ldots
\end{eqnarray}
where $G_n$ are Gregory's coefficients, see
\eqref{njw3uiqch}--\eqref{ldhd9ehn}.\footnote{Note that for $m=0$,
the latter series reduces to Fontana--Mascheroni's series {\eqref{njw3uiqch}}.} This series
was also independently discovered by Marc--Antoine Coppo in 1997 \cite[p.~355, Eq.~(5)]{coppo_01} by the method
of finite differences.
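These binomial double sums are straightforward to evaluate numerically, but the inner alternating sums cancel almost perfectly and destroy double-precision accuracy for large $n$. The following short sketch (in Python, using the \texttt{mpmath} library for extended precision; the truncation orders and the precision setting are our own choices) evaluates the partial sums of Connon's series quoted above for $m=0$:
\begin{verbatim}
from mpmath import mp, binomial, log, euler

mp.dps = 120   # precision must exceed ~0.3*N digits to beat the cancellation

def stieltjes_hasse(m, N):
    """Partial sum (n = 0..N) of the Hasse/Connon-type series for gamma_m."""
    total = mp.mpf(0)
    for n in range(N + 1):
        inner = sum((-1)**k * binomial(n, k) * log(k + 1)**(m + 1)
                    for k in range(n + 1))
        total += inner / (n + 1)
    return -total / (m + 1)

# gamma_0 is Euler's constant; the truncation error decays only like ~1/N
for N in (50, 100, 300):
    print(N, float(stieltjes_hasse(0, N) - euler))
\end{verbatim}
In these tests the truncation error decreases only like $O(1/N)$, so such series, albeit convergent, are of little practical use for the high-precision computation of $\gamma_m$.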
Several more complicated series representations for $\gamma_m$ with
irrational coefficients
may be found in \cite{todd_01,lavrik_01_eng,israilov_01,stankus_01_eng,zhang_01,weisstein_06,coffey_02,coffey_08}.\footnote{Apart
from formula {\eqref{jhx2uxhbcxed}}, in \cite{israilov_01},
Israilov also obtained
several other series representations for $\gamma_m$.
Stankus \cite{stankus_01_eng} showed that the first two Stieltjes
constants may be represented by series involving the divisor
function. Works of several authors showing that Stieltjes
constants are related to series containing nontrivial zeros of the
$\zeta$--function are summarized in \cite{weisstein_06}.
Coffey gave various series representations for $\gamma_m$ in \cite{coffey_02,coffey_08}.
However, we also noted that \cite{coffey_02,coffey_08} both
contain numerous rediscoveries, as well as inaccuracies in attribution of formul\ae~(see also \cite{connon_05}).
For instance, ``Addison-type series'' \cite[Eq.~(1.3)]{coffey_08} are actually due to Nielsen and Jacobsthal,
see our Eq.~{\eqref{lk2eojmwjksd}} above.
Numbers $p_{n+1}$ are simply signless Gregory's
coefficients $|G_n|$ and their asymptotics,
Eq.~(4.8)/(2.82) \cite[p.~473/31]{coffey_02}, is known at
least since 1879 \cite[Eqs.~(25)--(25a)]{schroder_01}
(see also \cite{steffensen_01}, \cite[pp.~106--107]{steffensen_02}).
Their ``full'' asymptotics, Eq.~(4.10)/(2.84)
\cite[p.~473/31]{coffey_02}, is also known and
was given, for example, by Van Veen in 1951
\cite{van_veen_01}, \cite{norlund_01}.
Representation (2.17) \cite[p.~455/13]{coffey_02} is
due to Hermite, see Eq.~(13) \cite[p.~541]{iaroslav_07}.
Formula (1.17) \cite[p.~2052]{coffey_08} was
already obtained at least twice: by Binet in 1839
\cite[p.~257]{binet_01} and by N{\o}rlund in 1923
\cite[p.~244]{norlund_02}, see details in \cite[Final remark]{iaroslav_08}.
Formula (1.18) \cite[p.~2052]{coffey_08}
straightforwardly follows from Ser's
results \cite{ser_01} dating back to 1926, \emph{etc.}}
\subsection{Derivation of the series expansion}
Consider now series {\eqref{jkhf3984fhd}}. A formal rearrangement of
this expression
may produce a series for $\gamma_m$ with rational terms only.
In view of the fact that
\begin{eqnarray}
\label{kljd023jdnr}
\zeta(k+1)\,=\,\sum_{n=k}^\infty\frac{\,\big|S_1(n,k)\big|\,
}{n\cdot n!}\,=\,\sum_{n=1}^\infty\frac{\big|S_1(n,k)\big|}{n\cdot
n!}\,,\qquad\quad k=1, 2, 3,\ldots
\end{eqnarray}
see e.g.~\cite[pp.~166, 194--195]{jordan_01},\footnote{See also \cite{shen_01,sato_01},
where this important result was rediscovered much later.
By the way, this formula
may be
generalized to the Hurwitz $\zeta$-function
\begin{eqnarray}
\nonumber
\zeta(k+1,v)\,=\,\sum_{n=k}^\infty\frac{|S_1(n,k)|}{n\cdot(v)_n}\,
,\qquad\quad k=1, 2, 3,\ldots\,,\quad\,
\operatorname{Re}{v}>0\,,
\end{eqnarray}
see \cite{iaroslav_08}. At $n\to\infty$, the
general term of this series is
$O\!\left(\!\dfrac{\,\ln^{k-1}\! n}{n^{v+1}\,}\!\right)$.}
and that
\begin{eqnarray}
\nonumber
\zeta(2k)=(-1)^{k+1}\frac{\,(2\pi)^{2k}\cdot{B}_{2k}\,}{2\cdot
(2k)!}\,=\,
\frac{\,(2\pi)^{2k}\cdot|{B}_{2k}|\,}{2\cdot(2k)!}\,,\qquad\quad
k=1, 2, 3,\ldots
\end{eqnarray}
the formal interchanging of the order of summation in \eqref{jkhf3984fhd} leads to
\be\notag
\begin{array}{l}
\displaystyle
\frac{\,(-1)^m m!\,}{\pi} \sum_{n=1}^\infty\frac{1}{\,n\cdot n!\,}
\sum_{k=0}^{\lfloor\!\frac{1}{2}n\!\rfloor}\frac{\,(-1)^{k}\big|S_1(2k+2,m+1)\big| \cdot\big|S_1(n,2k+1)\big|}{\,(2\pi)^{2k+1}\,} \asymp\\[8mm]
\displaystyle\qquad\qquad\quad
\,\asymp\,\frac{\,(-1)^m m!\,}{\pi}\sum_{k=0}^\infty\frac{\,(-1)^{k} \big|S_1(2k+2,m+1)\big| \,}{(2\pi)^{2k+1}}
\underbrace{\sum_{n=1}^\infty \frac{\,\big|S_1(n,2k+1)\big|\,}{\,n\cdot n!\,}}_{\zeta(2k+2)} =\\[3mm]
\displaystyle\qquad\qquad\quad
=\,(-1)^m m!\!\sum_{k=1}^{\infty}\frac{\, \big|S_1(2k,m+1)\big| \cdot{B}_{2k}}{(2k)!}
\end{array}
\ee
Remarking that such operations are not permitted when the series are not
absolutely convergent (which is why we wrote $\asymp$
instead of $=$),
we understand why the resulting series diverges. However, since this
series is alternating,
for any prescribed $m$ and $N$, one can always find such $\theta\in
(0,1)$, generally depending
on $m$ and $N$, that
\be\label{349fu3j4nf}
\begin{array}{ll}
\displaystyle
\gamma_m\:&\displaystyle
=\,\frac{1}{2}\delta_{m,0}+(-1)^{m} m!\cdot\!\sum_{k=1}^{N}\frac{\,\big|S_1(2k,m+1)\big|\cdot{B}_{2k}\,}{(2k)!}+ \\[7mm]
&\displaystyle\qquad\qquad\qquad\qquad\qquad\qquad
\,+\, \theta\cdot\frac{\,(-1)^{m} m!\!\cdot \big|S_1(2N+2,m+1)\big|\cdot{B}_{2N+2}\,}{(2N+2)!}=\\[8mm]
&=\begin{cases}
\displaystyle \phantom{+}\frac{1}{2}+ \frac{1}{12}-\frac{1}{120}+\frac{1}{252}-\frac{1}{240}+\frac{1}{132}
- \ldots \,\quad& m=0 \\[6mm]
\displaystyle -\frac{1}{12}+\frac{11}{720}-\frac{137}{15\,120}+\frac{121}{11\,200}-\frac{7\,129}{332\,640}+\frac{57\,844\,301}{908\,107\,200}
-\ldots \,\quad& m=1 \\[6mm]
\displaystyle +0-\frac{1}{60}+\frac{5}{336}-\frac{469}{21\,600}+\frac{6\,515}{133\,056}-\frac{131\,672\,123}{825\,552\,000}
+\frac{63\,427}{89\,100}
-\ldots \,\quad& m=2 \\[6mm]
\displaystyle -0+\frac{1}{120}-\frac{17}{1\,008}+\frac{967}{28\,800}-\frac{4\,523}{49\,896}+\frac{33\,735\,311}{101\,088\,000}
-\frac{9\,301\,169}{5\,702\,400}
+ \ldots \,\quad& m=3
\end{cases}
\end{array}
\ee
holds strictly.\footnote{There is another way to
obtain {\eqref{349fu3j4nf}}. Consider
first {\eqref{jhx2uxhbcxed}} at $n=1$,
and then use {\eqref{iu2d092n1}}
to show that \break $\,\left[\frac{d^{n}}{dx^n}\frac{\ln^m \! x}{x}\right]_{x=1}\!=m!\, S_1(n+1,m+1)\,$.}
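The rational terms displayed in {\eqref{349fu3j4nf}} may be regenerated with exact arithmetic from the standard recurrences for the unsigned Stirling numbers of the first kind and for the Bernoulli numbers. The following sketch (in Python; the helper names are our own) reproduces the terms for $m=0,1,2,3$ and illustrates the enveloping behaviour of the partial sums for $\gamma$:
\begin{verbatim}
from fractions import Fraction
from math import comb, factorial

def stirling1_unsigned(n, k):
    """|S_1(n,k)| from c(n,k) = c(n-1,k-1) + (n-1)*c(n-1,k)."""
    if k > n:
        return 0
    c = [[0] * (n + 1) for _ in range(n + 1)]
    c[0][0] = 1
    for i in range(1, n + 1):
        for j in range(1, i + 1):
            c[i][j] = c[i - 1][j - 1] + (i - 1) * c[i - 1][j]
    return c[n][k]

def bernoulli(n):
    """B_n as an exact fraction, via sum_{j<m+1} C(m+1,j) B_j = 0."""
    B = [Fraction(0)] * (n + 1)
    B[0] = Fraction(1)
    for m in range(1, n + 1):
        B[m] = -sum(Fraction(comb(m + 1, j)) * B[j] for j in range(m)) / (m + 1)
    return B[n]

def term(k, m):
    """k-th term of the enveloping series (349fu3j4nf) for gamma_m."""
    return (Fraction((-1)**m * factorial(m) * stirling1_unsigned(2*k, m + 1))
            * bernoulli(2*k) / factorial(2*k))

for m in range(4):   # reproduces the rational terms listed above
    print(m, [str(term(k, m)) for k in range(1, 5)])

# enveloping partial sums for gamma (m = 0): they bracket 0.5772156649...
s = Fraction(1, 2)
for k in range(1, 7):
    s += term(k, 0)
    print(k, float(s))
\end{verbatim}
For $m=0$ the successive partial sums $0.5833$, $0.5750$, $0.5790$, $0.5748,\ldots$ indeed bracket $\gamma=0.5772\ldots$ from alternating sides, the bracketing interval first shrinking and then widening again as divergence sets in.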
Moreover, taking into account {\eqref{uf87tfuy89}}, the above series may
always be written in a form without Stirling numbers.
For instance, for Euler's constant and for the first three Stieltjes
constants, it becomes
\be\label{c3pf3metdd}
\begin{array}{ll}
\displaystyle
\gamma\,
=\,+\frac{1}{2}\left\{1+\sum_{k=1}^{N}\frac{\,{B}_{2k}}{\,k\,}
+\, \theta\cdot\frac{\,{B}_{2N+2}\,}{\,N+1\,}\right\}\\[7mm]
\displaystyle
\gamma_1\,
=\,-\frac{1}{2}\sum_{k=1}^{N}\frac{\,{B}_{2k}\cdot H_{2k-1}\,}{\,k\,}
+\, \theta\cdot\frac{\,{B}_{2N+2}\cdot H_{2N+1}\,\,}{\,2N+2\,}
\\[7mm]
\displaystyle
\gamma_2\,
=\,+\frac{1}{2}\sum_{k=1}^{N}\frac{\,{B}_{2k}\cdot \big\{H^2_{2k-1} - H^{(2)}_{2k-1}\big\}\,}{\,k\,}
+\, \theta\cdot\frac{\,{B}_{2N+2}\cdot \big\{H^2_{2N+1} - H^{(2)}_{2N+1}\big\}\,\,}{\,2N+2\,}
\\[7mm]
\displaystyle
\gamma_3\,
=\,-\frac{1}{2}\sum_{k=1}^{N}\frac{\,{B}_{2k}\cdot \big\{H^3_{2k-1} - 3H_{2k-1} H^{(2)}_{2k-1}+2H^{(3)}_{2k-1}\big\}\,}{\,k\,}\,+ \\[4mm]
\displaystyle\qquad\qquad\qquad\qquad\quad
+\, \theta\cdot\frac{\,{B}_{2N+2}\cdot \big\{H^3_{2N+1} - 3H_{2N+1} H^{(2)}_{2N+1}+2H^{(3)}_{2N+1}\big\}\,\,}{\,2N+2\,}
\end{array}
\ee
where $0<\theta<1$ and $ N<\infty$ (these parameters are different in
all equations, and in each equation $\theta$, in general, depends on $N$).
By the way, one may notice that the first of these formul{\ae}~coincides
with the Stirling series {\eqref{lkce02m}}, while the other formul{\ae}~are, to
our knowledge, new and
appear not to have been published before.
The derived formal series are alternating and are sometimes referred to as
\emph{semi-convergent series} or \emph{divergent enveloping series}.\footnote{These series were
an object of study of almost all great
mathematicians; the reader interested
in a deeper study of these series may wish
to consult the following literature: \cite{borel_01,erdelyi_01,hardy_02}, \cite[Chapt.~XI]{bromwich_01},
\cite[Chapt.~4, \S1]{polya_01_eng},
\cite{knopp_01},
exercises \no374--\no388 in \cite[pp.~46--48]{gunter_03_eng}, \cite{whittaker_01,paplauskas_01_eng,vorobiev_01,copson_01,dingle_01,olver_01,bleistein_01,ramis_01,ramis_02,euler_03,malgrange_01}.}
It is also easy to see that they diverge very rapidly
\begin{eqnarray*}
\displaystyle
\frac{\,\big|S_1(2k,m+1)\big|\cdot{B}_{2k}}{(2k)!\,}\,&\sim&\,
2(-1)^{k-1}\frac{\,(2k-1)!\cdot\ln^m(2k-1)\,}{m!\cdot(2\pi)^{2k}}
\\[5mm]
\,&\sim&\,\frac{\,2\sqrt{\pi\,}(-1)^{k-1}}{m!}\cdot\frac{\,\ln^m k\,
}{\,\sqrt{k\,}\,}\cdot \left(\frac{k}{\pi e}\right)^{2k}\,,
\end{eqnarray*}
$k\to\infty$, $m=0, 1, 2,\ldots{}$,
so rapidly that even the corresponding
power series $\sum\frac{\,|S_1(2k,m+1)|\,}{(2k)!}{B}_{2k}x^{2k}$
diverges everywhere.\footnote{Coefficients $\big|S_1(2k,m+1)\big|$
and Bernoulli numbers both
grow very quickly: as $k\to\infty$ we have $\big|S_1(2k,m+\nobreak 1)\big
|\sim(2k-1)!\ln^m(2k-1)/m!$, see {\eqref{j6s8g64r}}, and
${B}_{2k}\sim2(-1)^{k-1}(2\pi)^{-2k}(2k)!$, see e.g.~\cite[p.~5]{krylov_01}, \cite[p.~261]{gelfond_01}.}
The behaviour of this series for the first two Stieltjes constants is shown in
{Fig.~\ref{ihce239hb}}.
\begin{figure}[!t]
\centering
\includegraphics[width=0.42\textwidth]{enveloping_g0_maple.eps}\hspace{8mm}
\includegraphics[width=0.42\textwidth]{enveloping_g1_maple.eps}
\caption{Partial sums of series \eqref{c3pf3metdd} for $\gamma$ and $\gamma_1$ (on the left and on the right respectively)
for $N=1, 2,\ldots, 10$.
Blue lines indicate the true value of $\gamma$ and $\gamma_1$, while dashed black lines with the red points display
corresponding partial sums given by \eqref{c3pf3metdd}.}
\label{ihce239hb}
\end{figure}
\subsection{Some series transformations applied to the derived
divergent series}
In order to convert {\eqref{349fu3j4nf}} into a convergent series, one
may try to apply various series transformations and regularization procedures.
However, since {\eqref{349fu3j4nf}} is strongly divergent, the use of
standard summation methods may not result in a convergent series. For example,
applying Euler's transformation\footnote{See e.g.~\cite[pp.~244--246]{knopp_01}, \cite[pp.~144 \& 170--171]{knopp_02},
\cite[pp.~269--278 \& 305--306]{vorobiev_01}.} we obtain another
series with rational coefficients only\\[4mm]
\be\notag
\begin{array}{ll}
\displaystyle
\gamma_m\, =\,\frac{1}{2}\delta_{m,0}+(-1)^{m} m!\!\sum_{k=1}^N \frac{1}{\,2^k}\sum_{n=1}^{k}
\frac{\,\big|S_1(2n,m+1)\big|\cdot{B}_{2n}}{\,(2n)!\,}\cdot\!\binom{k-1}{n-1} + R_m(N)=\\[8mm]
\end{array}
\ee
\be\label{kjwhe932hs}
\begin{array}{ll}
\displaystyle
=\begin{cases}
\displaystyle \phantom{+} \frac{1}{2}+\frac{1}{24}+\frac{3}{160}+\frac{89}{10\,080}+\frac{37}{8\,960}
+\frac{299}{147\,840}
+ \ldots\,\quad& m=0 \\[5mm]
\displaystyle -\frac{1}{24}-\frac{49}{2\,880}-\frac{187}{24\,192}-\frac{5\,431}{1\,612\,800}-\frac{91\,151}{53\,222\,400}
-\ldots \,\quad& m=1 \\[5mm]
\displaystyle -\frac{1}{240}-\frac{31}{13\,440}-\frac{4\,093}{2\,419\,200}-\frac{50\,789}{106\,444\,800}-\frac{602325403}{581188608000}
+\ldots \,\quad& m=2 \\[5mm]
\displaystyle +\frac{1}{480}-\frac{1}{40\,320}+\frac{1\,609}{3\,225\,600}-\frac{120\,749}{159\,667\,200}+\frac{694\,773\,847}{498\,161\,664\,000}
- \ldots \,\quad& m=3
\end{cases}
\vspace{2mm}
\end{array}
\ee
which are all divergent (their remainders $R_m(N)\to\infty$ as $N\to
\infty$).
At the same time, these series behave much better than {\eqref{349fu3j4nf}}. Thus,
the series for $\gamma$ starts to clearly diverge only from $N\geqslant
10$, and that
for $\gamma_1$ only from $N\geqslant8$, see {Fig.~\ref{lkf3rh0hjda}}.
The minimum error for the first series corresponds to $N=7$ and equals
$3\times10^{-4}$,
that for $\gamma_1$ also corresponds to $N=7$ and equals $9\times10^{-5}$.
Attempts to regularize series \eqref{349fu3j4nf}--\eqref{c3pf3metdd}
with the help of Ces\`aro summation are also fruitless
since their general term grows more rapidly than $k$ as $k\to\infty$.
Similarly, Borel's summation does not provide
a convergent result.
\begin{figure}[!t]
\centering
\includegraphics[width=0.42\textwidth]{enveloping_euler_g0_maple.eps}\hspace{8mm}
\includegraphics[width=0.42\textwidth]{enveloping_euler_g1_maple.eps}
\caption{Euler's transformation of series \eqref{c3pf3metdd} for $\gamma$ and $\gamma_1$ (on the
left and on the right respectively) for $N=1, 2,\ldots, 14$.
Blue lines indicate the true value of $\gamma$ and $\gamma_1$, while dashed black lines with the red points display
corresponding partial sums given by \eqref{kjwhe932hs}.}
\label{lkf3rh0hjda}
\end{figure}
\subsection{An estimate for generalized Euler's constants}
Finally, we note that the derived formal series {\eqref{349fu3j4nf}}
provides an estimate
for the generalized Euler's constants. Since this series is enveloping, its
true value always lies between
two neighbouring partial sums. If, for example, we retain only
the first non-vanishing term, the $m$th Stieltjes constant
is bracketed between the first and second non-vanishing partial sums.
Thus, accounting for known properties of the Stirling numbers of the
first kind
\be\notag
\begin{array}{ll}
\big|S_1(m+1,m+1)\big|\,=\,1 \\[3mm]
\big|S_1(m+2,m+1)\big|\,=\,\frac{1}{2}(m+1)(m+2) \\[3mm]
\big|S_1(m+3,m+1)\big|\,=\,\frac{1}{24}(m+1)(m+2) (m+3)(3m+8) \\[-3mm]
\end{array}
\ee
\be\notag
\big|S_1(m+4,m+1)\big|\,=\,\tfrac{1}{48}(m+1)(m+2) (m+3)^2(m+4)^2
\ee
which are valid for $m=1, 2, \ldots\,$, we have
\begin{equation}\label{lkx2xmx}
\begin{array}{ll}
\displaystyle-\frac{\,\big|{B}_{m+1}\big|\,}{m+1} < \gamma_m <
\frac{\,(3m+8)\cdot\big|{B}_{m+3}\big|\,}{24} - \frac{\,\big|{B}_{m+1}\big|\,}{m+1} ,
& m=1, 5, 9,\ldots\\[18pt]
\displaystyle
\frac{\,\big|{B}_{m+1}\big|\,}{m+1} - \frac{\,(3m+8)\cdot\big|{B}_{m+3}\big|\,}{24}
< \gamma_m < \frac{\,\big|{B}_{m+1}\big|\,}{m+1} , & m=3, 7, 11,\ldots\\[18pt]
\displaystyle -\frac{\,\big|{B}_{m+2}\big|\,}{2} < \gamma_m
< \frac{\,(m+3)(m+4)\cdot\big|{B}_{m+4}\big|\,}{48} - \frac{\,\big|{B}_{m+2}\big|\,}{2} ,
\qquad & m=2, 6, 10, \ldots\\[18pt]
\displaystyle
\frac{\,\big|{B}_{m+2}\big|\,
}{2} - \frac{\,(m+3)(m+4)\cdot\big|{B}_{m+4}\big|\,}{48}
< \gamma_m < \frac{\,\big|{B}_{m+2}\big|\,}{2}, & m=4, 8, 12, \ldots\\
\end{array}
\end{equation}
The case $m=4, 8, 12, \ldots$ may also be extended to $m=0$, if we recall that in this case it gives bounds
for $\gamma-\frac12$, see {\eqref{349fu3j4nf}}. This case yields
$\,\frac{23}{40}<\gamma<\frac{7}{12}\,$, which is undoubtedly true. Note also that the above
bounds are always rational, which may be of interest in certain circumstances.
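These bounds are straightforward to verify numerically. The sketch below (in Python; the reference values of $\gamma_m$ are standard ones, quoted only to the digits we are confident of) checks {\eqref{lkx2xmx}} for $m=1,2,3,4$ using exact Bernoulli numbers:
\begin{verbatim}
from fractions import Fraction
from math import comb

def bern_abs(n):
    """|B_n| as an exact fraction (same recurrence as before)."""
    B = [Fraction(0)] * (n + 1)
    B[0] = Fraction(1)
    for m in range(1, n + 1):
        B[m] = -sum(Fraction(comb(m + 1, j)) * B[j] for j in range(m)) / (m + 1)
    return abs(B[n])

# accepted values of the first Stieltjes constants
gamma_ref = {1: -0.0728158, 2: -0.0096904, 3: 0.0020538, 4: 0.0023254}

for m, g in gamma_ref.items():
    a = bern_abs(m + 1) / (m + 1)
    b = Fraction(3 * m + 8, 24) * bern_abs(m + 3)
    c = bern_abs(m + 2) / 2
    d = Fraction((m + 3) * (m + 4), 48) * bern_abs(m + 4)
    if   m % 4 == 1: lo, hi = -a, b - a       # m = 1, 5, 9, ...
    elif m % 4 == 3: lo, hi = a - b, a        # m = 3, 7, 11, ...
    elif m % 4 == 2: lo, hi = -c, d - c       # m = 2, 6, 10, ...
    else:            lo, hi = c - d, c        # m = 4, 8, 12, ...
    print(m, float(lo) < g < float(hi),
          f"{float(lo):+.5f} < {g:+.7f} < {float(hi):+.5f}")
\end{verbatim}
All four checks succeed.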
Estimation \eqref{lkx2xmx} is relatively tight for moderate values of $m$, and
becomes less and less accurate as $m$ increases.
However, even for large $m$, it remains more accurate than Berndt's
well-known estimation
\be
\big|\gamma_m\big|\,\leqslant\,
\begin{cases}
\displaystyle \frac{2\,(m-1)!}{\pi^m}\,,\qquad & m=1, 3, 5,\ldots \\[4mm]
\displaystyle \frac{4\,(m-1)!}{\pi^m}\,,\qquad & m=2, 4, 6,\ldots
\end{cases}
\ee
see \cite[pp.~152--153]{berndt_02}, more accurate than Lavrik's
estimation $|\gamma_m|\leqslant m! \, 2^{-m-1}$, see
\cite[Lemma~4]{lavrik_01_eng}, more accurate than Israilov's estimation
\be
|\gamma_m|\leqslant \,\frac{ m!\, C(k)\,}{(2k)^{m}}
\ee
for $k=1,2,3$, where $C(1)=\frac
{1}{2}$, $C(2)=\frac{7}{12}$, $C(3)=\frac{11}{3}$,
see \cite{israilov_01,adell_01}, and more accurate than
Nan-You--Williams' estimation
\be
\big|\gamma_m\big|\,\leqslant\,
\begin{cases}
\displaystyle \frac{2\,(2m)!}{m^{m+1}(2\pi)^m}\,,\qquad & m=1, 3, 5,\ldots \\[4mm]
\displaystyle \frac{4\,(2m)!}{m^{m+1}(2\pi)^m}\,,\qquad & m=2, 4, 6,\ldots
\end{cases}
\ee
see \cite[pp.~148--149]{zhang_01}.
Besides, our estimation also carries a sign, while the above estimations
are signless.
At the same time, {\eqref{lkx2xmx}} is worse than
Matsuoka's estimation \cite{matsuoka_01,matsuoka_02}, $
|\gamma_m|<10^{-4}\ln^m m$, $m\geqslant5$,
which, as far as we know, is currently the best known estimation in
terms of elementary functions
for the Stieltjes constants.\footnote{Numerical simulations suggest
that Matsuoka's estimation \cite{matsuoka_01,matsuoka_02} may
be considerably improved, see e.g.~\cite{kreminski_01}. Recently,
Knessl and Coffey reported
that they succeeded in significantly improving Matsuoka's estimation and
even in predicting the sign of $\gamma_m$. The authors published
their findings in \cite{knessl_01}, and also reprinted them in \cite[Theorem~1]{knessl_02}. We, however, were not able to verify these results,
because several important details related to $v(n)$ from pp.~179--180
\cite{knessl_01} were omitted.
An estimation of a similar nature was later proposed by Adell \cite{adell_01}, but Saad Eddin \cite[Tab.~2]{eddin_01}, \cite{eddin_02} reported
that Adell's estimation may provide less accurate results than
Matsuoka's estimation. Saad Eddin also provides an interesting
estimation for the Stieltjes constants, see \cite[Tab.~2]{eddin_01}
and mentions some further works related to the estimations
of the derivatives of certain $L$-functions. Yet, very recently we
found another work devoted to
the estimation of Stieltjes constants \cite{ahmed_01}; the latter
resorts to the Lambert $W$-function.\\[-17mm]}
Note, however, that the bounds {\eqref{lkx2xmx}}
may be improved if we transform the parent series {\eqref{349fu3j4nf}}
into a less divergent one, provided the new series remains enveloping.
\section*{Acknowledgments}
The author is grateful to Yaroslav Naprienko for his remarks and comments.
\section{Introduction}
\label{section:introduction}
X-ray binaries (XRBs) are systems in which a compact object such as a black hole (BH) or neutron
star (NS) accretes material from a secondary star. This material is believed to form an accretion disk surrounding the
compact object, producing intense X-ray radiation via both blackbody radiation from the inner regions of disk itself, and
Compton up-scattering of lower energy photons from a hot corona. These systems are further subdivided
into two classes, based upon their properties. High-mass X-ray binaries have O or B class secondaries, are
fairly steady X-ray sources and are thought to have evolved in-situ
from a binary system. In contrast, low-mass X-ray binaries (LMXBs) have very faint secondaries, show
dramatic changes in X-ray luminosity (X-ray bursts) and their evolutionary path is not clear. They may have evolved,
through mass transfer and loss, from a situation where the donor star was more massive \citep{2002ApJ...565.1107P},
or alternatively in dense regions like globular clusters it is possible for a lone NS or BH to capture a low mass
companion \citep[e.g.][]{2006csxs.book..341V}.
LMXBs are highly variable systems, sometimes exhibiting an accretion disk-like spectrum (referred
to as the `high/soft' state) and sometimes a power law spectrum (the `low/hard' state). Sources in the
low/hard state usually also show a radio jet which disappears as the source transitions into the
high/soft state \citep[][and refs therein]{2004MNRAS.355.1105F}.
Accretion disks are predicted to have a flared geometry, meaning that
the centrally generated X-rays will illuminate the surface of the disk, heating it and causing the surface
layers to evaporate
\citep[e.g.][]{shakura_sunyaev,1983ApJ...271...70B,1991ApJ...374..721K,1993ApJ...412..267R,1996ApJ...461..767W,
2002ApJ...581.1297J,2002ApJ...565..455P} forming a hot disk atmosphere.
The observational signature of this hot atmosphere is both X-ray
and UV emission lines \citep[e.g.][]{1984ApJ...278..270B,1993ApJ...412..267R,2005ApJ...625..931J}
as the X-rays are reprocessed by
not only the surface of the disk, but also the surface of the secondary \citep[e.g.][]{1990A&A...235..162V}.
This heating of the secondary has been proposed as a possible mechanism by which mass is transferred from
the star to the accretion disk \citep[e.g.][]{1981ApJ...243..970L,1982ApJ...258..260L}.
As observations of XRBs have improved,
blue shifted absorption lines have been observed
in more than a dozen high/soft state NS and BH LMXBs \citep[][and refs therein]{2013AcPol..53..659D}.
These provide good evidence of the existence of outflowing material,
most likely associated with the accretion disk, although the driving mechanism
of these `disk winds' is a subject of ongoing discussion.
A recent review of observations of XRBs is given by \cite{2013AcPol..53..659D} and we summarize their data here.
They cite
outflow velocities between 400 and 3000 $\rm{km~s^{-1}}$ in 30\% of NS systems, and
100 and 1300 $\rm{km~s^{-1}}$ in 85\% of BH systems. Photoionization modeling is usually employed to obtain
estimates of the physical state of the absorbing gas, and such analysis gives a column density of between
$\rm{4\times10^{22}}$ and $\rm{20\times10^{22}~cm^{-2}}$ and an ionization parameter of $2.5 \leq \log(\xi) \leq 4.5$ for NS
LMXBs. The ionization parameter is a measure of the ionization state of the gas, and we use the common
definition
\begin{equation}
\xi=\frac{L_x}{nr^2}
\label{equation:xi}
\end{equation}
where ${L_X}$ is the ionizing luminosity, n is the number density of the gas, and r is the distance
between the source of ionizing flux and the gas.
BH LMXBs have a wider range of properties, with a column density between
$0.5\times10^{20}$ and $\rm{6\times10^{23}~cm^{-2}}$ and an ionization parameter of $1.8 \leq \log(\xi) \leq 6$.
The ionization parameter is degenerate in density and distance, so in order to obtain information about the
size/distance of the absorbers, it is necessary to break the degeneracy by measuring the density of the absorbing
gas. This has been done for the microquasar GRO J1655-40 (\citealt{2006Natur.441..953M,2008ApJ...680.1359M}
but also see \citealt{2006ApJ...652L.117N}) and these measurements suggest a relatively high density
($n\simeq10^{14}~\rm{cm^{-3}}$) which in turn implies a small radius. Those systems
exhibiting absorption appear to be generally observed edge on \citep{2012MNRAS.422L..11P}, which implies an
equatorial geometry
for the absorbing gas. As we will discuss later, this does not necessarily mean that any outflow is
also equatorial - it can equally well be a bipolar flow, that exhibits stratification in physical properties such as
density, ionization parameter or both.
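To make the degeneracy concrete: given a measured density, the definition \eqref{equation:xi} can be inverted for the distance of the absorber. The sketch below uses round numbers of the order quoted for GRO J1655-40 (the exact inputs are illustrative, not taken from the cited papers):
\begin{verbatim}
import math

L_X = 5.0e37   # ionizing luminosity (erg/s), illustrative only
xi  = 1.0e4    # inferred ionization parameter (erg cm/s)
n   = 1.0e14   # measured number density (cm^-3)

# invert xi = L_X / (n r^2) for the distance of the absorber
r = math.sqrt(L_X / (n * xi))
print(f"r ~ {r:.1e} cm")   # ~7e9 cm for these inputs
\end{verbatim}
Densities a few orders of magnitude lower would place the same $\xi$ at correspondingly larger radii, which is precisely the degeneracy described above.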
Possible mechanisms to drive disk winds are magnetocentrifugal
acceleration of gas guided by magnetic fields threading the disk
\citep[e.g.][]{blandford_payne_82,1992ApJ...385..460E,2006Natur.441..953M}, radiation pressure
acting on electrons or lines \citep[e.g.][]{1980AJ.....85..329I,1993ApJ...409..372S,1985ApJ...294...96S,
2002ApJ...565..455P} or thermal expansion of the hot disk atmosphere
as a `thermal wind' when
the gas thermal velocity exceeds the local escape velocity \citep{1983ApJ...271...70B,1996ApJ...461..767W}.
Whatever the mechanism, these winds are of great interest since they firstly provide a
way in which the XRB can interact with its surroundings, and secondly, if the mass flow is large enough, they
could be the reason for the observed state change \citep[e.g.][]{2009Natur.458..481N,2012MNRAS.422L..11P,
{2013ApJ...762..103K}}. The fact that these absorption features appear in high/soft state sources but not in low/hard state
sources \citep{2013AcPol..53..659D} is further evidence that they are linked to state change.
In this work, we build upon earlier simulations of thermal disk winds \cite[][hereafter L10]{2010ApJ...719..515L}.
L10 modeled the launching of a wind in a system based on GRO J1655-40.
In that work, a wind was launched but the velocity
was too slow to account for the observed blue shifts of absorption lines and the density in the fastest parts
of the wind was lower than the observed values for GRO J1655-40. Although these results suggest that
thermal winds are unlikely to be the source of the
absorbing material in that system, thermal driving is still an important mechanism and deserving of further
investigation. This is because it
is almost certain to operate at some level in LMXBs,
and even if it is not the principal source of the X-ray absorbing gas, it may well be important in the overall evolution
of the system.
For example, L10 showed that significant mass would be lost from the disk by thermal winds, and this mass
loss is of the same order of magnitude
as that which would be expected to destabilize the disk and perhaps drive state change \citep{1986ApJ...306...90S}.
In addition, we cannot be sure if the wind in GRO J1655-40 is typical or extreme,
and new observations which are likely to come from the AstroH satellite \citep[][and refs therein]
{2014arXiv1412.1164D,2014arXiv1412.1173M}
may provide
more examples. We intend to investigate whether, with modifications to the heating and cooling rates, we can produce a
thermally driven wind with physical properties more in line with current observations, and provide a framework
which may be of use in understanding future observations.
The simulations we present here are all computed in the same way as described in L10, with three modifications.
Firstly, we consider only the optically thin case, so the radiation flux at any point in the simulation can be
simply computed assuming a $1/r^2$ drop off from a centrally located source of X-rays. Secondly, we reduce
the computational domain size from $20R_{IC}$ in the original simulations to just $2R_{IC}$.
$R_{IC}$ is the Compton radius, defined as the location in an accretion disk
where the local isothermal sound speed at the Compton temperature, $T_{C}$, of the illuminating spectral energy
distribution (SED) exceeds the local
escape velocity. The Compton radius is therefore given by
\begin{equation}
R_{IC}=\frac{GM_{BH}\mu m_H}{k_BT_C}
\end{equation}
where $M_{BH}$ is the mass of the central object, equal to 7$M_{\odot}$ for these simulations, $\mu$ is the
mean molecular mass which we set to 0.6, and other symbols have the usual meaning. The Compton temperature
for the illuminating SED in these simulations is $T_C=1.4\times10^7~\rm{K}$ which gives a Compton radius of
$\rm{4.8\times10^{11}~cm}$. In all these simulations, as in L10, the disk is assumed to be flat and thin - defined
via a density boundary condition at the midplane.
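As a quick check of the numbers quoted above, the Compton radius follows directly from its definition (a minimal sketch in CGS units):
\begin{verbatim}
G     = 6.674e-8     # gravitational constant (cm^3 g^-1 s^-2)
M_sun = 1.989e33     # solar mass (g)
m_H   = 1.6726e-24   # hydrogen mass (g)
k_B   = 1.3807e-16   # Boltzmann constant (erg/K)

M_BH = 7 * M_sun     # central mass used in these simulations
mu   = 0.6           # mean molecular weight
T_C  = 1.4e7         # Compton temperature of the SED (K)

R_IC = G * M_BH * mu * m_H / (k_B * T_C)
print(f"R_IC = {R_IC:.2e} cm")   # ~4.8e11 cm, as quoted in the text
\end{verbatim}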
The change in domain size is motivated
by preliminary investigations which show that the acceleration zone for the wind was located inside $0.1R_{IC}$, in line
with previous work \citep[e.g.][]{1983ApJ...271...70B,1996ApJ...461..767W,2002ApJ...565..455P}. In essence, this is because
the most efficient acceleration of gas via thermal expansion occurs as the gas is heated past the lower equilibrium
temperature on the thermal equilibrium curve (see Figure \ref{figure:stability_curves} and related discussion in the
next section). This occurs at fairly low values of ${\xi}$, which
is where the gas is densest, i.e. close to the central object.
The gas then enters an unstable heating zone where the
next stable temperature is over an order of magnitude
higher. Rapid heating occurs resulting in rapid acceleration as the gas expands. This change to the domain size
means that we better simulate the densest parts of the wind where absorption is most likely to occur. We neglect the
outer parts of the disk which means that our calculated mass loss rates are lower limits.
Finally, but central to this project, we will modify the heating and cooling
rates assumed in the heating term in the thermodynamic equations. We examine several cases, each with different
rates, and each
representing a modification to the thermal equilibrium curve. This changes the way in which
the gas passes through the `heating' region, and we will see that this can have a profound impact on the velocity,
density and hence mass loss rate of the wind.
In the next section, we briefly discuss our methodology. We then discuss the details of the flows produced
by the different heating/cooling parameters. Finally, we discuss the relevance of our results to the ongoing
discussion of thermal wind in XRBs.
\section{Method}
\label{section:methods}
Our simulations are based upon the simulations presented in L10 (run C8). Fig. \ref{figure:luketic} shows
a rerun of that simulation, but computed upon a smaller grid, concentrating on the inner parts of the flow
where the acceleration occurs. The wind in this simulation, and all others presented here is shown
at a time of 220 000s which represents 25 sound crossing times (${R_{IC}/c_s}$), a time that is sufficient
for the wind to have settled
down to a steady state. Note that this timescale, about 2.5 days, also gives us an estimate of the lag between
a change in the luminosity or SED of the central source of ionizing radiation and the response of the wind
through a change in structure. Of course changes in the ionization state of the wind would likely occur
more quickly.
There are several important features of this model that can be seen in the
figure. Firstly, we see a dense, turbulent `corona' at ${\omega<0.3R_{IC},~z<0.2R_{IC}}$ where
gas flow is severely inhibited by gravity and reaches steady state only in a time averaged sense. The streamlines
starting just outside this region \emph{can} escape; however, the flow there is still time dependent, hence the streamlines show `kinks'.
Secondly, outside the inner corona, the
flow starts off moving vertically, as gas expands away
from the accretion disk which lies along the ${\omega}$ axis. The streamlines bend outwards, partly due to
the pressure gradient and partly as a result of conservation of angular momentum which means that the
centrifugal force acting on the rotating gas is not balanced by gravity. They are
self similar in this region. A fast, fairly dense flow
is produced which escapes at angles less than about 45\degree. Finally, we see a fast, low density infall
at polar angles.
\begin{figure*}
\includegraphics{fig1_hdf051dw83.eps}
\caption{The density structure
of model A (see Table \ref{table:wind_param}). Overplotted are streamlines (grey lines), the 80\degree~and
60\degree~sightlines (dashed lines) and arrows showing the velocity field.
Also shown is the location of the M=1 contour (black line). }
\label{figure:luketic}
\end{figure*}
Although the L10 simulation did produce a fast fairly dense wind, as mentioned in the introduction
the density was 2-3 orders of magnitude
lower than that inferred from observations of GRO J1655-40. Even so, the total mass loss rate
through the wind was
seven times the accretion rate.
We use the same version of the hydrodynamics code ZEUS-2D \cite{1992ApJS...80..753S} extended
by \cite{proga_stone_kallman} to carry out 2.5D simulations of the flow. In this code, radiative heating
and cooling of the gas is computed using the parametrized rates first described by \cite{1994ApJ...435..756B}.
This parameterization includes Compton heating and cooling, photoionization heating, recombination
cooling, bremsstrahlung cooling and collisional (line) cooling as functions of temperature T and ionization
parameter ${\xi}$. The ionization parameter is defined as in Equation \ref{equation:xi}. The number
density of the gas is related to the density of the gas by ${n=\rho/\mu m_H}$ where the mean molecular
weight ${\mu}$ is set to
0.6. Optical depth effects are not considered in the simulations here, so the factor of $L_X/r^2$ is proportional
to the local flux, reduced only by geometric dilution.
Compton heating and cooling is given by
\begin{equation}
G_{Compton}=8.9\times10^{-36}\xi(T_X-4T)~\rm{(ergs~s^{-1}~cm^{3})}
\end{equation}
where ${T_X}$ is the temperature of the illuminating power law spectrum (set to $5.6\times10^7~\rm{K}$).
Photoionization heating and
recombinational cooling are subsumed into one term, referred to as ``X-ray heating/cooling'', given by
\begin{equation}
G_X=1.5\times10^{-21}\xi^{1/4}T^{-1/2}(1-T/T_X)~\rm{(ergs~s^{-1}~cm^{3})}
\label{equation:xray}
\end{equation}
whilst bremsstrahlung cooling is parametrized by
\begin{equation}
L_b=3.3\times10^{-27}T^{1/2}~\rm{(ergs~s^{-1}~cm^{3})}.
\end{equation}
Finally, line cooling is given by
\begin{equation}
L_l=\left[1.7\times10^{-18}\exp{\left(-T_L/T\right)}\xi^{-1}T^{-1/2}+10^{-24}\right]\delta
\end{equation}
where ${T_L}$ has the units of temperature and parametrizes the line cooling. It is set to $1.3\times10^5$ K.
The ${\delta}$ parameter allows one to
reduce the effectiveness of line cooling due to opacity effects. In an optically thin plasma, ${\delta}$ is
set to 1. The units of ${L_l}$ are the same as for the other rates.
We are interested in investigating whether simple
changes to the heating and cooling rates, thereby modifying the thermal equilibrium curve, can
increase the velocity and density of the wind to better
match observations.
To modify these rates, we apply pre-factors to each of the mechanisms
and so the equation for the net cooling rate ${\mathcal{L}}$ $\rm{(ergs~s^{-1}~g^{-1}}$),
which appears in the energy conservation equation, becomes
\begin{equation}
\rho\mathcal{L}=n^2(A_CG_{Compton}+A_XG_X-A_lL_l-A_bL_b).
\label{equation:heatcool}
\end{equation}
The first six lines of Table \ref{table:wind_param} gives the values of pre-factors,
${T_X}$ and ${L_X}$ for the original
L10 simulation (run C8, denoted A) and six further simulations.
The line cooling pre-factor effectively replaces the ${\delta}$ parameter and therefore represents
a measure of line opacity. Modifications of ${A_X}$, the photoionization/recombination rate pre-factor
can be justified by a change to the illuminating SED or metallicity of the gas.
Calculations of
the precise nature of the connection between these parameters and ${A_X}$ are beyond the scope of this
work, our values are not intended to represent a particular case, rather we adjust them to produce the
desired thermal equilibrium curve.
We change ${A_b}$ somewhat arbitrarily to make gas thermally stable everywhere. Our aim here
was to investigate the relationship between thermal instability (TI) and efficient acceleration. The upper and lower stable
temperatures remain the same and so we isolate the effect of instability with this experiment.
The quantity $\xi_{cold,max}$ is the value of the ionization parameter when
the flow becomes thermally unstable. \cite{1965ApJ...142..531F} demonstrated that in a non-dynamical flow,
subjected to isobaric perturbations, thermodynamic instability results when
$\left[\delta \mathcal{L}/\delta T\right]_p>0$. This is also where the gradient
${d\ln(T)/d\ln(\xi)}$ becomes greater than 1. Table \ref{table:wind_param} also gives $T_{eq}$, the equilibrium
temperature expected for $\xi_{cold,max}$. It can be shown that if line cooling is balanced by X-ray heating on the
cool branch of the stability curve, then this temperature is expected to be ${4/5~T_L}$. This is 104 000K,
and looking at the values in Table \ref{table:wind_param}, we see that the actual values
are very close to this. We also include values of $\Xi_{cold,max}$, which is the ratio of
radiation pressure to gas pressure when the flow becomes thermally unstable. This is given by
$\Xi=F_{ion}/nk_bTc=\xi/4\pi k_bTc$ \citep{1981ApJ...249..422K}, where the temperature T is set to
the equilibrium temperature at the onset of instability.
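The thermal equilibrium curves of Figure \ref{figure:stability_curves} can be reconstructed directly from the parametrized rates. The sketch below (in Python; this is our own reimplementation, not the ZEUS-2D code) scans a temperature grid at fixed $\xi$ and records every root of the net rate, tracing out the cool, intermediate and hot branches:
\begin{verbatim}
import numpy as np

T_X, T_L = 5.6e7, 1.3e5   # case A parameters

def net_rate(T, xi, A_C=1.0, A_X=1.0, A_l=1.0, A_b=1.0):
    """Net heating minus cooling, per the parametrized rates above."""
    G_C = 8.9e-36 * xi * (T_X - 4.0 * T)
    G_X = 1.5e-21 * xi**0.25 / np.sqrt(T) * (1.0 - T / T_X)
    L_l = 1.7e-18 * np.exp(-T_L / T) / (xi * np.sqrt(T)) + 1e-24
    L_b = 3.3e-27 * np.sqrt(T)
    return A_C * G_C + A_X * G_X - A_l * L_l - A_b * L_b

T_grid = np.logspace(4, 8, 4000)
for xi in np.logspace(0, 4, 9):
    rate = net_rate(T_grid, xi)
    flip = np.sign(rate[:-1]) != np.sign(rate[1:])
    roots = np.sqrt(T_grid[:-1][flip] * T_grid[1:][flip])  # midpoints
    print(f"xi = {xi:8.1f}  equilibrium T (K): {roots}")
\end{verbatim}
Scanning $\xi$ more finely on the cool branch should recover the value $\log\xi_{cold,max}\approx2.1$ quoted in Table \ref{table:wind_param} as the last $\xi$ for which a root near $10^5$~K survives.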
To produce comparable simulations, we use a density boundary condition to
ensure that the ionization parameter is equal to $\xi_{cold,max}$ at the Compton
radius. The density at the midplane at the start of our simulations is defined by the equation
\begin{equation}
\rho(r)=\rho_0\left(\frac{r}{R_{IC}}\right)^{-2},
\end{equation}
where ${\rho_0}$ is given by
\begin{equation}
\rho_0=\frac{L_Xm_H\mu}{\xi_{cold,max}R_{IC}^2}.
\end{equation}
This is given in the table along with ${R_{IC}}$ for each case. Since the density in the disk is
proportional to $1/r^2$, ${\xi}$ is constant in the disk plane; it takes the same value in cases
A and Ah, the tenfold higher luminosity of the latter being matched by a tenfold higher midplane density.
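For reference, the case A normalization follows directly from the two equations above (a minimal sketch; input values as quoted in the text and Table \ref{table:wind_param}):
\begin{verbatim}
m_H, mu = 1.6726e-24, 0.6
L_X  = 3.3e37        # ionizing luminosity (erg/s)
xi_c = 10**2.10      # xi_cold,max for case A
R_IC = 4.82e11       # Compton radius (cm)

rho_0 = L_X * m_H * mu / (xi_c * R_IC**2)
print(f"rho_0 = {rho_0:.2e} g/cm^3")   # ~1.1e-12, as in Table 1
\end{verbatim}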
The hydrodynamic calculations are carried out in a spherical polar coordinate system,
running from 0 to 90\degree~ in angle, and from ${R_{in}}$ to ${R_{out}}$ in the
radial direction. The zone spacing increases with radius, such that ${ dr_{k+1}/dr_{k}=1.05}$
giving finer discretization in the inner parts of the flow. The zone spacing reduces with increasing
angle ${ d\theta_{k+1}/d\theta_{k}=0.95}$ giving more resolution close to the disk. These
parameters, together with the number of points used in the two dimensions are given in Table
\ref{table:wind_param}.
\begin{table*}
\begin{tabular}{p{6cm}p{1cm}p{1cm}p{1cm}p{1cm}p{1cm}p{1cm}p{1cm}}
\hline Prefactors & A &Ah& B & C & D &E &F\\
\hline \hline
$A_l$ & 1.0 &1.0& 0.2 & 1.0 & 1.0 & 1.0 & 0.076\\
$A_C$ & 1.0 &1.0& 1.0 & 1.0 & 1.0 & 1.0 & 1.0\\
$A_b$ & 1.0 &1.0& 1.0 & 3.9 & 1.0 & 1.0 & 1.0\\
$A_X$ & 1.0 &1.0& 4.0 & 1.0 & 1.0 & 1.0 & 1.0\\
\hline
\multicolumn{5}{l}{Physical Parameters}\\
\hline
$T_X (10^6~$K) & 56 & 56 & 56 &56 & 0.80 & 230 & 295 \\
$L_X (3.3\times10^{37}~\rm{ergs~s^{-1}})$& 1 & 10 & 1 & 1 & 1& 1 & 1\\
$log(\xi_{cold,max})$& 2.10 & 2.10 & 0.91 & N/A & N/A & 2.07 & 1.2\\
$log(\Xi_{cold,max})$& 1.33 & 1.33 & 0.17 & N/A & N/A & 1.32 & 0.43\\
$T_{eq}(\xi_{cold,max}) (10^3~\rm{K})$ & 111 & 111 & 106 & N/A & N/A & 109 & 113\\
$\rho_0 (10^{-12}~\rm{g~cm^{-3})}$ & 1.14 & 11.4 & 17.4 & 1.14 & 1.14 & 22.4 & 281 \\
$R_{IC} (10^{10}~$cm) & 48.2 & 48.2 & 48.2 & 48.2 & 3380 & 11.5 & 9.15 \\
\hline
\multicolumn{5}{l}{Grid parameters}\\
\hline
$R_{min} (10^{10}~$cm) & 2.4 & 2.4 & 2.4 & 2.4 & 2.4 & 1.2 & 1.2 \\
$R_{max} (10^{10}~$cm) & 96& 96& 96& 96& 96& 96& 96\\
$R_{ratio}$ &1.05&1.05&1.05&1.05&1.05&1.05&1.05\\
$N_R$ & 80& 80& 80& 80& 80& 80& 100\\
$\theta_{min}$ & 0.0& 0.0& 0.0& 0.0& 0.0& 0.0& 0.0\\
$\theta_{max}$ & 90.0& 90.0& 90.0& 90.0& 90.0& 90.0& 90.0\\
$\theta_{ratio}$ & 0.95& 0.95& 0.95& 0.95& 0.95& 0.95& 0.95\\
$N_{\theta}$ & 100& 100& 100& 100& 100& 100& 100\\
\hline
\multicolumn{5}{l}{Wind properties}\\
\hline
$V_r(max~blueshifted)/100 ~\rm{km~s^{-1}}$ & 4.47 & 6.76 & 6.78 & 4.28 & 2.11 & 14.5 & 16.3\\
$V_r(\rho>1e12,max~blueshifted) /100 ~\rm{km~s^{-1}}$ & 1.18 & 1.98 & 3.26 & 0.203 & 0.186 & 5.03 & 2.51\\
$n_H~(60\degree~sightline)(\times10^{22}~\rm{cm^{-2})}$ & 5.71 & 53.0 & 74.2 & 1.01 & 0.281 & 13.3 & 15.0\\
$n_H~(80\degree~sightline)(\times10^{22}~\rm{cm^{-2})}$ & 46.3 & 476 & 441 & 22.6 & 24.1 & 16.6 & 255\\
$\dot{M}_{wind,disk} (\dot{M}_{acc})$ & 3.72 & 3.61 & 41.5 & 0.878 & 0.723 & 4.20 & 25.7\\
$\dot{M}_{wind,outer}(\dot{M}_{acc})$ & 3.45 & 3.36 & 38.3 & 0.638 & 0.758 & 3.85 & 24.9\\
\hline
\end{tabular}
\caption{The heating and cooling parameters adopted in the simulations, and
some key parameters of the resulting winds.}
\label{table:wind_param}
\end{table*}
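The stretched mesh described above is simple to construct: the cell widths form a geometric progression normalized to the domain size. A minimal sketch follows (our own construction; the actual ZEUS-2D gridding routine may differ in detail):
\begin{verbatim}
import numpy as np

def stretched_edges(x_min, x_max, n, ratio):
    """n cell widths in geometric progression: dx_{k+1} = ratio * dx_k."""
    widths = ratio ** np.arange(n)
    widths *= (x_max - x_min) / widths.sum()   # normalize total extent
    return x_min + np.concatenate(([0.0], np.cumsum(widths)))

r_edges     = stretched_edges(2.4e10, 96e10, 80, 1.05)    # cm
theta_edges = stretched_edges(0.0, np.pi / 2, 100, 0.95)  # rad

print(f"inner dr = {r_edges[1] - r_edges[0]:.2e} cm, "
      f"outer dr = {r_edges[-1] - r_edges[-2]:.2e} cm")
\end{verbatim}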
\begin{figure}[h]
\includegraphics{fig2.eps}
\caption{Thermal equilibrium curves for the seven cases considered.
The solid curves show the equilibrium temperature ($T$) vs ${\xi/T}$.
For each case, the crosses represent actual data from the 60\degree~ sightline and the
circles are from the 80\degree~ sightline. The size of the symbol shows
the radial distance along the sightline, with larger symbols representing
the largest distances. The triangles show the point at which the heating
curve becomes unstable.}
\label{figure:stability_curves}
\end{figure}
The solid lines on Figure \ref{figure:stability_curves} show the thermal equilibrium temperature
predicted for the
different cases plotted against ${\xi/T}$, which is equal to the ratio between the radiation
and gas pressures. In an outflow, the gas pressure cannot increase along a streamline. This means that
${\xi/T}$ must always increase\footnote{Note that this is in fact only true in the case of weak or
zero magnetic fields. If magnetic fields are strong, then the gas pressure \emph{can} increase and the
gas can move to the left on the equilibrium curve \citep{1992ApJ...385..460E,2000ApJ...537..134B}}, and so
as a parcel of gas is heated, starting at the bottom left of the graph,
it will follow these curves until
the gradient becomes negative (the onset of TI, if present).
This location on the curve represents the maximum temperature of the cold
branch. The triangle symbol on the graphs shows this point. At this point, one expects the
gas to quickly heat up to the upper equilibrium temperature. This maximizes the rate of
energy transfer between the radiation field and the gas, driving expansion and hence
acceleration. Thus it is in the unstable zone where the most `efficient' acceleration
of the gas takes place in order to form an
outflow.
The behavior of the different
cases is best explained in the context of the shape of these thermal equilibrium curves.
Cases A and Ah have identical thermal equilibrium curves, becoming unstable at the same value of
${\xi}$ and reaching the same upper equilibrium temperature (set by the balance of
Compton heating and cooling). The difference in luminosity between the two cases has
no effect on the shape of the equilibrium curves, however as we will see in the next section,
it does affect how the gas is heated. Given the same unstable zone, but enhanced radiation
density, one would expect case Ah to produce a faster, denser flow \citep[see][]{1983ApJ...271...70B}.
Case B has reduced line cooling and enhanced photoionization heating. This means that
the gas becomes unstable at a lower ${\xi}$. Changes to the SED have been shown to
have this effect \citep[e.g., see][who use detailed photoionization calculations]{ 2000ApJ...537..134B},
so our approach is not unreasonable.
The distance to the radiation source will be
largely unchanged, so low ${\xi}$ equates to higher density
and so one would expect that the resulting outflow would be denser.
Cases C and D are both attempts to remove the unstable zone entirely. Case C has strongly
enhanced bremsstrahlung cooling, through an increase in ${A_b}$. We did this, somewhat
arbitrarily, so that there is no TI for ${10^5\leq T\leq10^6}$ but the equilibrium temperatures
at the low and high end are the same as in A, Ah and B. Our goal here was to isolate the
importance of TI to acceleration of the gas. Case D achieves the same thing,
by reducing the X-ray temperature.
This reduces both the initial photoionization heating rate, and also the upper
equilibrium temperature.
Finally, cases E and F represent solutions with two unstable zones. In case E, we simply
increase the X-ray temperature. This does indeed produce two unstable zones, however the
second unstable zone is at a lower pressure than the first. In case F, we increase the
heating rate on the lower branch in a similar way to case B, by reducing line cooling. This
shifts the lower unstable zone to lower ${\xi}$ but leaves the upper unstable zone
unchanged. The aim for these two cases is to see if one can get a faster wind by extending the
acceleration zone. It is also interesting to see if gas `collects' at the stable zone between the
two unstable ones.
\section{Simulation results}
\label{section:sim_res}
We present the results of our simulations in the context of the thermal equilibrium curves.
This allows us to see how the hydrodynamics of the winds affects the thermal balance,
and thus give insight into how the winds are accelerating. The symbols, plotted over the solid
lines, on Figure \ref{figure:stability_curves} show the relationship between the temperature
and ionization parameter divided by that temperature
in a range of cells along two sightlines. We can therefore see if the equilibrium temperature is
reached in the simulations.
We note that there are no points on the lower stable branch of the
stability curves for any of the cases. This is by design, since points along the lower branch
are essentially in the disk, and merely exhibit turbulent motions.
We have carried out detailed resolution studies using 1D simulations which
demonstrated that the behavior of the wind in the
regions sampled by our sightlines is not dependent on resolving the transition from cool stable
branch to unstable zone. Our limited 2D resolution study confirmed this.
Turning our attention to individual cases, we first examine cases A and Ah. The increase
in luminosity has had the desired effect, in that all of the points for case Ah lie on the upper
branch of the unstable zone, whilst those of case A lie below the curve. Adiabatic cooling as the
hot gas expands holds the temperature below the
expected stable temperature. This indicates that more energy has been transferred to the gas in the
higher luminosity case. Note however that the points from the two simulations occupy remarkably
similar locations in ${T-\xi}$ space,
given that Ah has an order of magnitude higher luminosity. This is because the density of
the wind has increased accordingly, giving a very similar value of the ionization parameter for
both flows.
Case B has, as expected, produced hotter gas at lower ${\xi}$. However, as
with case A, the gas is cooler than the upper stable temperature. This suggests that with a higher luminosity,
one could obtain even hotter gas. Interestingly, the 80\degree ~points are all very close, indicating
that we are sampling points at the same relative distance along each streamline that the
sightline intercepts.
In cases C and D, where there is no unstable zone, the points tend to lie along the thermal
equilibrium curves,
with some exceptions. The innermost 60\degree~sightline data of case C are cooler than would be expected if
radiative processes dominated the heating/cooling. This is because adiabatic expansion is acting as
an additional cooling mechanism.
This simulation fails to launch a wind in the current simulation domain.
Case D also fails to launch a strong wind
and the polar infall which is always seen in these simulations extends to much larger inclination angles.
Compression of the gas by this flow at low radii produces some gas that is heated above the upper
stable temperature. This occurs for gas with an ionization parameter greater than about 10, and is
not shown on Figure \ref{figure:stability_curves}.
Case E shows two new effects. Firstly, despite there being two unstable zones,
the gas is unable to access the upper unstable zone since this lies at higher pressure. Therefore
the gas jumps straight from the lower unstable point towards the upper stable branch. Adiabatic cooling
prevents the gas from reaching the upper branch and the existence of gas in a
formally prohibited part of the thermal equilibrium graph demonstrates that hydrodynamic effects are
an important consideration in the thermal balance of this type of wind. It is often assumed that
gas will avoid unstable zones \citep[e.g.][and refs therein]{2013MNRAS.436..560C},
providing an explanation of why some ionization states are not seen
in observations. This simulation demonstrates that the situation may be more complex.
Finally, case F does produce points close to the stable zone between the two instabilities since
that part of the curve is now physically accessible to the flow.
In the lower section of Table \ref{table:wind_param} we provide some of the physical properties of the
simulated outflows. First, we list the maximum radial velocity seen in each of the simulations. It
is of the same order of magnitude (a few hundreds of kilometers per second) as the blue shifted
absorption lines seen in LMXBs \citep[e.g.][and references therein]{2013AcPol..53..659D,
2013AdSpR..52..732N}, however the velocity of gas is not
the only important factor.
The density is also important since only dense gas will produce observable
absorption. The maximum velocity in regions with a particle density greater than
$\rm{10^{12}~cm^{-3}}$ is much smaller. This is simply because we are probing regions deeper into the
wind, where the flow is still accelerating, and we see that the two stable cases do not produce fast
enough gas at high density. Cases B, E and F do show fast moving gas with density above the
threshold density. In cases E and F, the fast, dense gas is limited to a very narrow range of angles,
in case E it is within 3 degrees of the disk and would therefore probably not be observable. In
case E, the fastest dense gas is at small radii around ${\theta=75\degree}$. Whilst this could
in principle be observed, the small angular range over which an absorption feature would appear
would likely mean it would be transient.
Another important physical parameter that can be derived from observations is the total column
density. This is between about $10^{20}-10^{23}~\rm{cm^{-2}}$ for all kinds of LMXBs
\citep{2013AcPol..53..659D}. Our simulations produce column densities of the right order of magnitude
for equatorial sight lines, and indeed the 80\degree~sightline would be Compton thick in cases Ah and B.
Finally, we give the mass loss rate, both leaving the disk ($\dot{M}_{wind,disk}$) and leaving
the computational domain ($\dot{M}_{wind,outer}$). These rates are calculated directly
from the simulation results, using $\dot{M}=\sum A\rho v$ where $A$ is the area represented by a cell, either
on the disk or at the outer boundary, $\rho$ is the density of the cell, and $v$ is either the vertical velocity
for disk cells, or the radial velocity for cells at the outer boundary. The summation is carried out
over all relevant cells.
We report these values in terms of the accretion
rate, $\dot{M}_{acc}=4.4\times 10^{17}~\rm{g~s^{-1}}$ (assuming an efficiency of 8.3\%).
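For concreteness, the boundary sum $\dot{M}=\sum A\rho v$ takes the following form on a spherical wedge, where each outer-boundary cell is an annulus of area $2\pi R^2(\cos\theta_j-\cos\theta_{j+1})$. The sketch below restricts the sum to outflowing cells, which is our assumption rather than a detail specified in the text:
\begin{verbatim}
import numpy as np

def mdot_outer(R_out, theta_edges, rho, v_r):
    """Mass flux through the outer boundary of a 2.5D spherical wedge."""
    area = 2.0 * np.pi * R_out**2 * (np.cos(theta_edges[:-1])
                                     - np.cos(theta_edges[1:]))
    # keep only outflowing material (v_r > 0)
    return np.sum(area * rho * np.clip(v_r, 0.0, None))

# toy example: uniform wind of 1e-16 g/cm^3 at 300 km/s over 0-90 degrees
theta = np.linspace(0.0, np.pi / 2, 101)
print(f"{mdot_outer(96e10, theta, 1e-16, 3e7):.2e} g/s")
\end{verbatim}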
In L10, their version of model A produced an
outflow of about 7 times the accretion rate whereas we see an outflow rate of about half that much. This
is simply because our simulation has a much reduced radial extent compared to the L10 run. Increasing the luminosity
(model Ah) increases the mass loss rate, but the ratio of mass loss to accretion rate remains the
same. Cases B and F produce significantly higher mass loss rates, in excess of the threshold of $15\dot{M}_{acc}$
that \cite{1986ApJ...306...90S} demonstrated could induce oscillations in an accretion disk.
It should be noted that the winds are emerging from the disk
far outside the radius where most of the radiation is produced. Almost all of the ionizing radiation from a
thin disk in a system like this one is produced within 100 gravitational radii of the centre. By comparison,
the Compton radius is about half a million gravitational radii. Therefore, at least for the purposes
of these simulations, the model of a point-like, unvarying source of radiation at the centre of the simulation
is not necessarily invalidated by the prediction of large mass losses from the outer parts of the disk, at least over the
timescale of the simulation.
\section{Synthetic absorption line profiles}
\label{section:spectra}
It is useful to produce synthetic line profiles for our simulations, in order to get some idea of whether
the outflows could, in principle, produce the absorption features observed in XRBs. Computation
of the ionization state and level populations of the gas is beyond the scope of this work, and
we calculate the absorption using the simplified scheme discussed below. This scheme takes account
of thermal line broadening, and the doppler shifting due to the bulk flow of the wind, however we do not model
line emission here.
The opacity due to a resonance line (uncorrected for stimulated emission) is given by
\begin{equation}
\alpha(\nu)=\frac{h\nu}{4\pi}n_1B_{12}\phi(\nu)
\end{equation}
We arbitrarily set $B_{12}=1$ and use the hydrogen number
density for $n_1$. Therefore, our line opacity becomes
\begin{equation}
\alpha_{\nu}=\frac{h\nu}{4\pi}n_H\phi(\nu).
\end{equation}
For the line shape $\phi(\nu)$ we use a gaussian line profile of the form
\begin{equation}
\phi(\nu)=\left(\frac{c}{\nu_{0}}\right)\sqrt{\frac{m}{2\pi k_BT}}\exp\left(-\frac{mc^2\left(\nu-\nu_0\right)^2}{2k_BT\nu_0^2}\right)
\end{equation}
For each radial cell i, we compute the line opacity as a function of frequency, and then doppler shift that line
profile to take account of the bulk flow velocity. We then
obtain the total, frequency dependent optical depth by summing up the opacity at each
frequency due to each cell of radial thickness dr,
\begin{equation}
\tau(\nu)=\sum_{i=inner}^{i=outer}\alpha_i(\nu)dr.
\end{equation}
This sum is computed from the innermost radial cell to the outermost, thus making the implicit assumption
that the continuum source is point-like, and located at the origin.
A measure of the absorption profile of a generic line is then computed
\begin{equation}
F(\nu)=e^{-\tau(\nu)}.
\end{equation}
Since no attempt is made to compute the density of any given ionic species,
these spectra are in no way accurate representations of what we would
expect to observe for a given system. Rather, they are just a means of comparing runs,
since each spectrum is calculated in a consistent way.
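For completeness, the recipe above amounts to the following short procedure (a sketch; the array names and the toy rest frequency are our own choices, and with $B_{12}$ set arbitrarily to 1 the absolute line depths carry no physical meaning):
\begin{verbatim}
import numpy as np

h, k_B, c = 6.626e-27, 1.381e-16, 3.0e10   # CGS constants
m_H  = 1.673e-24
nu_0 = 1.0e15                              # toy line rest frequency (Hz)

def line_profile(nu, n_H, T, v_r, dr):
    """nu: frequency grid; n_H, T, v_r, dr: per-cell arrays, inner->outer.
    Here v_r > 0 is outflow towards the observer (blueshifted, nu > nu_0)."""
    tau = np.zeros_like(nu)
    for n_i, T_i, v_i, dr_i in zip(n_H, T, v_r, dr):
        nu_c = nu_0 * (1.0 + v_i / c)      # Doppler-shifted line centre
        sigma = nu_0 * np.sqrt(k_B * T_i / m_H) / c   # thermal width
        phi = (np.exp(-0.5 * ((nu - nu_c) / sigma)**2)
               / (sigma * np.sqrt(2.0 * np.pi)))
        tau += h * nu / (4.0 * np.pi) * n_i * phi * dr_i   # B_12 = 1
    return np.exp(-tau)

# toy example: 20 identical cells at 1e5 K flowing outwards at 300 km/s
nu = np.linspace(0.99e15, 1.01e15, 2000)
F = line_profile(nu, [1e12]*20, [1e5]*20, [3e7]*20, [2e10]*20)
\end{verbatim}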
\begin{figure*}
\includegraphics{fig3.eps}
\caption{Simulated spectra for the 60\degree (left hand upper panel) and 80\degree (right hand
upper panel) sight lines. Lower panels show ionization parameter vs radial velocity for all cells
in the two sight lines. The size of the symbol represents the distance from the central mass
(small symbols are for small distances). Crosses represent cells with a number density of less
than $\rm{10^{12}~cm^{-3}}$
and filled circles show cells with a density above this threshold. Negative velocities represent
motion away from the centre of the simulation, and thus blue shifted absorption. Note that
in the bottom right hand panel, the black and blue points overlie one another.}
\label{figure:simulated_spectra}
\end{figure*}
The upper panels of Figure \ref{figure:simulated_spectra} show the results of the line profile calculation
for the 60\degree~(left panel) and 80\degree~(right panel) sight lines. We immediately notice that the 80\degree~
features are much deeper than those seen at 60\degree. This is simply due to the higher density
at the base of the wind. This means that the ionization parameter is also generally lower in the
80\degree~sight line, and so it is likely that different species would be observed at the two angles.
A general feature of most of the spectra is an absorption feature close to zero velocity, and a second
feature at a blueshift that varies from model to model. Although absorption features are seen
at zero velocity in observations, the very strong features we predict here are
not commonly observed. We note, however, that including line emission would weaken these
absorption features.
We are more interested in the second absorption feature, which appears
at velocities between about $\rm{-150~km~s^{-1}}$ and $\rm{-800~km~s^{-1}}$.
Looking in detail at each of the cases, we first see that the increased luminosity of case Ah
over case A was partially successful, in that the absorption is stronger. This is of course due to
the increased density of the outflow, which can be seen clearly in the lower left panel, where the cells in case A
have density less than $\rm{10^{12}~cm^{-3}}$ (represented by the cross
symbol) whereas the same cells in case Ah have density greater than $\rm{10^{12}~cm^{-3}}$ (circles).
However, increasing the density was not our only aim; we also
wanted to increase the velocity at which the absorption is seen. In this we have been only
partially successful, with the blueshifted absorption feature shifting to only very slightly higher
velocities.
Case B is perhaps the most successful new model, with blueshifted absorption features seen in
both the 60\degree~and 80\degree~sight lines. The blueshifted absorption feature at 80\degree~is
deep, and its width is due only
to thermal broadening. This is because all of the cells producing the feature are at nearly the same
velocity, which in turn is because photons flying along this sight line encounter gas in a very similar
physical regime at all radii, showing that, close to the disk at least, the gas is flowing along highly
self-similar streamlines in this case. Since the ionization parameter is set to be the same at
all radii at the mid plane, all gas starts in the same physical state. In case B,
the physical state of the gas has evolved similarly at all radii, `remembering' its initial conditions, up to the 80\degree~
point. In contrast, by the time it has moved up to the
60\degree~sight line, that `memory' of the starting state has been lost, and gas at different radii is in
different physical conditions. Thus the absorption is produced by cells at a range of velocities, and so the feature
is much shallower but very broad.
As already discussed above, cases C and D fail to produce fast outflows. This is clearly shown in the lower
panels, where all the cells from these simulations are clustered around zero velocity. These two
cases produce relatively narrow features at zero velocity (the temperature of the gas is lower).
As shown in Table \ref{table:wind_param}, cases E and F do both produce dense, fast moving
material. However, the material in case E is very close to the accretion disk and is missed
by both sight lines shown here. Since it appears only at angles greater than 87\degree, it is unlikely
that it would be observable in any case. Case F does produce fast material at lower
${\theta}$, and this is seen in the spectra as absorption around $\rm{-200~km~s^{-1}}$.
Faster gas does exist in the simulation, but only over a very narrow range of angles.
\section{Discussion}
\label{section:discussion}
Our aim was to see if simple, physically motivated changes to
the heating and cooling balance of the thermal wind simulated in L10 could produce a wind model
more in line with observations. There are three main observational measures: the line velocities, the column density,
and the density of the line forming region.
The wind model described by L10 failed to produce any gas with velocities greater than $\rm{100~km~s^{-1}}$ and
a density greater than $\rm{10^{12}~cm^{-3}}$, and we come to a similar conclusion. Since L10 were trying to
replicate the properties of the outflow observed in GRO J1655-40, which seems to have a very high density of
$\rm{5\times10^{15}~cm^{-3}}$ \citep{2006Natur.441..953M}, the conclusion was that the model failed in that aim.
Follow-up work reduced the density estimate to around $\rm{10^{14}~cm^{-3}}$
\citep{2008ApJ...680.1359M,2009ApJ...701..865K}, but
this is still well above the densities seen in the L10 model. This failure of a thermal wind model to replicate
the velocity and density seen in the observations suggests that,
for GRO J1655-40 at least, a thermal wind is an unlikely source of the observed absorbing gas. Nonetheless,
the predicted wind produces a column density in line with observations, and a mass loss rate in excess of the
accretion rate.
The enhanced luminosity version of the L10 model, case Ah, does increase the velocity and, perhaps
more importantly, the density of the
wind. A radial velocity of $\rm{200~km~s^{-1}}$ for gas with a number density greater than $\rm{10^{12}~cm^{-3}}$
is still slow and less dense compared with the measurements of GRO J1655-40 mentioned above, but
it is not unreasonable to think that this wind would produce observable features, and further work to
characterize the ionization state of this wind, allowing calculation of detailed spectra, would be worthwhile. The ionization
parameter for the fastest moving parts of the wind has a narrow range, centred on ${\log{\xi}\sim4}$, and is certainly
similar to that inferred for the absorbing gas in many systems.
Case
B provides the best illustration of how simple changes to the heating and cooling rates in a simulation can affect the velocity
and density of the resulting wind. We reduced line cooling by a factor of 5 and increased
the photoionization heating rate by a factor of 4. Both of these changes can be broadly justified: by the effects of
line optical depth in the first case, and by changes to the SED and gas metallicity in the second. This simple change
made the gas thermally unstable at a value of ${\xi}$ one order of magnitude lower. The radial location of the unstable
gas is largely unchanged, so the change in ${\xi}$ means that \emph{denser} gas is
accelerated, producing a denser and faster wind. Although the density and velocity are only a little higher than in case Ah,
much more interesting is the huge increase in mass loss rate, now 40 times the accretion rate (even though we only
simulate a relatively small domain). This is almost 3 times the
rate that \cite{1986ApJ...306...90S} showed would induce instabilities in the disk. Therefore, even if
thermal winds are unable to reproduce the observed line absorption seen in XRBs,
they may well provide a mechanism for XRB state changes, and so searching for an observational signature is
a worthwhile exercise.
Another interesting result from these simulations is that there is gas with `forbidden'
values of ${\xi}$, i.e. from the second, hotter, unstable zone of the stability curve in cases E and F. It has been suggested
\citep[e.g.][]{2013MNRAS.436..560C} that species which are expected to have peak abundances in gas
with such forbidden ionization parameters would not be seen in observations. Whilst our results do not
necessarily disprove such assumptions, they do illustrate that hydrodynamic effects (i.e. adiabatic expansion)
make the situation more complex.
It is often assumed that disk winds in XRBs are equatorial, because absorption is seen preferentially in
sources which exhibit dips \citep{2012MNRAS.422L..11P}. We find that the wind is in fact bipolar,
but the outflow is highly stratified, with the high density region of the wind near the disk.
Therefore, in our simulation, absorption is only seen for equatorial sight lines even though the
wind flows out over a relatively wide range of angles.
Similar results were
seen in L10 and by \cite{2012ApJ...758...70G}, who computed line profiles based on L10's as well as other disk
wind simulations.
This stratification is also important with respect to the observed variability of X-ray absorption in XRBs
\citep[][and refs therein]{2013AdSpR..52..732N}.
First, this is very well illustrated
by cases E and F, both of which produce absorbing gas in narrow angular ranges. If the illuminating spectrum
is variable, the angle at which particular species would be seen could change, thereby making the absorption lines
associated with those species vanish. The wind could remain strong in this case, but would need to be detected
in different species.
Secondly, when we compare cases A and Ah, which differ only in luminosity,
we see that the density of the wind solution changes significantly, giving rise to a very similar ionization parameter.
This also has relevance to studies of variable sources, where increases in luminosity are sometimes called upon to explain
increases in ${\xi}$ and hence the disappearance of some features \citep[e.g.][]{2014A&A...571A..76D}.
Our results show that it may be overly simplistic to assume the density remains constant in such cases, and a more
detailed investigation is required, taking into account how the wind responds to the increase in luminosity.
\section{Future work}
The simulations we have presented here use a simplified heating and cooling scheme, which permits swift
exploration of parameter space. In addition, radiative transfer through the wind is treated in the optically
thin limit. Previous detailed analyses of such hydrodynamic models \citep{sim_proga_10,2014ApJ...789...19H}
have shown that a more thorough treatment of radiative transfer, including scattering, can have a significant effect.
We therefore
plan to run such simulations on the more successful models
from this work (i.e. those that produced high velocity, dense flows). Not only will this work give more
information regarding the validity of our modified heating/cooling rates, but it will also produce detailed ionization
data for the wind and detailed spectra, and it will provide information on the contribution of line emission.
\section*{Acknowledgements}
This work was supported by NASA under Astrophysics Theory Program
grants NNX11AI96G and NNX14AK44G. The authors would like to thank the
anonymous referee for very useful comments that have improved the paper.
\bibliographystyle{mn2e}
\newcommand{\newsection}[1]{\section{#1}}
\newtheorem{dfn}{Definition}[section]
\newtheorem{thm}[dfn]{Theorem}
\newtheorem{lmma}[dfn]{Lemma}
\newtheorem{hypo}[dfn]{Hypothesis}
\newtheorem{ppsn}[dfn]{Proposition}
\newtheorem{crlre}[dfn]{Corollary}
\newtheorem{xmpl}[dfn]{Example}
\newtheorem{rmrk}[dfn]{Remark}
\newcommand{\begin{dfn}}{\begin{dfn}}
\newcommand{\begin{thm}}{\begin{thm}}
\newcommand{\otimes_\gamma}{\otimes_\gamma}
\newcommand{\otimes_{alg}}{\otimes_{alg}}
\numberwithin{equation}{section}
\newcommand{\begin{list}{$($\roman{cnt1}$)$} {\usecounter{cnt1}{\begin{list}{$($\roman{cnt1}$)$} {\usecounter{cnt1}
\setlength{\topsep}{0pt} \setlength{\itemsep}{0pt}}}
\newcommand{\begin{list}{$($\alph{cnt2}$)$} {\usecounter{cnt2}{\begin{list}{$($\alph{cnt2}$)$} {\usecounter{cnt2}
\setlength{\topsep}{0pt} \setlength{\itemsep}{0pt}}}
\newcommand{\begin{list}{$($\arabic{cnt3}$)$} {\usecounter{cnt3}{\begin{list}{$($\arabic{cnt3}$)$} {\usecounter{cnt3}
\setlength{\topsep}{0pt} \setlength{\itemsep}{0pt}}}
\newcommand{\end{list}}{\end{list}}
\newcommand{\noindent}{\noindent}
\newcommand{\begin{lmma}}{\begin{lmma}}
\newcommand{\begin{ppsn}}{\begin{ppsn}}
\newcommand{\begin{crlre}}{\begin{crlre}}
\newcommand{\begin{xmpl}}{\begin{xmpl}}
\newcommand{\begin{rmrk}}{\begin{rmrk}}
\newcommand{\end{dfn}}{\end{dfn}}
\newcommand{\end{thm}}{\end{thm}}
\newcommand{\end{lmma}}{\end{lmma}}
\newcommand{Tr_{|f><g|}}{Tr_{|f><g|}}
\newcommand{Tr_{|f^{\otimes^m}><g^{\otimes^n}|}}{Tr_{|f^{\otimes^m}><g^{\otimes^n}|}}
\newcommand{\end{ppsn}}{\end{ppsn}}
\newcommand{\end{crlre}}{\end{crlre}}
\newcommand{\end{xmpl}}{\end{xmpl}}
\newcommand{\end{rmrk}}{\end{rmrk}}
\newcommand{{I\! \! A}}{{I\! \! A}}
\newcommand{{I\! \! B}}{{I\! \! B}}
\newcommand{{I\! \! \!\! C}}{\mathbb{C}}
\newcommand{{I\! \! D}}{{I\! \! D}}
\newcommand{{I\! \! E}}{{I\! \! E}}
\newcommand{{I\! \! F}}{{I\! \! F}}
\newcommand{{I\! \! G}}{{I\! \! G}}
\newcommand{{I\! \! H}}{{I\! \! H}}
\newcommand{{I\! \! I}}{{I\! \! I}}
\newcommand{{I\! \! K}}{{I\! \! K}}
\newcommand{{I\! \! L}}{{I\! \! L}}
\newcommand{{I\! \! M}}{{I\! \! M}}
\newcommand{{I\! \! N}}{{I\! \! N}}
\newcommand{{I\! \! O}}{{I\! \! O}}
\newcommand{{I\! \! P}}{{I\! \! P}}
\newcommand{{I\! \! Q}}{\mathbb{Q}}
\newcommand{\mathcal}{\mathcal}
\newcommand{\mathbb{R}}{\mathbb{R}}
\newcommand{{I\! \! S}}{{I\! \! S}}
\newcommand{\mathbb{T}}{\mathbb{T}}
\newcommand{{I\! \! U}}{{I\! \! U}}
\newcommand{{I\! \! V}}{{I\! \! V}}
\newcommand{{I\! \! W}}{{I\! \! W}}
\newcommand{{I\! \! X}}{{I\! \! X}}
\newcommand{{I\! \! Y}}{{I\! \! Y}}
\newcommand{{\ \! \! Z}}{\mathbb{Z}}
\newcommand{\alpha}{\alpha}
\newcommand{\beta}{\beta}
\newcommand{\gamma}{\gamma}
\newcommand{\Delta}{\Delta}
\newcommand{\delta}{\delta}
\newcommand{\varepsilon}{\varepsilon}
\newcommand{\epsilon}{\epsilon}
\newcommand{\kappa}{\kappa}
\newcommand{\lambda}{\lambda}
\newcommand{\Lambda}{\Lambda}
\newcommand{\omega}{\omega}
\newcommand{\left\langle}{\left\langle}
\newcommand{\right\rangle}{\right\rangle}
\newcommand{\Omega}{\Omega}
\newcommand{\hat{\pi}}{\hat{\pi}}
\newcommand{\sigma}{\sigma}
\newcommand{\Sigma}{\Sigma}
\newcommand{\theta}{\theta}
\newcommand{\Theta}{\Theta}
\newcommand{\vartheta}{\vartheta}
\newcommand{\zeta}{\zeta}
\newcommand{\partial}{\partial}
\newcommand{\Gamma}{\Gamma}
\newcommand{{\cal A}}{{\cal A}}
\newcommand{{\cal B}}{{\cal B}}
\newcommand{{\cal C}}{{\cal C}}
\newcommand{{\cal D}}{{\cal D}}
\newcommand{{\cal E}}{{\cal E}}
\newcommand{{\cal F}}{{\cal F}}
\newcommand{{\cal G}}{{\cal G}}
\newcommand{{\cal H}}{{\cal H}}
\newcommand{{\cal I}}{{\cal I}}
\newcommand{{\cal J}}{{\cal J}}
\newcommand{{\cal K}}{{\cal K}}
\newcommand{{\cal L}}{{\cal L}}
\newcommand{{\cal M}}{{\cal M}}
\newcommand{{\cal N}}{{\cal N}}
\newcommand{{\cal P}}{{\cal P}}
\newcommand{{\cal Q}}{{\cal Q}}
\newcommand{{\cal R}}{{\cal R}}
\newcommand{{\cal S}}{{\cal S}}
\newcommand{{\cal T}}{{\cal T}}
\newcommand{{\cal U}}{{\cal U}}
\newcommand{{\cal V}}{{\cal V}}
\newcommand{{\cal W}}{{\cal W}}
\newcommand{{\cal X}}{{\cal X}}
\newcommand{{\cal Y}}{{\cal Y}}
\newcommand{{\cal Z}}{{\cal Z}}
\newcommand{\hspace{-.05in}{\bf A}}{\hspace{-.05in}{\bf A}}
\def\widetilde{A}{\widetilde{A}}
\def\widetilde{B}{\widetilde{B}}
\def\widetilde{C}{\widetilde{C}}
\def\widetilde{D}{\widetilde{D}}
\def\widetilde{E}{\widetilde{E}}
\def\widetilde{F}{\widetilde{F}}
\def\widetilde{G}{\widetilde{G}}
\def\widetilde{H}{\widetilde{H}}
\def\widetilde{I}{\widetilde{I}}
\def\widetilde{J}{\widetilde{J}}
\def\widetilde{K}{\widetilde{K}}
\def\widetilde{L}{\widetilde{L}}
\def\widetilde{M}{\widetilde{M}}
\def\widetilde{N}{\widetilde{N}}
\def\widetilde{O}{\widetilde{O}}
\def\widetilde{P}{\widetilde{P}}
\def\widetilde{Q}{\widetilde{Q}}
\def\widetilde{R}{\widetilde{R}}
\def\widetilde{S}{\widetilde{S}}
\def\widetilde{T}{\widetilde{T}}
\def\widetilde{U}{\widetilde{U}}
\def\widetilde{V}{\widetilde{V}}
\def\widetilde{W}{\widetilde{W}}
\def\widetilde{X}{\widetilde{X}}
\def\widetilde{Y}{\widetilde{Y}}
\def\widetilde{Z}{\widetilde{Z}}
\def\widehat{\widehat}
\def{\cal A}_h{{\cal A}_h}
\def\a*{{\cal A}_{h,*}}
\def{\cal B}(h){{\cal B}(h)}
\def{\cal B}_1(h){{\cal B}_1(h)}
\def{\cal B}^{\rm s.a.}(h){{\cal B}^{\rm s.a.}(h)}
\def{\cal B}^{\rm s.a.}_1(h){{\cal B}^{\rm s.a.}_1(h)}
\def{\cal A}^{\perp}_{h}{{\cal A}^{\perp}_{h}}
\def{\cal A}^{\perp}{{\cal A}^{\perp}}
\newcommand{\prod \limits}{\prod \limits}
\newcommand{\lim \limits}{\lim \limits}
\newcommand{\sum \limits}{\sum \limits}
\newcommand{\int \limits}{\int \limits}
\newcommand{\widehat}{\widehat}
\newcommand{\Re}{\Re}
\newcommand{\otimes}{\otimes}
\newcommand{\dagger}{\dagger}
\newcommand{\bigotimes}{\bigotimes}
\newcommand{\rightarrow}{\rightarrow}
\newcommand{\Rightarrow}{\Rightarrow}
\newcommand{\Longrightarrow}{\Longrightarrow}
\newcommand{\subset}{\subset}
\newcommand{\subseteq}{\subseteq}
\newcommand{\Longleftrightarrow}{\Longleftrightarrow}
\newcommand{\underline}{\underline}
\newcommand{\overline}{\overline}
\newcommand{\langle}{\langle}
\newcommand{\rangle}{\rangle}
\newcommand{\nonumber}{\nonumber}
\newcommand{\tnsr}{\mbox{$\bigcirc\hspace{-0.89em}\mbox{\raisebox%
{-.43ex}{$\top$}}\;$}}
\newcommand{\gtreqqless}{\gtreqqless}
\newcommand{\lesseqqgtr}{\lesseqqgtr}
\newcommand{\mbox{id}}{\mbox{id}}
\newcommand{\frac{1}{2}}{\frac{1}{2}}
\newcommand{1\!\!1}{1\!\!1}
\newcommand{\mbox{{\boldmath $\eta$}}}{\mbox{{\boldmath $\eta$}}}
\newcommand{\noindent}{\noindent}
\newcommand {\CC}{\centerline}
\def \mbox{}\hfill $\sqare$\vspace{1ex} {$\Box$}
\newcommand{\displaystyle}{\displaystyle}
\newcommand{\vskip 1em}{\vskip 1em}
\newcommand {\ith}{(e^{itH_{0}}}
\newcommand {\itp}{(e^{itPH_{0}P}}
\newcommand {\iti}{\int \limits _{-\ity}^{\ity}}
\newcommand {\ipp}{\int \limits _{-\pi}^{\pi }}
\newcommand {\sumk} {{\sum _{k=1}^{\ity}}}
\newcommand {\sumj} {{\sum _{j=1}^{\ity}}}
\newcommand {\im}{\textup{i}}
\newcommand {\tr}{\textup{Tr}}
\newcommand {\Exp} {\textup{e}}
\newcommand {\Tr}{\textup {Tr}}
\newcommand {\Imm}{\textup {Im}}
\begin{document}
\title{Helton-Howe-Carey-Pincus Trace Formula and Krein's Theorem}
\author[Chattopadhyay] {Arup Chattopadhyay $^{^{1)}}$ }
\address{1) A. Chattopadhyay: Indian Institute of Technology Guwahati\\ Department of Mathematics
\\ Guwahati- 560059 \\ Kamrup, Assam, India.}
\email{[email protected], [email protected]}
\author[Sinha]{Kalyan B. Sinha $^{^{2)}}$}
\address{2) K. B. Sinha: J.N. Centre for Advanced Scientific Research\\ and Indian Institute of Science,\\
Bangalore\\ India.}
\email{[email protected]}
\begin{abstract}
In this article, we derive the
Helton-Howe-Carey-Pincus trace formula as a consequence of Krein's trace formula.
\end{abstract}
\maketitle
{\textbf{Mathematics Subject Classification (2010):}} 47A13, 47A55, 47A56.
\vspace{0.1in}
{\textbf{Key words and Phrases:}} Trace formula, Perturbations of self-adjoint operators,
Spectral integral.
\newsection{Introduction}\label{sec:1}
\noindent \textbf{Notation:} In the following, we shall use the notations given below:
$\mathcal{H}$, $\mathcal{B}(\mathcal{H})$, $\mathcal{B}_{sa}(\mathcal{H})$,
$\mathcal{B}_1(\mathcal{H})$, $\mathcal{B}_{1+}(\mathcal{H})$, $\mathcal{B}_{1-}(\mathcal{H})$,
$\mathcal{B}_2(\mathcal{H})$,
$\mathcal{B}_p(\mathcal{H})$
denote a separable Hilbert space, set of bounded linear operators, set of bounded self-adjoint linear operators,
set of trace class operators, set of positive
trace class operators, set of negative
trace class operators, set of Hilbert-Schmidt operators and Schatten-p class operators respectively
with $\|.\|_p$ as the associated Schatten-$p$ norm. Furthermore, by $\sigma(A)$, $E_A(\lambda)$, $D(A)$
and $\rho(A)$, we shall mean the spectrum, spectral family, domain and resolvent set
of a self-adjoint operator $A$ respectively,
and $\textup{Tr}(A)$ will denote the trace of a trace class operator $A$ in $\mathcal{H}$.
Also we denote
the set of natural numbers and the set of real numbers by $\mathbb{N}$ and $\mathbb{R}$
respectively. The set $C(I)$ is
the Banach space of continuous
functions over a compact interval $I\subseteq \mathbb{R}$ with sup-norm $\|.\|_{\infty}$,
and $C^n(I)~(n\in \mathbb{N} \cup \{0\})$, the space of $n$-times continuously differentiable
functions over a compact interval $I$ with norm
\[
\|f\|^{n} = \sum_{j=0}^n\|f^{(j)}\|_{\infty} ~~~\textup{for}~~f\in C^n(I)
\]
and $f^{(j)}$ is the $j$-th derivative of $f$ (for $n=0$, $C^n(I)$ is $C(I)$),
and $L^p(\mathbb{R})$ the standard Lebesgue space. We shall denote
$f^{(1)}$, the first derivative, as $f'$.
Next we define the class $C_1^1(I)\subseteq C(I)$
as follows
\[
C_1^1(I) = \Big\{f\in C(I): \|f\|_1^{1}=
\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}
|\hat{f}(\alpha)|(1+|\alpha|)d\alpha<\infty\Big\},
\]
where $\hat{f}$ is the Fourier transform of $f$;
it is easy to see that $C_1^1(I)\subseteq C^1(I)$ (since $\|.\|^{1}\leq \|.\|_1^{1}$).
We denote the set of all $f\in C_1^1(I)$ such that
$f'\geq 0$ ($\leq 0$ respectively) by $C_{1+}^1(I)$ ($C_{1-}^1(I)$ respectively). Similarly,
we denote the set of all polynomials with complex coefficients in $I$ by $\mathcal{P}(I)$
and the set of all $p\in \mathcal{P}(I)$ such that $p'\geq 0$ ($\leq 0$ respectively)
by $\mathcal{P}_{+}(I)$ ($\mathcal{P}_{-}(I)$ respectively).
Let $T\in \mathcal{B}(\mathcal{H})$ be a hyponormal operator,
that is, $[T^*,T]\geq 0$. Write $T=X+\im Y$, where $X,Y\in \mathcal{B}_{sa}(\mathcal{H})$.
It is known that $\textup{Re}(\sigma(T))=\sigma(X)$ and $\textup{Im}(\sigma(T))=\sigma(Y)$
\cite{putnam}, and that $[T^*,T]\in \mathcal{B}_1(\mathcal{H})$
if one additionally assumes finiteness
of the spectral multiplicity \cite{bergershaw, kato, martinputinar, putnam}.
A hyponormal operator $T$ is said to be
purely hyponormal if there exists no subspace $\mathcal{S}$ of $\mathcal{H}$ which is invariant
under $T$ such that the restriction of $T$ to $\mathcal{S}$ is normal. For a purely hyponormal
operator $T,$ it is also known that its real and imaginary parts $X$ and $Y$ are spectrally
absolutely continuous \cite{kato, putnam3}.
\textbf{\emph{(A):~The main assumption of the whole paper is
that}} $T=X+\textup{i}Y$ \textbf{\emph{is a purely hyponormal operator in
$\mathcal{B}(\mathcal{H})$ such that}}
$[T^*,T]=-2\textup{i}[Y,X]= 2D^2\in \mathcal{B}_{1+}(\mathcal{H})$\textbf{\emph{
and $\sigma(X) \cup \sigma(Y)\subseteq [a,b]$
$\subseteq \mathbb{R}$.}}
\newsection{Main Section}
Let us start with a few lemmas which will be useful in proving our main result.
\begin{lmma}\label{lma1}
Let $T$ satisfy $\textbf{(A)}$.
Then for $\psi\in C_1^1([a,b])$ or $\mathcal{P}([a,b])$,
\begin{equation}
[\psi(Y),X]\in \mathcal{B}_1(\mathcal{H})
\quad \textup{and} \quad
-\im \textup{Tr}\{[\psi(Y),X]\} = \textup{Tr}\{\psi'(Y)D^2\}.
\end{equation}
Similarly,
\begin{equation}\label{useq35}
[Y, \psi(X)]\in \mathcal{B}_1(\mathcal{H})
\quad \textup{and} \quad
-\im \textup{Tr}\{[Y,\psi(X)]\} = \textup{Tr}\{\psi'(X)D^2\}.
\end{equation}
\end{lmma}
\begin{proof}
Now for $\psi\in C_1^1([a,b])$ we have
\begin{equation}\label{useq80}
\begin{split}
&[\psi(Y),X] = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}}\hat{\psi}(\alpha)[\Exp^{\im \alpha Y},X]d\alpha
= \frac{1}{\sqrt{2\pi}}
\int_{\mathbb{R}}\hat{\psi}(\alpha)\left(\Exp^{\im \alpha Y}X-X\Exp^{\im \alpha Y}\right)d\alpha\\
& = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}}\hat{\psi}(\alpha)d\alpha \int_0^{\alpha}
\im \Exp^{\im (\alpha-\beta)Y}[Y,X]\Exp^{\im \beta Y} d\beta= - \frac{1}{\sqrt{2\pi}}
\int_{\mathbb{R}}\hat{\psi}(\alpha)d\alpha \int_0^{\alpha}
\Exp^{\im (\alpha-\beta)Y}D^2\Exp^{\im \beta Y}d\beta.
\end{split}
\end{equation}
Since $D^2\in \mathcal{B}_1(\mathcal{H})$ and
$\int_{\mathbb{R}}|\hat{\psi}(\alpha)||\alpha|d\alpha < \infty,$
we conclude from the above equation \eqref{useq80} that
\begin{equation*}
[\psi(Y),X]\in \mathcal{B}_1(\mathcal{H})~~~\text{and}~~~
\|[\psi(Y),X]\|_1\leq \|D^2\|_1 \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} |\hat{\psi}(\alpha)||\alpha|d\alpha.
\end{equation*}
Moreover,
\begin{equation*}
\begin{split}
& -\im \Tr\{[\psi(Y),X]\}
= \frac{\im}{\sqrt{2\pi}}
\int_{\mathbb{R}}\hat{\psi}(\alpha)d\alpha \int_0^{\alpha}
\Tr\{\Exp^{\im \alpha Y}D^2\}d\beta = \frac{\im}{\sqrt{2\pi}}
\int_{\mathbb{R}}\alpha \hat{\psi}(\alpha)d\alpha
\Tr\{\Exp^{\im \alpha Y}D^2\}\\
& = \Tr\{\frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}}\im \alpha \hat{\psi}(\alpha)
\Exp^{\im \alpha Y} d\alpha ~~D^2\} = \Tr\{\psi'(Y)D^2\},
\end{split}
\end{equation*}
where we have used the cyclicity of trace and the fact that
\begin{equation*}
\psi'(\beta) = \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}\im \alpha \hat{\psi}(\alpha)
\Exp^{\im \alpha \beta} d\alpha.
\end{equation*}
Next for $\psi(t)=\sum\limits_{j=0}^nc_jt^j\in \mathcal{P}([a,b])$ we get
\begin{equation}\label{useq95}
\begin{split}
[\psi(Y),X] = \sum_{j=0}^nc_j[Y^j,X]
= \sum_{j=0}^nc_j \sum_{k=0}^{j-1}Y^{j-k-1}[Y,X]Y^k
= \im \sum_{j=0}^nc_j \sum_{k=0}^{j-1}Y^{j-k-1}D^2 Y^k.
\end{split}
\end{equation}
Since $D^2\in \mathcal{B}_1(\mathcal{H}),$
we conclude from the above equation \eqref{useq95} that
\begin{equation*}
[\psi(Y),X]\in \mathcal{B}_1(\mathcal{H})
\quad \text{and} \quad \left\|[\psi(Y),X]\right\|_1
\leq \|D^2\|_1\left(\sum_{j=0}^n j |c_j| \|Y\|^{j-1}\right).
\end{equation*}
Furthermore,
\begin{equation*}
\begin{split}
& -\im \Tr\{[\psi(Y),X]\}
= \sum_{j=0}^nc_j \sum_{k=0}^{j-1}\Tr\{Y^{j-k-1}D^2 Y^k\}
= \sum_{j=0}^nc_j \sum_{k=0}^{j-1}\Tr\{Y^{j-1}D^2 \}\\
& = \sum_{j=0}^nj c_j \Tr\{Y^{j-1}D^2 \}
= \Tr\{\sum_{j=0}^nj c_jY^{j-1}D^2\}= \Tr\{\psi'(Y)D^2\},
\end{split}
\end{equation*}
where we have used the cyclicity property of the trace.
By interchanging the role of $X$ and $Y$ in the above calculations, we conclude that
\begin{equation*}
[Y, \psi(X)]\in \mathcal{B}_1(\mathcal{H})
\quad \textup{and} \quad
-\im \textup{Tr}\{[Y,\psi(X)]\} = \textup{Tr}\{\psi'(X)D^2\}.
\end{equation*}
This completes the proof.
\end{proof}
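\begin{rmrk}
As a simple illustration of lemma~\ref{lma1}, take $\psi(t)=t^2$. Then
\[
[\psi(Y),X]=[Y^2,X]=Y[Y,X]+[Y,X]Y=\im \left(YD^2+D^2Y\right)\in \mathcal{B}_1(\mathcal{H}),
\]
and, by the cyclicity of the trace,
\[
-\im \Tr\{[\psi(Y),X]\} = \Tr\{YD^2+D^2Y\} = \Tr\{2YD^2\} = \Tr\{\psi'(Y)D^2\},
\]
as the lemma asserts.
\end{rmrk}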
\begin{lmma}\label{lmma2}
Let $T$ satisfy $\textbf{(A)}$. Then
$-\im[\psi(Y),X]$ $\in \mathcal{B}_{1\pm}(\mathcal{H})$ according as
$\psi\in C_{1\pm}^1([a,b])$ or $\mathcal{P}_{\pm}([a,b])$
respectively. Similarly, $-\im[Y,\psi(X)]$ $\in \mathcal{B}_{1\pm}(\mathcal{H})$
according as $\psi\in C_{1\pm}^1([a,b])$ or $\mathcal{P}_{\pm}([a,b])$ respectively.
\end{lmma}
\begin{proof}
Let $\psi\in C_1^1([a,b]).$ Then from equation \eqref{useq80} in lemma~\ref{lma1}, we have
\[
-\im [\psi(Y),X] = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}}\im \hat{\psi}(\alpha)d\alpha \int_0^{\alpha}
\Exp^{\im (\alpha-\beta)Y}D^2\Exp^{\im \beta Y}d\beta.
\]
Next by the spectral theorem for $Y$ we get
\begin{equation*}
\begin{split}
& -\im [\psi(Y),X]
= \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} \im \hat{\psi}(\alpha) d\alpha
\int_0^{\alpha}
d\beta\int_a^b \int_a^b \Exp^{\im(\alpha-\beta) t}\Exp^{\im\beta t'}
E^{(Y)}(dt)D^2E^{(Y)}(dt'),
\end{split}
\end{equation*}
where $E^{(Y)}(.)$ is the spectral
family of the self-adjoint operator $Y$.
Note that $\mathcal{E}(\Delta \times \delta) (S) \equiv E^{(Y)}(\Delta)SE^{(Y)}(\delta)$
($S\in \mathcal{B}_2(\mathcal{H})$ and $\Delta \times \delta \subseteq
\mathbb{R}\times \mathbb{R}$) extends to a spectral measure (finite) on $\mathbb{R}^2$
in the Hilbert space $\mathcal{B}_2(\mathcal{H})$. Therefore by Fubini's theorem
\begin{equation}\label{useq15}
\begin{split}
& -\im [\psi(Y),X] = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} \im \hat{\psi}(\alpha) d\alpha
\int_a^b \int_a^b \Exp^{\im\alpha t} \int_0^{\alpha}
d\beta~\Exp^{-\im \beta (t-t')}
E^{(Y)}(dt)D^2E^{(Y)}(dt') \\
& = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} \im \hat{\psi}(\alpha) d\alpha
\int_a^b \int_a^b \frac{\Exp^{\im \alpha t'}-\Exp^{\im\alpha t} }{\im(t'-t)}
E^{(Y)}(dt)D^2E^{(Y)}(dt')\\
& =
\int_a^b \int_a^b
\frac{\psi(t')
-\psi(t) }{t'-t}
E^{(Y)}(dt)D^2E^{(Y)}(dt')
= \int_{[a,b]^2} \tilde{\psi}(t,t')
\mathcal{E}(dt\times dt')(D^2),
\end{split}
\end{equation}
where
\begin{equation*}
\tilde{\psi}(t,t')=
\begin{cases}
\frac{\psi(t')
-\psi(t) }{t'-t} , &\text{if $t \neq t'$,}\\
\psi'(t), ~&\text{if $t = t'$.}
\end{cases}
\end{equation*}
Note that for $\psi \in C^1_{1\pm}([a,b])$,
we have
\[
\tilde{\psi}(t,t') = \int_0^1 \psi'\big(t+s(t'-t)\big)\,ds \geq 0 \quad \text{(or}~ \leq 0\text{)} \quad \text{respectively for} \quad t,t'\in [a,b],
\]
and hence from the equation \eqref{useq15} we conclude that $-\im [\psi(Y),X]\geq 0$ or
$-\im [\psi(Y),X]\leq 0$ accordingly. By repeating the above calculations
with $X$ and $Y$ interchanged we get that
\begin{equation}\label{useq52}
-\im [Y,\psi(X)]= \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}}\im \hat{\psi}(\alpha)d\alpha \int_0^{\alpha}
\Exp^{\im (\alpha-\beta)X}D^2\Exp^{\im \beta X}d\beta =
\int_{[a,b]^2}\tilde{\psi}(t,t') \tilde{\mathcal{E}}(dt\times dt')(D^2),
\end{equation}
where $\tilde{\mathcal{E}}(\Delta \times \delta) (S) = E^{(X)}(\Delta)SE^{(X)}(\delta)$
($S\in \mathcal{B}_2(\mathcal{H})$ and $\Delta \times \delta \subseteq
\mathbb{R}\times \mathbb{R}$) extends to a spectral measure on $\mathbb{R}^2$
in the Hilbert space $\mathcal{B}_2(\mathcal{H})$ and $E^{(X)}(.)$ is the spectral
family of the self-adjoint operator $X$. Therefore as above we conclude that
$-\im [Y,\psi(X)]\geq 0$ or $\leq 0$ if $\psi \in C^1_{1+}([a,b])$ or $C^1_{1-}([a,b])$
respectively.
Next assume that $\psi(t)=\sum_{j=0}^nc_jt^j\in \mathcal{P}([a,b])$.
Then from equation \eqref{useq95} in lemma~\ref{lma1}, we get
\begin{equation*}
-\im [\psi(Y),X]
= \sum_{j=0}^nc_j \sum_{k=0}^{j-1}Y^{j-k-1}D^2 Y^k.
\end{equation*}
Thus by using the spectral theorem for $Y$ and Fubini's theorem we get
\begin{equation}\label{useq51}
\begin{split}
& -\im [\psi(Y),X]
= \sum_{j=0}^nc_j \sum_{k=0}^{j-1}Y^{j-k-1}D^2 Y^k
= \sum_{j=0}^nc_j \sum_{k=0}^{j-1} \int_a^b \int_a^b t^{j-k-1}t'^k
E^{(Y)}(dt)D^2E^{(Y)}(dt')\\
& = \sum_{j=0}^nc_j \int_a^b \int_a^b \frac{t'^j-t^j}{t'-t}
E^{(Y)}(dt)D^2E^{(Y)}(dt')
= \int_{[a,b]^2} \tilde{\psi}(t,t')
\mathcal{E}(dt\times dt')(D^2),
\end{split}
\end{equation}
where $E^{(Y)}(.)$ is the spectral
family of the self-adjoint operator $Y$, and $\tilde{\psi}(t,t')$ and $\mathcal{E}(dt\times dt')$ are
as in equation \eqref{useq15}. Note that
\[
\tilde{\psi}(t,t')\geq 0 (~\text{or}~\leq 0) \quad \text{according as} \quad \psi\in \mathcal{P}_+
(~\text{or}~ \mathcal{P}_-)
\]
and hence from \eqref{useq51} we conclude that
\[
-\im [\psi(Y),X] \geq 0 (~\text{or}~\leq 0) \quad \text{according as} \quad \psi\in \mathcal{P}_+
(~\text{or}~ \mathcal{P}_-).
\]
By repeating the above calculations with $X$ and $Y$ interchanged we get that
\[
-\im [Y,\psi(X)]=
\int_{[a,b]^2}\tilde{\psi}(t,t') \tilde{\mathcal{E}}(dt\times dt')(D^2),
\]
where $\tilde{\mathcal{E}}(dt\times dt')$ is as in \eqref{useq52}. Therefore as above we conclude that
\[
-\im [Y,\psi(X)] \geq 0 (~\text{or}~\leq 0) \quad \text{according as} \quad \psi\in \mathcal{P}_+
(~\text{or}~ \mathcal{P}_-).
\]
This completes the proof.
\end{proof}
\begin{lmma}\label{lmma6}
Let $T$ satisfy $\textbf{(A)}.$ Then $-\im [\psi(Y),\phi(X)]\in \mathcal{B}_{1\pm}(\mathcal{H})$
according as $\psi$ and $\phi$ $\in C_{1\pm}^1([a,b])$ or $\mathcal{P}_{\pm}([a,b])$ respectively.
\end{lmma}
\begin{proof}
Let $\psi, \phi \in C_{1}^1([a,b])$ or $\mathcal{P}([a,b])$. Then by repeating the same calculations as in \eqref{useq15}
and in \eqref{useq51} of the above lemma~\ref{lmma2} we get,
\begin{equation}\label{useq54}
\begin{split}
& -\im [\psi(Y),\phi(X)]
=
\int_a^b \int_a^b
\frac{\psi(t')
-\psi(t) }{t'-t}
E^{(Y)}(dt)\left(-\im[Y,\phi(X)]\right)E^{(Y)}(dt')\\
& \hspace{2.95cm} = \int_{[a,b]^2} \tilde{\psi}(t,t')
\mathcal{E}(dt\times dt')(-\im[Y,\phi(X)]),
\end{split}
\end{equation}
where the description of $\tilde{\psi}$ and $\mathcal{E}$ has been discussed in the proof of lemma~\ref{lmma2}.
On the other hand, from lemma~\ref{lmma2} we see that
\begin{equation}\label{useq55}
-\im[Y,\phi(X)] \in \mathcal{B}_{1\pm}(\mathcal{H})
\quad \text{according as} \quad \phi\in C_{1\pm}^1([a,b]) \quad \text{or} \quad \mathcal{P}_{\pm}([a,b])\quad \text{respectively.}
\end{equation}
Therefore by combining equations \eqref{useq54} and \eqref{useq55} we conclude that
\[
-\im[\psi(Y),\phi(X)] \in \mathcal{B}_{1\pm}(\mathcal{H})
\quad \text{according as} \quad \psi, \phi\in C_{1\pm}^1([a,b]) \quad \text{or} \quad \mathcal{P}_{\pm}([a,b])\quad \text{respectively.}
\]
This completes the proof.
\end{proof}
As the title suggests, we shall next state Krein's theorem and
study its consequences on commutators like
$[\psi(Y),\phi(X)]$ for $\psi, \phi$ $\in C_1^1([a,b])$ or $\mathcal{P}([a,b])$.
\begin{ppsn} \label{prop1}(Krein's Theorem)\cite{mgkrein53,krein, sinmoha,Voiculescu87}
Let $H$ and $H_0$ be two bounded self-adjoint operators in
$\mathcal{H}$ such that $V$ $=H-H_0 \in \mathcal{B}_1(\mathcal{H})$.
Then there exists a unique $\xi_{H_0,H}(.)\in L^1(\mathbb{R})$
such that for $\phi \in C_1^1([a,b])$ or $\mathcal{P}([a,b])$, $\phi(H)-\phi(H_0)\in \mathcal{B}_1(\mathcal{H})$
and
\[
\Tr\{\phi(H)-\phi(H_0)\} = \int_a^b \phi'(\lambda)\xi_{H_0,H}(\lambda)d\lambda,
\]
where $\sigma(H) \cup \sigma(H_0)\subseteq [a,b]$. Furthermore,
\[
\int_a^b |\xi_{H_0,H}(\lambda)| d\lambda \leq \|V\|_1\,, \qquad
\int_a^b \xi_{H_0,H}(\lambda) d\lambda = \Tr V,
\]
and if
$V\in \mathcal{B}_{1+}(\mathcal{H})$ or $\mathcal{B}_{1-}(\mathcal{H})$,
then $\xi_{H_0,H}(\lambda)$ is positive or negative respectively
for almost all $\lambda \in [a,b].$
\end{ppsn}
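\begin{rmrk}
For orientation, we recall the elementary finite-dimensional picture (a standard observation, recorded here only as an illustration and not used in the sequel). If $\dim \mathcal{H}=N<\infty$ and $H_0$ and $H=H_0+V$ have eigenvalues $\lambda_1\leq \cdots\leq \lambda_N$ and $\mu_1\leq \cdots\leq \mu_N$ respectively, then for any polynomial $\phi$
\[
\Tr\{\phi(H)-\phi(H_0)\} = \sum_{j=1}^N \left(\phi(\mu_j)-\phi(\lambda_j)\right)
= \sum_{j=1}^N \int_{\lambda_j}^{\mu_j}\phi'(\lambda)\,d\lambda,
\]
so that $\xi_{H_0,H}$ is a sum of signed indicator functions of the intervals with endpoints $\lambda_j$ and $\mu_j$. In particular, $\int \xi_{H_0,H}(\lambda)\,d\lambda = \sum_{j}(\mu_j-\lambda_j) = \Tr V$, and if $V\geq 0$ then $\mu_j\geq \lambda_j$ for all $j$, whence $\xi_{H_0,H}\geq 0$, in accordance with proposition~\ref{prop1}.
\end{rmrk}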
\begin{thm}\label{thm3}
Assume $\textbf{(A)}$.
Let $\phi$ and $\psi$ be two
complex-valued functions
such that $\phi,$ $\psi \in C_1^1([a,b])$ or $\mathcal{P}([a,b])$.
Then $[\psi(Y),\phi(X)]$ is a trace class operator and
there exist unique $L^1(\mathbb{R})$-functions $\xi(t;\psi)$
and $\eta(\phi;\lambda)$ such that
\begin{equation}\label{krehelhow1}
-\im \Tr\{[\psi(Y),\phi(X)]\} = \int_a^b \phi '(t) \xi(t;\psi) dt
=\int_a^b \psi '(\lambda) \eta(\phi;\lambda) d\lambda.
\end{equation}
Furthermore, if $\phi,\psi\in C_{1+}^1([a,b])$ or $\mathcal{P}_+([a,b])$,
then $\xi(t;\psi)$, $\eta(\phi;\lambda)\geq 0$
for almost all
$t,\lambda \in [a,b],$
\[
\int_a^b |\xi(t;\psi)|dt \leq \left\|-\im [\psi(Y),X]\right\|_1~~,~~
\int_a^b \xi(t;\psi) dt = \Tr\left(\psi'(Y)D^2\right) \quad \text{and}
\]
\[
\int_a^b |\eta(\phi;\lambda)|d\lambda \leq \left\|-\im [Y,\phi(X)]\right\|_1~~,~~
\int_a^b \eta(\phi;\lambda)d\lambda =\Tr \left(\phi'(X)D^2\right).
\]
\end{thm}
\begin{proof}
First, we assume $\phi$ and $\psi$ to be real-valued.
Now let us consider the self-adjoint operators $H_0=X$ and $H=\Exp^{\im\psi(Y)}X\Exp^{-\im\psi(Y)}$.
Then
\begin{equation}\label{useq1}
\begin{split}
H-H_0
= \int_0^1\frac{d}{ds}\left(\Exp^{\im s\psi(Y)}X\Exp^{-\im s\psi(Y)} \right)ds
= \im \int_0^1 \Exp^{\im s\psi(Y)}[\psi(Y),X]\Exp^{-\im s\psi(Y)}ds \in \mathcal{B}_1(\mathcal{H}),
\end{split}
\end{equation}
by lemma~\ref{lma1}. On the other hand for $\psi, \phi\in C_1^1([a,b])$, a computation similar to that in
\eqref{useq80} yields that
\begin{equation}\label{useq13}
\begin{split}
[\psi(Y),\phi(X)] = \int_{\mathbb{R}} \im \hat{\psi}(\alpha) d\alpha \int_0^{\alpha}
\Exp^{\im(\alpha-\beta) Y}[Y,\phi(X)]\Exp^{\im\beta Y} d\beta \in \mathcal{B}_1(\mathcal{H}),
\end{split}
\end{equation}
since $\int_{\mathbb{R}} |\hat{\psi}(\alpha)||\alpha|d\alpha <\infty$ and since
$[Y,\phi(X)]\in \mathcal{B}_1(\mathcal{H})$, by lemma~\ref{lma1}. Similarly, for $\phi\in \mathcal{P}([a,b])$ and $\psi(t)= \sum\limits_{j=0}^n c_jt^j\in \mathcal{P}([a,b])$,
by repeating the same calculations as in \eqref{useq95} we conclude that
\begin{equation}\label{useq70}
\begin{split}
[\psi(Y),\phi(X)] = \sum_{j=0}^nc_j[Y^j,\phi(X)]
= \sum_{j=0}^nc_j \sum_{k=0}^{j-1}Y^{j-k-1}[Y,\phi(X)] Y^k \in \mathcal{B}_1(\mathcal{H}),
\end{split}
\end{equation}
since
$[Y,\phi(X)]\in \mathcal{B}_1(\mathcal{H})$, by lemma~\ref{lma1}.
Thus by applying proposition~\ref{prop1} to the above operators $H,H_0$ with the function
$\phi$, we conclude that there exists a unique function $\tilde{\xi}(t;\psi)$ $\in L^1(\mathbb{R})$ such that
$\phi(H)-\phi(H_0)$ is trace class and
\begin{equation}\label{useq3}
\Tr \{\phi(H)-\phi(H_0)\} = \int_a^b \phi '(t)\tilde{\xi}(t;\psi)dt.
\end{equation}
Furthermore, from equation \eqref{useq1} we conclude that $H-H_0 \leq 0$, since
$\im [\psi(Y),X]\leq 0$ by lemma~\ref{lmma2} for $\psi\in C_{1+}^1([a,b])$ or $\mathcal{P}_+([a,b])$. Therefore
from proposition~\ref{prop1} we also note that $\tilde{\xi}(t;\psi)\leq 0$ for almost all
$t\in [a,b]$.
Now if we compute the left hand side of \eqref{useq3}, we get
\begin{equation}\label{useq4}
\begin{split}
&\Tr \{\phi(H)-\phi(H_0)\} = \Tr\{\phi (\Exp^{\im\psi(Y)}X\Exp^{-\im\psi(Y)}) - \phi(X)\}
= \Tr\{\Exp^{\im\psi(Y)}\phi(X)\Exp^{-\im\psi(Y)}-\phi(X)\}\\
& = \im \Tr \{ \int_0^1 \Exp^{\im s\psi(Y)}[\psi(Y),\phi(X)]\Exp^{-\im s\psi(Y)}ds \}
= \im \Tr \{[\psi(Y),\phi(X)]\},
\end{split}
\end{equation}
where for the second equality we have used functional calculus, for the third equality we have
used equation \eqref{useq1} and for the last equality we have used the cyclicity of trace. Thus by
combining \eqref{useq3} and \eqref{useq4} we have
\begin{equation}\label{useq5}
-\im \Tr \{[\psi(Y),\phi(X)]\} = -\int_a^b \phi'(t)\tilde{\xi}(t;\psi)dt
= \int_a^b \phi '(t)\xi(t;\psi)dt,
\end{equation}
where $\xi(t;\psi)\equiv-\tilde{\xi}(t;\psi)\geq 0$.
Next, consider the operators $H_0=Y$
and $H=\Exp^{\im\phi(X)}Y\Exp^{-\im\phi(X)}$.
Then by a similar calculation we conclude that
\begin{equation}\label{useq16}
H-H_0
=-\im \int_0^1 \Exp^{\im s\phi(X)} [Y,\phi(X)] \Exp^{-\im s \phi(X)} ds \in \mathcal{B}_1(\mathcal{H}),
\end{equation}
since $[Y,\phi(X)]\in \mathcal{B}_1(\mathcal{H})$, by lemma~\ref{lma1}.
Therefore by applying proposition~\ref{prop1} to the above operators $H,H_0$ with the function
$\psi$, we conclude that there exists a unique function $\eta(\phi;\lambda)$ $\in L^1(\mathbb{R})$ such that
$\psi(H)-\psi(H_0)$ is trace class and
\begin{equation}\label{useq6}
\Tr \{\psi(H)-\psi(H_0)\} = \int_a^b \psi '(\lambda)\eta(\phi;\lambda)d\lambda.
\end{equation}
Moreover, from equation \eqref{useq16} we note that $H-H_0\geq 0$, since $-\im [Y,\phi(X)]\geq 0$
by lemma~\ref{lmma2} for $\phi\in C_{1+}^1([a,b])$ or $\mathcal{P}_+([a,b])$. Therefore
from proposition~\ref{prop1} we also note that $\eta(\phi;\lambda)\geq 0$ for almost all
$\lambda \in [a,b]$.
As before if we compute the left hand side of \eqref{useq6} we get
\begin{equation}\label{useq7}
\begin{split}
&\Tr \{\psi(H)-\psi(H_0)\} = \Tr\{\psi (\Exp^{\im\phi(X)}Y\Exp^{-\im\phi(X)}) - \psi(Y)\}
= \Tr\{\Exp^{\im\phi(X)}\psi(Y)\Exp^{-\im\phi(X)}-\psi(Y)\}\\
& = \im \Tr \{ \int_0^1 \Exp^{\im s\phi(X)}[\phi(X),\psi(Y)]\Exp^{-\im s\phi(X)}ds \}
= \im \Tr \{[\phi(X),\psi(Y)]\} = -\im \Tr \{[\psi(Y),\phi(X)]\}.
\end{split}
\end{equation}
Thus by combining \eqref{useq6} and \eqref{useq7} we have
\begin{equation}\label{useq8}
-\im \Tr \{[\psi(Y),\phi(X)]\} = \int_a^b \psi '(\lambda)\eta(\phi;\lambda)d\lambda.
\end{equation}
Therefore the conclusion of the theorem follows from \eqref{useq5} and \eqref{useq8}
for real-valued $\phi, \psi$. The same conclusions can be obtained
for complex-valued functions $\phi,\psi\in C_1^1([a,b])$ or $\mathcal{P}([a,b])$ by decomposing
\[
\phi =\phi_1 + \im \phi_2 \quad \text{and} \quad \psi =\psi_1 +\im \psi_2,
\]
and by applying the conclusion of the
theorem for real valued functions $\phi_1,\phi_2,\psi_1,\psi_2$.
By equation \eqref{useq1}, proposition \ref{prop1} and lemma \ref{lma1}, it follows that
\[
\int_a^b |\xi(t;\psi)|dt \leq \left\|H_0-H\right\|_1\leq \left\|-\im [\psi(Y),X]\right\|_1
\]
and
\[
\int_a^b \xi(t;\psi)dt = \Tr (H_0-H)=\Tr \left(-\im [\psi(Y),X]\right) = \Tr \left(\psi'(Y)D^2\right).
\]
The other results, for $\eta$, follow similarly. This completes the proof.
\end{proof}
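\begin{rmrk}
For example, taking $\psi(\lambda)=\lambda$ in theorem~\ref{thm3}, so that $\psi'\equiv 1$, equation \eqref{krehelhow1} together with the last identity for $\eta$ recovers equation \eqref{useq35} of lemma~\ref{lma1}:
\[
-\im \Tr\{[Y,\phi(X)]\} = \int_a^b \eta(\phi;\lambda)\,d\lambda = \Tr\left(\phi'(X)D^2\right).
\]
\end{rmrk}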
\begin{rmrk}
It is clear from equation \eqref{krehelhow1} that both $\xi(t; \cdot )$ and $\eta(\cdot ;\lambda)$ depend
linearly on $\psi'$ and $\phi'$ respectively, and not on $\psi$ and $\phi$ themselves as the left-hand side
of \eqref{krehelhow1} appears to. Therefore, to avoid confusion it is preferable to replace $\psi'$, $\phi'$
by $\psi$ and $\phi$ respectively, demand that $\psi$, $\phi$ $\in \mathcal{P}([a,b])$, and
consequently replace $\psi$, $\phi$ by their indefinite integrals $\mathcal{J}(\psi)$ and $\mathcal{J}(\phi)$ respectively.
Thus the equation \eqref{krehelhow1} now reads: For $\psi, \phi $ $\in \mathcal{P}([a,b])$
\begin{equation}\label{rmkeq}
\Tr \{-\im \left[\mathcal{J}(\psi)(Y), \mathcal{J}(\phi)(X)\right]\} =
\int_a^b \phi(t) \xi (t;\psi) dt = \int_a^b \psi(\lambda) \eta (\phi;\lambda)d\lambda,
\end{equation}
where we have retained the earlier notation $\xi(t;\psi)$ and $\eta(\phi;\lambda)$. Furthermore, for almost all $t,\lambda \in [a,b]$, the maps
\[
\mathcal{P}([a,b])\ni \psi \longmapsto \xi(t;\psi) \quad \text{and} \quad
\mathcal{P}([a,b])\ni \phi \longmapsto \eta(\phi;\lambda)
\]
are positive linear maps. The next theorem gives $L^1$-estimates for $\xi(\cdot; \psi)$ and $\eta(\phi; \cdot)$ which
allow one to extend these maps to all $\psi, \phi$ $\in C([a,b])$.
\end{rmrk}
\begin{thm}\label{thm1}
Assume $\textbf{(A)}$.
\noindent \textup{(i)} Then the map
\[
\mathcal{P}([a,b])\times \mathcal{P}([a,b])\ni (\psi,\phi) \longmapsto \Tr\{-\im \left[\mathcal{J}(\psi)(Y),
\mathcal{J}(\phi)(X)\right]\}
\]
can be extended as a positive bilinear map on $C([a,b])\times C([a,b])$. Furthermore, if $\Delta,$ $ \Omega$ $\in Borel ([a,b]),$
then
\begin{equation}\label{useq73}
\Tr\{-\im \left[\mathcal{J}(\chi_{_{\Delta}})(Y),
\mathcal{J}(\chi_{_{\Omega}})(X)\right]\} = \int_{\Omega} \xi(t; \Delta)dt = \int_{\Delta} \eta (\Omega;\lambda)d\lambda,
\end{equation}
where we have written $\xi(t;\Delta)$ for $\xi(t; \chi_{_{\Delta}})$ and $\eta(\Omega;\lambda)$ for
$\eta(\chi_{_{\Omega}}; \lambda)$. For almost all fixed $t,\lambda \in [a,b]$, $\xi(t; \cdot)$ and $\eta(\cdot; \lambda)$
are countably additive positive measures such that
\[
\int_a^b \xi(t;\Delta) dt = \Tr \left(\chi_{_{\Delta}}(Y)D^2\right), \int_a^b \eta(\Omega; \lambda) d\lambda =
\Tr \left(\chi_{_{\Omega}}(X)D^2\right),
\]
and
\[
\int_a^b \xi(t; [a,b]) dt = \int_a^b \eta([a,b];\lambda)d\lambda = \Tr (D^2).
\]
\noindent \textup{(ii)} The set functions
\[
Borel([a,b])\ni \Delta \longmapsto \xi (t; \Delta) \quad \text{and} \quad Borel([a,b])\ni \Omega \longmapsto \eta(\Omega; \lambda)
\]
are absolutely continuous with respect to the Lebesgue measures and the Radon-Nikodym derivatives satisfy:
\[
\frac{\xi(t; d\lambda)}{d\lambda} = \frac{\eta(dt;\lambda)}{dt}\equiv r(t,\lambda) \geq 0
\]
for almost all $t,\lambda,$ with $\|r\|_{L^1([a,b]^2)} = \Tr (D^2)$.
\vspace{0.1in}
\noindent \textup{(iii)} The statement of the theorem \ref{thm3} now takes the form: For $\psi, \phi \in C^1([a,b])$
\begin{equation}\label{useq53}
\Tr\{-\im [\psi(Y),\phi(X)]\} = \int_{[a,b]^2} \phi'(t) \psi'(\lambda) r(t,\lambda) dt d\lambda,
\end{equation}
with the unique non-negative $L^1([a,b]^2)$ function $r$, which is sometimes called Carey-Pincus Principal function.
\end{thm}
\begin{proof}
Let $\psi, \phi \in \mathcal{P}([a,b])$; then $\mathcal{J}(\psi)$ and $\mathcal{J}(\phi)$ are also polynomials.
As in \eqref{useq70}, a similar computation with $\psi, \phi \in \mathcal{P}([a,b])$ and if
$\mathcal{J}(\phi)(t) = \sum\limits_{j=0}^n c_jt^j$ leads to
\[
-\im \left[\mathcal{J}(\psi)(Y),\mathcal{J}(\phi)(X)\right] =
-\im \sum_{j=0}^n c_j \left(\sum_{k=0}^{j-1}X^k \left[\mathcal{J}(\psi)(Y),X\right]X^{j-k-1}\right)
\]
and taking trace
\begin{equation}\label{useq71}
\begin{split}
& \Tr\{-\im \left[\mathcal{J}(\psi)(Y),\mathcal{J}(\phi)(X)\right]\}
= -\im \Tr\{\sum_{j=1}^n j c_j X^{j-1}\left(\left[\mathcal{J}(\psi)(Y),X\right]\right)\}\\
&\hspace{5.05cm} = \Tr\{\phi(X) \left(-\im \left[\mathcal{J}(\psi)(Y),X\right]\right)\}
\end{split}
\end{equation}
and interchanging the role of $X$ and $Y$ (along with an associated negative sign) the above is equal to
\begin{equation}\label{useq50}
\Tr\{-\im \left[\mathcal{J}(\psi)(Y),\mathcal{J}(\phi)(X)\right]\}
= \Tr\{\psi(Y)\left(-\im\left[Y,\mathcal{J}(\phi)(X)\right]\right)\},
\end{equation}
and all these expressions are also equal to (by theorem \ref{thm3})
\[
\int_a^b \phi(t) \xi(t; \psi) dt = \int_a^b \psi(\lambda) \eta(\phi; \lambda) d\lambda
\]
for respective $\phi$ and $\psi$. Now let $\phi =\phi_+ - \phi_-$ and $\psi = \psi_+-\psi_-$; then
$\phi_{\pm}$, $\psi_{\pm}$ are all non-negative. The domains of definition of $\phi_{\pm}$ are open sets
which are each a disjoint union of a countable collection of open intervals and furthermore, clearly
\emph{Supp} $\phi_{+}$ $\cap$ \emph{Supp} $\phi_-$ $=$ $ \{t \in [a,b]\,|\, \phi(t) =0\}$, which is a finite discrete set.
Therefore $\phi_+$ and $\phi_-$ and hence $|\phi|=\phi_+ + \phi_-$ are polynomials if $\phi \in \mathcal{P}([a,b])$.
By lemma~\ref{lmma6}, $-\im [\psi(Y),\phi(X)]\in \mathcal{B}_{1\pm}(\mathcal{H})$
according as $\psi, \phi \in \mathcal{P}_{\pm}([a,b])$ respectively.
Therefore by linearity,
\begin{equation*}
\begin{split}
& -\im \left[\mathcal{J}(\psi)(Y),\mathcal{J}(\phi)(X)\right]
= -\im \left[\mathcal{J}(\psi_+)(Y),\mathcal{J}(\phi_+)(X)\right]
+ \im \left[\mathcal{J}(\psi_-)(Y),\mathcal{J}(\phi_+)(X)\right]\\
& \hspace{4.8cm} + \im \left[\mathcal{J}(\psi_+)(Y),\mathcal{J}(\phi_-)(X)\right]
- \im \left[\mathcal{J}(\psi_-)(Y),\mathcal{J}(\phi_-)(X)\right],
\end{split}
\end{equation*}
from which, by using lemma~\ref{lmma6} and equations \eqref{useq71} and \eqref{useq50}, we conclude that
\begin{equation}
\begin{split}
& \left\|-\im \left[\mathcal{J}(\psi)(Y),\mathcal{J}(\phi)(X)\right]\right\|_1\\
& \leq \Tr\{-\im \left[\mathcal{J}(\psi_+)(Y),\mathcal{J}(\phi_+)(X)\right]\}
+ \Tr\{-\im \left[\mathcal{J}(\psi_-)(Y),\mathcal{J}(\phi_+)(X)\right]\} \\
& + \Tr\{-\im \left[\mathcal{J}(\psi_+)(Y),\mathcal{J}(\phi_-)(X)\right]\}
+ \Tr\{- \im \left[\mathcal{J}(\psi_-)(Y),\mathcal{J}(\phi_-)(X)\right]\}\\
& = \Tr \{|\phi|(X) \left(-\im \left[\mathcal{J}(|\psi|)(Y),X\right]\right)\}
\end{split}
\end{equation}
and similarly
\begin{equation}
\left\|-\im \left[\mathcal{J}(\psi)(Y),\mathcal{J}(\phi)(X)\right]\right\|_1
\leq \Tr\{|\psi|(Y)\left(-\im \left[Y,\mathcal{J}(|\phi|)(X)\right]\right)\}.
\end{equation}
These estimates allow us to extend the formulae to $\psi$, $\phi$ $\in C([a,b])$. Indeed,
consider a sequence $\{\phi_m\}$ of polynomials converging to $\phi \in C([a,b])$ uniformly
in $[a,b]$. Then for a fixed $\psi \in \mathcal{P}([a,b])$
\[
\left\|-\im \left[\mathcal{J}(\psi)(Y),\mathcal{J}(\phi_n-\phi_m)(X)\right]\right\|_1
\leq \Tr\{|\phi_n-\phi_m|(X) \left(-\im \left[\mathcal{J}(|\psi|)(Y),X\right]\right)\}
\]
which converges to zero as $n,m\longrightarrow \infty$, since
$\||\phi_n-\phi_m|(X)\| = \|\phi_n-\phi_m\|_{\infty}$ and since $-\im \left[\mathcal{J}(|\psi|)(Y),X\right]
\in \mathcal{B}_1(\mathcal{H})$. Proceeding similarly for $\psi$ with $\phi$ fixed, one concludes that for $\psi$, $\phi$ $\in C([a,b])$
\[
-\im \left[\mathcal{J}(\psi)(Y),\mathcal{J}(\phi)(X)\right] \in \mathcal{B}_1(\mathcal{H}) \quad \text{and}
\]
\begin{equation}
\begin{split}
& \Tr\{-\im \left[\mathcal{J}(\psi)(Y),\mathcal{J}(\phi)(X)\right]\} =
\Tr\{\phi(X) \left(-\im \left[\mathcal{J}(\psi)(Y),X\right]\right)\}\\
&\hspace{5.05cm} = \Tr\{\psi(Y)\left(-\im \left[Y,\mathcal{J}(\phi)(X)\right]\right)\}.
\end{split}
\end{equation}
On the other hand
\begin{equation}
\begin{split}
&\left|\int_a^b [\phi_n(t)-\phi(t)] \xi(t;\psi) dt\right|
\leq \|\phi_n-\phi\|_{\infty} \int_a^b |\xi(t;\psi)|dt \\
& \leq \|\phi_n-\phi\|_{\infty} \left\|-\im \left[\mathcal{J}(\psi)(Y),X\right]\right\|_1,
\end{split}
\end{equation}
and similarly
\begin{equation*}
\begin{split}
&\left|\int_a^b \psi(\lambda) \eta(\phi_n-\phi; \lambda) d\lambda\right|
\leq \|\psi\|_{\infty} \int_a^b |\eta(\phi_n-\phi; \lambda)| d\lambda\\
& \leq \|\psi\|_{\infty} \left\|-\im \left[Y, \mathcal{J}(\phi_n-\phi)(X)\right]\right\|_1
\leq \|\psi\|_{\infty} \Tr\{-\im \left[Y, \mathcal{J}(|\phi_n-\phi|)(X)\right]\},
\end{split}
\end{equation*}
which by lemma \ref{lma1} is equal to
\[
\|\psi\|_{\infty} \Tr\{|\phi_n-\phi|(X) D^2\} \longrightarrow 0 \quad \text{as} \quad n\longrightarrow \infty.
\]
Therefore, one has that all the equal expressions in \eqref{useq71} and \eqref{useq50} are also equal to
\begin{equation}\label{useq72}
\int_a^b \phi(t) \xi(t;\psi) dt = \int_a^b \psi(\lambda) \eta(\phi;\lambda) d\lambda
\end{equation}
for all $\phi\in C([a,b])$ and $\psi \in \mathcal{P}([a,b])$. Now holding $\phi$ fixed in $C([a,b])$ and
approximating $\psi\in C([a,b])$ by a sequence $\{\psi_m\}$ of polynomials, exactly as above, one establishes the equality
\eqref{rmkeq} for all $\phi, \psi \in C([a,b])$.
\vspace{0.1in}
\noindent Next we want to approximate characteristic functions by continuous functions in the same formula. Let
$\{\phi_m\}$ be a sequence of uniformly bounded continuous
functions in $[a,b]$ converging point-wise almost everywhere to $\chi_{_{\Omega}}$ with
$\Omega \in Borel (\sigma(X))$ and let $\{\psi_m\}$ be a sequence similarly approximating $\chi_{_{\Delta}}$
for $\Delta \in Borel(\sigma(Y))$. Then by \eqref{useq72}, we have that for $\psi \in C([a,b])$
\begin{equation*}
\begin{split}
& \int_a^b \left[\phi_n(t) - \chi_{_{\Omega}}(t) \right] \xi(t;\psi) dt
= \Tr\{-\im \left[\mathcal{J}(\psi)(Y),\mathcal{J}(\phi_n-\chi_{_{\Omega}})(X)\right]\}\\
& \hspace{4.4cm} =\Tr\{(\phi_n-\chi_{_{\Omega}})(X)\left(-\im \left[\mathcal{J}(\psi)(Y),X\right]\right)\},
\end{split}
\end{equation*}
which converges to zero as $n\longrightarrow \infty$ since $\phi_n(X)$ converges strongly
to $\chi_{_{\Omega}}(X)$ and since $-\im \left[\mathcal{J}(\psi)(Y),X\right]
\in \mathcal{B}_1(\mathcal{H})$. Thus
\begin{equation}
\begin{split}
\int_{\Omega} \xi(t;\psi) dt = \lim_{n\longrightarrow \infty} \int_a^b \phi_n(t) \xi(t;\psi) dt
= \lim_{n\longrightarrow \infty} \int_a^b \psi(\lambda) \eta(\phi_n;\lambda) d\lambda.
\end{split}
\end{equation}
Now
\begin{equation*}
\begin{split}
\int_a^b |\eta(\phi_n-\chi_{_{\Omega}};\lambda)| d\lambda \leq
\left\|-\im \left[Y,\mathcal{J}(\phi_n-\chi_{_{\Omega}})(X)\right]\right\|_1
\leq \Tr\{|\phi_n-\chi_{_{\Omega}}|(X) D^2\},
\end{split}
\end{equation*}
which converges to zero as $n\longrightarrow \infty$ by exactly the same reasoning as above, and therefore
\begin{equation}
\lim_{n\longrightarrow \infty} \int_a^b \psi(\lambda) \eta (\phi_n;\lambda) d\lambda =
\int_a^b \psi(\lambda) \eta(\chi_{_{\Omega}};\lambda) d\lambda.
\end{equation}
In the equality
\begin{equation*}
\int_{\Omega} \xi(t;\psi_m) dt = \int_a^b \psi_m(\lambda) \eta (\chi_{_{\Omega}}; \lambda) d\lambda
\end{equation*}
with $\{\psi_m\}$ approximating $\chi_{_{\Delta}}$ as described earlier, we get the required equality \eqref{useq73}.
This completes the proof of $\textup(i)$.
\vspace{0.1in}
\noindent \textup{(ii)} From the equality, for $\Omega \in Borel (\sigma(X))$,
$\Delta \in Borel (\sigma(Y)),$
\[
\int_{\Omega} \xi(t;\Delta) dt = \int_{\Delta} \eta(\Omega; \lambda) d\lambda
\]
with $\xi$ and $\eta$ both non-negative, it follows that
\[
\textup{Lebesgue measure}(\Omega) =0 \Longrightarrow \eta(\Omega;\lambda) = 0, \quad \text{and similarly}
\]
\[
\textup{Lebesgue measure}(\Delta) =0 \Longrightarrow \xi(t;\Delta) = 0.
\]
Clearly, for fixed $t,\lambda$, $\xi(t;\cdot)$ and $\eta(\cdot; \lambda)$ are countably additive set
functions. Therefore they are both absolutely continuous with respect to the respective Lebesgue measures, and we set
\[
r(t,\lambda) = \frac{\xi(t;d\lambda)}{d\lambda} = \frac{\eta(dt;\lambda)}{dt} \geq 0.
\]
The uniqueness of $r$ follows from the equation \eqref{useq53} and the fact that $r\in L^1([a,b]^2)$ and that it has compact support.
This completes the proof.
\end{proof}
\noindent Next, we want to compute
\begin{equation*}
\Tr\{[\alpha(X)\psi(Y),\phi(X)]\} = \Tr\{\alpha(X)[\psi(Y),\phi(X)]\}
\end{equation*}
(the equality holds since $\alpha(X)$ and $\phi(X)$ commute) and, by the symmetry between $X$ and $Y$,~$
\Tr\{[\alpha(X)\psi(Y),\beta(Y)]\} = -\Tr\{\psi(Y)[\beta(Y),\alpha(X)]\},$
where $\phi, \psi, \alpha, \beta \in \mathcal{P}([a,b])$; this
constitutes the next theorem.
\begin{thm}\label{thm2}
Let $T$ satisfy $\textbf{(A)}$.
Let $\phi,\psi,\alpha,\beta \in \mathcal{P}([a,b]).$
Then $[\alpha(X)\psi(Y),\phi(X)] \in \mathcal{B}_1(\mathcal{H})$ and
\begin{equation}
\begin{split}
& -\im \Tr\{[\alpha(X)\psi(Y),\phi(X)]\} =
\int_{[a,b]^2}\alpha(t)\phi'(t)\psi'(\lambda)r(t,\lambda)dtd\lambda\\
& =\int_{[a,b]^2} -J(\alpha \psi, \phi)(t,\lambda)r(t,\lambda)dtd\lambda,
\end{split}
\end{equation}
where $r$ is the function obtained in theorem~\ref{thm1} and
\[J(\alpha \psi, \phi)(t,\lambda)=
\frac{\partial}{\partial t}(\alpha(t) \psi(\lambda))\frac{\partial}{\partial \lambda}(\phi(t))
-\frac{\partial}{\partial \lambda}(\alpha(t) \psi(\lambda))\frac{\partial}{\partial t}(\phi(t))\]
is the Jacobian of $\alpha \psi$ and $\phi $ in $[a,b]\times [a,b]\equiv [a,b]^2$.
Similarly, $[\alpha(X)\psi(Y),\beta(Y)]\in \mathcal{B}_1(\mathcal{H})$ and
\begin{equation}
\begin{split}
& -\im \Tr\{[\alpha(X)\psi(Y),\beta(Y)]\}=
\int_{[a,b]^2}-\alpha'(t)\psi(\lambda)\beta'(\lambda)r(t,\lambda)dtd\lambda \\
& =\int_{[a,b]^2} -J(\alpha \psi, \beta)(t,\lambda)r(t,\lambda)dtd\lambda,
\end{split}
\end{equation}
where $r$ is the function obtained in theorem~\ref{thm1} and
\[J(\alpha \psi, \beta)(t,\lambda)=
\frac{\partial}{\partial t}(\alpha(t) \psi(\lambda))\frac{\partial}{\partial \lambda}(\beta(\lambda))
-\frac{\partial}{\partial \lambda}(\alpha(t) \psi(\lambda))\frac{\partial}{\partial t}(\beta(\lambda))\]
is the Jacobian of $\alpha \psi$ and $\beta $ in $[a,b]^2$.
\end{thm}
\begin{proof}
Using \eqref{useq13} we see that
\[
[\alpha(X)\psi(Y),\phi(X)] = \alpha(X)[\psi(Y),\phi(X)]\in \mathcal{B}_1(\mathcal{H}).
\]
Next from \eqref{useq70} we conclude for $\psi,\phi\in \mathcal{P}([a,b])$ that
\begin{equation}\label{useq27}
\begin{split}
-\im \Tr\{[\psi(Y),\phi(X)]\} = -\im \Tr \{\phi'(X) [\psi(Y),X]\}
= \int_a^b \phi'(t) \Tr\left(E^{(X)}(dt)\{-\im [\psi(Y),X]\}\right),
\end{split}
\end{equation}
where we have used the spectral theorem for the self-adjoint operator $X$, with
$E^{(X)} (.)$ the spectral family of $X$. On the other hand, from theorem~\ref{thm1}$(iii)$ we conclude that
\begin{equation}\label{useq28}
-\im \Tr\{[\psi(Y),\phi(X)]\} = \int_{[a,b]^2}
\phi'(t)\psi'(\lambda)r(t,\lambda) dt d\lambda,
\end{equation}
for $\psi,\phi\in \mathcal{P}([a,b])$. Therefore combining \eqref{useq27} and \eqref{useq28}
we get
\begin{equation}\label{useq29}
\int_a^b \phi'(t) \Tr\left(E^{(X)}(dt)\{-\im [\psi(Y),X]\}\right)
= \int_a^b \phi'(t) \left(\int_a^b \psi'(\lambda) r(t,\lambda) d\lambda\right) dt,
\end{equation}
for $\psi,\phi\in \mathcal{P}([a,b])$. Since
\[
\Delta \longmapsto \Tr\left(E^{(X)}(\Delta)\{-\im [\psi(Y),X]\}\right)\quad
(\Delta \subseteq \mathbb{R},~ \text{a Borel subset of}~ \mathbb{R})
\]
is a complex measure with finite total variation and $r\in L^1([a,b]^2)$, we can extend
the above equality \eqref{useq29} to all $\phi\in C([a,b])$, and therefore
\begin{equation}\label{useq30}
\begin{split}
& \int_a^b \phi(t) \Tr\left(E^{(X)}(dt)\{-\im [\psi(Y),X]\}\right)\\
& \hspace{1.5in}
= \int_a^b \phi(t) \left(\int_a^b \psi'(\lambda) r(t,\lambda) d\lambda\right) dt
\quad \text{for all}\quad
\phi\in C([a,b]).
\end{split}
\end{equation}
Thus by approximating the characteristic function $\chi_{_{\Delta}}$ (for a Borel subset
$\Delta \subseteq \mathbb{R}$) by continuous functions, we conclude from
the above equation \eqref{useq30}
that
\begin{equation}
\Tr\left(E^{(X)}(\Delta)\{-\im [\psi(Y),X]\}\right)
= \int_{t\in \Delta} dt \left(\int_a^b \psi'(\lambda) r(t,\lambda) d\lambda\right),
\end{equation}
which shows that the measure
\[
\Delta \longrightarrow \Tr\left(E^{(X)}(\Delta)\{-\im [\psi(Y),X]\}\right)
\]
is absolutely continuous with respect to the Lebesgue measure $dt$ and
\begin{equation}\label{useq31}
\Tr\left(E^{(X)}(dt)\{-\im [\psi(Y),X]\}\right)
= \left(\int_a^b \psi'(\lambda) r(t,\lambda) d\lambda\right) dt.
\end{equation}
As in \eqref{useq71}, a similar computation with $\psi, \phi \in \mathcal{P}([a,b])$ and
$\phi(t) = \sum\limits_{j=0}^nb_jt^j$ leads to
\begin{equation}\label{useq32}
\begin{split}
& -\im \Tr\{[\alpha(X)\psi(Y),\phi(X)]\} = -\im \Tr\{\alpha(X)[\psi(Y),\phi(X)]\}
= -\im \Tr\{\alpha(X)\sum_{j=0}^nb_j[\psi(Y),X^j]\}\\
& = -\im \sum_{j=0}^nb_j \sum_{k=0}^{j-1}\Tr\{\alpha(X) X^k[\psi(Y),X]X^{j-k-1}\}
=-\im \sum_{j=0}^nb_j \sum_{k=0}^{j-1}\Tr\{\alpha(X) X^{j-1}[\psi(Y),X]\}\\
& = -\im \Tr\{ \alpha(X) \sum_{j=1}^n j b_j X^{j-1}[\psi(Y),X]\}
= -\im \Tr\{ \alpha(X) \phi'(X)[\psi(Y),X]\}\\
& = -\im \Tr\{\int_a^b \alpha(t) \phi'(t) E^{(X)}(dt)[\psi(Y),X]\}
= \int_a^b \alpha(t) \phi'(t) \Tr\left(E^{(X)}(dt)\{-\im [\psi(Y),X]\}\right),
\end{split}
\end{equation}
where we have used the cyclicity of the trace and, in the last two equalities, the spectral
theorem for the self-adjoint operator $X$, with
$E^{(X)} (.)$ the spectral family of $X$. Next by combining \eqref{useq31}
and \eqref{useq32} we conclude that
\begin{equation*}
\begin{split}
& -\im \Tr\{[\alpha(X)\psi(Y),\phi(X)]\} =
\int_a^b \alpha(t) \phi'(t) \left(\int_a^b \psi'(\lambda) r(t,\lambda) d\lambda\right) dt\\
& =\int_{[a,b]^2}\alpha(t)\phi'(t)\psi'(\lambda)r(t,\lambda)dtd\lambda
=\int_{[a,b]^2} -J(\alpha \psi, \phi)(t,\lambda)r(t,\lambda)dtd\lambda.
\end{split}
\end{equation*}
Next by interchanging the role of $X$ and $Y$ in the above calculations, we get that
\begin{equation*}
\begin{split}
& -\im \Tr\{[\alpha(X)\psi(Y),\beta(Y)]\}=
\int_{[a,b]^2}-\alpha'(t)\psi(\lambda)\beta'(\lambda)r(t,\lambda)dtd\lambda \\
& =\int_{[a,b]^2} -J(\alpha \psi, \beta)(t,\lambda)r(t,\lambda)dtd\lambda.
\end{split}
\end{equation*}
This completes the proof.
\end{proof}
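\begin{rmrk}
Since $\phi$ depends only on $t$ and $\beta$ only on $\lambda$, the Jacobians appearing in theorem~\ref{thm2} collapse:
\[
J(\alpha\psi,\phi)(t,\lambda) = -\alpha(t)\phi'(t)\psi'(\lambda)
\quad \text{and} \quad
J(\alpha\psi,\beta)(t,\lambda) = \alpha'(t)\psi(\lambda)\beta'(\lambda),
\]
so the two displayed formulas in the theorem are simply the Jacobian forms of the integrals on their right-hand sides.
\end{rmrk}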
\begin{rmrk}
If $T$ satisfies $\textbf{(A)}$, then the conclusion of the above theorem \ref{thm2}
can also be obtained for $\phi,\psi,\alpha,\beta \in C_1^1([a,b]).$
\end{rmrk}
\noindent The next theorem effectively replaces the so-called
\textquotedblleft Wallach's Collapse Theorem\textquotedblright\
\cite{martinputinar}.
\begin{thm}\label{Wthm}
Let $T$ be as in the statement of theorem~\ref{thm2}.
Let $\phi,\psi, \alpha,\beta \in \mathcal{P}([a,b])$.
Then the following identity holds:
\begin{equation*}
-\im \Tr \{[\alpha(X) \psi(Y),\phi(X) \beta(Y)]\}
= \int_{[a,b]^2} -J(\alpha \psi, \phi \beta)(t,\lambda) r(t,\lambda) dt d\lambda,
\end{equation*}
where
\[
J(\alpha \psi, \phi \beta)(t,\lambda) =
\frac{\partial}{\partial t}(\alpha(t) \psi(\lambda))
\frac{\partial}{\partial \lambda}(\phi(t) \beta(\lambda))
-\frac{\partial}{\partial \lambda}(\alpha(t) \psi(\lambda))
\frac{\partial}{\partial t}(\phi(t) \beta(\lambda))
\]
is the Jacobian of $\alpha \psi$ and $\phi \beta$ in $[a,b]^2$.
\end{thm}
\begin{proof}
By simple computation we get the following
\begin{equation*}
\begin{split}
& [\alpha(X) \psi(Y),\phi(X) \beta(Y)]- \{
[\alpha(X)(\psi \beta)(Y), \phi(X)]+[(\alpha \phi)(X)\psi(Y),\beta(Y)]\}\\
& = \alpha(X) \psi(Y) \phi(X) \beta(Y) - \phi(X) \beta(Y) \alpha(X) \psi(Y)
- \alpha(X) \psi(Y) \beta(Y) \phi(X) \\
& + \beta(Y) \alpha(X) \phi(X) \psi(Y)\\
& = \alpha(X) \psi(Y) [\phi(X), \beta(Y) ] - [\phi(X), \beta(Y) ] \alpha(X) \psi(Y)
\in \mathcal{B}_1(\mathcal{H}),
\end{split}
\end{equation*}
where we have used that $\alpha(X)$ and $\phi(X)$ commute, and the trace-class membership follows from equation \eqref{useq13}; therefore
\begin{equation}
\begin{split}
& \Tr\{[\alpha(X) \psi(Y),\phi(X) \beta(Y)]- (
[\alpha(X)(\psi \beta)(Y), \phi(X)]+[(\alpha \phi)(X)\psi(Y),\beta(Y)]) \}\\
& = \Tr \{\alpha(X) \psi(Y) [\phi(X), \beta(Y) ] - [\phi(X), \beta(Y) ] \alpha(X) \psi(Y) \} = 0,
\end{split}
\end{equation}
where we have used the cyclicity of trace and the fact that
$[\phi(X), \beta(Y) ] \in \mathcal{B}_1(\mathcal{H})$.
Thus we have shown that
\begin{equation}\label{useq9}
-\im \Tr\{[\alpha(X) \psi(Y),\phi(X) \beta(Y)]\}
= -\im \Tr\{[\alpha(X)(\psi \beta)(Y), \phi(X)]\} -\im \Tr\{ [(\alpha \phi)(X)\psi(Y),\beta(Y)]\}.
\end{equation}
Therefore by using theorem~\ref{thm2} we compute the right hand side of \eqref{useq9} to get
\begin{equation}\label{useq10}
\begin{split}
&-\im \Tr\{[\alpha(X)(\psi \beta)(Y), \phi(X)]\} -\im \Tr\{ [(\alpha \phi)(X)\psi(Y),\beta(Y)]\}\\
& = \int_{[a,b]^2} \alpha(t) \phi'(t)(\psi \beta)'(\lambda) r(t,\lambda)dt d\lambda
- \int_{[a,b]^2} (\alpha \phi)'(t) \psi(\lambda)\beta'(\lambda) r(t,\lambda)dt d\lambda\\
& = \int_{[a,b]^2} -J(\alpha \psi, \phi \beta)(t,\lambda) r(t,\lambda) dt d\lambda.
\end{split}
\end{equation}
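The last equality in \eqref{useq10} is simply the product rule: the cross terms
$\alpha(t)\phi'(t)\psi(\lambda)\beta'(\lambda)$ cancel, since
\begin{equation*}
\alpha(t) \phi'(t)(\psi \beta)'(\lambda) - (\alpha \phi)'(t) \psi(\lambda)\beta'(\lambda)
= \alpha(t)\phi'(t)\psi'(\lambda)\beta(\lambda) - \alpha'(t)\phi(t)\psi(\lambda)\beta'(\lambda)
= -J(\alpha \psi, \phi \beta)(t,\lambda).
\end{equation*}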
Therefore combining \eqref{useq9} and \eqref{useq10} we get
\begin{equation*}
-\im \Tr\{[\alpha(X) \psi(Y),\phi(X) \beta(Y)]\}
= \int_{[a,b]^2} -J(\alpha \psi, \phi \beta)(t,\lambda) r(t,\lambda) dt d\lambda.
\end{equation*}
This completes the proof.
\end{proof}
Now we are in a position to state our main result, the Helton-Howe-Carey-Pincus trace formula \cite{alexpeller, clancybook, heltonhowe73,heltonhowe75,martinputinar}.
\begin{thm}
Let $\Psi(t,\lambda) = \sum\limits_{j=1}^n c_j \alpha_j(t) \psi_j(\lambda)$
and $\Phi(t,\lambda) = \sum\limits_{k=1}^m d_k \phi_k(t) \beta_k(\lambda)$,
where $m, n \in \mathbb{N}$ and $\alpha_j,$ $\psi_j,$ $\phi_k,$ $\beta_k$ are
all in $\mathcal{P}([a,b])$. Then $-\im \left[\Psi(X,Y), \Phi(X,Y)\right]$
$\in \mathcal{B}_1(\mathcal{H})$ and
\[
\Tr\{-\im \left[\Psi(X,Y), \Phi(X,Y)\right]\}
= \int_{[a,b]^2} -J(\Psi,\Phi)(t,\lambda) r(t,\lambda) dt d\lambda.
\]
\end{thm}
\begin{proof}
The proof follows easily by applying theorem~\ref{Wthm} and the fact that
\[
\Tr\{-\im \left[\Psi(X,Y), \Phi(X,Y)\right]\}
= \sum\limits_{j=1}^n \sum\limits_{k=1}^m c_jd_k
\Tr\{-\im \left[\alpha_j(X) \psi_j(Y), \phi_k(X) \beta_k(Y)\right]\}.
\]
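Indeed, the Jacobian is bilinear in its two arguments, so that
\[
J(\Psi,\Phi)(t,\lambda)
= \sum\limits_{j=1}^n \sum\limits_{k=1}^m c_jd_k\, J(\alpha_j \psi_j, \phi_k \beta_k)(t,\lambda),
\]
and applying theorem~\ref{Wthm} to each summand gives the stated formula.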
\end{proof}
\noindent\textbf{Acknowledgement:} The first author is grateful to ISI, Bangalore Centre and IIT, Guwahati for
warm hospitality and the second author
is grateful to Jawaharlal Nehru Centre for Advanced Scientific Research
and SERB-Distinguished Fellowship for support.
\section{Introduction}
The series of double-layered ruthenates, Sr$_{3-x}$Ca$_x$Ru$_2$O$_7$ ($0 \leq x \leq 3$),
possesses a variety of phases under a magnetic field ($H$) or pressure ($P$).
One end-member of the series, Sr$_3$Ru$_2$O$_7$, shows the magnetic field-tuned quantum criticality,
which is accompanied by the metamagnetic transition around $H\sim 7.85$T.~\cite{Grigera2001} The metamagnetic transition,
which was initially observed by Cao \textit{et al.}~\cite{Cao1997_1}, was revealed to be a double transition,
and its second transition in the higher-field is sensitive to the field angle.~\cite{Ohmichi2003}
Without an applied magnetic field, Sr$_3$Ru$_2$O$_7$ behaves as a Fermi liquid at low temperature.~\cite{SIIkeda2000}
Angle-resolved photo-emission spectroscopy (ARPES) has been used to observe its well-defined Fermi surfaces,~\cite{Puchkov1998_1}
and neutron diffraction measurements revealed the absence of long-range magnetic order.~\cite{Huang1998}
Meanwhile, inelastic neutron scattering has been used to observe two-dimensional ferromagnetic fluctuations
as incommensurate peaks attributed to Fermi surface nesting.~\cite{Capogna2003}
These fluctuations induce ferromagnetism when uniaxial pressures along the $c$-axis are applied.~\cite{SIIkeda2001,SIIkeda2004}
Since neutron diffraction analysis displays the temperature and
pressure effects on the crystal structure,~\cite{Shaked2000}
the phase transition induced by uniaxial pressures suggests that the ferromagnetic fluctuations are susceptible to structural changes.
This situation is similar to the ferromagnetic ground state at the surface of
Sr$_2$RuO$_4$, as observed by scanning tunneling microscopy (STM).~\cite{Matzdorf2000}
In Sr$_2$RuO$_4$, the ferromagnetic ground state arises due to a perturbative lattice distortion at the surface, that is, an
in-plane rotation of the RuO$_6$ octahedron produced by the surface strain.
Another end-member of the series, Ca$_3$Ru$_2$O$_7$, shows two transitions at
$T_M=48$K and $T_N=56$K.~\cite{Cao1997_2}
While $T_N$ has been confirmed as an antiferromagnetic ordering temperature,~\cite{Liu1999}
$T_M$ was believed to be a Mott-like metal-insulator transition temperature, although quantum oscillations
in the $c$-axis resistivity $\rho_c$ for $H \parallel c$ are observed below $T_M$.~\cite{Cao2003_1,Cao2003_2}
The electrical resistivity and optical conductivity spectra
of single crystals grown by a floating-zone method proved that the ground state
of Ca$_3$Ru$_2$O$_7$ is quasi-two-dimensional metallic.~\cite{YYoshida2004,JSLee2004}
Furthermore, the magnetostriction data for the single crystals demonstrated that
the first-order transition at $T_M$ can be attributed to a discontinuous change in the lattice
constants.~\cite{Ohmichi2004} The quantum oscillations observed in Ca$_3$Ru$_2$O$_7$
have also shown that its ground state is metallic with low-carrier density.~\cite{Kikugawa2010}
The magnetic structure of Ca$_3$Ru$_2$O$_7$ in the ground state was clarified using neutron diffraction analysis:
the magnetic moments align ferromagnetically within the double layer and antiferromagnetically between
the double layers.~\cite{YYoshida2005} While these magnetic moments lie along the $b$-axis for $T<T_M$,
the first-order transition at $T_M$ changes their directions to align with the $a$-axis for
$T>T_M$.~\cite{Bohnenbuck2008,Bao2008}
Moreover, the two transitions at $T_N$ and $T_M$ are respectively weakened by the pressures along the $c$-axis
and those within the $ab$-plane.~\cite{YYoshida2008} These results indicate that
the magnetic properties of Ca$_3$Ru$_2$O$_7$ are also susceptible to structural changes.
The concentration range $0 < x < 3$ of the Sr$_{3-x}$Ca$_x$Ru$_2$O$_7$ series has been
investigated in addition to the end members.~\cite{Cao1997_3,SIIkeda1998,Puchkov1998_2,Iwata2008,Qu2008,Qu2009,Peng2010}
For the intermediate $x$ of this range, the system exhibits a variety of spin structures:
a cluster spin-glass phase for $0.24 \lesssim x \lesssim 1.2$,~\cite{Iwata2008,Qu2008,Qu2009}
and a canted antiferromagnetic phase for $1.2 \lesssim x \lesssim 2.0$.~\cite{SIIkeda1998}
By comparing the Weiss temperatures for $H \parallel ab$ with those for $H \parallel c$, Iwata \textit{et al.}~\cite{Iwata2008}
elucidate that the magnetic easy axis changes continuously from the $ab$-plane to the $c$-axis with decreasing
$x$. Peng \textit{et al.}~\cite{Peng2010} confirm that the magnetic easy axis is the $b$-axis for $x=2.4$ and $x=3.0$.
This result for $x=3.0$ is consistent with the result of a neutron diffraction analysis for
Ca$_3$Ru$_2$O$_7$.~\cite{YYoshida2005} Moreover, it has been reported that lattice constants vary with
$x$ in Sr$_{3-x}$Ca$_x$Ru$_2$O$_7$.~\cite{Iwata2008,Peng2010}
Sr$_{3-x}$Ca$_x$Ru$_2$O$_7$ ($0 \leq x \leq 3$) has also attracted a great deal of theoretical interest.
The band structures of its end-members, Sr$_3$Ru$_2$O$_7$ and Ca$_3$Ru$_2$O$_7$, have been investigated
with a local density approximation~\cite{Hase1997} or with the local spin density approximation (LSDA).
~\cite{Singh2001,Singh2006} In particular, the magnetic field-tuned metamagnetic transition of Sr$_3$Ru$_2$O$_7$
has been intensively studied as one of the electronic nematic phase transitions on the basis of microscopic theories.
~\cite{Kee2005,Puetter2007,Puetter2010,Raghu2009,WCLee2009,WCLee2010,Fischer2010} The field-induced
orbital-ordered phase of Ca$_3$Ru$_2$O$_7$ was also investigated on the basis of the spin/orbital model.~\cite{Forte2010}
However, few theoretical studies have investigated Sr$_{3-x}$Ca$_x$Ru$_2$O$_7$ ($0 < x < 3$),
while a number of theoretical analyses have been performed for the series of single-layered ruthenates,
~\cite{Nomura2000,Hotta2001,Eremin2002,Kurokawa2002,Mizokawa2004,Okamoto2004,Oguchi2009,Kita2009}
Ca$_{2-x}$Sr$_x$RuO$_4$ ($0 \leq x \leq 2$), which exhibit the Mott transition at $x \simeq 0.2$.~\cite{Nakatsuji2000}
These theoretical analyses of the single-layered ruthenates have utilized the two-dimensional multiband Hubbard model.
Meanwhile, in order to understand the series of double-layered ruthenates, we need to consider its three-dimensionality.
The intrinsic importance of the three-dimensionality can be easily found in the experimental results introduced above
(e.g., the magnetic structure of Ca$_3$Ru$_2$O$_7$). In this paper, we investigate the electronic and magnetic structures of the double-layered ruthenate
Sr$_{3-x}$Ca$_x$Ru$_2$O$_7$ ($0 \leq x \leq 3$) on the basis of the three-dimensional (3D) multiband Hubbard model.
Fully considering possible unequivalent sites and the spin-orbit interaction (SOI),
we determine the ground state of our model for each lattice distortion within the unrestricted Hartree-Fock (UHF) approximation.
Then, we find that the change in lattice distortion severely affects the electronic and magnetic structures.
Our results suggest that the many physical phenomena of Sr$_{3-x}$Ca$_x$Ru$_2$O$_7$ ($0 \leq x \leq 3$) have a critical relationship with the change in
the lattice distortion.
\section{Formulation}
\begin{figure}
\includegraphics[width=15.6cm]{65055Fig1.eps}
\caption{\label{figure:1}(Color online) Unit cell of our 3D Hubbard model:
(left) projection onto the $a^\prime c$-plane,
(center) projection onto the $a^\prime b^\prime(ab)$-plane, and (right) projection onto the $b^\prime c$-plane.
The diamonds with solid circles(squares) represent the RuO$_6$ octahedrons on the A(B) sublattice.
The Ru sites in these octahedrons are indicated by solid circles. L$i$ indicates the
$i$-th layer ($i=1,2,3,4$). $\phi$ and $\theta$ represent the rotation and tilting angles of the RuO$_6$ octahedron, respectively.}
\end{figure}
Our 3D Hubbard model with lattice distortion (Fig.~\ref{figure:1}) consists of A and B sublattices.
We consider all three $t_{2g}$ orbitals of the Ru $4d$ electrons located on these sublattices on the $i$-th layer
($i=1,2,3,4$).
Thus, our 3D Hubbard model Hamiltonian, $\hat{H}$, is composed as follows:
\begin{eqnarray}
\hat{H} & = & \sum_{i=1}^4\sum_{j=1}^4\sum_{{\mib k}}\sum_{\sigma}\left[\hat{A}^\dagger_{i{\mib k}\sigma}\hat{h}^{AA}_{ij{\mib k}}\hat{A}_{j{\mib k}\sigma}+\hat{A}^\dagger_{i{\mib k}\sigma}\hat{h}^{AB}_{ij{\mib k}}\hat{B}_{j{\mib k}\sigma}+\hat{B}^\dagger_{i{\mib k}\sigma}\hat{h}^{BA}_{ij{\mib k}}\hat{A}_{j{\mib k}\sigma}+\hat{B}^\dagger_{i{\mib k}\sigma}\hat{h}^{BB}_{ij{\mib k}}\hat{B}_{j{\mib k}\sigma}\right] \nonumber \\
& & +\sum_{i=1}^4\sum_{{\mib k}}\sum_{\sigma\sigma^\prime}\left[\hat{A}^\dagger_{i{\mib k}\sigma}\hat{l}^{A}_{\sigma\sigma^\prime}\hat{A}_{i{\mib k}\sigma^\prime}+\hat{B}^\dagger_{i{\mib k}\sigma}\hat{l}^{B}_{\sigma\sigma^\prime}\hat{B}_{i{\mib k}\sigma^\prime}\right] \nonumber \\
& & +\hat{H}^\prime -\mu\sum_{i=1}^4\sum_{{\mib k}}\sum_{\sigma}\left[\hat{A}^\dagger_{i{\mib k}\sigma}\hat{A}_{i{\mib k}\sigma}+\hat{B}^\dagger_{i{\mib k}\sigma}\hat{B}_{i{\mib k}\sigma}\right].
\label{eq:1}
\end{eqnarray}
Here we use the abbreviations
$\hat{A}^\dagger_{i{\mib k}\sigma}\equiv\left(A^{yz\dagger}_{i{\mib k}\sigma}\,A^{zx\dagger}_{i{\mib k}\sigma}\,A^{xy\dagger}_{i{\mib k}\sigma}\right)$,
$\hat{A}_{i{\mib k}\sigma}\equiv\,^t\!\left(A^{yz}_{i{\mib k}\sigma}\,A^{zx}_{i{\mib k}\sigma}\,A^{xy}_{i{\mib k}\sigma}\right)$,
$\hat{B}^\dagger_{i{\mib k}\sigma}\equiv\left(B^{yz\dagger}_{i{\mib k}\sigma}\,B^{zx\dagger}_{i{\mib k}\sigma}\,B^{xy\dagger}_{i{\mib k}\sigma}\right)$, and
$\hat{B}_{i{\mib k}\sigma}\equiv\,^t\!\left(B^{yz}_{i{\mib k}\sigma}\,B^{zx}_{i{\mib k}\sigma}\,B^{xy}_{i{\mib k}\sigma}\right)$
, where $A^{\varphi}_{i{\mib k}\sigma} (A^{\varphi\dagger}_{i{\mib k}\sigma})$ and $B^{\varphi}_{i{\mib k}\sigma} (B^{\varphi\dagger}_{i{\mib k}\sigma})$ are the annihilation (creation) operators for the electron in the A and B sublattice on the $i$-th layer ($i=1,2,3,4$), as specified by orbital $\varphi=\{yz,zx,xy\}$, momentum ${\mib k}$, and spin $\sigma=\{\uparrow,\downarrow\}$, respectively. $\mu$ is the chemical potential. The nonvanishing $\hat{h}^{AA}_{ij{\mib k}}$, $\hat{h}^{AB}_{ij{\mib k}}$, $\hat{h}^{BA}_{ij{\mib k}}$, and $\hat{h}^{BB}_{ij{\mib k}}$ in eq.~(\ref{eq:1}) are
\begin{equation}
\hat{h}^{AA}_{ii{\mib k}} = \hat{h}^{BB}_{ii{\mib k}}
= \left(\begin{array}{ccc}
0 & \lambda_{\mib k} & 0 \\
\lambda_{\mib k} & 0 & 0 \\
0 & 0 & \epsilon_{\mib k}
\end{array}
\right)\,(i=1,2,3,4),
\end{equation}
\begin{equation}
\hat{h}^{AB}_{ii{\mib k}} = \hat{h}^{BA}_{ii{\mib k}}
= \left(\begin{array}{ccc}
t^{yz}_{ \mib k} & 0 & 0 \\
0 & t^{zx}_{\mib k} & 0 \\
0 & 0 & t^{xy}_{\mib k}
\end{array}
\right)\,(i=1,2,3,4),
\end{equation}
\begin{equation}
\hat{h}^{AB}_{12{\mib k}} = \hat{h}^{BA}_{12{\mib k}} = \left[\hat{h}^{AB}_{21{\mib k}}\right]^* = \left[\hat{h}^{BA}_{21{\mib k}}\right]^* = \hat{h}^{AB}_{34{\mib k}} = \hat{h}^{BA}_{34{\mib k}} = \left[\hat{h}^{AB}_{43{\mib k}}\right]^* = \left[\hat{h}^{BA}_{43{\mib k}}\right]^*
= \left(\begin{array}{ccc}
c_{\mib k}^z & 0 & 0 \\
0 & c_{\mib k}^z & 0 \\
0 & 0 & 0
\end{array}
\right),
\end{equation}
\begin{equation}
\hat{h}^{AA}_{32{\mib k}} = \hat{h}^{BB}_{32{\mib k}} = \left[\hat{h}^{AA}_{23{\mib k}}\right]^* = \left[\hat{h}^{BB}_{23{\mib k}}\right]^*
= \hat{h}^{AA}_{14{\mib k}} = \hat{h}^{BB}_{14{\mib k}} = \left[\hat{h}^{AA}_{41{\mib k}}\right]^* = \left[\hat{h}^{BB}_{41{\mib k}}\right]^*
= \left(\begin{array}{ccc}
c_{\mib k}^y & 0 & 0 \\
0 & c_{\mib k}^y & 0 \\
0 & 0 & 0
\end{array}
\right),
\end{equation}
and
\begin{equation}
\hat{h}^{AB}_{32{\mib k}} = \hat{h}^{BA}_{32{\mib k}} = \left[\hat{h}^{AB}_{23{\mib k}}\right]^* = \left[\hat{h}^{BA}_{23{\mib k}}\right]^*
= \hat{h}^{AB}_{14{\mib k}} = \hat{h}^{BA}_{14{\mib k}} = \left[\hat{h}^{AB}_{41{\mib k}}\right]^* = \left[\hat{h}^{BA}_{41{\mib k}}\right]^*
= \left(\begin{array}{ccc}
c_{\mib k}^x & 0 & 0 \\
0 & c_{\mib k}^x & 0 \\
0 & 0 & 0
\end{array}
\right),
\end{equation}
where we use the abbreviations
\begin{eqnarray}
\label{eq:12}
c^x_{\mib k} & = & -2t_\perp^\prime e^{{\mathrm i}(3k_z/10)} \cos\frac{k_x}{2}, \\
c^y_{\mib k} & = & -2t_\perp^\prime e^{{\mathrm i}(3k_z/10)} \cos\frac{k_y}{2}, \\
c^z_{\mib k} & = & -t_\perp (\cos\phi\cos2\theta)^2 e^{-{\mathrm i}(k_z/5)},
\label{eq:13}
\end{eqnarray}
\begin{eqnarray}
\label{eq:14}
t^{xy}_{\mib k} & = & -2t_1(\cos2\phi\cos\theta)^2\!\left[\cos \frac{k_x+k_y}{2} +\cos \frac{k_x-k_y}{2}\right], \\
\label{eq:17}
t^{yz}_{\mib k} & = & -2t_4\cos \frac{k_x+k_y}{2}-2t_3(\cos\phi \cos2\theta)^2\cos \frac{k_x-k_y}{2}, \\
\label{eq:18}
t^{zx}_{\mib k} & = & -2t_3(\cos\phi\cos2\theta)^2 \cos \frac{k_x+k_y}{2}-2t_4 \cos \frac{k_x-k_y}{2},
\end{eqnarray}
\begin{equation}
\epsilon_{\mib k} = -2t_2\left(\cos k_x+\cos k_y\right)-\Delta,
\label{eq:16}
\end{equation}
and
\begin{equation}
\lambda_{\mib k} = 2\lambda_0(\cos k_x-\cos k_y).
\label{eq:15}
\end{equation}
In eqs.~(\ref{eq:12})--(\ref{eq:13}) $t_\perp$ and $t_\perp^\prime$ represent inter-layer transfers, and in eqs.~(\ref{eq:14})--(\ref{eq:15}), $t_1$, $t_2$, $t_3$, $t_4$, and $\lambda_0$ represent intra-layer transfers. $\Delta$ in eq.~(\ref{eq:16}) represents the energy level difference between the $d_{xy}$ and $d_{yz}(d_{zx})$ orbitals due to the crystal field.
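For concreteness, these matrix elements can be evaluated numerically; the following Python sketch (the function name and interface are ours, purely illustrative) implements eqs.~(\ref{eq:12})--(\ref{eq:13}) and (\ref{eq:14})--(\ref{eq:15}), with angles in radians and default parameters taken from the first row of Table~\ref{table:1}.
\begin{verbatim}
import numpy as np

def hopping_elements(kx, ky, kz, phi, theta,
                     t1=0.40, t2=0.08, t3=0.40, t4=0.04,
                     t_perp=0.24, t_perp_p=0.04, lam0=0.04, Delta=0.00):
    """Scalar entries of the one-particle part of the Hamiltonian."""
    r_xy = (np.cos(2 * phi) * np.cos(theta)) ** 2    # d_xy renormalization
    r_yzzx = (np.cos(phi) * np.cos(2 * theta)) ** 2  # d_yz/d_zx renormalization
    cx = -2 * t_perp_p * np.exp(1j * 3 * kz / 10) * np.cos(kx / 2)
    cy = -2 * t_perp_p * np.exp(1j * 3 * kz / 10) * np.cos(ky / 2)
    cz = -t_perp * r_yzzx * np.exp(-1j * kz / 5)
    t_xy = -2 * t1 * r_xy * (np.cos((kx + ky) / 2) + np.cos((kx - ky) / 2))
    t_yz = (-2 * t4 * np.cos((kx + ky) / 2)
            - 2 * t3 * r_yzzx * np.cos((kx - ky) / 2))
    t_zx = (-2 * t3 * r_yzzx * np.cos((kx + ky) / 2)
            - 2 * t4 * np.cos((kx - ky) / 2))
    eps = -2 * t2 * (np.cos(kx) + np.cos(ky)) - Delta
    lam = 2 * lam0 * (np.cos(kx) - np.cos(ky))
    return cx, cy, cz, t_xy, t_yz, t_zx, eps, lam
\end{verbatim}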
The terms $\hat{l}^{A}_{\sigma\sigma^\prime}$ and $\hat{l}^{B}_{\sigma\sigma^\prime}$ in eq.~(\ref{eq:1}),
arising from the spin-orbit interaction, are determined from formulas that depend on the choice of the spin-quantization axis. Here we only
consider collinear spin states, with the five different spin-quantization axes as candidates for
the most stable states. These five are the $c$-, $a^\prime$-, $b^\prime$-,
$a$-, and $b$-axes. When we consider the state with the spin-quantization axis parallel to the $c$-axis,
we represent $\hat{l}^{A}_{\sigma\sigma^\prime}$ and $\hat{l}^{B}_{\sigma\sigma^\prime}$ by $\hat{l}^{A(c)}_{\sigma\sigma^\prime}$ and $\hat{l}^{B(c)}_{\sigma\sigma^\prime}$, respectively. These are defined as follows:
\begin{equation}
\hat{l}^{A(c)}_{\uparrow\uparrow} = -\hat{l}^{A(c)}_{\downarrow\downarrow}
= \hat{l}^{B(c)}_{\uparrow\uparrow} = -\hat{l}^{B(c)}_{\downarrow\downarrow}
= \left(\begin{array}{ccc}
0 & \frac{i}{2}\zeta & 0 \\
-\frac{i}{2}\zeta & 0 & 0 \\
0 & 0 & 0
\end{array}
\right)
\end{equation}
and
\begin{equation}
\hat{l}^{A(c)}_{\uparrow\downarrow} = -\left[\hat{l}^{A(c)}_{\downarrow\uparrow}\right]^*
= \hat{l}^{B(c)}_{\uparrow\downarrow} = -\left[\hat{l}^{B(c)}_{\downarrow\uparrow}\right]^*
= \left(\begin{array}{ccc}
0 & 0 & -\frac{1}{2}\zeta \\
0 & 0 & \frac{i}{2}\zeta \\
\frac{1}{2}\zeta & -\frac{i}{2}\zeta & 0
\end{array}
\right).
\end{equation}
Similarly, when we consider the state with the spin-quantization axis parallel to the $a^\prime$-axis, we have
\begin{equation}
\hat{l}^{A(a^\prime)}_{\uparrow\uparrow} = -\hat{l}^{A(a^\prime)}_{\downarrow\downarrow}
= \hat{l}^{B(a^\prime)}_{\uparrow\uparrow} = -\hat{l}^{B(a^\prime)}_{\downarrow\downarrow}
= \cos\phi \left(\begin{array}{ccc}
0 & 0 & 0 \\
0 & 0 & \frac{i}{2}\zeta \\
0 & -\frac{i}{2}\zeta & 0
\end{array}
\right)
\label{eq:2}
\end{equation}
and
\begin{equation}
\hat{l}^{A(a^\prime)}_{\uparrow\downarrow} = -\left[\hat{l}^{A(a^\prime)}_{\downarrow\uparrow}\right]^*
= \hat{l}^{B(a^\prime)}_{\uparrow\downarrow} = -\left[\hat{l}^{B(a^\prime)}_{\downarrow\uparrow}\right]^*
= \cos\phi \left(\begin{array}{ccc}
0 & \frac{1}{2}\zeta & -\frac{i}{2}\zeta \\
-\frac{1}{2}\zeta & 0 & 0 \\
\frac{i}{2}\zeta & 0 & 0
\end{array}
\right),
\label{eq:3}
\end{equation}
and when we consider the state with the spin-quantization axis parallel to the $b^\prime$-axis, we have
\begin{equation}
\hat{l}^{A(b^\prime)}_{\uparrow\uparrow} = -\hat{l}^{A(b^\prime)}_{\downarrow\downarrow}
= \hat{l}^{B(b^\prime)}_{\uparrow\uparrow} = -\hat{l}^{B(b^\prime)}_{\downarrow\downarrow}
= \cos\phi \left(\begin{array}{ccc}
0 & 0 & -\frac{i}{2}\zeta \\
0 & 0 & 0 \\
\frac{i}{2}\zeta & 0 & 0
\end{array}
\right)
\label{eq:4}
\end{equation}
and
\begin{equation}
\hat{l}^{A(b^\prime)}_{\uparrow\downarrow} = -\left[\hat{l}^{A(b^\prime)}_{\downarrow\uparrow}\right]^*
= \hat{l}^{B(b^\prime)}_{\uparrow\downarrow} = -\left[\hat{l}^{B(b^\prime)}_{\downarrow\uparrow}\right]^*
= \cos\phi \left(\begin{array}{ccc}
0 & \frac{i}{2}\zeta & 0 \\
-\frac{i}{2}\zeta & 0 & \frac{1}{2}\zeta \\
0 & -\frac{1}{2}\zeta & 0
\end{array}
\right).
\label{eq:5}
\end{equation}
Furthermore, when we consider the state with the spin-quantization axis parallel to the $a$-axis, we have
\begin{equation}
\hat{l}^{A(a)}_{\sigma\sigma^\prime} =
-\frac{1}{\sqrt{2}}(1+\tan\phi)\hat{l}^{A(a^\prime)}_{\sigma\sigma^\prime}
+\frac{1}{\sqrt{2}}(1-\tan\phi)\hat{l}^{A(b^\prime)}_{\sigma\sigma^\prime}
\label{eq:6}
\end{equation}
and
\begin{equation}
\hat{l}^{B(a)}_{\sigma\sigma^\prime} =
-\frac{1}{\sqrt{2}}(1-\tan\phi)\hat{l}^{B(a^\prime)}_{\sigma\sigma^\prime}
+\frac{1}{\sqrt{2}}(1+\tan\phi)\hat{l}^{B(b^\prime)}_{\sigma\sigma^\prime},
\label{eq:7}
\end{equation}
using eqs.~(\ref{eq:2})--(\ref{eq:5}).
Here we ignore their $\theta$-dependencies.
Similarly, when we consider the state with the spin-quantization axis parallel to the $b$-axis, we have
\begin{equation}
\hat{l}^{A(b)}_{\sigma\sigma^\prime} =
\frac{1}{\sqrt{2}}(1+\tan\phi)\hat{l}^{A(a^\prime)}_{\sigma\sigma^\prime}
+\frac{1}{\sqrt{2}}(1-\tan\phi)\hat{l}^{A(b^\prime)}_{\sigma\sigma^\prime}
\label{eq:8}
\end{equation}
and
\begin{equation}
\hat{l}^{B(b)}_{\sigma\sigma^\prime} =
\frac{1}{\sqrt{2}}(1-\tan\phi)\hat{l}^{B(a^\prime)}_{\sigma\sigma^\prime}
+\frac{1}{\sqrt{2}}(1+\tan\phi)\hat{l}^{B(b^\prime)}_{\sigma\sigma^\prime}.
\label{eq:9}
\end{equation}
The interacting part $\hat{H}^\prime$ in eq.~(\ref{eq:1}) is represented by
\begin{eqnarray}
\hat{H}^\prime & = & \frac{U}{2N} \sum_{i=1}^4\sum_{{\mib k} {\mib k}^\prime {\mib q}}
\sum_{\varphi}\sum_{\sigma}
\left[A_{i{\mib{k+q}} \sigma}^{\varphi\dagger}A_{i{\mib{k^\prime\!-q}} -\!\sigma}^{\varphi\dagger}
A_{i{\mib{k^\prime}} -\!\sigma}^\varphi A_{i{\mib k} \sigma}^\varphi
+B_{i{\mib{k+q}} \sigma}^{\varphi\dagger} B_{i{\mib{k^\prime\!-q}} -\!\sigma}^{\varphi\dagger}
B_{i{\mib{k^\prime}} -\!\sigma}^\varphi B_{i{\mib k} \sigma}^\varphi \right] \nonumber \\
& & \hspace{-0.1em}+\frac{V}{2N} \sum_{i=1}^4\sum_{{\mib k} {\mib k}^\prime {\mib q}}
\sum_{\varphi}\sum_{\varphi^\prime\neq\varphi}\sum_{\sigma \sigma^\prime}
\left[A_{i{\mib{k+q}} \sigma}^{\varphi\dagger}
A_{i{\mib{k^\prime\!-q}} \sigma^\prime}^{\varphi^\prime\dagger}
A_{i{\mib{k^\prime}} \sigma^\prime}^{\varphi^\prime} A_{i{\mib{k}} \sigma}^\varphi
+B_{i{\mib{k+q}} \sigma}^{\varphi\dagger}
B_{i{\mib{k^\prime\!-q}} \sigma^\prime}^{\varphi^\prime\dagger}
B_{i{\mib{k^\prime}} \sigma^\prime}^{\varphi^\prime} B_{i{\mib{k}} \sigma}^\varphi\right] \nonumber \\
& & \hspace{-0.1em}+\frac{J}{2N} \sum_{i=1}^4\sum_{{\mib k} {\mib k}^\prime {\mib q}}
\sum_{\varphi}\sum_{\varphi^\prime\neq\varphi}\sum_{\sigma \sigma^\prime}
\left[A_{i{\mib{k+q}} \sigma}^{\varphi\dagger}
A_{i{\mib{k^\prime\!-q}}\sigma^\prime}^{\varphi^\prime\dagger}
A_{i{\mib{k^\prime}} \sigma^\prime}^\varphi A_{i{\mib{k}} \sigma}^{\varphi^\prime}
+B_{i{\mib{k+q}} \sigma}^{\varphi\dagger}
B_{i{\mib{k^\prime\!-q}}\sigma^\prime}^{\varphi^\prime\dagger}
B_{i{\mib{k^\prime}} \sigma^\prime}^\varphi B_{i{\mib{k}} \sigma}^{\varphi^\prime}\right],\nonumber \\
\label{eq:11}
\end{eqnarray}
where $N$ is the number of ${\mib k}$-space points in the first Brillouin zone (FBZ).
Here we only consider the on-site interactions, i.e.\ the intra-orbital Coulomb repulsion $U$,
the inter-orbital Coulomb repulsion $V$, and the exchange interaction $J$.
We adopt the UHF approximation with independent fields for the two sublattices, four layers,
three orbitals, and two spin states. Thus, we define
\begin{equation}
\frac{1}{N}\left<A_{i{\mib{k}} \sigma}^{\varphi\dagger}
A_{i{\mib{k^\prime}} \sigma^\prime}^{\varphi^\prime} \right>
\equiv n_{i\varphi \sigma}^A\delta_{{\mib k}{\mib k}^\prime}
\delta_{\varphi \varphi^\prime}
\delta_{\sigma \sigma^\prime}
\end{equation}
and
\begin{equation}
\frac{1}{N}\left<B_{i{\mib{k}} \sigma}^{\varphi\dagger}
B_{i{\mib{k^\prime}} \sigma^\prime}^{\varphi^\prime} \right>
\equiv n_{i\varphi \sigma}^B\delta_{{\mib k}{\mib k}^\prime}
\delta_{\varphi \varphi^\prime}
\delta_{\sigma \sigma^\prime}.
\end{equation}
Then, we can approximate $\hat{H}^\prime$ as follows:
\begin{eqnarray}
\hat{H}^\prime & \approx & \sum_{i=1}^4\sum_{\varphi} \sum_{\sigma}
\left[\left\{Un_{i\varphi -\!\sigma}^A
+\!\sum_{\varphi^\prime \neq \varphi}\left(
V\sum_{\sigma^\prime}n_{i\varphi^\prime \sigma^\prime}^A
-J n_{i\varphi^\prime \sigma}^A \right)\right\}\!\left(\sum_{{\mib k}}
A_{i {\mib k} \sigma}^{\varphi\dagger}
A_{i {\mib k} \sigma}^\varphi-\frac{N}{2}n_{i\varphi \sigma}^A\right)\right. \nonumber \\
& & \hspace{4.4em}+\!\left.\left\{Un_{i\varphi -\!\sigma}^B
+\!\sum_{\varphi^\prime \neq \varphi}\left(
V\sum_{\sigma^\prime}n_{i\varphi^\prime \sigma^\prime}^B
-J n_{i\varphi^\prime \sigma}^B \right)\right\}\!\left(\sum_{{\mib k}}
B_{i {\mib k} \sigma}^{\varphi\dagger}
B_{i {\mib k} \sigma}^\varphi-\frac{N}{2}n_{i\varphi \sigma}^B\right)\right]\nonumber \\
\label{eq:10}
\end{eqnarray}
In order to determine the most stable of the five candidates,
we conduct a self-consistent calculation for each candidate and select the one
with the lowest electronic energy as estimated by eqs.~(\ref{eq:1}) and (\ref{eq:10}).
Hence, we can interpret the most stable spin-quantization axis as the magnetic easy axis; a sketch of such a self-consistency loop is given below.
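The following minimal Python sketch (the function name and interface are ours and purely illustrative) shows the structure of the iteration: diagonalize the mean-field Hamiltonian built from eqs.~(\ref{eq:1}) and (\ref{eq:10}) on the $k$-mesh, refill the lowest levels, and mix the fields until convergence.
\begin{verbatim}
import numpy as np

def scf_loop(build_h, n0, n_target, kpts, mix=0.5, tol=1e-4, max_iter=200):
    """Generic UHF self-consistency loop (illustrative sketch).

    build_h(n, k): Hermitian mean-field matrix at momentum k for the
    current occupation vector n (48 = 2 sublattices x 4 layers x 3
    orbitals x 2 spins entries here); n_target: electrons per unit
    cell (32 in the present model)."""
    n = np.asarray(n0, dtype=float)
    for _ in range(max_iter):
        spectra = [np.linalg.eigh(build_h(n, k)) for k in kpts]
        # fill the n_target * len(kpts) lowest one-particle levels
        all_e = np.sort(np.concatenate([w for w, _ in spectra]))
        mu = all_e[n_target * len(kpts) - 1]       # chemical potential
        n_new = np.zeros_like(n)
        for w, v in spectra:
            occ = w <= mu
            n_new += (np.abs(v[:, occ]) ** 2).sum(axis=1)
        n_new /= len(kpts)
        if np.max(np.abs(n_new - n)) < tol:        # four-digit accuracy
            return n_new
        n = mix * n_new + (1.0 - mix) * n          # linear mixing
    return n
\end{verbatim}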
Moreover, for the electronic state determined to have the most stable spin-quantization axis,
we can calculate five types of magnetic order parameters. Four of these are antiferromagnetic order parameters,
referred to as A$_1$-AFM, A$_2$-AFM, C$_1$-AFM, and C$_2$-AFM, and expressed as
\begin{eqnarray}
& & m({\mathrm{A_1}}) = \left|\sum_{i=1}^2(m_i^A+m_i^B)-\sum_{i=3}^4(m_i^A+m_i^B)\right|, \\
& & m({\mathrm{A_2}}) = \left|\sum_{i\in\{2,3\}}(m_i^A+m_i^B)-\sum_{i\in\{4,1\}}(m_i^A+m_i^B)\right|, \\
& & m({\mathrm{C_1}}) = \left|\sum_{i=1}^4(-1)^i(m_i^A-m_i^B)\right|, \\
& & m({\mathrm{C_2}}) = \left|\sum_{i\in\{2,3\}}(m_i^A-m_i^B)-\sum_{i\in\{4,1\}}(m_i^A-m_i^B)\right|,
\end{eqnarray}
respectively. The fifth type is the ferromagnetic order parameter, expressed as
\begin{equation}
m({\mathrm{FM}}) = \left|\sum_{i=1}^4(m_i^A+m_i^B)\right|.
\end{equation}
Here, we introduce the magnetic momentum on each site:
\begin{equation}
m_i^{A(B)} \equiv \frac{1}{2}\sum_\varphi \left[n_{i\varphi \uparrow}^{A(B)}-n_{i\varphi \downarrow}^{A(B)}\right].
\end{equation}
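As a hypothetical illustration of these definitions, suppose that every site of layers 1 and 2 carries the moment $m_i^{A}=m_i^{B}=m>0$, while every site of layers 3 and 4 carries $-m$. Then
\[
m({\mathrm{A_1}}) = \left|4m-(-4m)\right| = 8m, \qquad
m({\mathrm{A_2}}) = m({\mathrm{C_1}}) = m({\mathrm{C_2}}) = m({\mathrm{FM}}) = 0,
\]
so this pattern is classified as A$_1$-AFM.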
Then we determine the magnetic order, for each parameter set, as the one whose order parameter has the largest magnitude among the five.
\section{Results and Discussion}
In our numerical calculations, we divide the FBZ into a $20 \times 20 \times 20$ equally spaced mesh.
The parameter sets in eqs.~(\ref{eq:12})--(\ref{eq:5}) and (\ref{eq:11}) for these calculations
are selected as shown in Table~\ref{table:1}. The choice of these parameters is based on preceding theoretical works.~\cite{WCLee2010}
\begin{table*}
\begin{tabular}{ccccccccccccl}
\hline
$t_1$ & $t_2$ & $t_3$ & $t_4$ & $t_\perp$ & $t_\perp^\prime$ & $\lambda_0$ & $\Delta$ & $\zeta$ & $U$ & $V$ & $J$ & \\
\hline
$0.40$ & $0.08$ & $0.40$ & $0.04$ & $0.24$ & $0.04$ & $0.04$ & $0.00$ & $0.16$ & $0.80$ & $0.40$ & $0.20$ & Fig.~\ref{figure:2} \\
$0.40$ & $0.08$ & $0.40$ & $0.04$ & $0.24$ & $0.04$ & $0.04$ & $0.16$ & $0.16$ & $0.80$ & $0.40$ & $0.20$ & Fig.~\ref{figure:4} \\
$0.40$ & $0.08$ & $0.40$ & $0.04$ & $0.24$ & $0.04$ & $0.04$ & $0.00$ & $0.16$ & $1.00$ & $0.50$ & $0.25$ & Fig.~\ref{figure:3} \\
$0.40$ & $0.08$ & $0.40$ & $0.04$ & $0.24$ & $0.04$ & $0.04$ & $0.16$ & $0.16$ & $1.00$ & $0.50$ & $0.25$ & Fig.~\ref{figure:5} \\
\hline
\end{tabular}
\caption{\label{table:1}The parameter sets for the calculations. All parameters are given in eV.}
\end{table*}
The ratios $V/U$ and $J/U$ in each set are fixed at $0.5$ and $0.25$, respectively.
For each set in Table~\ref{table:1}, both the rotation angle $\phi$ and the tilting angle $\theta$ are varied from $0^\circ$ to $20^\circ$ in steps of $5^\circ$.
All resulting self-consistent fields $n_{i\varphi \sigma}^A$ and $n_{i\varphi \sigma}^B$
are converged to four digits of accuracy, and they satisfy
\begin{equation}
\sum_{i=1}^4 \sum_\varphi \sum_\sigma(n_{i\varphi \sigma}^A+n_{i\varphi \sigma}^B)=32,
\end{equation}
which means that there are four electrons per Ru site.
We summarize our numerical results, as indicated by the last column of Table~\ref{table:1}, in Figs.~\ref{figure:2}-\ref{figure:5}.
The figures show the magnetic easy axis, magnetic phase, and densities of states (DOS) determined for each pair
$(\phi,\theta)$.
\begin{figure}
\includegraphics[width=15.6cm]{65055Fig2.eps}
\caption{\label{figure:2}(Color online) The magnetic easy axis, magnetic phase, and density of states (DOS) determined when $\Delta=0.00\,{\mathrm{eV}}$ and $U=0.8\,{\mathrm{eV}}$. The unit of each $(\phi,\theta)$ is provided in $^\circ$(degrees). Positive DOS is for spin up and negative DOS is for spin down.}
\end{figure}
\begin{figure}
\includegraphics[width=15.6cm]{65055Fig3.eps}
\caption{\label{figure:4}(Color online) The magnetic easy axis, magnetic phase, and DOS determined when $\Delta=0.16\,{\mathrm{eV}}$ and $U=0.8\,{\mathrm{eV}}$.}
\end{figure}
\begin{figure}
\includegraphics[width=15.6cm]{65055Fig4.eps}
\caption{\label{figure:3}(Color online) The magnetic easy axis, magnetic phase, and DOS determined when $\Delta=0.00\,{\mathrm{eV}}$ and $U=1.0\,{\mathrm{eV}}$.}
\end{figure}
\begin{figure}
\includegraphics[width=15.6cm]{65055Fig5.eps}
\caption{\label{figure:5}(Color online) The magnetic easy axis, magnetic phase, and DOS determined when $\Delta=0.16\,{\mathrm{eV}}$ and $U=1.0\,{\mathrm{eV}}$.}
\end{figure}
Here, the magnetic phase is defined as paramagnetic (PM) when none of the order parameters
reaches a finite value, and as $\mathrm{X}$ when
several order parameters simultaneously attain the maximum value among the five.
We can easily recognize that various magnetic phases appear and
that the transitions between them are driven by fractional changes of the lattice distortion.
The magnetic easy axis can vary even in the same magnetic phase. The variations in the magnetic phase and
easy axis are caused by the transfers and SOI, which both depend on the lattice distortion.
While the dependence of the SOI on the lattice distortion plays the primary role in determining the magnetic easy axis,
the dependence of the transfers is chiefly responsible for determining the magnetic phase.
The energy level difference between $d_{xy}$ and $d_{yz}(d_{zx})$, i.e. $\Delta$,
also affects the electronic states
since it lowers the bands derived from the $d_{xy}$ orbital relative to the others and enhances the full bandwidth $W$.
When we compare Fig.~\ref{figure:2} with Fig.~\ref{figure:4}, or
Fig.~\ref{figure:3} with Fig.~\ref{figure:5}, we find that a positive
$\Delta$ makes the PM phase stable for a wider range of lattice distortion parameters.
This can be derived from the decrease in $U/W$ in conjunction with the change in $\Delta$.
In Ca$_3$Ru$_2$O$_7$, the lattice constants abruptly change at $T_M=48$K,
where the first-order transition occurs~\cite{Cao2003_1,Ohmichi2004}
due to Jahn-Teller distortions of the RuO$_6$ octahedra.~\cite{Cao2003_1}
Thus, we can naturally assign our results for $\Delta > 0$ (Figs.~\ref{figure:4} and \ref{figure:5})
to the quasi-two-dimensional metallic state of Ca$_3$Ru$_2$O$_7$ for $T<T_M$.~\cite{YYoshida2004,JSLee2004}
In our model, all bands can be categorized into two types:
bands derived from the $d_{xy}$ orbital or bands derived from the $d_{yz}(d_{zx})$ orbital.
When we respectively define their bandwidths as $W_{xy}$ and $W_{yz,zx}$,
their dependence on $\phi$ and $\theta$ is $W_{xy} \propto (\cos 2\phi \cos \theta)^2$ and
$W_{yz,zx} \propto (\cos \phi \cos 2\theta)^2$, by eqs.~(\ref{eq:14})--(\ref{eq:18}).
When either the rotation ($\phi$) or the tilting ($\theta$) increases, both $W_{xy}$ and $W_{yz,zx}$ decrease and $U/W$ increases.
Thus, the lattice distortion favors the AFM phase over the PM phase due to the larger $U/W$,
as shown in Figs.~\ref{figure:2} and \ref{figure:4}.
Moreover, each bandwidth dependence creates differences in the lattice distortion effects
on the electronic state between $\phi$ and $\theta$. The results for
$(\phi,\theta)=(20,0)$ and $(\phi,\theta)=(0,20)$ in Figs.~\ref{figure:2} and \ref{figure:4}
provide clear evidence of these differences.
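Numerically, the asymmetry is substantial: for $(\phi,\theta)=(20^\circ,0^\circ)$ the renormalization factors are
\[
(\cos 40^\circ)^2 \approx 0.59 \ \text{for } W_{xy}, \qquad
(\cos 20^\circ)^2 \approx 0.88 \ \text{for } W_{yz,zx},
\]
while for $(\phi,\theta)=(0^\circ,20^\circ)$ the two factors are interchanged; the rotation thus narrows the $d_{xy}$-derived bands more strongly, and the tilting the $d_{yz}(d_{zx})$-derived ones.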
\begin{figure}
\includegraphics[width=11.8cm]{65055Fig6.eps}
\caption{\label{figure:6}(Color online) Antiferromagnetic structures obtained in our calculations. The black(red) solid circles represent the Ru site on the A(B) sublattice.}
\end{figure}
\begin{figure}
\includegraphics[width=15.6cm]{65055Fig7.eps}
\caption{\label{figure:7}(Color online) Antiferromagnetic structures obtained in our calculations. The black(red) solid circles represent the Ru site on the A(B) sublattice.}
\end{figure}
Figs.~\ref{figure:6} and \ref{figure:7} show the obtained antiferromagnetic structures.
In these structures, A$_1$-AFM along the $b$-axis is consistent with the structure of Ca$_3$Ru$_2$O$_7$
observed by a neutron diffraction analysis.~\cite{YYoshida2005} The A$_1$-AFM phase appears
as shown in Figs.~\ref{figure:2}--\ref{figure:4}. This phase was proved to be
the most stable state by Singh and Auluck in the LSDA for Ca$_3$Ru$_2$O$_7$, where
it was denoted AF1.~\cite{Singh2006}
The FM phase sometimes appears in the neighborhood of the A$_1$-AFM phase as shown in Figs.~\ref{figure:2} and \ref{figure:3}.
In the A$_1$-AFM phase, the magnetic moments align ferromagnetically within the double layer, and the ferromagnetic
correlation within this double layer is stronger than the antiferromagnetic correlation; this
has been confirmed by an inelastic neutron scattering study for the spin-wave excitation
in Ca$_3$Ru$_2$O$_7$.~\cite{Ke2011}
In the intermediate regime, where the AFM correlation between the double layers is not fully
developed, the electrons show FM order as a whole, although their magnetic moments are small.
Thus, FM can be derived from the perturbative change of lattice distortion in our model,
and this supports the emergence of FM induced by uniaxial pressures along the $c$-axis
in Sr$_3$Ru$_2$O$_7$.~\cite{SIIkeda2001,SIIkeda2004}
Let us note that the $\mathrm{X}$ magnetic phase appears in Figs.~\ref{figure:3} and \ref{figure:5}.
Several spin configurations have nearly equivalent energies in this phase, so that
the energy landscape of this phase has a multi-valley structure near its ground state.
Thus, we can identify the $\mathrm{X}$ magnetic phase
as a cluster spin-glass phase in Sr$_{3-x}$Ca$_x$Ru$_2$O$_7$
for $0.24 \lesssim x \lesssim 1.2$.~\cite{Iwata2008,Qu2008,Qu2009}
The variation of $x$ in Sr$_{3-x}$Ca$_x$Ru$_2$O$_7$ is accompanied by the change in
lattice distortion,~\cite{Iwata2008,Peng2010} which must be one reason why the electronic state changes
with $x$ even though the carrier density remains the same.
\section{Conclusion}
In this paper, we examined the lattice distortion effects on Sr$_{3-x}$Ca$_x$Ru$_2$O$_7$
using the double-layered 3D multiband Hubbard model with SOI
within the unrestricted Hartree-Fock approximation.
For some types of lattice distortion, we obtained the A$_1$-AFM phase along the $b$-axis,
consistent with the neutron scattering result for Ca$_3$Ru$_2$O$_7$. Our results also indicate
a possible ferromagnetic transition which is susceptible to the change in lattice distortion and the
existence of a cluster spin-glass phase for intermediate $x$. The electronic states obtained
in all these cases are metallic. This suggests that a number of zero-field physical phenomena can be explained
without invoking a metal-insulator transition. A recent experiment reported that the field-induced
metamagnetic transition drives a small lattice distortion in Sr$_3$Ru$_2$O$_7$.~\cite{Stingl2011}
To elucidate such a relation between lattice distortion and quantum critical phenomena,
the study of the lattice distortion effects on Sr$_{3-x}$Ca$_x$Ru$_2$O$_7$ in a finite magnetic field is needed in the future.
\section*{Acknowledgments}
The authors are grateful to Drs. Y. Yoshida, S.-I. Ikeda, I. Hase, N. Shirakawa, K. Iwata, and S. Kouno
for their fruitful discussions. The early stage of our computational study has been achieved
with the use of Intel Xeon servers at NeRI in AIST.
\section{Introduction}
A quasi-Hermitian variety in $\PG(r,q^2)$ is a set of points which
has the same size and the same intersection numbers with hyperplanes
as a Hermitian variety and hence it is a two-character set, see \cite{CK,De} for an overview of their applications.
For $r=2$, a quasi-Hermitian variety is
called a \emph{unital}. In \cite{ACK}, new quasi-Hermitian varieties $\cM_{\alpha,\beta}$
in $\PG(r,q^2)$ depending on a pair of parameters $\alpha,\beta$
from the underlying field $\GF(q^2)$, were constructed.
For $r=2$ these varieties
are Buekenhout-Metz (BM) unitals, see \cite{GE}. As such, and for the convenience of
the reader, we shall call them \emph{BM-quasi-Hermitian varieties} of parameters $\alpha$ and $\beta$.
The BM-quasi-Hermitian varieties $\cM_{\alpha,\beta}$ of $\PG(3,q^2)$ we study
in the present paper, were constructed in the following way.
Fix a projective frame in $\PG(3,q^2)$ with homogeneous coordinates
$(J,X,Y,Z)$, and consider the affine space $\AG(3,q^2)$
with infinite hyperplane $\Sigma_{\infty}$ of $J=0$.
Then, the affine coordinates for points of $\AG(3,q^2)$ are denoted by $(x,y,z)$,
where $x=X/J$, $y=Y/J$ and $z=Z/J$.
Set \begin{equation}
\label{heinf}
\cF=\{(0,X,Y,Z)|X^{q+1}+Y^{q+1}=0\};
\end{equation}
this can be viewed as a Hermitian cone of $\Sigma_{\infty}\cong\PG(2,q^2)$ projecting a Hermitian variety of $\PG(1,q^2)$.
Take now $\alpha \in\GF(q^2)^*$ and $\beta\in\GF(q^2)\setminus\GF(q)$
and consider the algebraic variety $\cB_{\alpha,\beta}$
of affine equation
\begin{equation}\label{eqqh}
\cB_{\alpha,\beta}: \ z^q-z+\alpha^q(x^{2q}+y^{2q})-\alpha(x^2+y^2)=(\beta^q-\beta)(x^{q+1}+y^{q+1}).
\end{equation}
It is shown in~\cite{ACK} that the point set \[
\cM_{\alpha,\beta}:= (\cB_{\alpha,\beta}\setminus \Sigma_{\infty})\cup \cF\] that is, the union of the affine
points of $\cB_{\alpha,\beta}$ and $\cF$, is
a quasi-Hermitian variety of $\PG(3,q^2)$
for any $q>2$ even or
for $q$ odd and $4\alpha^{q+1}+(\beta^q-\beta)^2 \neq 0$.
This is the variety we shall consider in the present paper in the case in which $q$ is odd.
We observe that \eqref{eqqh} is not the equation of $\cM_{\alpha,\beta}$.
However, any set of points in a finite projective space can be
endowed with the structure of an algebraic variety, so we shall
speak of the \emph{variety} $\cM_{\alpha,\beta}$ even though we do
not provide an equation for it.
The rest of the paper is organized into three sections.
In Section~\ref{sec:2} we determine the number of lines of $\PG(3,q^2)$, $q$ odd, through a point of $\cM_{\alpha,\beta}$ which are entirely contained in $\cM_{\alpha,\beta}$.
By using this result in Section~\ref{sec:3}, we prove that the point-collinearity graph of $\cM_{\alpha,\beta}$ is connected for $q\equiv 1\pmod 4$ (which is the only interesting case, as for $q\equiv 3\pmod 4$
the set $\cM_{\alpha,\beta}$ contains only the lines of $\cF$).
Finally, in Section~\ref{sec:4}, we prove our main result:
\begin{theorem}
\label{main-th}
Let $q=p^n$ with $p$ an odd prime. Then
the number $N$ of projectively inequivalent quasi-Hermitian varieties $\cM_{\alpha,\beta}$ of $\PG(3,q^2)$ is
\[N=\frac{1}{n}\sum_{k|n}\Phi\left(\frac{n}{k}\right)p^k-2,\]
where $\Phi$ is the Euler $\Phi$-function.
\end{theorem}
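For orientation, $N$ is easily evaluated from the above formula; the following Python fragment (ours, purely illustrative) computes it, giving, e.g., $N=3$ for $q=5$ and $N=4$ for $q=9$.
\begin{verbatim}
from math import gcd

def phi(m):                              # Euler Phi-function
    return sum(1 for k in range(1, m + 1) if gcd(k, m) == 1)

def num_classes(p, n):
    """N = (1/n) * sum_{k | n} Phi(n/k) * p^k  -  2."""
    total = sum(phi(n // k) * p**k
                for k in range(1, n + 1) if n % k == 0)
    return total // n - 2                # the sum is divisible by n

print(num_classes(5, 1))                 # q = 5  ->  3
print(num_classes(3, 2))                 # q = 9  ->  4
\end{verbatim}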
We conjecture that similar results hold for all odd $r\geq 2$, but,
due to the technical difficulties involved, we leave
this extension of the result
to a future work.
\section{Combinatorial properties of $\cM_{\alpha,\beta}$}
\label{sec:2}
We are going to determine, for $q$ odd, the number of lines of $\PG(3,q^2)$ through each point of $\cM_{\alpha,\beta}$ that are entirely contained in $\cM_{\alpha,\beta}$.
We recall the following (see also \cite[Corollary 1.24]{HJ}).
\begin{lemma}\label{tec}
Let $q$ be an odd prime power. The equation
\begin{equation}
\label{x0} X^q+aX+b=0
\end{equation}
admits exactly one solution in $\GF(q^2)$ if and only if
$a^{q+1}\neq1$.
\end{lemma}
\begin{proof}
We have $X^q=-aX-b$, whence
\begin{equation}
\label{x1}
X=-a^qX^q-b^q
\end{equation}
Substituting \eqref{x1} into~\eqref{x0} we obtain
\begin{equation}
X^q-a(a^qX^q+b^q)+b=0,
\end{equation}
whence
\begin{equation}
\label{x3}
X^q(1-a^{q+1})=ab^q-b,
\end{equation}
which determines $X^q$, and hence $X$, uniquely whenever $a^{q+1}\neq 1$, since the Frobenius map $x\mapsto x^q$ is a bijection of $\GF(q^2)$; one checks directly that the resulting $X$ satisfies \eqref{x0}. Conversely, if $a^{q+1}=1$, then the $\GF(q)$-linear map $X\mapsto X^q+aX$ has a kernel of size $q$, so \eqref{x0} admits either $q$ solutions or none.
\end{proof}
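The lemma can also be checked by brute force for small $q$; the following Python fragment (ours, purely illustrative) verifies it for $q=3$, modelling $\GF(9)$ as $\GF(3)[i]$ with $i^2=-1$.
\begin{verbatim}
q = 3                                    # GF(9) = GF(3)[i], i^2 = -1
F = [(u, v) for u in range(q) for v in range(q)]   # u + v*i

def add(x, y): return ((x[0] + y[0]) % q, (x[1] + y[1]) % q)
def mul(x, y): return ((x[0]*y[0] - x[1]*y[1]) % q,
                       (x[0]*y[1] + x[1]*y[0]) % q)
def pw(x, e):
    r = (1, 0)
    for _ in range(e):
        r = mul(r, x)
    return r

for a in F:
    for b in F:
        sols = [x for x in F
                if add(add(pw(x, q), mul(a, x)), b) == (0, 0)]
        # exactly one solution  <=>  a^(q+1) != 1
        assert (len(sols) == 1) == (pw(a, q + 1) != (1, 0))
print("Lemma verified for q = 3")
\end{verbatim}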
\begin{lemma}
\label{lm32}
If $q\equiv 1\pmod 4$ then through each affine point of $\cB_{\alpha,\beta}$ there pass two lines of $\cB_{\alpha,\beta}$ whereas through a point at infinity of $(\cB_{\alpha,\beta}\cap \Sigma_{\infty}) \setminus P_{\infty}$ there pass $q+1$ lines of a pencil which are entirely contained in $\cB_{\alpha,\beta}$.
If $q\equiv 3\pmod 4$ then no line of $\cB_{\alpha,\beta}$ passes through any affine point of $\cB_{\alpha,\beta}$, whereas through a point at infinity of $(\cB_{\alpha,\beta}\cap \Sigma_{\infty}) \setminus P_{\infty}$ there passes only one line contained in $\cB_{\alpha,\beta}$.
There are exactly two lines of $\cB_{\alpha,\beta}$ through $P_{\infty}$ for all odd $q$.
\end{lemma}
\begin{proof} Let $\ell$ be a line of $\PG(3,q^2)$ passing through an affine point of $\cB_{\alpha,\beta}$.
From~\cite{ACK}, it can be directly
seen that the collineation group
of $\cB_{\alpha,\beta}$ acts transitively on its affine points.
Thus, we can assume that $\ell$ passes through the origin of the fixed frame. We study the following system
\begin{small}
\begin{equation}
\left\{\begin{array}{l}\label{sis1}
z^q-z+\alpha^q(x^{2q}+y^{2q})-\alpha(x^2+y^2)=(\beta^q-\beta)(x^{q+1}+y^{q+1})\\
x=m_1t\\
y=m_2t\\
z=m_3t
\end{array}\right.
\end{equation}
\end{small}
As proved in \cite[Theorem 4.3]{ACG}, the line $\ell$ can be contained in $\cB_{\alpha,\beta}$ only if $m_3=0$. Thus, assume $m_3=0$ and substitute the parametric
values of $(x,y,z)$ in the first equation of~\eqref{sis1}.
We obtain that
\begin{equation}
\label{eq4}
(t^{2}\alpha (m_1^{2}+m_2^2))^q-t^2\alpha(m_1^2+m_2^2)=
t^{q+1}(\beta^q-\beta)(m_1^{q+1}+m_2^{q+1})
\end{equation}
must hold for all $t\in\GF(q^2)$. Considering separately the cases $t\in\GF(q)$
and $t=\lambda$ with $\lambda\in\GF(q^2)\setminus\GF(q)$ we obtain the following system
\[
\begin{cases} (\alpha^q(m_1^2+m_2^2)^q-\alpha(m_1^2+m_2^2))=(\beta^q-\beta)(m_1^{q+1}+m_2^{q+1})
\\
\lambda^{2q}\alpha^q(m_1^2+m_2^2)^q-\lambda^2\alpha(m_1^2+m_2^2)=
\lambda^{q+1}(\beta^q-\beta)(m_1^{q+1}+m_2^{q+1}),\quad \forall\lambda\in\GF(q^2)\setminus\GF(q)
\end{cases}
\]
so, substituting the first equation into the second, we get
\[ \forall\lambda\in\GF(q^2)\setminus\GF(q): \lambda^{2q}\alpha^q(m_1^2+m_2^2)^q(1-\lambda^{1-q})=
\lambda^{2}\alpha(m_1^2+m_2^2)(1-\lambda^{q-1}).
\]
Observe that $(1-\lambda^{1-q})=\frac{\lambda^{q-1}-1}{\lambda^{q-1}}$. Suppose $m_1^2+m_2^2\neq 0$. Then,
\[ \lambda^{2q-2}\alpha^{q-1}(m_1^2+m_2^2)^{q-1}=-\lambda^{q-1}, \]
whence $(\lambda\alpha(m_1^2+m_2^2))^{q-1}=-1$ for all $\lambda\in\GF(q^2)\setminus\GF(q)$.
This is clearly not possible, as the equation $X^{q-1}=-1$ cannot have more than $q-1$ solutions.
So $m_1^2+m_2^2=0$. It follows that $m_2=\pm\nu m_1$, where
$\nu^2=-1$. Conversely, if $m_2=\pm\nu m_1$ and
$q\equiv 1\pmod4$, then $m_1^{q+1}+m_2^{q+1}=m_1^{q+1}(1+\nu^{q+1})=0$,
so~\eqref{eq4} is satisfied and the line $\ell: y-\nu x =z=0$ or $\ell: y+\nu x =z=0$ is contained in $\cB_{\alpha,\beta}$.
By contrast, if $q\equiv3\pmod 4$, then $m_1^{q+1}+m_2^{q+1}=2m_1^{q+1}\neq 0$; so~\eqref{eq4} is not satisfied
and there is no such line contained in $\cB_{\alpha,\beta}$.
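The case distinction rests on the value of $\nu^{q+1}$: since $\nu^2=-1$,
\[
\nu^{q+1}=(\nu^2)^{\frac{q+1}{2}}=(-1)^{\frac{q+1}{2}}=
\begin{cases}
-1 & \text{if } q\equiv 1\pmod 4,\\
\phantom{-}1 & \text{if } q\equiv 3\pmod 4.
\end{cases}
\]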
Now, let us consider $P(0,a,b,c)$ in $(\cB_{\alpha,\beta}\cap \Sigma_{\infty}) \setminus P_{\infty}$; hence $a^2+b^2=0$ and $a, b \neq 0$. Let $r$ be a line through
$P$. We may assume that $r$ has parametric equations
\[\left\{\begin{array}{l}
x=l+at\\
y=m+bt\\
z=n+ct
\end{array}\right.\]
where $t$ ranges over $\GF(q^2)$. Assume also that $(l,m,n)$ are the affine coordinates of a point in $\cB_{\alpha,\beta}$ that is,
\begin{equation}\label{add1}
n^q-n+\alpha^q(m^{2q}+l^{2q})-\alpha(m^2+l^2)=(\beta^q-\beta)(l^{q+1}+m^{q+1})
\end{equation}
Now, $r$ is contained in $\cB_{\alpha,\beta}$ if and only if $q\equiv 1\pmod4$ and the following condition holds:
\begin{equation}\label{fond}
c+2\alpha (al+bm)+(\beta-\beta^q)(al^q+bm^q)=0
\end{equation}
Since $b=\nu a$ where $\nu^2=-1$ and $\nu \in \GF(q)$, setting $k=l+\nu m$ equation \eqref{fond} becomes
\begin{equation}\label{fond1}
c+2a\alpha k+a(\beta-\beta^q)k^q=0.
\end{equation}
From Lemma \ref{tec}, the above equation has exactly one solution if and only if
\begin{equation}\label{int} (2\alpha)^{q+1}\neq(\beta-\beta^q)^{q+1}. \end{equation}
Considering that $2\in\GF(q)$ and $(\beta-\beta^q)^q=(\beta^q-\beta)$
we obtain that \eqref{int} is equivalent to
\[ 4\alpha^{q+1}+(\beta^q-\beta)^2\neq 0 \]
which holds true.
Let $\bar{k}$ be the unique solution of \eqref{fond1}. Since $\bar{k}=l+\nu m$, we find $q^2$ pairs $(l,m)$ satisfying \eqref{fond} and, because of \eqref{add1}, $q$ possible values of $n$ for each of these pairs $(l,m)$. This gives $q^2\cdot q=q^3$ affine points; since each affine line carries $q^2$ affine points, the number of affine lines through the point $P$ contained in $\cB_{\alpha,\beta}$ is $q^3/q^2=q$.
Furthermore, these $q$ lines, together with the line at infinity of the plane of equation
$x+\nu y=\bar{k}$, form the pencil of $q+1$ lines through $P$ lying on that plane, and the lemma follows.
\end{proof}
Now, we observe that
\begin{itemize}
\item $\cB_{\infty}:=\cB_{\alpha,\beta}\cap \Sigma_{\infty}$ is the union of two lines $\ell_1: X-\nu Y=0=J$ and $\ell_2: X+\nu Y=0=J$,
with $\nu \in \GF(q^2)$ such that $\nu^2+1=0$;
\item if $q\equiv 1$ mod 4 then $ \cB_{\infty} \subseteq \cF$.
\end{itemize}
Hence, from Lemma \ref{lm32} we get
\begin{theorem}
\label{th32}
If $q\equiv 1\pmod 4$ then through each affine point of $\cM_{\alpha,\beta}$ there pass two lines of $\cM_{\alpha,\beta}$ whereas through a point at infinity of $\cM_{\alpha,\beta}$ on the union of the two lines $\ell_1 \cup \ell_2$ there pass $q+1$ lines of a pencil contained in $\cM_{\alpha,\beta}$; finally through a point at infinity of $\cM_{\alpha,\beta}$ which is not on $\ell_1 \cup \ell_2$ there passes only one line.
If $q\equiv 3\pmod 4$ then no line of $\cM_{\alpha,\beta}$ passes through any affine point of $\cM_{\alpha,\beta}$ whereas through a point at infinity of $(\cM_{\alpha,\beta}\cap \Sigma_{\infty}) \setminus P_{\infty}$ there passes only one line contained in $\cM_{\alpha,\beta}$.
Through the point $P_{\infty}$ there are always $q+1$ lines contained in
$\cM_{\alpha,\beta}$.
\end{theorem}
\section{Connected graphs from $\cM_{\alpha,\beta}$ in $\PG(3,q^2)$, $q\equiv 1\pmod 4$ }
\label{sec:3}
Let $\cV$ be an algebraic variety in $\PG(n-1,q^2)$ and suppose that
$\cV$ contains some projective lines.
Then we can define the \emph{point-collinearity graph} of $\cV$, say
$\Gamma(\cV)=(\cP,\cE)$ as the graph whose vertices $\cP$
are the points of $\cV$ and such that two points $P$ and $Q$ are collinear
in $\Gamma(\cV)$ if and only if the line $\langle P, Q\rangle$ is
contained in $\cV$.
The properties of the graph $\Gamma(\cV)$ provide insight into
the geometry of $\cV$; in particular, any automorphism of $\cV$
is also an automorphism of $\Gamma(\cV)$, but the converse
is not true in general. For more detail on the point-collinearity
graphs in the case of polar spaces, see~\cite{S11}.
We begin with the following lemma.
\begin{lemma}\label{diam}
Let $\cV$ be an algebraic variety containing some lines and let $\cV_{\infty}=\cV\cap \Sigma_{\infty}$. If the point-collinearity graph $\Gamma(\cV_{\infty})$ is connected and through each point of $\cV$ there passes at least one line contained in $\cV$, then the point-collinearity graph $\Gamma(\cV)$ is connected and its diameter satisfies $d(\Gamma({\cV}))\leq d(\Gamma({\cV_{\infty}}))+2$.
\end{lemma}
\begin{proof}
Each line contained in $\cV$ has at least one point at infinity, which necessarily lies in $\cV_{\infty}$. Hence, given two points $P$ and $Q$ of $\cV$, there is an edge from $P$ to some point at infinity $P'$ and an edge from $Q$ to some point at infinity $Q'$, and $P'$ and $Q'$ are joined by a path entirely contained
in $\cV_{\infty}$, of length at most $d(\Gamma(\cV_{\infty}))$.
\end{proof}
Consider the variety $\cB_{\alpha,\beta}$ in $\PG(3,q^2)$, $q$ odd, with equation \eqref{eqqh} and such that $4\alpha^{q+1}+(\beta^q-\beta)^2\neq 0$.
\begin{lemma}
\label{rlem}
Let $\cB_{\infty}$ be the intersection of the variety $\cB_{\alpha,\beta}$ with the
hyperplane at infinity $\Sigma_{\infty}:J=0$ of $\PG(3,q^2)$, $q \equiv 1$ mod $4$.
For any point $P\in\cB_{\alpha,\beta}\setminus
\cB_{\infty}$ there are two lines $r_1(P)$ and $r_2(P)$ through $P$ contained in
$\cB_{\alpha,\beta}$ such that
$r_i(P)\cap(\ell_i\setminus\{P_{\infty}\})\neq\emptyset$.
\end{lemma}
\begin{proof}
As we already know, $\cB_{\infty}$ is the union of two lines $\ell_1$ and $\ell_2$, through the point $P_{\infty}$.
By considering~\eqref{eq4}, we see that the points at infinity of
the two lines through the origin lie one on $\ell_1$ and one
on $\ell_2$. As the automorphism group of $\cB_{\alpha,\beta}$ is transitive
on the affine points (see \cite{ACK}), maps lines into lines, and fixes
$P_{\infty}$, it follows that
for each affine point $P$ we have that one of the lines is
incident with $\ell_1$ and the other with $\ell_2$.
\end{proof}
Let $\cM_{\alpha,\beta}$ be the BM-quasi-Hermitian variety $(\cB_{\alpha,\beta}\setminus \Sigma_{\infty})\cup \cF$ of $\PG(3,q^2)$.
\begin{theorem}
If $q\equiv 1$ mod $4$, then the graph $\Gamma(\cM_{\alpha,\beta})$ is connected and its diameter is $3$.
\end{theorem}
\begin{proof}
Under our assumptions $\cB_{\alpha,\beta}\subseteq\cM_{\alpha,\beta}$,
and $\cB_{\alpha,\beta}\setminus\cB_{\infty}=\cM_{\alpha,\beta}\setminus \Sigma_{\infty}$.
$\cB_{\infty}$
splits into the
union of the two distinct lines $\ell_1,\ell_2$ through $P_{\infty}$.
In particular, $\Gamma(\cB_{\infty})$ is a connected graph of diameter
$2$.
Take now two points $P,Q\in\cB_{\alpha,\beta}$.
If $P,Q\in \cB_{\infty}$, then we have $d(P,Q)\leq 2$ and there is nothing
to prove.
Suppose now $P\in \cB_{\alpha,\beta}\setminus\cB_{\infty}$ and $Q\in\cB_{\infty}$.
Suppose $Q\in\ell_i$. Then, from Lemma \ref{rlem} we can consider a point
$P'=r_i(P)\cap\ell_i$ where $r_i(P)$ is one of the two lines through $P$ which is contained in $\cB_{\alpha,\beta}$. If $P'=Q$, then $d(P,Q)=1$; otherwise
$d(P,Q)=2$.
Take now $P,Q\in\cB_{\alpha,\beta}\setminus\cB_{\infty}$. Then, again from Lemma \ref{rlem}, the lines
$r_1(P)$ and $r_1(Q)$ meet $\ell_1$. Put $P'=r_1(P)\cap\ell_1$
and $Q'=r_1(Q)\cap\ell_1$. If $P'=Q'$, then $d(P,Q)\leq 2$;
otherwise $d(P,Q)\leq 3$.
We now show that there are pairs of points in $\cM_{\alpha,\beta}$ which are at distance $3$.
Take $P\in\cB_{\alpha,\beta}\setminus\cB_{\infty}$ and $Q\in\cM_{\alpha,\beta}\setminus\cB_{\alpha,\beta}$.
Then, $Q$ is collinear only with the points of the unique line of $\cM_{\alpha,\beta}$ through it, which lies in $\Sigma_{\infty}$; in particular, $Q$ is not collinear with any affine point, nor
with $P_i:=\ell_i\cap r_i(P)$, $i=1,2$.
So, the shortest paths from $P$ to $Q$ are of the form $P~P_i~P_{\infty}~Q$.
It follows that $d(P,Q)=3$ and thus the diameter of the graph is $3$.
\end{proof}
\section{Main result}
\label{sec:4}
In this section
we show that the arguments of~\cite{BE92} for classifying BM-unitals in
$\PG(2,q^2)$ can be extended to BM-quasi-Hermitian varieties in
$\PG(3,q^2)$, $q$ odd. We keep all previous notations.
\begin{lemma}
\label{l51}
Let $\psi$ be a collineation of $\PG(3,q^2)$, $q$ odd, such that
$\psi(\cM_{\alpha,\beta})=\cM_{\alpha',\beta'}$, where $\cM_{\alpha,\beta}$ and $\cM_{\alpha',\beta'}$ are two BM-quasi-Hermitian varieties.
Then $\psi$ fixes $P_{\infty}$ and stabilizes $\Sigma_{\infty}$.
Also, if $q\equiv1\pmod4$ then $\psi(\cB_{\alpha,\beta})=\cB_{\alpha',\beta'}$.
\end{lemma}
\begin{proof}
First, we show that $\psi$ fixes $P_{\infty}$. If $q\equiv 3\pmod 4$, then from Theorem \ref{th32} we have that $P_{\infty}$ is the only point of the two varieties contained in $q+1$ lines, and hence $\psi(P_{\infty})= P_{\infty}$. In the case in which $q\equiv 1\pmod 4$, again from Theorem \ref{th32}, through each point of $\ell_1 \cup \ell_2$ there pass $q+1$ lines of the quasi-Hermitian varieties; however,
$P_{\infty}$ is the only point of $\ell_1\cup \ell_2$ for which $q-1$ of the lines through it (namely the generators of $\cF$ other than $\ell_1$ and $\ell_2$) meet no other line of the two varieties outside $P_{\infty}$, hence we again obtain
$\psi(P_{\infty})= P_{\infty}$. Furthermore, $\Sigma_{\infty}$ is the plane through $P_{\infty}$ meeting both $\cM_{\alpha,\beta}$ and $\cM_{\alpha',\beta'}$ in $q^3+q^2+1$ points, which lie on the $q+1$ lines through $P_\infty$.
All of the points of $\cM_{\alpha,\beta}$ and $\cM_{\alpha',\beta'}$
lying on exactly one line contained in the respective variety (there are $q^3-q^2$ of them when $q\equiv 1\pmod 4$) are
in this plane, and these points also span $\Sigma_{\infty}$.
So $\Sigma_{\infty}$ is also left invariant by $\psi$.
Now assume $q\equiv 1 \pmod 4$. In this case $\cB_{\alpha,\beta}\subseteq\cM_{\alpha,\beta}$.
Since $\psi(\Sigma_{\infty})=\Sigma_{\infty}$,
we have
\[ \psi(\cB_{\alpha,\beta}\setminus\Sigma_{\infty})=\psi(\cM_{\alpha,\beta}\setminus\Sigma_{\infty})=\cM_{\alpha',\beta'}\setminus\Sigma_{\infty}=\cB_{\alpha',\beta'}
\setminus\Sigma_{\infty},\]
that is $\psi$ stabilizes the affine part of $\cB_{\alpha,\beta}$.
Furthermore $\cB_{\infty}=\cB_{\alpha,\beta}\cap\Sigma_{\infty}$
consists of the union of the two lines, say $\ell_1$ and $\ell_2$.
Observe also that the lines through the affine points of
$\cM_{\alpha,\beta}$ are also lines of $\cB_{\alpha,\beta}$ (see
Theorem~\ref{th32}) and, in particular they are incident either
$\ell_1$ or $\ell_2$. This is to say that the points of
$\ell_1\cup\ell_2$ different from $P_{\infty}$
are exactly the points of $\Sigma_{\infty}$ through which
there pass some affine lines of $\cM_{\alpha,\beta}$.
This implies that $\psi(\ell_1\cup\ell_2)=\ell_1\cup\ell_2$ and,
consequently
\[
\psi(\cB_{\alpha,\beta})=\psi(\cB_{\alpha,\beta}\setminus\Sigma_{\infty})
\cup\psi(\ell_1\cup\ell_2)=(\cM_{\alpha',\beta'}\setminus\Sigma_{\infty})
\cup(\ell_1\cup\ell_2)=\cB_{\alpha',\beta'}.
\]
\end{proof}
\begin{theorem}
\label{lorb}
Suppose $q\equiv1\pmod4$. Let $\cG$ be the group of collineations
$\cG=\mathrm{Aut}(\cM_{\alpha,\beta})$ and
$\fG$
the group of graph automorphisms
$\fG=
\mathrm{Aut}(\Gamma(\cM_{\alpha,\beta}))$.
Then the sets
\begin{itemize}
\item $\Omega_0:=\{P_{\infty}\}$;
\item $\Omega_1$ consisting of
the points at infinity of $\cB_{\alpha,\beta}$ different from
$P_{\infty}$;
\item $\Omega_2:=\cM_{\alpha,\beta}\setminus\Sigma_{\infty}$
\end{itemize}
are all orbits of both $\cG$ and $\fG$.
Furthermore, $\Omega_3=\cM_{\alpha,\beta}\setminus\cB_{\alpha,\beta}$ is
an orbit for $\fG$.
\end{theorem}
\begin{proof}
By~\cite{ACK}, we know that there is a subgroup of $\cG$
which is transitive on the affine points of
$\cM_{\alpha,\beta}$ i.e. on $\Omega_2$.
By Lemma~\ref{l51}, any collineation of $\cM_{\alpha,\beta}$ must
stabilize the plane $\Sigma_{\infty}$; so any element of $\cG$ maps
points of $\Omega_2$ into points of $\Omega_2$ and $\Omega_2$ is
an orbit of $\cG$.
Also by Lemma~\ref{l51}, $\Omega_0:=\{P_{\infty}\}$
is fixed by any $\gamma\in\cG$.
So we have that the points at infinity of
$\cB_{\alpha,\beta}\setminus\{P_{\infty}\}$, as well as the points of
$\cM_{\alpha,\beta}\setminus\cB_{\alpha,\beta}$, are union of orbits.
Denote by $\ell_1,\ell_2$ the two lines of $\cB_{\alpha,\beta}$ at infinity.
Using Lemma~\ref{rlem}, we see that $\cG$ is transitive on
$\Omega_1=(\ell_1\cup\ell_2)\setminus\{P_{\infty}\}$.
Indeed, for any two points $P,Q\in\ell_1\setminus\{P_{\infty}\}$
there are points $P_0, Q_0\in\Omega_2$ such that $r_1(P_0)\cap\Sigma_{\infty}=\{P\}$ and
$r_1(Q_0)\cap\Sigma_{\infty}=\{Q\}$.
Since $\cG$ is transitive on $\Omega_2$, there is
$\gamma\in\cG$ such that $\gamma(P_0)=Q_0$. It follows that
$\gamma ((r_2(P_0)\cap\Sigma_{\infty})\cup\{P\})=
(r_2(Q_0)\cap\Sigma_{\infty})\cup\{Q\}$.
If $\gamma(P)=Q$, then we are done.
Otherwise, consider
the element $\theta:(J,X,Y,Z)\to (J,X,-Y,Z)$ of $\cG$.
Observe that $\theta(r_2(Q_0))\cap\Sigma_{\infty}=
r_1(Q_0)\cap\Sigma_{\infty}$. Hence,
$\theta\gamma(P)=Q$.
Also, $\theta(\ell_1)=\ell_2$; so it follows that
$\Omega_1:=(\ell_1\cup\ell_2)\setminus\{P_{\infty}\}$
is an orbit of $\cG$.
Since $\fG$ contains $\cG$, the orbits of $\fG$ are possibly unions
of orbits of $\cG$. However, observe that the points of
$\Omega_3$ are the only points of $\cM_{\alpha,\beta}$ which are
on exactly one line of $\cM_{\alpha,\beta}$ through the point $P_{\infty}$.
So these points must be permuted among each other also by $\fG$.
The same argument shows that $\Omega_0$ is
also an orbit for $\fG$.
Consider now the points of $\Omega_2$. They are the points of
$\cB_{\alpha,\beta}\setminus\Omega_0$ incident with exactly $2$ lines,
while the points of $\Omega_1$ are incident with more than $2$ lines.
So $\fG$ cannot map a vertex in $\Omega_2$ into a vertex in $\Omega_1$
and these orbits are distinct.
Put $\Gamma:=\Gamma(\cM_{\alpha,\beta})$.
Observe that the graph $\Gamma\setminus\{P_{\infty}\}$
is the disjoint union of $\Gamma(\Omega_3)$ and
$\Gamma(\Omega_1\cup\Omega_2)$.
In turn, $\Gamma(\Omega_3)$ consists of the disjoint union $K_1\cup K_2\cup\dots\cup K_{q-1}$ of $q-1$ copies of the complete graph on $q^2$ elements.
Write $\{ v_{i}^j\}_{j=1,\dots,q^2}$ for the list of vertices of $K_i$ with
$i=1,\dots,q-1$.
Also, each vertex of $\Gamma(\Omega_3\cup\{P_{\infty}\})$
is collinear with $P_{\infty}$.
Let $S_{q^2}$ be the symmetric group on $q^2$ elements, and consider its
action on $\Gamma$ given by
\[ \forall\xi\in S_{q^2}: \check{\xi}(v_1^j):=v_{1}^{\xi(j)}\]
if $v_1^j\in K_1$ and fixing all remaining vertices.
Obviously $\check{S}_{q^2}<\fG$ and $\check{S}_{q^2}$ is transitive on $K_1$.
Let $S_{q-1}$ be the symmetric group on $\{1,\dots,q-1\}$ and consider
its action on $\Gamma$ given by
\[ \forall \sigma\in S_{q-1}:
\hat{\sigma}(v_i^j):=v_{\sigma(i)}^{j},\quad j=1,\dots,q^2 \]
and all the remaining vertices of $\Gamma$ are fixed.
We also have $\hat{S}_{q-1}<\fG$ and $\hat{S}_{q-1}$ permutes the
sets $K_i$ for $i=1,\dots,q-1$.
By construction, we see that
the wreath product
$\check{S}_{q^2}\wr \hat{S}_{q-1}$ is a subgroup of $\fG$,
it acts naturally on $\Gamma$, fixes all vertices not
in $\Omega_3$ and acts transitively on $\Omega_3$.
It follows that $\fG$ is transitive on $\Omega_3$.
\end{proof}
\begin{remark}
It can be easily seen that the
automorphism group of $\Gamma:=\Gamma(\cM_{\alpha,\beta})$ is in
general much larger than the subgroup of collineations
stabilizing $\cM_{\alpha,\beta}$. In particular the elements
of $\check{S}_{q^2}\wr\hat{S}_{q-1}$ are not, in general, collineations.
For instance, in the case $q=5$ with $\alpha=\beta=\varepsilon$
where $\varepsilon$ is a primitive element of $\GF(25)$,
the group $\cG$ has order
$2^65^5$, while
$\fG$ has order
$2^{99}3^{42}5^{30}7^{12}11^{8}13^417^419^423^4$.
In this case also $\cG$ is transitive on $\Omega_3$.
\end{remark}
\begin{lemma}
\label{col-lemma}
If $\cM_{\alpha,\beta}$ and $\cM_{\alpha',\beta'}$ are two projectively equivalent BM-quasi-Hermitian varieties then there is a semilinear collineation $\phi : \cM_{\alpha,\beta}\rightarrow \cM_{\alpha',\beta'}$ of the following type
\[\phi(j,x,y, z) =(j^{\sigma},x^{\sigma},y^{\sigma},z^{\sigma})M,\quad\text{where}\]
\[M=\begin{pmatrix}
a&0&0&0\\
0&b&c&0\\
0& c& -b&0\\
0&0&0&1
\end{pmatrix}, \quad\text{or}\quad M=\begin{pmatrix}
a&0&0&0\\
0&b&c&0\\
0& -c& b&0\\
0&0&0&1
\end{pmatrix},\]
$\sigma \in\mathrm{Aut}(\GF(q^2))$, $a\in \GF(q)\setminus \{0\}$, $b,c\in \GF(q^2)$, $b^2+c^2\neq 0$ and if $b\neq 0 \neq c$ then $c=\lambda b$ with $\lambda \in \GF(q)\setminus \{0\}$ such that $\lambda^2+1\neq0$.
\end{lemma}
\begin{proof}
By Lemma~\ref{l51}, $\phi$ fixes the point $P_{\infty}$ and stabilizes
$\Sigma_{\infty}$. As the automorphism group of $\cM_{\alpha,\beta}$ is
transitive on its affine points,
we can also assume that $\phi(1,0,0,0)=(1,0,0,0)$.
In more detail, let $G'$ be the collineation group of $\cM_{\alpha',\beta'}$ fixing $P_{\infty}$, leaving $\cF \setminus \{P_{\infty}\}$ invariant and transitive on the affine points of $\cM_{\alpha',\beta'}$. If $\phi(1,0,0,0)\neq (1,0,0,0)$ we can consider the collineation $\phi' \in G'$ mapping $\phi(1,0,0,0)$ to $(1,0,0,0)$ and then we replace $\phi$ by $\phi \phi'$. This implies that
$\phi$ has the following form
\[\phi(j,x,y, z) =(j^{\sigma},x^{\sigma},y^{\sigma},z^{\sigma})\begin{pmatrix}
a&0&0&0\\
0&b&c&d\\
0& e& f&g\\
0&0&0&1
\end{pmatrix}, \]
where $\sigma \in\mathrm{Aut}(\GF(q^2))$, $a,b,c,d,e,f,g \in \GF(q^2)$ and $a \neq 0 \neq bf-ce$.
Since $(1,0,0,c)$ belongs to $\cM_{\alpha, \beta}$ if
and only if $c\in\GF(q)$, it follows that
$\phi(1,0,0,c)=(a,0,0,c)\in \cM_{\alpha', \beta'}$ implies $ca^{-1}\in
\GF(q)$, and thus $a\in\GF(q)^*$.
Now we observe that the affine plane $Y=0$ has in common with $\cM_{\alpha,\beta}$ the points $(1,x,0,z)$ for which
$-\alpha x^2+\beta x^{q+1}-z \in \GF(q)$; so, $a^{-1}(-\alpha^{\sigma} x^{2\sigma}+\beta^{\sigma}x^{\sigma(q+1)}-z^{\sigma})\in \GF(q)$.
Thus, suppose that $(1,x,0,z)\in \cM_{\alpha,\beta}$; we have $\phi(1,x,0,z) \in \cM_{\alpha', \beta'}$ and therefore
\begin{equation}\label{col1}
(\alpha^{\sigma}-\alpha'(b^2+c^2)/a)x^{2\sigma}-(\beta^{\sigma}-\beta'(b^{q+1}+c^{q+1})/a)x^{\sigma(q+1)}-dx^{\sigma}\in \GF(q),
\end{equation}
as $\sigma$ stabilizes $\GF(q)$.
Let $\eta \in \GF(q^2)\setminus \GF(q)$ be such that $\eta^2$ is a primitive element of $\GF(q)$.
Considering $x^{\sigma}=1,-1,\eta, -\eta, 1+\eta$ in \eqref{col1}, we get
\[d=0,\]
\begin{equation}\label{add2}
\alpha^{\sigma}-\alpha'(b^2+c^2)/a=0,\end{equation}
\begin{equation}\label{add3}
\beta^{\sigma}-\beta'(b^{q+1}+c^{q+1})/a\in \GF(q).\end{equation}
Similarly if we consider the affine points in common between the plane $X=0$ and $\cM_{\alpha,\beta}$, arguing as before, we obtain \[g=0\]
\begin{equation}\label{eqq1}
\alpha^{\sigma}-\alpha'(e^2+f^2)/a=0,
\end{equation}
\begin{equation}\label{eqq2}
\beta^{\sigma}-\beta'(e^{q+1}+f^{q+1})/a\in \GF(q).\end{equation}
In particular,
\begin{equation}\label{eqq3}
b^2+c^2=e^2+f^2\neq 0.
\end{equation}
Also, since $\beta'\not\in\GF(q)$,
\begin{equation}\label{eqq4}
b^{q+1}+c^{q+1}=e^{q+1}+f^{q+1}\neq 0.
\end{equation}
Now we recall that a generic point
$(1,x,y,z)\in \cM_{\alpha,\beta}$ if and only if $\phi(1,x,y,z)\in \cM_{\alpha',\beta'}$.
On the other hand,
\[(1,x,y,z)\in \cM_{\alpha,\beta} \Leftrightarrow -\alpha(x^2+y^2)+\beta(x^{q+1}+y^{q+1})-z\in \GF(q).\]
Since $a \in\GF(q)\setminus\{0\}$ and $\sigma$ stabilizes $\GF(q)$,
the former equation is equivalent to
\begin{equation}\label{res}
a^{-1}\{-\alpha^{\sigma} (x^{2\sigma}+y^{2\sigma})+\beta^{\sigma}[x^{\sigma(q+1)}+y^{\sigma(q+1)}]-z^{\sigma}\}\in \GF(q).
\end{equation}
Next, we observe that
$\phi(1,x,y,z)=(1,\frac{bx^{\sigma}+ey^{\sigma}}{a}, \frac{cx^{\sigma}+fy^{\sigma}}{a}, \frac{z^{\sigma}}{a} ) $ and this point belongs to $\cM_{\alpha', \beta'}$ if and only if
\begin{multline}\label{res1}
a^{-1}\Big\{-\alpha' \left[\frac{(bx^{\sigma}+ey^{\sigma})^2}{a}+ \frac{(cx^{\sigma}+fy^{\sigma})^2}{a}\right]+ \\ \beta'\left[ \frac{(bx^{\sigma}+ey^{\sigma})^{(q+1)}}{a}+\frac{(cx^{\sigma}+fy^{\sigma})^{(q+1)}}{a}\right]-{z^{\sigma}}\Big\} \in \GF(q)
\end{multline}
From \eqref{res} and \eqref{res1},
we get that for all $(1,x,y,z)\in \cM_{\alpha,\beta}$ the following holds:
\begin{multline*}
\alpha^{\sigma} (x^{2\sigma}+y^{2\sigma})-\alpha' \left[\frac{(bx^{\sigma}+ey^{\sigma})^2}{a}+\frac{(cx^{\sigma}+fy^{\sigma})^2}{a}\right]+\\
+\beta'\left[ \frac{(bx^{\sigma}+ey^{\sigma})^{(q+1)}}{a}+\frac{(cx^{\sigma}+fy^{\sigma})^{(q+1)}}{a}\right]-\beta^{\sigma}[x^{\sigma(q+1)}+y^{\sigma(q+1)}] \in \GF(q)
\end{multline*}
that is, using~\eqref{eqq3} and
\eqref{eqq4},
\begin{equation}\label{res3}
-\alpha' a^{-1} \left[ 2 x^{\sigma}y^{\sigma}(be+cf) \right]+ \beta' \left[(b^qe+c^qf) x^{\sigma q}y^{\sigma}+(be^q+cf^q) x^{\sigma}y^{\sigma q} \right]\in \GF(q)
\end{equation}
We are going to prove that $b^qe+c^qf=0$. Thus, let $\nu \in\GF(q^2)$ be any solution of $X^{q+1}=-1$.
The semilinear collineation $\phi$ has to leave the Hermitian cone $\cF$ invariant, that is, $\phi(0,x,\nu x,z)\in \cF$,
and because of the first equation in \eqref{eqq4} this means
\begin{equation}\label{res4}
(b^qe+c^qf)\nu^{\sigma}+(be^q+cf^q)\nu^{\sigma q}=0
\end{equation}
for any of the $q+1$ different solutions of $X^{q+1}=-1$.
If $(b^qe+c^qf)\neq 0$, then the equation $(b^qe+c^qf)X+(b^qe+c^qf)^q X^{q}=0$ would have more than $q$ solutions, which is impossible.
Thus,
\begin{equation}\label{add4}
b^qe+c^qf=0
\end{equation}
and, since $\alpha' \notin \GF(q)$, \eqref{res3} gives
\begin{equation}\label{eqq5}
be+cf=0.
\end{equation}
In particular, if $c\neq 0 \neq e$ or $b\neq 0 \neq f$, from \eqref{eqq3} and \eqref{eqq5} we also get
$(e,f)=(c,-b)$ or $(e,f)=(-c,b)$. Thus from \eqref{add4} we also obtain
\begin{equation}\label{res5}
b^qc-bc^q=0.
\end{equation}
Hence if $b\neq 0 \neq c$ then $c=\lambda b$ where $\lambda \in \GF(q)\setminus\{0\}$ and $\lambda^2+1\neq 0$. So the lemma follows.
\end{proof}
From the previous Lemma, taking into account conditions from \eqref{add2} to \eqref{eqq2}, we get that $\cM_{\alpha,\beta}$ and $\cM_{\alpha',\beta'}$ are projectively equivalent if and only if
\begin{equation}\label{equiv}
(\alpha',\beta')=\bigl(a\alpha^{\sigma}/(b^2+c^2),\ a\beta^{\sigma}/(b^{q+1}+c^{q+1})+u\bigr)
\end{equation}
for some $\sigma \in\mathrm{Aut}(\GF(q^2))$, $a\in\GF(q)^*$, $u\in \GF(q)$, $b,c \in \GF(q^2)$ with $b^2+c^2\neq 0$, and if $b\neq 0\neq c$ then $c=\lambda b$ with $\lambda \in \GF(q)\setminus \{0\}$.
In this case we write $(\alpha, \beta) \sim (\alpha',\beta')$ where $\sim$ is in particular an equivalence relation on the ordered pairs $(\alpha,\beta)\in \GF(q^2)^2$ such that $4\alpha^{q+1}+(\beta^q-\beta)^2 \neq 0$.
\begin{lemma}\label{lemadd5}
Let $\cM_{\alpha,\beta}$ be a BM-quasi-Hermitian variety of $\PG(3,q^2)$, $q$ odd
and $\varepsilon$ be a primitive element of $\GF(q^2)$.
Then, there exists $\alpha'\in\GF(q^2)\setminus\{0\}$ such that
$\cM_{\alpha,\beta}$ is projectively equivalent to $\cM_{\alpha',\varepsilon}$.
\end{lemma}
\begin{proof}
Write $\beta=\beta_0+\varepsilon \beta_1$, with $\beta_0,\beta_1 \in \GF(q)$ and $\beta_1 \neq 0$. Then, there exists $b\in \GF(q^2)\setminus \{0\}$, such that $\beta_1/b^{q+1}=1$.
Therefore $(\alpha,\beta) \sim (\alpha/b^2, \beta/b^{q+1}-\beta_0/b^{q+1})=(\alpha/b^2, \varepsilon) $.
\end{proof}
In light of the previous lemma, in order to determine the equivalence
classes of BM-quasi-Hermitian varieties it is enough to determine when
two varieties $\cM_{\alpha,\varepsilon}$ and $\cM_{\alpha',\varepsilon}$
are equivalent. This is done in the following.
\begin{lemma}
\label{main-lemma}
Let $q=p^n$ with $p$ an odd prime, $\varepsilon$ be a primitive element of $\GF(q^2)$, and $\cM_{\alpha,\varepsilon}$
and $\cM_{\alpha',\varepsilon}$ be two BM-quasi-Hermitian varieties
of $\PG(3,q^2)$.
Put
\[
\delta(\alpha):=\frac{(\varepsilon^q-\varepsilon)^2}{4\alpha^{q+1}}.
\]
Then, $\cM_{\alpha,\varepsilon}$ is projectively equivalent to
$\cM_{\alpha',\varepsilon}$ if and only if there exists $\sigma\in\mathrm{Aut}(\GF(q^2))$ such that
\[ \delta(\alpha')=\delta(\alpha)^{\sigma}.
\]
\end{lemma}
\begin{proof}
First we observe that for all $\alpha \in\GF(q^2)\setminus \{0\}$ such that $4\alpha^{q+1}+(\varepsilon^q-\varepsilon)^2 \neq 0$ \[\delta(\alpha):=\frac{(\varepsilon^q-\varepsilon)^2}{4\alpha^{q+1}}\] belongs to $ \GF(q)\setminus\{0,-1\}$.
Conversely, given any $\delta \in\GF(q)\setminus \{0,-1\}$ we can generate some BM-quasi-Hermitian varieties $\cM_{\alpha, \varepsilon}$ by choosing $\alpha$ to
be any solution of $4 \delta x^{q+1}=(\varepsilon^q-\varepsilon)^2$. In fact, it turns out that $(\varepsilon ^q-\varepsilon)^2+4\alpha^{q+1}\neq 0$. Furthermore, let $\alpha_1$ and $\alpha_2$ be any two such solutions. Then there
exists $k$ such that $\alpha_2=\varepsilon^{k(q-1)}\alpha_1$.
On the other hand,
$(\alpha_1, \varepsilon) \sim (\alpha_1 \varepsilon^{-2} \varepsilon^{q+1}, \varepsilon \varepsilon^{-(q+1)}\varepsilon^{q+1})=(\alpha_1\varepsilon^{q-1}, \varepsilon)$.
By repeating this process $k$ times, we see
\begin{equation}
\label{eqk}
(\alpha_1,\varepsilon) \sim
(\alpha_1\varepsilon^{k(q-1)},\varepsilon)=(\alpha_2, \varepsilon).
\end{equation}
Thus $\delta(\alpha_1)=\delta(\alpha_2)$ implies that
$\cM_{\alpha_1,\varepsilon}$ is projectively equivalent to
$\cM_{\alpha_2,\varepsilon}$.
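As a hedged sanity check (ours, not part of the original argument), the statements above can be verified by brute force in the smallest case $q=5$, realizing $\GF(25)$ as $\GF(5)[t]/(t^2-2)$; the modelling choices and variable names below are ours.
\begin{verbatim}
p = 5; q = p

def mul(x, y):                       # (a + b*t)*(c + d*t) with t^2 = 2
    a, b = x
    c, d = y
    return ((a*c + 2*b*d) % p, (a*d + b*c) % p)

def power(x, k):
    r = (1, 0)
    for _ in range(k):
        r = mul(r, x)
    return r

nonzero = [(a, b) for a in range(p) for b in range(p) if (a, b) != (0, 0)]
eps = next(x for x in nonzero                     # a primitive element
           if len({power(x, k) for k in range(1, q*q)}) == q*q - 1)

def inv(x):
    return next(y for y in nonzero if mul(x, y) == (1, 0))

def delta(alpha):                    # (eps^q - eps)^2 / (4*alpha^(q+1))
    eq = power(eps, q)
    num = power(((eq[0] - eps[0]) % p, (eq[1] - eps[1]) % p), 2)
    return mul(num, inv(mul((4 % p, 0), power(alpha, q + 1))))

for alpha in nonzero:
    d = delta(alpha)
    assert d[1] == 0 and d != (0, 0)              # delta lies in GF(q)*
    if d != (p - 1, 0):                           # exclude delta = -1
        orbit = {mul(alpha, power(eps, k*(q-1))) for k in range(q + 1)}
        assert {a for a in nonzero if delta(a) == d} == orbit
print("checked: q = 5")
\end{verbatim}
The script confirms that $\delta(\alpha)$ always lies in $\GF(q)^*$ and that, away from the excluded value $\delta=-1$, the $q+1$ solutions $\alpha$ with a given $\delta$ form a single orbit under multiplication by powers of $\varepsilon^{q-1}$, as claimed in \eqref{eqk}.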
Hence, in order to determine the number $N$ of projectively inequivalent BM-quasi-Hermitian varieties we need to count the number of ``inequivalent''
$\delta \in \GF(q)\setminus\{0,-1\}$.
Now, given two BM-quasi-Hermitian varieties $\cM_{\alpha,\varepsilon}$ and $\cM_{\alpha',\varepsilon}$ and setting
\[ \delta=\delta(\alpha)=\frac{(\varepsilon^q-\varepsilon)^2}{4\alpha^{q+1}},\quad
\delta'=\delta(\alpha')=\frac{(\varepsilon^q-\varepsilon)^2}{4{\alpha'}^{q+1}}, \]
we have to show that $\cM_{\alpha,\varepsilon}\sim\cM_{\alpha',\varepsilon}$
if and only if $\delta'=\delta^{\sigma}$ for some $\sigma\in\mathrm{Aut}(\GF(q^2))$.
Suppose first that $\cM_{\alpha,\varepsilon}$ and
$\cM_{\alpha',\varepsilon}$
are equivalent that is, $(\alpha',\varepsilon) \sim (\alpha,\varepsilon)$.
This is true if and only if
\[ \alpha'=\frac{\alpha^{\sigma}a}{b^2+c^2},
\quad
\varepsilon=\frac{a\varepsilon^{\sigma}}{b^{q+1}+c^{q+1}}+u, \]
for some $\sigma \in \mathrm{Aut}(\GF(q^2))$, $a \in \GF(q)\setminus \{0\}$, $u\in \GF(q)$, $b,c\in \GF(q^2)$ such that the conditions in the thesis of Lemma~\ref{col-lemma} hold.
Then
\[ \delta'=(b^2+c^2)^{q+1}\frac{(\varepsilon^q-\varepsilon)^2}{4a^2(\alpha^{\sigma})^{q+1}},\quad
\delta^{\sigma}=(b^{q+1}+c^{q+1})^2\frac{(\varepsilon^q-\varepsilon)^2}{4a^2(\alpha^{\sigma})^{q+1}}.\]
We observe that
\begin{equation}
\label{dpds}
(b^2+c^2)^{q+1}=(b^{q+1}+c^{q+1})^2.
\end{equation}
In fact, if either $b=0$ or $c=0$, then~\eqref{dpds} is trivially satisfied
and there is nothing further
to prove.
Otherwise, a direct manipulation yields that \eqref{dpds} is equivalent to
\[ \frac{b^{q-1}}{c^{q-1}}+\frac{c^{q-1}}{b^{q-1}}=2.
\]
This gives $\frac{b^{q-1}}{c^{q-1}}=1$, which always holds, since \eqref{res5} does.
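For completeness, the manipulation can also be spelled out directly (this rewriting is ours): by the Frobenius identity $(b^2+c^2)^q=b^{2q}+c^{2q}$,
\begin{equation*}
(b^2+c^2)^{q+1}-(b^{q+1}+c^{q+1})^2 = b^{2q}c^2-2b^{q+1}c^{q+1}+b^2c^{2q} = (b^qc-bc^q)^2,
\end{equation*}
so \eqref{dpds} holds precisely when \eqref{res5} does, which also covers the cases $b=0$ or $c=0$.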
Because of \eqref{dpds}, $\delta'=\delta^{\sigma}$.
Conversely, suppose that $\delta'=\delta^{\sigma}$ for some $\sigma$.
Then we observe that $(\alpha,\varepsilon)\sim (\alpha^{\sigma}, \varepsilon^{\sigma}) $. Furthermore
$(\alpha^{\sigma}, \varepsilon^{\sigma})\sim (\alpha^{\sigma}/b^2,\varepsilon)$ where $\varepsilon^{\sigma}=b_1\varepsilon+b_0$ with $b_1/b^{q+1}=1$ for a suitable $b\in \GF(q^2)\setminus\{0\}$, as seen in the proof of Lemma \ref{lemadd5}.
Thus we have that
\begin{multline*}
\delta(\alpha^{\sigma}/b^2) =(\varepsilon^q-\varepsilon)^2 (b^2)^{q+1}/[4(\alpha^{\sigma})^{q+1}]=\\
(b^2)^{q+1}\{[(\varepsilon^{\sigma})^q-b_0]-(\varepsilon^{\sigma}-b_0)\}^2/[4(\alpha^{\sigma})^{q+1}(b^{q+1})^2]=\\
[(\varepsilon^q-\varepsilon)^2]^{\sigma}/[4\alpha^{(q+1)\sigma}]=\delta^{\sigma}=\delta'.
\end{multline*}
Hence,
\[(\alpha',\varepsilon)\sim (\alpha^{\sigma}/b^2,\varepsilon)\sim (\alpha^{\sigma},\varepsilon^{\sigma})\sim(\alpha, \varepsilon).\]
\end{proof}
\begin{conjecture}
We conjecture that Lemma~\ref{main-lemma} holds for all odd $r\geq 2$.
\end{conjecture}
\begin{theorem}
Let $q=p^n$ with $p$ an odd prime. Then
the number $N$ of projectively inequivalent BM-quasi-Hermitian varieties $\cM_{\alpha,\beta}$ of $\PG(3,q^2)$ is
\[N=\frac{1}{n}\sum_{k|n}\Phi\left(\frac{n}{k}\right)p^k-2,\]
where $\Phi$ is the Euler $\Phi$-function.
\end{theorem}
\begin{proof}
For all $\delta, \delta' \in \GF(q) \setminus \{0,-1\}$ write
$\delta \sim \delta'$ if and only if $\delta'=\delta^{\sigma}$ for some $\sigma \in \mathrm{Aut}(\GF(q^2))$. By Lemma~\ref{main-lemma}, $N$ is the number of inequivalent classes $[\delta]$ under $\sim$.
Let $N_e=|\{ \delta \in \GF(p^e) \setminus \{0,-1\}:$ $\delta$ is not contained in any proper subfield of $\GF(p^e)\}|$. We have
\[ N=\sum_{e|n}\frac{N_e}{e}. \]
Observe that
\[\sum_{e'|e}N_{e'}=p^e-2.\]
Denoting by $\mu$ the M\"obius function,
M\"obius inversion gives
\[N_e=\sum_{e'|e}\mu(e')p^{e/e'}-2\sum_{e'|e}\mu(e').\]
It follows that \[N=\Bigl(\sum_{e|n}\frac{1}{e}\sum_{e'|e}\mu(e')p^{e/e'}\Bigr)-2.\]
Let $m=e/e'$ be a divisor of $n$; then the coefficient of $p^m$ is
\[ \frac{1}{n}\sum_{e/m\,|\,n/m}\mu\Bigl(\frac{e}{m}\Bigr)\frac{n/m}{e/m}=\frac{1}{n} \Phi\Bigl(\frac{n}{m}\Bigr)\]
and finally \[N=\frac{1}{n}\left(\sum_{k|n}\Phi\Bigl(\frac{n}{k}\Bigr)p^{k}\right)-2.\]
\end{proof}
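The final resummation can also be checked numerically; the following self-contained script (a sketch of ours, not from the paper) verifies for small $p$ and $n$ that the M\"obius-inverted count coincides with the closed formula of the theorem.
\begin{verbatim}
from fractions import Fraction
from math import gcd

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def mobius(n):
    k, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0          # square factor
            k += 1
        d += 1
    if n > 1:
        k += 1
    return -1 if k % 2 else 1

def totient(n):
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def N_direct(p, n):               # sum_{e|n} N_e / e, N_e by inversion
    return sum(Fraction(sum(mobius(d) * (p ** (e // d) - 2)
                            for d in divisors(e)), e)
               for e in divisors(n))

def N_closed(p, n):               # (1/n) sum_{k|n} Phi(n/k) p^k  -  2
    return Fraction(sum(totient(n // k) * p ** k
                        for k in divisors(n)), n) - 2

for p in (3, 5, 7, 11, 13):
    for n in range(1, 8):
        assert N_direct(p, n) == N_closed(p, n)
print("closed formula verified")
\end{verbatim}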
\section{Introduction}\label{intro}
The observation of exotic stable multi-charge objects would represent striking evidence for physics
beyond the Standard Model. Cosmological arguments put severe constraints on possible properties of
such objects. Such particles (see e.g.\ Ref.~\cite{DMRev} for a review and references) should be stable, provide the measured dark matter density and be decoupled
from plasma and radiation at least before the beginning of matter dominated stage. The easiest way to
satisfy these conditions is to involve neutral elementary weakly interacting massive particles (WIMPs).
SUSY models provide a list of possible WIMP candidates: neutralino, axino, gravitino, etc.
However, this may not be the only particle physics solution for the dark matter problem.
One of such alternative solutions is based on the existence of heavy stable charged particles bound
in neutral ``dark atoms''.
Dark atoms offer an interesting possibility to solve the puzzles of dark matter searches. It turns out that even stable electrically charged particles can exist hidden in such atoms, bound by ordinary Coulomb interactions (see \cite{DMRev,mpla,DDMRev} and references therein).
Stable particles with charge $-1$ are excluded due to the overproduction of anomalous isotopes. However, no such evident contradiction arises for doubly negatively charged particles.
There
exist several types of particle models where heavy
stable -2 charged species, $O^{--}$, are predicted:
\begin{itemize}
\item[(a)] AC-leptons, predicted
as an extension of the Standard Model, based on the approach
of almost-commutative geometry \cite{Khlopov:2006dk,5,FKS,bookAC}.
\item[(b)] Technileptons and
anti-technibaryons in the framework of Walking Technicolor
(WTC) \cite{KK,Sannino:2004qp,Hong:2004td,Dietrich:2005jn,Dietrich:2005wk,Gudnason:2006ug,Gudnason:2006yj}.
\item[(c)] stable ``heavy quark clusters'' $\bar U \bar U \bar U$ formed by anti-$U$ quark of 4th generation
\cite{Khlopov:2006dk,Q,I,lom,KPS06,Belotsky:2008se} \item[(d)] and, finally, stable charged
clusters $\bar u_5 \bar u_5 \bar u_5$ of (anti)quarks $\bar u_5$ of
5th family can follow from the approach unifying spins and charges~\cite{Norma}.
\end{itemize}
All these models also
predict corresponding +2 charge particles. If these positively charged particles remain free in the early Universe,
they can recombine with ordinary electrons in anomalous helium, which is strongly constrained in
terrestrial matter. Therefore a cosmological scenario should provide a mechanism which suppresses anomalous helium.
There are two possible mechanisms that can provide a suppression:
\begin{itemize}
\item[(i)] The abundance of anomalous helium in the Galaxy may be significant, but in terrestrial matter
a recombination mechanism could suppress this abundance below experimental upper limits \cite{Khlopov:2006dk,FKS}.
The existence of a new U(1) gauge symmetry, causing new Coulomb-like long range interactions between charged dark matter particles, is crucial for this mechanism. This leads inevitably to the existence of dark radiation in the form of hidden photons.
\item[(ii)] Free positively charged particles are already suppressed in the early Universe and the abundance
of anomalous helium in the Galaxy is negligible \cite{mpla,I}.
\end{itemize}
These two possibilities correspond to two different cosmological scenarios of dark atoms. The first one is
realized in the scenario with AC leptons, forming neutral AC atoms \cite{FKS}.
The second assumes a charge asymmetry of the $O^{--}$ which forms the atom-like states with
primordial helium \cite{mpla,I}.
If new stable species belong to non-trivial representations of
the SU(2) electroweak group, sphaleron transitions at high temperatures
can provide the relation between baryon asymmetry and excess of
-2 charge stable species, as it was demonstrated in the case of WTC
\cite{KK,KK2,unesco,iwara}.
After it is formed
in the Standard Big Bang Nucleosynthesis (BBN), $^4He$ screens the
$O^{--}$ charged particles in composite $(^4He^{++}O^{--})$ {\it
$OHe$} ``atoms'' \cite{I}.
In all the models of $OHe$, $O^{--}$ behaves either as a lepton or
as a specific ``heavy quark cluster" with strongly suppressed hadronic
interactions.
The cosmological scenario of the $OHe$ Universe involves only one parameter
of new physics $-$ the mass of O$^{--}$. Such a scenario is insensitive to the properties of $O^{--}$ (except for its mass), since the main features of the $OHe$ dark atoms are determined by their nuclear interacting helium shell. In terrestrial matter such dark matter species are slowed down and cannot cause significant nuclear recoil in the underground detectors, making them elusive in direct WIMP search experiments (where detection is based on nuclear recoil) such as CDMS, XENON100 and LUX. The positive results of DAMA experiments (see \cite{DAMAtalk} for review and references) can find in this scenario a nontrivial explanation due to a low energy radiative capture of $OHe$ by intermediate mass nuclei~\cite{mpla,DMRev,DDMRev}. This explains the negative results of the XENON100 and LUX experiments. The rate of this capture is
proportional to the temperature: this leads to a suppression of this effect in cryogenic
detectors, such as CDMS.
OHe collisions in the central part of the Galaxy lead to OHe
excitations, and de-excitations with pair production in E0 transitions can explain the
excess of the positron-annihilation line, observed by INTEGRAL in the galactic bulge \cite{DMRev,DDMRev,KK2,CKWahe}.
One should note that the nuclear physics of OHe is in the course of development and its basic element for a successful and self-consistent OHe dark matter scenario is related to the existence of a dipole Coulomb barrier, arising in the process of OHe-nucleus interaction and providing the dominance of elastic collisions of OHe with nuclei. This problem is the main open question of composite dark matter, which requires a correct quantum mechanical solution \cite{CKW}. The lack of such a barrier and an essential contribution of inelastic OHe-nucleus processes would seem to lead to an inevitable overproduction of anomalous isotopes \cite{CKW2}.
Production of pairs of elementary stable doubly charged heavy leptons is characterized by a number of distinct experimental signatures that would provide effective search for
them at the experiments at the LHC and test the atom-like structure of the cosmological dark matter. Moreover, astrophysical consequences of composite dark matter model can reproduce experimentally detected excess in positron annihilation line and high energy positron fraction in cosmic rays only if the mass of double charged $X$ particles is in the 1 TeV range (Section 2). We discuss confrontation of these predictions and experimental data in Section 3. The current status and expected progress in experimental searches for stable double charged particles as constituents of composite dark matter are summarized in the concluding Section 4.
\section{Indirect effects of composite dark matter}
\label{astro}
The existence of such form of matter as O-helium should lead to a number of astrophysical signatures,
which can constrain or prove this hypothesis. One of the signatures of O-helium can be the presence
of an anomalous low $Z/A$ component in the cosmic ray flux. O-helium atoms that are present in the
Galaxy in the form of dark matter can be destroyed in astrophysical processes and free $X^{--}$ can be
accelerated as ordinary charged particles. O-helium atoms can be ionized due to nuclear interaction with
cosmic rays or in the front of a shock wave in Supernova explosions, where they were effectively
accumulated during star evolution \cite{I}. If the mechanisms of $X^{--}$ acceleration are effective, the
low $Z/A$ component with charge $-2$ should be present in cosmic rays at the level of $F_X/F_p \sim 10^{-9}m^{-1}_o $ \cite{KK2},
and might be observed by the PAMELA and AMS02 cosmic ray experiments. Here $m_o$ is the mass
of O-helium in TeV, and $F_X$ and $F_p$ are the fluxes of $X^{--}$ and protons, respectively.
\subsection{Excess of positron annihilation line in the galactic bulge}
\label{bulge}
Another signature of O-helium in the Galaxy is the excess of the positron annihilation line in cosmic
gamma radiation due to de-excitation of the O-helium after its interaction in the interstellar space. If
the $2S$ level of O-helium is excited, its direct one-photon transition to the $1S$ ground state is forbidden and
the de-excitation mainly goes through direct pair production. In principle this mechanism of positron
production can explain the excess in positron annihilation line from the galactic bulge, measured by the
INTEGRAL experiment. Due to the large uncertainty of DM distribution in the galactic bulge this interpretation of the INTEGRAL data is possible in a wide range of masses of
O-helium with the minimal required central density of O-helium dark matter at $m_o = 1.25 \,{\rm TeV}$ \cite{integral1,integral2}.
For smaller values of $m_o$ one needs a larger central density to provide effective excitation of O-helium in collisions.
Current analysis favors the lowest values of the central dark matter density, making the O-helium explanation of this excess possible only in a narrow window around this minimal value (see Fig.~\ref{integral}).
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=1]{integral.png}
\caption{Dark matter is subdominant in the central part of the Galaxy. This strongly suppresses its dynamical effect and causes a large uncertainty in the dark matter density and velocity distribution. Within this uncertainty one can explain the positron line excess, observed by INTEGRAL, for a wide range of $m_o$ given by the curve with minimum at $m_o = 1.25 \,{\rm TeV}$. However, recent analyses of the possible dark matter distribution in the galactic bulge favor the minimal value of its central density.}
\label{integral}
\end{center}
\end{figure}
\subsection{Composite dark matter solution for high energy positron excess}
\label{HEpositrons}
In a two-component dark atom model, based on Walking Technicolor, a
sparse WIMP-like component of atom-like states, made of positive and negative
doubly charged techniparticles, is present together with the dominant
OHe dark atom, and the decays of doubly positive charged techniparticles to
pairs of same-sign leptons can explain the excess of high-energy cosmic-ray
positrons found in the PAMELA and AMS02 experiments [17]. This explanation
is possible for the mass of the decaying +2 charged particle below 1 TeV
and depends on the branching ratios of the leptonic channels (see Fig. \ref{ams}).
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=1]{ams.png}
\caption{Best fit high energy positron fluxes from decaying composite dark matter in confrontation with the results of AMS02 experiment.}
\label{ams}
\end{center}
\end{figure}
Since even pure
lepton decay channels are inevitably accompanied by gamma radiation, an
important constraint on this model follows from the measurement of the cosmic
gamma ray background in the FERMI/LAT experiment.
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=1.2]{fermi.png}
\caption{Gamma ray flux accompanying the best fit high energy positron fluxes from decaying composite dark matter reproducing the results of AMS02 experiment, in confrontation with FERMI/LAT measurement of gamma ray background.}
\label{fermi}
\end{center}
\end{figure}
The multi-parameter
analysis of the decaying dark atom constituent model determines the maximal model-independent value of the mass of the decaying
+2 charged particle at which this explanation is possible: $$m_o<1 \,{\rm TeV}.$$
One should take into account that even in this mass range the hypothesis of decaying composite dark matter, distributed in the galactic halo, can lead to a gamma ray flux exceeding the background measured by FERMI/LAT.
\subsection{Sensitivity of indirect effect of composite dark matter to the mass of their double charged constituents}
\label{mass}
We see that indirect effects of composite dark matter strongly depend on the mass of its double charged constituents.
To explain the excess of the positron annihilation line in the galactic bulge, the mass of the double charged constituent of O-helium should be in a narrow window around
$$m_o = 1.25 \,{\rm TeV}.$$
To explain the excess of high energy cosmic ray positrons by decays of constituents of composite dark matter with charge +2, and to avoid overproduction of the gamma background accompanying such decays, the mass of such a constituent should be in the range
$$m_o < 1 \,{\rm TeV}.$$
These predictions should be confronted with the experimental data on the accelerator search for stable double charged particles.
\section{Searches for stable multi-charged particles}
\label{experiment}
A~new charged massive particle with electric charge $\neq 1e$ would represent a~dramatic deviation from the~predictions of the~Standard Model, and such a~spectacular discovery would lead to fundamental insights and critical theoretical developments. Searches for such kind of particles were carried out in many cosmic ray and collider experiments (see for instance review in~\cite{fair}).
Experimental search for double charged particles is of a~special interest because of the important cosmological consequences discussed in previous sections. In a~tree approximation, such particles cannot decay to a~pair of quarks due to electric charge conservation, and only decays to the~same sign leptons are possible. The~latter implies lepton number nonconservation, being a~profound signature of new physics. In general, it makes such states sufficiently long-lived on a~cosmological scale.
Obviously, such experiments are limited to look only for the~``long-lived'' particles, which do not decay within a~detector, as opposed to truly stable particles, which do not decay at all. Since the~kinematics and cross sections for double charged particle production processes cannot be reliably predicted, search results at collider experiments are usually quoted as cross section limits for a~range of charges and masses for well-defined kinematic models. In these experiments, the~mass limit was set at the~level of up to $100$~\,{\rm GeV}{} (see e.g.~\cite{fair} for a review).
The~CDF experiment collaboration performed a~search for long-lived double charged Higgs bosons ($H^{++}, H^{--}$) with $292$~pb$^{-1}$ of data collected in $p\bar{p}$ collisions at $\sqrt{s}=1.96$~\,{\rm TeV}{}~\cite{Acosta:2005np}.
The~dominant production mode considered was $p\bar{p}\rightarrow \gamma^{\star}/Z+X\rightarrow H^{++}H^{--}+X$.
Background sources include jets fragmenting to high-$\ensuremath{p_{\text{T}}}$ tracks, $Z\rightarrow ee$, $Z\rightarrow \mu \mu$, and $Z\rightarrow \tau \tau$, where at least one $\tau$ decays hadronically. The number of events expected from these backgrounds in the~signal region was estimated to be $<10^{-5}$ at $68\%$ confidence level (CL).
Not a~single event with a~$H^{++}$ or $H^{--}$ was found in experimental data. This allows setting the cross-section limit shown in Fig.~\ref{CDF_limits} as a~horizontal solid line.
Next-to-leading order theoretical calculations of the cross-section for pair production of $H^{\pm\pm}$ bosons for left-handed and right-handed couplings are also shown in this figure.
Comparison of expected and observed cross-section limits gives the~following mass constraints: $133$ and $109$~\,{\rm GeV}{} on the~masses of long-lived $H^{\pm\pm}_L$ and $H^{\pm\pm}_R$, respectively, at $95\%$ CL, as shown in Fig.~\ref{CDF_limits}.
In case of degenerate $H^{\pm\pm}_L$ and $H^{\pm\pm}_R$ bosons, the~mass limit was set to $146$~\,{\rm GeV}{}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=1.2]{CDF_limits.pdf}
\caption{The~comparison of the~experimental cross section
upper limit with the~theoretical next-to-leading order cross
section for pair production of $H^{\pm\pm}$ bosons. The~theoretical
cross sections are computed separately for bosons with left-handed ($H^{\pm\pm}_L$) and right-handed ($H^{\pm\pm}_R$) couplings, and summed
for the~case that their masses are degenerate,~\cite{Acosta:2005np}.}
\label{CDF_limits}
\end{center}
\end{figure}
\subsection{Searches at Large Hadron Collider}
Significant increase of beam energy at the~Large Hadron Collider (LHC) opens a~new era in the~high energy physics and allows accessing uncharted territories of particle masses. In this section the~results of searches for the~multi-charged particles, performed by the~ATLAS and the~CMS collaborations at LHC, will be described.
Calculations of the cross sections assume that these particles are generated as new massive spin-$1/2$ ones, neutral under SU(3)$_C$ and SU(2)$_L$.
\subsubsection{ATLAS experiment at LHC}
\label{ATLAS_search}
In Run~1 (2010--2012), the~ATLAS~\cite{Aad:2008zzm} collaboration at LHC performed two searches for long-lived multi-charged particles, including the~double charged particles:
one search with $4.4$~fb$^{-1}$ of data collected in $pp$ collisions at $\sqrt{s}=7$~\,{\rm TeV}{}~\cite{Aad:2013pqd},
and another one with $20.3$~fb$^{-1}$ collected at $\sqrt{s}=8$~\,{\rm TeV}{}~\cite{Aad:2015oga}.
Both these searches feature particles with large transverse momentum values, traversing the~entire ATLAS detector. The energy loss of a~double charged particle is a~factor of $q^2=4$ higher than that of a~single charged particle. Such particles will leave a~very characteristic signature of high ionization in the~detector.
More specifically, the~searches look for particles with anomalously high ionization on their tracks in three independent detector subsystems: silicon pixel detector (Pixel) and transition radiation tracker (TRT) in the~ATLAS inner detector, and monitoring drift tubes (MDT) in the~muon system.
The~estimate of the~expected background originating from the~SM processes and providing input into the~signal region D was calculated to be $0.013\pm0.002 \textrm{(stat.)}\pm0.003\textrm{(syst.)}$ events.
In order to determine the cross section, the reconstruction efficiency of signal particles has to be known.
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.45]{EfficienciesTrends_PAPER.pdf}
\caption{The~signal efficiencies for different masses and charges of the~multi-charged particles for the~DY production model. Double charged particles are denoted as ``$z=2$'' (red points and line). The~picture is taken from~\cite{Aad:2015oga}.}
\label{ATLAS_EfficiencyTrend}
\end{center}
\end{figure}
This value is defined as a~fraction of simulated events with at least one multi-charged particle satisfying all of the~aforementioned criteria over the~number of all generated events. In other words, it is the search sensitivity for finding a~hypothetical particle with the~ATLAS experiment. These values are shown in Fig.~\ref{ATLAS_EfficiencyTrend} for each considered signal sample containing particles with charges from $2$ to $6$.
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.45]{AllMasses_Xsections_AllChargesOnOneHisto_NoUncertaintyBands.pdf}
\caption{Observed $95\%$ CL cross-section upper limits and theoretical cross-sections as functions of the multi-charged particles mass. Again, the~double charged particles are denoted as ``$z=2$'' (red points and lines). The~picture is taken from~\cite{Aad:2015oga}.}
\label{ATLAS_Limits}
\end{center}
\end{figure}
No events with double charged particles were found in either the 2011 or 2012 experimental data sets, setting the~lower mass limits to $430$ and $660$~\,{\rm GeV}{}, respectively, at $95\%$ CL. The~comparison of observed cross-section upper limits and theoretically predicted cross-sections is shown in Fig.~\ref{ATLAS_Limits}.
\subsubsection {CMS experiment at LHC}
\label{CMS_search}
In parallel to the~ATLAS experiment, the~CMS~\cite{Chatrchyan:2008aa} collaboration at LHC performed a~search for double charged particles, using $5.0$~fb$^{-1}$ of data collected in $pp$ collisions at $\sqrt{s}=7$~\,{\rm TeV}{} and $18.8$~fb$^{-1}$ collected at $\sqrt{s}=8$~\,{\rm TeV}{}~\cite{Chatrchyan:2013oca}.
The~search features particles with high ionization along their tracks in the~inner silicon pixel and strip tracker. Tracks with specific ionization $I_h>3$~\,{\rm MeV}{}/cm were selected. The muon system was used to measure the~time-of-flight values. Tracks with $1/\beta>1.2$ were considered.
For the~part of the~search based on the~$\sqrt{s}=7$~\,{\rm TeV}{} data, the~number of events in the~signal region, expected from SM processes, was estimated to be $0.15\pm0.04$, whereas for the~$\sqrt{s}=8$~\,{\rm TeV}{} part it was $0.52\pm0.11$~events.
The~uncertainties include both statistical and systematical contributions. $0$ and $1$~events were observed in the~signal regions for the~$7$ and $8$~\,{\rm TeV}{} analyses, respectively, which is consistent with the~predicted event rate.
Comparison between observed upper cross section limits and theoretically predicted cross section values for the~$8$~\,{\rm TeV}{} is shown in Fig.~\ref{CMS_limits_8TeV}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.6]{CMS_limits_8TeV.png}
\caption{Observed $95\%$ CL cross-section upper limits and theoretical cross-sections as functions of the multi-charged particles mass in CMS search at the~$\sqrt{s}=8$~\,{\rm TeV}{}. The~double charged particles are denoted as ``$|Q|=2e$''. The~picture is taken from~\cite{Chatrchyan:2013oca}.}
\label{CMS_limits_8TeV}
\end{center}
\end{figure}
For the~$8$~\,{\rm TeV}{} search, the~mass limit of $665$~\,{\rm GeV}{} was obtained. This result (within uncertainties) is very close to the~ATLAS limit of $660$~\,{\rm GeV}{} for the~$8$~\,{\rm TeV}{} data set. Also, CMS treated the~results obtained at $7$ and $8$~\,{\rm TeV}{} as combined. This allowed pushing the~lower mass limit to $685$~\,{\rm GeV}{} at $95\%$ CL. A~combination of the~results of the two experiments for $8$~\,{\rm TeV}{} would mean an~increase of statistics by a~factor of $2$. Having said that, one can conclude that the~mass limit based on the~results of both experiments for double charged particles can be set at the~level of about $730$~\,{\rm GeV}{}.
\subsubsection {What one expects from LHC Run~2}
\label{LHC Run 2}
Considering a~recent CMS Physics Analysis Summary~\cite{CMS:2016ybj} and an~ATLAS paper in preparation, both on the~searches for double charged particles in data delivered by LHC to these experiments in 2015--2016, we may conclude that each of these two experiments will be able to set a~lower mass limit on the~double charged particles at $m=1000\pm50$~\,{\rm GeV}{}.
Going further and considering the data taking periods of ATLAS and CMS until the end of Run 2 (end of 2018), we can estimate a~lower limit on the~double charged particles mass corresponding to the~Run 2 data set. Several assumptions are made in these speculations:
\begin{itemize}
\item by the~end of 2018, ATLAS and CMS will each record about $120$~fb$^{-1}$ of $\sqrt{s}=13$~\,{\rm TeV}{} data;
\item signal efficiency will remain at a~present level in both experiments, without being compromised by high detector occupancy or any other effects;
\item no double charged particle candidates will be observed in the~first place.
\end{itemize}
Considering all of the~above, the~ATLAS and CMS collaborations may each be expected to set the~lower mass limits at the~level of $1.2$~\,{\rm TeV}{} based on their analyses of the~entire $13$~\,{\rm TeV}{} data set. If these two experiments combined their independently gathered statistics together for this kind of search, the~limits would go as high as up to $1.3$~\,{\rm TeV}{}.
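The logarithmic nature of such extrapolations can be illustrated with a~rough back-of-the-envelope sketch (ours; all numbers below are assumptions tuned only to reproduce the scale of the limits quoted above, not an actual experimental projection). For zero observed events, the~$95\%$~CL cross-section upper limit scales as $\sigma_{95}\simeq 3/(\epsilon L)$, and the mass limit is where an assumed falling theoretical cross section crosses it.
\begin{verbatim}
import math

eps = 0.10            # assumed signal efficiency (cf. Fig. 4)
A, m0 = 1.0e3, 144.0  # assumed sigma_th(m) = A*exp(-m/m0) fb, m in GeV

def mass_limit(lumi_fb):
    sigma_95 = 3.0 / (eps * lumi_fb)      # fb, zero observed events
    return m0 * math.log(A / sigma_95)    # solve A*exp(-m/m0) = sigma_95

for L in (36.0, 120.0, 240.0):            # 2015-16, full Run 2, combined
    print(f"L = {L:5.0f} fb^-1 -> m ~ {mass_limit(L):4.0f} GeV")
\end{verbatim}
With these illustrative inputs, doubling the data set gains only $m_0\ln 2\approx 100$~\,{\rm GeV}{}, consistent with the modest step from $1.2$ to $1.3$~\,{\rm TeV}{} expected from combining the two experiments.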
\section{Conclusions}
The existence of heavy stable neutral particles is one of the popular solutions for the dark matter problem. However, DM, while electrically neutral as a whole, can potentially be formed by stable heavy charged particles bound into neutral atom-like states by Coulomb attraction. Analysis of the cosmological data and the atomic composition of the Universe gives constraints on the particle charge,
showing that only -2 charged constituents, being trapped by primordial helium in neutral O-helium states, can avoid the problem of overproduction of the anomalous isotopes of chemical elements, which
are severely constrained by observations. Cosmological model of O-helium dark matter can even explain
puzzles of direct dark matter searches.
Stable charge -2 states ($X^{--}$) can be elementary like AC-leptons or technileptons, or look like
technibaryons. The latter, composed of techniquarks, reveal their structure at much higher energy scale
and should be produced at colliders and accelerators as elementary species. They can also be composite like ``heavy quark clusters'' $\bar U \bar U \bar U$ formed by anti-U quark in one of the models of fourth generation \cite{I} or $\bar u_5 \bar u_5 \bar u_5$ of
(anti)quarks $\bar u_5$ of stable 5th family in the approach \cite{Norma}.
In the context of composite dark matter scenario accelerator search for stable doubly charged leptons
acquires the meaning of direct critical test for existence of charged constituents of cosmological dark matter.
The signature of AC leptons and techniparticles is unique and distinctive, which allows separating them from other hypothetical exotic particles. Composite dark matter models can explain the observed excess of high energy positrons and of gamma radiation in the positron annihilation line for masses of $X^{--}$ in the range of $1$~\,{\rm TeV}{}; this makes the search for double charged particles in this range a direct experimental test of these predictions of composite dark matter models.
The test for composite $X^{--}$ can only be indirect: through the search for heavy hadrons, composed of a single $U$ or $\bar U$ and light quarks (similar to R-hadrons).
The~ATLAS and CMS collaborations at the~Large Hadron Collider are searching for the~double charged particles since 2011. The~most stringent results achieved so far exclude the~existence of such particles up to their mass of $680$~\,{\rm GeV}{}. This value was obtained by both ATLAS and CMS collaborations independently. It is expected that if these two collaborations combine their independently gathered statistics of LHC Run 2 (2015--2018), the~lower mass limit of double charged particles could reach the~level of about $1.3$~\,{\rm TeV}{}.
\section*{Acknowledgements}
We thank K.Belotsky, J.-R. Cudell, C. Kouvaris, M.Laletin for collaboration in development of the presented approach. The work was supported by Russian Science Foundation and fulfilled in the framework of MEPhI
Academic Excellence Project (contract 02.a03.21.0005, 27.08.2013).
\section*{Introduction}
A random tensor is a $d$-dimensional array of random variables. The joint probability distribution is typically chosen to be $U(N)^d$-invariant. Just like unitary invariance in random (Hermitian or complex) matrices selects matrix traces as natural invariant observables, the invariance under $U(N)^d$ points to a family of invariant observables, which are polynomials in the tensor entries. Those observables are labeled by connected, $d$-regular, edge-colored graphs, and one is interested in the large $N$ expansion of their expectations.
In matrix models \cite{mm-review-difrancesco}, expectations of observables $\tr M^n$ can be expanded onto Feynman graphs, actually maps whose genus is the exponent of $N$ in the expansion at large $N$. The Feynman expansion also applies to random tensors and gives rise to $(d+1)$-regular edge-colored graphs \cite{Gur3, Gur4, uncoloring}. Each Feynman graph inherits an $N$-dependent weight, the exponent of $N$ being called the \emph{degree} (which reduces to the genus at $d=2$). Those graphs are suited for a combinatorial analysis: the classification of colored graphs of fixed degree has been obtained in \cite{GurauSchaeffer} and the singularities of the generating functions are now known.
However, the calculation of expectations of invariant polynomials is still a challenging issue. Even in the simplest case, i.e. the Gaussian distribution, and even if one only asks for the leading order at large $N$ of an expectation, it is necessary to $i)$ identify the exponent of $N$ for the dominant contributions, $ii)$ find the numerical coefficient which counts the number of dominant contributions (dominant Wick contractions). Both problems are bubble-dependent.
A few cases have been studied. The so-called melonic polynomials all have a single leading order Wick contraction (as well as a generalization of this family including nearly-melonic polynomials) \cite{uncoloring}. A family of polynomials has been found in \cite{MeandersTensors} whose numbers of leading order Wick contractions are numbers of meandric systems.
It is worth emphasizing the importance of the Gaussian distribution in random tensor models. It was proved in \cite{universality} that any $U(N)^d$-invariant distribution satisfying some assumptions on its dependence with $N$ becomes Gaussian at large $N$ (the covariance does depend on the initial distribution) -- models which do not satisfy those assumptions (and not Gaussian at large $N$) are discussed in \cite{new1/N}. Moreover, models which do not have a trivial bare covariance, and therefore have a renormalization group flow, tend to be asymptotically free: they have a Gaussian fixed point (this happens surprisingly more often than in ordinary quantum field theory) \cite{FRGforTGFT, RenormalizableGeloun, BetaFunctionD=4, EpsilonCarrozza}. It is appealing to think this somehow comes from the universality theorem of \cite{universality}.
In addition to the purely combinatorial analysis of \cite{GurauSchaeffer}, matrix model techniques have recently been employed in tensor models. They apply to models with quartic interactions for which the Hubbard-Stratonovich transformation leads to multi-matrix models \cite{DSQuartic, BeyondPert, GenericQuartic}. Furthermore, as argued in \cite{DSSD}, the behavior of generic models can then be deduced from those of the quartic ones (universality phenomenon).
The relationship of such multi-matrix models to ordinary matrix models has been studied in \cite{FullyPacked} where they are described in terms of fully packed loop configurations on random surfaces generated by $U(\tau)$ models. The results of random tensor theory were applied in this context, leading to a classification of loop configurations which corresponds to the classification of colored graphs. This basically relied on interpreting a collection of matrices as a tensor.
Here we focus on the Gaussian distribution instead, and use another direct connection between random tensors and matrices. Splitting the set of $d$ indices of the tensor into two sets, we can consider the tensor as a (typically rectangular) matrix of size $N^{p}\times N^{d-p}$ and then use the singular value decomposition. However, tensor models are not $U(N^p)$ or $U(N^{d-p})$ invariant, meaning that the integrals over the angular degrees of freedom are non-trivial. When they can be performed at fixed singular values, an effective theory on the singular values is obtained.
Although it is a simple idea, it is only the first time that it is applied explicitly to tensor models. As a first step in this program, we consider here the case of the calculation of expectations of polynomial observables in a Gaussian distribution. The angular integrals are integrals of polynomials in the unitary matrix elements and the formula of \cite{Collins} can be applied. In concrete situations where the degree of the polynomial is not too large, exact results are obtained. We also show that large $N$ behaviors can be extracted using the diagrammatic method of \cite{DiagrammaticU(N)} on a family of observables which generalize the melonic ones (basically, trees, like in the melonic case, formed by the gluing of matrix trace-invariants).
In addition to providing so far unexplored relationships between random tensors and matrices, the approach of \cite{FullyPacked} and the present paper shows the difficulties faced in random tensor theory in the familiar context of matrix models.
\section{Gaussian expectations in random tensor theory} \label{sec:Gaussian}
The work presented in \cite{FullyPacked} relied on the simple observation that a set of matrices with a unitary symmetry among them can be seen as a tensor equipped with independent unitary transformations on its indices. There is another simple way to relate tensors to matrices, which is to consider a tensor as a (typically rectangular) matrix between a subset of indices to the subset of the other indices. For a tensor on $d$ indices, seen as a linear form on $\bigoplus_{i=1}^d V_{i}$, one picks up a subset $\mathcal{C}=\{i_1,\dotsc,i_{|\mathcal{C}|}\} \subset \{1,\dotsc,d\}$ and denotes its complement $\{k_1,\dotsc,k_{d-|\mathcal{C}|}\} = \{1,\dotsc,d\}\setminus \mathcal{C}$. One defines $M$ as a linear application from $\bigoplus_{p=1}^{d-|\mathcal{C}|} V_{k_p}$ to $\bigoplus_{j=1}^{|\mathcal{C}|} V_{i_j}$.
Of course, there are multiple ways to choose $\mathcal{C}$, but depending on the observables one is interested in, some choices are better than others. We recall that a basis of observables is formed by the polynomials $B(T,\overline{T})$ labeled by bubbles. A \emph{bubble} is a connected bipartite $d$-regular edge-colored graph, with colors $1,\dotsc,d$ such that all $d$ colors are incident to each vertex (see Figure \ref{fig:Necklace}). The polynomial associated to a bubble is built by writing a tensor $T$ for each white vertex and its conjugated tensor $\overline{T}$ for each black vertex. When an edge of color $i$ connects a white to a black vertex, the indices in position $i$ of the corresponding $T$ and $\overline{T}$ are identified (with a Kronecker delta) and summed over from $1$ to $N$. Such polynomials are invariant under $U(N)^d$, \cite{uncoloring}.
Loosely speaking, a ``good'' choice of the subset $\mathcal{C}$ relative to $B(T,\overline{T})$ is one that minimizes the degrees of freedom entering $B$. A natural way of separating the degrees of freedom of $T,\overline{T}$ is to use the singular value decomposition with respect to $\mathcal{C}$, i.e. $M=UDV$, where $U,V$ are unitary and $D$ is made of a square diagonal matrix completed with some rows or columns of zeros. For a few families of polynomials $B$, the choice of $\mathcal{C}$ is obvious, e.g. when there exists one such that $B(T,\overline{T}) = B(D)$ is a function of the singular values only.
This last situation, however, requires focusing on polynomials of the form $\tr (MM^\dagger)^n$ which we call \emph{necklaces}. Their bubbles consist of single cycles on $2n$ vertices connected by two types of multiple edges, those with colors in $\mathcal{C}$ and those with colors in the complementary subset (see example in figure \ref{fig:Necklace}).
\begin{figure}
\includegraphics[scale=.5]{NecklaceFiveColors.pdf}
\caption{\label{fig:Necklace} This is the necklace on five colors with $\mathcal{C}=\{3,5\}$ and 6 vertices.}
\end{figure}
Generically, a polynomial $B(T,\overline{T}) = B(U,V,D)$ depends on all the degrees of freedom of the singular value decomposition. This is simply because a polynomial invariant under the action of $U(N_1)\times \dotsb \times U(N_d)$ is generically not invariant under $U(\prod_{j=1}^{|\mathcal{C}|} N_{i_j}) \times U(\prod_{p=1}^{d-|\mathcal{C}|} N_{k_p})$. Notice that the Gaussian measure however is independent of the angular matrices $U,V$, i.e. $T\cdot \overline{T} = \tr MM^\dagger$ for any choice of $\mathcal{C}$. Therefore the \emph{Gaussian} expectation of a polynomial reads
\begin{equation}
\langle B(T,\overline{T})\rangle = \frac1Z\, \int d\mu(\{\lambda_i\})\,e^{-N^{d-1}\sum_i \lambda_i^2} \int dU\,dV\ B(U,V,\{\lambda_i\}).
\end{equation}
Here $\{\lambda_i\geq0\}$ denotes the set of singular values, $d\mu(\{\lambda_i\})$ the measure inherited from the change of variables, while $dU, dV$ are the Haar measures on unitary matrices of the appropriate sizes.
This provides a notion of effective polynomial of $B$ with respect to $\mathcal{C}$ which is a function over the singular values,
\begin{equation} \label{AngularIntegral}
B_{\mathcal{C}}(\{\lambda_i\}) = \int dU\,dV\ B(U,V,\{\lambda_i\}).
\end{equation}
Then, the expectation of $B$ is just the expectation of $B_{\mathcal{C}}$ in the complex Gaussian Wishart ensemble (also known as the Laguerre ensemble, from the family of relevant orthogonal polynomials for this measure). The integral over the angular variables results in an expansion onto $U(\prod_{j=1}^{|\mathcal{C}|} N_{i_j}) \times U(\prod_{p=1}^{d-|\mathcal{C}|} N_{k_p})$-invariants,
\begin{equation}
B_{\mathcal{C}}(\{\lambda_i\}) = \sum_k \sum_{l_1,\dotsc,l_k} c^{(B)}_{l_1,\dotsc,l_k}(N_1,\dotsc,N_d)\ \Bigl(\sum_{i_1} \lambda_{i_1}^{2l_1}\Bigr) \dotsm \Bigl(\sum_{i_k} \lambda_{i_k}^{2l_k}\Bigr),
\end{equation}
and therefore
\begin{equation}
\langle B(T,\overline{T}) \rangle = \sum_k \sum_{l_1,\dotsc,l_k} c^{(B)}_{l_1,\dotsc,l_k}(N_1,\dotsc,N_d)\ \langle \Bigl(\sum_{i_1} \lambda_{i_1}^{2l_1}\Bigr) \dotsm \Bigl(\sum_{i_k} \lambda_{i_k}^{2l_k}\Bigr) \rangle_{\text{Laguerre}}.
\end{equation}
Taking all the tensor indices to have range $N$, the expectation of the product of single-trace invariants factorizes as the product of the expectations in the large $N$ limit. There are two possible cases,
\begin{equation}
\langle \sum_i \lambda_i^{2l} \rangle = \langle \tr (MM^\dagger)^{l} \rangle \sim_{N\to\infty} \begin{cases} 1 & \text{if $|\mathcal{C}| \neq d-|\mathcal{C}|$},\\ \operatorname{Cat}_l & \text{if $|\mathcal{C}| = d-|\mathcal{C}|$}. \end{cases}
\end{equation}
The symbol $\sim$ here also means up to some power of $N$ and $\operatorname{Cat}_l$ is the $l$-th Catalan number. When the matrix is rectangular, it is quite unbalanced since the ratio of its dimensions goes to either zero or infinity, and as a result a single Wick contraction survives at large $N$. When $\mathcal{C}$ (or its complement) is a singlet, it means that $B$ is a cycle of melons inserted on a fixed color and the result follows from \cite{universality}. In the other cases, it is an application of the results of \cite{new1/N} to the Gaussian distribution (which rely on the fact that there is a melonic subgraph which visits all the vertices). Only when $M$ is a square matrix one recovers the familiar Catalan numbers of Gaussian matrix models.
Several methods have been developed to deal with integrals over the unitary group, \cite{U(N)IntegralsAubert, ZinnZuber}. For our purpose, since the function $B(U,V,\{\lambda_i\})$ is polynomial in the matrix elements of $U,U^\dagger,V,V^\dagger$, the following formula \cite{Collins} seems to be the most natural to perform the integral \eqref{AngularIntegral},
\begin{equation} \label{U(N)Integral}
\int_{U(N)} dU\ U_{a_1 \alpha_1} \dotsm U_{a_n \alpha_n}\, \overline{U}_{b_1 \beta_1} \dotsm \overline{U}_{b_n \beta_n} = \sum_{\sigma,\tau\in\mathfrak{S}_n} \delta_{a_1,b_{\sigma(1)}}\dotsm \delta_{a_n, b_{\sigma(n)}}\,\delta_{\alpha_1, \beta_{\tau(1)}} \dotsm \delta_{\alpha_n, \beta_{\tau(n)}}\ \operatorname{Wg}_N(\sigma\tau^{-1}).
\end{equation}
$\sigma$ and $\tau$ run over the symmetric group on $n$ elements $\mathfrak{S}_n$, and $\operatorname{Wg}_N$ is a Weingarten function over $\mathfrak{S}_n$.
In principle, this provides a way to compute the effective polynomial $B_{\mathcal{C}}$, and there even exists a diagrammatic expansion which enables to control the scaling with $N$ of the various terms involved in the sums over permutations \cite{DiagrammaticU(N)}. Yet in practice, the number $n$ of matrix elements of $U$ (and similarly for $V$) cannot be too large as the sum over permutations becomes unmanageable for relatively large $n$.
We now set $d=4$ and $\mathcal{C} = \{2,4\}$ and illustrate the method on two examples: a simple, but exact calculation, and a large $N$ behavior on a family of observables. We write $T_{a_1 a_2 a_3 a_4} = M_{\begin{smallmatrix} a_1\\ a_3\end{smallmatrix} \begin{smallmatrix} a_2\\ a_4\end{smallmatrix}}$ where a column of tensor indices represents an index of $M$ with range $N^2$. Therefore $MM^\dagger = UD^2 U^\dagger$ is a $N^2\times N^2$ matrix $V_1\otimes V_3 \to V_1\otimes V_3$.
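Concretely, in e.g.\ NumPy this reshaping reads as follows (a sketch of our own; the only nontrivial point is the interleaving of the indices, and one can check that $T\cdot \overline{T} = \tr MM^\dagger$ as stated above).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N = 4
T = rng.standard_normal((N,)*4) + 1j*rng.standard_normal((N,)*4)

# rows indexed by (a1, a3), columns by (a2, a4), i.e. C = {2, 4}
M = T.transpose(0, 2, 1, 3).reshape(N*N, N*N)

assert np.isclose(np.vdot(T, T).real,
                  np.trace(M @ M.conj().T).real)
lam = np.linalg.svd(M, compute_uv=False)   # the singular values
\end{verbatim}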
\subsection{A simple, exact calculation} \label{sec:ExactCalculation}
If $B$ is a bubble in which the edges of color 2 and 4 always connect the same vertices, then the associated polynomial is a function of $MM^\dagger$. Moreover, the edges of color 2 count the degree in $MM^\dagger$. We consider the following bubble, with $k$ edges of color 2 on the top and $l$ at the bottom of the drawing,
\begin{equation} \label{BubbleEdgeTree}
\begin{array}{c} \includegraphics[scale=.4]{BubbleCatCat.pdf} \end{array} =
\tr_1 \Bigl(\tr_3(MM^\dagger)^{k}\,\tr_3(MM^\dagger)^{l} \Bigr) = \sum_{a_1,a_3,b_1,b_3=1}^N (MM^\dagger)^{k}_{\begin{smallmatrix} a_1\\ a_3\end{smallmatrix} \begin{smallmatrix} b_1\\ a_3\end{smallmatrix}}\ (MM^\dagger)^{l}_{\begin{smallmatrix} b_1\\ b_3\end{smallmatrix} \begin{smallmatrix} a_1\\ b_3\end{smallmatrix}},
\end{equation}
using an obvious partial trace notation. By writing $MM^\dagger = UD^2U^\dagger$ with $D^2 = \operatorname{diag}(\{\lambda_{\begin{smallmatrix} \alpha_2\\ \alpha_4\end{smallmatrix}}\})$, we find
\begin{equation}
\sum U_{\begin{smallmatrix} a_1\\ a_3\end{smallmatrix} \begin{smallmatrix} \alpha_2\\ \alpha_4\end{smallmatrix}} \lambda_{\begin{smallmatrix} \alpha_2\\ \alpha_4\end{smallmatrix}}^{2k} U^\dagger_{\begin{smallmatrix} \alpha_2\\ \alpha_4\end{smallmatrix} \begin{smallmatrix} b_1\\ a_3\end{smallmatrix}}\, U_{\begin{smallmatrix} b_1\\ b_3\end{smallmatrix} \begin{smallmatrix} \beta_2\\ \beta_4\end{smallmatrix}} \lambda_{\begin{smallmatrix} \beta_2\\ \beta_4\end{smallmatrix}}^{2l} U^\dagger_{\begin{smallmatrix} \beta_2\\ \beta_4\end{smallmatrix} \begin{smallmatrix} a_1\\ b_3\end{smallmatrix}}
\end{equation}
The integral over $U(N^2)$ is simple enough since there are only two permutations on two elements,
\begin{multline}
\int_{U(N^2)} dU\ U_{\begin{smallmatrix} a_1\\ a_3\end{smallmatrix} \begin{smallmatrix} \alpha_2\\ \alpha_4\end{smallmatrix}}\,U_{\begin{smallmatrix} b_1\\ b_3\end{smallmatrix} \begin{smallmatrix} \beta_2\\ \beta_4\end{smallmatrix}}\,\overline{U}_{\begin{smallmatrix} b_1\\ a_3\end{smallmatrix} \begin{smallmatrix} \alpha_2\\ \alpha_4\end{smallmatrix}}\,\overline{U}_{\begin{smallmatrix} a_1\\ b_3\end{smallmatrix} \begin{smallmatrix} \beta_2\\ \beta_4\end{smallmatrix}} = \Wg_{N^2}(1^2) \Bigl(\delta_{a_1,b_1} + \delta_{a_3,b_3}\delta_{\alpha_2,\beta_2}\delta_{\alpha_4,\beta_4} \Bigr) \\
+ \Wg_{N^2}(2) \Bigl(\delta_{a_3,b_3} + \delta_{a_1, b_1}\delta_{\alpha_2,\beta_2}\delta_{\alpha_4,\beta_4} \Bigr).
\end{multline}
Only the cycle structure of the arguments of the Weingarten functions is retained (they are class functions), so that $1^2$ is the (class of the) identity and $2$ the (class of the) transposition. Moreover,
\begin{equation}
\Wg_{N^2}(1^2) = \frac{1}{N^4-1},\qquad \Wg_{N^2}(2) = -\frac{1}{N^2(N^4-1)}.
\end{equation}
Performing the sums, we obtain
\begin{equation}
\sum_{a_1,b_1,a_3,b_3} \int_{U(N^2)} dU\ U_{\begin{smallmatrix} a_1\\ a_3\end{smallmatrix} \begin{smallmatrix} \alpha_2\\ \alpha_4\end{smallmatrix}}\,U_{\begin{smallmatrix} b_1\\ b_3\end{smallmatrix} \begin{smallmatrix} \beta_2\\ \beta_4\end{smallmatrix}}\,\overline{U}_{\begin{smallmatrix} b_1\\ a_3\end{smallmatrix} \begin{smallmatrix} \alpha_2\\ \alpha_4\end{smallmatrix}}\,\overline{U}_{\begin{smallmatrix} a_1\\ b_3\end{smallmatrix} \begin{smallmatrix} \beta_2\\ \beta_4\end{smallmatrix}} = \frac{N}{N^2+1} \bigl(1+\delta_{\alpha_2,\beta_2}\,\delta_{\alpha_4,\beta_4}\bigr),
\end{equation}
which is the tensor that has to be contracted with the singular values. Finally, we arrive at
\begin{equation}
\begin{aligned}
B_{\mathcal{C}} (\{ \lambda_{\begin{smallmatrix} \alpha_2\\ \alpha_4\end{smallmatrix}} \}) &= \frac{N}{N^2+1} \Bigl( \bigl(\sum_{\alpha_2,\alpha_4} \lambda_{\begin{smallmatrix} \alpha_2\\ \alpha_4\end{smallmatrix}}^{2k} \bigr)\,\bigl(\sum_{\beta_2,\beta_4} \lambda_{\begin{smallmatrix} \beta_2\\ \beta_4\end{smallmatrix}}^{2l} \bigr) + \sum_{\alpha_2,\alpha_4} \lambda_{\begin{smallmatrix} \alpha_2\\ \alpha_4\end{smallmatrix}}^{2k+2l} \Bigr)\\
&= \frac{N}{N^2+1} \Bigl(\tr(MM^\dagger)^{k}\,\tr(MM^\dagger)^{l} + \tr (MM^\dagger)^{k+l}\Bigr).
\end{aligned}
\end{equation}
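Before extracting the large $N$ behavior, the closed form above can be checked numerically, since $B_{\mathcal{C}}$ is by definition the Haar average of the bubble at fixed singular values. The snippet below is a minimal sketch (the sample size and the arbitrary fixed values of the $\lambda^2$'s are our own choices):
\begin{verbatim}
import numpy as np
from scipy.stats import unitary_group

N, k, l, samples = 3, 2, 3, 20000
rng = np.random.default_rng(0)
lam2 = rng.uniform(0.5, 1.5, N*N)     # arbitrary fixed lambda^2's

acc = 0.0
for U in unitary_group.rvs(N*N, size=samples):
    W = (U * lam2) @ U.conj().T       # W = U D^2 U^dagger
    ptr3 = lambda p: np.einsum('akbk->ab',
           np.linalg.matrix_power(W, p).reshape(N, N, N, N))
    acc += np.trace(ptr3(k) @ ptr3(l)).real   # the bubble defined above
closed = N/(N**2 + 1.0)*(np.sum(lam2**k)*np.sum(lam2**l)
                         + np.sum(lam2**(k + l)))
print(acc/samples, closed)            # Monte Carlo vs closed form
\end{verbatim}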
The large $N$ limit of the expectation can be easily extracted. With the Gaussian normalized to a covariance $1/N^2$, the double-trace term dominates (each trace bringing a factor $N^2$) and factorizes, so that
\begin{equation}
\langle B(T,\overline{T})\rangle \underset{\text{large $N$}}{=} N^3\ \operatorname{Cat}_{k}\,\operatorname{Cat}_{l}.
\end{equation}
(The scaling factor $N^3$ is due to the fact that the bubble has a single cycle with colors $(1,2)$ and two cycles of colors $(3,4)$, see \cite{new1/N}.)
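The same factorization can be probed directly in the Gaussian distribution, without performing the angular integral by hand. The following minimal sketch (the values of $N$, $k$, $l$ and the sample size are our own choices; at such moderate $N$ the subleading single-trace term still produces visible corrections) compares the direct evaluation of the bubble, its angular-averaged form, which agree in expectation at any $N$ by unitary invariance of the Gaussian measure, and the leading large $N$ prediction:
\begin{verbatim}
import numpy as np
from math import comb

N, k, l, samples = 6, 2, 3, 400
n2 = N*N
cat = lambda p: comb(2*p, p)//(p + 1)    # Catalan numbers
rng = np.random.default_rng(1)
lhs, rhs = [], []
for _ in range(samples):
    M = (rng.standard_normal((n2, n2))
         + 1j*rng.standard_normal((n2, n2)))/(np.sqrt(2.0)*N)
    W = M @ M.conj().T                   # <|M_ij|^2> = 1/N^2
    Wk = np.linalg.matrix_power(W, k)
    Wl = np.linalg.matrix_power(W, l)
    ptr3 = lambda A: np.einsum('akbk->ab', A.reshape(N, N, N, N))
    lhs.append(np.trace(ptr3(Wk) @ ptr3(Wl)).real)
    rhs.append((N/(N**2 + 1.0))*(np.trace(Wk)*np.trace(Wl)
                                 + np.trace(Wk @ Wl)).real)
print(np.mean(lhs), np.mean(rhs))   # equal expectations at any N
print(N**3*cat(k)*cat(l))           # leading large-N prediction
\end{verbatim}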
\subsection{Large $N$ behavior} \label{sec:CatalanTrees}
When the degree of the polynomial in $U$ (and/or $V$) which has to be integrated is large, an exact result becomes difficult to extract and depends largely on the combinatorics of the initial bubble. Even the asymptotic behavior can be difficult to evaluate. However, in some cases, the method still gives access to the large $N$ limit.
We consider a family of observables built in the following way. We first define the \emph{open necklace of color $i$} as the necklace with a line of color $i$ cut into two halves (see figure \ref{fig:OpenNecklace}). Then the construction starts with a necklace of length $k_1$. A random edge of color 1 or 3 is deleted and instead an open necklace of the appropriate color is attached so as to get a bipartite graph. On this new graph, a randomly chosen edge of color 1 or 3 is removed and replaced by an open necklace of the same color. The family of interest in this section consists of graphs built by continuing this process recursively a finite number of times.
\begin{figure}
\includegraphics[scale=.35]{Necklace.pdf}
\hspace{3cm}
\includegraphics[scale=.35]{OpenNecklace.pdf}
\caption{\label{fig:OpenNecklace} On the left is a necklace and on the right an open necklace of color 1.}
\end{figure}
To avoid redundancies in the construction, one can consider the rooted observables. The construction starts with an open necklace of color 1. Then the process of piling up open necklaces is similar, except that after an insertion on the color $i$, further insertions on the newly created edge of this color and incident to the black vertex of the inserted open necklace are forbidden. All other edges of color 1 and 3 are called active edges. This way, there is a partial order between the open necklaces used to build up the observables. A necklace (a child) is smaller than another one (a parent) if it is inserted on an active edge of the other one. Every necklace (but the root one) has a single parent.
There is a simple bijection between the set of such rooted observables and a family of rooted, corner-labeled, plane trees. It is a generalization of the bijection between melonic graphs and trees explained in \cite{critical-colored}, in the sense that the case of melonic graphs with melons on the colors 1 and 3 only is recovered by removing the labels on the trees. The tree associated to an observable simply represents the relations of partial order between the necklaces. Vertices correspond to open necklaces, and edges of color 1 and 3 represent the child/parent relationships. Notice that a typical necklace however has several edges of color 1 and 3, so that further decorations are required to keep track of the specific edges on which insertions are performed. One defines the distance between two edges incident to the same vertex and both connected to children of this vertex as the number of edges of color 2 which separate the two insertions on the necklace corresponding to this vertex. The distance at a vertex $v$ between an edge connected to a child of this vertex and the edge connected to the parent of $v$ is the number of edges of color 2 around the necklace corresponding to $v$ between the edge incident to a black vertex which goes to the parent necklace and the edge on which the insertion of the child necklace is performed. In the tree, those distances are encoded by preserving the cyclic ordering of the insertions around each vertex, so that they label the corners. An example is given in figure \ref{fig:TreeVertex}.
\begin{figure}
\includegraphics[scale=.35]{TreeVertex.pdf}
\caption{\label{fig:TreeVertex} This represents a typical necklace within an observable, which is mapped to a vertex of the corresponding tree. If we assume the edge connected to the parent is the open one incident to the black vertex and of color 1 on the left, then the distances labeling the corners (going counter-clockwise) read 3, 2, 0, 1, 3, 0. The final 0 is only here when there is a necklace insertion on the open edge of color 1 on the left, incident to the white vertex.}
\end{figure}
The bubble considered in the equation \eqref{BubbleEdgeTree} is represented by the tree with two vertices and a single edge of color 1 between them. It has two corners, one with label $k$ and one with label $l$. (The necklace itself could be seen as the tree consisting of a single vertex, which has no corner, although an integer is needed for the length.) If $C(\mathcal{T})$ denotes the set of corners of the tree $\mathcal{T}$, $V(\mathcal{T})$ its set of vertices, and $\{k_c\}_{c\in C(\mathcal{T})}$ the set of integers attached to the corners of $\mathcal{T}$, the total length around the vertex $v\in V(\mathcal{T})$ is $k_v = \sum_{\text{$c$ around $v$}} k_c$. Then
\begin{equation} \label{CatProd}
\langle B_{\mathcal{T}}(T,\overline{T})\rangle \underset{\text{large $N$}}{=} \prod_{v\in V(\mathcal{T})} \operatorname{Cat}_{k_v}.
\end{equation}
To prove this, we will show that the removal of a leaf with label $l$ and its incident edge in the tree amounts to factorizing the Catalan number $\operatorname{Cat}_l$. Zooming on the explicit dependence of $B_{\mathcal{T}}$ at such a leaf, say with incident edge of color 1, we have
\begin{equation}
B_{\mathcal{T}} = \sum_{a_1,b_1=1}^N \Bigl[\tr_3 (MM^\dagger)^l\Bigr]_{a_1 b_1}\,\Bigl[f(MM^\dagger)\Bigr]_{b_1 a_1} = \sum U_{\begin{smallmatrix} a_1\\ a_3\end{smallmatrix} \begin{smallmatrix} \alpha_2\\ \alpha_4\end{smallmatrix}} \lambda_{\begin{smallmatrix} \alpha_2\\ \alpha_4\end{smallmatrix}}^{2l} \overline{U}_{\begin{smallmatrix} b_1\\ a_3\end{smallmatrix}\begin{smallmatrix} \alpha_2\\ \alpha_4\end{smallmatrix}}\,\Bigl[f(UD^2U^\dagger)\Bigr]_{b_1 a_1}.
\end{equation}
Here $f$ is a matrix-valued function corresponding to chopping off the leaf and half the incident edge.
To perform the integral over $U$, we use the diagrammatic method of \cite{DiagrammaticU(N)}. First, one gets a diagram corresponding to the integrand and then the expansion of the integral \eqref{U(N)Integral} is obtained as a sum over decorations of this diagram by additional edges (akin to Wick contractions of Feynman diagrams). The matrix element of $U$ is represented by an edge between a black vertex (corresponding to the left indices, of colors 1, 3) and a white vertex (corresponding to the right indices, of colors 2, 4). For $\overline{U}$, the edge is simply decorated with a star. The matrix elements of powers of $D^2$ are represented as edges with a box on them. Finally the indices of colors 1, 3 are drawn explicitly as half-edges. This way,
\begin{equation}
\sum_{\alpha_2,\alpha_4=1}^N U_{\begin{smallmatrix} a_1\\ a_3\end{smallmatrix} \begin{smallmatrix} \alpha_2\\ \alpha_4\end{smallmatrix}} \lambda_{\begin{smallmatrix} \alpha_2\\ \alpha_4\end{smallmatrix}}^{2l} \overline{U}_{\begin{smallmatrix} b_1\\ b_3\end{smallmatrix}\begin{smallmatrix} \alpha_2\\ \alpha_4\end{smallmatrix}} = \begin{array}{c} \includegraphics[scale=.65]{MD2Mdagger.pdf} \end{array}
\end{equation}
The integral is performed using \eqref{U(N)Integral} and $n$ denotes the number of matrix elements of $U$. Each term in the resulting sum over permutations $\sigma, \tau\in\mathfrak{S}_n$ is represented as a diagram obtained from the one of the integrand by adding a dashed edge for each Kronecker delta. The permutation $\sigma$ (respectively $\tau$) is thus represented by dashed edges connecting black (respectively white) vertices two by two. Each diagram is then weighted with the Weingarten function $\Wg_{N^2}(\sigma\tau^{-1})$.
Consider a diagram where $\bar{v}_1$ and $v_1$ are not connected to $\bar{v}_2$ and $v_2$, but rather to other vertices from other parts of the tree, as in the left of figure \ref{fig:LeafContraction}. The rectangle there stands for an arbitrary configuration of the remaining dashed edges. We are going to compare this situation to another diagram obtained from it by cutting the four dashed edges in halves and reconnecting them as on the right of figure \ref{fig:LeafContraction} (all dashed lines which are not drawn are untouched), and in particular to compare their amplitudes as functions of $N$. The $N$-dependence is found by counting the number of independent sums of range $N$ after applying \eqref{U(N)Integral}. There are several types of independent sums, which can all be tracked diagrammatically.
\begin{itemize}
\item Each cycle consisting of alternating dashed edges and edges of color 1 (resp. 3) corresponds to a sum over an index with the color 1 (resp. 3), hence it brings a factor $N$. The number of such cycles is denoted $F_{1}$ (resp. $F_{3}$).
\item Each cycle consisting of alternating dashed edges and edges with a box corresponds to a sum over indices with the colors 2 and 4, hence it brings a factor $N^2$. The number of such cycles is denoted $F_{\square}$.
\item The Weingarten function $\Wg_{N^2}(\sigma\tau^{-1})$ is a function of the cycle structure of $\sigma\tau^{-1}$, which we write $(1^{p_1} 2^{p_2} \dotsb n^{p_n})$ (meaning $p_j$ cycles of length $j$, with $\sum_j j p_j = n$). Each cycle is represented in the diagram as a cycle consisting of alternating dashed edges and edges of $U$ and $\overline{U}$. The number of cycles is denoted $F_0 = \sum_j p_j$. Moreover, the Weingarten function asymptotically behaves as
\begin{equation}
\Wg_{N^2}(1^{p_1} 2^{p_2} \dotsb n^{p_n}) \underset{\text{large $N$}}{=} (N^2)^{F_0-2n} \prod_{j=1}^n \Bigl[(-1)^{j-1}\, \operatorname{Cat}_{j-1}\Bigr]^{p_j}.
\end{equation}
That implies that each cycle comes with a factor $N^2$; an elementary numerical illustration of this scaling is given right after this list.
\end{itemize}
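As announced in the last item, the exact $n=2$ Weingarten values of Sect. \ref{sec:ExactCalculation} can be compared with the asymptotic formula (a minimal sketch; the dimensions probed are arbitrary):
\begin{verbatim}
# dim plays the role of N^2, the size of the unitary group
for dim in (4, 16, 64, 256):
    wg_id = 1.0/(dim**2 - 1)            # exact Wg(1^2) at n = 2
    wg_tr = -1.0/(dim*(dim**2 - 1))     # exact Wg(2)   at n = 2
    # rescaling by dim^(2n - F0), the two ratios tend to the
    # predicted signed Catalan products, +1 and -Cat_1 = -1
    print(dim, wg_id*dim**2, wg_tr*dim**3)
\end{verbatim}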
We can now evaluate the variation of all those quantities between the left and the right of figure \ref{fig:LeafContraction}.
\begin{itemize}
\item The two dashed lines incident to the black vertices on the left are parts of either one or two cycles with color 1 (depending on the details inside the rectangle) which gives two or one cycles on the right, hence $|\delta F_1|\leq 1$.
\item The two dashed lines incident to the black vertices on the left belong to one cycle with color 3, which splits into two cycles on the right, $\delta F_3 =1$.
\item The two dashed lines incident to the white vertices on the left belong to one cycle of type ``box'', which splits into two cycles on the right, $\delta F_{\square} =1$.
\item The four dashed lines on the left can either belong to two different cycles of the permutation $\sigma\tau^{-1}$ (one going along the edge $(\bar{v}_1 v_1)$ and the other along $(\bar{v}_2 v_2)$) or to the same one. On the right, we have a cycle $(\bar{v}_1 v_1 v_2 \bar{v}_2)$ and at least another one in the rectangle. Thus, $\delta F_0 \geq 0$.
\end{itemize}
Therefore the amplitude of the diagram on the right dominates that of the diagram on the left by at least a factor $N^{2\delta F_{\square}} = N^2$. This means that for every configuration like the left-hand side of figure \ref{fig:LeafContraction}, there is an associated configuration which is enhanced. The same conclusion is reached by means of similar arguments if instead of the left-hand side of figure \ref{fig:LeafContraction} there is a dashed edge between $v_1$ and $v_2$ or between $\bar{v}_1$ and $\bar{v}_2$.
\begin{figure}
\includegraphics[scale=.5]{LeafNonOptimal.pdf}
\hspace{4cm}
\includegraphics[scale=.5]{LeafOptimal.pdf}
\caption{\label{fig:LeafContraction} The solid, non-colored edges represent matrix elements of $U$, and $\overline{U}$ with a star, edges with boxes represent singular values. Contractions of indices of colors 1, 3 are represented with edges incident to black vertices and the dashed edges correspond to the Kronecker deltas induced by the permutations $\sigma, \tau$ in \eqref{U(N)Integral}. The vertices $v_1, v_2, \bar{v}_1, \bar{v}_2$ represent the matrix indices on a leaf of a tree and the rectangle contains the rest of the tree. The diagram on the right is obtained from the one of the left by cutting the dashed edges which are drawn and reconnecting them so that $v_1$ is connected to $v_2$ and $\bar{v}_1$ to $\bar{v}_2$.}
\end{figure}
In conclusion, all dominant contributions to the angular integral have a dashed edge between $v_1$ and $v_2$ and one between $\bar{v}_1$ and $\bar{v}_2$, which fixes one value of $\sigma$ and one value of $\tau$. This forces $\sigma\tau^{-1}$ to have a fixed point, and this cycle contributes a trivial factor of 1 to the Weingarten function. Therefore, up to factors of $N$,
\begin{equation}
\begin{aligned}
\int_{U(N^2)} B_{\mathcal{T}}(U,\{\lambda_i\}) dU &= \int_{U(N^2)} dU \sum U_{\begin{smallmatrix} a_1\\ a_3\end{smallmatrix} \begin{smallmatrix} \alpha_2\\ \alpha_4\end{smallmatrix}} \lambda_{\begin{smallmatrix} \alpha_2\\ \alpha_4\end{smallmatrix}}^{2l} \overline{U}_{\begin{smallmatrix} b_1\\ a_3\end{smallmatrix}\begin{smallmatrix} \alpha_2\\ \alpha_4\end{smallmatrix}}\,\Bigl[f(UD^2U^\dagger)\Bigr]_{b_1 a_1} \\
&\underset{\text{large $N$}}{=} \Bigl(\sum_{\alpha_2, \alpha_4} \lambda_{\begin{smallmatrix} \alpha_2\\ \alpha_4\end{smallmatrix}}^{2l}\Bigr) \int_{U(N^2)} dU\,\sum_{a_1} \Bigl[f(UD^2U^\dagger)\Bigr]_{a_1 a_1}.
\end{aligned}
\end{equation}
The expectation of this product factorizes at large $N$, so that $\sum_{\alpha_2, \alpha_4} \lambda_{\begin{smallmatrix} \alpha_2\\ \alpha_4\end{smallmatrix}}^{2l}$ yields $N^2 \operatorname{Cat}_{l}$, while the trace of $f(MM^\dagger)$ is simply the bubble polynomial labeled by the tree $\mathcal{T}\setminus (\text{leaf, incident edge})$, where the two corners incident to the edge on the parent vertex are merged and their labels added. An induction from the leaves to the root leads to \eqref{CatProd}.
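For bookkeeping, the prediction \eqref{CatProd} is immediate to evaluate once the corner labels of the tree are specified. The helper below is a minimal sketch (encoding the tree as a plain list of total vertex lengths $k_v$ is our own illustrative convention, not notation used above):
\begin{verbatim}
from math import comb

catalan = lambda p: comb(2*p, p)//(p + 1)

def leading_expectation(total_lengths):
    # total_lengths: one k_v per vertex, k_v being the sum of
    # the corner labels around that vertex
    out = 1
    for kv in total_lengths:
        out *= catalan(kv)
    return out

# Two-vertex tree of the previous subsection (one corner per
# vertex, with labels k = 2 and l = 3): Cat_2 * Cat_3 = 10
print(leading_expectation([2, 3]))
\end{verbatim}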
\section*{Conclusion}
In this short note, both approaches of \cite{MeandersTensors} and \cite{FullyPacked} have been pursued:
\begin{itemize}
\item the exploration of the relations between tensor and matrix models, as in \cite{FullyPacked};
\item the use of these relations to evaluate new Gaussian expectations in random tensor theory, in particular for non-melonic polynomials, supplementing the results of \cite{MeandersTensors} through another method.
\end{itemize}
The starting point is as simple as that of \cite{FullyPacked}: a tensor can be seen as a (typically rectangular) matrix with one index being a $p$-tuple of tensor indices and the other index a $(d-p)$-tuple ($d$ being the total number of indices). Just as in \cite{FullyPacked}, unitary transformations and symmetries play a major role. But instead of offering a novel presentation of the results of random tensor theory, we rather apply tools from random matrices, namely integrals over the unitary group.
In Section \ref{sec:Gaussian}, we have embedded our tensor in a matrix space, but the polynomials of interest in tensor theory do not have a group of unitary symmetries as large as the matrix trace-invariants. As a consequence, a typical tensor observable depends on the angular degrees of freedom in addition to the singular values of the matrix. In a Gaussian distribution, integrating the observable over its angular variables leads to a notion of effective observable, which can be written as a sum of products of traces. This approach enables exact calculations in concrete cases, as shown in Section \ref{sec:ExactCalculation}.
We have shown in section \ref{sec:CatalanTrees} that in more complicated cases (i.e. when the degree of the polynomial to be integrated over the unitary group can be arbitrarily large) asymptotic calculations may still be possible. This was illustrated on a family of observables built from matrix trace-invariants glued together in a tree-like fashion.
As is well known, the large $N$ limit of tensor models (equipped with their standard scaling) with melonic interactions is Gaussian \cite{universality}: from a matrix viewpoint, it is as if the singular values all localize at the minimum of the potential \cite{toy-doublescaling,SDEs,IntermediateT4}. The method used here in Section \ref{sec:CatalanTrees} allows one to evaluate expectations for \emph{non-melonic} observables, ones for which the distribution of singular values has a non-vanishing width. This is a new addition to the toolbox for random Gaussian tensors, complementing the meander approach of \cite{MeandersTensors}, which fits a different set of observables.
An open challenge is to take the exponential of such observables, so as to get non-melonic, non-Gaussian measures, which as shown in \cite{new1/N} do have a non-Gaussian large $N$ limit (but whose entropy exponents are still unknown). Even the simplest integral over $U(N)$ in the Gaussian case was shown to be of degree 4, implying that the exponential is not of the Itzykson-Zuber type. It may be that techniques not considered in the present paper work, like the one developed in \cite{U(N)IntegralsEynard}, and this will be investigated elsewhere. We hope that the relationship we have established between tensor models and loop models can lead to fruitful cross-fertilization.
We would finally like to mention a more speculative, related idea. Just as the Schwinger-Dyson equations of ordinary matrix models constitute a set of Virasoro constraints, those of tensor models can be cast as a set of constraints generated by operators which form a Lie algebra \cite{tree-algebra, bubble-algebra}. Those operators are differential with respect to the coupling constants and are labeled by bubbles. They basically trade the expectations of bubbles for linear operations. In order to understand this algebra, it seems natural to gain knowledge of those expectations (methods as well as exact or approximate evaluations). Only the large $N$ limit and the double scaling limit are so far understood through Schwinger-Dyson equations (they however only include melonic contributions and specific chains) \cite{SDEs, DSSD}. The calculation of Gaussian expectations for more generic bubbles is a first step in that direction. This is certainly crucial to quantum gravity, or rather ``bubble theory'', as well as to the possible integrable properties of tensor models.
\section*{Acknowledgements}
Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research and Innovation.
\section{Introduction}
The positive acceleration of our universe was discovered almost twenty years ago thanks to the accurate measurement of the apparent magnitude versus redshift relation of distant Supernovae of type Ia (SNIa) (Riess et al. 1998; Perlmutter et al. 1999). Assuming that General Relativity is the correct classical theory of gravity (also at cosmological scales), one needs to introduce a new component into the Cold Dark Matter (CDM) model to generate the negative pressure that is needed to explain such late-time speeding-up. The generic name for it is dark energy (DE). After two decades of intense research, the exact nature of the DE remains unknown, but observations tell us that it must fulfill two basic requirements: (i) the current associated equation of state (EoS) for the DE must be of vacuum or quasi-vacuum type, i.e. $w\simeq -1$; and (ii) its clustering properties, if available, must be very much suppressed at deep subhorizon scales, meaning that the DE must be essentially uniform and hence evenly distributed in all corners of the universe.
The ``simplest'' proposed framework satisfying these conditions is obtained by adding a tiny cosmological constant (CC) to Einstein's field equations, $\CC>0$. This setting warrants a universe with a uniformly distributed vacuum energy density $\rLo=\Lambda/(8\pi G)=$const. ($G$ being Newton's constant) with EoS parameter exactly $w=-1$. Such a configuration of vacuum energy is automatically unable to cluster. The resulting theoretical construction, usually assumed spatially flat in order to be consistent with an early period of inflation, is the so-called concordance or $\Lambda$CDM model (Peebles 1984, 1993). It is considered the standard model of cosmology and is able to explain with proficiency a wide variety of cosmological observations, including of course the high precision Cosmic Microwave Background (CMB) data (cf. Planck Collab. XIII 2016; DES Collab. 2017). But despite its numerous successes, the CC is also at the root of two of the most profound theoretical problems in physics, namely the old CC problem (Weinberg 1989; Padmanabhan 2003; Sol\`a 2013) and the Cosmic Coincidence problem (see e.g. the reviews by Peebles \& Ratra 2003; Copeland, Sami \& Tsujikawa 2006), both of them still lacking a solution. In addition, there exist some severe and persistent tensions between data sets in the context of the $\Lambda$CDM model. They involve relevant parameters of cosmology, such as the Hubble parameter, i.e. the current value of the Hubble function, $H(t_0)=H_0$, and the current value of the rms of mass fluctuations at spheres of $8\,h^{-1}$ Mpc, i.e. the parameter $\sigma_8$, where $h=H_0/(100\,{\rm km/s/Mpc})$ stands for the reduced Hubble parameter.
Among the many alternative scenarios beyond the $\Lambda$-term proposed throughout the years to solve these conundrums one finds a large body of DE entities, including quintessence and the like, see e.g. the comprehensive book (Amendola \& Tsujikawa 2010) and references therein. Not all the models perform equally well, though. Previous works in the literature have shown that some dynamical DE models are able to fit the cosmological data considerably better than the standard $\Lambda$CDM with a rigid $\CC$-term. The positive signal of DE dynamics can be captured in different ways and at different confidence levels, to wit: i) using a simple XCDM parametrization; ii) a nontrivial $\phi$CDM scalar field model with a specific potential (see e.g. Sol\`a, G\'omez-Valent \& de Cruz P\'erez 2017a,b); iii) a non-parametric reconstruction of the DE equation of state as a function of the redshift (Zhao et al. 2017); iv) and also with a variety of dynamical vacuum models (DVMs), more conspicuously those in the class of the so-called running vacuum models (RVMs) (see Sol\`a, G\'omez-Valent \& de Cruz P\'erez 2015, 2017a,b,c,d; and also Sol\`a, de Cruz P\'erez \& G\'omez-Valent 2018).
In the context of the RVMs, the vacuum energy density evolves (runs) slowly with the cosmic expansion. The law describing its evolution is motivated by the renormalization group formalism of Quantum Field Theory (QFT) in curved spacetime (for reviews see Sol\`a 2011, 2013, 2016; Sol\`a \& G\'omez-Valent 2015, G\'omez-Valent 2017, and references therein). After the inflationary epoch (whose evolution can also be described in this context, see e.g. Lima, Basilakos \& Sol\`a 2013, 2015; Sol\`a 2015), the vacuum density takes on the following simple form, $\rho_\Lambda(H)=C_0+C_1 H^2$. In such a framework it should be possible to better tackle the basic CC problems. Actually, the RVMs are not only well motivated from a theoretical point of view, but are also preferred over the $\Lambda$CDM at an outstanding $\sim 4\sigma$ c.l. when they are confronted with the same string of rich enough cosmological data sets SNIa+BAO+$H(z)$+LSS+CMB, which includes not only the data on SNIa and Hubble function at different redshifts, but also the crucial information encoded in the CMB anisotropies, the data on Baryon Acoustic Oscillations (BAOs) and the Large Scale Structure (LSS), see the above mentioned papers. The last three data sources have proved to be indispensable to detect the signature of vacuum dynamics.
In particular, it is of utmost importance to incorporate the LSS data from the weighted growth rate ($f(z)\sigma_8(z)$) provided by different galaxy surveys, as e.g. BOSS (Gil-Mar\'in et al. 2017), which are mainly (but not only) extracted from the analysis of Redshift Space Distortions (RSD). As shown in extensive numerical analyses (Sol\`a, G\'omez-Valent \& de Cruz P\'erez 2015, 2017a,b,c,d; Sol\`a, de Cruz P\'erez \& G\'omez-Valent 2018) the RVMs are capable of substantially improving the fit of the LSS observations while keeping the quality of the fit to the BAO+CMB part. This is mainly due to the fact that the RVMs allow an $8-9\%$ reduction in the value of the $\sigma_8$ parameter with respect to the typical values that are obtained in the $\Lambda$CDM, and this loosens the tension with the data obtained from RSD (see e.g. Macaulay, Wehus \& Eriksen 2013; Basilakos \& Nesseris 2016, 2017) and from weak gravitational lensing (see e.g. Heymans et al. 2013; Hildebrandt et al. 2017; Joudaki et al. 2018). For a devoted study of the impact of the RVMs on the issue of the $H_0$ and $\sigma_8$ tensions, see Sol\`a, G\'omez-Valent \& de Cruz P\'erez 2017d. See also Valentino et al. 2016, 2017 for related studies.
Due to the crucial role played by the LSS data in the overall fit, it is extremely important to compute the linear perturbations correctly in order to ensure the correct inference of the model parameters and the right determination of the significance of the detected signal. In actual fact, the failure to systematically take into account the LSS data in the overall fit to the cosmological data is at the root of the missed dynamical DE effects in most past studies in the literature, including Planck Collab. XIII 2016 and DES Collab. 2017. To the best of our knowledge, the first studies duly taking these effects into account are those by Sol\`a, G\'omez-Valent \& de Cruz P\'erez 2015, 2017a,b; Sol\`a, de Cruz P\'erez \& G\'omez-Valent 2018; and Zhao et al. 2017. They converge on the important conclusion that dynamical DE is favored at a $3-4\sigma$ c.l.
The main aim of this work is to study in detail the linear density perturbations in the RVMs with a vacuum-matter interaction and provide an analytical explanation for the origin of these important dynamical DE effects in such a context, after the explicit numerical analysis has already supported such level of evidence. We want to illustrate how the RVMs seem to be an ideal framework to describe the LSS data and relax the aforementioned $\sigma_8$-tension with the $\CC$CDM prediction. The RVMs indeed provide a possible natural solution to the $\sigma_8$-tension, as advanced in (G\'omez-Valent \& Sol\`a 2018). At the same time we wish to confront our results for the RVMs with those obtained from the XCDM as a baseline model for comparison used in generic studies of DDE. The XCDM (or $w$CDM) is the next-to-simplest extension of the $\CC$CDM and is characterized by the EoS $p=w\rho$, with $w=$const., in which $w=-1$ corresponds to the $\CC$CDM (Turner \& White 1997). One expects that if a particular model is capable of detecting significant traces of DDE in the data, this should also be corroborated by some departure of $w$ from $-1$ when the same data are analyzed in terms of the XCDM parametrization.
The layout of this paper is as follows. In Sect. 2 we derive the equations that govern the evolution of density perturbations at subhorizon scales during the matter and vacuum-dominated epochs for general DVMs using the Newtonian conformal gauge. In Sect. 3 we define the canonical RVM in interaction with matter. The relative size of vacuum energy density fluctuations in this model are analyzed in Sect. 4. We reconsider the situation in the synchronous gauge in Sect. 5. The connection between the weighted linear growth $f(z)\sigma_8(z)$ and matter power spectrum for the RVM is outlined in Sect. 6, whereas Sect. 7 is devoted to the leading effects of running vacuum on that LSS observable. Finally, Sect. 8 studies the implications on the weak-lensing parameter $S_8$. In Sect. 9 we deliver our conclusions.
\section{Density perturbations with vacuum dynamics at subhorizon scales in the Newtonian gauge}\label{sect:NewtonianGauge}
In what follows we discuss the cosmological density perturbations for the spatially flat Friedmann-Lema\^itre-Robertson-Walker (FLRW) metric in the context of the dynamical vacuum models (DVMs), which have been recently discussed in the literature from different points of view (see e.g. Sol\`a, G\'omez-Valent \& de Cruz P\'erez 2015, 2017a,b,c,d). For these models the vacuum energy density $\rL$ is not constant but dynamical, meaning that the EoS parameter is still $w=-1$ but the corresponding vacuum energy density evolves with the expansion. The evolution of $\rL$ is sufficiently small as to depart mildly at present from the rigid assumption $\rL=$const. of the $\CC$CDM. For the DVMs we may assume that $\rL=\rL(\zeta)$, where $\zeta(t)$ is a cosmic variable, typically the scale factor or the Hubble function $H$. The considerations made in the present section will be general for any DVM, but from Section 3 onwards we shall consider a specific type of DVM in which $\zeta=H$, called the running vacuum models (RVMs). Although there are several possible realizations of the RVMs we will focus here on the canonical type, which will be introduced in Sect. 3.
In the conformal Newtonian gauge (or longitudinal gauge) we write the perturbed FLRW metric in conformal time $\eta$ as follows (Mukhanov, Feldman \& Brandenberger 1992):
\begin{equation}
ds^2=a^2 (\eta)\left[(1+2\Phi)d\eta^2-(1+2\Psi)d\vec{x}^2\right]\,,
\end{equation}
where $\Phi$ and $\Psi$ are the so-called Bardeen potentials (Bardeen 1980), and we recall that $d\eta=dt/a$, with $t$ the cosmic time. Treating the matter and vacuum components as perfect fluids, it can be shown that the
\begin{table*}
\begin{center}
\begin{scriptsize}
\resizebox{1\textwidth}{!}{
\begin{tabular}{ |c|c|c|c|c|c|c|c|c|}
\multicolumn{1}{c}{Model} & \multicolumn{1}{c}{$H_0$(km/s/Mpc)} & \multicolumn{1}{c}{$\omega_b$} & \multicolumn{1}{c}{{\small$n_s$}} & \multicolumn{1}{c}{$\Omega_m$} &\multicolumn{1}{c}{$\nu$} &\multicolumn{1}{c}{$w$} &\multicolumn{1}{c}{$\ln A$} &\multicolumn{1}{c}{$\ln B$} \vspace{0.5mm}
\\\hline
$\Lambda$CDM & $68.83\pm 0.34$ & $0.02243\pm 0.00013$ &$0.973\pm 0.004$& $0.298\pm 0.004$ & - & -1 & - & - \\
\hline
XCDM & $67.16\pm 0.67$& $0.02251\pm0.00013 $&$0.975\pm0.004$& $0.311\pm0.006$ & - &$-0.936\pm{0.023}$ & 2.68 & 1.56 \\
\hline
RVM & $67.45\pm 0.48$& $0.02224\pm0.00014 $&$0.964\pm0.004$& $0.304\pm0.005$ &$0.00158\pm 0.00041 $ & -1 & 6.74 & 5.62 \\
\hline
\end{tabular}}
\end{scriptsize}
\caption{Best-fit values obtained from the SNIa+BAO+$H(z)$+LSS+CMB fitting analysis of (Sol\`a, G\'omez-Valent \& de Cruz P\'erez 2017d) for the $\CC$CDM, XCDM and the RVM, together with the Akaike and Bayesian
evidence criteria. See the aforementioned paper for further details, including the complete list of data used in the analysis and the corresponding references. Both, the XCDM and, more conspicuously, the RVM, are clearly preferred over the $\Lambda$CDM. The positive signal in favor of vacuum dynamics reaches $\sim 3.8\sigma$ c.l. in the RVM, whereas in the XCDM parametrization the signal of DE dynamics is lower ($\sim 2.8\sigma$ c.l.), but still significant.}
\end{center}
\label{tableFit1}
\end{table*}
fulfillment of the ij component of the perturbed Einstein's equations, i.e. $\delta G_{ij}=8\pi G\delta T_{ij}$, requires $\Psi=-\Phi$, and this relation will be assumed from now on. In the presence of anisotropic stress such a relation would not hold, for instance. Let us also note that, in this gauge, the vector and the tensor degrees of freedom are eliminated from the beginning. In fact, the vector part of the perturbation is set to zero and the non-diagonal spatial part decouples from the rest in the form of gravitational waves propagating in the FLRW background.
The equations that govern the growth of the perturbations in this gauge can be found by the standard procedure (see e.g. Ma \& Bertschinger 1995). As independent perturbations equations we can take the $(00)$ component of the perturbed Poisson equation,
\begin{equation}\label{eq:nablaequation}
3\mathcal{H}^2\Phi-\Delta\Phi+3\mathcal{H}\dot{\Phi} = -4\pi G a^2\sum_{i=\Lambda,m}\delta\rho_i\,,
\end{equation}
the perturbed local energy-momentum conservation equation $\nabla^{\mu}T_{\mu 0}=0$,
\begin{equation}\label{eq:ContinuityOriginal}
\sum_{i=\Lambda,m} \dot{\delta\rho_i}+(p_i+\rho_i)(\Delta v_i-3\dot{\Phi})+3\mathcal{H}(\delta\rho_i+\delta p_i)=0\,,
\end{equation}
and of its spatial part $\nabla^{\mu}T_{\mu i}=0$, which leads to the (perturbed) Euler equation
\begin{equation}\label{eq:Euler1}
\sum_{i=\Lambda,m} \frac{d}{d\eta}\left[(p_i+\rho_i)v_i\right]+(\rho_i+p_i)(4\mathcal{H}v_i+\Phi)+\delta p_i=0\,.
\end{equation}
Throughout the paper an overdot denotes a derivative with respect to the conformal time, $\dot{f}=df/d\eta$, $\mathcal{H}=\dot{a}/a$ is the Hubble function in conformal time, and $\Delta$ is the Laplace operator with respect to the comoving coordinates. Differentiation with respect to the cosmic time and the scale factor will also be used, with a different notation in each case.
Furthermore, in the above equations it is understood that $v_i$ stands for the (longitudinal) velocity potential. In fact, the longitudinal contribution of the (peculiar) 3-velocity of the i$_{th}$ component can be written in terms of the gradient of the scalar velocity potential, specifically $\vec{v}_i^L=\vec{\nabla} v_i$ with $i=\Lambda,m$. Recall that the transverse part of the 3-velocity only affects the vector perturbations, which are decoupled from the scalar ones and are not being considered here (see e.g. Gorbunov \& Rubakov 2011 for further details) \footnote{For this reason we shall henceforth suppress the upper index $L$ in $\vec{v}_i^L$ for each component.}. At physical scales $\lambda$ deeply inside the horizon, i.e. $\lambda\ll 3000\,h^{-1}{\rm Mpc}$, and taking into account that in the DVMs under consideration the vacuum interacts with the matter sector, the above equations can be written in momentum space as follows:
\begin{align}
k^2\Phi&=-4\pi G a^2(\delta\rho_m+\delta\rho_\Lambda)\,,\label{eq:PoissonNewton}\\
0&=\dot{\delta\rho_m}+3\mathcal{H}\delta\rho_m-k^2v_m\rho_m+\dot{\delta\rho_\Lambda}\,,\label{eq:ContinuityNewton}\\
0&=\frac{d}{d\eta}\left(\rho_mv_m\right)+4\mathcal{H}\rho_mv_m+\rho_m\Phi-\delta\rho_\Lambda\,,\label{eq:EulerNewton}
\end{align}
where $k\equiv|\vec{k}|$ is the comoving wave number (hence $k/a$ is the physical one), and $\rho_m$ is the sum of the mean baryon and dark matter energy densities in the universe, i.e. $\rho_m=\rho_b+\rho_{dm}$. Since we are mainly interested in the physics at subhorizon scales ($k^2\gg\mathcal{H}^2$) we have dropped the terms that are suppressed by this condition, e.g. when going
from Eq.\,\eqref{eq:nablaequation} to Eq.\,\eqref{eq:PoissonNewton}. We proceed systematically with this approximation throughout our exposition.
In the previous equations,
\begin{equation}\label{eq:vm}
v_m=\frac{v_{dm}\rho_{dm}+v_b\rho_b}{\rho_m}
\end{equation}
is the total matter velocity potential, obtained upon weighting the contributions of the two matter components. We are interested in the total matter growth because this is usually what the LSS observables are sensitive to. For instance, the RSD are caused by the total amount of matter, not only by one particular type. Notice also that we are studying the evolution of non-relativistic matter and vacuum perturbations from the deeply matter-dominated (MD) epoch up to the present time. In this period of the cosmic expansion the radiation energy density only constitutes a negligible fraction of the critical energy density in the universe, and moreover it is completely decoupled from matter, so we can neglect the effect of radiation at both the background and perturbation levels.
Using the background continuity equation for a general DVM in interaction with matter,
\begin{equation}\label{eq:contBackground}
\dot{\rho}_\Lambda+\dot{\rho}_m+3\mathcal{H}\rho_m=0\,,
\end{equation}
one can write \eqref{eq:EulerNewton} in a more standard way:
\begin{equation}\label{eq:EulerNewton2}
\dot{v}_m+\mathcal{H}v_m+\Phi+\psi v_m-\frac{\delta\rho_\Lambda}{\rho_m}= 0\,,
\end{equation}
with $\psi\equiv-\dot{\rho}_\Lambda/\rho_m$ (not to be confused with the potential $\Psi$, which was previously fixed as $\Psi=-\Phi$ once and for all in this study). The first three terms of the last expression correspond to those appearing in the (perturbed) Euler equation within the $\Lambda$CDM. In fact, they can be obtained in a simple way, just by perturbing the Newtonian gravitational law,
\begin{equation}
\frac{d\vec{v}_{p}}{dt}=\vec{a}_{\rm cosm}-\frac{1}{a}\vec{\nabla}\Phi\,,\ \ \ \ \ \vec{v}_p=\frac{d\vec{r}}{d t}=\mathcal{H}\vec{x}+\vec{v}_m\,,
\end{equation}
where $\vec{v}_p$ is the perturbed 3-velocity in proper coordinates and $\vec{a}_{\rm cosm}=\dot{\mathcal{H}}\vec{x}/a$ is the cosmic acceleration associated with the uniform-expansion observers. Recall that $\vec{v}_m=\vec{\nabla} v_m$ and that $\vec{\nabla}$ denotes the gradient with respect to the comoving coordinates, and hence $(1/a)\vec{\nabla}$ is the gradient with respect to the physical coordinates. Using $dt=a d\eta$ the above equation can be written as
\begin{equation}
\vec{\nabla}\left(\dot{v}_m+\mathcal{H}v_m+\Phi\right)=\vec{0}\,,
\end{equation}
which indeed leads to the standard Euler equation in the concordance model, if the integration constant is set to zero.
The last two terms of Eq. \eqref{eq:EulerNewton2} are new and can be interpreted as the change in the matter velocity potential that is induced by the matter-vacuum interaction. We should now ask ourselves whether this interaction can actually modify the velocity of the matter particles. The loss of energy of the vacuum sector can only occur in two different ways: by vacuum decay through the generation of particle pairs, or because of an increase of the particles' masses. If the vacuum decay occurs only due to an increase of the particles' masses and we assume that the equivalence principle is preserved, we expect to recover the standard Euler equation (Koyama, Maartens \& Song 2009). This reasoning leads us to impose an extra relation in order to ensure the correct physical behavior of the DVMs at the linear perturbations level,
\begin{equation}\label{eq:ExtraRelation}
\delta\rho_\Lambda=\psi v_m\rho_m\,,
\end{equation}
so that the usual ($\Lambda$CDM) Euler equation is warranted:
\begin{equation}\label{eq:EulerNewton3}
\dot{v}_m+\mathcal{H}v_m+\Phi=0\,.
\end{equation}
It is also interesting to note that Eq. \eqref{eq:ExtraRelation} can also be written in terms of the physical velocity of matter and the gradient of vacuum perturbations,
\begin{equation}
\vec{\nabla}\delta\rho_\Lambda=-\dot{\rho}_\Lambda\vec{v}_m\,.
\end{equation}
In contrast, if particles pop out from the vacuum, then one does not expect {\it a priori} an exact fulfillment of the Euler equation \eqref{eq:EulerNewton3}. In order to study this alternative scenario it is convenient to split \eqref{eq:EulerNewton2} as follows,
\begin{align}
\dot{v}_m+\mathcal{H}v_m+\Phi+\psi v_m &= \mathcal{B}\,,\label{eq:split1}\\
\frac{\delta\rho_\Lambda}{\rho_m}&=\mathcal{B}\,,\label{eq:split2}
\end{align}
where $\mathcal{B}$ must be a linear function of the perturbed quantities under consideration. In addition, it is obvious that $\mathcal{B}$ must be proportional to the background function $\psi$ because if the vacuum energy density remains constant, i.e. if $\psi=0$, we must retrieve the $\Lambda$CDM result. This means that equations \eqref{eq:split1} and \eqref{eq:split2} must decouple from each other in this case, i.e. $\mathcal{B}=0$, so the vacuum perturbations disappear and the Euler equation is recovered. Thus, we expect $\mathcal{B}$ to take the following general form,
\begin{equation}
\mathcal{B}=\psi\sum_{i}\alpha_i\delta A_i\,,
\end{equation}
$\alpha_i$ being dimensionless constants and $\delta A_i$ perturbed functions with dimensions of inverse energy. Notice that the choice $\mathcal{B}=\psi v_m$ satisfies the above-mentioned conditions. This is precisely the relation that is obtained from \eqref{eq:ExtraRelation}, which is exactly fulfilled when the vacuum loses energy due to an increase of the matter particles' masses. From now on we will adopt \eqref{eq:ExtraRelation} for simplicity. This assumption allows us to study the two possibilities of vacuum decay with the same formula, although it is important to keep in mind that more general expressions for $\mathcal{B}$ could in principle apply if the vacuum decay occurred through particle creation.
To sum up, by using the Newtonian conformal gauge and applying the above arguments, the system of equations that governs the linear density perturbations at deep subhorizon scales in the DVMs is given by \eqref{eq:PoissonNewton}, \eqref{eq:ContinuityNewton}, \eqref{eq:ExtraRelation}, and \eqref{eq:EulerNewton3}.
\begin{figure*}
\includegraphics[scale=0.55]{aHoverk}
\caption{{\it Left plot:} Squared ratio between the comoving scales $1/k$ and the comoving Hubble horizon, $\mathcal{H}^{-1}=H^{-1}/a$, as a function of the redshift. The used range of comoving wave numbers, $k$, correspond to those that we have observational access to inside the horizon and at which the linear perturbations regime is still valid, namely the modes $0.01\,h{\rm Mpc}^{-1}\leq k\leq 0.2\,h{\rm Mpc}^{-1}$. They are inside the gray band. The lowest modes (corresponding to the largest scales) reentered the horizon far in the past, at $z\simeq 1920$, previously to the decoupling time but already during the MD epoch, whereas the largest modes (smallest scales) reentered the horizon deeply in the radiation-dominated era, at $z\simeq 61.1\times 10^{3}$ (hence out of the plot); {\it Upper-right plot:} As in the left plot, but here for a shorter redshift range, up to $z=100$. This is to show that the modes we are focusing our attention on, i.e. those that lie in the gray region, are deeply inside the horizon from $z\sim 100$ up to the present time, i.e. $(\mathcal{H}/k)^2<0.035\ll 1$. This is all the more true at $ z\lesssim 10$ since the relevant modes then satisfy $(\mathcal{H}/k)^2<0.004\ll 1$. This legitimates to use the subhorizon approximation at these scales, see the text for further details; {\it Lower-right plot:} Similar to the previous cases, now in the narrow range $0\leq z\leq 4$, allowing us to appreciate the existence of a minimum at $z_{{\rm min}}\sim 0.6-0.7$. The latter indicates the transition from a decelerated to an accelerated universe, which causes that the lowest subhorizon modes start exiting the Hubble horizon again. The transition point, defined as the point at which the deceleration parameter vanishes, i.e. $q(a_t)=-1-\dot{H}(a_t)/[a_tH^{2}(a_t)]=0$, can be analytically computed in the RVM: $a_t=\left[\frac{(1-3\nu)\,\Om}{2(\OL-\nu)}\right]^{1/(3(1-\nu))}$. Using the values of the RVM parameters in Table 1 we obtain $z_{t}=a_t^{-1}-1=0.663$.
\label{fig:aHoverk}}
\end{figure*}
\section{Running vacuum in interaction with matter}
Running vacuum models are particularly motivated realizations of the DVMs discussed above, in which the dynamical origin of the vacuum energy density can be conceived from the point of view of the renormalization group (see e.g. Sol\`a 2011, 2013, 2015, 2016; Sol\`a \& G\'omez-Valent 2015; G\'omez-Valent 2017, and references therein). Hereafter we shall focus on the canonical or simplest form, and we will call it the RVM (running vacuum model). In this case, $\rL$ evolves with the Hubble rate: $\rL=\rL(H)$. In this context one can say that $\rL$ ``runs'' with the cosmic expansion.
The RVM has a smooth $\CC$CDM limit, namely it departs from the usual $\rL=$const. assumption characteristic of the $\CC$CDM through a continuous parameter $\nu$. For $\nu=0$ the concordance model is recovered. Specifically, $\rL$ takes on the form
\begin{equation}\label{eq:RVMvacuumdadensity}
\rho_\CC(H) = \frac{3}{8\pi{G}}\left(c_{0} + \nu{H^2}\right)\,.
\end{equation}
Here $c_0=H_0^2\left(1-\Omega_m-\nu\right)$ is fixed by the boundary condition $\rL(H_0)=\rho_{\Lambda 0}=\rco\,(1-\Omega_m)$, with $\Omega_m=\Omega_b+\Omega_{dm}$ the nonrelativistic matter density parameter at present and $\rco$ the current critical density. The dimensionless coefficient $\nu$ is the vacuum parameter of the RVM. A nonzero value of it makes possible the cosmic evolution of the vacuum. It is expected to be very small, $|\nu|\ll1$, since the model must remain sufficiently close to the $\CC$CDM. The moderate dynamical evolution of $\rL(H)$ is possible thanks to the vacuum-matter interaction, see Eq.\,(\ref{eq:contBackground}). Formally, $\nu$ can be given a QFT meaning by linking it to the $\beta$-function of the running $\rL$ (Sol\`a 2013; Sol\`a \& G\'omez-Valent 2015, and references therein). Theoretical estimates place its value in the ballpark of $\nu\sim 10^{-3}$ at most in the context of a typical Grand Unified Theory (GUT) (Sol\`a 2008), and this is precisely the order of magnitude for $\nu$ preferred by the cosmological data (cf. G\'omez-Valent \& Sol\`a 2015; G\'omez-Valent, Sol\`a \& Basilakos 2015; Sol\`a, G\'omez-Valent \& de Cruz P\'erez 2015, 2017a,b,c,d; Sol\`a, de Cruz P\'erez \& G\'omez-Valent 2018). The order of magnitude coincidence between theoretical expectations and phenomenological fits to the data is quite reassuring.
Different realizations of the RVM are possible, but here we limit ourselves to the model studied in (Sol\`a, G\'omez-Valent \& de Cruz P\'erez 2017b,c,d; Sol\`a, de Cruz P\'erez \& G\'omez-Valent 2018), in which the vacuum exchanges energy only with dark matter. Baryons and radiation are covariantly conserved and, therefore, they obey the same dilution laws under expansion as in the $\Lambda$CDM \footnote{See Appendix A for the treatment of the matter perturbations under the condition of baryon conservation.}:
\begin{equation}
\rho_b(a)=\rho_{b0}a^{-3}\qquad \qquad\rho_r(a)=\rho_{r0}a^{-4}\,.
\end{equation}
The corresponding normalized Hubble rate $E\equiv H/H_0$ (with $H=\mathcal{H}/a$) is
\begin{eqnarray}\label{eq:H2RVM}
E^2(a) &=& 1 + \frac{\Omega_m}{1-\nu}\left(a^{-3(1-\nu)}-1\right) \label{HRVM}\\ \nonumber\\
&& + \frac{\Omega_r}{1-\nu}\left(\frac{1-\nu}{1+3\nu}a^{-4} + \frac{4\nu}{1+3\nu}a^{-3(1-\nu)} -1\right)\,,\nonumber
\end{eqnarray}
and the total matter energy density reads
\begin{equation}\label{eq:rhoM}
\rho_{m}(a) =\rho_{m0}\,a^{-3(1-\nu)}+\frac{4\nu\rho_{r0}}{1 + 3\nu}\,\left(a^{-3(1-\nu)} - a^{-4}\right)\,.
\end{equation}
Note that for $\nu=0$ we recover the $\CC$CDM expressions, as should be expected.
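The background formulas above are straightforward to implement and cross-check numerically. The sketch below (our own minimal check, in the matter era and with radiation neglected, using the best-fit values quoted in Table 1) verifies that the vacuum density \eqref{eq:RVMvacuumdadensity} closes Friedmann's equation, behaves as quintessence for $\nu>0$, and reproduces the transition redshift quoted in the caption of Fig. \ref{fig:aHoverk}:
\begin{verbatim}
import numpy as np

Om, nu = 0.304, 0.00158          # best-fit values of Table 1

def E2(a):                       # normalized H^2 with Omega_r = 0
    return 1.0 + Om/(1.0 - nu)*(a**(-3*(1 - nu)) - 1.0)

def Om_m(a):                     # rho_m(a)/rho_c0 in the matter era
    return Om*a**(-3*(1 - nu))

def Om_L(a):                     # rho_L/rho_c0 = (1 - Om - nu) + nu E^2
    return (1.0 - Om - nu) + nu*E2(a)

a = np.linspace(0.01, 1.0, 1000)
print(np.max(np.abs(E2(a) - Om_m(a) - Om_L(a))))  # ~0 (round-off)
print(Om_L(0.5) > Om_L(1.0))     # True for nu > 0: quintessence-like

# deceleration-acceleration transition (caption of Fig. 1),
# with Omega_Lambda = 1 - Om:
a_t = ((1 - 3*nu)*Om/(2*(1 - Om - nu)))**(1.0/(3*(1 - nu)))
print(1.0/a_t - 1.0)             # z_t ~ 0.663
\end{verbatim}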
The numerical values of the parameters for the RVM used in all the calculations and plots of the present work have been obtained from the fitting analysis of (Sol\`a, G\'omez-Valent \& de Cruz P\'erez 2017d) based on a large string of SNIa+BAO+$H(z)$+LSS+CMB data described there. They are written in Table 1 for convenience, together with the values obtained in the same analysis for the $\Lambda$CDM model and the XCDM parametrization. Recall that the XCDM (or $w$CDM) is the next-to-leading formulation of the DE beyond the $\CC$CDM. Rather than assuming a rigid CC-term $\CC=$const. with exact EoS $w=-1$, one assumes that the DE obeys $p=w\rho$, with constant $w=-1+\epsilon$, such that for $w=-1$ (i.e. $\epsilon=0$) one retrieves the particular $\CC$CDM model.
It is natural to expect that if there are significant traces of DDE in the current observational data it should be possible to minimally capture them in a model-independent way through the XCDM parametrization by finding a small departure of $w$ from $-1$. Small departures of the EoS parameter from $-1$, i.e. $|\epsilon|\ll1$, would point to dynamical DE of quintessence ($w>-1$, i.e. $\epsilon>0$) or phantom ($w<-1$, i.e. $\epsilon<0$) type. Let us note that although in the RVM the EoS is the strict vacuum one ($w=-1$), such vacuum energy density is dynamical. As a result, from Eq.\,(\ref{eq:RVMvacuumdadensity}) it is clear that if $\nu>0$ the vacuum energy density is larger in the past than it is at present, and hence the RVM effectively behaves as quintessence. In contrast, if $\nu<0$ the effective behavior of the RVM would be phantom-like. From the best-fit values that we have found in Table 1, which take into consideration the indicated large set of cosmological data, we infer that the actual behavior of the RVM is quintessence-like at $3.8\sigma$ c.l., namely $\nu=0.00158\pm 0.00041$. We can see from Table 1 that this is consistent with the EoS value that we have found for the XCDM, which is $w=-0.936\pm 0.023$ and hence favoring quintessence at about $2.8\sigma$ c.l. The two signals are clearly pointing in the same direction, but the RVM seems to involve a stronger germ of DDE than the simple XCDM parametrization.
We have carried out several tests in order to study the robustness of our results. Here we report on the outcome after performing an update of our database with respect to the one used in (Sol\`a, G\'omez-Valent \& de Cruz P\'erez 2017d). The list of changes is the following: we have added the data point $H(z=0.47)$ obtained with the differential-age technique by Ratsimbazafy et al. (2017), the anisotropic BAO and LSS data at $z_{\rm eff}=1.52$ from (Gil-Mar\'in et al. 2018), the weak-lensing data from (Hildebrandt et al. 2017), the LSS data point at $z=1.36$ reported in (Okamura et al. 2016); we have also replaced the SDSS LSS point from (Feix, Nusser \& Branchini 2015) by the one from (Shi et al. 2017), the 2MTF LSS point from (Springob et al. 2016) by the one from (Howlett et al. 2017), the LSS data point from (Granett et al. 2015) by those from (Pezzotta et al. 2017), the BAO data at $z=0.106$ from the 6dFGS (Beutler et al. 2011) and at $z=0.15$ from SDSS DR7 (Ross et al. 2015) by their combined value at $z_{\rm eff}=0.122$ (Carter et al. 2018), the BAO Ly$\alpha$ forest data from (Delubac et al. 2015; Aubourg et al. 2015) by those from (du Mas des Bourboux et al. 2017), and the information of the 740 SNIa of the joint light-curve analysis (JLA) sample (Betoule et al. 2014) by the 15 SNIa from the CANDELS and CLASH Multi-Cycle Treasury programs obtained by the Hubble Space Telescope (Riess et al. 2018a), together with the 1049 SNIa of the Pantheon compilation (Scolnic et al. 2017), which also incorporates those from the JLA sample. Use has been made of the compressed version of these SNIa data provided in (Riess et al. 2018a). The impact of all these changes on the value of the RVM parameter is not dramatic at all: the result of the new fit reads $\nu=0.00134\pm 0.00038$, still keeping a remarkable $3.53\sigma$ departure with respect to the standard $\Lambda$CDM ($\nu=0$). Furthermore, the obtained values of $\nu$ and of all the other fitted parameters are fully compatible with the ones provided in Table 1, so no significant changes from the statistical point of view are obtained. The test speaks out in favor of the robustness of the reported results.
\section{Relative size of the vacuum fluctuations}
Relation \eqref{eq:ExtraRelation} allows us to estimate the size of the vacuum energy density perturbations at deep subhorizon scales. According to the background formulas \eqref{eq:contBackground} and \eqref{eq:rhoM}, for the RVM we easily find
\begin{equation}\label{eq:psinuH}
\psi=\frac{\dot{\rho}_m+3\mathcal{H}\rho_m}{\rho_m}=\left(3+a\frac{\rho_m'}{\rho_m}\right)\mathcal{H}=3\nu\mathcal{H}\,,
\end{equation}
where a prime indicates differentiation with respect to the scale factor and, as indicated above, we assume that for the discussion on density perturbations we can entirely neglect the radiation component.
Using the above formula and the perturbed continuity equation \eqref{eq:ContinuityNewton} in \eqref{eq:ExtraRelation}, one can check that $\frac{\delta\rho_\Lambda}{\delta\rho_m}\propto\nu\left(\frac{\mathcal{H}}{k}\right)^2$, see Figs. 1-2 and the remaining discussion in this section. Therefore, the vacuum energy density perturbations are very much suppressed at scales deep inside the horizon ($k\gg \mathcal{H}$). Actually, since the fitting results in Table 1 tell us that cosmological observations prefer values of $\nu$ of order $\mathcal{O}(10^{-3})$, this further suppresses the vacuum density fluctuation $\delta\rho_\Lambda$ with respect to the matter one. As expected, we recover the $\Lambda$CDM result, i.e. $\delta\rho_\Lambda=0$, when we set $\nu=0$.
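To make the suppression fully explicit, one can plug in numbers (a back-of-the-envelope sketch; it anticipates the coefficient $3\nu\,(\mathcal{H}/k)^2 f$ derived below in this section, and takes $f(z=0)\simeq 0.5$ together with the window of modes of Fig. \ref{fig:aHoverk}):
\begin{verbatim}
# |delta rho_L / delta rho_m| ~ 3 nu (H/k)^2 f at z = 0.
# Units: H0 = h/2997.9 Mpc^-1, so H0/k is dimensionless for k
# in h/Mpc; the conformal Hubble rate equals H0 at a = 1.
nu, f0 = 0.00158, 0.5
H0 = 1.0/2997.9
for k in (0.01, 0.2):
    print(k, 3*nu*(H0/k)**2*f0)
# -> ~2.6e-6 at k = 0.01 h/Mpc, ~6.6e-9 at k = 0.2 h/Mpc
\end{verbatim}
Even at the largest observationally accessible scales, the vacuum perturbations thus amount to a tiny fraction of the matter ones, in agreement with Fig. \ref{fig:PertComparison}.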
\begin{figure}
\includegraphics[scale=0.59]{VacPertVSMattPert}
\caption{Logarithm of the ratio of the density perturbations of vacuum and matter as a function of the redshift, for the same comoving wave numbers as in Fig. 1. They are again inside the gray band. The effect of vacuum energy perturbations is enhanced at large scales, but even for the largest comoving scales of interest it is negligible compared to the corrections considered in Eq. \eqref{eq:DensityContrastEq}, see the discussion in Sect. 4.
\label{fig:PertComparison}}
\end{figure}
Despite the $\delta\rho_\CC$ suppression, it is noticeable that the dynamical nature of $\rL$ enables the formation of some structure in the vacuum sector at subhorizon scales, something that is strictly forbidden in the $\Lambda$CDM model. In the RVM the small clustering of the vacuum can be enhanced for larger and larger values of the vacuum parameter $\nu$, if such values were allowed. This is reasonable, since a defect or an excess in the matter distribution must also generate a departure from uniformity of the vacuum energy density, just because both components are directly interacting with each other. In the $\Lambda$CDM one deals with a strictly constant vacuum energy density, but in principle one could think that although there is no direct exchange of energy between matter and vacuum at the background level, the vacuum perturbations could affect the matter ones through the gravitational potential in Poisson's equation. It turns out not to be the case, since vacuum perturbations in the $\Lambda$CDM are strictly zero. This can be automatically inferred from \eqref{eq:split2} after setting $\mathcal{B}=0$.
Let us remark that we have initially begun with three independent equations, \eqref{eq:PoissonNewton}-\eqref{eq:EulerNewton}, and four unknown perturbed functions, $\delta\rho_m$, $\delta\rho_\Lambda$, $\Phi$ and $v_m$. By providing solid physical arguments we have motivated an additional relation between $\delta\rho_\Lambda$ and $v_m$, \eqref{eq:ExtraRelation}, which has allowed us to retrieve the standard Euler equation \eqref{eq:EulerNewton3}. This will also allow us to close the system of perturbed equations in a consistent way and finally find the equation that governs the evolution of the matter density perturbations. But before doing this, it is illuminating to provide some details on computing the above mentioned ratio $\delta\rho_\Lambda/\delta\rho_m$ at subhorizon scales, at linear order in the small vacuum parameter $\nu$. Such is, of course, also the parameter controlling the strength of the vacuum-matter interaction. The calculation of $\delta\rho_\Lambda/\delta\rho_m$ can be easily done by using Eq. \eqref{eq:ContinuityNewton} and the relation \eqref{eq:ExtraRelation}:
\begin{equation}\label{eq:Ratio1}
\frac{\delta\rho_\Lambda}{\delta\rho_m}=\frac{\psi}{k^2}\frac{\dot{\delta}_m}{\delta_m}+\mathcal{O}(\nu^2)\,,
\end{equation}
where $\delta_m=\delta\rho_m/\rho_m$ is the so-called matter density contrast and $\mathcal{O}(\nu^2)$ encapsulates all the explicit terms of second or higher order in $\nu$, i.e. those higher order corrections that are not implicit in the first term of the {\it r.h.s.} of \eqref{eq:Ratio1}. Equivalently, the last relation can also be rewritten as
\begin{equation}\label{eq:Ratio2}
\frac{\delta\rho_\Lambda}{\delta\rho_m}=-a\left(\frac{\mathcal{H}(a)}{k}\right)^2\frac{f(a)}{\rho_m(a)}\frac{d\rho_\Lambda}{da}+\mathcal{O}(\nu^2)\,,
\end{equation}
with
\begin{figure*}
\includegraphics[scale=0.7]{RelDiff}
\caption{{\it Left plot:} Evolution of the percentage difference of the matter density contrast in the RVM with respect to the $\Lambda$CDM, $\Delta (z)$, as defined in \eqref{eq:RelDiff}, in the redshift range $0\leq z\leq 100$; {\it Right plot:} The same, but in the smaller redshift range $0\leq z\leq 4$, just to show the region where $\Delta(z)$ becomes negative, near the present time. It is crystal-clear that such differences are always smaller than $2\%$ in absolute value.
\label{fig:RelDiff}}
\end{figure*}
\begin{equation}\label{eq:growthrate}
f(a)=\frac{d\ln\delta_m}{d\ln\,a}
\end{equation}
the growth rate. For the RVM the ratio (\ref{eq:Ratio2}) can be cast in a compact form. Taking into account that the matter density in the MD epoch for this model is given by the first term on the \textit{r.h.s.} of (\ref{eq:rhoM}) and that the corresponding vacuum energy density in the same epoch reads
\begin{equation}\label{eq:rLMDE}
\rho_\CC(a) = \rLo + \frac{\nu\,\rho_{m0}}{1-\nu}\left(a^{-3(1-\nu)}-1\right)\,,
\end{equation}
we find:
\begin{equation}
\frac{\delta\rho_\Lambda}{\delta\rho_m}=3\nu\left(\frac{\mathcal{H}(a)}{k}\right)^2f(a) +\mathcal{O}(\nu^2)\,.
\end{equation}
We note that $f(a)$ is a monotonic function that is around $0.5$ at present and saturates to $f\simeq 1$ at $z\simeq 1$ for models with a well-defined $\Lambda$CDM limit (cf. G\'omez-Valent \& Sol\`a 2015). This proves our contention that $\frac{\delta\rho_\Lambda}{\delta\rho_m}\propto\nu\left(\frac{\mathcal{H}}{k}\right)^2$. Taking into account that $\nu=\mathcal{O}(10^{-3})$, we find
\begin{equation}
\frac{\delta\rho_\Lambda}{\delta\rho_m}\bigg\rvert_{z=0}\sim\left(\frac{H_0}{k}\right)^2\cdot\mathcal{O}(10^{-3})\,,\end{equation}
\begin{equation}
\frac{\delta\rho_\Lambda}{\delta\rho_m}\bigg\rvert_{z=100}\sim\left(\frac{H_0}{k}\right)^2\cdot\mathcal{O}(10^{-1})\label{eq:N2}\,,
\end{equation}
where $H_0\simeq 3.336\times 10^{-4}h{\rm Mpc}^{-1}$, and where we have used $\mathcal{H}^2=H^2a^2=H^2/(1+z)^2\simeq H_0^2\Omega_m(1+z)$ at high redshift within the MD epoch. Recalling that the observational data concerning the linear power spectrum lie in the approximate comoving wave number range $0.01\,h{\rm Mpc}^{-1}\lesssim k\lesssim 0.2\,h{\rm Mpc}^{-1}$, or equivalently, $0.002\lesssim H_0/k\lesssim 0.03$, we find that the above ratios of the energy density perturbations sit in the following intervals:
\begin{equation}\label{eq:bounds1}
10^{-9}\lesssim \frac{\delta\rho_\Lambda}{\delta\rho_m}\bigg\rvert_{z=0}\lesssim 10^{-6}\,,
\end{equation}
\begin{equation}\label{eq:bounds2}
10^{-7}\lesssim\frac{\delta\rho_\Lambda}{\delta\rho_m}\bigg\rvert_{z=100}\lesssim 10^{-4}\,,
\end{equation}
\noindent where the lower bounds in these expressions have been obtained using the lowest value of $H_0/k$ we have observational access to, i.e. $H_0/k\sim 0.002$, and the upper bounds with the largest accessible value, $H_0/k\sim 0.03$. In Fig. 1 we show that these modes are deeply inside the horizon in the epoch of interest, viz. from $z\sim 100$ up to the present time. Therefore, we are fully entitled to apply the subhorizon approximation for the relevant modes. The bounds \eqref{eq:bounds1}-\eqref{eq:bounds2} completely agree with the results presented in Fig. 4 of (G\'omez-Valent, Karimkhani \& Sol\`a 2015), in which we studied the effect of DE perturbations at subhorizon scales in the context of the so-called $\mathcal{D}$-class of dynamical DE models. By direct inspection of \eqref{eq:bounds1}-\eqref{eq:bounds2} and Fig. 2 it becomes evident that $\delta\rho_m\gg\delta\rho_\Lambda$ at subhorizon scales. In principle, this allows us to neglect the vacuum energy density perturbations and their derivatives compared to the matter ones. But before accepting this we must still address one more question. Are the effects that come from not neglecting $\delta\rho_\Lambda$ relative to $\delta\rho_m$ of the same order of magnitude as those coming from the purely background extra effect $\psi\ne 0$, see Eq.\,(\ref{eq:psinuH}), associated with the vacuum time evolution? If this were the case, then a fairer approximation to the standard perturbations equations in the presence of vacuum dynamics would require keeping $\delta\rho_\Lambda\ne 0$. We will see now that this is not the case.
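Before moving on, it is straightforward to verify the bounds \eqref{eq:bounds1}-\eqref{eq:bounds2} numerically. The following Python snippet is a minimal sketch of such a check; the adopted values of $\nu$, $\Omega_m$ and of the growth rate $f$ are illustrative placeholders rather than our fitted ones:
\begin{verbatim}
# Quick cross-check of the bounds (eq:bounds1)-(eq:bounds2):
# ratio = 3*nu*f*(cH/k)^2, with cH the conformal Hubble rate,
# cH = H0 at z=0 and cH^2 ~ H0^2*Om*(1+z) deep in the MD epoch.
nu, Om = 1.5e-3, 0.30                 # illustrative values
for z, f in [(0.0, 0.5), (100.0, 1.0)]:
    cH2 = 1.0 if z == 0.0 else Om*(1.0 + z)   # cH^2 in units of H0^2
    for H0_over_k in (0.002, 0.03):           # accessible extremes
        ratio = 3.0*nu*f*cH2*H0_over_k**2
        print("z=%5.1f  H0/k=%5.3f  ratio=%.1e" % (z, H0_over_k, ratio))
\end{verbatim}
The printed ratios reproduce the orders of magnitude quoted in \eqref{eq:bounds1}-\eqref{eq:bounds2}.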
We have checked before that $\delta\rho_m\gg\delta\rho_\Lambda$.
Let us therefore neglect the vacuum perturbations in front of the matter ones now. Then, equations \eqref{eq:PoissonNewton} and \eqref{eq:ContinuityNewton} can be rewritten in a simpler way, as follows:
\begin{align}
k^2\Phi &= -4\pi G a^2\rho_m\delta_m\,,\label{eq:PoissonNewton2}\\
\dot{\delta}_m+\psi\delta_m&=k^2v_m\,.\label{eq:ContinuityNewton2}
\end{align}
The continuity equation \eqref{eq:ContinuityNewton2} is modified with respect to the $\Lambda$CDM case by an extra term, $\psi\delta_m$, because now there is an injection/extraction of energy in the matter sector caused by the decay/increase of the vacuum energy density.
Combining \eqref{eq:EulerNewton3}, \eqref{eq:PoissonNewton2} and \eqref{eq:ContinuityNewton2} the following second order equation for the matter density contrast is obtained:
\begin{equation}\label{eq:DensityContrastEq}
\ddot{\delta}_m+\dot{\delta}_m(\mathcal{H}+\psi)+\delta_m(-4\pi G a^2\rho_m+\dot{\psi}+\psi\mathcal{H})=0\,.
\end{equation}
As expected, in the absence of matter-vacuum interaction, i.e. $\psi=0$, we retrieve the equation of the standard $\Lambda$CDM model. To solve Eq. \eqref{eq:DensityContrastEq} we need to set two initial conditions, for simplicity at a time at which the expansion is fully matter-dominated. It could be, say, at $a_i\sim 1/100$, but it is even better to choose $a_i$ around $1/5$ or $1/10$ in order to maximally suppress the theoretical error associated with the use of the subhorizon approximation (cf. Fig. 1), which at these values of the scale factor is lower than $0.4\%$ for all the modes $k$ of interest. At these redshifts the universe is still strongly matter-dominated: $\Omega_m a^{-3}\gg\Omega_\CC$.
In the MD era the density contrast can be computed analytically. Changing from conformal time to the scale factor through the chain rule $d/d\eta=a\mathcal{H}d/da=a^2Hd/da$, and then using the simplified form for $H$ in the matter epoch (given by the first two terms on the \textit{r.h.s} of (\ref{eq:H2RVM})), the perturbations equation (\ref{eq:DensityContrastEq}) boils down to
\begin{equation}\label{diffeqDaRVM}
\delta''_m + \frac{3}{2a}(1+3\nu)\delta'_m - \left(\frac{3}{2}-3\nu-\frac{9}{2}\,\nu^2\right)\frac{\delta_m}{a^2} = 0\,.
\end{equation}
The (exact) growing mode solution of this equation is the power-law $\delta_m(a)=a^{1-3\nu}$, so we can use this relation to set the initial conditions for $\delta_m$ and its first derivative.
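For readers who wish to reproduce the density contrast numerically, the following Python sketch integrates Eq.\,\eqref{eq:DensityContrastEq} recast in terms of the scale factor (via $d/d\eta=a^2H\,d/da$, using $\psi=3\nu aH$, the matter density given by the first term of \eqref{eq:rhoM} and the vacuum density \eqref{eq:rLMDE}). The parameter values are illustrative placeholders, not the fitted ones of Table 1:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

nu, Om = 1.5e-3, 0.30          # illustrative values, not the fitted ones
OL = 1.0 - Om                  # flat universe; radiation neglected

def rho_m(a):                  # first term of Eq. (eq:rhoM), in units of rho_c0
    return Om * a**(-3.0*(1.0 - nu))

def rho_L(a):                  # vacuum density in the MD epoch, Eq. (eq:rLMDE)
    return OL + nu*Om/(1.0 - nu) * (a**(-3.0*(1.0 - nu)) - 1.0)

def rhs(a, y):
    # y = (delta_m, d delta_m/da); coefficients follow from recasting
    # Eq. (eq:DensityContrastEq) with d/d(eta) = a^2 H d/da, psi = 3*nu*a*H
    d, dp = y
    E2 = rho_m(a) + rho_L(a)                       # E^2 = H^2/H0^2
    c1 = (3.0 + 3.0*nu - 1.5*rho_m(a)/E2)/a        # friction coefficient
    c0 = (3.0*nu + 1.5*(nu*(2.0*rho_L(a) - rho_m(a))
                        - rho_m(a))/E2)/a**2       # mass-like coefficient
    return [dp, -c1*dp - c0*d]

a_i = 0.1                                          # deep in the MD epoch
y0 = [a_i**(1.0 - 3.0*nu),                         # growing mode a^(1-3nu)
      (1.0 - 3.0*nu)*a_i**(-3.0*nu)]
sol = solve_ivp(rhs, (a_i, 1.0), y0, rtol=1e-8)
print("delta_m(a=1) =", sol.y[0, -1])
\end{verbatim}
In the MD limit the system reduces to Eq.\,\eqref{diffeqDaRVM}, and the code indeed tracks the power law $\delta_m\simeq a^{1-3\nu}$ until the vacuum energy becomes non-negligible.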
Eq. \eqref{eq:DensityContrastEq} was obtained by assuming a perfectly homogeneous dynamical vacuum energy density at deep subhorizon scales. The corrections introduced by the non-standard terms can easily be evaluated in the RVM, even analytically. Their relative weights with respect to the standard parts in (\ref{eq:DensityContrastEq}) read as follows. On the one hand we have already indicated in (\ref{eq:psinuH}) that
\begin{equation}\label{eq:psioverH}\frac{\psi}{\mathcal{H}}=3 \nu\,,\end{equation}
and similarly we obtain for the other non-standard terms:
\begin{equation}
\frac{\psi\mathcal{H}}{4\pi G a^2\rho_m}=2\nu\left(1+\frac{\rho_\Lambda}{\rho_m}\right)\lesssim\frac{20}{3}\nu\,,\end{equation}
\begin{equation}\label{eq:dotpsi}
\left|\frac{\dot{\psi}}{4\pi G a^2\rho_m}\right|=\nu\left|2\frac{\rho_\Lambda}{\rho_m}-1\right|\lesssim \frac{11}{3}\nu\,,
\end{equation}
where we recall that $\nu>0$ from our fit. The upper bounds in the above expressions correspond to the ratios near our time, for which $\rho_\CC/\rho_m\simeq \OL/\Om\simeq 7/3$. In the past, e.g. deep in the MD epoch, the bound is of course tighter since at that time $\rho_\CC/\rho_m\ll 1$. To reach e.g. the bound (\ref{eq:dotpsi}) we can use $\psi=3\nu\mathcal{H}=3\nu a H$ from (\ref{eq:psioverH}) and the leading terms of (\ref{eq:H2RVM}) in the MD epoch. We obtain
\begin{eqnarray}\label{eq:dotpsi2}
\dot{\psi}&=&3\nu H_0^2 a^2 \left(E^2+\frac{a}{2}\frac{dE^2}{da}\right)=3\nu H_0^2 a^2 \left(E^2-\frac32\frac{\rho_m}{\rco}\right)\nonumber\\
&=&\frac32\,\frac{\nu H_0^2 a^2}{\rco}\left(2\rL-\rho_m\right)\,,
\end{eqnarray}
where use has been made of $E^2=\left(\rho_m+\rho_{\CC}\right)/\rco$.
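For concreteness, inserting the fitted value $\nu\simeq 1.6\times 10^{-3}$ (cf. Table 1) into the above expressions yields, near the present time,
\begin{equation*}
\frac{\psi}{\mathcal{H}}\simeq 4.8\times 10^{-3}\,,\qquad \frac{\psi\mathcal{H}}{4\pi G a^2\rho_m}\lesssim 1.1\times 10^{-2}\,,\qquad \left|\frac{\dot{\psi}}{4\pi G a^2\rho_m}\right|\lesssim 5.9\times 10^{-3}\,.
\end{equation*}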
The relative corrections computed above are seen to be of order $\mathcal{O}(10^{-3}-10^{-2})$ in all cases, and are therefore much larger than the corrections associated to the clustering of the vacuum energy, which proves to be lower than $\mathcal{O}(10^{-4})$ for the modes under study and for redshifts lower than $z\sim 100$, see equations (\ref{eq:bounds1}) and (\ref{eq:bounds2}).
We may crosscheck the above result by comparing the value of the matter density contrast, which evolves from $\delta_m(z\sim 100)=\mathcal{O}(10^{-2})$ to $\delta_m(z=0)=\mathcal{O}(1)$, with the values of $\delta\rho_\Lambda/\rho_m$, which are of order $\mathcal{O}(10^{-9}-10^{-6})$ for any accessible scale at $z\lesssim 100$. The latter follow from $\delta\rho_\Lambda/\rho_m=\delta_m\delta\rho_\Lambda/\delta\rho_m$ and the bounds on the ratio $\delta\rho_\Lambda/\delta\rho_m$ \eqref{eq:bounds1}-\eqref{eq:bounds2}. Thus, from the former analysis we conclude that we can safely neglect the vacuum energy density perturbations and use Eq.\,\eqref{eq:DensityContrastEq} to study the evolution of the matter density contrast at deep subhorizon scales.
Let us now elucidate the relative differences between the matter density contrast $\delta_m(z)$ as obtained from Eq. \eqref{eq:DensityContrastEq} (with $\psi\ne 0$) and the standard one in the $\Lambda$CDM model (corresponding to $\psi=0$). In the latter case we will denote the resulting density contrast as $\tilde{\delta}_m(z)$. The percentage differences
\begin{equation}\label{eq:RelDiff}
\Delta(z)\equiv 100\cdot\frac{\delta_m(z)-\tilde{\delta}_m(z)}{\tilde{\delta}_m(z)}
\end{equation}
are shown in Fig.\,3. We can read off from it that the corrections introduced in Eq. \eqref{eq:DensityContrastEq} by the terms that are proportional to $\psi$ or its time derivative are lower than $2\%$ in absolute value. Despite being small, it is important to take them into account, since the fitting results are already sensitive to them. Note that some RSD data points, e.g. those from Gil-Mar\'in et al. 2017, have a relative error of only a few percent, and in the near future the error will in some cases decrease below $1\%$ (Weinberg et al. 2013).
Owing to the importance of the subject, let us further dwell upon the perturbations equations in the presence of vacuum dynamics. It turns out that the same equation \eqref{eq:DensityContrastEq} can also be derived using a source 4-vector $Q_\mu=Q u_\mu$, with $Q=-\mathring{\rho}_\Lambda$ (the circle denotes hereafter a derivative with respect to the cosmic time) and $u_\mu=\bar{u}_\mu+\delta u_\mu$ the perturbed 4-velocity of the matter fluid in natural units ($c=1$), where $\bar{u}_\mu=(a,\vec{0})$ and $\delta u_\mu=a\left(\Phi,-\vec{v}_m\right)$. The use of $Q_\mu$ ensures the automatic fulfillment of \eqref{eq:ExtraRelation} and of the usual Euler equation \eqref{eq:EulerNewton3}. Let us see this in more detail. Due to the Bianchi identity, we find
\begin{equation}
\nabla^\mu (T^{\rm m}_{\mu\nu}+T^{\rm \Lambda}_{\mu\nu})=0\,,
\end{equation}
$T^{\rm m}_{\mu\nu}$ and $T^{\rm \Lambda}_{\mu\nu}$ being the matter and vacuum energy-momentum tensors, respectively. We can split this equation in two parts by means of $Q_\mu$,
\begin{equation}\label{eq:splitQ}
\nabla^\mu T^{\rm m}_{\mu\nu}\equiv Q_\nu\qquad\nabla^\mu T^{\rm \Lambda}_{\mu\nu}\equiv -Q_\nu\,.
\end{equation}
The perturbed source vector yields,
\begin{equation}\label{eq:deltaQ}
\delta Q_\mu=\delta Q\bar{u}_\mu+Q\delta u_\mu=(a\delta Q+a\Phi Q,-aQ\vec{v}_m)\,.
\end{equation}
Let us now perturb the first equation of \eqref{eq:splitQ} and substitute (\ref{eq:deltaQ}) on its \textit{r.h.s.}:
\begin{equation}\label{eq:deltaTmn}
\delta\left(\nabla^{\mu} T_{\mu\nu}^m\right)=(a\delta Q+a\Phi Q,-aQ\vec{v}_m)\,.
\end{equation}
Writing out the spatial component ($\nu=i$) along the lines of (\ref{eq:Euler1}) and defining $\delta Q_i\equiv \partial_i\delta V$, with $\delta V=-aQv_m=a\mathring{\rho}_\CC v_m=\dot{\rho}_\Lambda v_m$, we find
\begin{equation}
-\rho_m\left(\dot{v}_m+\mathcal{H}v_m+\Phi+\psi v_m\right)={\delta V}\,,
\end{equation}
and hence
\begin{equation}
\dot{v}_m+\mathcal{H}v_m+\Phi=-\frac{\delta V}{\rho_m}-\psi v_m=\frac{-\delta V+\dot{\rho}_\Lambda v_m}{\rho_m}=0\,,
\end{equation}
so the usual Euler equation is retrieved. This shows that this alternative procedure is equivalent to using the setting (\ref{eq:ExtraRelation}) in Eq.\,(\ref{eq:EulerNewton2}). We can proceed similarly with the time component, i.e. the perturbed continuity equation, and we find
\begin{equation}
\dot{\delta}_m-k^2v_m+\psi \delta_m=\frac{\delta Q_0}{\rho_m}=\frac{a}{\rho_m}(\delta Q+\Phi Q)\,.
\end{equation}
The fact that we have already neglected the term proportional to $\dot{\Phi}$, originally present in \eqref{eq:ContinuityOriginal}, is completely justified at subhorizon scales (cf. Eq. \eqref{eq:PoissonNewton2}). The last equation can be rewritten as follows:
\begin{equation}
\dot{\delta}_m-k^2v_m+\psi \delta_m=-\left(\frac{\delta\dot{\rho}_\Lambda+\Phi \dot{\rho}_\Lambda}{\rho_m}\right)\,.
\end{equation}
As we have checked before, vacuum fluctuations are negligible at subhorizon scales as compared to matter fluctuations, so the first term on the {\it r.h.s.} can be clearly neglected when compared with the first term of the {\it l.h.s.} The second term on the {\it r.h.s.} can be neglected too, but for a different reason. Because of the Poisson equation \eqref{eq:PoissonNewton2}, $\Phi\propto G a^2 \delta\rho_m/k^2$, at subhorizon scales. It follows that
\begin{equation}\label{eq:ratiosubhorizon}
\frac{\Phi \dot{\rho}_\Lambda/\rho_m}{\psi \delta_m}\propto \frac{G a^2\rho_m}{k^2}\propto\frac{\mathcal{H}^2}{k^2}\ll1
\end{equation}
and therefore we are allowed to neglect the second term of the {\it r.h.s.} as well, so we recover once more the continuity equation \eqref{eq:ContinuityNewton2}.
\section{Density perturbations with vacuum dynamics at subhorizon scales in the synchronous gauge}\label{sect:SynchronousGauge}
To hammer this important point home from a different perspective, let us now obtain the same equation \eqref{eq:DensityContrastEq} for the matter density contrast at low scales using the synchronous gauge (Ma \& Bertschinger 1995). This will help to further illustrate the robustness of our previous results. In the synchronous gauge the perturbed FLRW metric reads:
\begin{equation}
ds^2=dt^2-(a^2\delta_{ij}-h_{ij})dx^i dx^j\,.
\end{equation}
In this case the three basic perturbations equations read (see e.g. Grande, Pelinson \& Sol\`a 2009):
\begin{equation}\label{eq:syn1}
\mathring{\hat{h}}+2H\hat{h}=8\pi G\sum_{i=\Lambda,m}(\delta\rho_i+3\delta p_i)\,,
\end{equation}
\begin{equation}\label{eq:syn2}
\sum_{i=\Lambda,m}\mathring{\delta\rho_i}+(\rho_i+p_i)\left(\theta_i-\frac{\hat{h}}{2}\right)+3H(\delta\rho_i+\delta p_i)=0\,,
\end{equation}
\begin{equation}\label{eq:syn3}
\sum_{i=\Lambda,m} \mathring{\theta}_i(\rho_i+p_i)+\theta_i\left[\mathring{\rho}_i+\mathring{p}_i+5H(\rho_i+p_i)\right]=\frac{k^2}{a^2}\sum_{i=\Lambda,m}\delta p_i\,,
\end{equation}
in which $\hat{h}=\frac{\partial}{\partial t}\left(\frac{h_{ii}}{a^2}\right)=-\mathring{h}$ ($h_{ii}$ being the trace) is one of the two scalar modes of the spatial metric perturbations in this gauge, and $\theta_i$ is the covariant divergence of the velocity perturbation of the $i$-th component, i.e. $\theta=\nabla_\mu\delta u^\mu$, with $\delta u^\mu=\frac{1}{a}\left(0,\vec{v}\right)$ and $\vec{v}=\vec{\nabla}v$. We assume that the vacuum has no peculiar velocity and, again, that $\delta\rho_m\gg\delta\rho_\Lambda$ and $\mathring{\delta\rho_m}\gg\mathring{\delta\rho_\Lambda}$. In Fourier space we obtain:
\begin{figure*}
\includegraphics[scale=0.6]{fsigma8}
\caption{{\it Left plot:} The weighted growth rate for the $\Lambda$CDM, the XCDM and the RVM, obtained by using the best-fit values of Table 1. The values of $\sigma_8$ that we obtain for these models are also indicated. We also plot the reconstructed $f(z)\sigma_8(z)$ curve and its $1\sigma$ uncertainty band, both obtained by using the observational data (depicted in green) and the Gaussian processes method (GPM) with Cauchy's kernel, see e.g. (Seikel, Clarkson \& Smith 2012) and references therein. Almost identical results are obtained using alternative kernels such as the Gaussian or Mat\'ern ones. This is to show the preference of the data for lower values of this LSS observable; {\it Right plot:} The relative (percentage) difference of the weighted growth rate with respect to the concordance model, $\Delta_{f\sigma_8}$, as defined in \eqref{eq:Deltafsigma8}.
\label{fig:fsigma8}}
\end{figure*}
\begin{align}
\mathring{\hat{h}}+2H\hat{h}&=8\pi G \delta\rho_m\,,\label{eq:syn1b}\\
\mathring{\delta\rho_m}+\rho_m\left(\theta_m-\frac{\hat{h}}{2}\right)+3H\delta\rho_m&=0\,,\label{eq:syn2b}\\
\rho_m\mathring{\theta}_m+(\mathring{\rho}_m+5H\rho_m)\theta_m&=-\frac{k^2}{a^2}\delta\rho_\Lambda\label{eq:syn3b}\,.
\end{align}
The last equation can be cast as follows,
\begin{equation}\label{eq:thetaEq}
\mathring{\theta}_m+\theta_m\left(\bar{\psi}+2H\right)=-\frac{k^2}{a^2}\frac{\delta\rho_\Lambda}{\rho_m}\,,
\end{equation}
with $\bar{\psi}=-\mathring{\rho}_\Lambda/\rho_m$, which is the analog in cosmic time of ${\psi}=-\dot{\rho}_\Lambda/\rho_m$ defined in the previous sections with conformal time.
In coordinate space,
\begin{equation}
\theta_m=\nabla_\mu\delta u^\mu=\nabla_\mu(g^{\mu\nu}\delta u_\nu)=g^{ij}\partial_j \delta u_i+\mathcal{O}(2)\,,
\end{equation}
or
\begin{equation}
\theta_m=\frac{-\vec{\nabla}\cdot(-a\vec{v}_m)}{a^2}+\mathcal{O}(2)=\frac{\nabla^2v_m}{a}+\mathcal{O}(2)\,,
\end{equation}
where $\mathcal{O}(2)$ refers to second order perturbations. In momentum space,
\begin{equation}
\theta_m=-\frac{k^2}{a}v_m+\mathcal{O}(2)\,.
\end{equation}
Substituting this in \eqref{eq:thetaEq} the $k$-dependence cancels and we find:
\begin{equation}\label{eq:mathringv}
\mathring{v}_m+Hv_m+v_m\bar{\psi}=\frac{1}{a}\frac{\delta\rho_\Lambda}{\rho_m}\,.
\end{equation}
Equation (\ref{eq:mathringv}) is the momentum conservation equation for the matter particles in the synchronous gauge. As in the Newtonian gauge, we do not want this equation to be modified with respect to the $\Lambda$CDM one, where $\mathring{v}_m+Hv_m=0$. Thus, we impose $\delta\rho_\Lambda=\bar{\psi}v_m\rho_m$, which is formally the same kind of relation that we obtained in the Newtonian gauge. But we must still fix the residual gauge freedom characteristic of the synchronous gauge (Ma \& Bertschinger 1995).
In the Newtonian gauge we have the Bardeen potentials, $\Phi$ and $\Psi$, with $\Psi=-\Phi$ for perfect fluids. In the synchronous gauge, one can see that $\Phi$ is absorbed in the trace of $h_{ij}$ (see e.g. Grande, Pelinson \& Sol\`a 2009). The second scalar mode existing in this gauge (which is contained in the longitudinal part of the metric) still gives us a residual gauge freedom that we are entitled to use. With it we can make a suitable choice that encompasses the situation in the $\Lambda$CDM, in which $\bar{\psi}=0$ and the residual gauge freedom can be fixed by setting $v_m=0$. Similarly, in the DVMs in general, and in the RVM in particular, we can exploit the residual gauge freedom by imposing that the peculiar velocity of the matter particles is zero, i.e. once more the comoving frame condition $v_m=0$ (see e.g. Wang, Wands, Xu, De-Santiago \& Hojjati 2013; Wang, Wands, Zhao \& Xu 2014). This setting automatically leads to $\delta\rho_\Lambda=0$ from (\ref{eq:mathringv}). It follows that the vacuum energy perturbations vanish in the comoving frame of matter, and as a result the dependence on $k$ can be seen to drop from all the above equations.
In the Newtonian gauge we have a qualitatively different picture, which can nevertheless be made quantitatively coincident under appropriate conditions. The presence of the potential $\Phi$ (which is nothing but the Newtonian potential in the nonrelativistic limit) is inherently associated with $k$ through the Poisson equation. Such $k$-dependence, however, disappears from the resulting matter density perturbations equation \eqref{eq:DensityContrastEq}, but only if we work at scales deeply below the horizon. In this sense the conformal Newtonian gauge is more physical, since it tracks continuously the $k$-dependence and informs us of the physical conditions under which such dependence becomes negligible.
In the synchronous gauge there is no Newtonian limit; notwithstanding, the same physical result for the matter density perturbations ensues if we use the mentioned comoving setting for the peculiar velocities of the matter particles, viz.
$v_m=0$, which implies $\delta\rho_{\CC}=0=\theta_m$. Indeed, upon appropriate manipulation of the remaining equations \eqref{eq:syn1b} and \eqref{eq:syn2b} we can derive the following second order differential equation for the matter perturbations within the synchronous gauge and in cosmic time:
\begin{equation}\label{diffeqD}
\ringring{\delta}_m+\left(2H+\bar{\psi}\right)\,\mathring{\delta}_m-\left(4\pi
G\rmr-2H{\bar{\psi}}-\mathring{\bar{\psi}}\right)\,\delta_m=0\,.
\end{equation}
This is the final equation for matter perturbations well below the horizon. At this point we can jump once more back to conformal time with the help of simple relations such as $\mathring{\delta}= \dot{\delta}/a$, $\ringring{\delta}=(\ddot{\delta}-\mathcal{H}\dot{\delta})/a^2$, as well as $\bar{\psi}=\psi/a$ and $\mathring{\bar{\psi}}=(\dot{\psi}-\mathcal{H}\psi)/a^2$. In this way we arrive once more at Eq.\,\eqref{eq:DensityContrastEq}. The final matter perturbations equation in both gauges is therefore the same, provided we consider scales sufficiently small as compared to the horizon. This result, which was well known for the $\CC$CDM (Ma \& Bertschinger 1995), has been proven here to hold for the DVMs too.
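Explicitly, multiplying Eq.\,\eqref{diffeqD} by $a^2$ and inserting the above relations gives
\begin{equation*}
\ddot{\delta}_m-\mathcal{H}\dot{\delta}_m+(2\mathcal{H}+\psi)\dot{\delta}_m-\left(4\pi G a^2\rho_m-2\mathcal{H}\psi-\dot{\psi}+\mathcal{H}\psi\right)\delta_m=0\,,
\end{equation*}
which, upon collecting terms, is precisely Eq.\,\eqref{eq:DensityContrastEq}.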
Let us remark that even if we did not fix the residual gauge condition by picking a comoving frame for the matter particles, namely if we instead used $\delta\rho_\Lambda=\bar{\psi} v_m\rho_m\neq 0$ and solved $\mathring{v}_m+Hv_m=0$, we would find that the velocity potential reads $v_m(a)=v_m(a_i)a_i/a$, where $a_i$ is some initial value of the scale factor far in the past but still in the MD epoch. This solution is a decaying mode and therefore it is expected to have a completely negligible effect on the matter energy density perturbations. In practice, we can set $v_m(a)\simeq 0$, which directly leads to $\delta\rho_\Lambda\simeq 0$, because the latter is not only proportional to $v_m$ but also to $\bar{\psi}$, so in practice the quantity $\delta\rho_\Lambda=-\mathring{\rho}_\CC v_m$ is additionally suppressed by the tiny time variation of $\rL$.
The fact that at scales deeply inside the horizon the differential equation that controls the evolution of the matter density perturbations is the same in both gauges is somehow expected, since it is already so in the $\Lambda$CDM case (see e.g. Ma \& Bertschinger 1995). In the Newtonian gauge one measures a non-zero matter longitudinal velocity field, whereas in the synchronous gauge the observer free-falls with the matter fluid flow. Therefore in this gauge no peculiar velocity for matter particles is measured, although the observer can in principle detect differences between the baryon and dark matter peculiar velocities. The concordance of the matter perturbations in the two gauges also reflects the fact that there is no significant scale-dependence (i.e. $k$-dependence) in the evolution of the density perturbations below the horizon, which means that all subhorizon modes evolve essentially alike, the corrections being of order $\mathcal{H}^2/k^2\ll1$. Gauge differences, however, can be significant for calculations involving very large scales, and of course for super-horizon scales. In that case, one must keep the appropriate $k$-dependence (as e.g. in the Newtonian gauge) or resort to a gauge-invariant formalism (Bardeen 1980; Kodama \& Sasaki 1984). Let us also note that the reason why the physical discussion can be more transparent in the Newtonian gauge is that the time slicing in this gauge respects the isotropic expansion of the background. The synchronous gauge, instead, corresponds to free-falling observers at all points (as previously indicated), which implies that its predictions are relevant only to length scales significantly smaller than the horizon; but at these scales it renders the same physics as the Newtonian gauge (Ma \& Bertschinger 1995; Mukhanov, Feldman \& Brandenberger 1992). While all these facts have been known for a long time for the $\CC$CDM, here we have re-examined them in the context of dynamical vacuum models, and we have shown that the main features are preserved.
A short summary of the analysis presented in the last two sections is now in order. By imposing a fully consistent physical condition in both gauges or, equivalently, by choosing an appropriate interaction 4-vector $Q_\mu=Qu_\mu$ that ensures the setting in a covariant manner, we have shown the following two important results: (i) the vacuum energy density perturbations are definitely negligible at low (subhorizon) scales compared to the matter ones; and (ii) in both considered gauges (Newtonian and synchronous) we find the same modified law (i.e. different from the $\CC$CDM one) governing the matter density contrast in the presence of vacuum dynamics, viz. Eq. \eqref{eq:DensityContrastEq}, or equivalently Eq.\,(\ref{diffeqD}). In the absence of that dynamics, the modified equation reduces to the standard $\CC$CDM one.
\section{Weighted growth rate and matter power spectrum}
The weighted growth rate, $f(z)\sigma_8(z)$, has become one of the most important LSS observables because of its ability to constrain cosmological models. One of its main advantages is that it is independent of the bias between the observed galaxy spectrum and the underlying (total) matter power spectrum (Guzzo et al. 2008; Song \& Percival 2009), and therefore it is protected from the side effects that might be introduced by the assumption of a particular fiducial cosmological model in the calculation of the bias factor $b(z)$. Nevertheless, these data points are not completely model-independent, since the observational teams must assume a specific fiducial model (the $\Lambda$CDM, for convenience) in order to infer cosmological distances from the measured redshifts. This model-dependence can be removed from the $f(z)\sigma_8(z)$ data points by e.g. rescaling them as in (Macaulay, Wehus \& Eriksen 2013; Nesseris, Pantazis \& Perivolaropoulos 2017). We have explicitly checked that, when applied, the mentioned correction has very low impact on the fitting results presented in Table 1. We find that all the numbers remain almost unaltered, e.g. the values of $\chi^2_{\rm min}$ for the various models, which only undergo very mild corrections of $0.5\%$ at most; or the RVM parameter, which after the data rescaling reads $\nu=0.00162\pm0.00042$ and, therefore, keeps the very same level of significance ($\sim 3.85\sigma$) as the one shown in Table 1.
The weighted growth rate is given by the product of the growth rate $f(z)$, defined in \eqref{eq:growthrate}, and
\begin{equation}
\sigma_8(z)=\sigma_8\delta_m(z)/\delta_m(z=0)\,.
\end{equation}
In the left plot of Fig. 4 we show the theoretical curves of the weighted growth rate for the $\Lambda$CDM, the XCDM parametrization and the RVM. In the concordance model there is an obvious excess of power when compared with the other two scenarios, especially with the RVM. The exact relative difference with respect to the concordance model, defined as
\begin{equation}\label{eq:Deltafsigma8}
\Delta_{f\sigma_8}(z)\equiv100\cdot\frac{f(z)\sigma_8(z)\bigg\rvert_{{\rm Y}}-f(z)\sigma_8(z)\bigg\rvert_{\Lambda{\rm CDM}}}{f(z)\sigma_8(z)\bigg\rvert_{\Lambda{\rm CDM}}}\,,
\end{equation}
with Y = (XCDM, RVM), can be read off in the right plot of the same figure. It reaches a (negative) $2-4\%$ level in the XCDM and is enhanced up to $8-9\%$ (negative too) in the RVM. Now, because the $f(z)\sigma_8(z)$ data points lie some $\sim 8\%$ below the $\CC$CDM prediction (which is nothing more than the aforementioned $\sigma_8$-tension), the mentioned relative differences allow the RVM to fit the LSS data better than the other two models under study. Such differences in $f(z)\sigma_8(z)$ actually agree with those that are found in the value of the $\sigma_8$ parameter (cf. the legend of the left plot in Fig. 4), which is around $3.4\%$ lower in the XCDM and $8.4\%$ lower in the RVM. This is probably telling us that the main source of the differences observed between the theoretical curves of $f(z)\sigma_8(z)$ comes precisely from the predicted values of $\sigma_8$.
The main aim of the present and the next sections is to disentangle the origin of the above differences (\ref{eq:Deltafsigma8}) and to see what role is played by the various parameters involved in the calculation of the weighted growth rate in the framework of both the XCDM and the RVM. It is particularly intriguing to understand how the vacuum parameter $\nu$ in the RVM can have such a great impact on the LSS predictions, taking into account that it is ``only'' of order $\mathcal{O}(10^{-3})$. The answer was advanced in (G\'omez-Valent \& Sol\`a 2018) and here we will provide more details.
The following question should be addressed: Why are the induced changes with respect to the $\Lambda$CDM not of order $\nu$, as one could naively expect at linear order? It was shown in (G\'omez-Valent, Sol\`a \& Basilakos 2015) that in the non-linear perturbations regime the effect of a non-null $\nu$ can be very big, giving rise to differences in the prediction of the collapsed number of halos that can reach the $50\%$ level in some cases for typical values of $\nu\sim 10^{-3}$. This was studied in the context of an improved version of the Press-Schechter formalism (Press \& Schechter 1974), see (G\'omez-Valent, Sol\`a \& Basilakos 2015) for details. The effects in the linear perturbations regime are not as big as those observed in the non-linear one, but they are nevertheless larger than one would expect at first glance. An explanation is therefore mandatory at this point, and we do offer it here in detail.
Before going on, let us write $\sigma_8(z)$ in a convenient way, which will allow us to better capture the physical information encoded in it:
\begin{equation}\label{eq:s88generalNN}
\sigma_8^2(z)=\delta_m^2(z)\int\frac{d^3k}{(2\pi)^3}\, P(k,\vec{p})\,\,W^2(kR_8)\,.
\end{equation}
Here $P(k,\vec{p})=P_0\,k^{n_s}T^2(k,\vec{p})$ is the linear matter power spectrum, $P_0$ is its normalization factor and $T(k,\vec{p})$ the matter transfer function, with $\vec{p}$ being the vector that contains the parameters of the model. Function $P(k,\vec{p})$ gives the spectrum, i.e. the Fourier transform of the two-point correlation function of the primordial linear density field, whereas $T(k,\vec{p})$ modulates the shape of the gravitational potential in the MD epoch for every mode. For the latter we have adopted the usual BBKS form (Bardeen, Bond, Kaiser \& Szalay 1986):
\begin{equation}\label{eq:BBKS}
\begin{array}{ll}
T(x) = &\frac{\ln (1+0.171 x)}{0.171\,x}\Big[1+0.284 x + (1.18 x)^2+\\
& + \, (0.399 x)^3+(0.490x)^4\Big]^{-1/4}\,.
\end{array}
\end{equation}
Originally, $x=k/k_{eq}$, where
\be\label{keqDef}
k_{eq}=a_{eq}H(a_{eq})
\ee
is the value of the comoving wavenumber at the equality scale $a_{eq}$ between matter and radiation densities: $\rho_r(a_{eq})=\rho_m(a_{eq})$. It is well-known that \eqref{eq:BBKS} does not incorporate the effects produced by the tightly coupled photon-baryon plasma before the decoupling time. The fight between pressure and gravity in this coupled system generates the baryon acoustic oscillations in the matter power spectrum at ``small'' scales, i.e. for $k>k_{eq}$. The baryon density effects can be introduced in \eqref{eq:BBKS} through the modified shape parameter $\tilde{\Gamma}$ (Peacock \& Dodds 1994; Sugiyama 1995) in $x=k/(k_{eq}\tilde{\Gamma})$, with
\be
\tilde{\Gamma}=e^{-\Omega_b-\sqrt{2h}\frac{\Omega_b}{\Omega_m}}\,.
\ee
Alternatively, one can use the transfer function provided in (Eisenstein \& Hu 1998) instead of the BBKS one; the former already includes the baryonic effects. We have checked that the use of this alternative matter transfer function does not produce any significant change in our results, so we stick to the BBKS form, incorporating the baryon effects through the shape parameter $\tilde{\Gamma}$ as explained above, since it is easier to deal with from an analytical point of view.
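The structure of the $\sigma_8$ computation just described can be summarized in a few lines of Python. The sketch below evaluates the integral of Eq.\,\eqref{eq:s88generalNN} with the BBKS transfer function and the top-hat window defined below in Eq.\,\eqref{eq:WBessel}; all parameter values, including $k_{eq}$, are illustrative, and the normalization $P_0$ is left as a placeholder (in the text it is fixed through Eq.\,\eqref{eq:P0} below):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

h, Om, Ob, ns = 0.69, 0.30, 0.048, 0.97    # illustrative parameters
R8 = 8.0/h                                  # Mpc
k_eq = 0.010                                # Mpc^-1, illustrative value
Gamma = np.exp(-Ob - np.sqrt(2.0*h)*Ob/Om)  # shape parameter

def T_bbks(k):                              # BBKS form, Eq. (eq:BBKS)
    x = k/(k_eq*Gamma)
    return (np.log(1.0 + 0.171*x)/(0.171*x) *
            (1.0 + 0.284*x + (1.18*x)**2
             + (0.399*x)**3 + (0.490*x)**4)**(-0.25))

def W_tophat(y):                            # top-hat window, Eq. (eq:WBessel)
    return 3.0*(np.sin(y) - y*np.cos(y))/y**3

# d^3k/(2 pi)^3 -> k^2 dk/(2 pi^2) for an isotropic integrand
I, _ = quad(lambda k: k**(2.0 + ns)*T_bbks(k)**2*W_tophat(k*R8)**2
            / (2.0*np.pi**2), 1e-5, 50.0, limit=400)
P0 = 1.0                                    # placeholder normalization
print("sigma8 today (up to the delta_m factor):", np.sqrt(P0*I))
\end{verbatim}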
We remark that $k_{eq}$ is a model-dependent quantity, which departs from the $\CC$CDM expression in those models in which matter and/or radiation are governed by a nonstandard continuity equation in which matter exchanges energy with vacuum, such as e.g. in the RVM. For the concordance model and the XCDM parametrization, $k_{eq}$ has the simplest expression:
\begin{equation}\label{keqCCprev}
k^\CC_{eq} = H_0\,\Omega_m\sqrt{\frac{2}{\Omega_r}}\,.
\end{equation}
\begin{figure*}
\includegraphics[scale=0.45]{RVMconsLCDMdegen}
\caption{{\it Left plot:} Reconstruction of the $f(z)\sigma_8(z)$ curve of the RVM from the $\Lambda$CDM one. See the text in Sect. 7 for a detailed explanation; {\it Right plot:} Curves of $f(z)\sigma_8(z)$ obtained for the $\Lambda$CDM, with different values of $\Omega_m$ and $h$ satisfying the same relation $\omega_m\equiv\Omega_m h^2=0.1412$. This is to show the approximate degeneracy of the LSS results under modifications of $\Omega_m$ and $h$ that fully respect the strong constraint on $\omega_m$ coming from the CMB data, as explained in the text.
\label{fig:RVMcons}}
\end{figure*}
For the RVM, however, it is not possible to find a formula as compact as \eqref{keqCCprev}. The corresponding expression for $a_{eq}$ is quite involved in this case. It follows from $\rho_r(a_{eq})=\rho_m(a_{eq})$, in which $\rho_m(a)$ is the function (\ref{eq:rhoM}) and $\rho_r(a)$ is the standard density formula for conserved radiation. We find:
\begin{eqnarray}\label{eq:aeqRVM}
\textrm{RVM}:\quad a_{eq}& =& \left[\frac{\Omega_r(1+7\nu)}{\Omega_m(1+3\nu)+4\nu\Omega_r}\right]^{\frac{1}{1+3\nu}}\\
& =&\frac{\Omega_r}{\Omega_m}\left[1+4\nu-3\nu\ln\left(\frac{\Omega_r}{\Omega_m}\right)\right]+\mathcal{O}(\nu^2)\nonumber\,,
\end{eqnarray}
where in the last equality we have expanded up to linear terms in the small parameter $\nu$.
On the other hand, the Hubble rate at the equality time in the RVM can be extracted from (\ref{eq:H2RVM}). It reads, in very good approximation,
\begin{equation}\label{eq:E2eqRVM}
E^2(a_{eq})=\frac{\Omega_m^4}{\Omega_r^3}\left[2-30\nu+24\nu\ln\left(\frac{\Omega_r}{\Omega_m}\right)\right]+\mathcal{O}(\nu^2)\,.
\end{equation}
Thus, at linear order in the vacuum parameter, the wave number at equality, $k_{eq}$, takes the following form in the RVM:
\begin{equation}\label{eq:keqRVM}
k^{\rm RVM}_{eq}=k^\CC_{eq}\left[1-\frac{7\nu}{2}+3\nu\ln\left(\frac{\Omega_r}{\Omega_m}\right)\right]+\mathcal{O}(\nu^2)\,,
\end{equation}
where $k^\CC_{eq}$ is the standard value (\ref{keqCCprev}).
As expected, for $\nu=0$ we retrieve the values of $a_{eq}$ and $E^2_{eq}$ in the $\CC$CDM, i.e. $a_{eq}\to\Omega_r/\Omega_m$ and $E^2(a_{eq})\to 2\Omega_m^4/\Omega_r^3$, and also $k_{eq}\to k^\CC_{eq}$. Moreover, it is worth stressing at this point that although $\nu\sim \mathcal{O}(10^{-3})$, the relative change in $k_{eq}$ caused by it is not just of order $\nu$: it is significantly enhanced by the large logarithm, up to roughly $3\nu|\ln ({\Omega_r}/{\Omega_m})|\sim 25\nu$, hence a result which is comfortably one order of magnitude larger than naively expected. This point will be important in the discussion of Sect. 7.
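These shifts are easy to quantify. A short Python sketch (with illustrative values of $\nu$, $\Omega_m$ and $\Omega_r$, not the fitted ones) evaluates Eqs.\,\eqref{eq:aeqRVM} and \eqref{eq:keqRVM}:
\begin{verbatim}
import numpy as np

nu, Om, Or = 1.5e-3, 0.30, 9.0e-5          # illustrative values
a_eq_LCDM = Or/Om
a_eq_RVM = (Or*(1.0 + 7.0*nu)
            / (Om*(1.0 + 3.0*nu) + 4.0*nu*Or))**(1.0/(1.0 + 3.0*nu))
dk_rel = -3.5*nu + 3.0*nu*np.log(Or/Om)    # linear-order shift, Eq. (eq:keqRVM)
print("a_eq: LCDM %.4e   RVM %.4e" % (a_eq_LCDM, a_eq_RVM))
print("relative k_eq shift: %.2f%% (about %.0f*nu)"
      % (100.0*dk_rel, abs(dk_rel)/nu))
\end{verbatim}
For these inputs the relative shift of $k_{eq}$ is about $-4\%$, i.e. a few tens of times $\nu$, illustrating the logarithmic enhancement just described.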
Function $W(kR_8)$ in Eq.\,\eqref{eq:s88generalNN} is a top-hat smoothing function, which can be expressed in terms of the spherical Bessel function of order $1$, as follows:
\begin{equation}\label{eq:WBessel}
W(kR_8)=3\,\frac{j_1(kR_8)}{kR_8}=\frac{3}{k^2R_8^2}\left(\frac{\sin{\left(kR_8\right)}}{kR_8}-\cos{\left(kR_8\right)}\right)\,,
\end{equation}
with $R_8=8{h^{-1}}$ Mpc. In the fitting analysis of (Sol\`a, G\'omez-Valent \& de Cruz P\'erez 2017d), from which we have taken the values of the various parameters (cf. Table 1), we have fixed the power spectrum normalization factor $P_0$ as follows,
\begin{equation}\label{eq:P0}
\begin{small}
P_0=\frac{\sigma_{8,\Lambda}^2}{\delta^2_{m,\Lambda}}\left[\int_{0}^{\infty} \frac{d^3k}{(2\pi)^3}k^{n_{s,\Lambda}}T^2(k,\vec{p}_{\Lambda})W^2(kR_{8,\Lambda})\right]^{-1}\,,
\end{small}
\end{equation}
where the chosen values of the parameters in this expression define a fiducial model. Specifically, we have set $\delta_{m,\Lambda}\equiv\delta_{m,\Lambda}(z=0)$ and the parameters of the vector $\vec{p}_\Lambda$ are taken to be equal to those from the Planck 2015 TT,TE,EE+lowP+lensing analysis (Planck Collab. XIII 2016). The subscript $\CC$ in all these parameters denotes such a setting. In particular, $\sigma_{8,\Lambda}$ in \eqref{eq:P0} is also taken from the aforementioned Planck 2015 data. However, $\delta_{m,\Lambda}$ in the same formula is computable: it is the value of $\delta_m(z=0)$ obtained from solving the perturbations equation of the $\CC$CDM, i.e. Eq. \eqref{eq:DensityContrastEq} with $\psi=0$, using the mentioned fiducial values of the other parameters.
Another way of dealing with the normalization of the power spectrum would consist in leaving $P_0$ free in the fitting analysis, while forcing it to satisfy the Planck 2015 CMB bounds. The point is that the Planck Collaboration does not provide such constraints. Alternatively, they provide the central value and associated uncertainty of the $A_s$ parameter, i.e. the normalization factor of the (dimensionless) primordial power spectrum of the scalar perturbations,
\begin{equation}
\mathcal{P}_{\mathcal{R}}(k)\equiv A_s\left(\frac{k}{k_*}\right)^{n_s-1}\,,
\end{equation}
where $k_*=0.05\,{\rm Mpc}^{-1}$ is Planck's pivot scale (Planck Collab. XIII 2016). The relation between $A_s$ and $P_0$ can be found using standard formulae (see e.g. Gorbunov \& Rubakov 2011; Amendola \& Tsujikawa 2015). We find:
\begin{equation}
P_0=A_s\frac{8\pi^2}{25}\frac{k_*^{1-n_s}}{(\Omega_m h^2)^2(100\varsigma)^4}\,,
\end{equation}
with $\varsigma\equiv 1$ km/s/Mpc$=2.1332\times10^{-44}\,{\rm GeV}$ (in natural units), and $h$ is defined as usual through $H_0=100\,h\,\varsigma$. Let us note that both $P_0$ and $A_s$ encode information on the primordial universe, whereas $\sigma_8$ strongly depends on the physics of the late-time expansion and, therefore, on the features of the DE or vacuum energy, in particular on its possible time-evolution. It is thus natural to rely on the Planck 2015 constraint on $A_s$ rather than on $\sigma_8$, since the latter is clearly more sensitive to the $\Lambda$CDM assumption used in Planck's analysis. This is the reasoning that has motivated the fitting scheme followed by us, in which $\sigma_8$ is a quantity computed from the fitting parameters of Table 1 and Eq.\,(\ref{eq:s88generalNN}) rather than picking up some a priori fiducial value. Notice, also, that the constraint extracted from the Planck 2015 TT,TE,EE+lowP+lensing analysis is $10^9 A_s=2.130\pm 0.053$. It is worth remarking that the uncertainty on the value of $A_s$ is only of $\sim 2.5\%$. Thus, a variation of $A_s$ respecting the tight margin left by that constraint is completely unable to account for the observed deficit of structure formation in the context of the $\Lambda$CDM, as we have checked. In other words, this narrow freedom cannot be used to relax the $\sigma_8$-tension in the context of the $\CC$CDM (cf. Fig. 4, where we show that the needed relative change in $f(z)\sigma_8(z)$ is around $\sim 8\%$). Lower values of $A_s$ (or, equivalently, of $P_0$) would of course be very welcome by the LSS data, since the theoretical curve of $f(z)\sigma_8(z)$ would be lowered, but such values of the power spectrum normalization would then be in tension with the Planck 2015 CMB constraint. We conclude that a variation of $A_s$ alone is unable to explain, in a consistent way, the needed reduction in $f(z)\sigma_8(z)$. Such a cul-de-sac situation for the $\CC$CDM suggests that a new dynamical variable beyond the $\CC$CDM may be necessary to account for the $\sigma_8$-tension. We propose that the needed variable is connected with the dynamical character of the DE, in contrast to the rigid status of $\CC$ in the $\CC$CDM. In the next section we illustrate the benefits that are obtained concerning the $\sigma_8$-tension if we adopt a DDE point of view. In the previous sections we have prepared the ground for such a calculation both at the background and perturbations level. By performing a detailed computation of the growth rate within the RVM we find that $\sigma_8$ becomes reduced by precisely the desired amount of $\sim 8\%$ if we use the fitting values from Table 1. We also compare with the corresponding result within the XCDM.
\section{Analytical calculation of $\Delta_{\lowercase{f}\sigma_8}(\lowercase{z})$ in the RVM: solving the $\sigma_8$-tension}
In this section we provide a detailed analytical calculation, supported by numerical analysis, aimed at explaining how and why the RVM is capable of producing the necessary $\sim 8\%$ reduction of $\sigma_8$, and in general of $f(z)\sigma_8(z)$, with respect to the $\CC$CDM. It is well-known that the $\CC$CDM predicts a too large value of $\sigma_8$ and hence an exceeding structure formation power that is unable to explain the LSS data represented by the $f(z)\sigma_8(z)$ observations, see our Fig. 4. In the following we will show how the vacuum coefficient $\nu$ of the RVM is capable of providing the necessary $\sim 8\%$ decrease despite the fact that its fitted value (cf. Table 1) is of order $\nu\sim 10^{-3}$. Let us also mention at this point that there are a few alternative approaches attempting to cure the $\sigma_8$-tension, e.g. using possible effects of viscosity of the cosmic fluid (Anand et al. 2017), or some phenomenological interactions between DM and DE (Barros et al. 2018; An, Feng \& Wang 2017; Wang et al. 2016), or even using a small amount of spatial curvature (Ooba, Ratra \& Sugiyama 2017). Another potentially significant effect comes from the impact of massive neutrinos, see the studies by Hamann \& Hasenkamp 2013; Battye \& Moss 2014; Salvatelli et al. 2014, and the recent works by Lorenz, Calabrese \& Alonso 2017 and Mishra-Sharma, Alonso \& Dunkley 2018. In fact, dynamical dark energy models may exhibit degeneracies with the cosmic neutrino background, since massive neutrinos can suppress the power spectrum (and hence the structure formation) on small scales (see Hu, Eisenstein \& Tegmark 1998; Shoji \& Komatsu 2010). However, the above mentioned papers show that the effect proves insufficient to relax the $\sigma_8$-tension if the allowed neutrino mass hierarchies are to be respected. Other recent works have examined this problem within particular DDE models (e.g. Guo, Zhang \& Zhang 2018; Park \& Ratra 2018; McCarthy et al. 2018). In another vein, it has been suggested that one can mitigate the tension by allowing the amplitude of the CMB lensing power spectrum, $A_{Lens}$, to be free when fitting the TT power spectrum rather than fixing it to its natural value of unity, which might reflect an unaccounted-for systematic issue, see e.g. Addison et al. 2017 and McCarthy et al. 2018. These various possibilities deserve of course further examination, but here we wish to put the emphasis on the impact from dynamical vacuum energy, and in particular within the framework of the RVM. It turns out that a detailed study of this problem within the RVM is feasible both at the analytical and numerical level, and we shall show next that the results are perfectly consistent. The remarkable outcome is that vacuum dynamics alone can dispose of the $\sigma_8$-tension. Subsequent studies on the interplay between the various types of mentioned alternative effects should be interesting, of course, but they are beyond the reach of the current study.
The solution to the $\sigma_8$-tension that we are proposing here with the help of the RVM was first advanced by us in (G\'omez-Valent \& Sol\`a 2018). It is truly an economical and efficient solution, in the sense that the tension becomes fully relaxed. This result is obtained not by focusing exclusively on the LSS data but by considering a global quality fit to the entire string of SNIa+BAO+$H(z)$+LSS+CMB observations. The fit quality of the RVM is substantially better than that of the $\CC$CDM, see Table 1.
\begin{figure*}
\includegraphics[scale=0.85]{RVMdifOrigin2range}
\caption{{\it Left plot:} Relative difference in the transfer function \eqref{eq:BBKS} between the RVM (cf. Table 1) and the $\Lambda$CDM (with $\nu=0$ but the other parameters chosen equal to the RVM ones) as a function of $k$, i.e. $\Delta_T(k)=100\cdot (T_{\rm RVM}(k)-T_\Lambda(k))/T_{\Lambda}(k)$; {\it Right plot:} Product of the functions entering the integral of \eqref{eq:s88generalNN}, also as a function of $k$. The range of wave numbers at which this product attains its largest values is the range most sensitive to the relative differences induced by $\nu$ in the transfer function. We have marked off this approximate range of $k$'s by red vertical dashed lines in both plots in order to ease the visualization.
\label{fig:RVMdifOrigin2range}}
\end{figure*}
First of all let us focus our attention on the left plot of Fig. 5. It is aimed at showing the individual impact of each parameter on the $f(z)\sigma_8(z)$ observable. A short description of this plot is in order. We take as baseline model the $\Lambda$CDM with the fitted values provided in Table 1. The corresponding curve is the black one. To obtain the dotted blue curve, we have only changed the value of $n_s$ with respect to the reference line, setting it to the best-fit value of the RVM. The curve moves mildly upwards. We see that the effect of this change is derisory ($\sim 1\%$). The other curves plotted therein are obtained upon progressively setting the various parameters to the values obtained in the fitting
analysis of the RVM (cf. Table 1). If we not only change the value of $n_s$ but also set $\Omega_m$ to the RVM value, we obtain the dashed red curve (labelled $n_s+\Omega_m$), which is noticeably higher. Clearly these two changes push the prediction in the wrong direction, since the resulting curves are shifted upwards and therefore imply even higher structure formation power than the concordance model (the black curve). As indicated, the remaining curves are obtained by sequentially incorporating the changes in the other parameters into the previous configurations, analogously to the procedure described before. The next change reverts the ``wrong'' movements made before. Indeed, the dashed purple curve (referred to as $n_s+\Omega_m+h$ in the legend), which is obtained upon adding the change in $h$ to the previous situation, now lies very near (just slightly below) the $\Lambda$CDM one (the relative difference is only about $\sim 2\%$). This means that the change in $h$ is significant enough to counteract the previous unfavorable changes. Later on we will show how this comes about. Concerning $\omega_b$, its variation has almost no effect on $f(z)\sigma_8(z)$, and the corresponding curve (labelled $n_s+\Omega_m+h+\omega_b$, in orange) lies just on top of the previous one. What finally makes a big difference in bringing the theoretical curve in the correct direction is the role played by the vacuum parameter $\nu$. This can be easily appraised by direct comparison of the orange curve and the brown continuous curve (the latter contains, in addition to the former, the effect of $\nu$). In point of fact, a non-null and positive value of $\nu$ is the genuine force capable of dragging the theoretical $f(z)\sigma_8(z)$ curve downwards as a whole by the desired amount ($\sim 8\%$) so as to conform with the data points, and therefore we can assert that $\nu$ is the crucial new ingredient that warrants a fit to the LSS data better than the $\CC$CDM one. Of course, $\nu$ can depart from zero because the other parameters can be readjusted without worsening the fit to the other data sets, and this is also very important. In particular, let us recall that the product $\omega_m\equiv\Omega_m h^2$ is very much constrained by the CMB data, and this fact forces $\omega_m$ to remain very near the value 0.141. In our case we find $\omega_m=0.1412$ for the $\CC$CDM. Actually, we find that the relation $\omega_m=0.141$ defines a degeneracy curve in the $\Omega_m-h$ plane, meaning that if we move on this curve by varying $\Omega_m$ (or $h$) at fixed $\omega_m=0.141$, we find no significant changes in the prediction of $f(z)\sigma_8(z)$ for the $\Lambda$CDM. This is shown in the right plot of Fig. 5, where all curves crowd around the black one.
It is important to understand that the possible modifications of the weighted growth rate caused by a dynamical vacuum scenario can be important only in the recent universe, where the DE starts to dominate over the CDM. The mentioned degeneracy is only approximate near the present time, of course, but it helps to understand how $\Omega_m$ and $h$ can both vary while respecting the CMB bounds and at the same time keeping almost intact the $f(z)\sigma_8(z)$ curve predicted by the concordance model. The RVM, however, can break this degeneracy thanks to the vacuum parameter $\nu$, which for small positive values can bring the LSS curve down, relaxing in this way the well-known tension between the $\Lambda$CDM and the LSS data.
\begin{figure*}
\includegraphics[scale=0.7]{RVMdifOrigin4}
\caption{{\it Upper-left plot:} Weighted growth rate obtained by (i) setting all the parameters to the RVM ones (cf. Table 1); and (ii) keeping the same configuration, but with $\nu=0$. These correspond to the red and black lines, respectively; {\it Upper-right plot:} Relative difference between the curves of the upper-left plot, as defined in \eqref{eq:Deltafsigma8}. The change induced by the non-null vacuum parameter reaches $\sim 6.3\%$ at $z\sim 0$; {\it Lower-left plot:} Relative difference between the density contrasts $\delta_m(z)$ associated to the two scenarios explored in the upper-left plot, expressed in $\%$. The differences in this case are lower than $0.4\%$ for $z<1$; {\it Lower-right plot:} The same, but for the growth function $f(z)$, in $\%$ too. Around the present time, the relative differences attain the $0.8\%$ level.
\label{fig:RVMdifOrigin4}}
\end{figure*}
Now that we have demonstrated graphically the crucial role played by $\nu$ in the needed lowering of the theoretical $f(z)\sigma_8(z)$ curve, we can proceed to study the analytical explanation for the fact that a tiny parameter of order $10^{-3}$ can induce changes in the LSS prediction one order of magnitude larger than $\nu$ itself. To this end, let us start by computing the leading order corrections induced by $\nu$ in the matter transfer function \eqref{eq:BBKS}. The percentage change caused by a non-null $\nu$ (if we keep the other parameters constant) can be computed as follows:
\begin{equation}\label{eq:DeltaTdef}
\Delta_T(k,\nu)=\frac{100}{T_\Lambda}\frac{\partial T(x)}{\partial\nu}\bigg\rvert_{\nu=0}\nu+\mathcal{O}(\nu^2)\,,
\end{equation}
where $T_{\Lambda}\equiv T(x_\CC)$, $x_\CC\equiv k/k^\CC_{eq}$. Let us firstly compute the correction $\Delta_T$ when $x\gg 1$ or, equivalently, when $k\gg k_{eq}$, and see whether we can extract information from this calculation. In this limit the BBKS transfer function \eqref{eq:BBKS} can be approximated just by
\begin{equation}
T(x)\approx \frac{C\ln(1+Ax)}{x^2}\,,
\end{equation}
with $A=0.171$ and $C=(0.171\times 0.49)^{-1}$. In order to compute \eqref{eq:DeltaTdef} it is convenient to use the differentiation chain rule:
\begin{equation}\label{eq:3terms}
\frac{\partial T(x)}{\partial\nu}\bigg\rvert_{\nu=0}=\frac{\partial T(x)}{\partial x}\frac{\partial x}{\partial k_{eq}}\frac{\partial k_{eq}}{\partial\nu}\bigg\rvert_{\nu=0}\,.
\end{equation}
The first factor on the ${\it r.h.s.}$ of this relation reads,
\begin{equation}
\frac{\partial T(x)}{\partial x}\bigg\rvert_{\nu=0}=-\frac{2}{x_\Lambda}T(x_\Lambda)+\frac{CA}{x_\Lambda^2(1+Ax_\Lambda)}\,.
\end{equation}
Taking into account that $k^\CC_{eq}\sim 0.01\,{\rm Mpc}^{-1}$, it is easy to see that for $k\gtrsim 1\,{\rm Mpc}^{-1}$ ($x\gtrsim 100$) the second term in the last expression can be neglected and therefore:
\begin{equation}\label{eq:T1}
\frac{\partial T(x)}{\partial x}\bigg\rvert_{\nu=0}\approx-\frac{2}{x_\Lambda}T_\Lambda\,.
\end{equation}
The second factor on the ${\it r.h.s.}$ of \eqref{eq:3terms} is just
\begin{equation}\label{eq:T2}
\frac{\partial x}{\partial k_{eq}}\bigg\rvert_{\nu=0}=-\frac{x_\Lambda}{k^\CC_{eq}}\,,
\end{equation}
and the last factor can be obtained upon differentiation of \eqref{eq:keqRVM} with respect to $\nu$,
\begin{equation}\label{eq:T3}
\frac{\partial k_{eq}}{\partial\nu}\bigg\rvert_{\nu=0}=k^\CC_{eq}\left[-\frac{7}{2}+3\ln\left(\frac{\Omega_r}{\Omega_m}\right)\right]\,.
\end{equation}
Introducing \eqref{eq:T1}, \eqref{eq:T2} and \eqref{eq:T3} in \eqref{eq:3terms} we finally obtain:
\begin{equation}\label{eq:DeltaT}
\Delta_T (x\gg 1) =- 100\nu\left[7+6\ln\left(\frac{\Omega_m}{\Omega_r}\right)\right]+\mathcal{O}(\nu^2)\,.
\end{equation}
By using in the above formula the values of the RVM parameters presented in Table 1, we see that the asymptotic relative difference between the RVM and the $\Lambda$CDM transfer functions is constant and attains $-8.8\%$. This is precisely the number that we get for the asymptotic value of $\Delta_T$ in our numerical results (cf. the left plot of Fig. 6). Of course, \eqref{eq:DeltaT} might still not allow us to directly infer the ultimate correction induced by $\nu$ on the value of $\sigma_8$, since there are other contributions to consider. To ease the discussion let us write symbolically Eq.\,(\ref{eq:s88generalNN}) in the form $\sigma_8=\delta_m\sqrt{I}$, where $I$ is the integral over the wave number $k$ involved in that equation. Obviously, the transfer function is only part of the integrand of $I$, and to assess the relative correction on $\sigma_8$ we have to evaluate the relative corrections $\Delta_{\delta_m}$ and $\Delta_{\sqrt{I}}$ on each one of the factors. Let us therefore study this issue more carefully. In the right plot of Fig. 6 we show the shape of the relevant function in the integrand of $I$, i.e. $k^{n_s+2} T^{2}(k) W^2(kR_{8})$. It is clear from the plot that for wave numbers $k\gtrsim 0.5\,{\rm Mpc}^{-1}$ the integrand is very suppressed, whereas it is much more sizeable for a range of smaller wave numbers where it reaches a maximum, specifically in the range $0.007\,{\rm Mpc}^{-1}\lesssim k \lesssim 0.3\,{\rm Mpc}^{-1}$. This range has been marked off with red vertical dashed lines in both plots of Fig. 6 to facilitate the reading of the results. According to the left plot of Fig.\,6 and the aforesaid range of relevant wave numbers, we expect $\Delta_{\sqrt{I}}$ to be around $-5.5\%$. Next we have to add to it the numerical contribution from the density contrast, i.e. $\Delta_{\delta_m}$ (cf. the lower-left plot of Fig. 7), so as to obtain the net effect $\Delta_{\sigma_8}$. Finally, the total relative correction undergone by $f(z)\sigma_8(z)$ induced by the presence of the vacuum parameter $\nu$ is given by
\begin{equation}\label{eq:Deltafsigma8Total}
\begin{array}{ll}
\Delta_{f\sigma_8}(z)& = \Delta_f(z)+\Delta_{\sigma_8}(z) \\
& =\Delta_f(z)+\Delta_{\delta_m}(z)+\Delta_{\sqrt{I}}\,,
\end{array}
\end{equation}
where $\Delta_f$ (computed numerically in the lower-right plot of Fig. 7) is the corresponding contribution from the growth rate. The total percentage correction (\ref{eq:Deltafsigma8Total}) is shown in the upper-right plot of the same figure.
\begin{figure}
\includegraphics[scale=0.65]{Pk}
\caption{{\it Upper plot:} The present linear power spectrum of matter perturbations $P(k)$ obtained for the $\Lambda$CDM and the RVM model; {\it Lower plot:} The relative difference between the two power spectra expressed in \%.
\label{fig:P(k)}}
\end{figure}
Using this formula, together with our theoretical estimation $\Delta_{\sqrt{I}}\approx -5.5\%$, which is redshift independent, and the numerical results for $\Delta_{\delta_m}(z)$ and $\Delta_f(z)$ shown in the lower plots of Fig. 7, we can check that we recover the total relative difference $\Delta_{f\sigma_8}(z)$ shown in the upper-right plot of the same figure. For instance, for $z=0$ we find:
\begin{equation}
\Delta_{f\sigma_8}(0)\approx -0.8\%+0\%-5.5\%=-6.3\%\,,
\end{equation}
and for $z=0.8$:
\begin{equation}
\Delta_{f\sigma_8}(0.8)\approx -0.55\%+0.3\%-5.5\%=-5.75\%\,,
\end{equation}
which match almost perfectly with the values of the upper-right plot of Fig. 7. Let us note that the result obtained here is not the full correction predicted by the RVM, which is around $-8\%$ (cf. Fig. 4, right plot), because in this exercise the parameters of the $\CC$CDM are also fixed at the central values of the RVM, but with vanishing $\nu$.
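As a quick numerical cross-check of \eqref{eq:DeltaT} and \eqref{eq:Deltafsigma8Total}, the short Python sketch below evaluates the asymptotic $\Delta_T$ and assembles $\Delta_{f\sigma_8}(z)$ at $z=0$ and $z=0.8$. The values of $\nu$, $\Omega_m$ and $\Omega_r$, as well as the plot-derived inputs $\Delta_{\sqrt{I}}$, $\Delta_{\delta_m}$ and $\Delta_f$, are illustrative assumptions rather than the actual fitting outputs of Table 1.
\begin{verbatim}
import math

# Illustrative parameter values (assumed, not the Table 1 best fit):
nu      = 0.00158    # RVM vacuum parameter
Omega_m = 0.304      # matter density parameter
Omega_r = 9.2e-5     # radiation density parameter (photons + neutrinos)

# Asymptotic relative difference of the transfer functions, Eq. (eq:DeltaT);
# the factor of 100 already expresses the result in percent:
Delta_T = -100.0 * nu * (7.0 + 6.0 * math.log(Omega_m / Omega_r))
print("Delta_T(x >> 1) = %.1f %%" % Delta_T)            # ~ -8.8 %

# Decomposition of Eq. (eq:Deltafsigma8Total), all entries in percent;
# Delta_f and Delta_dm are read off the lower plots of Fig. 7:
Delta_sqrtI = -5.5                                      # redshift independent
for z, Delta_f, Delta_dm in [(0.0, -0.8, 0.0), (0.8, -0.55, 0.3)]:
    total = Delta_f + Delta_dm + Delta_sqrtI
    print("Delta_fsigma8(z=%.1f) = %.2f %%" % (z, total))
\end{verbatim}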
Therefore we can say that we have been able to identify the origin of the lowering of the $f(z)\sigma_8(z)$ curve in the RVM. Most of the effect ($\sim 75\%$) is driven by $\nu$. More concretely, $\sim 65\%$ of the induced changes are due to the negative shift in the value of $k_{eq}$, which directly translates into a negative shift of the location of the power spectrum's maximum (cf. formula \eqref{eq:keqRVM} and Fig. 8), see also the analysis of (Perico \& Tamayo 2017) and (Geng, Lee \& Yin 2017). Although the effect of $\nu$ in the evolution of the density contrast at subhorizon scales is not exceedingly large (the corrections are of order $\nu$, as expected), the RVM parameter $\nu$ is capable of changing in a significant way the time at which the different modes reentered the Hubble horizon with respect to the concordance case. Although $\nu$ is small, the scale factor and wave number at the equality point between matter and radiation epochs become modified in a non-negligible way, see Eqs.\,(\ref{eq:aeqRVM}) and (\ref{eq:keqRVM}). As a consequence, the LSS formation is substantially suppressed at small scales with respect to the $\Lambda$CDM and in this way the tension with the $f(z)\sigma_8(z)$ data loosens, mainly due to an important decrease of the $\sigma_8$ parameter. Let us stress that such a feature cannot be appraised if one restricts the analysis mostly to the CMB without sufficient LSS input (Heavens et al. 2017; Perico \& Tamayo 2017).
Contrary to the RVM, the XCDM can only lower $f(z)\sigma_8(z)$ roughly by $2-4\%$ at most with respect to the $\Lambda$CDM (cf. Fig. 4). In this parametrization we have $\rho_X(a)=\rho_{X0}a^{-3(1+w)}$, with $\rho_{X0}=\rLo$ and the EoS parameter satisfying $w\ne -1$.
We can perfectly explain why the XCDM parametrization cannot match the very good description of the LSS data by the RVM. The reason is pretty simple. A DE parameter $w$ close to (but not equal to) $-1$ cannot change the transfer function, just because the equality time between matter and radiation energy densities is not modified (matter and radiation are covariantly self-conserved) and the contribution of DE to the critical energy density at $a_{eq}$ is completely negligible. Thus $k_{eq}$ is not sensitive to $w$ and $T(k)$ remains unaltered. This leads us to conclude that the only modifications induced by $w$ on the $f(z)\sigma_8(z)$ observable can be due to late-time physics, mainly through the changes in the density contrast and the growth rate caused by the late-time domination of the DE over the non-relativistic matter. These corrections are of a few percent and cannot give rise to the desired level of lowering of the $f(z)\sigma_8(z)$ curve. Taking a look at the reconstruction plot of Fig. 9, which is analogous to the left plot of Fig. 5, we can observe that for the XCDM the final effect is mainly due to the deviation of $w$ from $-1$. We also find that the compensation between $\Omega_m$ and $h$ discussed before for the RVM also occurs here.
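The insensitivity of $k_{eq}$ to $w$ can be made quantitative with a two-line estimate; in the sketch below (the fiducial density parameters are assumed values) the ratio $\rho_X/\rho_m$ at $a_{eq}$ stays below $\sim 10^{-9}$ for any $w$ near $-1$, so the equality point and hence $T(k)$ are indeed blind to $w$.
\begin{verbatim}
# rho_X(a)/rho_m(a) = (Omega_X/Omega_m) * a^(-3w) for the XCDM;
# fiducial density parameters below are assumptions for illustration.
Omega_m, Omega_r = 0.30, 9.2e-5
Omega_X = 1.0 - Omega_m              # flatness
a_eq = Omega_r / Omega_m             # matter-radiation equality, ~3e-4

for w in (-1.00, -0.95, -0.90):
    ratio = (Omega_X / Omega_m) * a_eq ** (-3.0 * w)
    print("w = %+.2f : rho_X/rho_m at a_eq ~ %.1e" % (w, ratio))
\end{verbatim}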
\section{Dynamical dark energy: LSS data versus weak-lensing data}
The improvement in the description of the LSS data in the XCDM, and more conspicuously in the RVM, concerns not only the $f(z)\sigma_8(z)$ data, but also some weak gravitational lensing constraints on the conventional quantity $S_8\equiv \sigma_8(\Omega_m/0.3)^{0.5}$ that one can find in the literature (see e.g. Heymans et al. 2013; Hildebrandt et al. 2017; Joudaki et al. 2018). The impact of the DDE is crystal-clear from the left plot of Fig. 10, where we show the contour lines in the ($\Omega_m,\sigma_8)$ plane obtained from the very same datasets used in the fitting analyses presented in Table 1 for the $\CC$CDM, the XCDM and the RVM, together with the observational constraints in the same plane provided by: (i) DES Collab. 2017, extracted from weak gravitational lensing tomography, $S_8=0.783^{+0.021}_{-0.025}$; (ii) Joudaki et al. 2018, $S_8=0.742\pm 0.035$, obtained by the KiDS-450, 2dFLenS and BOSS collaborations from a joint analysis of weak gravitational lensing tomography and overlapping redshift-space galaxy clustering; and (iii) the KiDS-450 collaboration (K\"ohlinger et al. 2017), obtained from weak gravitational lensing tomography, $S_8=0.651\pm 0.058$. The last two data points on $S_8$ tend to favor lower values of $\sigma_8$. Very similar results have been found using only weak gravitational lensing tomography data by the KiDS-450 collaboration (Hildebrandt et al. 2017), $S_8=0.745\pm 0.039$, and also by CFHTLenS (Heymans et al. 2013), $\sigma_8(\Omega_m/0.27)^{0.46}=0.770\pm 0.040$. In contrast, the point provided by DES is more resonant with Planck, but due to its large uncertainty it is still fully compatible with Joudaki et al. 2018; Hildebrandt et al. 2017; and Heymans et al. 2013. Our discussion on the ability of the models under study to describe the gravitational weak lensing data basically remains unchanged if we use the constraints from Heymans et al. 2013 or Hildebrandt et al. 2017 instead of that of Joudaki et al. 2018.
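For concreteness, any best-fit point in the $(\Omega_m,\sigma_8)$ plane can be mapped to $S_8$ and confronted with the quoted bands; a minimal sketch follows, where the two $(\Omega_m,\sigma_8)$ points are assumed purely for illustration and do not represent the actual fits.
\begin{verbatim}
# S8 = sigma8 * (Omega_m/0.3)^0.5, compared with the weak-lensing bands
# quoted in the text (DES error symmetrized). Input points are assumed.
points = {"LCDM-like": (0.31, 0.80), "RVM-like": (0.30, 0.74)}
bands = {"DES 2017": (0.783, 0.025),
         "KiDS-450+2dFLenS+BOSS": (0.742, 0.035),
         "KiDS-450 (Kohlinger et al.)": (0.651, 0.058)}

for name, (Om, s8) in points.items():
    S8 = s8 * (Om / 0.3) ** 0.5
    print("%s: S8 = %.3f" % (name, S8))
    for ref, (c, err) in bands.items():
        print("   vs %s: %+.1f sigma" % (ref, (S8 - c) / err))
\end{verbatim}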
\begin{figure}
\includegraphics[scale=0.45]{XCDMcons}
\caption{Reconstruction of the $f(z)\sigma_8(z)$ curve of the XCDM from the $\Lambda$CDM one, following the same procedure utilized in Fig. 5 (left).
\label{fig:XCDMcons}}
\end{figure}
In Fig. 10 we can better assess the impact of the weak-lensing data. Two comments related to the results shown in that figure are in order: (i) the $\Lambda$CDM is compatible with the $S_8$ data point of Joudaki et al. 2018 at $1\sigma$ only, so the tension of the $\CC$CDM with $S_8$ is actually very small. Despite this, it is intriguing to observe that the RVM achieves an outstanding level of concordance with this data point. It actually removes completely the existing $1\sigma$ tension between it and the concordance model. If that is not enough, the RVM best-fit value is almost centered in the band delimited by the dashed curves in purple that corresponds to the preferred values of (Joudaki et al. 2018); and (ii) the data points from Heymans et al. 2013; Hildebrandt et al. 2017; and Joudaki et al. 2018 are also compatible at $1\sigma$ with the constraints obtained by other weak lensing studies such as those by (DES Collab. 2017) or (K\"ohlinger et al. 2017). These have been drawn in green and orange, respectively. The current variety of data points on $S_8$ unavoidably casts a shadow of doubt about the level of confidence that we can ultimately have on the weak lensing constraints in general, since it seems that there exists a non-negligible degree of dispersion of alternative constraints on $S_8$ around the combined value found by Joudaki et al. 2018 from KiDS-450+2dFLenS+BOSS. The present situation indicates that the constraints that we can derive from the LSS data points $f(z)\sigma_8(z)$ seem to be in very good accordance with the weak lensing constraints on $S_8$ furnished in the works by Heymans et al. 2013; Hildebrandt et al. 2017; and Joudaki et al. 2018, which is not too surprising since the latter favor lower values of $S_8$ in the range $\sim 0.730-0.750$, rather than the typical values found by the DES Collab. 2017 (or K\"ohlinger et al. 2017), which tend to favor values of $S_8$ that are larger (respectively, lower) than those inferred from the direct $f(z)\sigma_8(z)$ data.
\begin{figure*}
\includegraphics[scale=0.6]{CLcombix3}
\caption{{\it Left plot:} Likelihood contour lines in the ($\Omega_m,\sigma_8)$ plane for the values $-2\ln \mathcal{L}/\mathcal{L}_{\rm max}$= $2.30$, $6.18$, $11.81$, and $19.33$ (corresponding to $1\sigma$, $2\sigma$, $3\sigma$, and $4\sigma$ c.l.) obtained from the very same fitting analyses presented in Table 1 for the $\Lambda$CDM, the XCDM and the RVM, together with the observational constraints in the same plane provided by: (i) (DES Collab. 2017), extracted from weak gravitational lensing tomography, $S_8=0.783^{+0.021}_{-0.025}$ (green curves); (ii) combined values of KiDS-450+2dFLenS+BOSS collaborations, extracted by Joudaki et al. 2018 from weak gravitational lensing tomography and overlapping redshift-space galaxy clustering, $S_8=0.742\pm 0.035$ (purple curves); (iii) KiDS-450 collaboration (K\"ohlinger et al. 2017), obtained from weak gravitational lensing tomography, $S_8=0.651\pm 0.058$ (orange curves). We show the allowed $1\sigma$ bands for the three data points on $S_8$ used; {\it Right plot:} Contour lines up to $4\sigma$ c.l. in the ($H_0,\sigma_8)$ plane for the same three models under study.
\label{fig:CLcombi}}
\end{figure*}
It may be appropriate to single out at this point the recent and interesting works by Lin \& Ishak 2017a,b, in which the authors run a so-called (dis)cordance test based on a proposed index of inconsistency (IOI) tailored to finding possible inconsistencies/tensions between two or more data sets in a systematic and efficient way. For instance, it is well-known that there is a persistent discrepancy between the Planck CMB measurements of $H_0$ and the local measurements based on distance ladder (Riess et al. 2016, 2018b). At the same time, if one compares what is inferred from Planck 2015 best-fit values, the LSS/RSD measurements generally assign smaller power to the large scale structure data parametrized in terms of the weighted linear growth rate $f(z)\sigma_8(z)$. This feature is of course nothing but the $\sigma_8$-tension we have been addressing in this paper. It is therefore natural to run the IOI test for the different kinds of $H_0$ measurements and also to study the consistency between the $H_0$ and the growth data. For example, upon comparing the constraints on $H_0$ from different methods Lin \& Ishak 2017b observe a decrease of the IOI when the local $H_0$-measurement is removed. From this fact they conclude that the local measurement of $H_0$ is an outlier compared to the others, which would favor a systematics-based explanation. This situation is compatible with the observed improvement in the statistical quality of the fitting analysis by Sol\`a, G\'omez-Valent \& de Cruz P\'erez 2017c,d when the local $H_0$-measurement is removed from the overall fit of the data using the RVM and the $\CC$CDM. In this respect, let us mention that a recent model-independent analysis of data on cosmic chronometers and an updated compilation of SNIa seem to favor the lower range of $H_0$ (G\'omez-Valent \& Amendola 2018), which would be more along the lines of the results found here, which favor a theoretical interpretation of the observed $\sigma_8$ and $H_0$ tensions in terms of vacuum dynamics and in general of DDE (cf. Fig. 10).
The mentioned authors of the IOI test actually apply it to two large sets of current observational data: the geometry data (e.g. SNIa, BAO etc.) versus the growth data (e.g. LSS/RSD, weak-lensing, CMB-lensing etc.). They find that a persistent inconsistency is present between the two sorts of data sets. Despite encountering such inconsistency, Lin \& Ishak 2017a,b emphasize that if they focus on the LSS data sets (which include e.g. the WiggleZ power spectrum, SDSS redshift space distortion, CFHTLenS weak lensing, CMB lensing, and cluster count from SZ effect) there is a global consistency among them. They confirm that these data sets are consistent with one another and also when all combined. In contrast, they find a persistent moderate inconsistency between Planck and individual or combined LSS probes. For the time being this cannot be fixed within the $\CC$CDM. However, if we combine the fact that the RVM fit of $H_0$ (Sol\`a, G\'omez-Valent \& de Cruz P\'erez 2017c,d) is in good accordance with the Planck value, and the result obtained here showing that the RVM is also capable of relaxing the $\sigma_8$-tension (in contrast to the $\CC$CDM), it seems judicious to conclude that the running vacuum model can furnish a successful joint description of the observables $H_0$ and $\sigma_8$. For this reason when we consider the various sources of weak-lensing data discussed above we tend to prefer those that are more in accordance with the direct LSS growth data, such as e.g. the weak-lensing data from Joudaki et al. 2018, rather than the weak-lensing data from DES Collab. 2017, which are more in accordance with the (tensioned) $\sigma_8$-values predicted by Planck.
To summarize, despite the fact that gravitational lensing statistics have long been considered a possible probe for the EoS of the DE (Cooray \& Huterer 1999) and hence a useful test for DDE, the bare truth is that at present the wealth of growth data collected from the direct $f(z)\sigma_8(z)$ measurements at different redshifts seems to encode much more accurate information on the possible dynamical nature of the DE. While the lensing data are compatible with the growth data, the dispersion of the lensing measurements is too large at present to provide a firm handle on possible DDE observations. Thus, in current practice a putative DDE signal from these sources becomes considerably blurred, which is in stark contrast with the situation involving direct growth data points. In fact, with the help of these data (and the remaining observational sources) we have shown here that even a simple XCDM parametrization enables us to extract a DDE signal at near $3\sigma$ c.l. Interestingly, the signal can be further enhanced within the RVM up to $3.8\sigma$. At the end of the day the possibility of having running vacuum proves particularly sensitive to the features of the growth data and this fact produces a remarkable improvement of the overall fit quality of the RVM as compared to the $\CC$CDM.
\section{Conclusions}
It is well known that the $\CC$CDM harbors important theoretical conundrums, but is also plagued with some persistent phenomenological problems of a very practical nature. To put it in a nutshell, we can say that there is a significant tension between the geometry data and the growth data. Two representative observables illustrating this tension are the disparate results for $H_0$ obtained from local and CMB measurements, and the excess of power associated with the large scale structure (LSS) formation data, which leads to the $\sigma_8$-tension. These problems cannot be currently cured within the concordance model with rigid cosmological term $\CC=$const. In this work we have considered a possible way to relax these tensions by admitting the possibility of dynamical dark energy (DDE) models. Most particularly we have focused on the running vacuum model (RVM), although we have studied the problem also within the simple XCDM parametrization of the DDE. In order to tackle these tensions in a consistent way we have first of all undertaken a careful study of the matter density perturbations in the context of dynamical vacuum models (DVMs). In these models the vacuum fluctuations have to be considered as well, in principle, but we have explicitly shown that they are negligible at all subhorizon scales that are relevant for the study of the large scale structure formation data. We have considered possible issues of gauge dependence and we have presented the results both in the conformal Newtonian gauge and in the synchronous gauge. The outcome is that the effective matter perturbation equation obeyed by the density contrast for the DVMs is the same in both gauges and is free both from scale-dependence and from significant vacuum fluctuation effects. This result is valid at scales below the horizon and is similar to the $\CC$CDM case. The effective equation for DVMs, however, is different from the standard one in the $\CC$CDM and reduces to it in the limit of constant vacuum energy density.
Armed with the previous theoretical results we have faced the practical study and possible resolution of the mentioned tensions between theory and observation in the context of the RVM and compared with the XCDM. While in previous works we had addressed the $H_0$ tension between the Planck CMB data and the local measurements (Sol\`a, G\'omez-Valent \& de Cruz P\'erez 2017d), here we have concentrated on the growth data, namely the observations that are obtained by means of the direct measurements of the weighted growth rate $f(z)\sigma_8(z)$. We have noted that many studies essentially incorporate the LSS observations only through the gravitational weak-lensing data parametrized in terms of $S_8$ and we have signaled that this practice may result in an insufficient account of the LSS data. We find that the pictures achieved in terms of the $f(z)\sigma_8(z)$ data and the constraints on $S_8$ from weak-lensing data point consistently towards the same direction, but are not equivalent. The current $f(z)\sigma_8(z)$ data turn out to be more restrictive than the $S_8$ data insofar as the monitoring of a possible DDE signal in the observations is concerned. In addition, the latter are not able to improve in a significant way the constraints on the cosmological parameters which we had previously obtained from a rich string of SNIa+$H(z)$+CMB+BAO+$f(z)\sigma_8(z)$ observables (Sol\`a, G\'omez-Valent \& de Cruz P\'erez 2017d), as we have explicitly checked here. At the end of the day the weak lensing data can be regarded as a useful complementary source of LSS information, which we find to be compatible with the $f(z)\sigma_8(z)$ data, but the former cannot, at present, be a replacement for the latter. From our study we conclude that for the time being only the direct $f(z)\sigma_8(z)$ observations offer the possibility of extracting the signature of vacuum/DE dynamics when combined with CMB and BAO data, whereas if the $S_8$ data are utilized as a substitute for $f(z)\sigma_8(z)$ in the overall fit they yield a much more blurred description of the DDE signal. In this work we have illustrated the extraction of such a possible signal using the RVM and the XCDM parametrization. For the RVM the $\sigma_8$-tension with the LSS data becomes fully relaxed, whereas for the XCDM we observe a correct trend towards a further relaxation as compared to the $\CC$CDM, but the loosening of the tension is definitely weaker. We interpret these results as new signs of evidence of DDE in modern cosmological observations, along the lines of those that were first reported in the works by Sol\`a, G\'omez-Valent \& de Cruz P\'erez 2015, 2017a,b; Sol\`a, de Cruz P\'erez \& G\'omez-Valent 2018; and independently by Zhao et al. 2017.
\section{Acknowledgements}
We are partially supported by MINECO FPA2016-76005-C2-1-P, Consolider CSD2007-00042, 2017-SGR-929 (Generalitat de Catalunya) and MDM-2014-0369 (ICCUB). AGV wants to express his gratitude to the Institute of Theoretical Physics of the Ruprecht-Karls University of Heidelberg for the financial support and hospitality during part of the elaboration of this paper.
\section{Introduction}
The theoretical approach to the description of hard exclusive processes, which is called the light-cone
formalism (LC), is based on
the factorization theorem \cite{Lepage:1980fj, Chernyak:1983ej}. Within this theorem the amplitude of a hard exclusive process
can be separated into two parts. The first part is the production of partons at very small
distances, which can be treated within perturbative QCD. The second part is
the hadronization of the partons at large distances. For hard exclusive processes it can be
parameterized by process-independent distribution amplitudes (DAs).
The production of the charmonium meson $H$ in the process $e^+ e^- \to H+\gamma$ at B-factories
is the simplest example of a hard exclusive process. One can assume that the energy at
which B-factories operate is sufficiently large so that it is possible to apply
LC. Another approach to the calculation of the cross section
of this process is nonrelativistic QCD (NRQCD) \cite{Bodwin:1994jh}. This approach is based
on the assumption that the relative velocity of the quark-antiquark pair in charmonium is a small
parameter in which the amplitude of charmonium production can be expanded.
LC has two very important advantages in comparison to NRQCD. The first
advantage is that the LC formalism can be applied to light or heavy mesons if the DA
of the meson is known. From the NRQCD perspective, this means that LC resums the whole
series of relativistic corrections to the amplitude under study. For NRQCD relativistic
corrections are very important, especially for the production of excited charmonium
mesons. The second advantage is that within LC one can resum leading logarithmic
radiative corrections to the amplitude to all loops. The main disadvantage of
LC is that within this formalism it is rather difficult to control power corrections
to the amplitude.
Within NRQCD the process $e^+ e^- \to H+\gamma$ was considered in papers \cite{Chung:2008km, Li:2009ki, Sang:2009jc}.
In paper \cite{Chung:2008km} this process was considered at the leading-order approximation in the relative velocity
and the strong coupling constant. The authors of paper \cite{Li:2009ki} took into
account one-loop radiative corrections. In addition to the radiative corrections,
the first-order relativistic corrections to the process $e^+ e^- \to \eta_c+\gamma$
were considered in paper \cite{Sang:2009jc}.
The only process considered within LC is
$e^+ e^- \to \eta_c+\gamma$ \cite{Jia:2008ep, shifman}. The main drawback
of these papers is that the authors used a very simple model of the DA of the $\eta_c$ meson,
which does not take into account the relativistic motion in this meson.
Recently, the leading twist DAs of charmonia mesons have become the object of intensive
study
\cite{Bodwin:2006dm, Ma:2006hc, Braguta:2006wr, Braguta:2007fh,
Braguta:2007tq, Choi:2007ze, Bell:2008er, Braguta:2008qe, Hwang:2009cu}.
The study of these DAs allowed one to build some models for charmonium DAs,
which can be used in the calculation of different exclusive processes.
In this paper the leading twist processes $e^+ e^- \to H+\gamma$ will be considered.
Using helicity selection rules \cite{Chernyak:1977fk,Chernyak2,Chernyak3} it is not difficult to show that at the leading
twist accuracy the mesons with longitudinal polarization and the following quantum numbers $H={}^1S_0, {}^3P_0, {}^3P_1, {}^3P_2$ can be produced.
So, in this paper the following processes will be considered: $e^+ e^- \to H+\gamma, H=\eta_c, \eta_c',
\chi_{c0}, \chi_{c1}, \chi_{c2}$. To calculate the cross sections
of these processes the model of DAs proposed in papers
\cite{Braguta:2006wr, Braguta:2007fh, Braguta:2007tq, Braguta:2008qe} will be used.
This paper is organized as follows. In the next section the amplitudes of the
processes under consideration will be derived. Numerical results and the
discussion of these results will be given in the last section of this paper.
\section{The amplitude of the process $e^+ e^- \to H+\gamma$.}
In this section the leading twist approximation for the amplitude of the processes
$e^+ e^- \to H+\gamma, H=\eta_c, \eta_c', \chi_{c0}, \chi_{c1}, \chi_{c2}$ will be derived.
The diagrams that contribute to the processes at the leading order approximation
in the strong coupling constant are shown in Fig. \ref{fig1}.
\begin{figure}
\begin{centering}
\includegraphics[scale=0.8]{fig1.eps}
\par\end{centering}
\caption{The diagrams that contribute to the processes $e^+ e^- \to H+\gamma, H=\eta_c,
\eta_c', \chi_{c0}, \chi_{c1}, \chi_{c2}$ at the leading-order approximation
in the strong coupling constant.}
\label{fig1}
\end{figure}
As was noted in the introduction, selection rules tell us that at the leading
twist accuracy all produced mesons are longitudinally polarized. So, the
polarization vectors of these mesons are proportional to the momenta of these
mesons.
To calculate the amplitudes and the cross sections of the processes involved one needs
the expressions for the following matrix element of the electromagnetic current
$J_{\mu}(0)$: $~\langle H(p) \gamma(k)| J_{\mu}(0)|0 \rangle$. For the production
of the longitudinally polarized $\eta_c, \eta_c', \chi_{c1}$ mesons it can be parameterized as follows
\beq
\langle H(p) \gamma(k)| J_{\mu}(0)|0 \rangle = F_H ~e_{\mu \nu \alpha \beta} \epsilon^{\nu} p^{\alpha} k^{\beta},
\label{J1}
\eeq
where $\epsilon^{\nu}$ is the polarization vector of the final photon. It
causes no difficulties to find the expression for the
formfactor $F_{H=\eta_c, \eta_c', \chi_{c1}}$ at the leading
twist approximation
\beq
F_{\eta_c, \eta_c', \chi_{c1}} = \frac {16 \pi \alpha Q_c^2 f_{\eta_c, \eta_c', \chi_{c1}}} {s}
\int_{-1}^{1} d \xi \frac {\phi_{\eta_c, \eta_c', \chi_{c1}}( \xi, \mu)} {(1-\xi^2)},
\label{HJ1}
\eeq
where $Q_c$ is the charge of $c$-quark, the definitions of the constants $f_{\eta_c, \eta_c', \chi_{c1}}$
and the DAs $\phi_{\eta_c, \eta_c', \chi_{c1}}( \xi, \mu)$ can be found in the Appendix,
$\xi$ is the difference of the fractions of the meson momentum carried by the quark and the antiquark,
$\mu$ is the characteristic scale of the process, $s=(p+k)^2$.
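The convolution in \eqref{HJ1} is straightforward to evaluate numerically once a model of the DA is specified; the sketch below (assuming SciPy is available) uses the asymptotic DA $\phi_{as}(\xi)=\frac34(1-\xi^2)$ purely as a stand-in for the charmonium models, which are narrower and give a value closer to the NRQCD limit of unity.
\begin{verbatim}
from scipy.integrate import quad

# Stand-in leading-twist DA, normalized so that int_{-1}^{1} phi dxi = 1.
def phi_asymptotic(xi):
    return 0.75 * (1.0 - xi**2)

# The integral entering the formfactor of Eq. (HJ1); the (1-xi^2) factor
# of the DA cancels the endpoint behavior of the hard kernel.
I_phi, _ = quad(lambda xi: phi_asymptotic(xi) / (1.0 - xi**2), -1.0, 1.0)
print("int dxi phi(xi)/(1-xi^2) = %.3f" % I_phi)   # = 3/2 for this DA
# For the infinitely narrow DA, phi(xi) = delta(xi), the integral equals 1.
\end{verbatim}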
The expression for the production amplitude
of the longitudinally polarized $ \chi_{c0}, \chi_{c2}$ mesons can be written
in the following form
\beq
\langle H(p) \gamma(k)| J_{\mu}(0)|0 \rangle = F_H~\biggl (
(\epsilon q) k_{\mu} - (kq) \epsilon_{\mu}
\biggr ),
\label{J2}
\eeq
where $q=k+p$. The other designations are the same as were used in equation (\ref{J1}).
The expression for the formfactor $F_{H=\chi_{c0}, \chi_{c2}}$ has the form
\beq
F_{\chi_{c0}, \chi_{c2} } = \frac {16 \pi \alpha Q_c^2 f_{ \chi_{c0}, \chi_{c2} }} {s}
\int_{-1}^{1} d \xi \frac {\xi ~\phi_{\chi_{c0}, \chi_{c2}}( \xi, \mu)} {(1-\xi^2)}\,.
\label{HJ2}
\eeq
The constants $f_{\chi_{c0}, \chi_{c2}}$
and the DAs $\phi_{\chi_{c0}, \chi_{c2}} ( \xi, \mu)$ can be found in the Appendix.
The cross section of the processes can be written in the following form
\beq
\sigma_{H}= \frac {\alpha} {24}
F^2_{H} \biggl ( 1- \frac {M_H^2} {s} \biggr )\,.
\label{cr}
\eeq
It should be noted that the matrix elements of the processes under study were taken at the
leading order approximation in ${M_H^2}/{s}$. The factor
$1- {M_H^2}/ {s}$ in the cross section appears due to the phase space of the final particles.
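As an illustration of how \eqref{HJ1} and \eqref{cr} combine into a number, the following sketch evaluates $\sigma_{\eta_c}$ at the B-factory energy $\sqrt{s}=10.6$ GeV; the DA integral and the $\eta_c$ mass are set to assumed illustrative values, and the conversion $1\,{\rm GeV}^{-2}\approx 3.894\times 10^{11}\,$fb restores physical units.
\begin{verbatim}
import math

alpha  = 1.0 / 137.036      # fine-structure constant
Qc2    = (2.0 / 3.0) ** 2   # charm-quark charge squared
f_eta  = 0.373              # f_{eta_c} in GeV, Eq. (const_values)
M_eta  = 2.98               # eta_c mass in GeV (assumed PDG-like value)
s      = 10.6 ** 2          # c.m. energy squared in GeV^2
I_phi  = 1.15               # DA integral of Eq. (HJ1); assumed value
GEV2_TO_FB = 3.894e11       # (hbar c)^2 conversion: 1 GeV^-2 in fb

F = 16.0 * math.pi * alpha * Qc2 * f_eta / s * I_phi   # Eq. (HJ1), GeV^-1
sigma = alpha / 24.0 * F**2 * (1.0 - M_eta**2 / s)     # Eq. (cr), GeV^-2
print("sigma(e+e- -> eta_c gamma) ~ %.0f fb" % (sigma * GEV2_TO_FB))
# O(40) fb for these inputs; the result scales as I_phi squared.
\end{verbatim}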
The expression for the formfactor $F_H$ depends on the DA $\phi_H(\xi, \mu)$ of the charmonium
meson. If the infinitely narrow distribution amplitudes
$\phi_{\eta_c, \eta_c', \chi_{c1}}(\xi, \mu)=\delta(\xi),~~
\phi_{\chi_{c0}, \chi_{c2} }(\xi, \mu)=-\delta'(\xi) $ are substituted
into formulas (\ref{HJ1}), (\ref{HJ2}), then the NRQCD results for the amplitude will be reproduced
\cite{Chung:2008km}.
If real distribution amplitudes $\phi_H(\xi, \mu)$ are taken at the scale
$\mu \sim m_c$, then formulas (\ref{HJ1}), (\ref{HJ2}) will resum the relativistic corrections
to the cross section up to $O(1/s^3)$ terms.
To resum the relativistic and leading logarithmic radiative corrections simultaneously
one must take the distribution amplitudes $\phi_H(\xi, \mu)$ at the
characteristic scale of the process $\mu \sim \sqrt s$. The calculation
of the cross sections will be done at the scale $\mu=\sqrt s /2$.
It is interesting to note that it is possible to find the leading logarithmic
radiative corrections at the one-loop level using formulas (\ref{HJ1}), (\ref{HJ2}) without
the calculation of one-loop diagrams. Applying the approach
proposed in paper \cite{Jia:2008ep} one gets the results
\beq
F_{\eta_c, \eta_c'}=\frac {16 \pi \alpha Q_c^2 } {s} \sqrt{ \frac {\langle O \rangle_S} {m_c} }
\biggl (1+ C_f \frac {\alpha_s( s ) } {4 \pi} \log {\biggl (\frac {\mu^2} {\mu_0^2}} \biggr )
\bigl ( 3-2 \log 2 \bigr ) \biggr ), \nonumber \\
F_{\chi_{c0}}=\frac {16 \pi \alpha Q_c^2 } {s} \sqrt{ \frac {\langle O \rangle_P} {3 m_c^3} }
\biggl (1+ C_f \frac {\alpha_s( s ) } {4 \pi} \log {\biggl (\frac {\mu^2} {\mu_0^2}} \biggr )
\bigl ( 1- 2 \log 2 \bigr ) \biggr ), \nonumber \\
F_{\chi_{c1}}=\frac {16 \pi \alpha Q_c^2 } {s} \sqrt{ \frac {2 \langle O \rangle_P} {m_c^3} }
\biggl (1+ C_f \frac {\alpha_s( s ) } {4 \pi} \log {\biggl (\frac {\mu^2} {\mu_0^2}} \biggr )
\bigl ( 3-2 \log 2 \bigr ) \biggr ), \nonumber \\
F_{\chi_{c2}}=\frac {16 \pi \alpha Q_c^2 } {s} \sqrt{ \frac {2 \langle O \rangle_P} {3 m_c^3} }
\biggl (1+ C_f \frac {\alpha_s( s ) } {4 \pi} \log {\biggl (\frac {\mu^2} {\mu_0^2}} \biggr )
\bigl ( 1- 2 \log 2 \bigr ) \biggr ),
\label{LL}
\eeq
where $m_c$ is the pole mass of the $c$-quark, the definition of the NRQCD matrix elements
$\langle O \rangle_S, \langle O \rangle_P$ can be found in paper \cite{Bodwin:1994jh}, and $C_f=4/3$.
Note that in the above equations it was assumed that renormalization group evolution
of the DAs begins at the scale $\mu_0 \sim m_c$ and ends at the scale $\mu \sim \sqrt s$.
At the scale $\mu_0 \sim m_c$ the DAs are $\phi_{\eta_c, \eta_c', \chi_{c1}}(\xi, \mu)=\delta(\xi),~~
\phi_{\chi_{c0}, \chi_{c2} }(\xi, \mu)=-\delta'(\xi) $. Leading order NRQCD results
(\ref{LL}) coincide with the results obtained in paper \cite{Chung:2008km}.
The one-loop leading logarithmic radiative corrections for $F_{\eta_c}$ coincide with the result
of paper \cite{Jia:2008ep}, and those for
$F_{\eta_c}, F_{\chi_{c0}}, F_{\chi_{c1}}, F_{\chi_{c2}}$ coincide with the results
of paper \cite{Sang:2009jc}.
The result (\ref{HJ1}) for the production of the pseudoscalar mesons $\eta_c, \eta_c'$
can be improved since there exists an expression for the one-loop radiative correction
to this amplitude \cite{Braaten:1982yp, Kadantseva:1985kb}. This expression can be written as follows \cite{Braaten:1982yp}
\beq
F_{\eta_c, \eta_c'} = \frac {16 \pi \alpha Q_c^2 f_{\eta_c, \eta_c'}} {s}
\int_{-1}^{1} d \xi \frac {\phi_{\eta_c, \eta_c'}( \xi, \mu)} {(1+\xi)}
\biggl [
1+ C_f \frac {\alpha_s (s)} {4 \pi} \biggl ( &&
\log^2 {\biggl ( \frac {1+\xi} 2 \biggr ) }
- \frac {1+\xi} {1-\xi} \log { \biggl ( \frac {1+\xi} 2 \biggr ) } \nonumber \\
&& - 9 +
\biggl (
3+2 \log {\biggl ( \frac {1+\xi} 2 \biggl ) }
\biggr ) \log {\biggl ( \frac s {\mu^2} \biggl ) }
\biggr )
\biggr ]\,.
\label{rad}
\eeq
In the above expression it is assumed that the DAs $\phi_{\eta_c, \eta_c'}$
are even in $\xi$.
It is instructive to take the limit of zero relative velocity of the quark-antiquark pair
and compare it to the NRQCD result \cite{Sang:2009jc}. At the leading-order approximation
in the relative velocity, expression (\ref{rad}) becomes
\beq
F_{\eta_c, \eta_c'}=\frac {16 \pi \alpha Q_c^2 } {s} \sqrt{ \frac {\langle O \rangle_S} {m_c} }
\biggl [1+ && C_f \frac {\alpha_s( s ) } {4 \pi} \log {\biggl (\frac {\mu^2} {\mu_0^2}} \biggr )
\biggl ( 3-2 \log 2 \biggr )
\nonumber \\ && + C_f \frac {\alpha_s( s ) } {4 \pi}
\biggl (
\log^2 2 +\log 2 - 9 + \log {\biggl (\frac {s} {\mu^2}} \biggr )
\bigl ( 3-2 \log 2 \bigr )
\biggr )
\biggr ].
\label{nrrad}
\eeq
The second term in equation (\ref{nrrad}) is due to the renormalization
group resummation of the leading logarithms in the DA. The last term is
the one-loop radiative correction to the hard part of the amplitude.
The factorization scale $\mu$ separates the long-distance dynamics of
the charmonium meson, parameterized by the DA, from the short-distance effects
contained in the hard part of the amplitude. It is seen that
the $\mu$ dependence cancels in the final answer, as it should.
The authors of paper \cite{Sang:2009jc} obtained the following
NRQCD expression for equation (\ref{nrrad})
\beq
F_{\eta_c, \eta_c'}=\frac {16 \pi \alpha Q_c^2 } {s} \sqrt{ \frac {\langle O \rangle_S} {m_c} }
\biggl [1+ C_f \frac {\alpha_s( s ) } {4 \pi}
\biggl (
\log^2 2 +3 \log 2 - 9 - \frac {\pi^2} 3+ \log {\biggl (\frac {s} {m_c^2}} \biggr )
\bigl ( 3-2 \log 2 \bigr )
\biggr )
\biggr ].
\label{nr1rad}
\eeq
It is seen that this expression is very similar to (\ref{nrrad}). Moreover,
one has a free parameter $\mu_0$ in expression (\ref{nrrad}), which
can be used to adjust (\ref{nrrad}) to (\ref{nr1rad}). However, expressions
(\ref{nrrad}) and (\ref{nr1rad}) are still slightly different.
\section{Numerical results and discussion.}
To obtain numerical results for the cross sections of the processes
under study the following numerical parameters are needed.
In this paper we are going to use the models of the charmonium DAs proposed in papers
\cite{Braguta:2006wr, Braguta:2007fh, Braguta:2007tq, Braguta:2008qe}.
For the strong coupling constant we use the one-loop expression
\begin{eqnarray*}
\alpha_s(\mu) &=& \frac{4\pi}{b_0 \ln(\mu^2/\Lambda_\mathrm{QCD}^2)},
\end{eqnarray*}
where $b_0=25/3$ and $\Lambda_\mathrm{QCD}=0.2$ GeV.
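This running coupling can be transcribed directly; a trivial sketch with the parameters quoted above:
\begin{verbatim}
import math

B0, LAMBDA_QCD = 25.0 / 3.0, 0.2    # n_f = 4 one-loop coefficient; GeV

def alpha_s(mu):
    """One-loop strong coupling at the scale mu (in GeV)."""
    return 4.0 * math.pi / (B0 * math.log(mu**2 / LAMBDA_QCD**2))

# Characteristic scale of the process, mu = sqrt(s)/2 at sqrt(s) = 10.6 GeV:
print("alpha_s(5.3 GeV) = %.3f" % alpha_s(5.3))
\end{verbatim}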
In the calculation the following values of the constants $f_H$ will be used \cite{Braguta:2009df}
\begin{eqnarray}
f_{\eta_c} & = & 0.373 \pm 0.064 \,\mathrm{GeV}, \nonumber \\
f_{\eta_c'} &=& 0.261 \pm 0.077 \,\mathrm{GeV},\nonumber \\
f_{\chi_{c0}}(M_{J/\Psi}) & = & 0.093 \pm 0.017\,\mathrm{GeV},\nonumber \\
f_{\chi_{c1}} & = & 0.272 \pm 0.048 \,\mathrm{GeV}, \nonumber \\
f_{\chi_{c2}}(M_{J/\Psi}) & = & 0.131 \pm 0.023 \,\mathrm{GeV}
\label{const_values}
\end{eqnarray}
The values of the constants $f_{\eta_c}, f_{\eta_c'}$ were
calculated in paper \cite{Braguta:2008tg}. The values of the constants of the $P$-wave charmonia mesons
can be found in paper \cite{Braguta:2008qe}. It should be noted that the constants
$ f_{\chi_{c0}}, f_{\chi_{c2}}$ depend on the renormalization scale. As it is seen from formulas (\ref{const_values})
these constants are defined at the scale $\mu=M_{J/\Psi}$. The anomalous dimensions of these constants,
which govern the evolution, can be found in paper \cite{Braguta:2008qe}.
\begin{table}
$$\begin{array}{|c|c|c|c|c|}
\hline
H & \sigma (e^+ e^- \to H+\gamma) (\mbox{fb})
& \sigma (e^+ e^- \to H+\gamma) (\mbox{fb})
& \sigma (e^+ e^- \to H+\gamma) (\mbox{fb}) & \sigma (e^+ e^- \to H+\gamma) (\mbox{fb}) \\
& \mbox{This work} & \mbox{\cite{Chung:2008km}} & \mbox{\cite{Li:2009ki}} & \mbox{\cite{Sang:2009jc}} \\
\hline
\eta_c& 41.6 \pm 14.1 & 82.0^{+21.4}_{-19.8} & 42.5-53.7 & 68.0^{+22.2}_{-20.3} \\
\hline
\eta_c'& 24.2 \pm 14.5 & 49.2^{+9.4}_{-7.4} & 27.7-35.1 & 42.6^{+10.9}_{-8.8} \\
\hline
\chi_{c0}& 6.1 \pm 3.9 & 1.3^{+0.2}_{-0.2} & 1.53-2.48 & 1.36^{+0.26}_{-0.26} \\
\hline
\chi_{c1}& 24.2 \pm 13.3 & 13.7^{+3.4}_{-3.1} & 11.1-17.7 & 10.9^{+3.7}_{-3.4} \\
\hline
\chi_{c2}& 12.0 \pm 17.4 & 5.3^{+1.6}_{-1.3} & 1.65-3.53 & 1.95^{+1.85}_{-1.56} \\
\hline
\end{array}$$
\caption{The cross sections of the processes
$e^+ e^- \to H+\gamma, H=\eta_c, \eta_c', \chi_{c0}, \chi_{c1}, \chi_{c2}$.
The second column contains the results obtained in this paper.
In the third, fourth and fifth columns the results
obtained in papers \cite{Chung:2008km}, \cite{Li:2009ki}, \cite{Sang:2009jc} are shown.
\label{tab}}
\end{table}
There are different sources of uncertainty in the results obtained in this paper. The most important
uncertainties can be divided into the following groups:
{\bf 1.} {\it The uncertainty in the models of the distribution amplitudes $\phi_H (x, \mu)$},
which can be estimated by varying the parameters of these models.
The calculation shows that this source of uncertainty is not greater than 10\%. So, it
is not very important and it will be ignored.
{\bf 2.} {\it The uncertainty due to radiative corrections}.
In the approach applied in this paper the leading logarithmic radiative corrections due to the
evolution of the DAs and the strong coupling constant were resummed. For the processes
$e^+e^- \to \eta_c, \eta_c'+\gamma$ one-loop radiative corrections were
taken into account. So, for these two processes radiative corrections
are not very important and they will be ignored. As to the
other processes considered in this paper, radiative corrections to the results
can be estimated as $\alpha_s(s) \sim 20 \%$.
{\bf 3.} {\it The uncertainty due to the power corrections.} This uncertainty is determined
by the next-to-leading order contribution in the $1/s$ expansion. One can estimate these
corrections using the leading order NRQCD predictions \cite{Chung:2008km}, as discussed
in paper \cite{Braguta:2009df}. Thus, for
the processes $e^+e^- \to \eta_c, \eta_c', \chi_{c0}, \chi_{c1}, \chi_{c2}+ \gamma$ the
errors due to this source of uncertainty are $\sim 3 \%, 6 \%, 50 \%, 37 \%, 60 \%$
respectively.
{\bf 4.} {\it The uncertainty in the values of constants (\ref{const_values}).} The calculations
show that, for the processes
$e^+e^- \to \eta_c, \eta_c', \chi_{c0}, \chi_{c1}, \chi_{c2}+ \gamma$ the
errors due to this source of uncertainty are
$\sim 34 \%, 60 \%, 35 \%, 35 \%, 35 \%$, respectively.
Adding all these uncertainties in quadrature, one gets the total errors of the calculation.
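The quadrature combination can be checked explicitly; the sketch below folds the itemized percentages into total errors. The central values are those of Table \ref{tab}; exact agreement with the quoted errors is not guaranteed, since the itemized percentages are themselves rounded estimates.
\begin{verbatim}
# Percent uncertainties per process: (radiative, power corrections, constants)
unc = {"eta_c":  ( 0.0,  3.0, 34.0), "eta_c'": ( 0.0,  6.0, 60.0),
       "chi_c0": (20.0, 50.0, 35.0), "chi_c1": (20.0, 37.0, 35.0),
       "chi_c2": (20.0, 60.0, 35.0)}
sigma = {"eta_c": 41.6, "eta_c'": 24.2, "chi_c0": 6.1,
         "chi_c1": 24.2, "chi_c2": 12.0}       # central values in fb

for H, errs in unc.items():
    total = sum(e**2 for e in errs) ** 0.5     # quadrature sum, in percent
    print("%s: +/- %.0f%% -> +/- %.1f fb" % (H, total, sigma[H]*total/100))
    # compare with the total errors quoted in the table
\end{verbatim}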
The results of the calculation are presented in Table \ref{tab}. The second column
contains the results obtained in this paper. In the third, fourth and fifth columns the results
obtained in papers \cite{Chung:2008km}, \cite{Li:2009ki}, \cite{Sang:2009jc} are shown.
It is seen that the results obtained in this paper are in reasonable agreement
with the results obtained within NRQCD.
\begin{acknowledgments}
This work was partially supported by the Russian Foundation for Basic Research under grant 08-02-00661, grant 09-01-12123, grant 10-02-00061, Leading Scientific Schools grant NSh-6260.2010.2
and president grant MK-140.2009.2.
\end{acknowledgments}
\section{Introduction}
Despite very different natures, gauge theories and gravity have deep connections; one of the oldest and most prominent examples is the double copy structure \cite{Kawai:1985xq, Bern:2008qj, Bern:2010ue}. Originally it was discovered from Kawai-Lewellen-Tye (KLT) relations \cite{Kawai:1985xq} in string theory, and a modern realization of double copy has relied on the duality between color and kinematics for gauge theory amplitudes, where the Bern-Carrasco-Johansson (BCJ) kinematic numerators satisfy the same Jacobi relations as the color factors. The duality and double copy have led to tremendous progress in the study of amplitudes both in gauge theory and gravity (see~\cite{Bern:2019prr, Bern:2022wqg, Adamo:2022dcm} and references therein). More recently, the authors of~\cite{Cheung:2021zvb} have revealed a so-called covariant color-kinematics (CCK) duality for a large class of theories including Yang-Mills theory (YM) and its coupling to bi-adjoint $\phi^3$ (YMS). As a consequence, the duality implies new, closed-form expressions for BCJ numerators of all tree-level amplitudes in YMS and YM theory.
Previous works on BCJ numerators and kinematic algebras include~\cite{Mafra:2011kj,Bargheer:2012gv,CHY3,He:2015wgf,Fu:2017uzt,Teng:2017tbo, Du:2017kpo, He:2017spx, Edison:2020ehu,He:2021lro, Monteiro:2011pc,Monteiro:2013rya,Cheung:2016prv,Chen:2019ywi,Edison:2022jln} and references therein.
On the other hand, recent years have seen progress on revealing new geometric/combinatorial structures underlying scattering amplitudes {\it e.g.} from the (all-loop) amplituhedron of supersymmetric Yang-Mills ~\cite{Arkani-Hamed:2013jha} to the associahedron for bi-adjoint $\phi^3$ at tree level~\cite{Arkani-Hamed:2017mur} (with extensions to string scattering~\cite{Arkani-Hamed:2019mrd}). It is natural to look for hints of such structures underlying YM and gravity amplitudes; instead of directly working with tree amplitudes, one may decompose the problem and ask a somewhat strange question as a first step: are there combinatorial structures underlying BCJ numerators?
In this note, we take BCJ numerators from CCK duality~\cite{Cheung:2021zvb} as inputs and present preliminary evidence for such structures: in addition to the more familiar quasi-shuffle Hopf algebras~\cite{Hoffman}, we find hidden combinatorial permutohedra \cite{wiki:Permutohedron} for BCJ numerators. Any BCJ numerator can be written as the sum over all boundaries of a permutohedron (or terms from a quasi-shuffle product); for a co-dimension $d$ boundary (length-$d$ term), it contains a product of $d{+}1$ factors each with a spurious pole and a gauge-invariant numerator. We will focus on the case with two scalars and $n{-}2$ gluons, which corresponds to an $(n{-}3)$-dimensional permutohedron with ${\cal F}_{n{-}2}$ (the Fubini number) boundaries of co-dimensions $d=0, 1, \ldots, n{-}3$; each boundary is labeled by $d+1$ subsets, and for each factor labeled by such a set both the numerator (which is gauge invariant in the gluons) and the (spurious pole) denominator are given by Lorentz products of momenta and polarizations, as well as the reference momenta. Apart from being the most illustrative BCJ numerators among the YMS cases, we will also see that they give nice BCJ numerators in the heavy-mass effective theory (HEFT)~\cite{Georgi:1990um,Luke:1992cs,Neubert:1993mb,manohar_wise_2000,Damgaard:2019lfh,Brandhuber:2021kpo, Brandhuber:2021bsf, Brandhuber:2022enp} as well as in the decoupling limit into pure YM amplitudes. BCJ numerators in HEFT have attracted lots of interest recently for their roles in the computation of gravitational amplitudes for black-hole scattering and gravitational waves~\cite{Brandhuber:2021eyq} ({\it cf.}~\cite{Kosower:2018adc,Bern:2019nnu,Damour:2019lcq,Bern:2021dqo,DiVecchia:2021bdo,Herrmann:2021tct,Bjerrum-Bohr:2021din,Bjerrum-Bohr:2021wwt,Jakobsen:2021lvp} for some recent works). We will take the heavy-mass limit of the YMS amplitude, and (as we have checked up to $n=10$) highly nontrivial cancellations lead to a nice formula for BCJ numerators in HEFT, which corresponds to ${\cal P}_{n{-}3}$ (one dimension lower)!
Furthermore, our results imply new recursion relations and, surprisingly, ``factorization'' properties of BCJ numerators on facets of permutohedra; all these can be extended to BCJ numerators of general YMS amplitudes, which in turn combine into a formula for the YM case as well. For the latter, we can then turn the logic around: since the BCJ numerators are manifestly gauge invariant in $n{-}1$ gluons, by showing that all spurious poles indeed cancel in the amplitude based on such ``factorizations'', it follows from the uniqueness theorem of~\cite{Arkani-Hamed:2016rak} that they must give the correct YM and gravity amplitudes (after double copy) even without knowing the CCK duality.
Let us consider the color-ordered YMS amplitude $A(1^\phi,2,\ldots,n{-}1,n^\phi)$ with scalars $1^\phi,n^\phi$. Its expansion onto the Kleiss-Kuijf (KK) basis~\cite{Kleiss:1988ne} of bi-adjoint $\phi^3$ amplitudes, with BCJ master numerators as coefficients, reads
\begin{equation}\label{eq: BCJnum}
A(1^\phi,2,\ldots,n{-}1,n^\phi){=}\sum_{\beta\in S_{n-2}} K(1,\beta,n)A^{\phi^3}(1,\beta,n),
\end{equation}
where the sum is over $(n{-}2)!$ permutations of gluons and $A^{\phi^3}(1,\beta,n) \equiv m(1,2,\ldots,n | 1,\beta,n)$ denotes bi-adjoint $\phi^3$ amplitudes with the first ordering fixed to be $(1,2,\ldots,n)$.
Remarkably, the BCJ numerators from CCK duality $K(1,\beta,n)$ respect the Bose symmetry of all the $n{-}2$ gluons ~\cite{Cheung:2021zvb}: we only need a single numerator with the ordering chosen to be $\beta=(2,\ldots,n{-}1)$, and all others can be obtained by relabelling; they are also gauge invariant for the gluons, which becomes manifest since the dependence on polarizations is through Lorentz products of linearized field strengths $F_i^{\mu\nu}\equiv p_i^\mu \varepsilon_i^\nu-p_i^\nu\varepsilon_i^\mu$
\begin{equation}\label{Fprod}
[F_{\sigma}]^{\mu \nu}= [F_{\sigma_{1}}\cdot F_{\sigma_{2}}\cdots \cdot F_{\sigma_{|\sigma|}}]^{\mu \nu}
\end{equation}
for an ordered subset $\sigma$. The price to pay for these desirable properties is the presence of $2^{n{-}2}-1$ spurious poles, one for each nonempty subset $I\subset \{2, \ldots, n{-}1\}$:
\begin{equation}\label{Ddef}
D_{I}:= p_{I}\cdot q_{I},\quad \text{with} \quad p_I:=\sum_{i\in I}p_i,
\end{equation}
which depends on a reference momentum $q_{I}$. These numerators can be simplified with some choices of $q_I$, and the final amplitude is independent of them.
\section{The permutohedron and algebra underlying BCJ numerators}
In this section, we show that all the terms in a BCJ master numerator obtained from CCK duality for YMS amplitudes are in one-to-one correspondence with all boundaries of the permutohedron $\mathcal{P}_{n{-}2}$, or equivalently with the terms of a quasi-shuffle product.
\subsection{The (combinatorial) permutohedra and quasi-shuffle products}\label{sec:def}
Following~\cite{Cheung:2021zvb}, we organize $K(1,2,\ldots, n)$ according to the spurious pole structure, which is isomorphic to the boundary structure of the permutohedron $\mathcal{P}_{n-2}$.
The permutohedron $\mathcal{P}_{n-2}$ is an $(n{-}3)$-dimensional polytope~\cite{wiki:Permutohedron}, whose co-dimension $d$ boundary $\Gamma_d$ can be labeled by $d{+}1$ consecutive subsets
\begin{equation}\label{eq: lBD}
\Gamma_d:=\{I_{0}, I_{1},\ldots, I_{d}\},
\end{equation}
where $I_d{\neq}\emptyset$ and $I_d\subset I_{d{-}1}\subset\ldots\subset I_0=\{2,3,\ldots,n{-}1\}$; the interior of ${\cal P}_{n{-}2}$ can be viewed as its co-dimension $0$ boundary, $\Gamma_0:=I_0$. ${\cal P}_{n{-}2}$ and its boundaries have appeared in the context of cubic tree graphs from the worldsheet~\cite{Gao:2017dek, He:2018pue}. Here each term of the BCJ numerator $K(1, 2, \ldots, n)$ with $d{+}1$ spurious poles corresponds to such a co-dimension $d$ boundary, thus the numerator can be expanded in terms of boundaries of $\mathcal{P}_{n-2}$
\begin{equation}\label{eq: KBDexp}
K(1,2,\ldots,n)=\sum_{d=0}^{n{-}3}\sum_{\Gamma_d\in\partial^d\mathcal{P}_{n{-}2}}K_{\Gamma_d}(1,2,\ldots,n),
\end{equation}
where we sum over all boundaries $\Gamma_d\in\partial^d\mathcal{P}_{n{-}2}$ with co-dimension $d=0, \ldots, n{-}3$, and the contribution from $\Gamma_d$, $K_{\Gamma_d}(1,2,\ldots,n)\equiv K_{\Gamma_d}$ reads
\begin{equation}\label{eq: KBD}
K_{\Gamma_d}=\prod_{k=0}^{d}\frac{p_{1
\Delta(I_{k},I_{k+1})
}\cdot F_{\tau_{k}}\cdot q_{I_{k}}}{D_{I_{k}}}.
\end{equation}
It has $d{+}1$ factors each with a denominator $D_{I_k}$ of \eqref{Ddef} and a numerator of the form $p_{1 \Delta(I_{k},I_{k+1})}\cdot F_{\tau_k} \cdot q_{I_k}$ for $k=0, \dots, d$. To specify the ordered subset $\tau_k$ of the Lorentz product as in \eqref{Fprod}, we introduce an alternative form of \eqref{eq: lBD} using ordered sets:
\begin{align}\label{eq: lBDt}
\Gamma_d
=&\{\tau_0\cup\tau_1\cup\ldots\cup\tau_d,\tau_1\cup\ldots\cup\tau_d,\ldots,\tau_{d{-}1}\cup\tau_d,\tau_d\}\\\nonumber
\sim&\{\tau_0,\tau_1,\ldots,\tau_d\},
\end{align}
where the first line is equivalent to \eqref{eq: lBD} but we use ordered sets $\tau_{k}=\mathrm{Id}(I_k/I_{k{+}1})$ with $I_{d{+}1}\equiv \emptyset$. $\mathrm{Id}(I)$ means sorting the subset $I$ in numerical ordering~\footnote{For $K(1,\beta,n)$, everything stays the same except that in \eqref{eq: lBDt} the definition of $\tau_i$'s become $\tau_{k}=\beta(I_k/I_{k{+}1})$, {\it i.e.} any subset $I$ is sorted according to $\beta$ ordering.}. Perhaps the most subtle point is that we also define
$\Delta(I_{k},I_{k+1}):=\left(\bar{I}_{k}\right)_{<\tau_{{k},1}}$, which refers to the elements in the set $\bar{I}_{k}=\{2,3,\ldots,n-1\}/I_k$ that are numerically smaller than the first element $\tau_{k,1}$ of the ordered set $\tau_{k}$.
For example, at $n=5$, we have a boundary $\Gamma_2=\{I_0=\{2,3,4\}, I_1=\{2,4\}, I_2=\{4\}\}$; equivalently, we have $\tau_0=\{3\}, \tau_1=\{2\}, \tau_2=\{4\}$, thus we have a term
\begin{equation}
K_{234,24,4}= \frac{p_1\cdot F_3\cdot q_{234} p_1\cdot F_2\cdot q_{24} p_{123}\cdot F_4\cdot q_4 }{D_{234}D_{24} D_4 }
\end{equation}
where we have used $\Delta(I_2,\emptyset)=\left.\left(\bar{I_2}\right)\right|_{<4}=\{2,3\}$. A more nontrivial example for the latter is for $n{=}9$, $\Delta(\{4,5,7,8\},\{4,8\})=\left.\{2,3,6\}\right|_{<5}=\{2,3\}$.
On the other hand, the boundaries of $\mathcal{P}_{n-2}$ have a nice quasi-shuffle product interpretation. The quasi-shuffle product $\star$ can be defined between two arbitrary generators $\left(\sigma_0,\sigma_1,\ldots,\sigma_r\right)$ and $\left(\rho_0,\rho_1,\ldots,\rho_s\right)$, where $\sigma_i$ and $\rho_j$ are sets of arbitrary lengths, and it can also be promoted to the quasi-shuffle Hopf algebra~\cite{Hoffman, Brandhuber:2021bsf, Brandhuber:2022enp, Chen:2022nei}. We summarize the definitions in appendix \ref{sec:appAG} and here we just use the following result of the quasi-shuffle product $\hat{K}(2,\ldots, n{-}1)\equiv(2)\star(3)\star\ldots\star(n-1)$
\begin{equation}\label{eq: part}
\hat{K}(2,\ldots,n{-}1)
{=}\sum_{d{=}0}^{n{-}3}\sum_{\tau\in \text{part}^{(d{+}1)}(2,\ldots,n{-}1)}\!\!\!\!\!\!\!\!\!\!\!\!\!\!(-1)^{n+d-1}(\tau_0,\tau_1,\ldots,\tau_{d}),
\end{equation}
where $\text{part}^{(d+1)}(2,\ldots,n{-}1)$ denotes all the ordered partitions of $\{2,3,\ldots,n{-}1\}$ into $d+1$ nonempty subsets $(\tau_0,\tau_1,\ldots,\tau_d)$ (each $\tau_i$ is sorted according to $\beta$). Terms on the RHS of \eqref{eq: part} are in one-to-one correspondence with boundaries of $\mathcal{P}_{n-2}$ as in~\eqref{eq: lBDt}, thus we can rewrite \eqref{eq: KBDexp} in terms of the quasi-shuffle product
\begin{equation}
\label{eq:qsp}
K(1,2,\ldots,n)=\langle\hat{K}(2,\ldots, n{-}1)\rangle,
\end{equation}
where we have defined a {\it linear map} $\langle\cdot\rangle$ from \eqref{eq: KBD} for any partition $(\tau_0,\tau_1,\ldots,\tau_{d})$
\begin{equation}\label{eq:linmap}
\langle(\tau_0,\tau_1,\ldots,\tau_{d})\rangle{=}(-1)^{n+d-1}\prod_{k=0}^{d}\frac{p_{1\Delta(I_{k},I_{k+1})
}\cdot F_{\tau_{k}}\cdot q_{I_{k}}}{D_{I_{k}}}.
\end{equation}
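The product \eqref{eq: part} is easy to realize algorithmically; below is a minimal sketch of our own (not part of the construction of \cite{Cheung:2021zvb}) which iterates the quasi-shuffle of single-letter generators and thereby enumerates the boundaries of $\mathcal{P}_{n-2}$.
\begin{verbatim}
def quasi_shuffle(u, v):
    """Quasi-shuffle of two words u, v (tuples of frozensets of labels)."""
    if not u: return [v]
    if not v: return [u]
    return ([(u[0],) + w for w in quasi_shuffle(u[1:], v)] +
            [(v[0],) + w for w in quasi_shuffle(u, v[1:])] +
            [(u[0] | v[0],) + w for w in quasi_shuffle(u[1:], v[1:])])

def K_hat(labels):
    """(l_1)*(l_2)*... : all ordered set partitions of the label set."""
    words = [(frozenset({labels[0]}),)]
    for l in labels[1:]:
        words = [t for w in words
                 for t in quasi_shuffle(w, (frozenset({l}),))]
    return words

for n in (4, 5, 6):   # the sign of a length-(d+1) term is (-1)^(n+d-1)
    print("n = %d: %d terms" % (n, len(K_hat(list(range(2, n))))))
\end{verbatim}
The counting comes out as $3$, $13$ and $75$ terms, {\it i.e.} the Fubini numbers $\mathcal{F}_{n-2}$, matching the boundary counting of $\mathcal{P}_{n-2}$ discussed in the next subsection.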
\subsection{The counting and some examples}
By definition, the permutohedron $\mathcal{P}_m$ contains $m!$ vertices and $2^{m}-2$ co-dimension one facets. More generally, the number of co-dimension $d$ boundaries of this polytope is $(d+1)!S(m,d+1)$, where $S(m,d)$ is the second kind of Stirling number \cite{wiki:Stirling_numbers_of_the_second_kind}. Algebraically, $S(m,d+1)$ also counts the number of ways to partition a set of $m$ labeled objects into $d+1$ nonempty unlabeled subsets $\{\tau_0,\tau_1,\ldots,\tau_d\}$ \cite{wiki:Stirling_numbers_of_the_second_kind}, so after considering the ordering between these sets, there are $(d+1)!S(m,d+1)$ terms in the summation for any $d$.
The total number is the Fubini number $\mathcal{F}_{m}$, where $\mathcal{F}_{m}=\sum_{d=1}^{m}d!S(m,d)$ \cite{Mezo}, thus the $n$-point BCJ numerator has $\mathcal{F}_{n-2}$ terms.
\begin{table}[H]
\centering
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\diagbox[innerwidth=0.5cm]{n}{d} & 0 & 1 & 2 & 3 &Total \\
\hline
3 & 1 & & & &1\\
\hline
4 & 1 & 2 & & &3\\
\hline
5 & 1 & 6 & 6 & &13 \\
\hline
6 & 1 & 14 & 36 & 24&75 \\
\hline
\end{tabular}%
\caption{Counting co-dimension-$d$ boundaries of $\mathcal{P}_{n-2}$}
\label{tab:BDc}%
\end{table}%
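The entries of this table follow from the closed-form count $(d+1)!\,S(m,d+1)$; a quick numerical check (our own sketch):
\begin{verbatim}
from math import comb, factorial

def stirling2(m, k):
    """Stirling number of the second kind, via inclusion-exclusion."""
    return sum((-1)**j * comb(k, j) * (k - j)**m
               for j in range(k + 1)) // factorial(k)

for n in (3, 4, 5, 6):
    m = n - 2
    row = [factorial(d + 1) * stirling2(m, d + 1) for d in range(m)]
    print("n = %d: %s, total = %d" % (n, row, sum(row)))  # Fubini F_{n-2}
\end{verbatim}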
Let us illustrate \eqref{eq: KBDexp} and \eqref{eq: KBD} with some examples. The most trivial case is $n=3$, where the BCJ numerator corresponds to the zero-dimensional permutohedron $\mathcal{P}_{1}$ which is just a point. It contains one term with $\Gamma_0=\{I_0\}=\{2\}$, thus $K(1,2,3)=K_{2}(1,2,3)=\frac{p_1\cdot F_{2}\cdot q_2}{D_2}$, where we have used $\Delta(I_{0},I_{1})=\emptyset$; this is generally true since $\bar{I}_0=\emptyset$.
For $n=4$, the permutohedron $\mathcal{P}_{2}$ is a line segment, where the interior ($d=0$) is labelled by $I_0=\{23\}$, and the two vertices ($d=1$) are labeled by $\{23,2\}$ and $\{23,3\}$; we show these three terms in figure \ref{fig:P2}.
\begin{figure}[H]
\centering
\subfloat[]{ \label{fig:P2}
\begin{minipage}[c]{1\linewidth}
\centering
\includegraphics[scale=1.5]{figs/P2K.pdf}
\end{minipage}
}
\subfloat[]{ \label{fig:P3}
\begin{minipage}[c]{1\linewidth}
\centering
\includegraphics[scale=1.3]{figs/P3D.pdf}
\end{minipage}
}
\caption{Permutohedra ${\cal P}_2$ for $K(1,2,3,4)$(top) and ${\cal P}_3$ for $K(1,2,3,4,5)$(bottom)}
\end{figure}
Equivalently, in \eqref{eq: part} the partition $\text{part}^{(1)}$ of $\{2,3\}$ has $(\tau_0=\{2,3\})$ and $\text{part}^{(2)}$ has $(\tau_0=\{2\},\tau_1=\{3\})$ and $(\tau_0=\{3\},\tau_1=\{2\})$: they are nothing but the interior and the two vertices, according to \eqref{eq: lBDt}.
Thus the BCJ numerator $K(1,2,3,4)$ has three terms, $K_{23}$, $K_{23,2}$ and $K_{23,3}$, which read
\begin{align} \label{eq: BCJnum4}
\frac{p_{1}{\cdot}F_{23}{\cdot} q_{23}}{D_{23}}{+}\frac{p_{1}{\cdot} F_{3}{\cdot} q_{23} p_{1}{\cdot} F_{2}{\cdot} q_{2}}{D_{23}D_{2}}{+}\frac{p_{1}{\cdot} F_{2}{\cdot} q_{23} p_{12}{\cdot} F_{3}{\cdot} q_{3} }{D_{23}D_{3}}\,.
\end{align}
Notice that the last term in the above equation is from the boundary $\{23,3\}$, so the second factor in the numerator is $p_{1\left(\overline{\{3\}}\right)_{<3}}\cdot F_{3}\cdot q_{3}=p_{12}\cdot F_{3}\cdot q_{3}$. Meanwhile \eqref{eq: BCJnum4} shows that the four-point numerator $K(1,2,3,4)$ has an overall pole $D_{I_0}=D_{23}$. This can be easily seen from \eqref{eq: lBD} since the first set of any co-dimension $d$ boundary $\Gamma_d$ is always labeled by $I_0=\{2,3,\ldots,n{-}1\}$.
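Since all ingredients of \eqref{eq: KBD} are elementary set operations, every boundary term can be generated mechanically; the sketch below (our own illustration) prints the numerator/denominator structure of each term for $n=4$ and reproduces \eqref{eq: BCJnum4}.
\begin{verbatim}
from itertools import combinations

def ordered_partitions(s):
    """All ordered partitions (tau_0,...,tau_d) of tuple s, nonempty blocks."""
    if not s:
        yield ()
        return
    for k in range(1, len(s) + 1):
        for block in combinations(s, k):
            rest = tuple(x for x in s if x not in block)
            for tail in ordered_partitions(rest):
                yield (block,) + tail

def j(xs):
    return "".join(map(str, xs))

def boundary_term(taus, n):
    """String form of the contribution K_Gamma of Eq. (eq: KBD)."""
    full = set(range(2, n)); I = set(full); factors = []
    for tau in taus:
        delta = [x for x in sorted(full - I) if x < tau[0]]  # Delta(I_k,I_k+1)
        factors.append("p_1%s.F_%s.q_%s/D_%s"
                       % (j(delta), j(tau), j(sorted(I)), j(sorted(I))))
        I -= set(tau)
    return " * ".join(factors)

for taus in ordered_partitions(tuple(range(2, 4))):          # n = 4
    print(boundary_term(taus, 4))
\end{verbatim}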
Notice that \eqref{eq: KBD} means each term in the BCJ numerator contains spurious poles, and the co-dimension $d$ contribution will have $d{+}1$ spurious poles, where $D_{I_0}$ is an overall pole for every boundary.
Except for the overall one, the simple poles can be written as $D_I$ where $I$ is a nonempty proper subset of $I_0=\{2,3,\ldots,n{-}1\}$ and two simple poles $D_I$ and $D_J$ are compatible if and only if $I\subset J$ or $J\subset I$. For example, at five-point, except for the overall $D_{234}$, the simple poles are $D_{2},D_{3},D_{4},D_{23},D_{24},D_{34}$ which correspond to the six co-dimension one boundaries of $\mathcal{P}_3$, and the compatible double poles are
\begin{equation}
\{D_{23}D_{2},D_{24}D_{2},D_{23}D_{3},D_{34}D_{3},D_{24}D_{4},D_{34}D_{4}\},
\end{equation}
which correspond to six vertices of the hexagon $\mathcal{P}_3$. We show the boundary contribution formally in figure \ref{fig:P3}.
These $13$ terms form a two-dimensional polytope $\mathcal{P}_{3}$. These boundaries can also be realized in the quasi-shuffle product $\hat{K}(2,3,4)$, which will be discussed in appendix \ref{sec: N5}. To be precise, we give some explicit examples of different co-dimensions here
\begin{equation}
\begin{aligned}
&K_{234}{=}\frac{p_1{\cdot}F_{234}\cdot q_{234}}{D_{234}},\\
&K_{234,23}{=}\frac{p_1{\cdot}F_4{\cdot}q_{234} p_1{\cdot} F_{23}{\cdot}q_{23}}{D_{234}D_{23}},\;\,K_{234,2}{=}\frac{p_1{\cdot} F_{34}{\cdot} q_{234} p_1{\cdot}F_2{\cdot}q_2}{D_{234}D_2}\\
&K_{234,23,2}{=}\frac{ p_1{\cdot}F_4{\cdot}q_{234} p_1{\cdot} F_3{\cdot} q_{23} p_1{\cdot} F_2{\cdot} q_2}{D_{234}D_{23}D_2 }.
\end{aligned}
\end{equation}
The complete result for the BCJ numerator $K(1,2,3,4,5)$ is shown in the appendix \ref{sec: N5}.
Moreover, we emphasize that all the spurious poles are canceled in the final amplitude, and the amplitude does not depend on the reference momenta. The proof will be presented in the follow-up paper \cite{toapp}.
For $n=6$, $\mathcal{P}_4$ is a three-dimensional truncated octahedron shown in figure \ref{fig: P4}. As we have counted, it contains $14$ co-dimension one boundaries (six squares and eight hexagons), $36$ edges and $4!=24$ vertices; together with the interior, this gives $75$ boundaries in total. Some terms with co-dimension $d=0,1,2,3$ are
\begin{align}
&K_{2345}=\frac{p_1{\cdot} F_{2345}{\cdot} q_{2345}}{D_{2345}}\\\nonumber
&K_{2345,234}=\frac{p_1{\cdot} F_{5}{\cdot} q_{2345}p_1{\cdot} F_{234}{\cdot} q_{234}}{D_{2345}D_{234}}\\\nonumber
&K_{2345,234,23}=\frac{p_1{\cdot} F_{5}{\cdot} q_{2345}p_1{\cdot} F_{4}{\cdot} q_{234}p_1{\cdot} F_{23}{\cdot} q_{23}}{D_{2345}D_{234}D_{23}}\\\nonumber
&K_{2345,234,23,2}=\frac{p_1{\cdot} F_{5}{\cdot} q_{2345}p_1{\cdot} F_{4}{\cdot} q_{234}p_1{\cdot} F_{3}{\cdot} q_{23}p_1{\cdot} F_2{\cdot} q_2}{D_{2345}D_{234}D_{23}D_2}.
\end{align}
\begin{figure}[H]
\centering
\includegraphics[scale=0.4]{figs/P4label.pdf}
\caption{The permutohedron $\mathcal{P}_4$ for $K(1,\ldots, 6)$}
\label{fig: P4}
\end{figure}
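These counts can be cross-checked against the general formula $(d{+}1)!\,S(m,d{+}1)$ for the number of co-dimension $d$ boundaries of $\mathcal{P}_m$, with $S$ the Stirling number of the second kind; a short illustrative Python check (ours) for $\mathcal{P}_4$:
\begin{verbatim}
from math import comb, factorial

def stirling2(m, k):
    # Stirling number of the second kind S(m, k), via inclusion-exclusion
    return sum((-1)**j * comb(k, j) * (k - j)**m
               for j in range(k + 1)) // factorial(k)

def boundary_counts(m):
    # number of co-dimension-d boundaries of P_m, for d = 0, ..., m-1
    return [factorial(d + 1) * stirling2(m, d + 1) for d in range(m)]

print(boundary_counts(4))       # [1, 14, 36, 24]
print(sum(boundary_counts(4)))  # 75, the Fubini number F_4
\end{verbatim}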
\section{Numerators for general YMS and pure YM amplitudes}
\label{sec:YM}
More generally, the CCK duality provides closed formulae for BCJ numerators of $n$-point YMS amplitudes with $r\geq 2$ scalars~\cite{Cheung:2021zvb}. It turns out that any such numerator corresponds to a permutohedron ${\cal P}_{n{-}r}$ (of dimension $n{-}r{-}1$): everything we have discussed above for the $r=2$ case still applies if we replace $\{2, 3,\ldots, n{-}1\}$ by the set of $n{-}r$ gluons. For example, in the other extreme with $r=n$, we can formally define ${\cal P}_0:={\cal P}_{\emptyset}$, and the $n$-scalar numerator (of the bi-adjoint $\phi^3$ amplitude) is $1$ or $0$. For $r=n{-}1$, we have the zero-dimensional ${\cal P}_1$, and the numerator with a single gluon $i$ reads
\begin{equation}
K^{1-{\rm gluon}}(1, \beta, n)=\frac{p_{1{\rightarrow }i} \cdot F_i \cdot q_i}{D_i},
\end{equation} where $1 {\rightarrow} i$ denotes all the scalars preceding $i$ in $(1\beta n)$. We shall not repeat this for general cases but leave detailed discussions to a separate paper~\cite{toapp}.
Moreover, since the pure YM amplitude can be expanded as a linear combination of these YMS ones~\cite{Lam:2016tlk, Fu:2017uzt, Du:2017kpo, Cheung:2017ems, Dong:2021qai}, we obtain its BCJ numerators for free; the resulting numerator naively contains $2\mathcal{F}_{n-2}$ terms, as derived in \cite{Cheung:2021zvb}. However, we can organize the terms according to their pole structures and immediately combine them in pairs into $\mathcal{F}_{n-2}$ terms: the resulting numerator has the same form as in the $2$-scalar case and corresponds to boundaries of the permutohedron $\mathcal{P}_{n-2}$. Expanding $A^{\text{YM}}(1,2,\ldots,n)$ in exactly the same way as \eqref{eq: BCJnum}, each master BCJ numerator, {\it e.g.} $K^{\text{YM}}(1,2,\ldots,n)$, is given by a sum over boundaries of $\mathcal{P}_{n-2}$ as in \eqref{eq: KBDexp}:
\begin{align}
K^{\text{YM}}(1,2,\ldots,n)&=\sum_{d=0}^{n{-}3}\sum_{\Gamma_d\in\partial^d\mathcal{P}_{n{-}2}}K^{\text{YM}}_{\Gamma_d}(1,2,\ldots,n), \nonumber
\end{align}
where the contribution from each boundary is identical to \eqref{eq: KBD}, except for the $k=0$ factor, which becomes
\begin{equation}\label{eq: N0YM}
\frac{\varepsilon_n\cdot F_{1\tau_0}\cdot q_{I_{0}}+\varepsilon_1\cdot F_{\tau_0}\cdot\left(\varepsilon_n p_{1n}\cdot q_{I_{0}}-q_{I_{0}} p_{1}\cdot\varepsilon_n\right)}{D_{I_0}}.
\end{equation}
Of course, similar to the YMS case, all spurious poles cancel in the final amplitude, which does not depend on $q_I$. Therefore we are free to choose them to simplify the expression \eqref{eq: N0YM}. One such choice is $q_{I_{0}}=\varepsilon_n$, and the $k=0$ factor \eqref{eq: N0YM} takes a simpler form
\begin{equation}\label{eq: KYMSim}
\frac{\varepsilon_n{\cdot} F_{1\tau_0}{\cdot} \varepsilon_n}{\varepsilon_n{\cdot} p_{23\ldots n{-}1}}=-\frac{\varepsilon_n{\cdot} F_{1\tau_0}{\cdot} \varepsilon_n}{\varepsilon_n{\cdot} p_{1}}.
\end{equation}
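To see this explicitly (a one-line check of ours, using only $p_n\cdot\varepsilon_n=0$ and momentum conservation): setting $q_{I_0}=\varepsilon_n$ in \eqref{eq: N0YM}, the second term drops out since
\begin{equation*}
\varepsilon_1\cdot F_{\tau_0}\cdot\left(\varepsilon_n\, p_{1n}\cdot \varepsilon_n-\varepsilon_n\, p_{1}\cdot\varepsilon_n\right)=\varepsilon_1\cdot F_{\tau_0}\cdot\varepsilon_n\,\left(p_{n}\cdot \varepsilon_n\right)=0,
\end{equation*}
while the denominator becomes $D_{I_0}=p_{23\ldots n{-}1}\cdot\varepsilon_n=-(p_1{+}p_n)\cdot\varepsilon_n=-p_1\cdot\varepsilon_n$, reproducing \eqref{eq: KYMSim}.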
It is easy to see that the BCJ numerators become manifestly gauge invariant in particles $1,2,\ldots,n{-}1$.
For example, the BCJ numerator $K^{\text{YM}}(1,2,3,4)$ reads
\begin{equation*}
{-}\frac{\varepsilon_{4}{\cdot}F_{123}{\cdot} \varepsilon_4}{p_1{\cdot}\varepsilon_4}{-}\frac{\varepsilon_4{\cdot} F_{13}{\cdot} \varepsilon_4 p_{1}{\cdot} F_{2}{\cdot} q_{2}}{p_1{\cdot}\varepsilon_4 D_{2}}{-}\frac{\varepsilon_4{\cdot} F_{12}{\cdot} \varepsilon_4 p_{12}{\cdot} F_{3}{\cdot} q_{3} }{p_1{\cdot}\varepsilon_4 D_{3}}.
\end{equation*}
Furthermore, similar to the discussion in section \ref{sec:def}, BCJ numerators of YM amplitudes can also be interpreted in terms of quasi-shuffle products, and the only change is that in the linear map \eqref{eq:linmap} the $k=0$ factor is modified to \eqref{eq: N0YM}.
Before ending this section, we mention the obvious double copy from YM to GR:
\begin{equation}
M^{\rm GR}_n=\sum_{\alpha, \beta} K^{\rm YM}(1, \alpha, n) m(1,\alpha ,n|1, \beta ,n) K^{\rm YM}(1, \beta, n)
\end{equation}
where we sum over a pair of permutations $\alpha,\beta$ of $\{2,3,\ldots, n{-}1\}$, with $m$ denoting bi-adjoint $\phi^3$ amplitudes; if we replace YM by YMS with $1, n$ being scalars, it gives the amplitude with $n{-}2$ gravitons and two scalars.
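Structurally, the double copy is a quadratic form in the master numerators; the following schematic Python sketch (ours; the function name is invented, and the inputs \texttt{K} and \texttt{m} are assumed to be precomputed, {\it e.g.} from the CCK formulae) simply makes the double sum explicit:
\begin{verbatim}
from itertools import permutations

def gravity_amplitude(K, m, gluons):
    # schematic double copy: M_n = sum_{a,b} K(1,a,n) m(1,a,n|1,b,n) K(1,b,n)
    # K maps an ordering (tuple) of the gluons to the master BCJ numerator;
    # m maps a pair of orderings to the bi-adjoint phi^3 amplitude
    orderings = list(permutations(gluons))
    return sum(K[a] * m[a, b] * K[b] for a in orderings for b in orderings)
\end{verbatim}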
\section{Recursions and factorizations}
In this section, we propose recursion relations and factorization properties (on spurious poles $D_I$) for the BCJ numerators, which are implied by the combinatorial and algebraic structure. The argument can be equally applied to both two-scalar YMS and pure YM numerators.
\subsection{Recursion relations}
First, in the quasi-shuffle product~\eqref{eq: part}, one can collect the terms with the same $\tau_d$ and then apply the linear map \eqref{eq:linmap} to obtain the following recursion relation,
\begin{equation}\label{eq: Krec}
K(1,2,\ldots,n){=}\sum_{I\subset\{2,\ldots,n{-}1\}}\frac{p_{1\Delta(I,\emptyset)}{\cdot} F_I {\cdot} q_I}{D_I}\,\tilde{K}^I(1,\bar{I},n),
\end{equation}
where the summation is over all nonempty subsets of $\{2,3,\ldots,n{-}1\}$. The definition of $\tilde{K}^I(1,\bar{I},n)$ differs slightly from \eqref{eq: KBD} in the denominators: it is given by the sum over boundaries of the permutohedron $\mathcal{P}_{\bar{I}}$, with vertices labeled by all permutations of the set $\bar{I}$, and for each boundary $\Gamma_d=\{J_0=\bar{I},J_1,\ldots,J_{d}\}$ with $J_d\subset J_{d{-}1}\subset\ldots\subset J_0$ we have a contribution
\begin{equation}\label{eq: tKBD}
\tilde{K}^I_{\Gamma_d}=\prod_{k=0}^{d}\frac{p_{1\Delta(J_{k},J_{k+1})}\cdot F_{\tau_k}\cdot q_{IJ_{k}}}{D_{IJ_{k}}},
\end{equation}
where $D_{IJ_k}=p_{IJ_k}\cdot q_{IJ_k}$, $\tau_k= \mathrm{Id}(J_k/J_{k{+}1})$ with $J_{d{+}1}\equiv \emptyset$, and the complement of the set $J_k$ appearing in $\Delta(J_{k},J_{k+1})$ is defined as $\bar{I}/J_k$. For $\abs{I}{=}n{-}2$ ($\bar{I}=\emptyset$), we define $\tilde{K}^I(1,n)=1$; formally, this numerator corresponds to the permutohedron $\mathcal{P}_0$.
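As a consistency check on the counting (ours, purely illustrative), the recursion \eqref{eq: Krec} mirrors the standard recurrence for Fubini numbers: choosing the nonempty subset $I$ that carries the explicit factor leaves the $\mathcal{F}_{n-2-\abs{I}}$ terms of $\tilde{K}^I$; a few lines of Python confirm that this reproduces $\mathcal{F}_1,\ldots,\mathcal{F}_4$:
\begin{verbatim}
from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def fubini(m):
    # F_m = sum_k C(m, k) F_{m-k}: pick the subset I with |I| = k that
    # carries the explicit factor in eq. (Krec), then recurse on its
    # complement; F_0 = 1 corresponds to K~^I(1, n) = 1
    if m == 0:
        return 1
    return sum(comb(m, k) * fubini(m - k) for k in range(1, m + 1))

print([fubini(m) for m in range(1, 5)])  # [1, 3, 13, 75]
\end{verbatim}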
For example, the recursion relation of $K(2,3,4)\equiv K(1,2,3,4,5)$ \footnote{Here we have omitted the labels of the scalar particles $1$ and $n$ in the numerators $K$ and $\tilde{K}$.} reads
\begin{small}
\begin{align}\label{eq: rec5}
K&(2,3,4)=\frac{p_1{\cdot} F_{234}{\cdot} q_{234}}{D_{234}}\\\nonumber
{+}&\frac{p_1{\cdot} F_{23}{\cdot} q_{23}}{D_{23}}\tilde{K}^{23}(4){+}\frac{p_{1}{\cdot} F_{24}{\cdot} q_{24}}{D_{24}}\tilde{K}^{24}(3){+}\frac{p_{12}{\cdot} F_{34}{\cdot} q_{34}}{D_{34}}\tilde{K}^{34}(2)\\\nonumber
{+}&\frac{p_1{\cdot} F_2{\cdot} q_2}{D_2}\tilde{K}^2(3,4)
{+}\frac{p_{12}{\cdot} F_3{\cdot} q_3}{D_3}\tilde{K}^3(2,4){+}\frac{p_{123}{\cdot} F_4{\cdot} q_4}{D_4}\tilde{K}^4(2,3).
\end{align}
\end{small}
Geometrically, the recursion relation~\eqref{eq: Krec} tells us how the co-dimension one boundaries of the permutohedron are glued together. In the above five-point example, the term with $\abs{I}=3$ in the first line has only one pole and corresponds to the interior (co-dimension $0$ boundary) of $\mathcal{P}_{3}$, depicted in figure~\ref{fig:rec5}. For the three terms with $\abs{I}=2$ in the second line, each factor $\tilde{K}^I(1,\bar{I},n)$ corresponds to a zero-dimensional permutohedron, and correspondingly each term is mapped to a co-dimension one boundary of $\mathcal{P}_3$ without its vertices. For the remaining three terms with $\abs{I}=1$ in the last line, each $\tilde{K}^I(1,\bar{I},n)$ corresponds to a one-dimensional permutohedron, and it is mapped to a co-dimension one boundary together with its two vertices.
\begin{figure}[H]
\centering
\includegraphics[scale=0.8]{figs/rec5.pdf}
\caption{Recursion relation at $n=5$}
\label{fig:rec5}
\end{figure}
\subsection{Factorization properties on spurious poles}
Next, we move on to certain intriguing factorization properties of the BCJ numerator on its spurious poles. Combinatorially, any co-dimension one boundary of the permutohedron $\mathcal{P}_{n{-}2}$ is the product of two lower-dimensional permutohedra, $\mathcal{P}_{I}\times\mathcal{P}_{\bar{I}}$. Remarkably, we find that on any pole $D_I=0$, the residue of the BCJ numerator factorizes into the product of an $(\abs{I}{+}2)$-point numerator and an $(n{-}\abs{I})$-point numerator! Unlike the usual factorization on the physical poles of the amplitude, these factorizations on the spurious poles stem from the combinatorial picture, without any known physical origin. Explicitly,
\begin{equation}\label{eq: Kfac}
\begin{aligned}
\left.\mathrm{Res}\right|_{D_I{=}0}K(1,2,\ldots,n){=}&D_I K( 1,\mathrm{Id}(I),P) \\
&\times\tilde{K}^I(1,\mathrm{Id}(\bar{I}),n),
\end{aligned}
\end{equation}
where $P\equiv \bar{I}n$ denotes an effective scalar. For the definition of $\Delta(I_{k}, I_{k{+}1})$ in $K(1,\mathrm{Id}(I),P)$, the complement of the set $I_k$ is still defined as $\bar{I}_k=\{2,3,\ldots,n{-}1\}/I_k$, while for $\Delta(J_{k}, J_{k{+}1})$ in $\tilde{K}^I(1,\mathrm{Id}(\bar{I}),n)$ the complement of $J_k$ is defined as $\bar{I}/J_k$.
The factor $D_I K(1,\mathrm{Id}(I),P)$ in \eqref{eq: Kfac} means that the overall pole $D_I$ of $K(1,\mathrm{Id}(I),P)$ is excluded. The factorization properties~\eqref{eq: Kfac} can be proved directly by plugging in the definitions on both sides.
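As a minimal illustration (our worked example), take $n=4$ and the pole $D_2$: among the three terms in \eqref{eq: BCJnum4}, only $K_{23,2}$ contains $D_2$, so
\begin{equation*}
\left.\mathrm{Res}\right|_{D_2{=}0}K(1,2,3,4)=\frac{p_{1}\cdot F_{3}\cdot q_{23}\; p_{1}\cdot F_{2}\cdot q_{2}}{D_{23}}=\underbrace{p_{1}\cdot F_{2}\cdot q_{2}}_{D_2\,K(1,2,P)}\times\underbrace{\frac{p_{1}\cdot F_{3}\cdot q_{23}}{D_{23}}}_{\tilde{K}^{2}(1,3,4)},
\end{equation*}
with the effective scalar $P=34$, in agreement with \eqref{eq: Kfac}.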
For instance, at six points, as shown in figure \ref{fig: P4}, there are $14$ co-dimension one boundaries $D_I=0$: eight poles with $\abs{I}=1$ or $3$ corresponding to hexagons, and six poles with $\abs{I}=2$ corresponding to squares. On any of the hexagonal boundaries, {\it i.e.} when $D_I=0$ with $\abs{I}=1$ or $3$, the residue factorizes into $\mathcal{F}_3=13$ terms (times $\mathcal{F}_1=1$ term). Similarly, when $D_I=0$ with $\abs{I}=2$, the residue factorizes differently, {\it e.g.} as $D_{23}K(1,2,3,456)\times\tilde{K}^{23}(1,4,5,6)$ when $D_{23}=0$ (the square is the product of two line segments $\mathcal{P}_{\{23\}}\times\mathcal{P}_{\{45\}}$).
Algebraically, the quasi-shuffle algebra can be promoted to a bialgebra by introducing the coproduct map \cite{Hoffman}, and one can derive the factorization properties from the coproduct. Similar to \cite{Brandhuber:2021bsf, Brandhuber:2022enp}, we can also define the antipode map, which makes the bialgebra a quasi-shuffle Hopf algebra. Acting on the BCJ numerators, the antipode map does nothing but change the overall sign. The details are given in appendix \ref{sec:appAG}.
We expect the factorization properties of BCJ numerators to be the key to showing the cancellation of spurious poles in the amplitude. Such properties also suggest certain positive geometries (rather than just combinatorics) underlying these BCJ numerators, and we leave further investigations to future work.
\section{Heavy-mass effective field theory} \label{sec:heavy1}
In this section, we study YMS amplitudes and their BCJ numerators in the heavy-mass effective field theory (HEFT), which are obtained by taking the heavy-mass limit for a pair of massive scalars with momenta \cite{Brandhuber:2021kpo, Brandhuber:2021bsf,Brandhuber:2022enp}
\begin{equation}
p_1^\mu=mv^\mu,\qquad p_n^\mu=-m v^\mu - k^\mu,
\end{equation}
where $v^2=1$ and we are interested in the limit $m \to \infty$; in other words, we will study the expansion in $1/m$ of the BCJ numerators, which we denote as $K_\mathrm{H}(1,2,\ldots,n)$, as well as that of the $\phi^3$ amplitudes, which combine to give the resulting HEFT amplitude $A^\mathrm{H}(1,2,\ldots,n)$ at leading order in $1/m$. Here $k^\mu$ is of the same order as the gluon momenta, which stay finite at ${\cal O}(m^0)$ as $m \to \infty$.
\subsection{Heavy limit of YMS amplitudes}
We make a particular choice of the reference momenta, $q_I=v$ for all $I$, which dramatically simplifies the formulae for BCJ numerators and gives rise to poles similar to those of the HEFT numerators in~\cite{Brandhuber:2021bsf}. In fact, for $n=4$ such a choice reduces the BCJ numerator to one term, since $v\cdot F_a \cdot v$ vanishes for a single particle $a$ (by the antisymmetry of $F_a$):
\begin{equation}\label{eq: hamp4}
K_\mathrm{H}(1,2,3,4)= \frac{p_{1}\cdot F_{23}\cdot v} {p_{23} \cdot v} = -\frac{2m^2}{k^2}\ v \cdot F_{23}\cdot v,
\end{equation}
where in the second equality we have used $v\cdot k=-k^2/(2m)$ implied by the on-shell condition $p_n^2=m^2$.
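For completeness, here is the one-line derivation (ours) of the identities used in \eqref{eq: hamp4}: momentum conservation gives $p_{23}=-p_1-p_4=k$, hence $p_{23}\cdot v=v\cdot k$ and $p_1\cdot F_{23}\cdot v=m\,v\cdot F_{23}\cdot v$, while the on-shell condition reads
\begin{equation*}
m^2=p_4^2=(m v+k)^2=m^2+2m\,v\cdot k+k^2\;\Longrightarrow\; v\cdot k=-\frac{k^2}{2m}.
\end{equation*}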
Notice that $K_{\mathrm{H}}(1,3,2,4)=K_{\mathrm{H}}(1,2,3,4)$, thus the amplitude $A^\mathrm{H}(1,2,3,4)$ becomes
\begin{align} \label{eq:Hamp4}
&(\frac{1}{s_{12}}+ \frac{1}{s_{23}}) K_\mathrm{H}(1,2,3,4) -\frac{1}{s_{23}}K_\mathrm{H}(1,3,2,4) \\\nonumber
=&-\frac{m}{k^2} \frac{v \cdot F_{23}\cdot v}{ v\cdot p_2}.
\end{align}
Physically, the final HEFT amplitude starts at order $\mathcal{O}(m)$ \cite{Brandhuber:2021kpo}. In the above example, the numerators are at $\mathcal{O}(m^2)$, and the sum of the leading contributions of the $\phi^3$ amplitudes at $\mathcal{O}(m^0)$, namely $1/s_{23}$ times the corresponding numerators, vanishes. Therefore, the contribution of the $\phi^3$ amplitudes at the next order, {\it i.e.} $1/s_{12}$ from $A^{\phi^3}(1,2,3,4)$, times the numerator produces the HEFT amplitude as the first non-vanishing order. The same happens for $n=5$.
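In our reading, the heavy-line Mandelstam variable is understood with the mass subtracted, $s_{12}=(p_1{+}p_2)^2-m^2=2m\,v\cdot p_2$, which is what \eqref{eq:Hamp4} requires:
\begin{equation*}
\frac{K_\mathrm{H}(1,2,3,4)}{s_{12}}=\frac{1}{2m\,v\cdot p_2}\left(-\frac{2m^2}{k^2}\,v\cdot F_{23}\cdot v\right)=-\frac{m}{k^2}\,\frac{v\cdot F_{23}\cdot v}{v\cdot p_2},
\end{equation*}
which is indeed of order $\mathcal{O}(m)$.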
However, for higher $n$, the numerator contains additional terms with higher powers of $m$.
To obtain the leading-order contribution to the final HEFT amplitude, we expand the numerators and the $\phi^3$ amplitudes in $m^{-1}$. Noting that the overall pole $D_{23\ldots n-1}=v\cdot k$ of the BCJ numerators is proportional to $m^{-1}$, we first collect the numerator according to its {\it superficial order} in $m$, {\it i.e.} terms with $(i-1)$ $p_1$'s in the numerator,
\begin{equation} \label{eq: Kexpand}
K_{\mathrm{H}}(1,2,\ldots,n)= \sum_{i=2}^{\lfloor n/2 \rfloor } K_\mathrm{H}^{(i)}(1,2,\ldots,n),
\end{equation}
where the upper bound of the summation is $\lfloor n/2 \rfloor$, since $p_1 \cdot F_{a}\cdot v=0$
implies that the numerator should contain as many factors $p_1 \cdot F_{ab} \cdot v$ as possible in order to attain the highest power of $p_1$. In the above expansion, $K_\mathrm{H}^{(i)}\equiv K_\mathrm{H}^{(i)}(1,2,\ldots,n)$ refers to the terms with superficial order $\mathcal{O}(m^i)$. For example, at six points we have the following terms for $K_\mathrm{H}^{(2)}$ and $K_\mathrm{H}^{(3)}$, respectively:
\begin{equation*}
\frac{p_1\cdot F_{23} \cdot v\ p_{23}\cdot F_5 \cdot v\ p_{23}\cdot F_4 \cdot v } {v\cdot k\ v \cdot p_{45}\ v\cdot p_4 }\,, \qquad \frac{p_1 \cdot F_{25}\cdot v\ p_1\cdot F_{34}\cdot v} {v\cdot k\ v\cdot p_{34} }\,.
\end{equation*}
In fact, as explained in appendix \ref{sec:heavy}, the actual order of $K_\mathrm{H}^{(i)}$ is $\mathcal{O}(m^{2})$ for $i=2$ and $\mathcal{O}(m^{i-1})$ otherwise.
In the HEFT amplitude, we sum over all cubic graphs relevant at leading order; for each graph with its propagator structure, the numerator is given by the corresponding commutator of $K_\mathrm{H}^{(i)}$~\cite{Bern:2011ia}. Nicely, we observe that certain commutators of $K_\mathrm{H}^{(i)}$ actually vanish, and the end result is that only $K_\mathrm{H}^{(2)}$ contributes to the amplitude at leading order! We have checked such vanishing results up to $n=10$, but we do not have an all-$n$ proof at the moment.
In fact, such vanishing results are stronger than what we need here, {\it i.e.} that only $K^{(2)}_{\rm H}$ contributes to gauge-theory amplitudes at leading order. We have checked up to $n=10$ that the stronger vanishing results also ensure that only $K^{(2)}_{\rm H}$ contributes to the gravity amplitude, which is at order $\mathcal{O}(m^2)$, as obtained by double copy in HEFT. We leave more details to appendix \ref{sec:heavy}, together with a proof of the simplest case. As a result of this conjecture, the amplitude is given by
\begin{equation} \label{eq: Hamp}
\begin{aligned}
&A^\mathrm{H}(1,2,\ldots,n)=\sum_{\Theta^1 } \frac{K_\mathrm{H}^{(2)}(1,\Theta^1,n)}{d_{\Theta^1}}, \\
\end{aligned}
\end{equation}
where the summation is over nested commutators of depth $n{-}4$ (``co-depth'' $1$) of the ordered set $(2,3,\ldots,n-1)$. For instance, at five points we sum over $\Theta^1=([2,3],4),(2,[3,4])$; $d_{\Theta^1}$ denotes the propagator denominator corresponding to the cubic tree associated with $\Theta^1$ (two sub-trees attached to the scalar line $(1 n)$):
\begin{equation*}
\begin{aligned}\includegraphics[width=0.23\linewidth]{figs/general1.pdf}\end{aligned} \leftrightarrow d_{\Theta^{1}}, {\it e.g.} \begin{aligned}\includegraphics[width=0.23\linewidth]{figs/theta511.pdf}\end{aligned} \leftrightarrow d_{[2,3],4}=s_{123}s_{23}.
\end{equation*}
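The nested commutators act on the master numerator by antisymmetrized relabelings, {\it e.g.} $K_\mathrm{H}^{(2)}(1,[2,3],4,5)=K_\mathrm{H}^{(2)}(1,2,3,4,5)-K_\mathrm{H}^{(2)}(1,3,2,4,5)$. A small Python sketch (ours, purely illustrative) that expands a bracket word $\Theta^1$ into signed gluon orderings:
\begin{verbatim}
def flat(e):
    # signed flat words of an element: an int is itself;
    # a commutator [X, Y] expands as XY - YX
    if isinstance(e, int):
        return [(1, (e,))]
    X, Y = e
    return [(sx * sy * sgn, w)
            for sx, wx in flat(X) for sy, wy in flat(Y)
            for sgn, w in ((1, wx + wy), (-1, wy + wx))]

def expand(word):
    # expand a tuple of elements into signed orderings by concatenation
    terms = [(1, ())]
    for e in word:
        terms = [(s * se, w + we) for s, w in terms for se, we in flat(e)]
    return terms

print(expand(([2, 3], 4)))  # [(1, (2, 3, 4)), (-1, (3, 2, 4))]
\end{verbatim}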
Moreover, it is easy to show (see appendix \ref{sec:heavy} for details) that the effective BCJ numerator $K_\mathrm{H}^{(2)}(1,2,\ldots,n)$ contains $\mathcal{F}_{n{-}3}$ terms, and its pole structure corresponds to the permutohedron $\mathcal{P}_{\{34\ldots n{-}1\}}$, which means that
\begin{equation}\label{eq: BDexph}
K_\mathrm{H}^{(2)}(1,2,\ldots,n)=\sum_{d=0}^{n-4}\sum_{\Gamma_d\in\partial^d\mathcal{P}_{n{-}3}}K^{(2)}_{\mathrm{H},\Gamma_d}(1,2,\ldots,n).
\end{equation}
For the boundary $\Gamma_d{=}\{I_0,I_1,\ldots,I_d\}{\in}\partial^d\mathcal{P}_{\{34\ldots n{-}1\}}$ where $I_d{\subset}I_{d-1}{\subset}\ldots{\subset}I_0{=}\{3,4,\ldots,n{-}1\}$ and $I_d{\neq}\emptyset$, the contribution is
\begin{align}
K^{(2)}_{\mathrm{H},\Gamma_d}&= \frac{m\, v\cdot F_{\tau_{0}}\cdot v}{p_{23\ldots n{-}1}\cdot v} \prod_{k=1}^{d}\frac{p_{\Delta(I_{k},I_{k{+}1})}\cdot F_{\tau_k}\cdot v}{v\cdot p_{I_k}}\\\nonumber
&=-\frac{2m^2\, v\cdot F_{\tau_{0}}\cdot v}{k^2} \prod_{k=1}^{d}\frac{p_{\Delta(I_{k},I_{k{+}1})}\cdot F_{\tau_k}\cdot v}{v\cdot p_{I_k}},
\end{align}
where in the calculation of $\Delta(I_{k},I_{k{+}1})$, the complement set of $I_k$ is still taken to be $\{2,3,\ldots,n{-}1\}/I_k$. For $n=4$, there is no commutator in $\Theta^1$ and the result is \eqref{eq:Hamp4}.
For $n=5$, the amplitude becomes
\begin{equation}
\begin{aligned}
A^\mathrm{H}(1,2,3,4,5){=} &\frac{1}{s_{12}} \frac{K_\mathrm{H}^{(2)}(1,2,[3,4],5)}{s_{34}} \\
& {+}\frac{1}{s_{123}} \frac{K_\mathrm{H}^{(2)}(1,[2,3],4,5)}{s_{23}},
\end{aligned}
\end{equation}
where $K_\mathrm{H}^{(2)}(1,2,3,4,5)$ is given by{\small
\begin{equation*}
-\frac{2m^2}{k^2}(v \cdot F_{234} \cdot v+\frac{ v\cdot F_{24} \cdot v\ p_{2}\cdot F_3\cdot v }{v\cdot p_3 }
+ \frac{ v\cdot F_{23} \cdot v \ p_{23}\cdot F_4\cdot v }{v\cdot p_4}).
\end{equation*}}
Let us give a final example, the $n=6$ amplitude:
\begin{equation}
\begin{aligned}
& \frac{1}{s_{12}} \left(\frac{K_\mathrm{H}^{(2)}(1,2,[[3,4],5],6)}{s_{34}s_{345}}{+} \frac{K_\mathrm{H}^{(2)}(1,2,[3,[4,5]],6)}{s_{45}s_{345}} \right) \\
{+}& \frac{1}{s_{123}} \frac{K_\mathrm{H}^{(2)}(1,[2,3],[4,5],6)}{s_{23}s_{45}} \\
{+}&\frac{1}{s_{1234}} \left(\frac{K_\mathrm{H}^{(2)}(1,[[2,3],4],5,6)}{s_{23}s_{234}}{+} \frac{K_\mathrm{H}^{(2)}(1,[2,[3,4]],5,6)}{s_{34}s_{234}} \right).
\end{aligned}
\end{equation}
It is interesting to notice that the numerators we present here only differ from those in \cite{Brandhuber:2021bsf}, denoted by $N(1,2,\ldots,n)$, by an overall prefactor:
\begin{equation}\label{eq: HBCJnumerator}
K_\mathrm{H}^{(2)}(1,2,\ldots,n)=(-1)^{n}(n{-}2)\frac{2m}{k^2} v\cdot p_2 N(1,2,\ldots,n).
\end{equation}
It is highly nontrivial, however, that these two sets of effective BCJ numerators give the same HEFT amplitude. In~\cite{Brandhuber:2021bsf}, the expression involves a sum over cubic graphs corresponding to nested commutators of depth $n-3$ of the ordered set $(2,3,\ldots,n{-}1)$, so the propagator denominator contains an overall factor $s_{23\ldots n-1}$, which in our case is replaced by a different $s_{1 \sigma}$ for each term. In addition, the numerator of~\cite{Brandhuber:2021bsf} for each cubic graph is given by a nested commutator of $N(1,2,\ldots,n)$, so the number of terms in it is twice ours. Nevertheless, we have analytically checked up to $n=10$ that the amplitude \eqref{eq: Hamp} agrees with \cite{Brandhuber:2021bsf}. Moreover, we have checked that, although they look very different, the HEFT gravity amplitudes obtained via double copy also agree, and we expect both agreements to hold for all $n$.
\subsection{Decoupling into pure YM}
Given the explicit result of the $n$-point heavy-mass BCJ numerators, the $(n-1)$-point pure YM BCJ numerators, as well as the amplitudes, can be easily obtained via the decoupling limit
$mv\to \varepsilon_n$, $p_{23\ldots n-1}^2\to 0$,
which yields the BCJ numerator $K^{\prime \text{YM}}(2,3,\ldots,n)$ \cite{Brandhuber:2021kpo,Brandhuber:2021bsf}. Under these kinematics the overall factor $k^2$ vanishes, and we simply strip off the associated $1/k^2$ prefactor in the decoupling limit. For instance, the three-point BCJ numerator is given by $K^{\prime \text{YM}}(2,3,4)=-2\varepsilon_4 \cdot F_{23}\cdot \varepsilon_4$. Therefore, the three-point amplitude is
\begin{align}
A^{\text{YM}}(2,3,4)&= -\frac{\varepsilon_4 \cdot F_{23}\cdot \varepsilon_4}{ \varepsilon_4 \cdot p_2}\\\nonumber
&=\varepsilon _4\cdot \varepsilon _2 p_2\cdot \varepsilon _3{-}\varepsilon _2\cdot \varepsilon _3 p_2\cdot \varepsilon _4{-}\varepsilon _4\cdot \varepsilon _3 p_3\cdot \varepsilon _2.
\end{align}
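The second line follows from expanding $F_{23}=F_2\cdot F_3$ and using the three-point kinematics $p_2\cdot p_3=0$ and $p_3\cdot\varepsilon_4=-p_2\cdot\varepsilon_4$ (from $p_2{+}p_3{+}p_4=0$ and $p_4\cdot\varepsilon_4=0$), which give
\begin{equation*}
\varepsilon_4\cdot F_{23}\cdot\varepsilon_4=\varepsilon_4\cdot p_2\left(\varepsilon_2\cdot p_3\;\varepsilon_3\cdot\varepsilon_4+\varepsilon_2\cdot\varepsilon_3\;p_2\cdot\varepsilon_4-\varepsilon_2\cdot\varepsilon_4\;p_2\cdot\varepsilon_3\right),
\end{equation*}
so dividing by $-\varepsilon_4\cdot p_2$ reproduces the second line.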
For the 4-point YM amplitude, the numerator $K^{\prime \text{YM}}(2,3,4,5)$ reads
\begin{equation*}
{-}2\left(\varepsilon_{5}{\cdot}F_{234}{\cdot} \varepsilon_5{+}\frac{\varepsilon_5{\cdot} F_{24}{\cdot} \varepsilon_5 p_{2}{\cdot} F_{3}{\cdot} \varepsilon_5}{p_3{\cdot}\varepsilon_5 }{+}\frac{\varepsilon_5{\cdot} F_{23}{\cdot} \varepsilon_5 p_{23}{\cdot} F_4{\cdot} \varepsilon_5 }{ p_4{\cdot}\varepsilon_5}\right).
\end{equation*}
Note that $K^{\prime \text{YM}}(2,3,\ldots,n)$ also manifests the gauge invariance of particles $2,3,\ldots,n-1$. Moreover, it is related to the BCJ numerator given in section~\ref{sec:YM} via
\begin{equation*}
K^{\prime \text{YM}}(2,3,\ldots,n)=2\left. \varepsilon_n \cdot p_2 K^{\text{YM}}(2,3,\ldots,n)\right|_{q_I\to \varepsilon_n}.
\end{equation*}
These numerators, accompanied by different $\phi^3$ amplitudes, produce the same YM amplitude.
\section{Conclusions and outlook}
In this note, we established a correspondence between BCJ numerators from the covariant color-kinematics duality and combinatorial permutohedra, which are closely related to the quasi-shuffle Hopf algebra. This applies to all YMS amplitudes, but the most interesting case is the one with two scalars, whose numerators share the same combinatorial structure as the pure YM ones: each term is mapped to a boundary of $\mathcal{P}_{n-2}$, and the contribution from each boundary is almost identical in the two cases, except that we need to modify one factor to take into account the remaining two gluons. We also found nice recursion relations and factorization properties implied by this picture. Finally, based on highly nontrivial cancellations, which are needed for both YMS and gravity amplitudes (via double copy) in HEFT, we conjectured a compact formula for their effective numerators; these are closely related to the permutohedra $\mathcal{P}_{n-3}$ and, while producing the same amplitude, differ by an overall factor from the numerators in~\cite{Brandhuber:2021bsf, Brandhuber:2022enp}.
There are numerous open questions for further investigation. First, as we will present in \cite{toapp}, it is interesting to see how the lower-dimensional permutohedra for general YMS numerators combine into the $\mathcal{P}_{n-2}$ corresponding to the pure YM ones; we also find interesting combinatorial structures underlying BCJ numerators of amplitudes in the NLSM, {\it etc.} Moreover, the somewhat miraculous cancellations that simplify these numerators in HEFT remain to be proven, which would also be important for establishing the correct double copy in HEFT. Since the final amplitudes are independent of the reference momenta, all the spurious poles must cancel, which still calls for a direct understanding (without relying on the CCK duality); such an understanding could connect this combinatorial picture (especially the factorizations) to the uniqueness theorem for YM amplitudes~\cite{Arkani-Hamed:2016rak, Rodina:2016jyz} and for YMS ones via the universal expansion~\cite{Dong:2021qai}. Last but not least, it is tempting to ask: could we combine the permutohedra for BCJ numerators with the associahedra for bi-adjoint $\phi^3$ amplitudes, and obtain a unified geometric understanding of gluon and graviton scattering?
\begin{acknowledgments}
We thank Linghui Hou, Guanda Lin, and Tianheng Wang for discussions and collaborations on related projects. This research is supported in part by the Key Research Program of CAS, Grant No.~XDPB15, and by the National Natural Science Foundation of China under Grant Nos.~11935013, 11947301, 12047502, and 12047503.
\end{acknowledgments}
\newpage
\widetext
Equivalently, in \eqref{eq: part} the partition $\text{part}^{(1)}$ of $\{2,3\}$ has $(\{\tau_0=\{2, 3\})$ and $\text{part}^{(2)}$ has $(\tau_0=\{2\},\tau_1=\{3\})$ and $(\tau_0=\{3\},\tau_1=\{2\})$: they are nothing but the interior and the two vertices, according to \eqref{eq: lBDt}.
Thus the BCJ numerator $K(1,2,3,4)$ has three terms, $K_{23}$, $K_{23,2}$ and $K_{23,3}$, which read
\begin{align} \label{eq: BCJnum4}
\frac{p_{1}{\cdot}F_{23}{\cdot} q_{23}}{D_{23}}{+}\frac{p_{1}{\cdot} F_{3}{\cdot} q_{23} p_{1}{\cdot} F_{2}{\cdot} q_{2}}{D_{23}D_{2}}{+}\frac{p_{1}{\cdot} F_{2}{\cdot} q_{23} p_{12}{\cdot} F_{3}{\cdot} q_{3} }{D_{23}D_{3}
\end{align}
Notice that the last term in the above equation is from the boundary $\{23,3\}$, so the second factor in the numerator is $p_{1\left(\overline{\{3\}}\right)_{<3}}\cdot F_{3}\cdot q_{3}=p_{12}\cdot F_{3}\cdot q_{3}$. Meanwhile \eqref{eq: BCJnum4} shows that the four-point numerator $K(1,2,3,4)$ has an overall pole $D_{I_0}=D_{23}$. This can be easily seen from \eqref{eq: lBD} since the first set of any co-dimension $d$ boundary $\Gamma_d$ is always labeled by $I_0=\{2,3,\ldots,n{-}1\}$.
Notice that \eqref{eq: KBD} means each term in the BCJ numerator contains spurious poles, and the co-dimension $d$ contribution will have $d{+}1$ spurious poles where $D_{I_0}$ is an overall pole for every boundary.
Except for the overall one, the simple poles can be written as $D_I$ where $I$ is a nonempty proper subset of $I_0=\{2,3,\ldots,n{-}1\}$ and two simple poles $D_I$ and $D_J$ are compatible if and only if $I\subset J$ or $J\subset I$. For example, at five-point, except for the overall $D_{234}$, the simple poles are $D_{2},D_{3},D_{4},D_{23},D_{24},D_{34}$ which correspond to the six co-dimension one boundaries of $\mathcal{P}_3$, and the compatible double poles are
\begin{equation}
\{D_{23}D_{2},D_{24}D_{2},D_{23}D_{3},D_{34}D_{3},D_{24}D_{4},D_{34}D_{4}\},
\end{equation}
which correspond to six vertices of the hexagon $\mathcal{P}_3$. We show the boundary contribution formally in figure \ref{fig:P3}.
These $13$ terms form a two-dimensional polytope $\mathcal{P}_{3}$. These boundaries can also be realized in quasi-shuffle product $\hat{K}(2,3,4)$, which will be discussed in appendix \ref{sec: N5}. To be precise, we give some explicit examples of different co-dimension here
\begin{equation}
\begin{aligned}
&K_{234}{=}\frac{p_1{\cdot}F_{234}\cdot q_{234}}{D_{234}},\\
&K_{234,23}{=}\frac{p_1{\cdot}F_4{\cdot}q_{234} p_1{\cdot} F_{23}{\cdot}q_{23}}{D_{234}D_{23}},\;\,K_{234,2}{=}\frac{p_1{\cdot} F_{34}{\cdot} q_{234} p_1{\cdot}F_2{\cdot}q_2}{D_{234}D_2}\\
&K_{234,23,2}{=}\frac{ p_1{\cdot}F_4{\cdot}q_{234} p_1{\cdot} F_3{\cdot} q_{23} p_1{\cdot} F_2{\cdot} q_2}{D_{234}D_{23}D_2 }.
\end{aligned}
\end{equation}
The complete result for the BCJ numerator $K(1,2,3,4,5)$ is shown in the appendix \ref{sec: N5}.
Moreover, we emphasize that all the spurious poles are canceled in the final amplitude, and the amplitude does not depend on the reference momenta. The proof will be put into the following paper \cite{toapp}.
For $n=6$, $\mathcal{P}_4$ is a three-dimensional truncated octahedron shown in figure \ref{fig: P4}. As we have counted, it contains $14$ co-dimension one boundaries (six squares and eight hexagons), $36$ edges and $4!=24$ vertices, thus $75$ boundaries in total. Some terms with co-dimension $d=0,1,2,3$ are
\begin{align}
&K_{2345}=\frac{p_1{\cdot} F_{2345}{\cdot} q_{2345}}{D_{2345}}\\\nonumber
&K_{2345,234}=\frac{p_1{\cdot} F_{5}{\cdot} q_{2345}p_1{\cdot} F_{234}{\cdot} q_{234}}{D_{2345}D_{234}}\\\nonumber
&K_{2345,234,23}=\frac{p_1{\cdot} F_{5}{\cdot} q_{2345}p_1{\cdot} F_{4}{\cdot} q_{234}p_1{\cdot} F_{23}{\cdot} q_{23}}{D_{2345}D_{234}D_{23}}\\\nonumber
&K_{2345,234,23,2}=\frac{p_1{\cdot} F_{5}{\cdot} q_{2345}p_1{\cdot} F_{4}{\cdot} q_{234}p_1{\cdot} F_{3}{\cdot} q_{23}p_1{\cdot} F_2{\cdot} q_2}{D_{2345}D_{234}D_{23}D_2}
\end{align}
\begin{figure}[H]
\centering
\includegraphics[scale=0.4]{figs/P4label.pdf}
\caption{The permutohedron $\mathcal{P}_4$ for $K(1,\ldots, 6)$}
\label{fig: P4}
\end{figure}
\section{Numerators for general YMS and pure YM amplitudes}
\label{sec:YM}
More generally, the CCK duality has provided closed-formulae for BCJ numerators of $n$-point YMS amplitude with $r\geq 2$ scalars~\cite{Cheung:2021zvb}. It turns out that any such numerator corresponds to a permutohedron ${\cal P}_{n{-}r}$ (with dimension $n{-}r{-}1$): everything we have discussed above for $r=2$ case still applies if we replace $\{2, 3,\ldots, n{-}1\}$ by the set of $n{-}r$ gluons. For example, in the other extreme with $r=n$ we can formally define ${\cal P}_0:={\cal P}_{\emptyset}$ and the $n$-scalar numerator (of bi-adjoint $\phi^3$ amplitude) is $1$ or $0$. For $r=n{-}1$, we have the zero-dimensional ${\cal P}_1$ and the numerator with a single gluon $i$ reads
\begin{equation}
K^{1-{\rm gluon}}(1, \beta, n)=\frac{p_{1{\rightarrow }i} \cdot F_i \cdot q_i}{D_i}
\end{equation} where $1 {\rightarrow} i$ denotes all the scalars preceding $i$ in $(1\beta n)$. We shall not repeat this for general cases but leave detailed discussions to a separate paper~\cite{toapp}.
Moreover, since the pure YM amplitude can be expanded as a linear combination of these YMS ones~\cite{Lam:2016tlk, Fu:2017uzt, Du:2017kpo, Cheung:2017ems, Dong:2021qai}, we obtain its BCJ numerators for free; the resulting numerator naively contains $2\mathcal{F}_{n-2}$ terms as derived in \cite{Cheung:2021zvb}. However, we can still organize the terms according to pole structures and immediately combine them in pairs as $\mathcal{F}_{n-2}$ terms: the resulting numerator has the same form as the $2{-}$scalar case and corresponds to boundaries of permutohedron $\mathcal{P}_{n-2}$. By expanding $A^{\text{YM}}(1,2,\ldots,n)$ in exactly the same way as \eqref{eq: BCJnum}
each master BCJ numerator, {\it e.g.} $K^{\text{YM}}(1,2,\ldots,n)$ is given by a sum over boundaries of $\mathcal{P}_{n-2}$ as in \eqref{eq: KBDexp}:
\begin{align}
K^{\text{YM}}(1,2,\ldots,n)&=\sum_{d=0}^{n{-}3}\sum_{\Gamma_d\in\partial^d\mathcal{P}_{n{-}2}}K^{\text{YM}}_{\Gamma_d}(1,2,\ldots,n). \nonumber
\end{align}
where the contribution from each boundary
is identical to \eqref{eq: KBD} except for the $k=0$ factor which becomes
\begin{equation}\label{eq: N0YM}
\frac{\varepsilon_n\cdot F_{1\tau_0}\cdot q_{I_{0}}+\varepsilon_1\cdot F_{\tau_0}\cdot\left(\varepsilon_n p_{1n}\cdot q_{I_{0}}-q_{I_{0}} p_{1}\cdot\varepsilon_n\right)}{D_{I_0}}
\end{equation}
Of course, similar to the YMS case, all spurious poles cancel in the final amplitude, which does not depend on $q_I$. Therefore we are free to choose them to simplify the expression \eqref{eq: N0YM}. One such choice is $q_{I_{0}}=\varepsilon_n$, and the $k=0$ factor \eqref{eq: N0YM} takes a simpler form
\begin{equation}\label{eq: KYMSim}
\frac{\varepsilon_n{\cdot} F_{1\tau_0}{\cdot} \varepsilon_n}{\varepsilon_n{\cdot} p_{23\ldots n{-}1}}=-\frac{\varepsilon_n{\cdot} F_{1\tau_0}{\cdot} \varepsilon_n}{\varepsilon_n{\cdot} p_{1}}.
\end{equation}
It is easy to see that the BCJ numerators become manifestly gauge invariant in particles $1,2,\ldots,n{-}1$.
For example, the BCJ numerator $K^{\text{YM}}(1,2,3,4)$ reads
\begin{equation*}
{-}\frac{\varepsilon_{4}{\cdot}F_{123}{\cdot} \varepsilon_4}{p_1{\cdot}\varepsilon_4}{-}\frac{\varepsilon_4{\cdot} F_{13}{\cdot} \varepsilon_4 p_{1}{\cdot} F_{2}{\cdot} q_{2}}{p_1{\cdot}\varepsilon_4 D_{2}}{-}\frac{\varepsilon_4{\cdot} F_{12}{\cdot} \varepsilon_4 p_{12}{\cdot} F_{3}{\cdot} q_{3} }{p_1{\cdot}\varepsilon_4 D_{3}}.
\end{equation*}
Furthermore, similar to the discussion in section \ref{sec:def}, BCJ numerators of YM amplitudes can also be interpreted in terms of quasi-shuffle products, and the only change is that in the linear map \eqref{eq:linmap} the $k=0$ factor is modified to \eqref{eq: N0YM}.
Before ending the section, we mention the obvious double-copy from YM to GR
\begin{equation}
M^{\rm GR}_n=\sum_{\alpha, \beta} K^{\rm YM}(1, \alpha, n) m(1,\alpha ,n|1, \beta ,n) K^{\rm YM}(1, \beta, n)
\end{equation}
where we sum over a pair of permutations $\alpha,\beta$ of $\{2,3,\ldots, n{-}1\}$, with $m$ denoting bi-adjoint $\phi^3$ amplitudes; if we replace YM by YMS with $1, n$ being scalars, it gives the amplitude with $n{-}2$ gravitons and two scalars.
\section{Recursions and factorizations}
In this section, we propose recursion relations and factorization properties (on spurious poles $D_I$) for the BCJ numerators, which are implied by the combinatorial and algebraic structure. The argument can be equally applied to both two-scalar YMS and pure YM numerators.
\subsection{Recursion relations}
First, in quasi-shuffle product~\eqref{eq: part}, one can collect the terms with the same $\tau_d$ and then apply the linear map \eqref{eq:linmap} to obtain the following recursion relation,
\begin{equation}\label{eq: Krec}
K(1,2,\ldots,n){=
\sum_{I\subset\{2,\ldots,n{-}1\}}\frac{p_{1\Delta(I,\emptyset
}{\cdot} F_I {\cdot} q_I}{D_I}\tilde{K}^I(1,\bar{I},n),
\end{equation}
where the summation is over all the nonempty subsets of $\{2,3,\ldots,n{-}1\}$. The definition of $\tilde{K}^I(1,\bar{I},n)$ is slightly different from \eqref{eq: KBD} in the denominator: it is given by the sum over boundaries of the permutohedron $\mathcal{P}_{\bar{I}}$ with vertices labeled by all permutations of set $\bar{I}$, and for each boundary $\Gamma_d=\{J_0=\bar{I},J_1,\ldots,J_{d}\}$ where $J_d\subset J_{d{-}1}\ldots\subset J_0$, we have a contribution
\begin{equation}\label{eq: tKBD}
\tilde{K}^I_{\Gamma_d}=\prod_{k=0}^{d}\frac{p_{1\Delta(J_{k},J_{k+1})
}\cdot F_{\tau_k}\cdot q_{IJ_{k}}}{D_{IJ_{k}}},
\end{equation}
where $D_{IJ_k}=p_{IJ_k}\cdot q_{IJ_k}$, $\tau_k= \mathrm{Id}(J_k/J_{k{+}1})$ with $J_{d{+}1}\equiv \emptyset$ and the complement of the set $J_k$ appears in $\Delta(J_{k},J_{k+1})$ is defined as $\bar{I}/J_k$. For $\abs{I}{=}n{-}2$ ($\bar{I}=\emptyset$), we define $\tilde{K}^I(1,n)=1$. Formally, this numerator corresponds to the permutohedron $\mathcal{P}_0$.
For example, the recursion relation of $K(2,3,4)\equiv K(1,2,3,4,5)$ \footnote{Here we have omitted the labels of the scalar particles $1$ and $n$ in the numerators $K$ and $\tilde{K}$.} reads
\begin{small}
\begin{align}\label{eq: rec5}
K&(2,3,4)=\frac{p_1{{\cdot}} F_{234}{\cdot} q_{234}}{D_{234}}\\\nonumber
{+}&\frac{p_1{\cdot} F_{23}{\cdot} q_{23}}{D_{23}}\tilde{K}^{23}(4){+}\frac{p_{1}{\cdot} F_{24}{\cdot} q_{24}}{D_{24}}\tilde{K}^{24}(3){+}\frac{p_{12}{\cdot} F_{34}{\cdot} q_{34}}{D_{34}}\tilde{K}^{34}(2)\\\nonumber
{+}&\frac{p_1{\cdot} F_2{\cdot} q_2}{D_2}\tilde{K}^2(3,4)
{+}\frac{p_{12}{\cdot} F_3{\cdot} q_3}{D_3}\tilde{K}^3(2,4){+}\frac{p_{123}{\cdot} F_4{\cdot} q_4}{D_4}\tilde{K}^4(2,3).
\end{align}
\end{small}
Geometrically, the recursion relation~\eqref{eq: Krec} tells us how the co-dimension one boundaries of permutohedron are glued together. In the above five-point example, the term with $\abs{I}=3$ in the first line has only one pole and corresponds to the interior (co-dimension $0$ boundary) of $\mathcal{P}_{3}$, depicted in figure~\ref{fig:rec5}. For the three terms with $\abs{I}=2$ in the second line, each factor $\tilde{K}^I(1,\bar{I},n)$ corresponds to a zero-dimensional permutohedron; on the other hand, each term is mapped to a co-dimension one boundary of $\mathcal{P}_3$ without vertices. For the remaining three terms with $\abs{I}=1$ in the last line, each $\tilde{K}^I(1,\bar{I},n)$ corresponds to a one-dimensional permutohedron and it is mapped to a co-dimension one boundary with two vertices.
\begin{figure}[H]
\centering
\includegraphics[scale=0.8]{figs/rec5.pdf}
\caption{Recursion relation at $n=5$}
\label{fig:rec5}
\end{figure}
\subsection{Factorization properties on spurious poles}
Next, we move to certain intriguing factorization properties of the BCJ numerator on spurious poles. Combinatorially, any co-dimension one boundary of the permutohedron $\mathcal{P}_{n{-}2}$ is the product of two lower-dimensional permutohedra $\mathcal{P}_{I}\times\mathcal{P}_{\bar{I}}$. Remarkably, we find that on any pole $D_I=0$, the residue of the BCJ numerator factorizes into the product of a $(\abs{I}{+}2)$-point numerator and a $(n{-}\abs{I})$-point numerator! Unlike the usual factorization on the physical poles of the amplitude, these factorizations on the spurious poles stem from the combinatorial picture without any known physical origin. Explicitly
\begin{equation}\label{eq: Kfac}
\begin{aligned}
\left.\mathrm{Res}\right|_{D_I{=}0}K(1,2,\ldots,n){=}&D_I K( 1,\mathrm{Id}(I),P) \\
&\times\tilde{K}^I(1,\mathrm{Id}(\bar{I}),n),
\end{aligned}
\end{equation}
where $P\equiv \bar{I}n$ denotes an effective scalar. For the definition of $\Delta(I_{k}, I_{k{+}1})$ in $K( 1,\mathrm{Id}(I),P)$, the complement of the set $I_k$ is still defined as $\bar{I_k}=\{2,3,\ldots,n{-}1\}/I_k$ while for $\Delta(J_{k}, J_{k{+}1})$ in $\tilde{K}^I(1,\mathrm{Id}(\bar{I}),n)$ the complement of $J_k$ is defined as $\bar{I}/J_k$.
The factor $D_I K(1,\mathrm{Id}(I),P)$ in \eqref{eq: Kfac} means that the overall pole $D_I$ of $K(1,\mathrm{Id}(I),P)$ is excluded. The factorization properties~\eqref{eq: Kfac} can be proved directly by plugging in the definitions on both sides.
For instance, at six points as shown in figure \ref{fig: P4}, there are $14$ co-dimension one boundaries $D_I=0$ including eight poles with $\abs{I}=1$ or $3$ corresponding to hexagons and six poles with $\abs{I}=2$ corresponding to squares. On any of the hexagon boundary, {\it i.e.} when $D_I=0$ with $\abs{I}=1$ or $3$, the residue factorizes into $\mathcal{F}_3=13$ terms (times $\mathcal{F}_1=1$ term). Similarly when $D_I=0$ with $\abs{I}=2$, the residue factorizes differently, {\it e.g.} as $D_{23}K(1,2,3,456)\times\tilde{K}^{23}(1,4,5,6)$ when $D_{23}=0$ (the square is the product of two line segments $\mathcal{P}_{\{23\}}\times\mathcal{P}_{\{45\}}$).
Algebraically, the quasi-shuffle algebra can be prompted to a bialgebra by introducing the coproduct map \cite{Hoffman}, and one can show the factorization properties from the coproduct. Similar to \cite{Brandhuber:2021bsf, Brandhuber:2022enp}, we can also define the antipode map to make the bialgebra a quasi-shuffle Hopf algebra. Acting on the BCJ numerators, the antipode map does nothing but changes its overall sign. The detail is given in the appendix \ref{sec:appAG}.
We expect the factorization properties of BCJ numerators to be the key for showing the cancellation of spurious poles in the amplitude. Such properties also suggest certain positive geometries (rather than just combinatorics) underlying these BCJ numerators, and we leave further investigations to future works.
\section{Heavy-mass effective field theory} \label{sec:heavy1}
In this section, we study YMS amplitudes and their BCJ numerators in the heavy-mass effective field theory (HEFT), which are obtained by taking the heavy-mass limit for a pair of massive scalars with momenta \cite{Brandhuber:2021kpo, Brandhuber:2021bsf,Brandhuber:2022enp}
\begin{equation}
p_1^\mu=mv^\mu,\qquad p_n^\mu=-m v^\mu - k^\mu,
\end{equation}
where $v^2=1$ and we are interested in the limit $m \to \infty$; in other words, we will study the the expansion in $1/m$ of the BCJ numerators which we denote as $K_\mathrm{H}(1,2,\ldots,n)$, as well as that of $\phi^3$ amplitudes, which combine to give the resulting HEFT amplitude $A^\mathrm{H}(1,2,\ldots,n)$ at the leading order in $1/m$. Here $k^\mu$ is at the same order as gluon momenta, which stay finite at ${\cal O}(m^0)$ as $m \to \infty$.
\subsection{Heavy limit of YMS amplitudes}
We will make a particular choice of the reference momenta: $q_I=v$ for all $I$, which dramatically simplifies formulae for BCJ numerators and give rise to poles similar to HEFT numerators in~\cite{Brandhuber:2021bsf}. In fact, for $n=4$ such a choice reduces the BCJ numerator to one term, since $v\cdot F_a \cdot v$ vanishes for a single particle $a$
\begin{equation}\label{eq: hamp4}
K_\mathrm{H}(1,2,3,4)= \frac{p_{1}\cdot F_{23}\cdot v} {p_{23} \cdot v} = -\frac{2m^2}{k^2}\ v \cdot F_{23}\cdot v,
\end{equation}
where in the second equality we have used $v\cdot k=-k^2/(2m)$ implied by the on-shell condition $p_n^2=m^2$.
Notice that $K_{\mathrm{H}}(1,3,2,4)=K_{\mathrm{H}}(1,2,3,4)$, thus the amplitude $A^\mathrm{H}(1,2,3,4)$ becomes
\begin{align} \label{eq:Hamp4}
&(\frac{1}{s_{12}}+ \frac{1}{s_{23}}) K_\mathrm{H}(1,2,3,4) -\frac{1}{s_{23}}K_\mathrm{H}(1,3,2,4) \\\nonumber
=&-\frac{m}{k^2} \frac{v \cdot F_{23}\cdot v}{ v\cdot p_2}.
\end{align}
Physically, the final HEFT amplitude has the leading order $\mathcal{O}(m)$ \cite{Brandhuber:2021kpo}. In the above example, we can see that the numerators are at $\mathcal{O}(m^2)$, and the sum of the leading contribution of $\phi^3$ amplitudes at $\mathcal{O}(m^0)$, say $1/s_{23}$ times the corresponding numerators vanishes. Therefore, the sum of the contribution of $\phi^3$ amplitudes at the next order, {\it i.e.} $1/s_{12}$ from $A^{\phi^3}(1,2,3,4)$ times the numerator produces the HEFT amplitude as the first non-vanishing order. This is also the case for $n=5$.
However, for higher $n$, the numerator contains some additional terms with higher power of $m$.
To obtain the leading order contribution of HEFT final amplitude, we expand the numerators and the $\phi^3$ amplitudes in $m^{-1}$. Note the overall pole $D_{23\ldots n-1}=v\cdot k$ for BCJ numerators is proportional to $m^{-1}$, we first collect the numerator according to its {\it superficial order} of $m^{-1}$, {\it i.e.} terms with $(i-1)$ $p_1$'s in the numerator,
\begin{equation} \label{eq: Kexpand}
K_{\mathrm{H}}(1,2,\ldots,n)= \sum_{i=2}^{\lfloor n/2 \rfloor } K_\mathrm{H}^{(i)}(1,2,\ldots,n),
\end{equation}
where the upper bound of the summation is $\lfloor n/2 \rfloor$ since $p_1 \cdot F_{a}\cdot v=0$
implies that the numerator should contain as many $p_1 \cdot F_{ab} \cdot v$ as possible to have the highest power of $p_1$. In the above expansion, $K_\mathrm{H}^{(i)}\equiv K_\mathrm{H}^{(i)}(1,2,\ldots,n)$ refers to terms with the superficial order $\mathcal{O}(m^i)$. For example, at six points we have the following terms for $K_\mathrm{H}^{(2)}$ and $K_\mathrm{H}^{(3)}$ respectively:
\begin{equation*}
\frac{p_1\cdot F_{23} \cdot v\ p_{23}\cdot F_5 \cdot v\ p_{23}\cdot F_4 \cdot v } {v\cdot k\ v \cdot p_{45}\ v\cdot p_4 }, \qquad \frac{p_1 \cdot F_{25}\cdot v\ p_1\cdot F_{34}\cdot v} {v\cdot k\ v\cdot p_{34} }.
\end{equation*}
In fact, as explained in appendix \ref{sec:heavy}, the actual order of $K_\mathrm{H}^{(i)}$ is $\mathcal{O}(m^{2})$ for $i=2$ and $\mathcal{O}(m^{i-1})$ otherwise.
In the HEFT amplitude, we sum over all cubic graphs relevant at leading order, and for each graph with its propagator structure, its numerator is given by the corresponding commutator of $K_\mathrm{H}^{(i)}$~\cite{Bern:2011ia}. Remarkably, we observe that certain commutators of $K_\mathrm{H}^{(i)}$ actually vanish, and the end result is that only $K_\mathrm{H}^{(2)}$ contributes to the amplitude at the leading order! We have checked such vanishing results up to $n=10$, but we do not have an all-$n$ proof at the moment.
In fact, such vanishing results are stronger than what we need here, {\it i.e.} that only $K^{(2)}_{\rm H}$ contributes to gauge-theory amplitudes at leading order. We have checked up to $n=10$ that the stronger vanishing results actually ensure that only $K^{(2)}_{\rm H}$ contributes to the gravity amplitude, which is at order $\mathcal{O}(m^2)$, as obtained by double copy in HEFT. We give more details in appendix \ref{sec:heavy}, together with a proof of the simplest case. As a result of this conjecture, the amplitude is given by
\begin{equation} \label{eq: Hamp}
\begin{aligned}
&A^\mathrm{H}(1,2,\ldots,n)=\sum_{\Theta^1 } \frac{K_\mathrm{H}^{(2)}(1,\Theta^1,n)}{d_{\Theta^1}}, \\
\end{aligned}
\end{equation}
where the summation is over nested commutators of depth $n{-}4$ (``co-depth'' $1$) of the ordered set $(2,3,\ldots,n-1)$. For instance, at 5-point we sum over $\Theta^1=([2,3],4),(2,[3,4])$; $d_{\Theta^1}$ denotes the propagator denominator corresponding to the cubic tree associated with $\Theta^1$ (two sub-trees on the scalar line $(1 n)$):
\begin{equation*}
\begin{aligned}\includegraphics[width=0.23\linewidth]{figs/general1.pdf}\end{aligned} \leftrightarrow d_{\Theta^{1}}, {\it e.g.} \begin{aligned}\includegraphics[width=0.23\linewidth]{figs/theta511.pdf}\end{aligned} \leftrightarrow d_{[2,3],4}=s_{123}s_{23}.
\end{equation*}
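The enumeration of the structures $\Theta^1$ is purely combinatorial: one splits the ordered set $(2,3,\ldots,n{-}1)$ into two consecutive non-empty blocks and fully nests each block into commutators, producing $n{-}4$ commutators in total. The following Python sketch (our own illustration; all names are ours) generates them:
\begin{verbatim}
# our own sketch: enumerate the co-depth-1 structures Theta^1
def nestings(seq):
    """All full commutator nestings of an ordered tuple, as strings."""
    if len(seq) == 1:
        return [str(seq[0])]
    out = []
    for i in range(1, len(seq)):
        for L in nestings(seq[:i]):
            for R in nestings(seq[i:]):
                out.append('[%s,%s]' % (L, R))
    return out

def theta1(labels):
    """Two consecutive blocks, each fully nested: n-4 commutators."""
    return ['(%s,%s)' % (L, R)
            for i in range(1, len(labels))
            for L in nestings(labels[:i])
            for R in nestings(labels[i:])]

print(theta1((2, 3, 4)))          # ['(2,[3,4])', '([2,3],4)'] at 5 points
print(len(theta1((2, 3, 4, 5))))  # 5 structures at 6 points
\end{verbatim}
For the ordered set $(2,3,4,5)$ this reproduces the five structures appearing in the six-point example below.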
Moreover, it is easy to show (see appendix \ref{sec:heavy} for details) that the effective BCJ numerator $K_\mathrm{H}^{(2)}(1,2,\ldots,n)$ contains $\mathcal{F}_{n{-}3}$ terms, and its pole structure corresponds to the permutohedron $\mathcal{P}_{\{34\ldots n{-}1\}}$, which means that
\begin{equation}\label{eq: BDexph}
K_\mathrm{H}^{(2)}(1,2,\ldots,n)=\sum_{d=0}^{n-2}\sum_{\Gamma_d\in\partial^d\mathcal{P}_{n{-}3}}K^{ (2)}_{H,\Gamma_d}(1,2,\ldots,n).
\end{equation}
For the boundary $\Gamma_d{=}\{I_0,I_1,\ldots,I_d\}{\in}\partial^d\mathcal{P}_{\{34\ldots n{-}1\}}$ where $I_d{\subset}I_{d-1}{\subset}\ldots{\subset}I_0{=}\{3,4,\ldots,n{-}1\}$ and $I_d{\neq}\emptyset$, the contribution is
\begin{align}
K^{(2)}_{\mathrm{H},\Gamma_d}&= \frac{m v\cdot F_{\tau_{0}}\cdot v}{p_{23\ldots n{-}1}\cdot v} \prod_{k=1}^{d}\frac{p_{\Delta(I_{k},I_{k{+}1})}\cdot F_{\tau_d}\cdot v}{v\cdot p_{I_k}}\\\nonumber
&=-\frac{2m^2 v\cdot F_{\tau_{0}}\cdot v}{k^2} \prod_{k=1}^{d}\frac{p_{\Delta(I_{k},I_{k{+}1})}\cdot F_{\tau_d}\cdot v}{v\cdot p_{I_k}},
\end{align}
where in the calculation of $\Delta(I_{k},I_{k{+}1})$, the complement set of $I_k$ is still taken to be $\{2,3,\ldots,n{-}1\}/I_k$. For $n=4$, there is no commutator in $\Theta^1$ and the result is \eqref{eq:Hamp4}.
For $n=5$, the amplitude becomes
\begin{equation}
\begin{aligned}
A^\mathrm{H}(1,2,3,4,5){=} &\frac{1}{s_{12}} \frac{K_\mathrm{H}^{(2)}(1,2,[3,4],5)}{s_{34}} \\
& {+}\frac{1}{s_{123}} \frac{K_\mathrm{H}^{(2)}(1,[2,3],4,5)}{s_{23}},
\end{aligned}
\end{equation}
where $K_\mathrm{H}^{(2)}(1,2,3,4,5)$ is given by{\small
\begin{equation*}
-\frac{2m^2}{k^2}(v \cdot F_{234} \cdot v+\frac{ v\cdot F_{24} \cdot v\ p_{2}\cdot F_3\cdot v }{v\cdot p_3 }
+ \frac{ v\cdot F_{23} \cdot v \ p_{23}\cdot F_4\cdot v }{v\cdot p_4}).
\end{equation*}}
Let us give a final example for $n=6$ amplitude
\begin{equation}
\begin{aligned}
& \frac{1}{s_{12}} \left(\frac{K_\mathrm{H}^{(2)}(1,2,[[3,4],5],6)}{s_{34}s_{345}}{+} \frac{K_\mathrm{H}^{(2)}(1,2,[3,[4,5]],6)}{s_{45}s_{345}} \right) \\
{+}& \frac{1}{s_{123}} \frac{K_\mathrm{H}^{(2)}(1,[2,3],[4,5],6)}{s_{23}s_{45}} \\
{+}&\frac{1}{s_{1234}} \left(\frac{K_\mathrm{H}^{(2)}(1,[[2,3],4],5,6)}{s_{23}s_{234}}{+} \frac{K_\mathrm{H}^{(2)}(1,[2,[3,4]],5,6)}{s_{34}s_{234}} \right).
\end{aligned}
\end{equation}
It is interesting to notice that the numerators we present here differ from those in \cite{Brandhuber:2021bsf}, denoted by $N(1,2,\ldots,n)$, only by an overall prefactor
\begin{equation}\label{eq: HBCJnumerator}
K_\mathrm{H}^{(2)}(1,2,\ldots,n)=(-1)^{n}(n{-}2)\frac{2m}{k^2} v\cdot p_2 N(1,2,\ldots,n).
\end{equation}
It is highly nontrivial, however, that these two sets of effective BCJ numerators give the same HEFT amplitude. In~\cite{Brandhuber:2021bsf}, the expression involves the sum of cubic graphs corresponding to nested commutators of depth $n-3$ of the ordered set $(2,3,\ldots,n{-}1)$, thus the propagator denominator contains an overall factor $s_{23\ldots n-1}$, which in our case is replaced by different $s_{1 \sigma}$ for different terms. In addition, the numerator of~\cite{Brandhuber:2021bsf} for each cubic graph is given by a nested commutator of $N(1,2,\ldots,n)$, thus the number of terms in it is twice that of ours. Nevertheless, we have analytically checked up to $n=10$ that the amplitude \eqref{eq: Hamp} agrees with \cite{Brandhuber:2021bsf}. Moreover, we have checked that, although they look very different, the HEFT gravity amplitude via double copy also agrees with that in~\cite{Brandhuber:2021bsf}, and we expect both agreements to hold for all $n$.
\subsection{Decoupling into pure YM}
Given the explicit result of the $n$-point heavy-mass BCJ numerators, the $(n-1)$-point pure YM BCJ numerators, as well as the amplitudes, can be easily obtained via the decoupling limit
$mv\to \varepsilon_n$, $p_{23\ldots n-1}^2\to 0$,
which yields the BCJ numerator $K^{\prime \text{YM}}(2,3,\ldots,n)$ \cite{Brandhuber:2021kpo,Brandhuber:2021bsf}. Under these kinematics, the overall factor $k^2$ vanishes, and we ignore it in the decoupling limit. For instance, the three-point BCJ numerator is given by $K^{\prime \text{YM}}(2,3,4)=-2\varepsilon_4 \cdot F_{23}\cdot \varepsilon_4$. Therefore, the three-point amplitude is
\begin{align}
A^{\text{YM}}(2,3,4)&= -\frac{\varepsilon_4 \cdot F_{23}\cdot \varepsilon_4}{ \varepsilon_4 \cdot p_2}\\\nonumber
&=\varepsilon _4\cdot \varepsilon _2 p_2\cdot \varepsilon _3{-}\varepsilon _2\cdot \varepsilon _3 p_2\cdot \varepsilon _4{-}\varepsilon _4\cdot \varepsilon _3 p_3\cdot \varepsilon _2.
\end{align}
For the 4-point YM amplitude, the numerator $K^{\prime \text{YM}}(2,3,4,5)$ reads
\begin{equation*}
{-}2\left(\varepsilon_{5}{\cdot}F_{234}{\cdot} \varepsilon_5{+}\frac{\varepsilon_5{\cdot} F_{24}{\cdot} \varepsilon_5 p_{2}{\cdot} F_{3}{\cdot} \varepsilon_5}{p_3{\cdot}\varepsilon_5 }{+}\frac{\varepsilon_5{\cdot} F_{23}{\cdot} \varepsilon_5 p_{23}{\cdot} F_4{\cdot} \varepsilon_5 }{ p_4{\cdot}\varepsilon_5}\right).
\end{equation*}
Note that $K^{\prime \text{YM}}(2,3,\ldots,n)$ also manifests the gauge invariance of particles $2,3,\ldots,n-1$. Moreover, it is related to the BCJ numerator given in sec.~\ref{sec:YM} via
\begin{equation*}
K^{\prime \text{YM}}(2,3,\ldots,n)=2\left. \varepsilon_n \cdot p_2 K^{\text{YM}}(2,3,\ldots,n)\right|_{q_I\to \varepsilon_n}.
\end{equation*}
These numerators, accompanied by different $\phi^3$ amplitudes, produce the same YM amplitude.
\section{Conclusions and outlook}
In this note, we established a correspondence between BCJ numerators from covariant color-kinematics duality and the combinatorial permutohedra, which are closely related to the quasi-shuffle Hopf algebra. This applies to all YMS amplitudes, but the most interesting case is that with two scalars, whose numerators share the same combinatorial structure as the pure YM ones: each term is mapped to a boundary of $\mathcal{P}_{n-2}$; the contribution from each boundary is almost identical in these two cases, except that we need to modify one factor to take into account the remaining two gluons. We also found nice recursion relations and factorization properties implied by this picture. Finally, based on highly nontrivial cancellations which are needed for both YMS and gravity amplitudes (via double copy) in HEFT, we conjectured a compact formula for their effective numerators; they become closely related to permutohedra $\mathcal{P}_{n-3}$ and, while producing the same amplitude, differ by an overall factor from the numerators in~\cite{Brandhuber:2021bsf, Brandhuber:2022enp}.
There are numerous open questions for further investigations. First, as we will present in \cite{toapp}, it is interesting to see how lower-dimensional permutohedra for general YMS numerators combine into $\mathcal{P}_{n-2}$ which corresponds to the pure YM ones; we also find interesting combinatorial structures underlying BCJ numerators of amplitudes in NLSM {\it etc.}. Moreover, the somewhat miraculous cancellations that simplify these numerators in HEFT still remain to be proven, which would also be important to establish the correct double copy in HEFT. Since the final amplitudes are independent of reference momenta, all the spurious poles must cancel, which still calls for a direct understanding (without relying on the CCK duality); such an understanding could connect this combinatorial picture (especially the factorizations) to the uniqueness theorem for YM amplitude~\cite{Arkani-Hamed:2016rak, Rodina:2016jyz} and YMS ones via the universal expansion~\cite{Dong:2021qai}. Last but not least, it is tempting to ask: could we combine the permutohedra for BCJ numerators with the associahedra for bi-adjoint $\phi^3$ amplitudes, and obtain a unified geometric understanding of gluon and graviton scattering?
\begin{acknowledgments}
We thank Linghui Hou, Guanda Lin, and Tianheng Wang for discussions and collaborations on related projects. This research is supported in part by the Key Research Program of CAS, Grant No.~XDPB15, and the National Natural Science Foundation of China under Grants No.~11935013, 11947301, 12047502, and 12047503.
\end{acknowledgments}
\newpage
\widetext
\section{Introduction}
The study of quantum corrections to solitons in $1+1$ dimensions started in
the 1970's \cite{Dashen:1974cj,Goldstone,Rajaraman,FK},
and since that time considerable
progress has been made (see \cite{Izquierdo:2006ds} for a recent review).
Noncommutative (NC) solitons \cite{Douglas:2001ba,Lechtenfeld:2006iz}
were included
in these studies only recently \cite{Kurkcuoglu:2007rr,Konoplya:2007xt}.
The work \cite{Kurkcuoglu:2007rr} used the small $\theta$ expansion,
while the paper \cite{Konoplya:2007xt} concentrated on moderate and large
values of the NC parameter. Both papers left many questions unanswered,
mostly related to the renormalization and to the possibility of a smooth
extension of the results to the region of large (respectively, small)
noncommutativity. Besides, in $1+1$ dimensions one deals with time-space
noncommutativity which brings an infinite number of time derivatives
into the action, so that the very definition of energy becomes
less obvious.
Another line of research considers quantum corrections to supersymmetric
solitons \cite{Rebhan:2004vu,Shifman:2007ce}. It was found
\cite{Rebhan:1997iv} that naive arguments leading to zero quantum corrections
to the mass of supersymmetric solitons were incorrect, and a new anomaly
(the anomaly in the central charge
\cite{Nastase:1998sy,Graham:1998qq,Shifman:1998zy}) was discovered.
Taking this anomaly into account restores saturation of the BPS bound
at the quantum level.
In this paper we consider quantum corrections to the mass of an NC
supersymmetric kink in $1+1$ dimensions. Our motivation is twofold.
First, it is interesting to study the interplay between supersymmetry and
noncommutativity with this particular example. Second, supersymmetry
simplifies the structure of divergences of quantum field theory and may
help to resolve some problems existing in the non-supersymmetric case.
Practically, we adapt the methods developed earlier
in \cite{Bordag:2002dg} to the NC case. Supersymmetrization of the NC
space-time is done in the most straightforward way
\cite{Chu:1999ij,Ferrara:2000mm,Terashima:2000xq} where only the bosonic
coordinates are deformed. The model we study here is a supersymmetric
extension of the NC $\varphi^4$ model in $1+1$ dimensions.
In time-space NC theories there are well-known difficulties with
the construction
of a canonical formalism (due to the presence of an infinite number of
time derivatives). Besides, generically there are no locally conserved currents
corresponding to global classical symmetries. Therefore, it is a priori
unclear whether one can define supercharges in such theories. However, as
we show below, this task can be successfully addressed in the framework
of an unconventional canonical formalism \cite{Vassilevich:2005fk},
so that one can introduce supercharges whose brackets give an analog
of the Hamiltonian and a central charge. The Hamiltonian has the meaning
of the energy integrated over an interval $T$ of time. For a static field
configuration it simply reads $TE$,
where $E$ is the energy. The main reason to call these
quantities supercharges and a Hamiltonian is that with respect to the
new brackets they indeed generate global supersymmetry transformations and
the equations of motion, respectively.
Static solutions in NC models in $(1+1)$ dimensions are not deformed, i.e.
they are the same as in corresponding commutative models. The equations of
motion for small fluctuations above such solutions are deformed, and the
fluctuations are described by wave equations with frequency-dependent
potentials. Nevertheless, in the model we consider, bosonic and fermionic
modes are isospectral. To use all advantages of the isospectrality, we employ
the zeta-function regularization and make the spectrum discrete by introducing
boundaries in the spatial direction. (These boundaries are removed at the
end of the calculations). The quantum energy is defined as one half the sum
over the eigenfrequencies. The width of the effective potential in the wave
equations for the fluctuations with the frequency $\omega$ is
proportional to $\theta\omega$, where $\theta$ is the NC parameter.
To keep boundaries far away from the location of the potential we have
to make the position of the boundaries frequency-dependent
\cite{Konoplya:2007xt}. In this approach, quantum energy of the kink
is defined as the energy of a system consisting of the kink and the
boundaries minus the (Casimir) energy of the boundaries \cite{Bordag:2002dg}.
For the renormalization, we use the heat kernel subtraction scheme which
was shown to be equivalent to the no-tadpole condition in the commutative
case \cite{Bordag:2002dg}. The divergences are removed by a renormalization
of the mass, which is precisely the same as in the commutative case. The
renormalized
energy (mass shift of the kink) does not depend on $\theta$
and coincides with its commutative value.
Keeping in mind future applications to the verification of the quantum BPS
bound saturation, we also calculate quantum corrections to the new
Hamiltonian. We find the value $TE$, where $E$ is the mass shift
of the soliton. Two apparently different definitions of the quantum
energy give consistent results. Also, the renormalization required
for the Hamiltonian is the same mass renormalization which we described
above.
This paper is organized as follows. In the next section we introduce
a classical action and collect some preliminary information. In section
\ref{sec-can} we study the new unconventional definition of the
canonical algebra, and define supercharge, the Hamiltonian, and the
central charge. In section \ref{sec-flu} we study the spectrum of
fluctuations above the kink. Quantum corrections to the mass of the kink
are calculated in section \ref{sec-qua}, and corrections to the new
Hamiltonian are considered in section \ref{sec-Ham}. Concluding remarks are
given in section \ref{sec-con}.
\section{The classical action}\label{sec-clas}
We shall describe noncommutativity of the space-time coordinates by
the Moyal product
\begin{equation}
(f\star g) (x)=\left[ \exp \left( \frac i2 \Theta^{\mu\nu} \partial_\mu^x
\partial_\nu^y \right)f(x)g(y) \right]_{y^\mu=x^\mu},\label{Moyal}
\end{equation}
where $\Theta^{\mu\nu}$ is a constant skew-symmetric matrix which
can be chosen as
$\Theta^{\mu\nu} = 2\theta \epsilon^{\mu\nu}$ with $\epsilon^{01}=1$.
After splitting the coordinates into time and space, $\{ x^\mu\}=\{t,x\}$,
we have the following useful formulae
\begin{equation}
f(x) \star e^{i\omega t}=e^{i\omega t} f(x+\theta \omega),\quad
e^{i\omega t}\star f(x) =e^{i\omega t} f(x-\theta \omega)\,.
\label{shift}
\end{equation}
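The first formula in (\ref{shift}) is easy to verify explicitly. The following {\tt sympy} sketch (our own check, not part of the paper) does so for a polynomial $f$, for which the Moyal series terminates and the check is exact:
\begin{verbatim}
# our own check of f(x) * exp(i w t) = exp(i w t) f(x + theta*w)
import sympy as sp

t, x, th, w = sp.symbols('t x theta omega', real=True)
f = x**3                  # any polynomial works; the series terminates
g = sp.exp(sp.I*w*t)

# with Theta^{01} = 2*theta, only the term (-i theta d_x)(d_t') acts,
# since f carries no time dependence
star = sum((-sp.I*th)**n/sp.factorial(n)
           * sp.diff(f, x, n) * sp.diff(g, t, n)
           for n in range(4))
assert sp.simplify(star - g*f.subs(x, x + th*w)) == 0
\end{verbatim}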
The Moyal product is closed,
\begin{equation}
\int d^2x\, f_1\star f_2 = \int d^2x\, f_1 \cdot f_2\,, \label{closed}
\end{equation}
and has the property that
\begin{equation}
\int d^2x\, f_1\star f_2 = (-1)^{g_1g_2}
\int d^2x\, f_2 \star f_1\,, \label{cycl}
\end{equation}
where the grading $g_i=0$ if $f_i$ is bosonic, and $g_i=1$ if $f_i$ is
fermionic. To derive the properties (\ref{closed}) and (\ref{cycl}) one
has to integrate by parts in (\ref{Moyal}). In general, boundary terms
may appear. To avoid them, we assume that in the time direction all fields
are periodic with a very large period which should be sent to infinity
at the end. In the spatial directions all fields must approach constant
values sufficiently fast. Such boundary conditions are satisfied by
static solitons and classical variations of the fields which produce the
equations of motion. A different set of boundary conditions will be used
in sec.\ \ref{sec-flu} to analyze quantum fluctuations.
The action for a supersymmetric NC $\varphi^4$ model reads
\begin{equation}
S=-\frac 12 \int_{\mathcal{M}} d^2x\, \left( (\partial_\mu \varphi)^2
+U'(\varphi )\star \bar\psi\star \psi +\bar\psi \gamma^\mu \partial_\mu \psi
-2F\star U -F^2 \right).\label{suact}
\end{equation}
Here $\varphi$ is a real scalar field, and $\psi$ is a Majorana
spinor. We take $\gamma$-matrices in the Majorana representation
\begin{equation}
\gamma^0 =-i\sigma^2
= \left( \begin{array}{cc} 0 & -1 \\ 1 & 0\end{array} \right),
\qquad
\gamma^1 =\sigma^3 =\left( \begin{array}{cc} 1&0\\0&-1 \end{array} \right).
\label{v-mgamma}
\end{equation}
In this representation the components of $\psi$ are real. $\bar\psi =
\psi^T i\gamma^0$. Components of the spinors will be marked by the
subscripts $\pm$, so that $\psi =\left( \begin{array}{c} \psi_+ \\
\psi_- \end{array} \right)$, $\epsilon =\left( \begin{array}{c} \epsilon_+ \\
\epsilon_- \end{array} \right)$. For the $\varphi^4$ model
\begin{equation}
U(\varphi)=\sqrt {\frac {\lambda}2 } (v_0^2 - \varphi\star\varphi),\qquad
U'(\varphi)= -\sqrt{2\lambda} \varphi\,.\label{UUpr}
\end{equation}
Note that, though due to (\ref{closed}) one star can always be deleted
under an integral, it is more convenient to write all stars explicitly
in all terms higher than second order in the fields, since mixed (star
with ordinary) products are not associative.
The supersymmetry transformations
\begin{equation}
\delta\varphi =\bar\epsilon\psi,\qquad \delta\psi=
(\gamma^\mu\partial_\mu \varphi +F)\epsilon,\qquad
\delta F=\bar\epsilon \gamma^\mu \partial_\mu \psi .
\label{Fsusy}
\end{equation}
are linear, and, therefore, are undeformed. The invariance of (\ref{suact})
under (\ref{Fsusy}) follows from the general analysis of
\cite{Ferrara:2000mm,Terashima:2000xq}, but can also be verified directly.
The auxiliary field $F$ may be excluded by means of its algebraic\footnote{
This means that no derivatives acting on $F$ appear.}
equation of motion
\begin{equation}
F=-U(\varphi). \label{Feq}
\end{equation}
The action (\ref{suact}) becomes
\begin{equation}
S=-\frac 12 \int_{\mathcal{M}} d^2x\, \left( (\partial_\mu \varphi)^2
+U'(\varphi )\star \bar\psi\star \psi +\bar\psi \gamma^\mu \partial_\mu \psi
+ U\star U \right),\label{noFact}
\end{equation}
and the supersymmetry transformations read
\begin{equation}
\delta\varphi =\bar\epsilon\psi,\qquad \delta\psi=
(\gamma^\mu\partial_\mu \varphi -U)\epsilon .
\label{susy}
\end{equation}
The equations of motion corresponding to the action (\ref{noFact}) are
\begin{eqnarray}
&&\partial_\mu \partial^\mu \varphi
+ \sqrt{\frac {\lambda}2} \, \bar \psi\star \psi
-\frac 12 (U\star U' + U'\star U)=0,\label{beom}\\
&&\slashed{\partial} \psi + \frac 12 ( U'\star \psi + \psi \star U')=0.
\label{feom}
\end{eqnarray}
Static solutions of these equations are the same as in the commutative case.
In particular, there is the kink solution
\begin{equation}
\Phi (x)=v_0 \tanh \left( v_0 \sqrt{\frac {\lambda}2 } \, x\right).
\label{kink}
\end{equation}
This solution satisfies the Bogomolny equation
\begin{equation}
\partial_1 \Phi (x) = U(\Phi)\label{Beq}
\end{equation}
and is invariant under the supersymmetry transformations (\ref{susy}) with
$\epsilon_-=0$.
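The Bogomolny equation can be verified directly; the following {\tt sympy} sketch (our own check) does so symbolically, using that the star product of static fields reduces to the ordinary product:
\begin{verbatim}
# our own check: d_x Phi = U(Phi) for the kink
import sympy as sp

x = sp.symbols('x', real=True)
v0, lam = sp.symbols('v_0 lambda', positive=True)
Phi = v0*sp.tanh(v0*sp.sqrt(lam/2)*x)
U = sp.sqrt(lam/2)*(v0**2 - Phi**2)  # star = ordinary for static fields
assert sp.simplify(sp.diff(Phi, x) - U) == 0
\end{verbatim}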
\section{Canonical realization of the supersymmetry algebra}\label{sec-can}
We have no locally conserved supercurrent in the model
(as there is no locally conserved energy-momentum tensor
in NC theories \cite{Gerhold:2000ik}), but still
by using an unconventional canonical formalism
for time-space noncommutative theories \cite{Vassilevich:2005fk}
we can define supercharges which generate the supersymmetry transformations
(\ref{susy}). Let us briefly outline the formalism of \cite{Vassilevich:2005fk}
(in \cite{Vassilevich:2005fk} only the bosonic case was considered, but
an extension to the presence of fermions is straightforward). The canonical
pairs are defined ignoring the time derivatives hidden in the star-product.
In our model this implies that they are precisely the same as in the
commutative case. To read off the symplectic form, let us re-write
the action (\ref{noFact}) in a ``hamiltonian'' form
\begin{eqnarray}
S&=& \int d^2x \left( -\frac i2 (\partial_0 \psi_+ \cdot \psi_+
+\partial_0\psi_- \cdot \psi_-)+\frac 12 ((\partial_0\varphi)p-
(\partial_0p)\varphi) -\mathcal{H} \right)\label{Hamact}\\
&=& \int d^2x \left( -\frac 12 (C^{-1})^{AB}\partial_0 z_A \cdot z_B
-\mathcal{H}\right) .\nonumber
\end{eqnarray}
(We use the conventions of Henneaux \cite{Henneaux:1985kr}). Here
$\{ z_A\} \equiv \{ \varphi,p,\psi_+,\psi_-\}$.
\begin{equation}
{\mathcal{H}}=\frac 12 \left( (\partial_1 \varphi)^2 + p^2 +
U\star U + U'\star \bar \psi \star \psi +\bar\psi \gamma^1 \partial_1\psi
\right) \label{H}
\end{equation}
does not contain explicit time derivatives (all time derivatives are
hidden in the star product).
The canonical brackets are taken between
variables at different times and are postulated to be proportional to
{\it two-dimensional} delta-functions instead of one-dimensional ones,
$\{ z_A(t,x),z_B(t',x')\}=C_{AB}\delta(t-t')\delta(x-x')$. More
explicitly,
\begin{eqnarray}
&&\{ \varphi (t,x),p(t',x') \}=\delta (x-x')\delta (t-t'),\label{canrel1}\\
&&\{ \psi_\pm (t,x),\psi_\pm (t',x') \}=-i \delta (x-x')\delta (t-t'),
\label{canrel2}
\end{eqnarray}
and $p=\partial_0 \varphi$. Usual grading rules are understood. Now we have
to extend the definition of the brackets to star-polynomials of $z_A$ and their
derivatives. Here we face a difficulty since a star product by a
delta-function is not a well defined object. However, we can define brackets
between space-time integral of polynomials. Let $F$, $G$ be two such
integrals. Then
\begin{equation}
\{ F,G\} = \int d^2x \frac {\delta^r F}{\delta z_A(x)}\star C_{AB}
\frac {\delta^l G}{\delta z_B(x)} \,.\label{FGbr}
\end{equation}
Here $\delta^r$ and $\delta^l$ are right and left variational derivatives.
For a practical use, the formula (\ref{FGbr}) has to be understood in
the following way. One has to take all pairs of canonical variables $z_A$,
$z_B$ in $F$ and $G$ respectively, then one uses the property (\ref{cycl})
to bring $z_A$ to the rightmost position in $F$, and $z_B$ to the leftmost
position in $G$. Then one integrates by parts to
remove all explicit derivatives
from $z_A$ and $z_B$. Then one deletes $z_A$ and $z_B$, star-multiplies
the expressions obtained, contracts with $C_{AB}$ and integrates over the
space-time. The brackets defined in this way satisfy the (graded) Jacobi
identities. For bosonic theories this was demonstrated in
\cite{Vassilevich:2005fk}, and an extension to fermions is straightforward.
By taking $F=\int f\star \hat F$, where $f$ is a smooth function,
calculating the bracket with $G$, and then varying with respect to
$f$, one can extend the definition to brackets between star-polynomials
$\hat F$ and integrated star-polynomials $G$. This trick does not work
twice. Therefore, it is not possible to define a bracket between
unintegrated polynomials, but we shall not need such an object.
In \cite{Vassilevich:2005fk} it was shown that these unconventional Poisson
brackets can be used to define first-class constraints and generate gauge
transformations in time-space NC theories (see also \cite{Vassilevich:2006uv}
for an example of practical use of these brackets). Here we shall apply
them to analyze global symmetries.
First we note, that if we define the ``Hamiltonian'' as a space-time integral
\begin{equation}
H=\int d^2x\, \mathcal{H} \label{HH}
\end{equation}
of the density (\ref{H}), then the brackets with $H$ generate the equations
of motion
\begin{equation}
\{ H,z_A\} = -\partial_0 z_A\,. \label{Hameq}
\end{equation}
A definition of the ``supercharge'' then follows by an educated guess as
a suitable generalization of corresponding commutative expression. Let us take
\begin{equation}
Q=-\int d^2x (\slashed{\partial}\varphi +U(\varphi))\star
\gamma^0 \psi\,. \label{Q}
\end{equation}
It is easy to check that this ``supercharge'' indeed generates the supersymmetry
transformations
\begin{equation}
\{ \bar \epsilon Q,z_A \} =-\tilde\delta z_A \label{Qsusy}
\end{equation}
of the Hamiltonian action (\ref{Hamact}). On shell the transformations
$\tilde \delta$ coincide with (\ref{susy}).
We see that the ``Hamiltonian'' and the ``supercharge'' possess
the characteristic features which we expect from a Hamiltonian and
a supercharge. Therefore, we shall sometimes omit the quotation marks in what
follows.
The kink solution (\ref{kink}) is invariant under the $\epsilon_+$
transformations, which are generated by $Q_-$. The bracket of two such
supercharges reads
\begin{equation}
\{ Q_-,Q_-\} = -2i (H-Z) \label{QQ}
\end{equation}
where\footnote{Note, that there is another total derivative term in
$\{ Q_-,Q_-\}$, namely $-i\int \partial_1 (\bar\psi \psi)$. This term
vanishes if one considers fluctuations above the kink solution
with the asymptotic conditions we discussed above. However,
such terms are important for the ``supersymmetry without
boundary conditions'' approach \cite{Belyaev:2008xk}.}
\begin{eqnarray}
&&Z=\int d^2x \partial_1 W(\varphi),\label{Z}\\
&&W(\varphi) =\sqrt{\frac {\lambda}2 } \left( v_0^2 \varphi -\frac 13
\varphi \star \varphi \star \varphi \right) .\label{W}
\end{eqnarray}
$Z$ is a natural generalization of the central charge to the NC case.
We obtained a standard form of a central extension of the supersymmetry
algebra in a topologically non-trivial sector \cite{Witten:1978mh},
though the generators are given by two-dimensional integrals and the
brackets are unconventional.
On the kink background both $H$ and $Z$ are divergent unless one
restricts the integration over $t$ to a finite interval.
Note, that the difference $H-Z$ for the kink is finite
and vanishes.
\section{Fluctuations}\label{sec-flu}
The spectrum of fluctuations is defined by the linearized equations of motion
(\ref{beom}) and (\ref{feom}). For the fermionic fluctuations we have
\begin{equation}
\left( \begin{array}{cc} \partial_1 +\frac 12 (L( U'(\Phi))+
R(U'(\Phi))) & -\partial_0 \\
\partial_0 & -\partial_1 + \frac 12 (L( U'(\Phi))+
R(U'(\Phi))) \end{array} \right)
\left( \begin{array}{c} \psi_+ \\ \psi_- \end{array} \right) =0 .
\label{linDir}
\end{equation}
Here $L$ and $R$ denote left and right Moyal multiplications respectively,
$f_1\star f_2=L(f_1)f_2=R(f_2)f_1$.
The fluctuation operator commutes with $\partial_0$. Consequently, we
can look for the solutions in the form
\begin{equation}
\psi_\pm (t,x)=e^{i\omega_f t} \psi_\pm (\omega_f,x) .
\label{v-psiom}
\end{equation}
The equation (\ref{linDir}) then yields
\begin{eqnarray}
&&i\omega_f \psi_+ (\omega_f,x)=(\partial_1 -\frac 12 ( U'(\Phi_+)+
U'(\Phi_-))
)\psi_-(\omega_f,x),
\nonumber\\
&&i\omega_f \psi_- (\omega_f,x)=(\partial_1 +\frac 12 ( U'(\Phi_+)+
U'(\Phi_-)))\psi_+(\omega_f,x),
\label{Dir2}
\end{eqnarray}
where
\begin{equation}
\Phi_{\pm}(x) \equiv \Phi (x\pm \theta\omega)\,. \label{Phipm}
\end{equation}
The property (\ref{shift}) of the Moyal product has been used.
By iterating the equations (\ref{Dir2}) one obtains
\begin{eqnarray}
&&\omega_f^2 \psi_+ (\omega_f,x)=
- D_-(\omega_f)D_+(\omega_f) \psi_+ (\omega_f,x),
\nonumber\\
&&\omega_f^2 \psi_- (\omega_f,x)=
- D_+(\omega_f)D_-(\omega_f) \psi_- (\omega_f,x),
\label{Dir3}
\end{eqnarray}
where
\begin{equation}
D_\pm (\omega)=
\partial_1 \mp \sqrt{\frac{\lambda}2} (\Phi_++\Phi_-) .\label{v-Dpm}
\end{equation}
In the bosonic sector, we decompose the scalar field as
$\varphi =\Phi +\phi$. The fluctuations $\phi$ satisfy the linearized field
equation
\begin{equation}
-\partial_0^2\phi = -(\partial_1^2 +\lambda v_0^2 -
\lambda (L(\Phi^2)+R(\Phi^2)+L(\Phi)R(\Phi))\phi \,.\label{scalin1}
\end{equation}
Again, we look for the solutions in the form $\phi (t,x)=
e^{i\omega_b t} \phi (\omega_b,x)$. The equation (\ref{scalin1}) yields
\begin{equation}
\omega_b^2 \phi (\omega_b) = -(\partial_1^2 +\lambda v_0^2 -
\lambda (\Phi^2_++\Phi^2_-+\Phi_+\Phi_-))\phi(\omega_b). \label{scalin2}
\end{equation}
By using the Bogomolny equation (\ref{Beq}) we obtain
\begin{equation}
\omega_b^2 \phi(\omega_b) =-D_+(\omega_b)D_-(\omega_b)\phi(\omega_b).
\label{scalin3}
\end{equation}
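The step from (\ref{scalin2}) to (\ref{scalin3}) can be checked symbolically. The following {\tt sympy} sketch (our own check) verifies that $-D_+(\omega)D_-(\omega)$ acting on a test function reproduces the operator on the right hand side of (\ref{scalin2}):
\begin{verbatim}
# our own check that -D_+ D_- phi matches the operator in (scalin2)
import sympy as sp

x, th, w = sp.symbols('x theta omega', real=True)
v0, lam = sp.symbols('v_0 lambda', positive=True)
phi = sp.Function('phi')(x)
c = v0*sp.sqrt(lam/2)
Pp = v0*sp.tanh(c*(x + th*w))        # Phi_+
Pm = v0*sp.tanh(c*(x - th*w))        # Phi_-

W = sp.sqrt(lam/2)*(Pp + Pm)
Dp = lambda u: sp.diff(u, x) - W*u   # D_+(omega)
Dm = lambda u: sp.diff(u, x) + W*u   # D_-(omega)

lhs = -Dp(Dm(phi))
rhs = -(sp.diff(phi, x, 2) + lam*v0**2*phi
        - lam*(Pp**2 + Pm**2 + Pp*Pm)*phi)
assert sp.simplify(lhs - rhs) == 0
\end{verbatim}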
The spectrum of the eigenfrequencies is defined by two operators,
$P_1(\omega)=-D_+(\omega)D_-(\omega)$ and
$P_2(\omega)=-D_-(\omega)D_+(\omega)$. Due to the intertwining
relations
\begin{equation}
P_1(\omega)D_+(\omega)=D_+(\omega)P_2(\omega),\qquad
D_-(\omega)P_1(\omega)=P_2(\omega)D_-(\omega) \label{inter}
\end{equation}
these operators are isospectral up to zero modes. Indeed, these relations
imply that if $P_1\psi_1=\lambda \psi_1$, then $D_-\psi_1$ is an eigenfunction
of $P_2$ with the same eigenvalue. Also, if $P_2\psi_2=\lambda \psi_2$,
then $P_1(D_+\psi_2)=\lambda (D_+\psi_2)$.
An explicit form of $P_1$ follows from (\ref{scalin2}). For the sake of
completeness we also present
\begin{equation}
P_2(\omega)=-(\partial_1^2 -\lambda v_0^2 -\lambda \Phi_+\Phi_-).
\label{P2}
\end{equation}
In calculations of the quantum corrections it is convenient to go from the
continuous to discrete spectrum of $P_1$ and $P_2$ by introducing boundaries
\cite{Bordag:2002dg} in the $x$-direction. We would like the boundary to interact
with the soliton as weakly as possible. Therefore, the boundary should be far
away from the place where the kink is localized. However, as we see e.g.
from eq.\ (\ref{scalin2}), the width of the effective potential is
proportional to $\theta\omega$ and becomes infinite for $\omega\to\infty$.
No boundary seems to be sufficiently far away. To overcome this difficulty,
in \cite{Konoplya:2007xt} it was suggested to make the boundary
$\omega$-dependent, i.e. to place it at the points
$x=\pm l(\omega)=\pm (l_0+\theta\omega)$ with a large $l_0$.
Having a boundary, one has to impose some boundary conditions
on the fluctuations. The particular choice of the boundary conditions
is not too important (as anyway we are going to subtract the vacuum
energy related to the boundary), but to use the full strength
of supersymmetry it is convenient to take supersymmetric boundary
conditions which respect the intertwining relations (\ref{inter})
and, therefore, preserve isospectrality of $P_1(\omega)$ and
$P_2(\omega)$ for any $\omega$. The simplest choice is to impose the
Dirichlet boundary conditions on $\phi$ and $\psi_-$,
\begin{equation}
\phi \vert_{x=\pm l(\omega)}=
\psi_- \vert_{x=\pm l(\omega)}=0.\label{Dircon}
\end{equation}
The intertwining relations then require a Robin (generalized Neumann) boundary
condition for $\psi_+$:
\begin{equation}
D_+\psi_+ \vert_{x=\pm l(\omega)}=0.\label{Robcon}
\end{equation}
(Note that the same boundary condition on $\psi_+$ follows from
the consistency of the Dirac equation (\ref{Dir2})).
In general, the Moyal product cannot be restricted to
an interval with frequency dependent boundaries. However, for
operators commuting with the time derivatives (in particular,
for Moyal multiplications by a time-independent function)
such a restriction can be made along the lines described in
this section.
\section{Quantum corrections to the mass}\label{sec-qua}
Here we use a generalization of the method \cite{Bordag:2002dg}
to the NC case. Namely, we first consider the kink with boundaries
with fluctuations subject to the boundary conditions (\ref{Dircon})
and (\ref{Robcon}), calculate the total quantum energy of this system
$E_{\rm k+b}$, and then subtract the vacuum energy $E_{\rm b}$
which is due to the presence of boundaries. The energy associated
with the kink is then
\begin{equation}
E_{\rm k}=E_{\rm k+b}-E_{\rm b}.\label{Ekbk}
\end{equation}
The vacuum energy for each of the systems is defined as a half-sum of
the eigenfrequencies,
\begin{equation}
E=\frac 12 \sum \omega_b -\frac 12\sum \omega_f \label{Esum}
\end{equation}
(we set $\hbar=1$). In time-space NC theories there is no standard
canonical Hamiltonian to justify this formula for the energy
(though, there is a non-standard one, see sec.\ \ref{sec-can}
and \ref{sec-Ham}). For systems with a finite number of additional time
derivatives (with fields in
stationary but non-static geometries being an example of such systems)
it was shown that this definition of the energy is equivalent to
the canonical one, and the presence of extra time derivatives
(which results in modifications of the Klein-Gordon current and
corresponding scalar product) influences the results of quantum
computations through modification of the spectral density
\cite{Fursaev:2000dv,Fursaev:2001yu} (see also
\cite{Strelchenko:2007xh,Konoplya:2007xt} for an extension of this analysis
to NC case). Adopting the same approach here looks as the most reliable
extension of the notion of vacuum energy to time-space NC theories.
Because of the presence of boundaries we deal with a
discrete spectrum of eigenfrequencies. It is convenient to use
the zeta-function regularization \cite{Dowker:1975tf,Hawking:1976ja}.
The operator $P_1$ (resp., $P_2$) is a product of a first-order operator
and its formal adjoint. Therefore, both $P_1$ and $P_2$ are non-negative.
In the positive spectrum, the zeta-regularized energy reads
\begin{equation}
E(s)=\frac {\mu^{2s}}2 \left( {\sum}' (\omega_b^2)^{\frac 12 -s}
- {\sum}' (\omega_f^2)^{\frac 12 -s}\right),\label{zregE}
\end{equation}
where prime tells us that the summation runs over the positive spectrum
only. (Zero frequencies do not contribute to the vacuum energy).
The parameter $\mu$ with the dimension of mass is introduced in order to
keep the right dimensionality of the energy independently of the regularization
parameter $s$. Both sums on the right hand side of (\ref{zregE})
are convergent for ${\rm Re}\,(s)$ sufficiently large. At the end of
the calculations the result must be analytically continued to the
physical value $s=0$.
Let us first analyze $E_{\rm k+b}$. Due to the isospectrality properties
discussed above
\begin{equation}
E(s)_{\rm k+b}=0, \label{Eskb}
\end{equation}
i.e., the regularized vacuum energy vanishes identically.
Although, obviously, the vacuum energy (\ref{Eskb}) is not divergent,
there might be some finite contribution due to a finite renormalization
\footnote{This indeed happens in some models. For example,
the whole correction to the mass of
the supersymmetric Abrikosov-Nielsen-Olesen vortex is due to a finite
renormalization of couplings \cite{Vassilevich:2003xk,Rebhan:2003bu}.}.
To define such a contribution one should fix a normalization condition
or a subtraction scheme. Here we use the
heat-kernel subtraction scheme which is frequently
employed in the Casimir energy calculations and is discussed in detail in
\cite{Bordag:1998vs,Bordag:2001qi}. Consider a (bosonic)
system in $1+1$ dimensions
with a discrete frequency spectrum $\{ \omega_n \}$. Let
$k_n^2=\omega_n^2-m^2$, where $m$ is the mass (or, the asymptotic value
of the potential). The regularized vacuum energy for this system admits
a representation,
\begin{equation}
\frac {\mu^{2s}}2 \sum_n (k_n^2+m^2)^{\frac 12 -s}=
\frac {\mu^{2s}}2 \int_0^\infty \frac{d\tau}{\tau}\,
\frac {\tau^{s-\frac 12}}{\Gamma \left(s-\frac 12\right)}\,
K(\tau)e^{-\tau m^2} \,,\label{intrep}
\end{equation}
where
\begin{equation}
K(\tau)=\sum_n e^{-\tau k_n^2} \label{Ktau}
\end{equation}
is the corresponding heat kernel. Usually, the heat kernel admits an asymptotic
expansion\footnote{Such an expansion indeed exists for practically all cases
appearing in the context of quantum field theory. A more precise and complete
information on the heat kernel expansion can be found in
\cite{Vassilevich:2003xt} for commutative space, and in
\cite{Vassilevich:2007fq} in the NC case. The heat kernel for
frequency-dependent problems was analyzed in
\cite{Fursaev:2001yu,Fursaev:2001fm,Fursaev:2002vi}.}
\begin{equation}
K(\tau)\simeq \sum_{p>0} a_p \tau^{p-1}\label{asym}
\end{equation}
as $\tau\to +0$. For $s=0$ contributions to (\ref{intrep})
from the terms with $p=0,1,2$
are divergent at the lower limit. We define the divergent part of
the vacuum energy as
\begin{eqnarray}
&&E^{\rm div}\equiv \frac {\mu^{2s}}2 \int_0^\infty \frac{d\tau}{\tau}\,
\frac {\tau^{s-\frac 12}}{\Gamma \left(s-\frac 12\right)}\,
\sum_{n=0}^2 a_n \tau^{n-1} e^{-\tau m^2}\nonumber\\
&&\qquad = \frac {\mu^{2s}}{2\Gamma \left(s-\frac 12\right)}
\left\{ a_0 \Gamma (s-1)m^{2-2s}+a_1 \Gamma \left(s-\frac 12\right)
m^{1-2s}\right.\nonumber\\
&&\qquad\qquad \left. +a_2 \Gamma (s)m^{-2s}\right\} . \label{Ediv}
\end{eqnarray}
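The pole structure of (\ref{Ediv}) at $s=0$ is easy to exhibit. The following {\tt sympy} sketch (our own illustration) extracts the residue of the simple pole, to which the $a_1$ term does not contribute:
\begin{verbatim}
# our own sketch: residue of E^div(s) at s = 0
import sympy as sp

s, m, mu, a0, a1, a2 = sp.symbols('s m mu a_0 a_1 a_2', positive=True)
pref = mu**(2*s)/(2*sp.gamma(s - sp.Rational(1, 2)))
Ediv = pref*(a0*sp.gamma(s - 1)*m**(2 - 2*s)
             + a1*sp.gamma(s - sp.Rational(1, 2))*m**(1 - 2*s)
             + a2*sp.gamma(s)*m**(-2*s))
res = sp.limit(s*Ediv, s, 0)
print(sp.simplify(res))   # we expect (a_0*m**2 - a_2)/(4*sqrt(pi))
\end{verbatim}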
The renormalized energy is then
\begin{equation}
E^{\rm ren}=[E(s) - E^{\rm div}(s)]_{s=0} \label{defEren}
\end{equation}
This subtraction scheme has two important advantages. First, in the case
of commutative scalar theories in $1+1$ dimensions it is equivalent
\cite{Bordag:2002dg}
to the ``no tadpole'' normalization condition which is commonly used
to calculate the mass shift of two-dimensional solitons. Second,
this scheme can easily be extended to the NC case.
Let us return to $E_{\rm k+b}$. Due to (\ref{Eskb}) the heat kernel
is also identically zero, as well as all heat kernel coefficients
and $E^{\rm div}_{\rm k+b}(s)$. We conclude that
\begin{equation}
E^{\rm ren}_{\rm k+b}=0.\label{Ekbren}
\end{equation}
Next we have to study the vacuum energy $E_{\rm b}$ due to the presence
of boundaries. Far away from the kink, the excitations are
free bosonic and fermionic modes with the mass $m=v_0\sqrt{2\lambda}$
which is defined by asymptotic values of the potential in (\ref{scalin2})
and (\ref{P2}). In the bosonic sector, the boundary conditions are Dirichlet.
In the fermionic sector, one mode satisfies the Dirichlet conditions as
well, another one satisfies the Robin boundary conditions (each of the
modes carries one half of a degree of freedom)\footnote{It is important
that, as in the commutative case \cite{Bordag:2002dg}, we use for the
fermions an asymptotic form of the squared Dirac equation (\ref{Dir3}).
One cannot substitute asymptotic values of the fields in the Dirac
equation (\ref{linDir}) itself and then extend it smoothly to the whole
space $[-l,l]$.}.
Let us study the Robin sector first. For large $l_0$ (recall
that $l(\omega)=l_0+\theta\omega$), the condition
(\ref{Robcon}) yields
\begin{equation}
(\partial_x + S_1)\psi\vert_{x=-l(\omega)}=0,\qquad
(-\partial_x + S_2)\psi\vert_{x=l(\omega)}=0,
\label{Nbc}
\end{equation}
where
\begin{equation}
S_1=S_2=v_0 \sqrt{2\lambda}\equiv S.\label{S1S2}
\end{equation}
There are no bound states ($\omega^2<m^2$) for these boundary conditions.
The spectrum of oscillating modes, $\psi = A\sin (kx) + B\cos (kx)$,
$k=\sqrt{\omega^2 -m^2}$ is given by solutions of the equation
\cite{Bordag:2002dg}
\begin{equation}
0=f(\alpha_1,\alpha_2;k)\equiv
\sin (2kl(\omega)+\alpha_1+\alpha_2) \label{fk}
\end{equation}
with
\begin{equation}
\alpha_{1,2}=-\arctan (k/S_{1,2}) \equiv \alpha \,.\label{a12}
\end{equation}
It is easy to see that the spectrum in the Dirichlet sector is defined
by the equation
\begin{equation}
0=f(0,0;k)\,.\label{fDir}
\end{equation}
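To illustrate how the frequency-dependent boundary enters these conditions, the following numerical sketch (our own illustration, with arbitrary parameter values) finds the lowest roots of (\ref{fk}) and (\ref{fDir}):
\begin{verbatim}
# our own sketch: lowest eigenfrequencies with l(omega) = l0 + theta*omega
import numpy as np
from scipy.optimize import brentq

m, S, theta, l0 = 1.0, 1.0, 0.1, 20.0   # S = m = v_0*sqrt(2*lambda)

def l_of(k):                            # l(omega), omega = sqrt(k^2+m^2)
    return l0 + theta*np.sqrt(k**2 + m**2)

for n in range(1, 4):
    # Dirichlet: 2 k l(omega) = n pi;  Robin: 2 k l(omega) + 2 alpha = n pi
    kD = brentq(lambda k: 2*k*l_of(k) - n*np.pi, 1e-9, 10.0)
    kR = brentq(lambda k: 2*k*l_of(k) - 2*np.arctan(k/S) - n*np.pi,
                1e-9, 10.0)
    print(n, np.sqrt(kD**2 + m**2), np.sqrt(kR**2 + m**2))
\end{verbatim}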
Next we represent the vacuum energy as a contour integral
\cite{Bordag:1994jz,Bordag:2002dg}. The function $\partial_k \ln f(k)$
has poles with unit residues at the points where $f(k)=0$. Therefore,
we can write
\begin{equation}
E_{\rm b}(s)=
-\frac {\mu^{2s}}4 \oint \frac{dk}{2\pi i} (k^2+m^2)^{\frac 12 -s}
\frac \partial{\partial k} (\ln f(\alpha,\alpha;k) -\ln f(0,0;k)),
\label{AEC1}
\end{equation}
where the contour goes anticlockwise around the positive real semiaxis.
Along the upper part of the contour we approximate
$\sin (2(kl(\omega)+\alpha))$ by $-(1/2i)\exp (-2i(kl(\omega)+\alpha))$
since the term $\exp (2i(kl(\omega)+\alpha))$ vanishes as $l_0\to\infty$.
Along the lower part we keep $(1/2i)\exp (2i(kl(\omega)+\alpha))$.
Then,
\begin{equation}
E_{\rm b}(s)=-\mu^{2s} \int_0^\infty \frac{dk}{2\pi }
(k^2+m^2)^{\frac 12 -s}\, \frac {\partial\alpha}{\partial k}\,. \label{Esbfin}
\end{equation}
We see that all contributions containing $l(\omega)$ cancel.
Therefore, the regularized boundary energy is given by precisely
the same expression as in the
commutative case (cf.\ \cite{Bordag:2002dg}).
Without any further calculations we can read off
the renormalized value
\begin{equation}
E_{\rm b}^{\rm ren}=\sqrt{\lambda/2}\, \frac {v_0}{\pi}\label{Ebren}
\end{equation}
from \cite{Bordag:2002dg}. Consequently, the renormalized vacuum energy
of the kink
\begin{equation}
E_{\rm k}^{\rm ren}=E_{\rm k+b}^{\rm ren}-E_{\rm b}^{\rm ren}
=-\sqrt{\lambda/2}\, \frac {v_0}{\pi} \label{Ekren}
\end{equation}
does not depend on $\theta$ and coincides with its value in the
commutative theory.
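The closed form of the regularized integral (\ref{Esbfin}) and its simple pole at $s=0$, removed by the heat-kernel subtraction, can be checked with {\tt sympy}. In the following sketch (our own check; we set $\mu=1$, and the quoted output is what we expect rather than a statement taken from the references) we also verify the identity $\sqrt{\lambda/2}\,v_0/\pi=m/(2\pi)$:
\begin{verbatim}
# our own check of E_b(s) with alpha = -arctan(k/S), S = m, mu = 1
import sympy as sp

k, m, s = sp.symbols('k m s', positive=True)
alpha = -sp.atan(k/m)
integrand = (k**2 + m**2)**(sp.Rational(1, 2) - s)*sp.diff(alpha, k)
Eb = -sp.integrate(integrand, (k, 0, sp.oo))/(2*sp.pi)  # needs Re s > 0
print(sp.simplify(Eb))
# expected: sqrt(pi)*m**(1 - 2*s)*gamma(s)/(4*pi*gamma(s + 1/2))

v0, lam = sp.symbols('v_0 lambda', positive=True)
assert sp.simplify(sp.sqrt(lam/2)*v0/sp.pi
                   - v0*sp.sqrt(2*lam)/(2*sp.pi)) == 0
\end{verbatim}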
\section{Vacuum expectation value of the new canonical Hamiltonian}
\label{sec-Ham}
There is little doubt about the correctness of the definition of the
vacuum energy used in the previous section. However, keeping in
mind the applications to saturation of the BPS bound one should also
calculate corrections to the new Hamiltonian (\ref{HH}) which
participates in the supersymmetry algebra.
To calculate the vacuum expectation value of the Hamiltonian
(\ref{HH}) we need the propagators for small fluctuations over the
kink background. Let us start in the bosonic sector. Consider
eigenfunctions of the operator $P_1(\omega)$,
\begin{equation}
P_1(\omega) \tilde\phi_{\omega,\lambda_\omega}(x)=\lambda_\omega^2
\tilde\phi_{\omega,\lambda_\omega}(x)\,,\label{phomlom}
\end{equation}
and normalize them according to the condition
\begin{equation}
\int dx\, \tilde\phi_{\omega,\lambda_\omega}^*(x)
\tilde\phi_{\omega,\lambda'_\omega}(x)=\delta_{\lambda_\omega,\lambda'_\omega}
\,.\label{normphi}
\end{equation}
We assume that there is a boundary in the $x$-direction, so that the
spectrum is discrete. The functions $\tilde \phi_{\omega,\lambda_\omega}$
are defined initially on the interval $[-l(\omega),l(\omega)]$ but can be
extended to the whole $\mathbb{R}$ as $\tilde \phi_{\omega,\lambda_\omega}=0$
for $|x|>l(\omega)$. The operator $P_1(\omega)$ acts by its analytic formula
inside the interval and is extended as multiplication by $\lambda_\omega^2$
outside the interval and on the boundary. (Of course, as long as $l(\omega)$
is finite the functions $\tilde\phi_{\omega,\lambda_\omega}$ cannot be used
to expand an arbitrary function on $\mathbb{R}$).
The integration in (\ref{normphi})
can run over $\mathbb{R}$, but the dual formula
\begin{equation}
\sum_\lambda \tilde \phi^*_{\omega,\lambda_\omega}(x)
\tilde \phi_{\omega,\lambda_\omega}(x')=\delta(x-x')
\label{nck35}
\end{equation}
is valid only if both $x$ and $x'$ belong to $[-l(\omega),l(\omega)]$.
Otherwise, the right hand side is zero.
The functions
\begin{equation}
\phi_{\omega,\lambda_\omega}(x^\mu)=e^{-i\omega t}
\tilde \phi_{\omega,\lambda_\omega}(x) \label{phfull}
\end{equation}
are the eigenfunctions of the full kinetic operator acting
on fluctuations (restricted to an interval) with eigenvalues
$-\omega^2+\lambda_\omega^2$. The propagator can then be
constructed in the standard way as
\begin{equation}
G(x^\mu,{x^\mu}')=\frac 1{2\pi}\int d\omega \sum_{\lambda_\omega}
\frac{ \phi_{\omega,\lambda_\omega}(x^\mu)
\phi_{\omega,\lambda_\omega}^*({x^\mu}')}{-\omega^2+\lambda_\omega^2
-i\varepsilon},\label{GF}
\end{equation}
but the relation $P_1G(x^\mu,{x^\mu}')=\delta (x^\mu-{x^\mu}')$
is true only if both $x^1$ and ${x^1}'$ belong to the intersection of
the intervals $[-l(\omega),l(\omega)]$, i.e., to $[-l_0,l_0]$.
For $l_0\to \infty$ one recovers the Feynman propagator.
Then,
\begin{equation}
\langle \phi (x^\mu) \phi (y^\nu)\rangle = -iG(y^\nu,x^\mu).
\label{vevpp}
\end{equation}
With the help of this equation one calculates the one-loop
vacuum expectation of the bosonic part of the Hamiltonian
\begin{eqnarray}
&&\langle H \rangle_B =-\frac i2
\int d^2x (-\partial_0^2-\partial_1^2
+\lambda v_0^2 -\lambda (L(\Phi^2)+R(\Phi^2)+L(\Phi)R(\Phi))_x
\nonumber\\
&&\qquad\qquad\qquad\qquad\qquad\qquad \times
\left. G(x^\mu,y^\nu)\right|_{y^1=x^1,\ y^0=x^0+\sigma}.
\label{vevHb}
\end{eqnarray}
where we introduced a time-splitting regularization
with the parameter $\sigma$. The operator acting on $G$ should be understood
as $-\partial_0^2+P_1$. The action of $P_1$ on
$\tilde\phi_{\omega,\lambda_\omega}$
is already defined above.
It is easy to see that the integrand does not depend on $x^0$. In
order to remove the corresponding divergence we restrict the integration
over $x^0$ to $[0,T]$ with some finite $T$. We have,
\begin{eqnarray}
&&\langle H \rangle_B =-\frac {iT}2 \int \frac {d\omega}{2\pi}\int dx^1
\sum_{\lambda_\omega} \frac {\omega^2+\lambda_\omega^2}{-\omega^2+
\lambda_\omega^2 -i\varepsilon}
\tilde \phi_{\omega,\lambda_\omega}(x^1)
{\tilde \phi_{\omega,\lambda_\omega}}^* (x^1)\, e^{i\omega\sigma}\nonumber\\
&&\qquad = -\frac {iT}2 \int \frac {d\omega}{2\pi}
\sum_{\lambda_\omega} \frac {\omega^2+\lambda_\omega^2}{-\omega^2+
\lambda_\omega^2 -i\varepsilon}\, e^{i\omega\sigma}\label{vevH2}
\end{eqnarray}
Let $\sigma<0$. The integration contour can be closed in the lower
complex half-plane.
For each value of $\omega$ there is a discrete set of eigenvalues
$\{ \lambda^j_\omega\}$. Let $\omega_j$ be positive solutions
of the equation $\omega_j=\lambda^j_{\omega_j}$ (there could be
multiple solutions of this equation for each $j$, but we do not consider
such cases for simplicity). Then,
\begin{equation}
\langle H \rangle_B = \frac T2 \sum_j \omega_j \left( 1 -
\frac{d\lambda^j_\omega}{d\omega}\vert_{\omega=\omega_j}\right)^{-1}.
\label{vevH3}
\end{equation}
For $\sigma >0$ the result is the same.
This formula admits a rather simple interpretation. The factor $T$
appears since our Hamiltonian has the meaning of energy integrated over
the time. The expression under the sum is the energy of an excitation
with the frequency $\omega_j$. In the commutative limit the derivative
in the bracket vanishes, so that each excitation contributes $\frac 12 \omega$.
In the NC case, a correction factor appears. The presence of this factor means
that the contribution of an individual mode to $\langle H \rangle$
differs from that to $E$. As we shall see below, due to the supersymmetry
this difference does not affect the final result when contributions of
all modes, bosonic and fermionic, are taken into account.
For contribution of the fermionic fluctuations one obtains
similarly\footnote{The only subtlety is the way to extend the eigenfunctions
satisfying Robin boundary conditions outside the interval $[-l(\omega),
l(\omega)]$. This should be done again by setting these functions to zero.
Possible discontinuities at the boundary do not play a role. In this way
we preserve the isospectrality of $P_1$ and $P_2$.}
\begin{equation}
\langle H \rangle_F = -\frac T2 \sum_j \omega_j \left( 1 -
\frac{d\lambda^j_\omega}{d\omega}\vert_{\omega=\omega_j}\right)^{-1},
\label{vevH4}
\end{equation}
where, as expected, the overall sign is different from (\ref{vevH3}).
$\omega_j$ now denote the fermionic frequencies.
Due to the isospectrality of bosonic and fermionic fluctuations on
a background of the kink in the presence of boundaries
\begin{equation}
\langle H \rangle_F^{\rm b+k} +\langle H \rangle_B^{\rm b+k}=0 .
\label{Hbk}
\end{equation}
(It is understood that these quantities must be regularized by
replacing $\omega$ with $\omega^{1-2s}$. The calculations proceed
precisely as in the previous section.)
Let us now calculate the boundary contribution $\langle H \rangle^{\rm b}$
to the vacuum expectation value of the Hamiltonian. An effective free field
theory which must be used to calculate boundary contributions was described
in the previous section. The boundary conditions are given by (\ref{Dircon})
and (\ref{Robcon}), and the spectrum of $\lambda_\omega^j$ is defined by
the solutions of the equation $f(\omega | \lambda_\omega)=0$, where
\begin{equation}
f(\omega | \lambda)=\sin (2(k(\lambda)l(\omega)+\alpha(k(\lambda)))),
\qquad k(\lambda)=\sqrt{\lambda^2-m^2}, \label{fol}
\end{equation}
$\alpha(k)=0$ for Dirichlet conditions, $\alpha(k)=-\arctan (S/k)$ for
Robin ones. The quantity
\begin{equation}
h(s)=\sum_j \omega_j^{1-2s} \left( 1 -
\frac{d\lambda^j_\omega}{d\omega}\vert_{\omega=\omega_j}\right)^{-1},
\end{equation}
which is a zeta-regularized expression for the right hand sides of
(\ref{vevH3}) and (\ref{vevH4}), can be represented as a contour
integral
\begin{equation}
h(s)=\frac 1{2\pi i} \oint d\omega\, \omega^{1-2s} \left( 1 -
\frac{d\lambda_\omega}{d\omega}\vert_{\omega=\lambda_\omega}\right)^{-1}
\partial_\omega (\ln f(\omega \vert \omega ) ),\label{hs}
\end{equation}
where the contour encircles $[m,\infty[$. One can write
\begin{equation}
\partial_\omega f(\omega | \omega) =[\partial_\omega f(\omega |\lambda)
+\partial_\lambda f(\omega |\lambda)]_{\lambda =\omega}.\label{209}
\end{equation}
On the other hand, the condition $f(\omega |\lambda_\omega)=0$ defines the
dependence of $\lambda_\omega$ on $\omega$. By differentiating this condition,
one gets
\begin{equation}
0=\partial_\omega f(\omega |\lambda_\omega)=
[\partial_\omega f(\omega|\lambda)]_{\lambda =\lambda_\omega} +
[\partial_\lambda f(\omega|\lambda)]_{\lambda =\lambda_\omega}
\frac{d\lambda_\omega}{d\omega}.\label{215}
\end{equation}
By using (\ref{209}) and (\ref{215}) we rewrite (\ref{hs})
as\footnote{This equation can also be obtained in a different way. As follows
from the analysis of \cite{Fursaev:2001yu,Strelchenko:2007xh}, the factor
$(1-(d\lambda/d\omega))^{-1}$ is the difference between the spectral density
of eigenfrequencies $\omega_j$ and the spectral density of the eigenvalues
$\lambda$ for a given $\omega$ taken at $\lambda=\omega$. The integral
(\ref{hs2}) is simply a sum over the eigenfrequencies with the latter
density.}
\begin{equation}
h(s)=\frac 1{2\pi i} \oint d\omega\, \omega^{1-2s}
[\partial_\lambda \ln f(\omega |\lambda )]_{\lambda =\omega}.
\label{hs2}
\end{equation}
By using the identities which we have just derived one can represent
the zeta-regularized boundary contribution to the v.e.v.\ of $H$ in
the form
\begin{equation}
\langle H \rangle^{\rm b}(s)=
\frac {T\mu^{2s}}4 \, \frac 1{2\pi i}\oint d\omega\, \omega^{1-2s}
[\partial_\lambda (\ln f_D(\omega |\lambda )-
\ln f_R(\omega |\lambda ))]_{\lambda =\omega}\,,\label{Hbs}
\end{equation}
where $f_{D,R}$ correspond to Dirichlet and Robin boundary conditions
respectively (cf.\ eq.\ (\ref{fol}) and the line below). As in the previous
section, on the upper part of the contour we approximate
$\sin (2(kl(\omega)+\alpha))$ by $-(1/2i)\exp (-2i(kl(\omega)+\alpha))$,
and by $(1/2i)\exp (2i(kl(\omega)+\alpha))$ on the lower part. Then the
terms with $l(\omega)$ cancel, and we arrive at the expression
\begin{equation}
\langle H \rangle^{\rm b}(s)=-T\mu^{2s} \int_m^\infty \frac{d\omega}{2\pi}\,
\omega^{1-2s}[\partial_\lambda \alpha (k(\lambda))]_{\lambda =\omega}.
\label{235}
\end{equation}
Next we observe that $[\partial_\lambda \alpha (k(\lambda))]_{\lambda =\omega}
=\partial_\omega \alpha (k(\omega))$ with $k(\omega)=\sqrt{\omega^2-m^2}$
and change the integration variable to $k$.
\begin{equation}
\langle H \rangle^{\rm b}(s)=-T\mu^{2s}
\int_0^\infty \frac{dk}{2\pi}\, (k^2+m^2)^{\frac 12 -s} \partial_k \alpha (k),
\label{239}
\end{equation}
or,
\begin{equation}
\langle H \rangle^{\rm b}(s)=TE^{\rm b}(s). \label{240}
\end{equation}
In the heat kernel subtraction scheme
$\langle H \rangle^{\rm div}(s)=TE^{\rm div}(s)$, so that for the renormalized
values we also have the relation
\begin{equation}
\langle H \rangle^{\rm b}_{\rm ren}=TE^{\rm b}_{\rm ren}.
\label{306}
\end{equation}
Taking into account (\ref{Ekbren}) and (\ref{Hbk}), we conclude that
\begin{equation}
\langle H \rangle^{\rm k}_{\rm ren}=
TE^{\rm k}_{\rm ren}= -T\sqrt{\lambda/2}\, \frac{v_0}{\pi}.\label{Hkren}
\end{equation}
This is a very natural result. It tells us that the interpretation
of the new canonical Hamiltonian as the energy integrated over a
time interval remains valid also at the one-loop level.
\section{Conclusions}\label{sec-con}
In this work we studied quantum corrections to the mass of the
kink of supersymmetric NC $\varphi^4$. Contrary to the
nonsupersymmetric case \cite{Konoplya:2007xt}, the counterterm
required to remove the divergences is precisely the same as in the
commutative theory. The strategy of calculations of the one-loop
corrections was taken from \cite{Bordag:2002dg}. We introduced
boundaries, so that the spectrum of the fluctuations becomes
discrete. Because of the nonlocality of NC theories, the position
of the boundary depends on the frequency of each fluctuation. For
the system of the kink and the boundaries, we used the
isospectrality of bosonic and fermionic fluctuations which follows
from supersymmetry. The total energy of this system vanishes. Then
we subtracted the contribution from the boundaries, which was
calculated in a relatively simple effective theory. The heat
kernel subtraction scheme (which is equivalent to the ``no-tadpole''
normalization condition in two-dimensional commutative models)
gave a value of the mass correction which did not depend on the
NC parameter and coincided with the commutative value.
By making use of an unconventional canonical formalism we were
able to define supercharges (despite the presence of an infinite
number of time derivatives and the absence of locally conserved
currents), and to show that the new brackets of these supercharges
give an analog of the Hamiltonian and an analog of the central
charge. (Note that the supercharges do generate the supersymmetry
transformations, and the Hamiltonian does generate the equations
of motion, provided the new canonical brackets are used). This
Hamiltonian can be interpreted as the energy integrated over an
interval $T$ of the time. The one-loop vacuum expectation value of
this Hamiltonian appears to be the quantum correction to the mass
of the kink times $T$, i.e., the picture remains consistent after
turning on the quantum effects. Although we have two different
definitions of quantum corrections to the energy (one through a
sum over the eigenfrequencies, and the other through the
Hamiltonian of the unconventional canonical formalism), both
definitions give essentially equivalent results.
In a future publication we are going to calculate quantum
corrections to the central charge. This will allow us to check
whether the quantum BPS bound remains saturated in NC theories. It
would also be interesting to consider quantum corrections to
solitons in higher dimensional NC theories.
\section*{Acknowledgments}
This work was supported in part by FAPESP and CNPq.
\section{Introduction}
The set $\Gamma$, subset of the set of nonnegative integers
$\mathbb{N}$, is called a \textit{numerical semigroup} if it is
closed under addition, contains zero and generates $\mathbb{Z}$ as
a group. Every numerical semigroup $\Gamma$ satisfies the following two
fundamental properties (see \cite{rs}): The complement $\mathbb{N}\setminus \Gamma$ is
finite and $\Gamma$ has a unique minimal system of generators
$a_{1} < \cdots < a_{n}$. The greatest integer not belonging to $\Gamma$,
usually denoted by $F(\Gamma)$ is called the \textit{Frobenius number}
of $\Gamma$. The integers $a_{1}$ and $n$, denoted by $m(\Gamma)$ and
$e(\Gamma)$ respectively are known as the \textit{multiplicity} and the
\textit{embedding dimension} of the semigroup $\Gamma$. The
\textit{Ap\'{e}ry set} of $\Gamma$ with respect to a non-zero $a\in \Gamma$ is
defined to be the set $\rm{Ap}(\Gamma,a)=\{s\in \Gamma\mid s-a\notin \Gamma\}$.
In this paper, we study a class of numerical semigroups, which are special
in the sense that each element of the Ap\'{e}ry set $\rm{Ap}(\Gamma,a)$
has a unique representation. Given positive
integers $a_{1} < \cdots < a_{n}$, every numerical semigroup ring
$k[\Gamma] = k[t^{a_{1}}, \ldots , t^{a_{n}}]$ is the coordinate
ring of an affine monomial curve given by the monomial parametrization
$\nu : k[x_{1}, \ldots, x_{n}]\longrightarrow k[t]$, such that
$\nu(x_{i}) = t^{a_{i}}$, $1\leq i\leq n$. The ideal $\ker(\nu)=\mathfrak{p}$
is the defining ideal of the parametrized monomial curve, which is graded
with respect to the weighted gradation.
\medskip
It is known that uniqueness of
representations of the Ap\'{e}ry set elements of a numerical semigroup is
actually quite helpful; see \cite{rs1}, \cite{mss}. Let us call these
numerical semigroups with unique Ap\'{e}ry expansions.
One requires the Ap\'{e}ry table in order to
understand the tangent cone, which is quite hard to compute in general. However,
uniqueness of expressions of the Ap\'{e}ry set elements makes it easier.
In this paper, we will be presenting two classes of numerical semigroups with this property.
In fact, we have stumbled upon these classes while looking for a large
class of numerical semigroups with this property, especially from the standpoint
of computing tangent cones.
\medskip
Let $f(x),g(x)\in\mathbb{Q}[x]$ such that
$f(\mathbb{N})\subset \mathbb{N}$, $g(\mathbb{N})\subset \mathbb{N}$ and
both are increasing; such polynomials are called increasing numerical polynomials. In this paper
we study numerical semigroups minimally generated by integers of the form
$\{a, g(i)a+f(i)d\mid \gcd(a,d)=1, 1\leq i\leq n\}$. Some
of the interesting classes of numerical semigroups that have been studied fall
under this general class. For example, $g(x)=1$, $f(x)=x$ gives an
arithmetic sequence (see \cite{patil}, \cite{pss}) and $g(x)$ = constant,
$f(x)=x$ gives a generalized arithmetic sequence (see \cite{matthews}).
We study the following two cases:
\medskip
\begin{enumerate}
\item When $g(x)=x+1$, $f(x)=\dfrac{x(x+1)}{2}$ and $n=3$; we denote the semigroup
by $\Gamma_{4}$. A complete study has been carried out in sections 2 through 6 in the
following order: the Ap\'{e}ry set, the pseudo-Frobenius numbers, the
defining ideal, syzygies and finally the Ap\'{e}ry table and the tangent cone. All
these are known to be extremely hard to compute in general. We have used the computer
algebra system \cite{GAP4} to form initial guesses for many of the theorems that we
have proved here.
\medskip
\item We take $g(x)$ to be a constant numerical function, $f(x) = r^{x}$ and
define the numerical semigroup $\mathfrak{S}_{n+2}$, in section 7. We
compute the Ap\'{e}ry table and the tangent cone for $\mathfrak{S}_{n+2}$.
\end{enumerate}
\bigskip
\section{Ap\'{e}ry set of $\Gamma_{4}$}
We now consider the numerical semigroup generated
by the positive integers $s_{1},\ldots, s_{4}$, where $d>0$ and $a>0$ are integers with
$\gcd(a,d)=1$, and $s_{n}=\frac{n}{2}[2a+(n-1)d]$, for $1\leq n\leq 4$. We denote this numerical semigroup
by $\Gamma_{4}$, the semigroup ring by $k[\Gamma_{4}]$ and the defining ideal
by $\mathfrak{p}_{4}$. We will see in the next Theorem that we need to impose some
bounds on $a$ so that $\{s_{1},\ldots, s_{4}\}$ is a minimal generating
set for the numerical semigroup $\Gamma_{4}$.
\medskip
\begin{theorem}\label{min4}
Let $d>0$ and $a\geq 7$ be integers with $\gcd(a,d)=1$. Let
$s_{n}=\frac{n}{2}[2a+(n-1)d]$, for $1\leq n\leq 4$. The set
$T_{4}=\{s_{1},\ldots, s_{4}\}$ is a minimal generating
set for the numerical semigroup $\Gamma_{4} = \langle s_{1},\ldots, s_{4} \rangle$.
\end{theorem}
\proof
Suppose $s_{3}=m_{1}s_{1}+m_{2}s_{2}$, for some $m_{1},m_{2}\geq 0$. We get,
\begin{eqnarray}\label{eq1}
(m_{1}+2m_{2}-3)a &=& (3-m_{2})d.
\end{eqnarray}
Since $\gcd(a,d)=1$, we get $a\mid (3-m_{2})$.
\medskip
If $m_{2}\leq 3$, then $3=m_{2}+ka$, for some $k\geq 0$, and we get
$m_{2}=3$, since $a\geq 7$. Therefore, $m_{1}+3 =0$ (using equation \ref{eq1}) -
a contradiction. If $m_{2}> 3$, then the L.H.S. of equation
\ref{eq1} is positive whereas the R.H.S. is negative - a contradiction.
\medskip
Suppose $s_{4}=m_{1}s_{1}+m_{2}s_{2}+m_{3}s_{3}$, for some $m_{1},m_{2},m_{3}\geq 0$.
We get,
\begin{eqnarray}\label{eq2}
(m_{1}+2m_{2}+3m_{3}-4)a &=& (6-m_{2}-3m_{3})d.
\end{eqnarray}
Therefore $a\mid (6-m_{2}-3m_{3})$, since $\gcd(a,d)=1$.
\medskip
If $m_{2}+3m_{3}\leq 6$, then $6=m_{2}+3m_{3}+ka$, for some $k\geq 0$. Therefore,
we get $6=m_{2}+3m_{3}$, since $a\geq 7$. Possible solutions for $(m_{2}, m_{3})$
are $(0,2)$, $(3,1)$, $(6,0)$. Substituting these values of
$m_{2},m_{3}$ in the equation \ref{eq2}, we get $m_{1}<0$ - a contradiction.
\medskip
If $m_{2}+3m_{3}> 6$, then the L.H.S. of the equation \ref{eq2} is positive,
whereas the R.H.S. is negative - a contradiction. \qed
\medskip
\begin{theorem}\label{apery}
Let $a\geq 7$. For each $1\leq i\leq a-1$, let $i=6\mu_{i}+q_{i}$,
such that $0\leq q_{i}< 6$. For each $ 1\leq i\leq a-1 $, we define $\nu_{i},\xi_{i}$
as follows;
\begin{enumerate}
\item[(i)] $(\nu_{i},\xi_{i})=(1,q_{i}-3)$, if $q_{i}\geq 3$;
\item[(ii)] $(\nu_{i},\xi_{i})=(0,q_{i})$, if $q_{i}< 3$.
\end{enumerate}
Let $\mathrm{Ap}(\Gamma_{4},a)$ denote the Ap\'{e}ry set of $\Gamma_{4}$,
with respect to the element $a$. Then
$\mathrm{Ap}(\Gamma_{4},a)=\{(4\mu_{i}+3\nu_{i}+2\xi_{i})a+id \mid 1\leq i\leq a-1 \}\cup \{0\}$.
\end{theorem}
\proof Let $T=\{(4\mu_{i}+3\nu_{i}+2\xi_{i})a+id\mid 1\leq i\leq a-1 \}$. We notice that
$i=6\mu_{i}+3\nu_{i}+\xi_{i}$, therefore for $1\leq i\leq a-1$, we have
$$(4\mu_{i}+3\nu_{i}+2\xi_{i})a+id=\mu_{i}(4a+6d)+\nu_{i}(3a+3d)+\xi_{i}(2a+d)\in \Gamma_{4}.$$
Hence $T\subset \Gamma_{4}$. Let $s\in \mathrm{Ap}(\Gamma_{4},a)\setminus\{0\}$, with
$s\equiv id(\mbox{mod} \, a)$.
Suppose
\begin{align*}
s&=c_{1}(2a+d)+c_{2}(3a+3d)+c_{3}(4a+6d)\\
&=(2c_{1}+3c_{2}+4c_{3})a+(c_{1}+3c_{2}+6c_{3})d,
\end{align*}
then $(c_{1}+3c_{2}+6c_{3})\equiv i(\mbox{mod} \, a)$, as $\gcd(a,d)=1$. Therefore
\begin{equation}\label{eqnap}
c_{1}+3c_{2}+6c_{3}= i +ka=6\mu_{i}+q_{i}+ka,
\end{equation}
for some $k\geq 0$. It is enough to show that $4c_{3}+3c_{2}+2c_{1}\geq 4\mu_{i}+3\nu_{i}+2\xi_{i}$.
Suppose $4c_{3}+3c_{2}+2c_{1}< 4\mu_{i}+3\nu_{i}+2\xi_{i}$; then from \ref{eqnap},
substituting $\mu_{i}$, we have
\begin{equation}\label{ineq}
6\xi_{i}+9\nu_{i}-2q_{i}>4c_{1}+3c_{2}+2ka .
\end{equation}
We consider the following cases:
\medskip
\textbf{Case A.} If $q_{i}=0$, then $(\nu_{i},\xi_{i})=(0,0)$, and from \ref{ineq} we get
$0>4c_{1}+3c_{2}+2ka$, which is impossible.
\medskip
\textbf{Case B.} If $q_{i}=1$, then $(\nu_{i},\xi_{i})=(0,1)$ and
from \ref{ineq} we get $4>4c_{1}+3c_{2}+2ka$. Therefore $k=0$ and
$(c_{1},c_{2})\in\{(0,0),(0,1)\}$. Putting values of $c_{1}$ and
$c_{2}$ in equation \ref{eqnap}, we get the following:
$$
\begin{cases}
6c_{3}=6\mu_{i}+1 & \mbox{if} \quad (c_{1},c_{2})=(0,0),\\
6c_{3}=6\mu_{i}-2 & \mbox{if} \quad (c_{1},c_{2})=(0,1).
\end{cases}
$$
All lead to contradictions.
\medskip
\textbf{Case C.} If $q_{i}=2$, then $(\nu_{i},\xi_{i})=(0,2)$, and
from \ref{ineq} we get $8>4c_{1}+3c_{2}+2ka$.
Therefore $k=0$ and $(c_{1},c_{2})\in\{(0,0),(1,0),(1,1),(0,1),(0,2)\}$.
Putting values of $c_{1}$ and $c_{2}$ in equation \ref{eqnap}, we get
the following:
$$
\begin{cases}
6c_{3}=6\mu_{i}+2 & \mbox{if} \quad (c_{1},c_{2})=(0,0),\\
6c_{3}=6\mu_{i}+1 & \mbox{if} \quad (c_{1},c_{2})=(1,0),\\
6c_{3}=6\mu_{i}-2 & \mbox{if} \quad (c_{1},c_{2})=(1,1),\\
6c_{3}=6\mu_{i}-1 & \mbox{if} \quad (c_{1},c_{2})=(0,1),\\
6c_{3}=6\mu_{i}-4 & \mbox{if} \quad (c_{1},c_{2})=(0,2).
\end{cases}
$$
All lead to contradictions.
\medskip
\textbf{Case D.} If $q_{i}=3$, then $(\nu_{i},\xi_{i})=(1,0)$, and
from \ref{ineq} we get $3>4c_{1}+3c_{2}+2ka$. Therefore $k=0$ and
$(c_{1},c_{2})=(0,0)$. Then from equation \ref{eqnap}, we get $6c_{3}=6\mu_{i}+3$;
which is not possible.
\medskip
\textbf{Case E.} If $q_{i}=4$, then $(\nu_{i},\xi_{i})=(1,1)$, and
from \ref{ineq} we get $7>4c_{1}+3c_{2}+2ka$. Therefore $k=0$ and
$(c_{1},c_{2})\in\{(0,0),(1,0),(0,1),(0,2)\}$. Putting values of
$c_{1}$ and $c_{2}$ in equation \ref{eqnap}, we get the following:
$$
\begin{cases}
6c_{3}=6\mu_{i}+4 & \mbox{if} \quad (c_{1},c_{2})=(0,0),\\
6c_{3}=6\mu_{i}+3 & \mbox{if} \quad (c_{1},c_{2})=(1,0),\\
6c_{3}=6\mu_{i}+1 & \mbox{if} \quad (c_{1},c_{2})=(0,1),\\
6c_{3}=6\mu_{i}-2 & \mbox{if} \quad (c_{1},c_{2})=(0,2).
\end{cases}
$$
All lead to contradictions.
\medskip
\textbf{Case F.} If $q_{i}=5$, then $(\nu_{i},\xi_{i})=(1,2)$, and from \ref{ineq} we get $11>4c_{1}+3c_{2}+2ka$.
Therefore $k=0$ and
$$(c_{1},c_{2})\in\{(0,0),(0,1),(0,2),(0,3),(1,0),(1,1),(1,2),(2,0)\}.$$
Putting
values of $c_{1}$ and $c_{2}$ in equation \ref{eqnap}, we get the following:
$$
\begin{cases}
6c_{3}=6\mu_{i}+5 & \mbox{if} \quad (c_{1},c_{2})=(0,0),\\
6c_{3}=6\mu_{i}+2 & \mbox{if} \quad (c_{1},c_{2})=(0,1),\\
6c_{3}=6\mu_{i}-1 & \mbox{if} \quad (c_{1},c_{2})=(0,2),\\
6c_{3}=6\mu_{i}-4 & \mbox{if} \quad (c_{1},c_{2})=(0,3),\\
6c_{3}=6\mu_{i}+4 & \mbox{if} \quad (c_{1},c_{2})=(1,0),\\
6c_{3}=6\mu_{i}+1 & \mbox{if} \quad (c_{1},c_{2})=(1,1),\\
6c_{3}=6\mu_{i}-2& \mbox{if} \quad (c_{1},c_{2})=(1,2),\\
6c_{3}=6\mu_{i}+3 & \mbox{if} \quad (c_{1},c_{2})=(2,0).
\end{cases}
$$
All lead to contradictions. \qed
\medskip
\noindent A worked example is given in Example \ref{atexample}.
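\medskip
\noindent The closed form of Theorem \ref{apery} can also be checked against a direct
brute-force computation, using the standard fact that $\mathrm{Ap}(\Gamma,a)$ consists of
the least element of $\Gamma$ in each residue class modulo $a$. The following Python
sketch is only our illustration (the function names are ours, and the bound
\texttt{limit} is a heuristic choice that is ample for the examples in this paper):
\begin{verbatim}
def semigroup_upto(gens, limit):
    # in_S[n] == True  <=>  n is a nonnegative integer combination of gens
    in_S = [False] * (limit + 1)
    in_S[0] = True
    for n in range(1, limit + 1):
        in_S[n] = any(n >= g and in_S[n - g] for g in gens)
    return in_S

def apery(gens, a, limit=None):
    limit = limit or a * max(gens)   # heuristic bound
    in_S = semigroup_upto(gens, limit)
    ap = {}                          # least semigroup element per class mod a
    for n in range(limit + 1):
        if in_S[n] and (n % a) not in ap:
            ap[n % a] = n
    return sorted(ap.values())

a, d = 11, 24                        # the data of the worked example in Section 6
gens = [a, 2*a + d, 3*a + 3*d, 4*a + 6*d]
print(apery(gens, a))  # [0, 46, 92, 105, 151, 188, 197, 234, 280, 293, 339]
\end{verbatim}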
\bigskip
\section{Pseudo Frobenius numbers and type of $\Gamma_{4}$}
\begin{definition}
Let $\Gamma$ be a numerical semigroup. We say that $x\in\mathbb{Z}$ is a
\textit{pseudo-Frobenius number} if $x\notin \Gamma$ and $x+s\in \Gamma$
for all $s\in \Gamma\setminus \{0\}$. We denote by $\mathbf{PF}(\Gamma)$
the set of pseudo-Frobenius numbers of $\Gamma$. The cardinality of
$\mathbf{PF}(\Gamma)$ is denoted by $t(\Gamma)$ and we call it the
\textit{type} of $\Gamma$.
\end{definition}
Let $a,b\in \mathbb{Z}$. We define $\leq_{\Gamma}$ as $a\leq_{\Gamma} b$
if $b-a\in \Gamma$. This order relation defines a poset structure on
$\mathbb{Z}$.
\begin{theorem}\label{max}
Let $\Gamma$ be a numerical semigroup and $a\in\Gamma\setminus \{0\}$. Then
$\mathbf{PF}(\Gamma)=\{w-a\mid w\in \,\mathrm{Maximals}_{\leq_{\Gamma}}Ap(\Gamma, a)\}$.
\end{theorem}
\proof See Proposition 8 in \cite{as}.\qed
\medskip
Let $\omega(i)=(4\mu_{i}+3\nu_{i}+2\xi_{i})a+id$, for $1\leq i\leq a-1$.
Therefore, $\mathrm{Ap}(\Gamma_{4},a)=\{\omega(i)\mid 1\leq i\leq a-1 \}$.
\medskip
\begin{theorem}\label{min5}
Let $a\geq 7$ and $d$ be two integers such that $\gcd(a,d)=1$. Suppose
$\Gamma_{4}=\langle s_{1},\ldots, s_{4} \rangle$, where $s_{n}=\frac{n}{2}[2a+(n-1)d]$,
for $1\leq n\leq 4$. Let $\mathbf{PF}(\Gamma_{4})$ be the set of pseudo Frobenius numbers
of the numerical semigroup $\Gamma_{4}$. Write $a=6m+q$, $0\leq q\leq 5$; then
\begin{eqnarray*}
\mathbf{PF}(\Gamma_{4}) & = &
\begin{cases}
\{\omega(a-1)\}, \quad \mbox{if} \quad q=0;\\
\{\omega(a-1),\omega(a-2)\}, \quad \mbox{if} \quad q=1;\\
\{\omega(a-1),\omega(a-3)\}, \quad \mbox{if} \quad q=2;\\
\{\omega(a-1),\omega(a-4)\}, \quad \mbox{if} \quad q=3;\\
\{\omega(a-1),\omega(a-2),\omega(a-5)\}, \quad \mbox{if} \quad q=4;\\
\{\omega(a-1),\omega(a-3),\omega(a-6)\}, \quad \mbox{if} \quad q=5.
\end{cases}
\end{eqnarray*}
\end{theorem}
\proof We first note that $\omega(i+6)- \omega(i)=4a+6d\in\Gamma_{4}$,
for $0\leq i<a-6$. Therefore, $\omega(i)\leq_{\Gamma_{4}} \omega(i+6)$,
$0\leq i<a-6$. Hence, by Theorem \ref{max}, $\mathbf{PF}(\Gamma_{4})\subset \{\omega(a-i)\mid 1\leq i\leq 6\}$. The theorem now follows easily by checking each case.\qed
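\medskip
\noindent Theorem \ref{max} also makes the pseudo-Frobenius numbers easy to verify by
machine: one selects the maximal elements of $\mathrm{Ap}(\Gamma_{4},a)$ with respect to
$\leq_{\Gamma_{4}}$ and subtracts $a$. The following continuation of the Python sketch
from Section 2 (again ours, purely illustrative) confirms the case $q=5$ of
Theorem \ref{min5}:
\begin{verbatim}
def pseudo_frobenius(gens, a):
    # PF = { w - a : w maximal in Ap(S,a), where v <=_S v' iff v' - v in S }
    limit = a * max(gens)
    in_S = semigroup_upto(gens, limit)
    ap = apery(gens, a, limit)
    maximals = [w for w in ap
                if not any(v > w and in_S[v - w] for v in ap)]
    return sorted(w - a for w in maximals)

# a = 11 = 6*1 + 5 and d = 24, so q = 5
print(pseudo_frobenius([11, 46, 105, 188], 11))
# [186, 269, 328], i.e. w(5) - a, w(8) - a, w(10) - a
\end{verbatim}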
\medskip
\begin{corollary} Let $\mathrm{Der}_{k}(\Gamma_{4})$ be the set of $k$-derivations of $k[[t^{a},t^{2a+d},t^{3a+3d},t^{4a+6d}]]$, then
$$ \mathrm{Der}_{k}(\Gamma_{4})=\{t^{\alpha+1}\mid \alpha\in \mathbf{PF}(\Gamma_{4}) \}.$$
\end{corollary}
\proof Follows from \cite{kraft}, page 875.\qed
\medskip
\begin{corollary}
Let $F(\Gamma_{4})$ be the Frobenius number of $\Gamma_{4}$. Then
\begin{enumerate}[(i)]
\item $F(\Gamma_{4})=\omega(a-1)$, \quad if $q\in\{0,3,5\}$;
\medskip
\item If $q=1$, then\\
$F(\Gamma_{4})=
\begin{cases}
\omega(a-2) & \mbox{if} \quad 3a>d;\\
\omega(a-1) & \mbox{otherwise}.
\end{cases}$
\medskip
\item If $q=2$ then\\
$F(\Gamma_{4})=
\begin{cases}
\omega(a-3) & \mbox{if} \quad a>2d;\\
\omega(a-1) & \mbox{otherwise}.
\end{cases}$
\medskip
\item If $q=4$ then\\
$F(\Gamma_{4})=
\begin{cases}
\omega(a-2) & \mbox{if} \quad a>d;\\
\omega(a-1) & \mbox{otherwise}.
\end{cases}$
\end{enumerate}
\end{corollary}
\proof One can easily find the maximum element using Theorem \ref{min5}. \qed
\bigskip
\section{Minimal generating set for the defining ideal $\mathfrak{p}_{4}$}
Let us begin with the following theorem from \cite{g}, which helps us compute
a minimal generating set for the defining ideal of a monomial curve.
\medskip
\begin{theorem}\label{gastinger}
Let $A = k[x_{1},\ldots,x_{n}]$ be a polynomial ring, $I\subset A$ the defining
ideal of a monomial curve defined by natural numbers $a_{1},\ldots,a_{n}$,
whose greatest common divisor is $1$. Let $J \subset I$ be a subideal.
Then $J = I$ if and only if $\mathrm{dim}_{k} A/\langle J + (x_{i}) \rangle =a_{i}$
for some $i$. (Note that the above conditions are also equivalent to
$\mathrm{dim}_{k} A/\langle J + (x_{i}) \rangle =a_{i}$ for any $i$.)
\end{theorem}
\proof See \cite{g}.\qed
\medskip
\begin{lemma}\label{equal}
Let $A = k[x_{1},\ldots,x_{n}]$ be a polynomial ring. For a monomial ideal $J$ of $A$, we write
the unique minimal generating set of $J$ as $G(J)$. Let $I=\langle f_{1},\ldots f_{k}\rangle$ and
$I_{i}=\langle f_{1},\ldots,\hat{f}_{i},\ldots f_{k}\rangle$, $1\leq i\leq k$.
Suppose that, with respect to some monomial order on $A$, $\{ \mathrm{LT}(f_{1}),\ldots \mathrm{LT}(f_{k})\} \subset G(\mathrm{LT}(I)) $ and
$G(\mathrm{LT}(I_{i})) \subset G(\mathrm{LT}(I))\setminus \{\mathrm{LT}(f_{i})\}$ for all $1\leq i\leq k$.
Then $I$ is minimally generated by $\{f_{1},\ldots f_{k}\}$.
\end{lemma}
\proof Suppose $I$ is not minimally generated by $\{f_{1},\ldots f_{k}\}$. Then there is a
polynomial $f_{i}$ such that $f_{i}\in I_{i}$. Therefore there is a monomial
$m\in G(\mathrm{LT}(I_{i}))$, such that $m\mid \mathrm{LT}(f_{i})$. But $m$ and
$\mathrm{LT}(f_{i})$ are distinct elements of $G(\mathrm{LT}(I))$, which gives a
contradiction. \qed
\medskip
\noindent\textbf{Notations.} We now introduce some notations specific to the polynomial ring with
$4$ variables $A=k[x_{1},x_{2},x_{3},x_{4}]$. Let $m$ and $d$ be fixed positive integers.
We define the subsets
$H_{0}, H_{1}, H_{2}, H_{3}, H_{4}, H_{5}$ of $A$ as follows:
\medskip
\begin{enumerate}
\item[(i)] $H_{0}=\{x_{3}^{2}-x_{1}^{2}x_{4},x_{2}^{3}-x_{1}^{3}x_{3},x_{1}^{4m+d}-x_{4}^{m}\}$.
\medskip
\item[(ii)]
\begin{itemize}
\item[(a)]
$H_{1}=\{x_{3}^{2}-x_{1}^{2}x_{4},x_{2}^{3}-x_{1}^{3}x_{3},x_{1}^{7}-x_{2}x_{4},x_{1}^{4}x_{2}^{2}-x_{3}x_{4},x_{1}^{2}x_{2}^{2}x_{3}-x_{4}^{2} \} $, if $m=d=1$.
\medskip
\item[(b)]
$H_{1}=\{x_{3}^{2}-x_{1}^{2}x_{4},x_{2}^{3}-x_{1}^{3}x_{3},x_{1}^{(4m+d-6)}x_{2}^{5}-x_{4}^{m+1},x_{1}^{(4m+d-1)}x_{2}^{2}-x_{3}x_{4}^{m},x_{1}^{(4m+d+2)}-x_{2}x_{4}^{m}\}$, otherwise.
\end{itemize}
\medskip
\item[(iii)] $H_{2}=\{x_{3}^{2}-x_{1}^{2}x_{4},x_{2}^{3}-x_{1}^{3}x_{3},x_{1}^{(4m+d-4)}x_{2}^{4}-x_{4}^{m+1},x_{1}^{(4m+d+1)}x_{2}-x_{3}x_{4}^{m},x_{1}^{(4m+d+4)}-x_{2}^{2}x_{4}^{m}\}$.
\medskip
\item[(iv)] $H_{3}=\{x_{3}^{2}-x_{1}^{2}x_{4},x_{2}^{3}-x_{1}^{3}x_{3},x_{1}^{(4m+d-2)}x_{2}^{3}-x_{4}^{m+1},x_{1}^{(4m+d+3)}-x_{3}x_{4}^{m}\}$.
\medskip
\item[(v)] $H_{4}=\{x_{3}^{2}-x_{1}^{2}x_{4},x_{2}^{3}-x_{1}^{3}x_{3},x_{1}^{(4m+d)}x_{2}^{2}-x_{4}^{m+1},x_{1}^{(4m+d+5)}-x_{2}x_{3}x_{4}^{m}\}$.
\medskip
\item[(vi)] $H_{5}=\{x_{3}^{2}-x_{1}^{2}x_{4},x_{2}^{3}-x_{1}^{3}x_{3},x_{1}^{(4m+d+2)}x_{2}-x_{4}^{m+1},x_{1}^{(4m+d+7)}-x_{2}^{2}x_{3}x_{4}^{m}\}$.
\end{enumerate}
\medskip
\begin{theorem}\label{mingen}
Suppose $a=6m+q$, where $0\leq q\leq5$. Then $H_{q}$ is a minimal generating set for the
ideal $\mathfrak{p}_{4}$.
\end{theorem}
\proof By Theorem \ref{gastinger}, it suffices to show that $\mathrm{dim}_{k}(A/\langle H_{q},x_{1}\rangle)=a$.
Let $B=k[x_{2},x_{3},x_{4}]$ and $H^{'}_{q}=\langle H_{q},x_{1}\rangle/\langle x_{1}\rangle\subset B$. Therefore we need to show that $\mathrm{dim}_{k}(B/\langle H_{q}^{'}\rangle)=a$. Let $\kappa_{q}$ be the dimension of the vector space $B/\langle H_{q}^{'}\rangle$. We define
$$\mathfrak{B}=\{x_{2}^{i}x_{3}^{j}x_{4}^{k}\mid 0\leq i\leq 2,0\leq j\leq 1,0\leq k\leq m \}.$$
and show that the image of the set $\mathfrak{B}\setminus\mathfrak{B}_{q}$ forms a basis of the
vector space $B/\langle H_{q}^{'}\rangle$, through the following cases:
\begin{enumerate}[(A)]
\item We have $H_{0}^{'}=\{x_{3}^{2},x_{2}^{3},x_{4}^{m}\}$ and $\mathfrak{B}_{0}=\{x_{4}^{m},x_{2}x_{4}^{m}, x_{2}^{2}x_{4}^{m},x_{3}x_{4}^{m},x_{2}x_{3}x_{4}^{m}, x_{2}^{2}x_{3}x_{4}^{m}\} $. Hence $\kappa_{0}=6m$.
\medskip
\item $H_{1}^{'}=\{x_{3}^{2},x_{2}^{3},x_{4}^{m+1},x_{3}x_{4}^{m},x_{2}x_{4}^{m}\}$ and $\mathfrak{B}_{1}= \{x_{2}x_{4}^{m}, x_{2}^{2}x_{4}^{m},x_{3}x_{4}^{m},x_{2}x_{3}x_{4}^{m}, x_{2}^{2}x_{3}x_{4}^{m}\}$.
Hence $\kappa_{1}=6m+1$.
\medskip
\item $H_{2}^{'}=\{x_{3}^{2},x_{2}^{3},x_{4}^{m+1},x_{3}x_{4}^{m},x_{2}^{2}x_{4}^{m}\}$ and
$\mathfrak{B}_{2}=\{ x_{2}^{2}x_{4}^{m},x_{3}x_{4}^{m},x_{2}x_{3}x_{4}^{m}, x_{2}^{2}x_{3}x_{4}^{m}\}$.
Therefore $\kappa_{2}=6m+2$.
\medskip
\item $H_{3}^{'}=\{x_{3}^{2},x_{2}^{3},x_{4}^{m+1},x_{3}x_{4}^{m}\}$ and $\mathfrak{B}_{3}= \{x_{3}x_{4}^{m},x_{2}x_{3}x_{4}^{m}, x_{2}^{2}x_{3}x_{4}^{m}\}$. Hence $\kappa_{3}=6m+3$.
\medskip
\item $H_{4}^{'}=\{x_{3}^{2},x_{2}^{3},x_{4}^{m+1},x_{2}x_{3}x_{4}^{m}\}$ and $\mathfrak{B}_{4}=\{x_{2}x_{3}x_{4}^{m}, x_{2}^{2}x_{3}x_{4}^{m}\}$. Hence $\kappa_{4}=6m+4$.
\medskip
\item $H_{5}^{'}=\{x_{3}^{2},x_{2}^{3},x_{4}^{m+1},x_{2}^{2}x_{3}x_{4}^{m}\}$ and $\mathfrak{B}_{5}= \{ x_{2}^{2}x_{3}x_{4}^{m}\}$. Hence $\kappa_{5}=6m+5$.
\end{enumerate}
\noindent We now apply Lemma \ref{equal} to each case to prove that these indeed give us the
minimal generating sets for the ideal $\mathfrak{p}_{4}$ in the various cases. \qed
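\medskip
\noindent Since each $H_{q}^{'}$ is a monomial ideal, the dimension counts in the proof
amount to counting standard monomials, which is easy to automate. The sketch below is
our illustration (in the same spirit as the \cite{GAP4} experiments mentioned in the
introduction); it confirms $\kappa_{1}=7=a$ for $m=d=1$, and the other cases are checked
in the same way by changing the exponent vectors and the bounding box:
\begin{verbatim}
from itertools import product

def kappa(monomial_gens, box):
    # count monomials x2^i x3^j x4^k inside the box divisible by no generator
    def divisible(e, g):
        return all(ei >= gi for ei, gi in zip(e, g))
    return sum(1 for e in product(*(range(b) for b in box))
               if not any(divisible(e, g) for g in monomial_gens))

# m = d = 1 (so a = 7): H_1' = {x3^2, x2^3, x4^2, x3*x4, x2*x4},
# written as exponent vectors (i, j, k) of x2^i x3^j x4^k
H1 = [(0, 2, 0), (3, 0, 0), (0, 0, 2), (0, 1, 1), (1, 0, 1)]
print(kappa(H1, (4, 3, 3)))   # 7
\end{verbatim}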
\bigskip
\section{Syzygies of $k[\Gamma_{4}]$}
\begin{lemma}\label{reg}
Suppose $a=6m$, and $\gcd(a,d)=1$. Then, the set
$\{x_{3}^{2}-x_{1}^{2}x_{4},x_{2}^{3}-x_{1}^{3}x_{3},x_{1}^{4m+d}-x_{4}^{m}\}$ forms a regular
sequence in $A$.
\end{lemma}
\proof With respect to the lexicographic monomial order induced by
$x_{2}>x_{3}>x_{1}>x_{4}$ on $A$, the leading terms of these polynomials
are mutually coprime. Hence the set
$\{x_{3}^{2}-x_{1}^{2}x_{4},x_{2}^{3}-x_{1}^{3}x_{3},x_{1}^{4m+d}-x_{4}^{m}\}$
forms a regular sequence.\qed
\medskip
\begin{corollary}\label{ci}
Suppose $a=6m$ and $\gcd(a,d)=1$, then the Koszul complex resolves
$A/\mathfrak{p}_{4}$ and the Betti numbers are $\beta_{0}=1,\beta_{1}=3,\beta_{2}=3,\beta_{3}=1$.
Hence the ring $A/\mathfrak{p}_{4}$ is a complete intersection.
\end{corollary}
\proof The proof follows from Lemma \ref{reg}.\qed
\medskip
\begin{lemma}\label{regsequence}
Let $m,d$ be two positive integers; consider the polynomials $g_{1}=-x_{1}^{(4m+d+2)}+x_{2}x_{4}^{m}$,
$g_{2}=x_{1}^{5}x_{4}-x_{2}^{3}x_{3}$ and $g_{3}=x_{1}^{(4m+d-1)}x_{2}^{2}-x_{3}x_{4}^{m}$. The
set $\{g_{1},g_{2},g_{3}\}$ forms a regular sequence in $A=k[x_{1},x_{2},x_{3},x_{4}]$.
\end{lemma}
\proof Let $x_{3}>x_{1}>x_{2}>x_{4}$ induce the lexicographic monomial order on $A$.
Then $\mathrm{Lt}(g_{1})=-x_{1}^{(4m+d+2)}$, $\mathrm{Lt}(g_{2})=-x_{2}^{3}x_{3}$ and
$\mathrm{Lt}(g_{3})=-x_{3}x_{4}^{m}$. Let $\mathfrak{G}=\langle g_{1},g_{2}\rangle$. Since
$\gcd(\mathrm{Lt}(g_{1}),\mathrm{Lt}(g_{2}))=1$, the set $\{g_{1},g_{2}\}$ forms a
Gr\"{o}bner basis of $\mathfrak{G}$ with respect to the chosen monomial order, and it
forms a regular sequence. Suppose $g_{3}h\in \mathfrak{G}$; we have to show that $h\in \mathfrak{G}$. After division we may
assume that $\mathrm{Lt}(g_{1})\nmid \mathrm{Lt}(h)$ and $\mathrm{Lt}(g_{2})\nmid \mathrm{Lt}(h)$.
Since $g_{3}h\in \mathfrak{G}$ and $\mathrm{Lt}(g_{1})\nmid \mathrm{Lt}(h)$,
$\mathrm{Lt}(g_{2})\nmid \mathrm{Lt}(h)$, we have $x_{2}^{3}\mid \mathrm{Lt}(h)$ and
$x_{3}\nmid \mathrm{Lt}(h)$. We write $h=m_{0}+\cdots +m_{r}$, where each $m_{i}$'s
are monomials and $m_{0}>\cdots>m_{r}$, with respect to the chosen monomial order.
Since $x_{3}>x_{1}>x_{2}>x_{4}$ is the lexicographic monomial order on $A$,
$x_{3}\nmid m_{0}$ implies $x_{3}\nmid m_{i}$, for $1\leq i\leq r$. Suppose
$x_{1}^{l_{i}}\mid m_{i}$ but $x_{1}^{l_{i}+1}\nmid m_{i}$, then $i<j$ implies
$l_{j}\leq l_{i}<4m+d+2$. Let $m_{0}=x_{2}^{3}m_{0}^{'}$, then
$$(x_{1}^{(4m+d-1)}x_{2}^{2}-x_{3}x_{4}^{m})(x_{2}^{3}m_{0}^{'}+m_{1}+\cdots+m_{r})\in \mathfrak{G} .$$ After dividing by $g_{2}$ we get
$$(x_{1}^{(4m+d-1)}x_{2}^{5}-x_{1}^{5}x_{4}^{m+1})m_{0}^{'}+(x_{1}^{(4m+d-1)}x_{2}^{2}-x_{3}x_{4}^{m})(m_{1}+\cdots+m_{r})\in \mathfrak{G}.$$ Then leading term of above polynomial is $-x_{3}x_{4}^{m}m_{1}$ and we can divide by $g_{2}$. Continuing this way we get
$$(x_{1}^{(4m+d-1)}x_{2}^{5}-x_{1}^{5}x_{4}^{m+1})(m_{0}^{'}+\cdots+m_{r}^{'})\in \mathfrak{G},$$
where $m_{i}=x_{2}^{3}m_{i}^{'}$, for $0\leq i\leq r$. Notice that $m_{0}^{'}>\cdots>m_{r}^{'}$.
If $m=d=1$, then $-x_{1}^{5}x_{4}^{2}m_{0}^{'}$, otherwise $x_{1}^{(4m+d-1)}x_{2}^{5}m_{0}^{'}$
is the leading term of the above polynomial.
\medskip
\noindent\textbf{Case 1.}
Suppose $m=d=1$, then
$$(x_{1}^{4}x_{2}^{5}-x_{1}^{5}x_{4}^{2})(m_{0}^{'}+\cdots+m_{r}^{'})\in \mathfrak{G}.$$
We have $\mathrm{Lt}(g_{1})=-x_{1}^{7}\mid -x_{1}^{5}x_{4}^{2}m_{0}^{'}$
(since $x_{3}\nmid m_{0}=\mathrm{Lt}(h)$), hence $x_{1}^{2}\mid m_{0}^{'}$.
Let $m_{0}=x_{2}^{3}m_{0}^{'}=x_{1}^{2}x_{2}^{3}m_{0}^{''}$. After dividing by
$g_{1}$ we get $$(x_{1}^{6}x_{2}^{5}-x_{2}x_{4}^{3})m_{0}^{''}+(x_{1}^{4}x_{2}^{5}-x_{1}^{5}x_{4}^{2})(m_{1}^{'}+\cdots+m_{r}^{'})\in \mathfrak{G}.$$ We continuously divide the above polynomial
by $g_{1}$ and we get,
$$(x_{1}^{6}x_{2}^{5}-x_{2}x_{4}^{3})(m_{0}^{''}+\cdots+m_{s}^{''} )+(x_{1}^{4}x_{2}^{5}-x_{1}^{5}x_{4}^{2})(m_{s+1}^{'}+\cdots+m_{r}^{'}) \in \mathfrak{G},$$
where $0\leq s\leq r$, and for $0\leq i\leq s$ we have $m_{i}^{'}=x_{1}^{2}m_{i}^{''}$
and the leading term is $x_{1}^{6}x_{2}^{5}m_{0}^{''}$. Therefore
$x_{1}^{2}\nmid m_{i}^{'}$ for $s+1\leq i\leq r$. Hence
$\mathrm{Lt}(g_{1})=-x_{1}^{7}\mid x_{1}^{6}x_{2}^{5}m_{0}^{''}$,
therefore $x_{1}\mid m_{0}^{''}$. Let $m_{0}=x_{2}^{3}x_{1}^{3}m_{0}^{'''}$,
then we have
$$(x_{2}^{6}x_{4}-x_{4}^{3}x_{2}x_{1})m_{0}^{'''}+(x_{1}^{6}x_{2}^{5}-x_{2}x_{4}^{3})(m_{1}^{''}+\cdots+m_{s}^{''} )+(x_{1}^{4}x_{2}^{5}-x_{1}^{5}x_{4}^{2})(m_{s+1}^{'}+\cdots+m_{r}^{'}) \in \mathfrak{G}.$$
Again we continuously divide by $g_{1}$ and for $0\leq s\leq r$ we get
$$(x_{2}^{6}x_{4}-x_{4}^{3}x_{2}x_{1})(m_{0}^{'''}+\cdots+m_{s}^{'''})+(x_{1}^{4}x_{2}^{5}-x_{1}^{5}x_{4}^{2})(m_{s+1}^{'}+\cdots+m_{r}^{'}) \in \mathfrak{G}.$$
If the leading term of above polynomial is $-x_{1}^{5}x_{4}^{2}m_{s+1}^{'}$, then
$\mathrm{Lt}(g_{1})\nmid -x_{1}^{5}x_{4}^{2}m_{s+1}^{'} $ (as $x_{1}^{2}\nmid m_{s+1}^{'}$ ).
Therefore the leading term of above polynomial is $-x_{4}^{3}x_{2}x_{1}m_{0}^{'''} $.
Thus, we have $\mathrm{Lt}(g_{1})=-x_{1}^{7}\mid -x_{4}^{3}x_{2}x_{1}m_{0}^{'''}$,
hence $x_{1}^{6}\mid m_{0}^{'''}$, which implies that $\mathrm{Lt}(g_{1})\mid \mathrm{Lt}(h)$ -
a contradiction.
\medskip
\noindent\textbf{Case 2.} If $m$ or $d$ is greater than $1$, then
$x_{1}^{(4m+d-1)}x_{2}^{5}m_{0}^{'}$ is the leading term of
$(x_{1}^{(4m+d-1)}x_{2}^{5}-x_{1}^{5}x_{4}^{m+1})(m_{0}^{'}+\cdots+m_{r}^{'})$.
Therefore $\mathrm{Lt}(g_{1})=-x_{1}^{4m+d+2}\mid x_{1}^{(4m+d-1)}x_{2}^{5}m_{0}^{'}$,
hence $x_{1}^{3}\mid m_{0}^{'}$. After dividing by $g_{1}$ we get
$$(x_{2}^{6}x_{4}^{m}-x_{1}^{8}x_{4}^{m+1})m_{0}^{''}+
(x_{1}^{(4m+d-1)}x_{2}^{5}-x_{1}^{5}x_{4}^{m+1})(m_{1}^{'}+\cdots+m_{r}^{'})\in \mathfrak{G}.$$
We proceed along the same line of argument as in Case 1. The variable
$x_{3}$ is not present in the polynomial in each step, the leading term
is always divisible by $\mathrm{Lt}(g_{1})$ and after finite steps we get
$\mathrm{Lt}(g_{1})\mid \mathrm{Lt}(h)$ - a contradiction.\qed
\medskip
\begin{proposition} Suppose $a=7$ and $d=1$. Then the complex,
$$0\longrightarrow A^{2}\stackrel{\varphi_{3}}\longrightarrow A^{6}\stackrel{\varphi_{2}}\longrightarrow A^{5}\stackrel{\varphi_{1}}\longrightarrow A\longrightarrow A/\mathfrak{p}_{4}\longrightarrow 0 $$
is a minimal graded free resolution of $A/\mathfrak{p}_{4}$, where the maps $\varphi_{i}$ are given by
$$\varphi_{1}=(f_{1},f_{2},f_{3},f_{4},f_{5}),$$ with
$f_{1}=-x_{2}^{3}+x_{1}^{3}x_{3}$, $f_{2}=-x_{3}^{2}+x_{1}^{2}x_{4}$,
$f_{3}=x_{1}^{7}-x_{2}x_{4}$, $f_{4}=x_{1}^{4}x_{2}^{2}-x_{3}x_{4}$,
$f_{5}=x_{1}^{2}x_{2}^{2}x_{3}-x_{4}^{2}$;
$$\varphi_{2}=\begin{pmatrix}
x_{1}^{4}& x_{4} &0& x_{1}^{2}x_{3}& 0 & x_{3}^{2}\\
0 & 0&x_{4}& x_{1}^{5}& x_{1}^{2}x_{2}^{2}&x_{1}^{3}x_{3}-x_{2}^{3}\\
-x_{3}&-x_{2}^{2}&0&-x_{4}&0&-x_{1}^{2}x_{2}^{2}\\
x_{2}&x_{1}^{3}&-x_{3}&0&-x_{4}&x_{1}^{5}\\
0&0&x_{1}^{2}&x_{2}&x_{3}&0
\end{pmatrix} $$ and
$$\varphi_{3}=\begin{pmatrix}
x_{4}&x_{2}^{2}x_{3}\\
-x_{1}^{4} &-x_{3}^{2}\\
0& -x_{1}^{3}x_{3}+x_{2}^{3}\\
-x_{3}& -x_{1}^{2}x_{2}^{2}\\
x_{2}&x_{1}^{5}\\
x_{1}^{2}&x_{4}
\end{pmatrix}.$$
\end{proposition}
\proof We use the Buchsbaum-Eisenbud acyclicity theorem (see in \cite{bh}).
It is easy to show that $\mathrm{grade}(I_{4}(\varphi_{2}), A)\geq 2$. We take the minors,
$$[1234 \mid 1235]=(x_{3}x_{4}-x_{1}^{4}x_{2}^{2})(x_{1}^{2}x_{2}^{2}x_{3}-x_{4}^{2}),
[2345 \mid 1236]=x_{1}^{2}(x_{1}^{3}x_{3}-x_{2}^{3})^{2},$$ which
have distinct irreducible factors, hence they form a regular sequence.
We now consider the following minors,
$$[56 \mid 12]=-x_{1}^{7}+x_{2}x_{4}, [15\mid 12]=x_{1}^{5}x_{4}-x_{2}^{3}x_{3},
[46 \mid 12]=x_{1}^{4}x_{2}^{2}-x_{3}x_{4},$$
which form a regular sequence by Lemma \ref{regsequence}. \qed
\medskip
\begin{proposition}\label{syz1} Suppose $a=6m+1$ and either $m$ or $d$ is greater than $1$.
Then the complex
$$0\longrightarrow A^{2}\stackrel{\varphi_{3}}\longrightarrow A^{6}\stackrel{\varphi_{2}}\longrightarrow A^{5}\stackrel{\varphi_{1}}\longrightarrow A\longrightarrow A/\mathfrak{p}_{4}\longrightarrow 0$$
is a minimal graded free resolution of $A/\mathfrak{p}_{4}$, where the maps $\varphi_{i}$ are
given by
$$\varphi_{1}=(f_{1},f_{2},f_{3},f_{4},f_{5}),$$
with
$f_{1}=-x_{2}^{3}+x_{1}^{3}x_{3}$, $f_{2}=-x_{3}^{2}+x_{1}^{2}x_{4}$,
$f_{3}=x_{1}^{(4m+d+2)}-x_{2}x_{4}^{m}$, $f_{4}=x_{1}^{(4m+d-1)}x_{2}^{2}-x_{3}x_{4}^{m}$,
$f_{5}=x_{1}^{(4m+d-6)}x_{2}^{5}-x_{4}^{m+1}$;
$$\scriptsize{\varphi_{2}=\begin{pmatrix}
x_{1}^{2}x_{4}-x_{3}^{2}& x_{1}^{(4m+d-1)} & x_{4}^{m}& x_{1}^{(4m+d-4)}x_{2}^{2} & x_{1}^{(4m+d-3)}x_{3}+x_{1}^{(4m+d-6)}x_{2}^{3} &x_{1}^{(4m+d-6)}x_{2}^{2}x_{3}\\
-x_{1}^{3}x_{3}+x_{2}^{3}& 0& 0& x_{4}^{m}& x_{1}^{4m+d}& x_{1}^{(4m+d-3)}x_{2}^{2}\\
0&-x_{3}&-x_{2}&0&-x_{4}&0\\
0&x_{2}&x_{1}^{3}&-x_{3}&0&-x_{4}\\
0&0&0&x_{1}^{2}&x_{2}&x_{3}
\end{pmatrix}}$$
and
$$\scriptsize{\varphi_{3}=\begin{pmatrix}
x_{1}^{(4m+d-3)}&x_{4}^{m}\\
-x_{4} &-x_{2}^{2}x_{3}\\
0& -x_{1}^{2}x_{4}+x_{3}^{2}\\
0& x_{1}^{3}x_{3}-x_{2}^{3}\\
x_{3}&x_{1}^{2}x_{2}^{2}\\
-x_{2}&-x_{1}^{5}
\end{pmatrix}.}$$
\end{proposition}
\proof We use the Buchsbaum-Eisenbud acyclicity theorem. It is easy to show that
$\mathrm{grade}(I_{4}(\varphi_{2}), A)\geq 2$. We take the minors
$[1345 \mid 1246] = -x_{3}(x_{1}^{2}x_{4}-x_{3}^{2})^{2}$ and
$[2345 \mid 2345] = (x_{4}^{m}x_{2}-x_{1}^{(4m+d+2)})(-x_{1}^{3}x_{3}+x_{2}^{3})$. These
have distinct irreducible factors. Hence they form a regular sequence.
Next we have to show that $\mathrm{grade}(I_{2}(\varphi_{3}), A)\geq 3$. By Lemma \ref{regsequence},
the minors $[16\mid 12]=-x_{1}^{(4m+d+2)}+x_{2}x_{4}^{m}$,
$[26\mid 12]=x_{1}^{5}x_{4}-x_{2}^{3}x_{3}$ and $[15\mid 12]=x_{1}^{(4m+d-1)}x_{2}^{2}-x_{3}x_{4}^{m}$
form a regular sequence. \qed
\medskip
\begin{proposition} Suppose $a=6m+2$. Then the complex,
$$0\longrightarrow A^{2}\stackrel{\varphi_{3}}\longrightarrow A^{6}\stackrel{\varphi_{2}}\longrightarrow A^{5}\stackrel{\varphi_{1}}\longrightarrow A\longrightarrow A/\mathfrak{p}_{4}\longrightarrow 0 $$
is a minimal graded free resolution of $A/\mathfrak{p}_{4}$, where the maps $\varphi_{i}$ are given by
$$\varphi_{1}=(f_{1},f_{2},f_{3},f_{4},f_{5}),$$ with $f_{1}=-x_{2}^{3}+x_{1}^{3}x_{3}$,
$f_{2}=-x_{3}^{2}+x_{1}^{2}x_{4}$,
$f_{3}=x_{1}^{(4m+d+1)}x_{2}-x_{3}x_{4}^{m}$,
$f_{4}=x_{1}^{(4m+d+4)}-x_{2}^{2}x_{4}^{m}$,
$f_{5}=x_{1}^{(4m+d-4)}x_{2}^{4}-x_{4}^{m+1}$;
$$\scriptsize{\varphi_{2}=\begin{pmatrix}
x_{1}^{2}x_{4}-x_{3}^{2}& x_{4}^{m} & x_{1}^{(4m+d-2)}x_{2}& x_{1}^{(4m+d+1)} & x_{1}^{(4m+d-4)}x_{2}x_{3} &x_{1}^{(4m+d-1)}x_{3}-x_{1}^{(4m+d-4)}x_{2}^{3}\\
-x_{1}^{3}x_{3}+x_{2}^{3}& 0& x_{4}^{m}&0& x_{1}^{(4m+d-1)}x_{2}& x_{1}^{(4m+d+2)}\\
0&x_{1}^{3}&-x_{3}&x_{2}^{2}&-x_{4}&0\\
0&-x_{2}&0&-x_{3}&0&-x_{4}\\
0&0&x_{1}^{2}&0&x_{3}&x_{2}^{2}
\end{pmatrix}}$$
and
$$\scriptsize{\varphi_{3}=\begin{pmatrix}
x_{1}^{(4m+d-1)}&x_{4}^{m}\\
0 &-x_{1}^{2}x_{4}+x_{3}^{2}\\
0& x_{1}^{3}x_{3}-x_{2}^{3}\\
-x_{4}& -x_{2}x_{3}\\
-x_{2}^{2}&-x_{1}^{5}\\
x_{3}&x_{1}^{2}x_{2}
\end{pmatrix}.}$$
\end{proposition}
\proof The proof is similar to that of Proposition \ref{syz1}. We note that the following minors
$[1345 \mid 1235]=x_{2}(x_{1}^{2}x_{4}-x_{3}^{2})^{2}$,
$[2345 \mid 2345]=(x_{4}^{m}x_{3}-x_{1}^{(4m+d+1)}x_{2})(-x_{1}^{3}x_{3}+x_{2}^{3})$
in $I_{4}(\varphi_{2})$ form a regular sequence. We then show that the minors belonging to
$I_{2}(\varphi_{3})$, given by $[15 \mid 12]=-x_{1}^{(4m+d+4)}+x_{2}^{2}x_{4}^{m}$,
$[45 \mid 12]=x_{1}^{5}x_{4}-x_{2}^{3}x_{3}$ and $[16\mid 12]=x_{1}^{(4m+d+1)}x_{2}-x_{3}x_{4}^{m}$,
form a regular sequence.\qed
\medskip
\begin{proposition} Suppose $a=6m+3$. Then the complex,
$$0\longrightarrow A^{2}\stackrel{\varphi_{3}}\longrightarrow A^{5}\stackrel{\varphi_{2}}\longrightarrow A^{4}\stackrel{\varphi_{1}}\longrightarrow A\longrightarrow A/\mathfrak{p}_{4}\longrightarrow 0 $$ is a
minimal graded free resolution of $A/\mathfrak{p}_{4}$, where the maps $\varphi_{i}$ are given by
$$\varphi_{1}=(f_{1},f_{2},f_{3},f_{4}),$$ with $f_{1}=-x_{2}^{3}+x_{1}^{3}x_{3}$,
$f_{2}=-x_{3}^{2}+x_{1}^{2}x_{4}$, $f_{3}=x_{1}^{(4m+d+3)}-x_{3}x_{4}^{m}$,
$f_{4}=x_{1}^{(4m+d-2)}x_{2}^{3}-x_{4}^{m+1}$;
$$\varphi_{2}=\begin{pmatrix}
x_{1}^{2}x_{4}-x_{3}^{2}&x_{1}^{(4m+d)} & x_{3}x_{4}^{m}& x_{1}^{(4m+d-2)}x_{3} & x_{1}^{(4m+d-2)}x_{2}^{3}-x_{4}^{m+1} \\
-x_{1}^{3}x_{3}+x_{2}^{3}& x_{4}^{m}&x_{1}^{3}x_{4}^{m}& x_{1}^{(4m+d+1)}&0\\
0&-x_{3}&-x_{2}^{3}&-x_{4}&0\\
0&x_{1}^{2}&x_{1}^{5}&x_{3}&-x_{1}^{3}x_{3}+x_{2}^{3}\\
\end{pmatrix} $$
and
$$\scriptsize{\varphi_{3}=\begin{pmatrix}
x_{4}^{m}& x_{1}^{(4m+d+1)}\\
-x_{2}^{3} &-x_{1}^{3}x_{4}\\
x_{3}& x_{4}\\
0& x_{1}^{3}x_{3}-x_{2}^{3}\\
x_{1}^{2}&x_{3}\\
\end{pmatrix}.}$$
\end{proposition}
\proof The proof is similar to that of Proposition \ref{syz1}. We observe that the minors
$[134\mid 124]=(x_{1}^{2}x_{4}-x_{3}^{2})^{2}$ and
$[234\mid 123]=x_{1}^{2}(-x_{1}^{3}x_{3}+x_{2}^{3})^{2}$ belonging to
$I_{3}(\varphi_{2})$ form a regular sequence. We further observe that the minors
$[15\mid 12]=-x_{1}^{(4m+d+3)}+x_{3}x_{4}^{m}$,
$[13\mid 12]=x_{4}^{m+1}-x_{1}^{4m+d+1}x_{3}$ and
$[34\mid 12]=x_{1}^{3}x_{3}^{2}-x_{2}^{3}x_{3}$, belonging to
$I_{2}(\varphi_{3})$, form a regular sequence.\qed
\medskip
\begin{proposition} Suppose $a=6m+4$. Then the complex,
$$0\longrightarrow A^{3}\stackrel{\varphi_{3}}\longrightarrow A^{6}\stackrel{\varphi_{2}}\longrightarrow A^{4}\stackrel{\varphi_{1}}\longrightarrow A\longrightarrow A/\mathfrak{p}_{4}\longrightarrow 0 $$
is a minimal graded free resolution of $A/\mathfrak{p}_{4}$, where the maps $\varphi_{i}$ are given by
$$\varphi_{1}=(f_{1},f_{2},f_{3},f_{4}),$$ with $f_{1}=-x_{2}^{3}+x_{1}^{3}x_{3}$,
$f_{2}=-x_{3}^{2}+x_{1}^{2}x_{4}$, $f_{3}=x_{1}^{(4m+d+5)}-x_{2}x_{3}x_{4}^{m}$,
$f_{4}=x_{1}^{(4m+d)}x_{2}^{2}-x_{4}^{m+1}$;
$$\scriptsize{\varphi_{2}=\begin{pmatrix}
x_{1}^{2}x_{4}-x_{3}^{2}& x_{1}^{(4m+d+2)} & x_{3}x_{4}^{m}& x_{1}^{(4m+d)}x_{3} & x_{1}^{(4m+d)}x_{2}^{2}-x_{4}^{m+1} &0\\
-x_{1}^{3}x_{3}+x_{2}^{3}& x_{4}^{m}x_{2}& x_{1}^{3}x_{4}^{m}&x_{1}^{(4m+d+3)}& 0&x_{1}^{(4m+d)}x_{2}^{2}-x_{4}^{m+1}\\
0&-x_{3}&-x_{2}^{2}&-x_{4}&0&0\\
0&x_{1}^{2}x_{2}&x_{1}^{5}&x_{2}x_{3}&-x_{1}^{3}x_{3}+x_{2}^{3}&-x_{1}^{2}x_{4}+x_{3}^{2}
\end{pmatrix} }$$
and
$$\scriptsize{\varphi_{3}=\begin{pmatrix}
x_{4}^{m}&0& x_{1}^{(4m+d)}\\
-x_{2}^{2} &0&-x_{4}\\
x_{3}& x_{4}&0\\
0& -x_{2}^{2}&x_{3}\\
x_{1}^{2}&x_{3}&0\\
0&x_{1}^{3}&-x_{2}
\end{pmatrix}.}$$
\end{proposition}
\proof The proof is similar to that of Proposition \ref{syz1}. We observe that the minors
$[134\mid 124]=x_{2}(x_{1}^{2}x_{4}-x_{3}^{2})^{2}$ and
$[234\mid 123]=x_{1}^{2}(-x_{1}^{3}x_{3}+x_{2}^{3})^{2}$ in $I_{3}(\varphi_{2})$ form
a regular sequence. We also observe that the minors
$[124\mid 123]=x_{1}^{(4m+d)}x_{2}^{4}-x_{2}^{2}x_{4}^{m+1}$,
$[246\mid 123]=x_{1}^{3}x_{2}^{2}x_{3}-x_{2}^{5}$ and
$[256\mid 123]=x_{2}^{3}x_{3}-x_{1}^{5}x_{4}$, belonging to $I_{3}(\varphi_{3})$,
form a regular sequence. \qed
\medskip
\begin{proposition} Suppose $a=6m+5$. Then the complex,
$$0\longrightarrow A^{3}\stackrel{\varphi_{3}}\longrightarrow A^{6}\stackrel{\varphi_{2}}\longrightarrow A^{4}\stackrel{\varphi_{1}}\longrightarrow A\longrightarrow A/\mathfrak{p}_{4}\longrightarrow 0 $$ is
a minimal graded free resolution of $A/\mathfrak{p}_{4}$, where the maps $\varphi_{i}$ are given by
$$\varphi_{1}=(f_{1},f_{2},f_{3},f_{4}),$$ with $f_{1}=-x_{2}^{3}+x_{1}^{3}x_{3}$,
$f_{2}=-x_{3}^{2}+x_{1}^{2}x_{4}$, $f_{3}=x_{1}^{(4m+d+2)}x_{2}-x_{4}^{m+1}$,
$f_{4}=x_{1}^{(4m+d+7)}-x_{2}^{2}x_{3}x_{4}^{m}$;
$$\scriptsize{\varphi_{2}=\begin{pmatrix}
x_{1}^{2}x_{4}-x_{3}^{2}& x_{3}x_{4}^{m}& x_{1}^{(4m+d+4)} & x_{1}^{(4m+d+2)}x_{2}-x_{4}^{m+1}&0 & x_{1}^{(4m+d+2)}x_{3}\\
-x_{1}^{3}x_{3}+x_{2}^{3}& x_{1}^{3}x_{4}^{m}&x_{4}^{m}x_{2}^{2}&0 &x_{1}^{(4m+d+2)}x_{2}-x_{4}^{m+1}& x_{1}^{(4m+d+5)}\\
0&x_{1}^{5}&x_{1}^{2}x_{2}^{2}&-x_{1}^{3}x_{3}+x_{2}^{3}&-x_{1}^{2}x_{4}+x_{3}^{2}&x_{2}^{2}x_{3}\\
0&-x_{2}&-x_{3}&0&0&-x_{4}
\end{pmatrix} }$$
and
$$\scriptsize{\varphi_{3}=\begin{pmatrix}
x_{4}^{m}&0& x_{1}^{(4m+d+2)}\\
x_{3}& x_{4}&0\\
-x_{2} &0&-x_{4}\\
x_{1}^{2}&x_{3}&0\\
0&x_{1}^{3}&-x_{2}^{2}\\
0& -x_{2}&x_{3}\\
\end{pmatrix}.}$$
\end{proposition}
\proof The proof is similar to that of Proposition \ref{syz1}. We observe that the minors
$[134\mid 125]=x_{2}(x_{1}^{2}x_{4}-x_{3}^{2})^{2}$ and
$[234\mid 123]=x_{1}^{2}(-x_{1}^{3}x_{3}+x_{2}^{3})^{2} $
belonging to $I_{3}(\varphi_{2})$ form a regular sequence.
We further observe that the minors
$[123\mid 123]=x_{1}^{(4m+d+2)}x_{2}x_{4}-x_{4}^{m+2}$,
$[345\mid 123]=x_{2}^{3}x_{3}-x_{1}^{5}x_{4}$,
$[456\mid 123]=x_{1}^{5}x_{3}-x_{1}^{2}x_{2}^{3}$,
belonging to $I_{3}(\varphi_{3})$, form a regular sequence. \qed
\medskip
\begin{lemma}\label{stci}
The curve $k[\Gamma_{4}]$ is a set-theoretic complete intersection if
$a\equiv i(\mathrm{mod}\, 6)$, where $i\in\{0,3,4,5\}$.
\end{lemma}
\proof If $a=6m$, then $k[\Gamma_{4}]$ is a set-theoretic complete intersection by Corollary \ref{ci}.
For the other cases it follows from Theorem 5.3 in \cite{eto}.\qed
\bigskip
\section{Ap\'{e}ry table and Tangent Cone of $k[\Gamma_{4}]$}
Throughout this section we assume that the field $k$ is infinite.
\medskip
\begin{definition} Let $(R,m)$ be a Noetherian local ring and $I\subset R$
be an ideal of $R$. The fibre cone of $I$ is the ring
$$F(I)=\displaystyle\bigoplus_{n\geq 0}\dfrac{I^{n}}{mI^{n}}\cong R[It]\otimes R/m.$$
The Krull dimension of the ring $F(I)$ is called the \textit{analytic spread} of
$I$, denoted by $\ell(I)$.
\end{definition}
\medskip
An ideal $J\subset I$ is called a \textit{reduction} of $I$ if there exists an
integer $n>0$ such that $I^{n+1}=JI^{n}$. A reduction $J$ of $I$ is a \textit{minimal reduction}
if $J$ is minimal with respect to inclusion among reductions of $I$.
A minimal reduction always exists by \cite{nr}. It is well known that $J$ is a
minimal reduction of $I$ if and only if $J$ is minimally generated by $\ell(I)$
elements, i.e., $\mu(J)=\ell(I)$. If $J$ is a minimal reduction of $I$,
then the least integer $r$ such that $I^{r+1}=JI^{r}$, is the reduction number
of $I$ with respect to $J$, denoted by $r_{J}(I)$.
\medskip
We are particularly interested in the semigroup ring $k[[\Gamma_{4}]]$,
which is the coordinate ring of the algebroid monomial curve defined by
the numerical semigroup $\Gamma_{4}$. Let $a\geq 7$ and $d > 0$ be two integers,
such that $\gcd(a,d)=1$. Let $R=k[[t^{a},t^{2a+d},t^{3a+3d},t^{4a+6d}]]$ and
let $\mathfrak{m}$ be the maximal ideal $\langle t^{a},t^{2a+d},t^{3a+3d},t^{4a+6d}\rangle$.
Consider the principal ideal $\mathfrak{I}=\langle t^{a}\rangle \subset R$. The fibre cone of
$\mathfrak{I}$ is the ring
$$F(\mathfrak{I})=\displaystyle\bigoplus_{n\geq 0}\dfrac{\mathfrak{I}^{n}}{\mathfrak{m}\mathfrak{I}^{n}}\cong R[\mathfrak{I}t]\otimes R/\mathfrak{m}.$$
We note that here $\ell(\mathfrak{I})=1$ and the tangent cone $G_{\mathfrak{m}}=\displaystyle\bigoplus_{n\geq 0}\dfrac{\mathfrak{m}^{n}}{\mathfrak{m}^{n+1}}$ is an $F(\mathfrak{I})$-algebra. Moreover $F(\mathfrak{I})\hookrightarrow G_{\mathfrak{m}} $ is a Noether normalisation (see \cite{cz1} and \cite{cz2}).
\medskip
Let $\Gamma$ be a numerical semigroup minimally generated by $a_{1}<\cdots
<a_{e}$. Let $M=\Gamma\setminus\{0\}$ and for a positive integer $n$, we write
$nM:=M+\cdots+M$ ($n$-copies). Let $\mathfrak{m}$ be the maximal ideal of the
ring $k[[t^{a_{1}},\ldots, t^{a_{e}}]]$. Then $(n+1)M=a_{1}+nM$ for all $n\geq r$
if and only if $r=r_{(t^{a_{1}})}(\mathfrak{m})$.
\medskip
Let $\mathrm{Ap}(\Gamma,a_{1})=\{0,\omega_{1},\ldots,\omega_{a_{1}-1}\}$. Now for
each $n\geq 1$, let us define $\mathrm{Ap}(nM)=\{\omega_{n,0},\ldots\omega_{n,a_{1}-1}\}$
inductively. We define $\omega_{1,0}=a_{1}$ and $\omega_{1,i}=\omega_{i}$, for $1\leq i\leq a_{1}-1$.
Then $\mathrm{Ap}(M)=\{a_{1},\omega_{1},\ldots,\omega_{a_{1}-1}\}$. Now we define
$\omega_{n+1,i}=\omega_{n,i}$, if $\omega_{n,i}\in (n+1)M $, and $\omega_{n+1,i}=\omega_{n,i}+a_{1}$,
otherwise. We note that $\omega_{n+1,i}=\omega_{n,i}+a_{1} $ for all $0\leq i\leq a_{1}-1$ and $n\geq r_{(t^{a_{1}})}(\mathfrak{m})$. Then, the Ap\'{e}ry table $\mathrm{AT}(\Gamma,a_{1})$ of $\Gamma$ is a table
of size $(r_{(t^{a_{1}})}(\mathfrak{m})+1)\times a_{1}$, whose $(0,t)$ entry is $\omega_{t}$,
for $0\leq t\leq {a_{1}-1}$ (we take $\omega_{0}=0$), and the $(s,t)$ entry is $\omega_{st}$,
for $1\leq s\leq r_{(t^{a_{1}})}(\mathfrak{m})$ and $ 0\leq t\leq {a_{1}-1}$.
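\medskip
\noindent This inductive description is straightforward to implement. In the sketch
below (ours, continuing the Python fragments of the earlier sections) membership in
$nM$ is tested using the observation that an element lies in $nM$ exactly when it is a
sum of the minimal generators with total coefficient at least $n$, so that
$nM=(n-1)M+\{a_{1},\ldots,a_{e}\}$:
\begin{verbatim}
def apery_table(gens, a, rows, limit=3000):
    in_S = semigroup_upto(gens, limit)
    levels = [in_S]                    # levels[n][x] <=> x in nM
    for n in range(rows):
        prev, nxt = levels[-1], [False] * (limit + 1)
        for x in range(limit + 1):
            if prev[x]:
                for g in gens:
                    if x + g <= limit:
                        nxt[x + g] = True
        levels.append(nxt)
    ap0 = apery(gens, a, limit)        # row 0: Ap(S, a), beginning with 0
    table = [ap0]
    row = [a] + ap0[1:]                # row 1: Ap(M), with w_{1,0} = a_1
    table.append(row)
    for n in range(2, rows + 1):       # the inductive rule of the text
        row = [w if levels[n][w] else w + a for w in row]
        table.append(row)
    return table

for r in apery_table([11, 46, 105, 188], 11, 3):
    print(r)
\end{verbatim}
Applied to $\Gamma_{4}=\langle 11,46,105,188\rangle$ this reproduces the table of
Example \ref{atexample} below, up to a permutation of the columns (here the columns are
ordered by the value of their row-$0$ entry).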
\medskip
Next we want to describe the Ap\'{e}ry table of $\Gamma_{4}$, for which we need the following lemmas.
\medskip
\begin{lemma}\label{aperytable}
Elements of the Ap\'{e}ry set $\mathrm{Ap}(\Gamma_{4},a)$ have unique expressions.
\end{lemma}
\proof We have $(4\mu_{i}+3\nu_{i}+2\xi_{i})a+id=\mu_{i}(4a+6d)+\nu_{i}(3a+3d)+\xi_{i}(2a+d)$, for $1\leq i\leq a-1$. Suppose for some $1\leq i\leq a-1$, $$(4\mu_{i}+3\nu_{i}+2\xi_{i})a+id=c_{1}(2a+d)+c_{2}(3a+3d)+c_{3}(4a+6d) .$$ Then
\begin{equation}\label{aptableeq1}
[(4c_{3}+3c_{2}+2c_{1})- (4\mu_{i}+3\nu_{i}+2\xi_{i})]a=[(6\mu_{i}+3\nu_{i}+\xi_{i})-(6c_{3}+3c_{2}+c_{1})]d.
\end{equation}
We have already shown in the proof of Theorem \ref{apery} that $4c_{3}+3c_{2}+2c_{1}\geq 4\mu_{i}+3\nu_{i}+2\xi_{i} $ and $ 6c_{3}+3c_{2}+c_{1}\geq 6\mu_{i}+3\nu_{i}+\xi_{i}$. From equation \ref{aptableeq1} we then have
\begin{equation}\label{aptableeq2}
4c_{3}+3c_{2}+2c_{1}= 4\mu_{i}+3\nu_{i}+2\xi_{i}
\end{equation}
and
\begin{equation}\label{aptableeq3}
6c_{3}+3c_{2}+c_{1}= 6\mu_{i}+3\nu_{i}+\xi_{i}.
\end{equation}
We eliminate $\mu_{i}$ and get,
\begin{equation}\label{aptableeq4}
3(c_{2}-\nu_{i})= 4(\xi_{i}-c_{1}).
\end{equation}
Let $c_{2}-\nu_{i}=4k$ and $\xi_{i}-c_{1}=3k$, for $k\in \mathbb{Z}$. If
$k>0$ then $\xi_{i}=3k+c_{1}$, which is impossible, since $0\leq \xi_{i}\leq 2$.
If $k<0$ then $\nu_{i}=c_{2}-4k$, again a contradiction, since $0\leq \nu_{i}\leq 1$.
Therefore $k=0$, hence $(\mu_{i},\nu_{i},\xi_{i})=(c_{3},c_{2},c_{1})$. \qed
\medskip
\begin{lemma}\label{red}
Let $(\mu_{i},\nu_{i},\xi_{i})$ be the same as in Theorem \ref{apery},
for $1\leq i\leq a-1$, and $(\mu_{0},\nu_{0},\xi_{0})=(0,0,0)$. Then
$\lfloor\frac{a}{6}\rfloor+2 =\max\{\mu_{i}+\nu_{i}+\xi_{i}\mid 1\leq i\leq a-1\}$,
where $\lfloor\centerdot\rfloor$ denotes the greatest integer function.
\end{lemma}
\proof Let $a=6\mu +q$, $0\leq q\leq 5$. We note that $i\leq 6\mu+4$ for all
$1\leq i\leq a-1$. Therefore, $\mu_{i}+\nu_{i}+\xi_{i}\leq \mu+2$, for all $1\leq i\leq a-1$.
On the other hand, $\mu_{a-q-1}+\nu_{a-q-1}+\xi_{a-q-1}=\mu+2=\lfloor\frac{a}{6}\rfloor+2$. \qed
\medskip
\begin{corollary} Let $\mathrm{AT}(\Gamma_{4})$ denote the Ap\'{e}ry table for
$\Gamma_{4}$. Then $\mathrm{AT}(\Gamma_{4})$ will be of order
$(\lfloor\frac{a}{6}\rfloor+3) \times a$. Let $\omega_{st}$ be the
$(s,t)$ entry of the table $\mathrm{AT}(\Gamma_{4})$. Then,
$\omega_{st} = (4\mu_{t}+3\nu_{t}+2\xi_{t})a+td$, if $0\leq s\leq \mu_{t}+\nu_{t}+\xi_{t}$ and
$0\leq t\leq a-1$. On the other hand,
$\omega_{st} = (3\mu_{t}+2\nu_{t}+\xi_{t}+s)a+td$, if $\mu_{t}+\nu_{t}+\xi_{t}<s\leq\lfloor\frac{a}{6}\rfloor+2$
and $0\leq t\leq a-1$. Hence the reduction number $r_{\mathfrak{I}}(\mathfrak{m})$ is $\lfloor\frac{a}{6}\rfloor+2$.
\end{corollary}
\proof Proof follows from Lemmas \ref{aperytable} and \ref{red}.\qed
\medskip
\noindent\textbf{Remark.} When the elements of the Ap\'{e}ry set have unique representations, a minimal generating set of the defining ideal can be found abstractly in \cite{rs1}; here we have written it down explicitly.
\medskip
\begin{lemma}\label{power}
Let $(\mu_{i},\nu_{i},\xi_{i})$ be the same as in Lemma \ref{red}, for $0\leq i\leq a-1$.
Let $a=6\mu+q$, $\mu\geq 1$, $0\leq q\leq 5$. Let $t_{k}$ be the number of solutions
of the equation $\mu_{i}+\nu_{i}+\xi_{i}=k$, for $0\leq k\leq \mu+2$. Then
\begin{eqnarray*}
t_{k} & = & \begin{cases}
1 & \mbox{if} \quad k=0,\\
3 & \mbox{if} \quad k=1,\\
\lfloor\frac{q}{2}\rfloor+2 & \mbox{if} \quad k=2 \, \mbox{and}\, \mu=1,\\
5 & \mbox{if} \quad k=2 \quad \mbox{and} \quad \mu\geq 2,\\
6 & \mbox{if} \quad 3\leq k\leq\mu,\\
\lfloor\frac{q}{2}\rfloor+3 & \mbox{if} \quad k=\mu+1 \, \mbox{and} \, \mu\geq 2,\\
1 & \mbox{if} \quad k=\mu+2 \, \mbox{and} \, q\in\{0,1,2\},\\
2 & \mbox{if} \quad k=\mu+2 \, \mbox{and} \, q\in\{3,4\},\\
3 & \mbox{if} \quad k=\mu+2 \, \mbox{and} \, q=5.
\end{cases}
\end{eqnarray*}
\end{lemma}
\proof For each of the following cases we write the set of solutions.
\begin{itemize}
\item[Case 1.] If $k=0$ then $(\mu_{i},\nu_{i},\xi_{i})=(0,0,0)$ is the only solution.
\item[Case 2.] If $k=1$ then $\{(1,0,0),(0,1,0),(0,0,1)\}$ is the set of solutions.
\item[Case 3.] If $k=2$ and $\mu\geq 2$, then $\{(2,0,0),(1,1,0),(1,0,1),(0,1,1),(0,0,2)\}$ is the set of solutions.
\item[Case 4.] If $3\leq k\leq \mu$,
then $\{(k,0,0),(k-1,1,0),(k-1,0,1),(k-2,1,1),(k-2,0,2),(k-3,1,2)\}$ is the set of solutions.
\item[Case 5.] If $k=\mu+1$ and $\mu\geq 2$, then,
\begin{enumerate}
\item[(i)] if $q\in\{0,1\}$ then $\{(\mu-1,1,1),(\mu-1,0,2),(\mu-2,1,2)\}$ is the set of solutions;
\item[(ii)] if $q\in\{2,3\}$ then $\{(\mu,0,1),(\mu-1,1,1),(\mu-1,0,2),(\mu-2,1,2)\}$ is the set of solutions;
\item[(iii)] if $q\in\{4,5\}$ then $\{(\mu,0,1),(\mu,1,0),(\mu-1,1,1),(\mu-1,0,2),(\mu-2,1,2)\}$ is the set of solutions.
\end{enumerate}
\item[Case 6.] If $k=\mu+2$, then,
\begin{enumerate}
\item[(i)] if $q\in\{0,1,2\}$ then $\{(\mu-1,1,2)\}$ is the set of solutions;
\item[(ii)] if $q\in\{3,4\}$ then $\{(\mu,0,2),(\mu-1,1,2)\}$ is the set of solutions;
\item[(iii)] if $q=5$ then $\{(\mu,1,1),(\mu,0,2),(\mu-1,1,2)\}$ is the set of solutions.
\end{enumerate}
\end{itemize}
For the case $\mu=1$ and $k=2$, it is easy to calculate directly
(see Example \ref{atexample}). \qed
\bigskip
We take some definitions from \cite{cz2}. Let $W =\{a_{0},\ldots,a_{n}\}$ be a set of integers. We call it a \textit{ladder} if $a_{0}\leq\ldots\leq a_{n}$. Given a ladder, we say that a subset $L=\{a_{i},\ldots,a_{i+k}\}$, with $k\geq 1$, is a \textit{landing} of length $k$ if $a_{i-1}<a_{i}=\cdots=a_{i+k}<a_{i+k+1}$ (where $a_{-1}= -\infty$ and $a_{n+1}=\infty$). In this case, $s(L)=i$ and $e(L)=i+k$. A landing $L$ is said to be a \textit{true landing} if $s(L)\geq 1$. Given two landings $L$ and $L^{'}$, we set $L<L^{'}$ if $s(L)<s(L^{'})$. Let $p(W)+1$ be the number of landings and assume that $L_{0}<\cdots<L_{p(W)}$ are the distinct landings. Then we define the following numbers:
$s_{j}(W)=s(L_{j})$, $e_{j}(W)=e(L_{j})$, for each $0\leq j\leq p(W)$;
$c_{j}(W)=s_{j}(W)-e_{j-1}(W)$, for each $1\leq j\leq p(W)$.
\medskip
Let $\Gamma$ be a numerical semigroup minimally
generated by $a_{1}<\cdots <a_{e}$ and $\mathfrak{m}$ be the maximal ideal of $k[[t^{a_{1}},\ldots t^{a_{e}}]]$. Let $r= r_{(t^{a_{1}})}(\mathfrak{m})$, $M=\Gamma\setminus\{0\}$ and
$\mathrm{Ap}(nM)=\{\omega_{n,0},\ldots\omega_{n,a_{1}-1}\}$ for $0\leq n \leq r$. For every $1\leq i\leq a_{1}-1$, consider the ladder of the values $W^{i}=\{\omega_{n,i}\}_{0\leq n\leq r}$ and define the following integers:
\begin{enumerate}[(i)]
\item $p_{i}=p(W^{i})$
\item $d_{i}=e_{p_{i}}(W^{i})$
\item $b_{j}^{i}=e_{j-1}(W^{i})$ and
$c_{j}^{i}=c_{j}(W^{i})$, for $1\leq j\leq p_{i}$.
\end{enumerate}
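\medskip
\noindent For a column $W^{i}$ of an Ap\'{e}ry table these invariants can be computed by
scanning for the maximal constant runs of length at least two. The following sketch
(ours; it applies to the columns $W^{i}$ with $1\leq i\leq a_{1}-1$, whose first two
entries always agree) extracts $p_{i}$, $d_{i}$ and the pairs $(b_{j}^{i},c_{j}^{i})$:
\begin{verbatim}
def landing_data(column):
    # column = [w_{0,i}, ..., w_{r,i}]; landings = constant runs of length >= 2
    r, runs, start = len(column) - 1, [], 0
    for n in range(1, r + 1):
        if column[n] != column[start]:
            if n - 1 > start:
                runs.append((start, n - 1))    # (s(L), e(L))
            start = n
    if r > start:
        runs.append((start, r))
    p = len(runs) - 1                          # p_i
    d = runs[-1][1]                            # d_i = e_{p_i}
    bc = [(runs[j - 1][1], runs[j][0] - runs[j - 1][1])
          for j in range(1, p + 1)]            # the pairs (b_j^i, c_j^i)
    return p, d, bc

T = apery_table([11, 46, 105, 188], 11, 3)     # from the earlier sketch
for col in list(zip(*T))[1:4]:
    print(landing_data(list(col)))   # (0, 1, []), (0, 2, []), (0, 1, [])
\end{verbatim}
For $\Gamma_{4}$ every $p_{i}$ vanishes, so no torsion summands occur; this is the free
decomposition recorded in Corollary \ref{exptangent} below.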
\medskip
\begin{theorem}\textbf{(Cortadellas, Zarzuela.)}\label{tangentcone} With the above notations,
$$G_{\mathfrak{m}}\cong F\oplus\displaystyle\bigoplus_{i=1}^{a_{1}-1}\left(F(-d_{i})\oplus\displaystyle \bigoplus_{j=1}^{p_{i}}\dfrac{F}{((t^{a_{1}})^{*})^{c_{j}^{i}}F}(-b_{j}^{i})\right),$$
where $G_{\mathfrak{m}}$ is the tangent cone of $\Gamma$ and $F=F((t^{a_{1}}))$ is the fibre cone.
\end{theorem}
\proof See Theorem 2.3 in \cite{cz2}.\qed
\medskip
\begin{corollary}\label{exptangent} The tangent cone $G_{\mathfrak{m}}$ of $\Gamma_{4}$ is a free
$F(\mathfrak{I})$-module. Moreover
$$ G_{\mathfrak{m}}= \displaystyle\bigoplus_{k=0}^{\lfloor \frac{a}{6}\rfloor+2}(F(\mathfrak{I})(-k))^{t_{k}},$$
where $t_{k}$'s are given in Lemma \ref{power}.
\end{corollary}
\proof The proof follows from Lemma \ref{aperytable} and Theorem \ref{tangentcone}.\qed
\medskip
\begin{corollary} The following properties hold
for the tangent cone $G_{\mathfrak{m}}$ of $\Gamma_{4}$:
\begin{enumerate}
\item[(i)] $G_{\mathfrak{m}}$ is Cohen-Macaulay;
\item[(ii)] $G_{\mathfrak{m}}$ is not Gorenstein;
\item[(iii)] $G_{\mathfrak{m}}$ is Buchsbaum.
\end{enumerate}
\end{corollary}
\proof $(i)$ and $(iii)$ easily follow from the fact that $G_{\mathfrak{m}}$ is a free $F(\mathfrak{I})$-module (see section 4 in \cite{cz2}). For proving $(ii)$, we use
Theorem 20 in \cite{cz1}. Here we observe that if
$G_{\mathfrak{m}}$ is Gorenstein then $\mathfrak{m}^{n}\cap(\mathfrak{m}^{n+2}:\mathfrak{I})=\mathfrak{m}^{n+1} $,
for $1\leq n\leq r_{\mathfrak{I}}(\mathfrak{m})$. Now
$\mathfrak{m}^{n}\cap(\mathfrak{m}^{n+2}:\mathfrak{I})=\mathfrak{m}^{n+1} $, for all $1\leq n\leq r_{\mathfrak{I}}(\mathfrak{m})$ if and only if $nM_{4}\cap ((n+2)M_{4}-a)=(n+1)M_{4}$, for all $1\leq n\leq r_{\mathfrak{I}}(\mathfrak{m})$, where $M_{4}=\Gamma_{4}\setminus \{0\}$. This is impossible, since $(n+1)a\notin nM_{4}$. \qed
\medskip
\begin{corollary} Let $HG_{\mathfrak{m}}(x)$ be the Hilbert series of $G_{\mathfrak{m}}$. Then $$HG_{\mathfrak{m}}(x)=\displaystyle\left(\sum_{k=0}^{\lfloor \frac{a}{6}\rfloor+2} t_{k}x^{k}\right)/(1-x).$$
where the $t_{k}$'s are given in Lemma \ref{power}.
\end{corollary}
\proof Follows from Corollary \ref{exptangent}.\qed
\medskip
\begin{example}\label{atexample}
Let us consider an example where $a=11$ and $d=24$. Hence $\Gamma_{4}=\langle 11,46,105,188\rangle$. Here $d\equiv 2(\mathrm{mod}\, a)$ and we have $\mathrm{Ap}(\Gamma_{4},a)=\{(4\mu_{i}+3\nu_{i}+2\xi_{i})a+id \mid 1\leq i\leq a-1 \}\cup \{0\}$, where $(\mu_{i},\nu_{i},\xi_{i})$ are the same as in Lemma \ref{red}. Let $\omega_{i}=(4\mu_{i}+3\nu_{i}+2\xi_{i})a+id$ for $0\leq i\leq a-1$; the values are given in the table below:
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
$i$ & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10\\
\hline
$\xi_{i}$ & 0 & 1 & 2 & 0 & 1 & 2 & 0 & 1 & 2 & 0 & 1\\
\hline
$\nu_{i}$ & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 1\\
\hline
$\mu_{i}$ & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1\\
\hline
$\omega_{i}$& 0 & 46 & 92 & 105 & 151 & 197 & 188 & 234 & 280 & 293 & 339\\
\hline
\end{tabular}
\end{center}
Let $M_{4}=\Gamma_{4}\setminus \{0\}$; then we have:
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
Ap($\Gamma_{4}$)& 0 & 46 & 92 & 105 & 151 & 197 & 188 & 234 & 280 & 293 & 339\\
\hline
Ap$(M_{4})$ & 11 & 46 & 92 & 105 & 151 & 197 & 188 & 234 & 280 & 293 & 339\\
\hline
Ap$(2M_{4})$ & 22 & 57 & 92 & 116 & 151 & 197 & 199 & 234 & 280 & 293 & 339\\
\hline
Ap$(3M_{4})$ & 33 & 68 & 103 & 127 & 162 & 197 & 210 & 245 & 280 & 304 & 339\\
\hline
\end{tabular}
\end{center}
\end{example}
\noindent From the Ap\'{e}ry table we get
$G_{\mathfrak{m}}=F\oplus F(-1)^{3}\oplus F(-2)^{4}\oplus F(-3)^{3} $, where $F=F(t^{a})$ is the fibre cone of $(t^{a})$. Therefore we have the Hilbert series
$$HG_{\mathfrak{m}}(x)=\dfrac{1+3x+4x^{2}+3x^{3}}{1-x}.$$
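\medskip
\noindent The multiset of shifts $\{d_{i}\}$, and hence the numerator of the Hilbert
series, can be read off mechanically from the Ap\'{e}ry table. Continuing the earlier
Python sketches (ours):
\begin{verbatim}
from collections import Counter

T = apery_table([11, 46, 105, 188], 11, 3)
shifts = Counter({0: 1})                 # the summand F itself
for col in list(zip(*T))[1:]:            # skip the column of i = 0
    p, d, bc = landing_data(list(col))
    shifts[d] += 1                       # free summand F(-d_i); bc is empty here
print(sorted(shifts.items()))
# [(0, 1), (1, 3), (2, 4), (3, 3)], i.e. numerator 1 + 3x + 4x^2 + 3x^3
\end{verbatim}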
\bigskip
\section{Ap\'{e}ry set, Ap\'{e}ry table and the tangent cone of $k[\mathfrak{S}_{n+2}]$}
Let $a, d, r, h$ be positive integers with $\gcd(a,d)=\gcd(a,r)=1$ and $d>hn(r-1)$.
Suppose $a_{0}=a$ and $a_{k+1}=ha+r^{k}d$, for $0\leq k\leq n$.
Let $\mathfrak{S}_{n+2}=\langle \{a_{0},a_{1},\ldots,a_{n+1}\}\rangle$ be the numerical semigroup with embedding dimension
$n+2$, such that $\{a_{0}, a_{1},\ldots, a_{n+1}\}$
forms a minimal system of generators for $\mathfrak{S}_{n+2}$.
\medskip
\begin{definition}
Let $m,r,n$ be positive integers and $m=\displaystyle\sum_{k=0}^{n}\alpha_{k}r^{k}$, where $0\leq \alpha_{i}\leq r-1$ for $i\in\{0,\ldots,n-1\}$. Then the expression $m=\displaystyle\sum_{k=0}^{n}\alpha_{k}r^{k}$ is called the $r$-adic representation of $m$ up to order $n$.
\end{definition}
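\medskip
\noindent A minimal Python sketch (ours) of this digit expansion, in which the top digit
$\alpha_{n}$ absorbs everything beyond $r^{n}$ and therefore need not be smaller than
$r$, reads as follows; its digit sum is the quantity $\ell_{i}$ used in
Theorem \ref{aperyS} below.
\begin{verbatim}
def r_adic(m, r, n):
    # digits (alpha_0, ..., alpha_n) with 0 <= alpha_i <= r-1 for i < n
    digits = []
    for _ in range(n):
        m, dig = divmod(m, r)
        digits.append(dig)
    digits.append(m)                 # alpha_n, possibly >= r
    return digits

# r = 2, n = 3: 13 = 1 + 0*2 + 1*4 + 1*8, digit sum 3
print(r_adic(13, 2, 3), sum(r_adic(13, 2, 3)))   # [1, 0, 1, 1] 3
\end{verbatim}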
\medskip
\begin{lemma}\label{r-adic}
Let $m$ and $r$ be two positive integers and $m=\displaystyle\sum_{k=0}^{n}\alpha_{k}r^{k}$ be the $r$-adic representation of $m$ up to order $n$. Then for any expression $m=\displaystyle\sum_{k=0}^{n}\beta_{k}r^{k}$, we have
$$\displaystyle\sum_{k=0}^{n}\alpha_{k}\leq \displaystyle\sum_{k=0}^{n}\beta_{k}.$$ Moreover,
$\displaystyle\sum_{k=0}^{n}\alpha_{k}< \displaystyle\sum_{k=0}^{n}\beta_{k}$, if $\displaystyle\sum_{k=0}^{n}\beta_{k}r^{k}$
is not an $r$-adic representation of $m$ up to order $n$.
\end{lemma}
\proof We proceed by induction on $n$. If $n=0$ then it follows trivially. First we claim that $\beta_{n}\leq \alpha_{n}$.
If not, then $\alpha_{n}+1\leq\beta_{n}$, hence $(\alpha_{n}+1)r^{n}\leq\beta_{n}r^{n}$. Now
$$m=\displaystyle\sum_{k=0}^{n}\alpha_{k}r^{k}\leq \displaystyle\sum_{k=0}^{n-1}(r-1)r^{k} +\alpha_{n}r^{n}= (r^{n}-1)+\alpha_{n}r^{n} <(\alpha_{n}+1)r^{n}\leq \beta_{n}r^{n},$$ which is a contradiction. Let $\alpha_{n}=t+\beta_{n}$, where $t\geq 0$. Again $\displaystyle\sum_{k=0}^{n-1}\alpha_{k}r^{k}+(t+\beta_{n})r^{n}=\displaystyle\sum_{k=0}^{n}\beta_{k}r^{k}$, therefore $\displaystyle\sum_{k=0}^{n-2}\alpha_{k}r^{k}+(tr+\alpha_{n-1})r^{n-1}=\displaystyle\sum_{k=0}^{n-1}\beta_{k}r^{k}$. By the induction hypothesis, $\displaystyle\sum_{k=0}^{n-2}\alpha_{k}+(tr+\alpha_{n-1})\leq \displaystyle\sum_{k=0}^{n-1}\beta_{k}$. Hence, we have,
\begin{align*}
\displaystyle\sum_{k=0}^{n}\alpha_{k}
&=\displaystyle\sum_{k=0}^{n-1}\alpha_{k}+t+\beta_{n}\\
&\leq \displaystyle\sum_{k=0}^{n-2}\alpha_{k}+(tr+\alpha_{n-1})+\beta_{n}\\
&\leq \displaystyle\sum_{k=0}^{n}\beta_{k}. \hspace*{3.5in} \qed
\end{align*}
\medskip
\begin{theorem}\label{aperyS}
For each $i\in \{1,\ldots,a-1\}$, let $i=\displaystyle\sum_{k=0}^{n}a_{ki}r^{k}$ be the $r$-adic representation of $i$ up to order $n$, and let $\ell_{i}=\displaystyle\sum_{k=0}^{n}a_{ki}$, for $1\leq i\leq a-1$. Then
$\mathrm{Ap}(\mathfrak{S}_{n+2}, a)=\{\ell_{i}ha+id\mid 1\leq i\leq a-1\}\cup\{0\}$.
\end{theorem}
\proof Let $T=\{\ell_{i}ha+id\mid 1\leq i\leq a-1\}$. First we note that $$\ell_{i}ha+id=\displaystyle\sum_{k=0}^{n}a_{ki}(ha+r^{k}d),\quad 1\leq i\leq a-1 .$$ Therefore $T\subset \mathfrak{S}_{n+2}$. Suppose $s\in \mathrm{Ap}(\mathfrak{S}_{n+2}, a)\setminus\{0\}$, with $s\equiv id\,(\mathrm{mod}\, a)$. If $s=\displaystyle\sum_{k=0}^{n}c_{k+1}(ha+r^{k}d)$, then
$\gcd(a,d)=1$ forces that $\displaystyle\sum_{k=0}^{n}c_{k+1}r^{k}\equiv i\,(\mathrm{mod}\, a)$. Therefore $\displaystyle\sum_{k=0}^{n}c_{k+1}r^{k}= i+pa$ for some $p\geq 0$, and we have $s=\displaystyle\left(\sum_{k=0}^{n}c_{k+1}\right)ha+(i+pa)d$.
\medskip
If $p>0$ then
\begin{align*}
s&=\displaystyle\left(\sum_{k=0}^{n}c_{k+1}\right)ha+(i+pa)d\\
&\geq\displaystyle\left(\sum_{k=0}^{n}c_{k+1}+n(r-1)\right)ha+id \\
&> nh(r-1)a+id\quad (\mathrm{as}\,\, s>0\,\,\mathrm{implies}\,\, \sum_{k=0}^{n}c_{k+1}>0)\\
&\geq \ell_{i}ha+id.
\end{align*}
This gives a contradiction, as $s\in \mathrm{Ap}(\mathfrak{S}_{n+2}, a)$ and $s\equiv \ell_{i}ha+id\,(\mathrm{mod}\, a)$. If $p=0$, then by Lemma \ref{r-adic} we have $\ell_{i}\leq \displaystyle\sum_{k=0}^{n}c_{k+1}$. Therefore $s\geq \ell_{i}ha+id$. Now $s\in \mathrm{Ap}(\mathfrak{S}_{n+2}, a)$ and $s\equiv \ell_{i}ha+id\,(\mathrm{mod}\, a)$, therefore we have $s=\ell_{i}ha+id$, hence $s\in T$. \qed
\medskip
\begin{lemma}\label{unique}
Every element of $\mathrm{Ap}(\mathfrak{S}_{n+2},a)$ has a unique expression.
\end{lemma}
\proof Let $$\omega(i)=\ell_{i}ha+id=\displaystyle\sum_{k=0}^{n}c_{k+1}(ha+r^{k}d)=(\displaystyle\sum_{k=0}^{n}c_{k+1})ha+(\displaystyle\sum_{k=0}^{n}c_{k+1}r^{k})d,$$ for $1\leq i\leq a-1$, where $\ell_{i}$'s are the same as in Theorem
\ref{aperyS}. Therefore, $\displaystyle\sum_{k=0}^{n}c_{k+1}r^{k}\equiv i\,(\mathrm{mod}\, a)$, hence $\displaystyle\sum_{k=0}^{n}c_{k+1}r^{k}= i+pa $ for some $p\geq 0$. If $p>0$, then
\begin{align*}
\omega(i)&=\displaystyle\left(\sum_{k=0}^{n}c_{k+1}\right)ha+(i+pa)d\\
&\geq\displaystyle\left(\sum_{k=0}^{n}c_{k+1}+n(r-1)\right)ha+id \\
&> nh(r-1)a+id\quad (\mathrm{as}\,\, \omega(i)>0\,\,\mathrm{implies}\,\, \sum_{k=0}^{n}c_{k+1}>0)\\
&\geq \ell_{i}ha+id.
\end{align*}
This gives a contradiction as $\omega(i)\in \mathrm{Ap}(\mathfrak{S}_{n+2}, a) $. Therefore $\displaystyle\sum_{k=0}^{n}c_{k+1}r^{k}= i $. If the expression $\displaystyle\sum_{k=0}^{n}c_{k+1}r^{k}$ is not an $r$-adic representation of $i$ up to order $n$, then $\displaystyle\sum_{k=0}^{n}c_{k+1}>\ell_{i} $ by Lemma \ref{r-adic}, which is a contradiction. Therefore $\displaystyle\sum_{k=0}^{n}c_{k+1}r^{k}$ is
an $r$-adic representation of $i$ up to order $n$, and by the uniqueness of the
$r$-adic representation, $\omega(i)$ has a unique expression for
$1\leq i\leq a-1$. \qed
\medskip
\begin{theorem}\label{aperytab}
Let $\rho=\max\{\ell_{i}\mid 1\leq i\leq a-1\}$, where $\ell_{0}=0$ and the $\ell_{i}$'s are the same as in Theorem \ref{aperyS}, for $1\leq i\leq a-1$. Let $\mathrm{AT}(\mathfrak{S}_{n+2},a)$ denote the Ap\'{e}ry table of $\mathfrak{S}_{n+2}$. Then $\mathrm{AT}(\mathfrak{S}_{n+2},a)$ will be of order $(\rho+1) \times a$. Let $\omega_{st}$ be the $(s,t)$ entry of the table $\mathrm{AT}(\mathfrak{S}_{n+2},a)$. Then
\begin{align*}
\omega_{st} &=
\begin{cases}
\ell_{t}ha+td & \mbox{if} \quad 0\leq s\leq \ell_{t}, \, 0\leq t\leq a-1;\\
\ell_{t}ha+td+(s-\ell_{t})a & \mbox{if} \quad
\ell_{t}<s\leq \rho, \, 0\leq t\leq a-1.
\end{cases}
\end{align*}
\end{theorem}
\proof Follows from Lemma \ref{unique}.\qed
\medskip
\begin{theorem} Let $k$ be an infinite field.
The following properties hold for the tangent cone $G_{\mathfrak{m}}$ of $\mathfrak{S}_{n+2}$:
\begin{enumerate}
\item[(i)] $G_{\mathfrak{m}}$ is Cohen-Macaulay,
\item[(ii)] $G_{\mathfrak{m}}$ is not Gorenstein,
\item[(iii)] $G_{\mathfrak{m}}$ is Buchsbaum.
\end{enumerate}
\end{theorem}
\proof Follows from Theorem \ref{aperytab} and \cite{cz2}.\qed
\bibliographystyle{amsalpha}
\noindent In the last two decades, gauge/gravity duality has emerged as a powerful tool to study various strongly correlated condensed matter systems \cite{jm, jmetal}. This connection between a gravity theory and a gauge theory was anticipated for many years in the form of the holographic principle, and was precisely conjectured for a particular gauge theory dual to a classical gravity theory in anti-de Sitter (AdS) spacetime \cite{jm}. Although the original conjecture related a gravity theory in AdS spacetime to a conformal field theory in one lower dimension, that is, at the boundary of the AdS spacetime where the gravity theory lives, subsequent research converged on a more general form of this strong/weak duality, between asymptotically AdS spacetimes and nearly conformal field theories at the boundary of AdS. This duality has since been utilised to study various physical systems from both sides. It turns out that many strongly correlated systems in condensed matter physics are difficult to handle with traditional field theoretic methods. Fortunately, gauge/gravity duality provides an opportunity to study such systems via their gravity dual models in one higher dimensional spacetime. These gravitational duals are far easier to deal with, as they can be studied in the classical general relativistic setting.\\
\noindent Inspired by the simple model of Abelian symmetry breaking around a charged black hole in AdS spacetime proposed in \cite{ssg}, a gravity dual model that mimics the properties of an $s$-wave superconductor was developed in \cite{hhh}. Since then, numerous investigations have explored such gravity duals, mimicking various types of superconductors in a wide range of physical situations \cite{ssp, gks, hr, pwpop, lcz, st, sgdr1, sg1, RGC, rc2, assg}. One particularly interesting direction has been to study the effect of nonlinear electrodynamics in these gravity duals. There are many ways to incorporate such nonlinearity, but the inclusion of Born-Infeld (BI) electrodynamics \cite{dbi1, dbi2, dbi3, kru} is of profound interest, as it is the only nonlinear theory that possesses duality symmetry just like ordinary Maxwell electrodynamics. Several studies have incorporated the effect of BI electrodynamics in holographic superconductors \cite{jc, sgdr, sgdr2, rbsg, sg2, pcgs, dgsg1, dgsg2, dgsg3}. Another motivation to consider BI electrodynamics comes straight from string theory, where BI electrodynamics describes the low energy behaviour of D-branes \cite{rbsg}.\\
\noindent There is another important class of gravity models, with a charged vector field in the bulk dual to the vector order parameter, corresponding to the holographic $p$-wave superconductor. Such a model, using an $SU(2)$ Yang-Mills field in the bulk, was first provided in \cite{ssp}. In this model, a gauge boson generated by one $SU(2)$ generator is dual to the vector order parameter. Unlike in an $s$-wave holographic superconductor, here the onset of the condensate spontaneously breaks not only the $U(1)$ symmetry but also the $SO(2)$ rotational symmetry in the $x$-$y$ plane \cite{RGC}.\\
\noindent Recently, a new gravity dual model for the $p$-wave superconductor has been proposed, using a complex vector field non-minimally coupled to the Maxwell field \cite{rc2}. A detailed analysis of the phase diagram for this model was also provided in \cite{rc2}. A similar phase diagram analysis has been carried out for a slightly modified version of this model, in which nonlinearity was incorporated in the Maxwell sector via the Born-Infeld parameter \cite{pcgs}. However, explicit analytic calculations of the condensation and the conductivity in this model have not been carried out in the literature. In this paper, we analytically obtain the critical temperature, the condensation operator value, and the conductivity for the holographic $p$-wave superconductor model proposed in \cite{rc2} in the presence of Born-Infeld electrodynamics. \\
\noindent We have organised this paper in the following manner. In section II, we develop the model and obtain the equations of motion for the matter field and the gauge field with an appropriate ansatz. In section III, we use the Sturm-Liouville method to find the critical temperature and the condensation operator. We calculate the conductivity for this model in section IV. Finally, we summarise our findings and draw relevant conclusions in section V. All our computations are performed in the probe limit, where the backreaction of the matter fields on the metric can be ignored.
\section{Set Up for $p$-wave Holographic Superconductors}
\noindent \hypertarget{sec2}{Holographic} superconductors with a $p$-wave gap are based on solutions of the field equations of Einstein-Yang-Mills theory with a cosmological constant. The action for this model reads
\begin{eqnarray}
\mathcal{S} = \dfrac{1}{2\mathcal{G}^2} \int d^{4}x \sqrt{-g}\bigg(\mathcal{R}-\dfrac{1}{4}(F^{a}_{\mu\nu})^{2}+\dfrac{6}{L^{2}}\bigg)
\label{EYM action}
\end{eqnarray}
where $F^{a}_{\mu\nu}$ is the field strength tensor of an $SU(2)$ gauge field.\\
\noindent \hypertarget{sec2}{We} work with the metric of a planar black hole in $AdS_{3+1}$ spacetime arising from the solution of Einstein gravity
\begin{eqnarray}
ds^{2}=-f(r) dt^{2}+r^{2}(dx^{2}+dy^{2})+\dfrac{dr^{2}}{f(r)}
\label{metric}
\end{eqnarray}
where $$f(r)=\bigg(r^{2}-\dfrac{r_0^{3}}{r}\bigg) $$ with $r_{0}$ being the event horizon of the black hole, and the AdS radius has been set to unity. The Hawking temperature associated with the above black hole geometry is given by
\begin{eqnarray}
T=\dfrac{3r_{0}}{4\pi}~.
\label{Hawking_temperature}
\end{eqnarray}
\noindent We now write down the model for holographic $p$-wave superconductor with the Lagrangian density consisting of a Maxwell field $A_{\mu}$ and a massive complex vector field $\rho_{\mu}$. The action for this model reads
\begin{eqnarray}
\mathcal{S} = \dfrac{1}{16\pi G}\int d^{4}x \sqrt{-g}\bigg(\mathcal{R}-2\Lambda +\mathcal{L}\bigg)
\label{Action}
\end{eqnarray}
where
\begin{eqnarray}
~~~~~~~~~ \mathcal{L}=\dfrac{1}{b}\bigg(1-\sqrt{1+\dfrac{b}{2}F^{\mu\nu}F_{\mu\nu}}\bigg)-\dfrac{1}{2}\rho_{\mu\nu}^{\dagger}\rho^{\mu\nu}-m^{2}\rho_{\mu}^{\dagger}\rho^{\mu}
\label{Lagrangian_density}
\end{eqnarray}
$$F_{\mu\nu} \equiv \partial_{[\mu}A_{\nu]} ~ , ~ D_{\mu} \equiv (\partial_{\mu}-iA_{\mu}) ~ , ~ \rho_{\mu\nu} \equiv D_{\mu}\rho_{\nu}-D_{\nu}\rho_{\mu}. $$
The Lagrangian density $\mathcal{L}$ consists of Born-Infeld electrodynamics and $b$ is the Born-Infeld parameter. Since the metric $g_{\mu\nu}$ depends only on $r$, we take the following ansatz for the matter field and the gauge field respectively $$\rho_{\mu}=\delta^{x}_{\mu}~\rho(r)~~,~~A_{\mu}=\delta^{t}_{\mu}\Phi(r)~.$$
Now varying the action $\mathcal{S}$ in eq.(\ref{Action}), we get the equations of motion for the matter field $\rho(r)$ and the gauge field $\Phi(r)$
\begin{eqnarray}
\rho^{\prime\prime}+ \dfrac{f^{\prime}}{f}\rho^{\prime}+\bigg(\dfrac{\Phi^{2}}{f^{2}}-\dfrac{m^{2}}{f}\bigg)\rho=0
\label{eom_rho}
\end{eqnarray}
\begin{eqnarray}
\Phi^{\prime\prime}+\dfrac{2}{r}\Phi^{\prime}(1-b\Phi^{\prime 2})-\dfrac{2\Phi\rho^{2}}{r^{2}f}(1-b\Phi^{\prime 2})^{3/2}=0
\label{eom_Phi}
\end{eqnarray}
where prime denotes the derivative with respect to $r$.\\
We now make the change of coordinate, $z=\dfrac{r_{0}}{r}$, such that the horizon is at $z=1$ while the AdS boundary is at $z=0$. In this coordinate, the field eq.(s)(\ref{eom_rho}, \ref{eom_Phi}) take the following form
\begin{eqnarray}
\rho^{\prime\prime}-\dfrac{3z^{2}}{(1-z^{3})}\rho^{\prime}+\dfrac{1}{z^{2}(1-z^{3})}\bigg(\dfrac{z^{2}\Phi^{2}}{r_{0}^{2}(1-z^{3})}-m^{2}\bigg)\rho=0
\label{eom_rho_in_z}
\end{eqnarray}
\begin{eqnarray}
\Phi^{\prime\prime}+\dfrac{2bz^{3}}{r_{0}^{2}}\Phi^{\prime3}-\dfrac{2\Phi\rho^{2}}{r_{0}^{2}(1-z^{3})}\bigg(1-\dfrac{bz^{4}}{r_{0}^{2}}\Phi^{\prime 2}\bigg)^{3/2}=0
\label{eom_Phi_in_z}
\end{eqnarray}
where prime denotes derivative with respect to the new coordinate $z$.\\
From the gauge/gravity duality dictionary, the behaviour of $\Phi(z)$ and $\rho(z)$ near the AdS boundary are known to be of the following form
\begin{eqnarray}
\Phi(z) = \mu - \dfrac{\tilde{\rho}}{r_{0}}z
\label{phi_asymptote}
\end{eqnarray}
\begin{eqnarray}
\rho(z) \simeq \dfrac{\rho_{+}}{r_{0}^{\Delta_{+}}}z^{\Delta_{+}}+\dfrac{\rho_{-}}{r_{0}^{\Delta_{-}}}z^{\Delta_{-}}
\label{rho_asymp}
\end{eqnarray}
where $\mu$ is the chemical potential and $\tilde{\rho}$ is the charge density of the boundary theory. $\Delta_{\pm}$ are the two roots of the indicial equation $\Delta(\Delta-1)=m^{2}$, namely
\begin{eqnarray}
\Delta_{\pm} = \dfrac{1}{2}(1 \pm \sqrt{1+4m^{2}})~.
\label{con_dim}
\end{eqnarray}
Here $\Delta$ is known as the conformal dimension, and it depends on $m^{2}$ through the above relation \footnote{This relation can be obtained using eq.(\ref{rho_asymp}) in eq.(\ref{eom_rho_in_z}).}.
It is apparent from eq.(\ref{rho_asymp}) that $\Delta$ must be real and positive. This condition restricts the choice of $m^{2}$: to keep $\Delta$ real, $m^{2}$ must satisfy the following lower bound.
\begin{eqnarray}
m^{2} \geq -\dfrac{1}{4}~.
\label{B_F_bound}
\end{eqnarray}
Eq.(\ref{B_F_bound}) is the well-known Breitenlohner-Freedman (BF) bound \cite{BF2}. The BF bound implies that the vector field, even with a negative mass squared, is stable in AdS spacetime as long as eq.(\ref{B_F_bound}) is satisfied. \\
With this setup in hand, we proceed to carry out the Sturm-Liouville analysis in the next section.
\section{St{\"u}rm-Liouville Analysis}
\subsection{Critical Temperature}
\noindent \hypertarget{sec4}{In} this section, we apply the Sturm-Liouville eigenvalue method to find the critical temperature and the value of the condensation operator. We first recall that the matter field $\rho(z)$ vanishes at the critical temperature $T_{c}$. Hence, at $T = T_{c}$, eq.(\ref{eom_Phi_in_z}) simplifies to the following form
\begin{eqnarray}
\Phi^{\prime\prime}+\dfrac{2bz^{3}}{r_{0}^{2}}\Phi^{\prime 3}=0~.
\label{eom_Phi_at_Tc}
\end{eqnarray}
The analytic solution of eq.(\ref{eom_Phi_at_Tc}) up to first order in the Born-Infeld parameter $b$ is given by \cite{sgdr}
\begin{eqnarray}
\Phi(z)= \lambda r_{0}(1-z) \bigg[1-\dfrac{b(\lambda^{2}|_{b=0})}{10}\zeta(z)\bigg]
\label{exact_Phi}
\end{eqnarray}
where $\zeta(z)=(1+z+z^{2}+z^{3}+z^{4})$ and $\lambda=\dfrac{\tilde{\rho}}{r_{0c}^{2}}$, $r_{0c}$ being the horizon radius at the critical temperature. From eq.(\ref{Hawking_temperature}), we find the expression for the critical temperature to be
\begin{eqnarray}
T_{c} = \dfrac{3}{4\pi}\sqrt{\dfrac{\tilde{\rho}}{\lambda}}~.
\label{Critical_temp_eq}
\end{eqnarray}
Now using eq.(\ref{exact_Phi}) in eq.(\ref{eom_rho_in_z}), we get the following field equation for $\rho$
\begin{eqnarray}
\rho^{\prime\prime}-\dfrac{3z^{2}}{(1-z^{3})}\rho^{\prime}+\dfrac{\lambda^{2}}{(1+z+z^{2})^{2}}\bigg(1-\dfrac{b(\lambda^{2}|_{b=0})}{5}\zeta(z)\bigg)\rho - \dfrac{m^{2}}{z^{2}(1-z^{3})}\rho=0~.
\label{eom_rho_at_Tc}
\end{eqnarray}
To proceed further, we consider the following non-trivial form of the field $\rho(z)$
\begin{eqnarray}
\rho(z)=\dfrac{\langle \mathcal{O}_{\Delta} \rangle}{\sqrt{2}r_{0}^{\Delta}}z^{\Delta}F(z)
\label{rho_sturm}
\end{eqnarray}
with the conditions $F(0)=1$ and $F^{\prime}(0)=0$. These boundary conditions on $F(z)$ are consistent with the behaviour of $\rho(z)$ near the AdS boundary given by eq.(\ref{rho_asymp}).
Substituting the form of the field $\rho(z)$ given in eq.(\ref{rho_sturm}) in eq.(\ref{eom_rho_at_Tc}), we obtain
\begin{eqnarray}
\hspace*{-23mm}(z^{2\Delta}(1-z^{3})F^{\prime})^{\prime}-(3\Delta z^{2\Delta +1}+m^{2}z^{2\Delta -2}-\Delta(\Delta -1)z^{2\Delta -2}(1-z^{3}))F \nonumber\\
+\dfrac{\lambda^{2}z^{2\Delta}(1-z)}{(1+z+z^{2})}\bigg(1-\dfrac{b(\lambda^{2}|_{b=0})}{5}\zeta(z)\bigg)F=0~.\hspace*{45mm}~
\label{field_eq_in_F_z}
\end{eqnarray}
Comparing eq.(\ref{field_eq_in_F_z}) with the standard form of the Sturm-Liouville eigenvalue equation, given by
\begin{eqnarray}
\dfrac{d(p(z)F^{\prime})}{dz}-q(z)F+\lambda^{2}r(z)F=0
\label{Sturm_form_F_z_eq_delta_1}
\end{eqnarray}
we can identify the form of the functions $p(z)$, $q(z)$ and $r(z)$ to be
\begin{eqnarray}
p(z) = z^{2\Delta}(1-z^{3}) \hspace*{30mm}~ \nonumber \\
q(z) = 3\Delta z^{2\Delta +1}+m^{2}z^{2\Delta -2}-\Delta(\Delta -1)z^{2\Delta -2}(1-z^{3}) \label{general functions}\\
r(z) = \dfrac{z^{2\Delta}(1-z)}{(1+z+z^{2})}\bigg(1-\dfrac{b(\lambda^{2}|_{b=0})}{5}\zeta(z)\bigg).\hspace*{10mm}~ \nonumber
\end{eqnarray}
We can now find the eigenvalue $\lambda^{2}$ in eq.(\ref{field_eq_in_F_z}) from the following relation
\begin{eqnarray}
\lambda^{2} = \dfrac{\displaystyle\int\limits_{0}^{1}dz \big(p(z)F^{\prime 2} + q(z)F^{2}\big)}{\displaystyle\int\limits_{0}^{1}dzr(z)F^{2}}~.
\label{eigenvalue_eq}
\end{eqnarray}
To estimate $\lambda^{2}$, we choose a trial function for $F(z)$ as $F_{\alpha}(z) = (1-\alpha z^{2})$. The eigenvalue $\lambda^{2}$ is determined by minimizing eq.(\ref{eigenvalue_eq}) with respect to $\alpha$. The value of $\lambda_{\alpha_{min.}}$ can then be used in eq.(\ref{Critical_temp_eq}) to determine the critical temperature of the $p$-wave holographic superconductor from the equation
\begin{eqnarray}
T_{c} = \dfrac{3}{4\pi}\sqrt{\dfrac{\tilde{\rho}}{\lambda_{\alpha_{min.}}}}~.
\label{Critical Temp Min.}
\end{eqnarray}
To move ahead, we select some particular conformal dimension via eq.(\ref{con_dim}). We would focus on the following two choices of $m^{2}$ and its corresponding conformal dimensions $\Delta=(\Delta_{+},~\Delta_{-})$.
\begin{eqnarray}
m^{2} = 0 ~~ \rightarrow ~~\Delta = (1,~0)
\label{m2=0}
\end{eqnarray}
\begin{eqnarray}
m^{2} = -\dfrac{3}{16} ~~ \rightarrow ~~ \Delta = \bigg(\dfrac{3}{4},~\dfrac{1}{4}\bigg)~.
\label{m2=-3/16}
\end{eqnarray}
We know that near the AdS boundary $\rho(z)$ takes the form given by eq.(\ref{rho_asymp}). In order to have spontaneous symmetry breaking, we set the source term $\rho_{-}=0$ for the above choices. Therefore the boundary behaviour of $\rho(z)$ is now given as
\begin{eqnarray}
\rho(z) \simeq \dfrac{\rho_{+}}{r_{0}^{\Delta_{+}}}z^{\Delta_{+}}~.
\label{rho_asymp_positive}
\end{eqnarray}
Since the subscript is no longer needed in the above equation, we drop it from now on.
\subsubsection*{\textbf{Case(I): $m^{2} = 0,~ \Delta = 1$}}
\noindent In this case, the functions $p(z)$, $q(z)$ and $r(z)$ are obtained by substituting $m^{2} = 0$ and $\Delta = 1$ in eq.(\ref{general functions}) and are given as
\begin{eqnarray}
p(z) = z^{2}(1-z^{3}) \hspace*{40mm}~ \nonumber \\
q(z) = 3 z^{3} \hspace*{53mm}~ \label{m=0 functions}\\
r(z) = \dfrac{z^{2}(1-z)}{(1+z+z^{2})}\bigg(1-\dfrac{b(\lambda^{2}|_{b=0})}{5}\zeta(z)\bigg). \nonumber
\end{eqnarray}
Using eq.(\ref{eigenvalue_eq}) with trial function $F_{\alpha}(z) = (1-\alpha z^{2})$ and eq.(\ref{m=0 functions}), the eigenvalue $\lambda^{2}$ reads
\begin{eqnarray}
\lambda_{\alpha}^{2} = \dfrac{\displaystyle\int\limits_{0}^{1}dz \big( 4\alpha^{2}z^{4}(1-z^{3}) + 3 z^{3} (1-\alpha z^{2})^{2}\big)}{\displaystyle\int\limits_{0}^{1}dz \dfrac{z^{2}(1-z)}{(1+z+z^{2})}\bigg(1-\dfrac{b(\lambda^{2}|_{b=0})}{5}\zeta(z)\bigg)(1-\alpha z^{2})^{2}}~.
\label{eigenvalue_eq_m=0}
\end{eqnarray}
Thus we obtain
\begin{eqnarray}
\lambda_{\alpha}^{2} = \dfrac{60\bigg(\alpha - \dfrac{3}{4} - \dfrac{27\alpha^{2}}{40}\bigg)}{\mathcal{D}_{1}}
\label{eigenvalue_solution_m2=0}
\end{eqnarray}
where\\
\hspace*{-12mm}$\mathcal{D}_{1} = \bigg[(30\ln3 - 10\sqrt{3}\pi + 21)\alpha^{2} + (120\ln3 -130)\alpha + (30\ln3 + 10\sqrt{3}\pi - 90) + \dfrac{b (\lambda^{2}|_{b=0})}{5}\bigg((30\ln3 + 10\sqrt{3}\pi - 85.91)\alpha^{2} + (-60\ln3 + 20\sqrt{3}\pi -48.14)\alpha + (72 - 60\ln3)\bigg)\bigg]$.
For $b=0$, the eigenvalue expression (\ref{eigenvalue_solution_m2=0}) reduces to the following form
\begin{eqnarray}
\lambda_{\alpha}^{2}|_{b=0} = \dfrac{60\bigg(\alpha - \dfrac{3}{4} - \dfrac{27\alpha^{2}}{40}\bigg)}{(30\ln3 - 10\sqrt{3}\pi + 21)\alpha^{2} + (120\ln3 -130)\alpha + (30\ln3 + 10\sqrt{3}\pi - 90)}
\label{eigenvalue_solution_m=0_b=0}
\end{eqnarray}
which attains minima at $\alpha \approx 0.50775$. The minimum value of $\lambda_{\alpha}^{2}|_{b=0}$ is found to be
\begin{eqnarray}
\lambda_{\alpha_{min.}}^{2}|_{b=0} \approx 13.7674~.
\label{lambda_m=0_b=0}
\end{eqnarray}
The critical temperature is then determined using eq.(\ref{Critical Temp Min.}) and reads
\begin{eqnarray}
T_{c} = \dfrac{3}{4\pi}\sqrt{\dfrac{\tilde{\rho}}{\lambda_{\alpha_{min.}}|_{b=0}}} \approx 0.1239\sqrt{\tilde{\rho}}~.
\label{critic_temp_m=0_b=0}
\end{eqnarray}
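\noindent The minimization above is straightforward to reproduce numerically. The following Python/SciPy sketch (ours, for illustration; not the computation used in the original analysis) evaluates eq.(\ref{eigenvalue_eq_m=0}) and minimizes it over $\alpha$, for $b=0$ as well as for small nonzero $b$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def lam2(alpha, b=0.0, lam2_b0=13.7674):
    zeta = lambda z: 1 + z + z**2 + z**3 + z**4
    num = quad(lambda z: 4*alpha**2 * z**4 * (1 - z**3)
               + 3*z**3 * (1 - alpha*z**2)**2, 0, 1)[0]
    den = quad(lambda z: z**2 * (1 - z) / (1 + z + z**2)
               * (1 - b*lam2_b0/5 * zeta(z))
               * (1 - alpha*z**2)**2, 0, 1)[0]
    return num / den

for b in (0.0, 0.01, 0.02, 0.03):
    res = minimize_scalar(lambda al: lam2(al, b),
                          bounds=(0, 1), method='bounded')
    Tc = 3/(4*np.pi) / res.fun**0.25  # T_c in units of sqrt(rho~)
    print(b, res.x, res.fun, Tc)
# b = 0 reproduces alpha ~ 0.50775, lambda^2 ~ 13.7674 and
# T_c ~ 0.1239 sqrt(rho~), as quoted above.
\end{verbatim}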
It is interesting to note that the critical temperature obtained in this case, for BI parameter $b = 0$, matches exactly the critical temperature obtained for the holographic $p$-wave superconductor constructed from Einstein-Yang-Mills theory \cite{sgdr1}.\\
Note that in eq.(\ref{eigenvalue_eq_m=0}) we would now use $\lambda_{\alpha_{min.}}^{2}|_{b=0}$ in place of $\lambda^{2}|_{b=0}$ for successive computations of the eigenvalues for different values of the BI parameter $b$. In that case, we can write eq.(\ref{eigenvalue_solution_m2=0}) as below
\begin{eqnarray}
\lambda_{\alpha}^{2} = \dfrac{60\bigg(\alpha - \dfrac{3}{4} - \dfrac{27\alpha^{2}}{40}\bigg)}{\mathcal{D}_{1}\Big|_{\lambda^{2}|_{b=0}\,=\,13.7674}}
\label{eigenvalue_solution_m=0_withlb0}
\end{eqnarray}
where we have substituted $\lambda^{2}|_{b=0} = 13.7674$~.\\
We now take small values of the BI parameter $b$ in eq.(\ref{eigenvalue_solution_m=0_withlb0}) and minimize with respect to $\alpha$ to find the corresponding eigenvalues $\lambda^{2}_{\alpha_{min.}}|_{b \ne 0}$. We then determine the critical temperature using eq.(\ref{Critical Temp Min.}). The critical temperature $T_{c}$ for some values of the BI parameter $b$ is given in Table I.
\subsubsection*{\textbf{Case(II): $m^{2} = -3/16,~ \Delta = 3/4$}}
\noindent In this case as well, we shall first find out the critical temperature when there is no BI correction, that is, $b = 0$ and shall then provide the critical temperature for some small values of the BI parameter $b$. To do so, we first write the functions $p(z)$, $q(z)$ and $r(z)$ deduced from eq.(\ref{general functions}) for this case. These functions have the following form for the present case
\begin{eqnarray}
p(z) = z^{3/2}(1-z^{3}) \hspace*{38mm}~ \nonumber \\
q(z) = \dfrac{33}{16} z^{5/2} \hspace*{47mm}~ \label{m=-3/16 functions}\\
r(z) = \dfrac{z^{3/2}(1-z)}{(1+z+z^{2})}\bigg(1-\dfrac{b(\lambda^{2}|_{b=0})}{5}\zeta(z)\bigg). \nonumber
\end{eqnarray}
Now we use the same trial function $F_{\alpha}(z)$, as in the previous case, along with the above functions to find eigenvalue given by
\begin{eqnarray}
\lambda_{\alpha}^{2} = \dfrac{\displaystyle\int\limits_{0}^{1}dz \big( 4\alpha^{2}z^{7/2}(1-z^{3}) + \dfrac{33}{16} z^{5/2} (1-\alpha z^{2})^{2}\big)}{\displaystyle\int\limits_{0}^{1}dz \dfrac{z^{3/2}(1-z)}{(1+z+z^{2})}\bigg(1-\dfrac{b(\lambda^{2}|_{b=0})}{5}\zeta(z)\bigg)(1-\alpha z^{2})^{2}}~.
\label{eigenvalue_eq_m=-3/16}
\end{eqnarray}
Upon solving for the integrals in the above expression, we get
\begin{eqnarray}
\lambda_{\alpha}^{2} = \dfrac{\dfrac{3465}{5040}\bigg(3780\alpha - 2970 - 3178\alpha^{2}\bigg)}{\mathcal{D}} ~.
\label{eigenvalue_solution_m=-3/16}
\end{eqnarray}
where \\
\hspace*{-12mm}$\mathcal{D} = \bigg[(-3465\ln3 + 3776)\alpha^{2} + (-3465\ln3 + 3465\sqrt{3}\pi -14916)\alpha + (1732.5\ln3 + 1732.5\sqrt{3}\pi - 1150)+\dfrac{b (\lambda^{2}|_{b=0})}{5} \bigg((1732.5\ln3 + 1732.5\sqrt{3}\pi - 11234.1667)\alpha^{2} + (6930\ln3 - 7974.1538)\alpha + (1732.5\ln3 - 1732.5\sqrt{3}\pi + 7994)\bigg)\bigg]$.\\
To find out the critical temperature in this case, we again put in some small values for the BI parameter $b$ in the above expression for the eigenvalue and then we go on to minimize it with respect to $\alpha$. After finding corresponding minimum values $\lambda^{2}_{\alpha_{min.}}$, we use eq.(\ref{Critical Temp Min.}) to determine the critical temperature $T_{c}$.\\
\noindent In Table I, we provide a tabular summary of the critical temperature with the Born-Infeld correction for both cases discussed above. Note that the presence of the BI parameter lowers the critical temperature in both cases.
\begin{table}[h]
\begin{center}
\begin{tabular}{ |c| c| c|}
\hline
~~~~~~Born-Infeld parameter, $b$~~~~~~ & \multicolumn{2}{c|}{~~~The critical temperature, $T_{c}$ ~~~~~~} \\
\hline
& ~~~~~~$m^{2}=0,~\Delta = 1$~~~~~~ &~~~~~ $m^{2}=-3/16,~\Delta = 3/4$ ~~~\\
\hline
0.0 & 0.1239$\sqrt{\tilde{\rho}}$ & 0.1425$\sqrt{\tilde{\rho}}$\\
\hline
0.01 & 0.1221$\sqrt{\tilde{\rho}}$ & 0.1414$\sqrt{\tilde{\rho}}$\\
\hline
0.02 & 0.1201$\sqrt{\tilde{\rho}}$ & 0.1402$\sqrt{\tilde{\rho}}$ \\
\hline
0.03 & 0.1182$\sqrt{\tilde{\rho}}$ & 0.1390$\sqrt{\tilde{\rho}}$\\
\hline
\end{tabular}
\label{tab1}
\end{center}
\caption{Critical temperature with the Born-Infeld correction}
\end{table}
\subsection{Condensation Operator}
\noindent Now we move on to find the condensation operator value. To calculate it we notice that near the critical temperature, we have $\rho (z)$ given by eq.(\ref{rho_sturm}). We have also found the solution for the field $\Phi(z)$ at the critical temperature $T_{c}$ (eq.(\ref{exact_Phi})). Now we expect that near the critical temperature, $\Phi(z)$ would slightly differ from eq.(\ref{exact_Phi}). For this reason, we add a small fluctuation $\chi(z)$ in the solution given in eq.(\ref{exact_Phi}) with appropriate boundary conditions. Hence, we have
\begin{eqnarray}
\Phi(z) = \lambda r_{0}(1-z)\bigg[1-\dfrac{b(\lambda^{2}|_{b=0})}{10}\zeta(z)\bigg] + \dfrac{\langle \mathcal{O}_{\Delta} \rangle^{2}}{r_{0}^{2\Delta -1}}\chi(z)
\label{phi_strum}
\end{eqnarray}
where $\chi(1) = 0$ and $\chi^{\prime}(1) = 0$. \\
To determine the specific form of the field $\Phi(z)$ near the critical temperature, we substitute eq.(\ref{phi_strum}) in eq.(\ref{eom_Phi_in_z}) keeping terms only of $\mathcal{O}(b)$ and $\mathcal{O}(\langle \mathcal{O}_{\Delta} \rangle^{2})$. This gives the following equation for the fluctuation field $\chi(z)$
\begin{eqnarray}
\chi^{\prime\prime}+ 6b\lambda^{2}z^{3}\chi^{\prime} = \dfrac{\lambda z^{2\Delta} F^{2}}{r_{0}^{2}(1+z+z^2)}\bigg[1-\dfrac{b}{2}\bigg(\dfrac{(\lambda|_{b=0})^{2}}{5}\zeta(z)+3\lambda^{2}z^{4}\bigg)\bigg]~.
\label{eom_chi}
\end{eqnarray}
As the BI parameter $b$ is very small, we approximate $\lambda^{2}$ in eq.(\ref{eom_chi}) with $(\lambda|_{b=0})^{2}$ whenever it appears with $b$. In that case, eq.(\ref{eom_chi}) reduces to
\begin{eqnarray}
\chi^{\prime\prime}+ 6b(\lambda|_{b=0})^{2}z^{3}\chi^{\prime} = \dfrac{\lambda z^{2\Delta} F^{2}}{r_{0}^{2}(1+z+z^2)}\bigg[1-\dfrac{b}{2}(\lambda|_{b=0})^{2} \bigg(\dfrac{\zeta(z)}{5}+3z^{4}\bigg)\bigg]~.
\label{eom_chi_1}
\end{eqnarray}
To find the solution of the above equation, we multiply it with $e^{\bigg(\dfrac{3b}{2}(\lambda|_{b=0})^{2}z^{4}\bigg)}$ and simplify it further to get the following form
\begin{eqnarray}
\bigg(e^{\bigg(\dfrac{3b}{2}(\lambda|_{b=0})^{2}z^{4}\bigg)}\chi^{\prime}\bigg)^{\prime} = \dfrac{\lambda z^{2\Delta} F^{2}}{r_{0}^{2}(1+z+z^2)}\bigg[1-\dfrac{b}{2}(\lambda|_{b=0})^{2} \bigg(\dfrac{\zeta(z)}{5}+3z^{4}\bigg)\bigg] e^{\bigg(\dfrac{3b}{2}(\lambda|_{b=0})^{2}z^{4}\bigg)}~.
\label{eom_chi_2}
\end{eqnarray}
Integrating eq.(\ref{eom_chi_2}) between $z=0$ and $z=1$ with the boundary conditions on $\chi(z)$ and $\chi^{\prime}(z)$, we find the following condition on the fluctuation field near the AdS boundary
\begin{eqnarray}
\chi^{\prime}(0) = -\dfrac{\lambda}{r_{0}^{2}}\mathcal{A}_{\Delta}
\label{chi_prime_near_0}
\end{eqnarray}
where
\begin{eqnarray}
\mathcal{A}_{\Delta} = \displaystyle\int\limits_{0}^{1}dz \dfrac{z^{2\Delta} F^{2}}{(1+z+z^2)}\bigg[1-\dfrac{b}{2}(\lambda|_{b=0})^{2} \bigg(\dfrac{\zeta(z)}{5}+3z^{4}\bigg)\bigg] \exp\bigg(\dfrac{3b}{2}(\lambda|_{b=0})^{2}z^{4}\bigg)~.
\label{Sturm Coeffi}
\end{eqnarray}
Taylor expanding $\chi(z)$ near the AdS boundary
\begin{eqnarray}
\chi(z) = \chi(0)+z\chi^{\prime}(0)+...
\label{chi expansion}
\end{eqnarray}
and comparing the coefficients of $z$ of eq.(s)(\ref{phi_strum}, \ref{phi_asymptote}) considering the above expansion of the field $\chi(z)$, we get
\begin{eqnarray}
- \dfrac{\tilde{\rho}}{r_{0}} = -\lambda r_{0}+\dfrac{\langle \mathcal{O}_{\Delta} \rangle^{2}}{r_{0}^{2\Delta -1}}\chi^{\prime}(0)~.
\label{rho chi 0 rel}
\end{eqnarray}
Now we use eq.(\ref{chi_prime_near_0}) to substitute for $\chi^{\prime}(0)$ in the above equation. This yields
\begin{eqnarray}
\dfrac{\tilde{\rho}}{r_{0}^{2}} = \lambda \bigg(1+\dfrac{\langle \mathcal{O}_{\Delta} \rangle^{2}}{r_{0}^{2\Delta +2}}\mathcal{A}_{\Delta}\bigg)~.
\label{operator_value_r0}
\end{eqnarray}
Finally we replace $r_{0}$ in terms of the Hawking temperature $T$ using eq.(\ref{Hawking_temperature}) and $\lambda$ in terms of the critical temperature $T_{c}$ using the relation $\lambda = \dfrac{\tilde{\rho}}{r_{0c}^{2}}$. This gives the condensation operator in the following form
\begin{eqnarray}
\dfrac{\langle \mathcal{O}_{\Delta} \rangle}{T_{c}^{(\Delta+1)}} = \sqrt{\dfrac{2}{\mathcal{A}_{\Delta}}}\bigg(\dfrac{4\pi}{3}\bigg)^{(\Delta+1)}\sqrt{\bigg(1-\dfrac{T}{T_{c}}\bigg)}~.
\label{condensation_op}
\end{eqnarray}
In the above result, $\Delta$ can take any positive value consistent with eq.(s)(\ref{con_dim}, \ref{B_F_bound}). It is also important to note that the condensation operator exhibits a second order phase transition, with the mean-field critical exponent $1/2$.\\
\noindent We have discussed two particular cases by choosing $m^{2}$ and the corresponding value for the conformal dimension $\Delta$ in the previous section. For those cases, the expression for the value of the condensation operator is given below.
\subsubsection{\textbf{$m^{2} = 0,~ \Delta = 1$}}
\noindent In this case, eq.(\ref{condensation_op}) reduces to the following form
\begin{eqnarray}
\dfrac{\langle \mathcal{O}_{1} \rangle}{T_{c}^{2}} = \sqrt{\dfrac{2}{\mathcal{A}_{1}}}\bigg(\dfrac{4\pi}{3}\bigg)^{2}\sqrt{\bigg(1-\dfrac{T}{T_{c}}\bigg)}
\label{condensation_op_m=0}
\end{eqnarray}
where
\begin{eqnarray}
\mathcal{A}_{1} = \displaystyle\int\limits_{0}^{1}dz \dfrac{z^{2} F^{2}}{(1+z+z^2)}\bigg[1-\dfrac{b}{2}(\lambda|_{b=0})^{2} \bigg(\dfrac{\zeta(z)}{5}+3z^{4}\bigg)\bigg] \exp\bigg(\dfrac{3b}{2}(\lambda|_{b=0})^{2}z^{4}\bigg)~.
\label{Sturm Coeffi_m=0}
\end{eqnarray}
We now evaluate $\dfrac{\langle \mathcal{O}_{1} \rangle}{T_{c}^{2}}$ in the limit $T \rightarrow 0$, where eq.(\ref{condensation_op_m=0}) gives
\begin{eqnarray}
\dfrac{\langle \mathcal{O}_{1} \rangle}{T_{c}^{2}} \simeq \sqrt{\dfrac{2}{\mathcal{A}_{1}}}\bigg(\dfrac{4\pi}{3}\bigg)^{2} \approx \dfrac{24.8137}{\sqrt{\mathcal{A}_{1}}}~.
\label{condensation_op_m=0_T0}
\end{eqnarray}
Taking the trial function $F_{\alpha} = (1 - \alpha z^{2})$ with the value of $\alpha$ that minimizes the eigenvalue $\lambda^{2}_{\alpha_{min.}}$ in $\mathcal{A}_{1}$ given by eq.(\ref{Sturm Coeffi_m=0}), we get
\begin{eqnarray}
\mathcal{A}_{1} = \displaystyle\int\limits_{0}^{1}dz \dfrac{z^{2} (1 - \alpha z^{2})^{2}}{(1+z+z^2)}\bigg[1-\dfrac{b}{2}(\lambda|_{b=0})^{2} \bigg(\dfrac{\zeta(z)}{5}+3z^{4}\bigg)\bigg] \exp\bigg(\dfrac{3b}{2}(\lambda|_{b=0})^{2}z^{4}\bigg)~.
\label{Sturm Coeffi_m=0_trial}
\end{eqnarray}
We first consider the case when $b = 0$. In this case, eq.(\ref{Sturm Coeffi_m=0_trial}) becomes
\begin{eqnarray}
\mathcal{A}_{1} = \displaystyle\int\limits_{0}^{1}dz \dfrac{z^{2} (1 - 0.50775 z^{2})^{2}}{(1+z+z^2)}~.
\label{Sturm Coeffi_m=0_trial_b=0}
\end{eqnarray}
In eq.(\ref{Sturm Coeffi_m=0_trial_b=0}) we have used the value $\alpha \approx 0.50775$ obtained in the previous section, at which the eigenvalue attains its minimum value $\lambda^{2}_{\alpha_{min.}}|_{b=0} \approx 13.7674$ in the absence of the BI correction. Using eq.(\ref{Sturm Coeffi_m=0_trial_b=0}) in eq.(\ref{condensation_op_m=0_T0}), we find that the value of $\dfrac{\langle \mathcal{O}_{1} \rangle}{T_{c}^{2}}$ is approximately 87.2482.\\
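\noindent This number is easy to reproduce; the following short Python sketch (ours, for illustration) evaluates $\mathcal{A}_{1}$ from eq.(\ref{Sturm Coeffi_m=0_trial_b=0}) and then the ratio in eq.(\ref{condensation_op_m=0_T0}):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

alpha = 0.50775    # minimizing value of alpha for b = 0
A1 = quad(lambda z: z**2 * (1 - alpha*z**2)**2 / (1 + z + z**2),
          0, 1)[0]
print(np.sqrt(2/A1) * (4*np.pi/3)**2)   # ~ 87.25
\end{verbatim}
For $b\neq 0$ one simply retains the bracketed BI factor and the exponential of eq.(\ref{Sturm Coeffi_m=0_trial}) inside the integrand.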
We have also considered the BI correction to the value of the condensation operator. These corrections are listed in Table II for some small values of the BI parameter $b$.
\subsubsection{\textbf{$m^{2} = -3/16,~ \Delta = 3/4$}}
\noindent We now present the value of the condensation operator for $m^{2} = -\dfrac{3}{16}$ and $\Delta = \dfrac{3}{4}$. From eq.(\ref{condensation_op}), we get the following form of the condensation operator value in the present case
\begin{eqnarray}
\dfrac{\langle \mathcal{O}_{3/4} \rangle}{T_{c}^{7/4}} = \sqrt{\dfrac{2}{\mathcal{A}_{3/4}}}\bigg(\dfrac{4\pi}{3}\bigg)^{7/4}\sqrt{\bigg(1-\dfrac{T}{T_{c}}\bigg)}
\label{condensation_op_m=-3/16}
\end{eqnarray}
where
\begin{eqnarray}
\mathcal{A}_{3/4} = \displaystyle\int\limits_{0}^{1}dz \dfrac{z^{3/2} F^{2}}{(1+z+z^2)}\bigg[1-\dfrac{b}{2}(\lambda|_{b=0})^{2} \bigg(\dfrac{\zeta(z)}{5}+3z^{4}\bigg)\bigg] \exp\bigg(\dfrac{3b}{2}(\lambda|_{b=0})^{2}z^{4}\bigg)~.
\label{Sturm Coeffi_m=-3/16}
\end{eqnarray}
As in the previous case, in the limit $T \rightarrow 0$ the value of $\dfrac{\langle \mathcal{O}_{3/4} \rangle}{T_{c}^{7/4}}$ is
\begin{eqnarray}
\dfrac{\langle \mathcal{O}_{3/4} \rangle}{T_{c}^{7/4}} \simeq \sqrt{\dfrac{2}{\mathcal{A}_{3/4}}}\bigg(\dfrac{4\pi}{3}\bigg)^{7/4} \approx \dfrac{17.3448}{\sqrt{\mathcal{A}_{3/4}}}~.
\label{condensation_op_m=-3/16_T0}
\end{eqnarray}
We now take the trial function $F_{\alpha} = (1 - \alpha z^{2})$, with the value of $\alpha$ that minimizes the eigenvalue $\lambda^{2}_{\alpha_{min.}}$, in $\mathcal{A}_{3/4}$ given by eq.(\ref{Sturm Coeffi_m=-3/16}), which gives
\begin{eqnarray}
\mathcal{A}_{3/4} = \displaystyle\int\limits_{0}^{1}dz \dfrac{z^{3/2} (1 - \alpha z^{2})^{2}}{(1+z+z^2)}\bigg[1-\dfrac{b}{2}(\lambda|_{b=0})^{2} \bigg(\dfrac{\zeta(z)}{5}+3z^{4}\bigg)\bigg] \exp\bigg(\dfrac{3b}{2}(\lambda|_{b=0})^{2}z^{4}\bigg)~.
\label{Sturm Coeffi_m=-3/16_trial}
\end{eqnarray}
We have considered the BI correction to the value of the condensation operator in this case as well which are listed in Table II for some small values of the BI parameter $b$.\\
\noindent In Table II, we display the value of the condensation operator near $T = 0$ for the two cases ($m^{2}=0,\ \Delta = 1$) and ($m^{2}=-3/16,\ \Delta = 3/4$). We noted in Table I that, for ($m^{2}=0,\ \Delta = 1$) with vanishing BI parameter $b$, the critical temperature $T_{c}$ matches exactly between the two holographic $p$-wave superconductor models. However, the value of the condensation operator given in Table II departs by a factor of $\sqrt{2}$ from the value obtained in the Einstein-Yang-Mills $p$-wave holographic superconductor \cite{sgdr1}. It is also worth noting that the BI correction increases the value of the condensation operator in both cases we have discussed.
\begin{table}[h]
\begin{center}
\begin{tabular}{ |c| c| c|}
\hline
~~~~~~Born-Infeld parameter ($b$)~~~~~~ & \multicolumn{2}{c|}{~~~The condensation operator value, $\langle \mathcal{O}_{\Delta} \rangle / T_{c}^{\Delta+1}$ ~~~~~~} \\
\hline
& ~~~~~~$m^{2}=0,~\Delta = 1$~~~~~~ &~~~~~ $m^{2}=-3/16,~\Delta = 3/4$ ~~~\\
\hline
0.0 & 87.2482 & 49.509\\
\hline
0.01 & 89.4636 & 50.1235\\
\hline
0.02 & 92.5642 & 50.8645 \\
\hline
0.03 & 96.9787 & 51.7611\\
\hline
\end{tabular}
\label{tab2}
\end{center}
\caption{Condensation operator value for different values of BI parameter}
\end{table}
\section{Conductivity}
\noindent In this section, we obtain the holographic conductivity as a function of frequency. This is accomplished by perturbing the bulk gauge field along a boundary direction; we consider a perturbation of the gauge field along the $y$-direction $$A_{\mu} = (0,0,\phi(r,t),0)$$
where $\phi(r,t) = A(r)~e^{-i\omega t}$. We retain the previous ansatz for the matter field, namely $$\rho_\mu=(0,\rho(r),0,0)~.$$
Varying the action $\mathcal{S}$ in eq.(\ref{Action}) with respect to $A(r)$ and ignoring terms of $\mathcal{O}(b^{2})$ and $\mathcal{O}(\omega^{2}b)$, we get the following equation of motion corresponding to $A(r)$
\begin{eqnarray}
\hspace*{-15mm}\bigg(1-\dfrac{3b}{2r^{2}}f(r)A^{\prime2}\bigg)A^{\prime\prime} + \dfrac{f^{\prime}(r)}{f(r)}\bigg(1-\dfrac{b}{r^{2}}f(r)A^{\prime2}\bigg)A^{\prime} \nonumber \\
+\dfrac{b}{r^{3}}f(r)A^{\prime3}+\bigg(\dfrac{\omega^{2}}{f^{2}(r)}-\dfrac{2\rho^{2}}{r^{2}f(r)}\bigg)A=0 \hspace*{15mm}~
\label{eom_A_r_b}
\end{eqnarray}
where prime denotes derivative with respect to $r$.
Eq.(\ref{eom_A_r_b}) is highly nonlinear and very difficult to solve, so for simplicity we ignore all the nonlinear terms. This is justified because the nonlinear terms in eq.(\ref{eom_A_r_b}) appear with the BI parameter $b$, which is very small. Note, however, that the effect of the BI parameter still enters the solution through $\rho(z)$, which we found in the previous section. Dropping the nonlinear terms in eq.(\ref{eom_A_r_b}) gives
\begin{eqnarray}
A^{\prime\prime} + \dfrac{f^{\prime}(r)}{f(r)}A^{\prime}+\bigg(\dfrac{\omega^{2}}{f^{2}(r)}-\dfrac{2\rho^{2}}{r^{2}f(r)}\bigg)A=0~.
\label{eom_A_r_c}
\end{eqnarray}
Now changing the coordinate to $z=\dfrac{r_{0}}{r}$, eq.(\ref{eom_A_r_c}) becomes
\begin{eqnarray}
A^{\prime\prime} + \bigg(\dfrac{f^{\prime}(z)}{f(z)}+\dfrac{2}{z}\bigg) A^{\prime}+\dfrac{r_{0}^{2}}{z^{4}}\bigg(\dfrac{\omega^{2}}{f^{2}(z)}-\dfrac{2z^{2}\rho^{2}}{r_{0}^{2}f(z)}\bigg)A=0~.
\label{eom_A_z_c}
\end{eqnarray}
We now move to the Tortoise coordinate given by
\begin{eqnarray}
r_{*} = \int \dfrac{dr}{f(r)}
\label{tortoise_r}
\end{eqnarray}
which can be written in the $z$ coordinate as
\begin{eqnarray}
r_{*} = -\int \dfrac{dz}{r_{0}(1-z^{3})}~.
\label{tortoise_z}
\end{eqnarray}
From eq.(\ref{tortoise_z}) we find that
\begin{eqnarray}
r_{*} = -\dfrac{1}{r_{0}}\bigg(\ln(1+z+z^2)^{1/6} - \ln (1-z)^{1/3}\bigg) - \dfrac{1}{\sqrt{3}r_{0}}\arctan\bigg(\dfrac{1+2z}{\sqrt{3}}\bigg) + \dfrac{\pi}{6\sqrt{3}r_{0}}~.
\label{r_star_full}
\end{eqnarray}
In the above equation, the integration constant $\dfrac{\pi}{6\sqrt{3}r_{0}}$ is chosen so that the AdS boundary lies at $r_{*} = 0$.
Keeping the leading behaviour of eq.(\ref{r_star_full}) near the horizon, we get
\begin{eqnarray}
r_{*} \simeq \dfrac{1}{3r_{0}}\ln (1-z)~.
\label{r_star}
\end{eqnarray}
In the Tortoise coordinate, eq.(\ref{eom_A_r_c}) leads to the following equation
\begin{eqnarray}
\dfrac{d^{2}A}{dr_{*}^{2}}+(\omega^{2}-V)A=0
\label{eom_A_r_star}
\end{eqnarray}
where $V$ is given by
\begin{eqnarray}
V=2(1-z^{3})\rho^{2}~.
\label{V_rho}
\end{eqnarray}
The solution to the above equation for $V = 0$ is straightforward and is given by
\begin{eqnarray}
A \sim e^{-i\omega r_{*}}~.
\label{V=0 solution}
\end{eqnarray}
Using eq.(\ref{r_star}) in the above solution, we get
\begin{eqnarray}
A \sim (1-z)^{-i\omega/3r_{0}}~.
\label{A_r_star_V_0}
\end{eqnarray}
We shall now generalize this solution for the case $V \neq 0$. In this case, we obtain
\begin{eqnarray}
A(z) = (1-z)^{-(i\sqrt{\omega^{2} - \langle V \rangle})/3r_{0}}
\label{general_A_z}
\end{eqnarray}
where $\langle V \rangle$ is defined as
\begin{eqnarray}
\langle V \rangle = \dfrac{\int dr_{*} V |A(r_{*})|^{2}}{\int dr_{*} |A(r_{*})|^{2}}~.
\label{<V>}
\end{eqnarray}
Now using eq.(\ref{rho_sturm}), with $F(z) \simeq 1$ near the boundary, we get
\begin{eqnarray}
V = (1-z^{3})z^{2\Delta}\dfrac{\langle \mathcal{O}_{\Delta} \rangle^{2}}{r_{0}^{2\Delta}}~.
\label{V_z}
\end{eqnarray}
Now using $V$ from eq.(\ref{V_z}) in eq.(\ref{<V>}) and using the fact that $r_{*} = -\dfrac{z}{r_{0}}$ near the boundary, we get the following expression for $\langle V \rangle$
\begin{eqnarray}
\langle V \rangle =\dfrac{\langle \mathcal{O}_{\Delta} \rangle^{2}}{2^{2\Delta}} \bigg(\dfrac{\Gamma (2\Delta +1)}{(-i\sqrt{\omega^{2} - \langle V \rangle})^{2\Delta}}\bigg)~.
\label{<V>2}
\end{eqnarray}
At low frequency, we can set $\omega = 0$, which leads to
\begin{eqnarray}
\langle V \rangle^{\Delta +1} = \dfrac{\langle \mathcal{O}_{\Delta} \rangle^{2}}{2^{2\Delta}} \Gamma (2\Delta +1)~.
\label{<V>final}
\end{eqnarray}
For the two choices of $\Delta$ that we made earlier, we have the following expressions for $\langle V \rangle$
\begin{eqnarray}
\Delta = 1 \hspace*{10mm} \longrightarrow \hspace*{10mm} \langle V \rangle = \dfrac{\langle \mathcal{O}_{1} \rangle}{\sqrt{2}} \hspace*{19mm}~
\label{V_for_m=0}
\end{eqnarray}
\begin{eqnarray}
\Delta = \dfrac{3}{4} \hspace*{10mm} \longrightarrow \hspace*{10mm} \langle V \rangle = \bigg(\dfrac{3}{8}\sqrt{\dfrac{\pi}{2}}\bigg)^{4/7}\langle \mathcal{O}_{3/4} \rangle^{8/7}~.
\label{V_for_m=-3/16}
\end{eqnarray}
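\noindent As a consistency check (ours), note that at low frequency $\sigma(\omega) \simeq \dfrac{i}{3\omega}\sqrt{\langle V \rangle}$, so the $\Delta = 3/4$ result above must reproduce the factor $\big(9\pi\langle \mathcal{O}_{3/4} \rangle^{4}/128\big)^{1/7}$ appearing in eq.(\ref{sigma_m=-3/16}) below:
\begin{verbatim}
import numpy as np
O = 2.0                                   # arbitrary test value
V = (3/8 * np.sqrt(np.pi/2))**(4/7) * O**(8/7)
assert np.isclose(np.sqrt(V), (9*np.pi * O**4 / 128)**(1/7))
\end{verbatim}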
Near $z \rightarrow 0$, we can expand $A(z)$ in eq.(\ref{general_A_z}) as
\begin{eqnarray}
A(z) \simeq A(0)+zA^{\prime}(0)+\mathcal{O}(z^{2})+...~.
\label{A_z_near_0_a}
\end{eqnarray}
On the other hand, the gauge field can be expanded near $z \rightarrow 0$ in the following manner
\begin{eqnarray}
A_{x}(z) \simeq A_{x}^{(0)}+\dfrac{A_{x}^{(1)}}{r_{0}}z +...~.
\label{A_z_near_0_b}
\end{eqnarray}
Now comparing eq.(s)(\ref{A_z_near_0_a}, \ref{A_z_near_0_b}), we get the following relations
\begin{eqnarray}
A_{x}^{(0)} = A(0)~~,~~ A_{x}^{(1)} = r_{0} A^{\prime}(0)~.
\label{related A's}
\end{eqnarray}
The expression for conductivity reads $$\sigma(\omega) = \dfrac{\langle J_{x} \rangle}{E_{x}} = -\dfrac{iA_{x}^{(1)}}{\omega A_{x}^{(0)}}~.$$
Then using eq.(\ref{related A's}), we get the following expression for conductivity
\begin{eqnarray}
\sigma(\omega) = -\dfrac{ir_{0}}{\omega} \dfrac{ A^{\prime}(0)}{A(0)}~.
\label{sigma_def}
\end{eqnarray}
Using eq.(\ref{general_A_z}) in eq.(\ref{sigma_def}), we find that
\begin{eqnarray}
\sigma(\omega) = \dfrac{1}{3}\sqrt{1 - \dfrac{\langle V \rangle}{\omega^{2}}}~.
\label{general_sigma}
\end{eqnarray}
Substituting $\langle V \rangle$ from eq.(\ref{<V>final}) in eq.(\ref{general_sigma}), we obtain the following expression for the conductivity in the low frequency limit,
\begin{eqnarray}
\sigma(\omega) = \dfrac{i}{3} \dfrac{\langle \mathcal{O}_{\Delta} \rangle^{1/(\Delta+1)}}{2^{\Delta/(\Delta+1)}}\dfrac{\Gamma (2\Delta+1)^{1/(2(\Delta+1))}}{\omega}~.
\label{Low_Freq_Conductivity}
\end{eqnarray}
It is clear from eq.(\ref{Low_Freq_Conductivity}) that the imaginary part of $\sigma(\omega)$ has a simple pole at $\omega = 0$. By the Kramers-Kronig relations, such a pole in $\mathrm{Im}\,\sigma$ corresponds to a delta function in $\mathrm{Re}\,\sigma$ at $\omega = 0$, so the DC conductivity diverges in this holographic $p$-wave superconductor model. Explicit low frequency expressions for the cases considered in this paper are the following
\begin{eqnarray}
\Delta = 1 \hspace*{10mm} \longrightarrow \hspace*{10mm} \sigma(\omega) = \dfrac{i}{3\omega}\sqrt{\dfrac{\langle \mathcal{O}_{1} \rangle}{\sqrt{2}}} \hspace*{19mm}~
\label{sigma_m=0}
\end{eqnarray}
\begin{eqnarray}
\Delta = \dfrac{3}{4} \hspace*{10mm} \longrightarrow \hspace*{10mm} \sigma(\omega) = \dfrac{i}{3\omega} \bigg(\dfrac{9\pi\langle \mathcal{O}_{3/4} \rangle^4}{128}\bigg)^{1/7}~.
\label{sigma_m=-3/16}
\end{eqnarray}
\section{Conclusions}
\noindent In this paper, we have studied a holographic model of a $p$-wave superconductor constructed from a massive vector field with nonlinear Born-Infeld electrodynamics in the matter sector of the Lagrangian. Working in the probe approximation, where the matter fields do not backreact on the spacetime geometry of the background, we have used a planar Schwarzschild-AdS metric. We have observed that the condensation gets suppressed due to the presence of the Born-Infeld parameter $b$. In fact, we have found that the critical temperature for the two choices of $m^{2}$ considered here, namely $m^{2} = 0$ and $m^{2} = -3/16$, decreases as we increase the value of $b$, making condensation harder. We have also analysed the effect of the Born-Infeld parameter on the condensation operator value. It turns out that the Born-Infeld correction to the condensation operator value is very nontrivial: the value of the condensation operator increases with the value of $b$ for both choices of $m^{2}$. \\
\noindent We would like to point out that for the choice $m^{2} = 0$, our result for the critical temperature without the Born-Infeld correction matches that of the earlier non-Abelian model of the holographic $p$-wave superconductor, which is conceptually very different from the model considered in this paper. However, as pointed out earlier, the value of the condensation operator differs from that of the earlier model of the holographic $p$-wave superconductor \cite{sgdr1}. This is because the two theories are quite different in form at the level of the action, although both exhibit $p$-wave characteristics. With these observations, we conclude that the presence of the Born-Infeld parameter makes condensation more difficult in the holographic $p$-wave superconductor model considered in this paper.\\
\noindent We have finally calculated the conductivity following a self-consistent approach developed in \cite{st} and have explicitly shown that the DC conductivity in this model indeed diverges. We would like to stress that such an analysis was absent in the context of $p$-wave holographic superconductors. \\
\noindent {\bf{Acknowledgements}}: DG would like to thank DST-INSPIRE, Govt. of India for financial support. SG acknowledges the Visiting Associateship at Inter-University Centre for Astronomy and Astrophysics, Pune.
\vspace{-5pt}
\subsection{Abstract}
NVMExplorer is an open-source framework for modeling, evaluating, and comparing embedded non-volatile memory solutions under different application and system-level properties and constraints.
NVMExplorer's code base includes (1) a python-based user interface for configuring and running design sweeps, (2) a modified and extended version of NVSim for memory array characterization, (3) a python-based application-level fault injection tool with a stand-alone interface, (4) scripts to both generate and parse associated configuration and output files from memory characterization, and (5) an analytical model extrapolates array-level data according to user-input application and system properties and constraints.
This release also includes (1) a per-technology database of properties extracted from paper survey of IEDM, VLSI, and ISSCC 2016-2020, (2) application characteristics for the workloads in our submission, including graph search, neural networks, and SPEC CPU2017, (3) fault model characteristics and data format transformations for fault injection studies, and (4) sample configuration files and customized cell-level characteristics.
NVMExplorer was developed and validated on both Mac and Ubuntu Linux systems, with successful configuration and some tests also on Windows.
Support for more advanced technology nodes and alternative memory characterization backends is under development.
\vspace{-5pt}
\subsection{Artifact check-list (meta-information)}
{\small
\begin{itemize}
\item {\bf Compilation:
\begin{verbatim}$ cd nvmexplorer_src/nvsim_src
$ make\end{verbatim}}
\item {\bf Execution: \begin{verbatim}$ python run.py config/main_dnn_study.json\end{verbatim}}
\item {\bf Output: \begin{verbatim}output/results/[eNVM]_1BPC-combined.csv\end{verbatim}}
\item {\bf How much disk space required (approximately)?: \normalfont{37MB}}
\item {\bf Preparation Time?: \normalfont{Less than 1 hour.}}
\item {\bf Execution Time (for this artifact)?: \normalfont{about 1 hour (desktop CPU).}}
\item {\bf Publicly available?: \normalfont{Yes}}
\item {\bf Code licenses (if publicly available)?: \normalfont{MIT}}
\item {\bf Archived (provide DOI)?: \url{https://zenodo.org/badge/latestdoi/375786583} }
\end{itemize}
}
\vspace{-5pt}
\subsection{Description}
\subsubsection{How to access}
[\href{https://zenodo.org/badge/latestdoi/375786583}{Zenodo}] for current release, [\href{https://github.com/lpentecost/NVMExplorer}{Github}] for up-to-date and development versions.
\subsubsection{Hardware dependencies}
Tested on a selection of laptop and desktop setups; no specific HW dependencies required.
\subsubsection{Software dependencies}
Python 3.8; pandas, numpy, (optional) pytorch for fault injection experiments; gcc
\subsubsection{Data sets}
Please see provided workload characterization results and default configuration settings in the following paths from the main NVMExplorer repository: \begin{verbatim} config/README.md\end{verbatim} \begin{verbatim} data/workload_data\end{verbatim} \begin{verbatim} output/NVM_data\end{verbatim}
\vspace{-5pt}
\subsection{Installation}
\begin{verbatim}$ git clone --recurse-submodules
https://github.com/lpentecost/NVMExplorer
$ cd nvmexplorer_src/nvsim_src
$ make\end{verbatim}
Prior to running NVMExplorer, please verify you are using Python 3.8 and have the pandas and numpy packages available.
\vspace{-5pt}
\subsection{Experiment workflow}
The general usage of NVMExplorer is via passing a JSON config file that specifies your desired design sweep to run.py:
\begin{verbatim}python run.py config/[config name].json\end{verbatim}
For example, to generate the DNN-focused case study in Section 4.1, you can run:
\begin{verbatim}python run.py config/main_dnn_study.json\end{verbatim}
and verify that the per-eNVM-technology CSV outputs generated are consistent with those provided in
\begin{verbatim}AE_dnn_output/[eNVM]_1BPC-combined.csv\end{verbatim}
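Once a run completes, the combined CSVs can also be inspected directly; for example (a minimal sketch -- the column names should be checked against the generated files rather than assumed):
\begin{verbatim}
import pandas as pd

# Load one technology's combined sweep results, e.g., RRAM.
df = pd.read_csv("output/results/RRAM_1BPC-combined.csv")
print(df.columns.tolist())   # inspect the actual schema first
print(df.head())
\end{verbatim}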
\section{NVMExplorer}
\label{sec:nvmexp}
At a high level, NVMExplorer~is a comprehensive design space exploration (DSE) framework integrating application-level characteristics, system constraints, and circuit and device parameters in a publicly-available, simple-to-use flow.
The overall structure of NVMExplorer\ (Fig.~\ref{fig:diagram_of_framework}) relies on three stages, described in more detail in the following subsections:
\begin{enumerate}
\item A comprehensive cross-stack configuration interface to specify the design space of interest. This configuration spans the computing stack from application (blue) and system (orange) down to circuits and devices (green).
\item An evaluation engine which automatically generates configurations, simulates memory arrays, processes application behavior, computes key metrics such as performance, power, area, accuracy, and lifetime, and generates meaningful visualizations. Evaluation steps which extend existing tools are shaded grey in Fig. \ref{fig:diagram_of_framework}.
\item An interactive, web-based visualization tool to aide discovering, filtering and refining eNVM design points.
\end{enumerate}
\vspace{-5pt}
\subsection{Cross-Stack Configuration}
\label{subsec:cross-stack}
To evaluate and compare eNVM solutions in system settings, it is not just cell or even array-level characteristics of a particular technology that matter.
Rather, viable solutions depend on the area/power budget of a system and how applications running on that system interact with the memory.
NVMExplorer\ provides a rich interface for configuring key application, system, and circuit and device parameters.
At the application level, the user inputs information about memory traffic, which may include the number of read and write operations, their proportion relative to the total number of memory accesses, and how accesses are spread out over execution time.
These configuration parameters may be fixed values (e.g., characterization results of a specific workload) or provided as ranges to generate generic memory traffic patterns.
Some applications may have additional demands or metrics which are tightly related to memory technology characteristics.
For example, machine learning applications or approximate computing methods may trade-off relaxed accuracy for performance and energy, and NVMExplorer~also provides an interface for designers to study the application interactions and implications of fault-prone eNVM solutions.
At the system level, the user has the freedom to evaluate a wide variety of memory configuration options by either setting performance, power, and area constraints and optimization goals or by choosing memory array specifications such as capacity, multi-level programming, bank configuration, and more.
The circuits and devices level of the design space configuration comprises per-technology memory cell characteristics, in addition to sensing and programming circuitry choices.
NVMExplorer~also provides a database of eNVM cell configurations derived from ISSCC, IEDM, and VLSI publications, as described in Section \ref{sec:tech_landscape}, but it is also possible (and encouraged!) for users to extend the current NVMExplorer~database with new simulation-based (\textit{i.e.} SPICE or TCAD models), measured, or projected circuit and device properties.
Once the full-stack specifications are set, NVMExplorer~automatically generates configuration files, which are used as input to the evaluation engine.
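To make this concrete, a cross-stack sweep specification has roughly the following shape (a hand-written illustration with hypothetical field names; the actual schema is documented in config/README.md):
\begin{verbatim}
# Hypothetical sketch of a sweep specification; real field names
# are documented in config/README.md.
sweep = {
  "application": {"reads_per_second": 1e6,
                  "writes_per_second": 1e4},
  "system":      {"capacity_mb": [1, 2, 4],
                  "optimization_target": "ReadEDP"},
  "technology":  {"cell_type": ["RRAM", "FeFET", "STT"],
                  "bits_per_cell": 1},
}
\end{verbatim}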
\input{tables/techcells}
\vspace{-5pt}
\subsection{Evaluation Engine}
Given the auto-generated cell and system-level sweep configurations, the evaluation engine produces memory array architecture characterizations and computes application- and system-level power, performance, area, and reliability metrics.
NVMExplorer\ combines a customized memory array simulator, an application-level fault injection tool, and an analytical model to extrapolate application-level metrics.
To characterize memory arrays, we rely on a customized version of NVSim, a previously validated tool to compute array-level timing, energy, and area~\cite{NVSim}.
We build on existing efforts to extend NVSim to support multi-level cells and ferroelectric-based eNVMs \cite{fefet-arxiv, maxnvm}.
In addition, we modified the tool interface to ease data collection and post-processing.
We introduce the capabilities of NVMExplorer~in comparing eNVMs in Section~\ref{subsubsec:mem_array_comp}.
Results of cell-level and circuit-level simulations can be used to parameterize fault models and perform application-level fault injection, as described in Section~\ref{subsubsec:fault_modeling}.
For performance estimation, in lieu of cycle-accurate simulation, we utilize a long-pole, bandwidth-driven model: given memory access latencies and available read/write bandwidth, it compares the aggregated access latency, per workload execution and per second of execution, against the workload's access statistics.
This is similar in spirit to the performance models in \cite{dark_silicon, nvdla}, and it serves the primary purpose of identifying memory solutions that cause application slowdown, rather than predicting precise latency implications.
To extract other critical application-level metrics, such as energy, we aggregate the read and write access energy based on the number of application accesses and array energy-per-access with the leakage power, scaling according to use-case and wake-up frequency for intermittent operation.
Similarly, memory lifetime is extrapolated by comparing the average reported endurance to the write access pattern per workload and the use-case.
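The following Python sketch captures the spirit of these analytical models (a simplified illustration with hypothetical names, not NVMExplorer's actual source):
\begin{verbatim}
def slowdown(reads, writes, t_read_s, t_write_s, runtime_s):
    """Long-pole model: flag memory-bound slowdown when aggregate
    access latency exceeds the workload's target runtime."""
    long_pole = reads * t_read_s + writes * t_write_s
    return max(1.0, long_pole / runtime_s)

def lifetime_years(endurance, writes_per_s, capacity_bits,
                   write_width_bits):
    """Extrapolate lifetime from write traffic vs. reported
    endurance, assuming writes spread evenly across the array."""
    per_cell = writes_per_s * write_width_bits / capacity_bits
    return endurance / per_cell / (3600 * 24 * 365)
\end{verbatim}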
\subsubsection{Example Array-Level Comparison}
\label{subsubsec:mem_array_comp}
Figure~\ref{fig:motivate_area_readperf} presents example array characterization output generated by NVMExplorer~after evaluating various eNVM configurations implemented in a 22nm node.
The design points are color-coded to highlight optimistic (green), pessimistic (red), or reference (blue) designs across surveyed publications per cell technology.
The figure also reports the characteristics of 16nm SRAM as a comparison point.
For each technology, we show array characterization under different optimization goals, which result in a variety of internal array architectures.
For example, we observe a wide range for the read-energy-per-bit of an iso-capacity SRAM array.
This result reflects the effect of different array optimization targets (read energy-delay product, write characteristics, area) on the internal bank configuration and periphery overhead.
This preliminary study already provides a few key takeaways.
Each eNVM is able to attain read access characteristics competitive with SRAM, with the exception of an array characterized with pessimistic underlying PCM cell characteristics.
However, write access characteristics vary dramatically across published eNVM examples, in addition to the range of reported endurance per technology.
The tension between these properties and potential storage density (even in the absence of multi-level cell programming) indicates that array-level comparison in isolation may guide a system designer towards sub-optimal solutions.
For example, a FeFET-based memory may seem a fitting choice for high-density, read-performant storage, but we find that both performance and energy efficiency of those memories are highly shaped by application traffic patterns and underlying cell assumptions.
Thus, the cross-stack nature of data exploration supported by NVMExplorer~is essential in guiding system-level choices and further investigation.
\subsubsection{Fault Modeling and Reliability Studies} \label{subsubsec:fault_modeling}
In addition to characterizing memory performance, power, area, and lifetime, NVMExplorer\ extends previously validated efforts in application-level fault injection to provide an interface for fault modeling and reliability studies~\cite{ares}.
Users can provide an expected error rate or more detailed, technology-specific fault models and storage formats to perform fault injection trials on application data stored in different eNVMs.
To quantify the impact on application-specific metrics of accuracy, the fault injection tool is tightly integrated with application libraries for data-intensive workloads, including PyTorch for DNNs and snap for graph processing \cite{pytorch, snapnets}, as well as numpy for generic application data.
As a demonstration, we perform SPICE simulations and extract fault characteristics associated with single-level vs. multi-level cell (SLC vs. MLC) programming and with sensing circuitry characteristics.
In this work, we consider a subset of eNVMs, namely, RRAM, CTT, and FeFET, whose fault characteristics could be derived from existing modeling efforts \cite{maxnvm, fefet-arxiv}.
We use our extended fault injection framework to simulate the impact of storing workload data in SLCs vs. MLCs in Section \ref{subsec:mlc}.
Armed with these additional capabilities, NVMExplorer\ can replicate the results of previous considerations of eNVM storage reliability \cite{maxnvm}, in addition to providing a broader platform for studying the interactions between programming choices, cell characteristics, and application accuracy.
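To give a flavor of the fault-injection interface, the sketch below applies a uniform random bit-flip model to a float32 weight tensor (our illustration only; NVMExplorer's actual fault models are technology- and format-specific rather than uniform):
\begin{verbatim}
import torch

def inject_bit_flips(weights: torch.Tensor, p: float):
    """Flip each bit of a float32 tensor independently with
    probability p (illustrative uniform fault model)."""
    bits = weights.contiguous().view(torch.int32)
    for b in range(32):
        mask = (torch.rand_like(weights) < p).to(torch.int32) << b
        bits = bits ^ mask
    return bits.view(torch.float32)
\end{verbatim}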
\vspace{-5pt}
\subsection{Exploring Results \& Conducting Studies} \label{subsec:visualization}
The figures in this work are snapshots from NVMExplorer's interactive web-based data visualization tool, which will be freely available at the time of publication of this work.
In each study, we filter and constrain evaluated results according to system optimization priorities and application use cases, as described in the text.
The basic NVMExplorer\ data visualization dashboard presents power, performance, area, and memory lifetime results across all user-configured sweep results (e.g., many application traffic patterns, array provisioning choices, and/or eNVM cell configurations) alongside array-level metrics for a holistic design exploration experience.
A user can filter results in terms of important constraints (e.g., latency or accuracy targets, power or memory area budget) and identify design points of interest.
While several features of these visualizations, built using Tableau~\cite{tableau}, are evident in the figures in this work, including dynamic filtering across plots, click-and-drag to narrow the design space, and pop-up details about results, we encourage the reader to use their imagination in how they might explore and filter the data shown in alternative ways according to their interests, questions, or confusions.
\section{Application-Driven Case Studies}
\label{sec:case_studies}
We now present three case studies that highlight different ways NVMExplorer~can search design spaces in order to identify benefits and limitations of the diverse range of eNVM storage solutions.
Each scenario presents unique optimization goals and system priorities and, in each case, we compare how each eNVM's power, performance, and area fares relative to a similarly-provisioned SRAM or DRAM in a baseline system.
\input{dnn}
\input{graph}
\input{llc}
\section{Technology Landscape} \label{sec:tech_landscape}
NVMExplorer\ provides a broad survey of published eNVM examples (Section \ref{subsec:cells}), which can be parameterized so that systems experts can make meaningful, high-level comparisons across technologies despite different underlying trade-offs and maturity (Section \ref{subsec:tentpoles}). We validate this approach per-technology against fabricated memory arrays (Section \ref{subsec:validation}).
\subsection{Cell Definitions} \label{subsec:cells}
We compile device- and array-level data across eNVM technologies, as summarized in Table \ref{tab:cell_level_ranges} alongside SRAM properties.
We source the majority of the cell-level parameters from ISSCC, IEDM, and VLSI publications and focus primarily on works from 2017-2020 to reflect the most recent range of achievable behavior per technology.
Previous efforts detailed the physical properties and limitations per technology \cite{ibm_survey}, while NVMExplorer\ focuses on compiling sufficient cell-level details and leaning on existing technology models to provide a broad and practical database of cell definitions.
While we hope these extracted cell definitions are helpful to the community in calibrating the current state-of-the-art, NVMExplorer\ is extensible as the design space continues to evolve, as demonstrated in Section \ref{sec:codesign}.
The technology classes in Table \ref{tab:cell_level_ranges} are at different levels of maturity.
For example, SOT is a relatively recent technology, and while it boasts very impressive write speed and lower write current compared to STT, it is not yet published at advanced process nodes.
We also see that endurance varies by multiple orders of magnitude across different technologies.
Adoption will therefore depend on the write intensity of target applications and on system dynamics, making memory lifetime estimation a critical design consideration.
Grey cells in Table \ref{tab:cell_level_ranges} indicate parameters unavailable in recent publications.
This could be due to proprietary restrictions on industry fabrication data or to experimental constraints.
However, for architects, it is important to have some concept of the possible range of values associated with these parameters.
In those cases where SPICE models for a technology are available, we use simulations to fill in missing parameters.
Alternatively, we consider older publications and consult with device experts to reason about cell and array parameters.
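In practice, each surveyed publication reduces to a compact record of cell parameters. The sketch below illustrates the kind of fields involved; the field names and placeholder values are ours for exposition and do not reproduce NVMExplorer's exact configuration schema.
\begin{verbatim}
# Illustrative cell record; field names and values are examples,
# not NVMExplorer's exact schema or a specific published cell.
rram_cell = {
    "technology": "RRAM",
    "cell_size_F2": 30,        # cell area in F^2
    "read_voltage_V": 0.4,
    "write_voltage_V": 2.0,
    "write_pulse_ns": 100,
    "endurance_cycles": 1e6,
    "retention_years": 10,
    "bits_per_cell": 1,        # 1 = SLC, 2 = MLC
}

def unreported(cell, required):
    """Flag parameters a publication did not report
    (the grey cells in the table)."""
    return [k for k in required if cell.get(k) is None]

print(unreported(rram_cell, ["cell_size_F2", "read_energy_pJ"]))
\end{verbatim}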
\section{Co-Design Opportunities} \label{sec:codesign}
Exploration of the design space in Section~\ref{sec:case_studies} shows that no single eNVM technology is best.
Rather, technology choice depends on the application and system-level targets.
This also means there are ample co-design opportunities across the computing stack -- from devices to architecture.
\begin{figure}[t]
\centering
\includegraphics[width=0.98\textwidth]{figures/FigBGfefet.png}
\caption{Back-gated (BG) FeFETs provide the high density and low operating power for example graph processing benchmarks with SRAM-comparable performance and begin to close the performance gap between non-BG FeFET and other memory technologies across SPEC2017 benchmarks.}
\vspace{-10pt}
\label{fig:bg_fefet}
\end{figure}
\subsection{Alternative FeFET fabrication choices unlock performant solutions for graph processing} \label{subsec:bg_fefet}
Previous FeFET-based device characterization and modeling efforts have exhibited write pulses on the order of $100ns$-$1\mu s$.
However, alternative FeFET fabrication strategies in early development stages, such as back-gated FeFETs \cite{intel_beol}, offer compelling potential advancements in write latency ($10ns$ programming pulse) and projected endurance ($10^{12}$ cycles).
Section \ref{subsec:graphs} noted that the primary limitation of FeFETs in the context of graph processing was an inability to meet the application latency targets under higher write traffic.
Thus, using the underlying cell properties of back-gated FeFETs reported in \cite{intel_beol}, we can rapidly re-examine whether this fabrication change makes FeFET-based memory viable for graph processing.
Figure \ref{fig:bg_fefet} shows the total memory power and total memory latency of an 8MB memory array of back-gated FeFETs (in yellow) compared to using previous FeFET standards (red, green) and SRAM (blue).
We examine these metrics under a range of read and write traffic patterns which are inclusive of the graph benchmarks described in Section~\ref{subsec:graphs} and the SPEC benchmarks used in Section~\ref{subsec:llc}, but here showing access patterns for an 8MB capacity LLC.
The underlying array-level characterization is shown in Figure \ref{fig:bg_fefet}, right. From the array characterization, we observe that the back-gated FeFETs show a slight increase in read energy per access and slight decrease in storage density compared to prior state-of-the-art cells.
However, we observe that they enable comparable application latency to SRAM across a wide range of write traffic where previous FeFET versions fall short.
Furthermore, back-gated FeFETs result in the lowest operating power over most of the range of read accesses per second, including for the example graph processing benchmark, Wikipedia--BFS8MB.
Based on these observations, we posit that back-gated FeFET memory may close the performance gap between prior FeFETs and other memory technologies (including SRAM) and unlock additional application domains.
NVMExplorer's ability both to quickly and efficiently gauge the impact of cell-level innovations and to match emerging device designs to compelling use cases can enable productive future co-design collaborations.
This feedback loop is mutually beneficial in providing direct motivation for further device development and encouraging system designers to integrate more energy-efficient, highly dense on-chip memory.
\begin{figure}[t]
\centering
\includegraphics[width=0.98\textwidth]{figures/FigAreaEff.png}
\caption{Results for 8MB arrays are filtered according to a maximum area efficiency (top right). Arrays with lower area efficiency are highlighted across all views and tend to result in low memory latency across many traffic scenarios.}
\vspace{-10pt}
\label{fig:area_eff}
\end{figure}
\vspace{-5pt}
\subsection{Trade area efficiency for performance} \label{subsec:area_eff}
One theme we can highlight across the architecture-driven case studies from Section~\ref{sec:case_studies} is that the subset of characterized results that exhibit lower area efficiency (i.e., internal array architectures that do less amortization of periphery and sensing overhead) also tends to result in lower total memory latency across many traffic scenarios. This is perhaps counter-intuitive given the effort spent in the devices community on manufacturing very small cell sizes. We also note in Figure \ref{fig:area_eff}, where such design points are highlighted across the plots, that slight advantages in terms of energy-per-access (e.g., Opt. STT and PCM compared to FeFET) tend to translate into large total power advantages in high-traffic scenarios. As such, pointing out to device designers the greater relative impact of reduced energy per access, rather than decreased cell size, could usher in a more productive, product-ready set of eNVM technologies.
Additionally, we observe that reducing energy per write access for STT and RRAM would drastically improve their relative power advantage for data-intensive applications, even at a cost of relatively lower area efficiency or storage density.
\subsection{Multi-Level Cell (MLC) advantages vary among eNVMs} \label{subsec:mlc}
While programming multiple bits per memory cell is an important strategy for increasing storage density across many eNVMs, previous work has revealed that MLC eNVMs may exhibit significantly higher fault rates that must be carefully considered in conjunction with application resilience \cite{maxnvm}.
NVMExplorer~enables efficient and broad probing of reliability vs. storage density by providing an application-agnostic fault injection tool and templates for technology-specific fault modes (Section~\ref{subsubsec:fault_modeling}).
To demonstrate, we quantify the application accuracy for ResNet18 image classification under weight storage in SLC vs. 2-bit MLC across multiple technologies for which there exists sufficient cell and circuit level data to produce detailed fault models.
The density vs. reliability trade-off is distinct for each technology.
For example, Figure \ref{fig:mlc} displays 8MB and 16MB characterized arrays, including 2-bit MLC RRAM and 2-bit MLC FeFET, filtered such that only those arrays meeting application latency requirements and maintaining image classification accuracy are included.
Note that these results replicate previous efforts that indicate that image classification inference is robust to 2-bit MLC RRAM storage (we also verified this for CTT-based memories with fault modeling details provided in \cite{maxnvm, dac_ctt}), while we show that MLC FeFET devices only exhibit acceptable accuracy for larger cell sizes.
This is because smaller FeFETs are more difficult to program reliably due to device-to-device variation \cite{fefet-arxiv}.
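As rough intuition for why multi-level programming stresses application resilience, consider the probability that an 8-bit weight reads back intact; the per-bit rates below are placeholders standing in for the SPICE-derived fault models used in the study.
\begin{verbatim}
# Placeholder per-bit read fault rates for SLC vs. 2-bit MLC cells;
# the study itself uses SPICE-derived, technology-specific models.
fault_rate = {("RRAM", 1): 1e-7, ("RRAM", 2): 1e-5,
              ("FeFET", 1): 1e-7, ("FeFET", 2): 1e-3}

def weight_corruption_prob(per_bit_rate, bits_per_weight=8):
    """P(at least one bit of an 8-bit weight reads back wrong)."""
    return 1.0 - (1.0 - per_bit_rate) ** bits_per_weight

for (tech, bpc), rate in sorted(fault_rate.items()):
    p = weight_corruption_prob(rate)
    print(f"{tech} {bpc}-bit: {p:.1e} of weights corrupted per read")
\end{verbatim}
Because a modest increase in per-bit fault rate compounds across every bit of a weight, 2-bit MLC storage can cross an application's accuracy threshold even when SLC storage of the same technology sits comfortably below it.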
\begin{figure}[t]
\centering
\includegraphics[width=0.98\textwidth]{figures/FigMLC.png}
\caption{When we consider multi-level cells (MLC)s and filter out memory solutions that don't provide acceptable ResNet18 inference accuracy after fault injection, we note MLC RRAM is denser and more performant than SLC RRAM, while MLC FeFET is only sufficiently reliable for larger cell sizes (red).}
\vspace{-12pt}
\label{fig:mlc}
\end{figure}
\subsection{Write buffering changes the performance landscape}
\label{subsec:write-buf}
In conjunction with technology innovations to reduce write latency, adoption of a wider set of eNVMs in general-purpose computing contexts could be made possible by employing existing architectural techniques to mask poor write characteristics.
For example, in an effort to extend memory lifetime and mask the performance impact of write access, a more performant technology (e.g., SRAM, or STT) could be employed as a write-buffer.
Rather than employ a costly and engineering-intensive cycle-accurate simulator to gauge the impact of provisioning a write buffer, NVMExplorer~enables an analytical study under user-specified traffic patterns to narrow the space of eNVMs worthy of further simulation and design effort. This approach answers the high-level questions of whether write-buffering could make additional eNVMs viable for applications with significant write traffic and, if so, how much benefit would need to be extracted from the write buffer.
\begin{figure}[t]
\centering
\includegraphics[width=0.98\textwidth]{figures/wbuff-latency.png}
\caption{Masking write latency or reducing write traffic via introduction of a write caching scheme could enable a broader set of eNVM technologies.}
\label{fig:write_buffer}
\end{figure}
\input{tables/scope}
For illustrative purposes, we consider a simple write cache that would hold write requests to the eNVM, write back to eNVM when the buffer is full, and allow in-place updates in the case of multiple writes to the same address before an update to eNVM.
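A minimal analytical sketch of this write cache, assuming a single coalescing fraction rather than simulating individual addresses, is shown below; the function and parameter names are illustrative rather than part of the NVMExplorer interface.
\begin{verbatim}
def masked_write_behavior(write_rate, coalesce_frac, sram_write_ns):
    """First-order model of an SRAM write buffer in front of an
    eNVM array.

    write_rate:    application writes/sec issued to the memory.
    coalesce_frac: assumed fraction of writes absorbed by in-place
                   updates to buffered lines (e.g., 0.5 for the
                   "reduced by half" scenario in the text).
    sram_write_ns: SRAM buffer write latency, which the application
                   observes while write-backs stay off the critical
                   path.
    """
    envm_write_rate = write_rate * (1.0 - coalesce_frac)
    observed_latency_ns = sram_write_ns  # optimistic masking bound
    return envm_write_rate, observed_latency_ns

# e.g., halving write traffic for a 10M writes/sec graph workload:
print(masked_write_behavior(1e7, coalesce_frac=0.5,
                            sram_write_ns=1.0))
\end{verbatim}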
Figure \ref{fig:write_buffer} shows the results for this study for SPEC2017 and Facebook-Graph-BFS.
Just buffering the writes will mask the effective write latency experienced by the system, while a write cache that allows updates could additionally reduce traffic and extend lifetime.
In particular, we look at the effects of masking write latency and reducing write traffic on total memory latency and power.
We observe that for Facebook-Graph-BFS, if the write traffic load is reduced by at least half, FeFET emerges as a performant option, while STT remains the lowest power solution for this particularly high-traffic workload.
STT and RRAM are still the optimal technology choices for SPEC2017 in terms of performance, but write-buffering could empower FeFETs as a lower-power alternative if latency could be masked or write traffic to the eNVM could be reduced by at least 25\%.
\section{Conclusion} \label{sec:conclusion}
Next-generation on-chip memory will need to push the boundaries of efficiency and density, and a diverse set of embedded non-volatile memory (eNVM) technologies have compelling characteristics to address these limitations.
NVMExplorer\ provides architects the flexibility to explore and compare these storage solutions under realistic constraints.
NVMExplorer~is open source, with interactive data visualizations freely available online, which we hope will unlock the potential of eNVMs in a broad range of systems.
\subsection{DNN Inference Accelerator}
\label{subsec:dnn}
Prior studies have demonstrated the potential benefits of eNVM storage for Deep Neural Network (DNN) inference accelerators \cite{maxnvm, memti, rram-dnn-blaauw}, albeit with limited scope in terms of eNVM technologies and cross-stack parameters considered.
NVMExplorer\ empowers researchers to approach a broader set of questions that compare eNVMs in different storage scenarios (e.g., limited to weights vs. storage of DNN parameters and intermediate results) and system constraints (e.g., strict area budget, or power budget).
In this work, we consider two distinct use cases for a DNN inference accelerator: continuous operation, as in image processing per frame of a streamed video input, and intermittent operation, where the system is woken up per inference task and can leverage the non-volatility of eNVM by retaining DNN parameters on-chip in power-off state between inferences.
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{figures/ResNet26-int.png}
\includegraphics[width=0.47\textwidth]{figures/ALBERT-int.png}
\caption{The eNVM storage solution (iso-capacity arrays provisioned per task, optimized for ReadEDP) that minimizes total memory energy consumption varies according to system wake-up frequency and DNN inference task; All solutions shown maintain application accuracy and a $<1s$ latency per inference.}
\vspace{-15pt}
\label{fig:intermittent}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=0.98\textwidth]{figures/FigPowerGraph.png}
\vspace{-5pt}
\caption{Memory power, latency, and projected lifetime for generic traffic patterns encompassing graph processing demands, including specific graph kernels as labeled. The lowest power solution depends on the expected read traffic. FeFET solutions fail to match SRAM performance. STT provides superior performance and memory lifetime.}
\vspace{-10pt}
\label{fig:graphs}
\end{figure*}
\subsubsection{Continuous Operation}
We consider the commonly-used and well-studied NVDLA \cite{nvdla_hotchips} as a base computing platform and compare its 2MB SRAM with iso-capacity eNVMs.
We use the NVDLA performance model \cite{nvdla} to extract realistic memory access patterns and bandwidth requirements of the on-chip buffer.
More specifically, we evaluate the power and performance of accesses to on-chip memory storing ResNet26 weights for single-task image classification using the ImageNet dataset vs.\ multi-task image processing, comprising object detection, tracking, and classification, at a consistent frame rate of 60 frames-per-second, as is typical for HD video.
We additionally consider the impact of storing activations in eNVM, though this comparison optimistically sets aside endurance limitations.
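As a back-of-envelope illustration of how a deployment scenario becomes a memory traffic specification, consider the following sketch; the per-frame access count and access width are assumptions for exposition, whereas the real values come from the NVDLA performance model.
\begin{verbatim}
# Translating a deployment scenario into memory traffic;
# per-frame counts and access width are assumed for illustration.
reads_per_frame = 3.0e6          # weight-buffer reads per frame
fps = 60                         # HD video frame rate
access_bytes = 8                 # assumed array access width
read_rate = reads_per_frame * fps             # accesses/sec
bandwidth = read_rate * access_bytes / 2**30  # GiB/s
print(f"{read_rate:.1e} reads/s, {bandwidth:.2f} GiB/s")
\end{verbatim}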
First, we observe the read and storage density characteristics for 2MB arrays using the cell-level tentpoles of several promising eNVM technology classes, as shown in Figure~\ref{fig:dnn_array} compared with SRAM.
Notice that read energy effectively divides arrays into two tiers.
STT, PCM, and RRAM offer lower read energies and competitive read latencies vs.\ SRAM.
In contrast, FeFET-based eNVMs suffer from higher read energies, but optimistic FeFET offers the highest storage density with low latency.
At similar low latency, optimistic STT offers 6$\times$ higher density than SRAM. PCM and RRAM outperform SRAM in terms of both read latency and storage density.
While such comparative insights can readily be extracted from this pair of plots, there are other important dimensions to also consider, and NVMExplorer\ facilitates more comprehensive analyses that consider the impact of application priorities and system-level use cases on eNVM design decisions.
Figure~\ref{fig:dnn_app} (left) summarizes total operating power (both dynamic access and leakage power) for the 2MB memory arrays characterized in Figure~\ref{fig:dnn_array} and accessed according to traffic patterns of different ResNet deployment scenarios, i.e., single- vs. multi-task and weights-only vs. storing both weights and activations.
These results exclude eNVM candidates that cannot support 60 FPS operation or maintain DNN accuracy targets.
Recall NVMExplorer\ includes fault injection wherein high eNVM fault rates can degrade model accuracy to unacceptable levels.
While not explicitly shown here, NVMExplorer\ exposes numerous additional interactions for users to probe and build intuition.
For example, while total memory power increases as the number of accesses per frame increases to compute multiple tasks, the ratio of read-to-write traffic stays roughly the same.
Hence, the relative power of eNVM arrays also remains similar.
In particular, PCM, RRAM, and STT all offer over 4$\times$ reduction in total memory power over SRAM.
One important reason for this is that SRAM leakage power will dominate compared to eNVM solutions, even under high traffic.
Of the energy-efficient solutions, STT offers best performance (lowest application latency per frame).
In contrast, optimistic FeFET offers higher storage density while maintaining 60FPS and a 1.5-3$\times$ power advantage over SRAM.
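The first-order model behind these power comparisons is worth making explicit. The sketch below uses placeholder energies and leakage values, chosen only to illustrate why leakage dominates for SRAM while dynamic access energy dominates for eNVMs; NVMExplorer's actual characterization adds periphery and banking detail.
\begin{verbatim}
def total_memory_power_mW(reads_per_s, writes_per_s,
                          read_energy_pJ, write_energy_pJ,
                          leakage_mW):
    """Total power = dynamic access power + standby leakage."""
    dynamic_mW = (reads_per_s * read_energy_pJ +
                  writes_per_s * write_energy_pJ) * 1e-9  # pJ/s->mW
    return dynamic_mW + leakage_mW

# Placeholder numbers showing why leakage dominates for SRAM:
sram = total_memory_power_mW(1e8, 1e5, 50, 60, leakage_mW=200)
envm = total_memory_power_mW(1e8, 1e5, 20, 500, leakage_mW=1)
print(f"SRAM-like: {sram:.1f} mW, eNVM-like: {envm:.1f} mW")
\end{verbatim}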
\input{tables/dnn}
\subsubsection{Intermittent Operation}
Let us now consider eNVM storage for two additional use cases that alter system-level optimization goals and corresponding eNVM selection, further highlighting the flexibility and ease of exploration the NVMExplorer\ framework offers.
A major advantage of storing DNN weights in eNVMs is that non-volatility supports intermittent operation that powers off the accelerator between inferences.
Using SRAM would either consume leakage power to keep the weight memory powered on or pay a latency and energy penalty to restore the weights from off-chip DRAM on each wake-up.
In this use case, we provision monolithic eNVM storage to hold all DNN weights (e.g., up to 32MB for Natural Language Processing (NLP) tasks).
For image processing, all weight memory accesses are to eNVM, eliminating the wake-up latency and power associated with loading parameters on-chip, in addition to reducing distance between compute system and higher-capacity memory.
Figure \ref{fig:dnn_app} (right) compares the resulting memory-energy-per-inference across eNVMs for both single-task image classification and multi-task image processing, as determined by the total number of accesses to retrieve all DNN weights over the course of processing one input frame.
The lowest-energy technology choice differs between single- and multi-task inference. Perhaps more interestingly, both winners are eNVM candidates with \textit{lower} storage density (RRAM and pessimistic FeFET), as opposed to the highest-density options (STT and optimistic FeFET), which hints at a cross-stack prioritization of read performance over cell size reduction, as probed further in Sec. \ref{subsec:area_eff}.
We repeat this study for single task vs. multi-task natural language processing using the ALBERT network, a relatively small-footprint, high-accuracy, transformer-based DNN \cite{albert}.
To further study this result, we dig into the implications of intermittent operation and compare the total energy versus the number of inferences per day, showing a continuum of wake-up frequency that may arise (e.g., deployed solar-powered agricultural sensors or satellites, or a voice-enabled assistant executing NLP tasks on wake-up).
The left plot of Figure~\ref{fig:intermittent} shows total memory energy as a function of inferences per day for image classification.
Here, total memory energy is presented as a proxy for device battery life.
From the figure, we observe that when the number of inferences per day is sufficiently low (less than $10^5$), optimistic FeFET yields the lowest energy.
At higher wake-up frequency, optimistic STTs take over because of the relatively lower energy-per-access.
Figure~\ref{fig:intermittent} (right) investigates the impact on an NLP workload.
While results are similar, optimistic STT emerges as the best technology at lower inference rates (as compared to image classification), because ALBERT requires more computational power per inference than ResNet26.
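One simple way such a crossover can arise is if technologies differ in a fixed per-day cost (e.g., wake-up and residual idle energy) as well as in per-inference access energy. The decomposition below uses placeholder numbers chosen only to reproduce the qualitative shape of Figure~\ref{fig:intermittent}, not measured values.
\begin{verbatim}
def daily_memory_energy_mJ(n_inf, fixed_mJ, per_inf_mJ):
    """Daily energy = fixed per-day cost (wake-up, residual idle)
    + per-inference weight-access energy. Placeholder model."""
    return fixed_mJ + n_inf * per_inf_mJ

# FeFET-like: negligible fixed cost, costlier accesses;
# STT-like: cheaper accesses. Crossover lands near 1e5 inf/day.
for n in (1e3, 1e4, 1e5, 1e6):
    fefet = daily_memory_energy_mJ(n, fixed_mJ=1, per_inf_mJ=0.010)
    stt = daily_memory_energy_mJ(n, fixed_mJ=400, per_inf_mJ=0.006)
    best = "FeFET" if fefet < stt else "STT"
    print(f"{n:.0e} inf/day: FeFET {fefet:.0f} mJ, "
          f"STT {stt:.0f} mJ -> {best}")
\end{verbatim}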
Table \ref{tab:dnn} summarizes the preferred eNVM technology across different use cases and tasks, with ``Opt. eNVM'' indicating the preferred choice under optimistic underlying cell characteristics and ``Alt. eNVM'' indicating the preferred technology under pessimistic assumptions and reference points. Table entries for intermittent operation are selected at a fixed wake-up rate.
Across a range of device wake-up frequencies and per-wake-up compute patterns, we observe that several eNVMs become compelling, and the preferred NVM choice for further investigation varies depending on both of these factors.
\subsection{Enabling Efficient Graph Processing} \label{subsec:graphs}
Our second case study explores the potential benefits of using eNVMs for graph processing, which imposes an entirely different set of constraints in terms of memory read and write characteristics.
Graph processing comprises many read-dominated tasks with less predictable data reuse than DNNs (e.g., search kernels), but still involves write traffic and, overall, is incredibly data-intensive in terms of memory bandwidth and capacity.
As an initial exploration of compatibility and viability between graph processing workloads and eNVM storage solutions, we consider the total power and resulting memory lifetime per technology under generic traffic patterns covering the range of read and write bandwidths for critical graph tasks, as described in previous workload characterization efforts \cite{beamer_graphs}.
As a proof of concept in a specific system, we additionally evaluate eNVM storage solutions under access patterns for benchmarks executed on a domain-specific accelerator \cite{graph_accel}.
\subsubsection{Analysis for generic traffic patterns}
We consider different memories experiencing a range of generic traffic patterns representing graph processing kernels (i.e., read access rates from 1-10GB/s and write access rates from 1-100MB/s) \cite{beamer_graphs}.
NVMExplorer\ provides a wide array of critical metrics to compare and user-configurable visualizations to extract important trends and limitations.
For example, in Figure~\ref{fig:graphs}, we choose to display total memory power against read traffic, as number of read accesses becomes a dominant factor in total power for read-dominated workloads, and total memory latency against write traffic, as overall performance for several eNVMs is strongly determined by write traffic.
As shown by Figure~\ref{fig:graphs}, left, total memory power generally increases with read access rate and the lowest power solution depends on the application traffic load.
For applications that exhibit fewer than $10^7$ read accesses per second, optimistic FeFET is a clear winner, while pessimistic FeFET and RRAM are the next-best candidates.
On the other hand, for higher rates of read traffic (e.g., $> 10^8$), optimistic STT is best.
For mid-range read access rates, PCM and RRAM are also viable solutions sometimes offering the lowest power solution.
However, this relationship alone does not dictate memory technology choice.
A slightly different and more consistent story emerges when we analyze the impact of different eNVMs on overall memory latency (both read and write) versus write access rates, shown by the middle plot of Figure~\ref{fig:graphs}.
While there is a clear preference for optimistic STT, RRAM and optimistic PCM are also worth considering.
In contrast, most pessimistic eNVM technologies and all FeFET-based solutions are significantly inferior, even failing to match SRAM performance for many traffic patterns.
When we additionally consider projected memory lifetime, STT emerges as the clear winner overall.
Note that the right chart of Figure~\ref{fig:graphs} plots the memory lifetime assuming continuous operation at a particular write access rate.
Hence, the highest write traffic always yields the lowest lifetime.
While RRAM seemed promising based on performance and power, it has the worst endurance and lowest lifetimes.
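Our lifetime projections follow the standard first-order scaling: with ideal wear-leveling, lifetime grows with cell endurance and the number of lines and shrinks with write rate. The endurance values in the sketch below are placeholders spanning the surveyed range; NVMExplorer's estimates include additional detail.
\begin{verbatim}
def lifetime_years(endurance_cycles, capacity_bytes, line_bytes,
                   writes_per_s):
    """Projected lifetime under continuous writes, assuming ideal
    wear-leveling: endurance * num_lines / write_rate."""
    num_lines = capacity_bytes // line_bytes
    seconds = endurance_cycles * num_lines / writes_per_s
    return seconds / (3600 * 24 * 365)

# Placeholder endurance values spanning the surveyed range:
for tech, endurance in (("RRAM", 1e6), ("PCM", 1e8), ("STT", 1e12)):
    yrs = lifetime_years(endurance, 8 * 2**20, 64, writes_per_s=1e6)
    print(f"{tech}: {yrs:.2e} years")
\end{verbatim}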
\subsubsection{Analysis for domain-specific systems}
In addition to relying on generic traffic to represent the full range of expected load of graph processing, NVMExplorer\ can also be leveraged to answer a more specific design question: For performance targets and traffic patterns to a specific storage resource in a graph processing accelerator system, which eNVMs offer compelling characteristics that warrant further investigation?
To this end, Figure~\ref{fig:graphs} also includes points, identified in pink, corresponding to memory traffic to run breadth-first search on two different social network graphs \cite{snapnets}.
Traffic patterns are extracted from throughput and accesses reported for the compute stream of a domain-specific graph processing accelerator utilizing an 8MB eDRAM scratchpad \cite{graph_accel}.
In the baseline system, about $90\%$ of the energy is spent on the eDRAM scratchpad (not including DRAM controller energy), with an operating power of at least 3.1W at the 32nm process technology node as reported from Cacti \cite{graph_accel, cacti}.
We analyze the benefits of replacing the 8MB eDRAM scratchpad with an iso-capacity eNVM array provisioned to meet the cited latency target (1.5ns).
If we exclude RRAM due to low lifetime projections, FeFET, PCM, and STT all offer significantly lower memory power (about 2-10$\times$ lower than SRAM) and even pessimistic STT offers consistent performance.
These observations, based on a realistic graph processing use case extracted from prior work, are consistent with the results generated using generic traffic patterns.
Again, optimal technology choice depends on higher, system-level optimization goals, and NVMExplorer\ provides critical insights in the presence or absence of a specific system solution and simulation results.
If the high-level goal is to maximize storage density, FeFET is highly attractive, but severely limited by poor write latency (unable to meet application latency expectations under the higher range of traffic patterns).
Rather than prematurely eliminating FeFET, designers can leverage NVMExplorer\ to study the impact of relaxing or adapting application targets or to explore co-design solutions that target improvements to the underlying technology (Sec.~\ref{subsec:bg_fefet}) or architecture (Sec.~\ref{subsec:write-buf}).
\section{Introduction}
The wide adoption of data-intensive algorithms to tackle today's computational problems introduces new challenges in designing efficient computing systems to support these applications.
Hardware specialization has shown potential in supporting state-of-the-art machine learning and graph analytics algorithms across several computing platforms; however, data movement remains a major performance and energy bottleneck.
As repeated memory accesses to off-chip DRAM impose an overwhelming energy cost, we need to rethink the way embedded memory systems are built in order to increase on-chip storage density and energy efficiency beyond what is currently possible with SRAM.
In recent years, CMOS-compatible, embedded nonvolatile memory (eNVM) research has transitioned from articles and technical reports to manufacturing flows and product lines.
These technologies hold incredible promise toward overcoming the memory wall problem.
For example, one approach inspired by these new technologies combines the advantages of highly specialized architectures with the benefits of non-volatile memories by leveraging analog compute capabilities~\cite{prime, isaac, pipelayer, cascade}.
On the other hand, the need for optimized on-chip storage solutions and memory innovation applies both to specialized hardware accelerators and to general-purpose CPU systems.
More broadly, prior works have unveiled incredible potential improvements in storage density and energy efficiency by employing eNVMs across various architecture domains~\cite{hankin, deepnvm++, maxnvm}.
With many publications showcasing the benefits of eNVM storage technologies, it is critical for system designers to be able to explore their varying capabilities and empower efficient future on-chip storage.
Unfortunately, the architecture and broader research community lacks a holistic tool to quantify the system- and application-level implications of memory cell technologies and to make informed decisions while navigating the vast eNVM design space.
\begin{figure}[t]
\centering
\floatbox[{\capbeside\thisfloatsetup{capbesideposition={right,bottom},capbesidewidth=0.4\hsize}}]{figure}[\FBwidth]
{\caption{Number of NVM publications from VLSI, ISSCC, and IEDM 2016-2020 (cited in text) shows strong interest in RRAM and STT and emerging technologies, such as ferroelectric-based ones.
}}
{\includegraphics[width=0.99\hsize]{figures/pie_fig1.png}\label{fig:design-space}}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=0.95\hsize]{figures/nvmexplorer_framework.pdf}
\caption{NVMExplorer~framework overview; cross-stack design space specifications and application characteristics are evaluated in an efficient multi-stage process, then displayed in an interactive set of data visualizations to enable informed, application-aware comparisons of future on-chip storage solutions, as described in more detail in Section \ref{sec:nvmexp}.}
\vspace{-5pt}
\label{fig:diagram_of_framework}
\end{figure*}
Figure~\ref{fig:design-space} summarizes device and circuit conference publications relating to eNVMs from 2016 to 2020 \cite{2016_1,2016_2,2016_3,2016_4,2016_5,2016_6,2016_7,2016_8,2016_9,2016_10,2016_11,2016_12,2016_13,2016_14,2016_15,2016_16,2016_17,2016_18,2016_19,2016_20,2016_21,2016_22,2016_23,2016_24,2016_25,2016_26,2016_27,2016_28,2016_29,2016_30,2016_31,2016_32,2016_33,2016_34,2016_35,2016_36,2016_37,2016_38,2017_1,2017_2,2017_3,2017_4,2017_5,2017_6,2017_7,2017_8,2017_9,2017_10,2017_11,2017_12,2017_13,2017_14,2017_15,2017_16,2017_17,2017_18,2017_19,2017_20,2017_21,2017_22,2017_23,2017_24,2017_25,2018_1,2018_2,2018_3,2018_4,2018_5,2018_6,2018_7,2018_8,2018_9,2018_10,2018_11,2018_12,2018_13,2018_14,2018_15,2018_16,2018_17,2018_18,2018_19,2018_20,2019_1,2019_2,2019_4,2019_5,2019_6,2019_7,2019_8,2019_9,2019_10,2019_11,2019_12,2019_13,2019_14,2019_15,2019_16,2019_17,2019_20,2019_21,2019_22, 2020_1,2020_2,2020_3,2020_4,2020_5,2020_6,2020_7,2020_8,2020_9,2020_10,2020_11,2020_12,2020_13,2020_14,2020_15,2020_16,2020_17,2020_18,2020_19,2020_20,2020_21,2020_22,2020_23,2020_24,2020_25}.
In the past five years, consistent interest in RRAM and STT was accompanied by emerging solutions with different physical properties such as FeFET-based memories.
Each published example offers compelling and distinct trade-offs in terms of read and write characteristics, storage density, and reliability.
In addition, the space of eNVM technologies is constantly evolving with certain technologies moving out of fashion or into production.
Given the fluidity and complexity of this design space, application experts and system designers need to be able to evaluate which cell technologies are most likely to provide better efficiency, higher storage density, or improvements on other key metrics in the context of different computing demands.
Similarly, device designers and memory architects need high-level guidance to co-design their innovations toward more practical and maximally beneficial future, heterogeneous memory systems.
This work introduces NVMExplorer, an end-to-end design space exploration framework that addresses key cross-stack design questions and reveals future opportunities across eNVM technologies under realistic system-level constraints, while providing a flexible interface to empower further investigations.
In this work, we describe NVMExplorer\ and present case studies made uniquely possible by the capabilities of NVMExplorer. In summary, NVMExplorer\ makes the following key contributions:
\begin{itemize}
\item An open-source code base including:
\begin{itemize}
\item A database of eNVM cells described in recent literature (122 surveyed ISSCC, IEDM, and VLSI publications) (Section \ref{subsec:cells})
\item A ``tentpole'' methodology to summarize limits and trends across technology classes (Section~\ref{subsec:tentpoles})
\item Our end-to-end evaluation flow (Fig. \ref{fig:diagram_of_framework})
\item Extensive source-code documentation
\item Many example configuration files and tutorial materials for cross-stack design studies
\item An interactive web-based data visualization dashboard (Section \ref{subsec:visualization})
\end{itemize}
\item A unified platform to explore the viability of eNVMs in specific application and system settings, which reveals cross-stack dependencies and optimization opportunities, in addition to reproducing and expanding previously published studies (e.g., \cite{maxnvm, hankin}) (Section \ref{sec:case_studies}).
\item A unified platform to perform co-design studies of application properties, system constraints, and devices in order to bridge the gap between architects and device designers for future eNVM solutions. Our example co-design studies reveal both opportunities and potential disconnects among current research efforts (Section \ref{sec:codesign}).
\end{itemize}
After describing NVMExplorer\ (Section \ref{sec:nvmexp}), we present a snapshot of the current eNVM landscape and extract a representative range of cell-level behavior (Section \ref{sec:tech_landscape}).
Surveying recent eNVM publications reveals diverse characteristics, highlighting the challenge in identifying solutions that satisfy a broad range of application scenarios.
Thus, Section \ref{sec:case_studies} presents application-driven case studies using NVMExplorer\ to explore and analyze eNVM storage solutions for DNN inference acceleration, graph processing, and general-purpose compute.
We find that each eNVM is viable in certain contexts, and the most compelling eNVM is dependent on application behavior, system constraints, and device-level choices.
This finding suggests the existence of many possible architecture-device co-design opportunities, which is the focus of Section \ref{sec:codesign}.
Finally, we differentiate NVMExplorer\ from related tools (Section \ref{sec:relatedwork}).
\subsection{Non-Volatile LLC Solutions} \label{subsec:llc}
Improved density and energy efficiency could revolutionize general-purpose on-chip storage, and recent efforts have endeavored to replace high-performance memories, like caches, with eNVM-based alternatives~\cite{korgaonkar,hankin,deepnvm++}.
However, caches must handle a large volume of writes depending on the application, so the achievable write latency and endurance per eNVM comes to the forefront of design considerations.
\begin{figure}[t]
\centering
\includegraphics[width=0.98\textwidth]{figures/FigLLC_16MB.png}
\caption{Array access characteristics, considered in isolation, for an iso-capacity replacement of a 16MB LLC.}
\vspace{-10pt}
\label{fig:array_comparison}
\end{figure}
In this study, we consider the last-level cache (LLC) of a high-performance desktop processor, similar to Intel's 14nm, 8-core Skylake.
The memory hierarchy includes private 32 KiB L1I\$ and L1D\$ caches; a private 512 KiB L2\$ (non-inclusive, write-back); and a shared 16 MiB ring L3\$ with 64 B lines and 16 ways (inclusive, write-back).
The system includes DRAM with 2 channels, 8 B/cycle/channel, and 42 cycles + 51 ns latency.
Representative application behavior comes from SPECrate CPU2017 (integer and floating point); we warm up the cache for 500M instructions and simulate 1 billion instructions in detail using the Sniper simulator \cite{SPEC_CPU2017, sniper}.
This provides application modeling data for a 16MB LLC (e.g., reads, writes, and execution time per benchmark) that serve as inputs to NVMExplorer (see Section~\ref{subsec:cross-stack}).
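Concretely, each benchmark distills to a small traffic record of the kind sketched below; the field names and numbers are illustrative placeholders, not Sniper's output format.
\begin{verbatim}
# Illustrative per-benchmark traffic record distilled from the
# Sniper runs; fields and numbers are placeholders.
llc_traffic = {
    "benchmark": "spec2017_example",
    "capacity_MB": 16,
    "reads": 4.2e8,       # LLC read accesses in simulated window
    "writes": 1.1e8,      # LLC write accesses (fills+write-backs)
    "exec_time_s": 0.41,  # simulated execution time
}
read_rate = llc_traffic["reads"] / llc_traffic["exec_time_s"]
write_rate = llc_traffic["writes"] / llc_traffic["exec_time_s"]
print(f"{read_rate:.2e} reads/s, {write_rate:.2e} writes/s")
\end{verbatim}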
First we focus on the array characteristics of the different memory technologies in isolation, as shown in Figure~\ref{fig:array_comparison}.
From the left plot, we note that the competitive range of read energies and read latencies does not reveal a clear winner.
For example, if read energy per access is the highest priority, FeFET, RRAM, or even SRAM offer array configurations that trade access latency for energy efficiency, while STT and optimistic FeFET offer pareto-optimal read characteristics.
For writes (Figure~\ref{fig:array_comparison}, right), a PCM-based last level cache appears to minimize energy per access.
On the other hand, only STT and RRAM are able to beat SRAM write latency.
Again, we find array characteristics in isolation do not offer sufficient guidance to choose the best eNVM for LLC, and NVMExplorer\ allows us to go further.
Figure~\ref{fig:spec_power_16MB} shows the resulting power, performance, and lifetime when using different eNVMs as LLC and assuming memory traffic from SPEC2017 benchmarks.
The leftmost figure shows total memory power versus read access rate, where each column of points corresponds to a particular benchmark traffic pattern.
We again see that the lowest power eNVM solution depends on the traffic pattern.
In broad terms, RRAM and FeFET fare better for lower read access rates, while PCM is better for higher rates until STT emerges as best for the highest rates.
In terms of memory access latency with respect to write access rates, STT is usually the best choice, though arrays unable to meet application bandwidth are excluded.
Lastly, the rightmost figure compares lifetimes across the eNVM technologies for a range of write access rates.
Again, STT offers the best longevity on average.
However, PCM and FeFET may warrant consideration for read-dominated workloads.
RRAM, on the other hand, does not appear viable as an LLC.
\section{Related Work} \label{sec:relatedwork}
Previous work in evaluating eNVMs can be characterized as either focusing on device- and array-level evaluations, or providing in-depth cross-stack analysis for a particular combination of eNVM and application target.
In Table~\ref{tab:scope}, we codify the key differences between NVMExplorer\ and related works. Survey works such as the Stanford Memory Trends \cite{stanford_memory_trends} maintain a list of key parameters, like storage capacity and write energy, while previously validated array-level characterization tools, such as NVSim \cite{NVSim}, characterize timing, energy, and area of eNVM-based memory structures.
DESTINY \cite{DESTINY} modifies NVSim to evaluate 3D integration and could be similarly extended and used as a back-end characterization tool for NVMExplorer.
To evaluate eNVMs in a system setting, prior work typically integrates NVSim with a system simulator.
DeepNVM/DeepNVM++ \cite{DeepNVM,deepnvm++} enables cross-layer modeling, optimization, and design space exploration of MRAM-based technologies in the context of GPU cache for DNNs using GPGPUSim.
NVMain \cite{NVMain} enables evaluation of eNVM-based main memory using gem5.
NeuroSim+ \cite{neurosim+} focuses on evaluation of processing-in-memory for DNN inference and training.
While these frameworks are valuable domain-specific explorations, NVMExplorer\ offers more breadth by including application-, system-, and device-level considerations and by accommodating a wider range of devices and application domains without requiring a separate system simulator. It additionally provides fault modeling and reliability analysis, metrics such as memory lifetime, and a database of surveyed cell characteristics with configurable device parameters. NVMExplorer\ is built for ease of navigation and exposes the unique cross-stack trade-offs among application characteristics, system constraints, and circuit- and device-level innovations through a user-friendly configuration interface and companion data visualization interface.
By integrating these components, NVMExplorer\ additionally provides a platform for architects and device designers to perform co-design evaluations required for the advancement of technologically-heterogeneous memory systems.
\subsection{Tentpoles of the Design Space}
\label{subsec:tentpoles}
Comparing eNVMs at varying stages of development and with varying underlying physical properties is a challenging task.
The case studies in this work aim to provide high-level guidance and relative judgments about which eNVM cell technologies are worthy of further investigation under specific system and application constraints.
Thus, rather than focus in on specific, physically accurate cell configurations, we aim to model the bounds of what is conceivable per eNVM technology across the full range of published recent academic work.
We liken identifying and evaluating these bounds per technology to forming the poles of a tent that encompasses the full extent of eNVM properties, so we call the extrema in cell-level characteristics (i.e., smallest cell size, lowest read energy, and best retention vs. largest cell size and lowest endurance) the device-level ``tentpoles''.
In an actively evolving technology space, this approach allows us to make meaningful classifications about which technologies are potentially adoptable solutions.
These modeling choices are classified into two fixed cell configurations for applicable technologies, as summarized in Section \ref{subsec:opt_pess} and the figure alongside Table \ref{tab:cell_level_ranges}.
We validate that the ``tentpoles'' of the cell-level design space result in array-level characterization that provides coverage of published memory array properties, as discussed in Section \ref{subsec:validation}.
\subsubsection{Optimistic and Pessimistic Cell Configurations}
\label{subsec:opt_pess}
For the technology classes most represented in our survey (Fig. \ref{fig:design-space}), we identify the published examples with the best-case and worst-case storage density in terms of Mb/F$^{2}$; these most- and least-dense examples anchor the bounds of the cell-level design space.
Any critical cell-level parameters not reported with those cell definitions (e.g., read characteristics and programming settings) are assigned the best (lowest power, highest efficiency) or worst (highest power, lowest efficiency) value per metric across all other recent publications with sufficient supporting data.
These best-case and worst-case technologies per class form the tentpoles of the underlying cell design space, and we label these fixed cell definitions as ``optimistic'' or ``pessimistic'' accordingly.
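In sketch form, the per-metric filling step looks like the following; this is an illustrative reconstruction of the methodology (the anchoring on most- and least-dense published cells is handled separately, as described above), not the tool's actual code.
\begin{verbatim}
def fill_tentpoles(published_cells, lower_is_better):
    """Fill optimistic/pessimistic parameter values from per-metric
    extremes across surveyed cells, skipping unreported (None)
    values. Illustrative reconstruction only."""
    opt, pess = {}, {}
    for metric, lower in lower_is_better.items():
        vals = [c[metric] for c in published_cells
                if c.get(metric) is not None]
        lo, hi = min(vals), max(vals)
        opt[metric], pess[metric] = (lo, hi) if lower else (hi, lo)
    return opt, pess

cells = [{"cell_size_F2": 20, "write_pulse_ns": 100},
         {"cell_size_F2": 60, "write_pulse_ns": 10},
         {"cell_size_F2": 35, "write_pulse_ns": None}]
print(fill_tentpoles(cells, {"cell_size_F2": True,
                             "write_pulse_ns": True}))
\end{verbatim}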
For the purposes of the case studies presented in Sections \ref{sec:case_studies} and \ref{sec:codesign}, all array- and application-level results are produced using these fixed underlying optimistic and pessimistic cell properties. We note, however, that a user of NVMExplorer\ can draw on these constructed, bounding example cells, on the full database of surveyed configurations, or on fully customized definitions with respect to cell size, access properties, and operating conditions (e.g., read/write voltage, temperature).
Corresponding fault models and error rates for reliability studies are extracted after optimistic vs. pessimistic cell-level properties are fixed, as discussed in one of the presented case studies (Section \ref{subsec:mlc}).
This approach is helpful for several reasons: first, these extremes help us answer exploratory questions about what we will likely see in the near future; second, comparing the best case of one technology to the worst case of another can help gauge less mature technologies against more mature reference points; third, if optimistic configurations are untenable or even pessimistic configurations are attractive in a specific system setting, we can build confidence for further exploration and more detailed modeling efforts without implementing and attempting to meaningfully compare many cell definitions with insufficient data.
A limitation of this methodology is that inherent trade-offs between certain parameters for a technology may not be linked (e.g., area, latency, and retention for STT); however, this amalgam of cell properties represents the full spectrum of achievable characteristics per technology, rather than specific fabricated results.
As a point of additional comparison, the results shown in the following studies include a reference cell configuration for RRAM as a relatively mature eNVM, with parameters derived from a specific industry result \cite{2018_1}.
The resulting optimistic, pessimistic, and reference cell size and write pulse are shown to the right of Table \ref{tab:cell_level_ranges}.
\begin{figure}[t]
\centering
\includegraphics[width=0.98\textwidth]{figures/stt_validate.png}
\caption{``Tentpole'' STT vs. published array data shows coverage of the space across critical metrics.}
\label{fig:validate}
\vspace{-10pt}
\end{figure}
\vspace{-5pt}
\subsection{Validation} \label{subsec:validation}
Our array-level area, energy, and latency characterizations rely on the previously-validated procedures of NVSim to extrapolate cell-level configurations and array design constraints to optimized memory layouts and properties~\cite{NVSim}.
However, in employing our ``tentpole'' approach, it is critical that we verify that array-level results using our optimistic and pessimistic underlying cell characteristics fully cover and match expectations of existing fabricated eNVM solutions.
Whenever possible, we select publications with array-level characterizations for a given technology, and compare those results to iso-capacity memory arrays modeled through our ``tentpole'' approach. Figure \ref{fig:validate} shows an example of such an exercise. We compare a 1MB STT-RAM array published at ISSCC in 2018 to optimistic and pessimistic STT design points produced by~NVMExplorer. Here, we note that our tentpole results effectively represent the range of actual array properties by producing metrics that are both higher and lower than, but similar in magnitude to, the reference STT-RAM array. The studies presented in this work consider only validated configurations for which we were able to either complete this validation exercise or run SPICE-level simulations.
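The coverage criterion itself reduces to a simple bracketing test, sketched below with invented numbers standing in for the published and modeled STT-RAM metrics.
\begin{verbatim}
# Hypothetical array-level metrics (mm^2, ns, pJ/bit); "reference"
# stands in for a published iso-capacity STT-RAM array.
optimistic  = {"area": 0.45, "read_latency": 2.1, "read_energy": 0.9}
pessimistic = {"area": 1.60, "read_latency": 6.8, "read_energy": 3.2}
reference   = {"area": 0.90, "read_latency": 4.0, "read_energy": 1.7}

def covered(opt, pess, ref):
    """True if every reference metric falls between the tentpole bounds."""
    return all(opt[k] <= ref[k] <= pess[k] for k in ref)

print(covered(optimistic, pessimistic, reference))  # expect True
\end{verbatim}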
It is worth noting that NVMExplorer\ is set up to evaluate all cell technologies in Table \ref{tab:cell_level_ranges} (e.g., though SOT is a compelling emerging solution and NVMExplorer users can configure and evaluate SOT-RAM, our survey found insufficient array-level data for validation, so it is omitted in Sections \ref{sec:case_studies} and \ref{sec:codesign}). System validation and application characteristics are derived from existing, state-of-the-art references, as addressed in each study in Section \ref{sec:case_studies}.
\section{Introduction}
While the mantra of a common distance, age, and composition is invariably cited to justify the study of star clusters as
testbeds of stellar evolution, the reality imposed by cluster dynamical evolution within a Galactic gravitational potential well
often limits the application of these admittedly valuable characteristics. The typical timescale for evaporation of open clusters is only a
few $\times$ $10^8$ years, as confirmed by statistical studies of star clusters within a few hundred parsecs of the Sun \citep{LA05},
while for clusters interior to the solar circle, the decline in numbers beyond 1 Gyr in age is even more dramatic \citep{JA94, CA14}.
For clusters at greater distance, contamination of sparsely populated regions of the color-magnitude diagram (CMD) by field stars,
sometimes complicated by differential reddening, can make delineation of the more rapid phases of post-main-sequence evolution an
exercise in self-delusion. These limitations heighten the impact of any richly populated, nearby open cluster older than 2 Gyr, as
exemplified by the prolific literature on iconic clusters such as M67 and NGC 188 and, more recently, NGC 2420 and NGC 6791. A surprisingly
understudied cluster for many years was NGC 6819 \citep{BU71, AU74}, surprising because of its extremely rich, well-defined CMD from
the main sequence to the tip of the giant branch within the underpopulated cluster age range of 2-4 Gyrs. Its profile has changed
dramatically in the last few years due to its inclusion within the Kepler field, making it the focus of asteroseismic studies reaching
down the giant branch \citep{ST11}, with a rapidly expanding literature related to the cluster and its
members \citep{AT13,JE13,PL13,YA13,WU14,WL14}.
With the goal of using atmospheric Li to probe stellar structure and evolution among low mass stars, the authors
have undertaken an extensive spectroscopic program to survey members of a key set of open clusters from the
tip of the giant branch to as far down the main sequence as the technology allows. First results have been published
for the clusters NGC 3680 (age = 1.75 Gyr) \citep{AT09} and NGC 6253 (3.0 Gyr) \citep{AT10, CU12}. Among the clusters
currently under analysis are NGC 7789 (1.5 Gyr) and NGC 6819 (2.3 Gyr), with over 300 stars observed in each. The age of NGC 6819
places the turnoff stars in a mass range where partial degeneracy at hydrogen exhaustion slows the evolutionary rate enough
to populate both the subgiant branch and the first-ascent giant branch below the red giant clump. Preliminary spectroscopic
analysis has already led to the discovery of a unique Li-rich giant fainter than the level of the clump \citep{AT13}, below the point
where standard stellar evolution models predict the initiation of mixing assumed to create Li-rich atmospheres.
However, high dispersion spectroscopic analysis requires reliable input parameters for the models used in interpreting the spectra,
specifically temperatures and surface gravities. These, in turn, demand precise estimates of the cluster reddening
and distance, usually derived from comparison of the observed CMD to theoretical isochrones of appropriate age and metallicity.
The goal of this paper is to define the fundamental properties of NGC 6819 through an analysis of the cluster on
the $uvbyCa$H$\beta$ photometric system. The efficacy of this approach has been demonstrated many times in the past, including studies
of NGC 5822 \citep{CA11}, NGC 6253 \citep{TA03}, NGC 6791 \citep{AT07}, and Mel 71 \citep{TW06}. Of particular value for open clusters
in rich and extended fields is the photometric capability to derive individual reddening estimates while eliminating probable
field stars which may have proper motions consistent with cluster membership, a point we will return to in Sec. 3.
The outline of the paper is as follows: Sec. 2 discusses the CCD observations and their reduction to the
standard system for intermediate-band photometry; Sec. 3 uses the photometry, in conjunction with proper-motion membership, to
identify and isolate probable cluster members which become the core data set for selecting single, main sequence stars
for reddening and metallicity estimates in Sec. 4. We also test (and confirm) the existence of the reddening gradient
across the face of the cluster as derived by \citet{PL13} (hereinafter referred to as PL). Sec. 5 contains a discussion
of the potential impact of the new reddening and metallicity on the cluster distance and age and a summary of our conclusions.
\section{Observations and Data Reduction}
\subsection{Observations}
Intermediate and narrow-band imaging of NGC 6819 was completed using the WIYN 0.9-m telescope on UT dates July 2-7, 2011,
June 14-20, 2012, and June 8-13, 2013. Fourteen of these nineteen nights were usable for photometry but only a few
were totally photometric. For all nights other than five during the June 2012 run, the S2KB CCD was used at
the $f$/7.5 focus of the telescope for a 20\arcmin $\times$ 20\arcmin\ field with 0.6\arcsec\ pixels. $vbyCa$H$\beta$
data from five of the nights in 2012 were obtained using the smaller T1KA chip with identical pixel size. The
psf-based photometry for each smaller frame for each filter was transformed to the instrumental system of the S2KB
chip using a linear calibration and a color term coupled to $(b-y)$. However, the standard stars observed during
these five nights have not been utilized for calibration purposes; only standards observed with the S2KB chip have
been used for calibration. All seven filters are from the 3\arcsec $\times$ 3\arcsec\ filter set owned jointly by the University of Kansas and Mt. Laguna Observatory.
Bias frames and dome flats for each filter were obtained every night, with sky flats observed at twilight for the $u$, $v$, and $Ca$
filters when feasible. On all photometric nights, fields in NGC 6819, as well as standard stars and extinction stars, were observed over a range in air mass. Extinction coefficients were separately determined for each photometric night.
Standard IRAF routines were used to perform initial processing of the frames, i.e. bias-subtraction and flat-fielding.
Illumination corrections were applied for frames obtained in 2013 and for the shortest wavelength frames obtained in 2012.
A fairly comprehensive discussion of our procedure for obtaining PSF-based instrumental magnitudes and merging multiple
frames of a given filter can be found in \citet{AT00}.
Our calibrations to the standard extended Str\"omgren system are based on aperture photometry in the program
cluster, of field star standards, and of stars in NGC 6633 for each photometric night. For every frame contributing
to the photometric calibration solution, aperture magnitudes for standard stars are obtained within apertures scaled
to five times the FWHM for the frame; sky annuli are uniformly chosen with the inner radius one pixel larger
than the aperture and a uniform annular width. A number of sources were consulted for field star standard index values,
including the catalog of \citet{TA95} for $V$, $b-y$ and $hk$ indices, catalogs of $uvby$H$\beta$ observations by
\citet{OL83,OL93,OL94}, and compilations of H$\beta$ indices by \citet{HM98} and \citet{SN89}.
H$\beta$ indices for stars in NGC 6633, obtained by \citet{ED76}, were used to augment the field star
standards in calibrating H$\beta$ indices.
Following standard procedures for Str\"omgren photometry, a single $b-y$ calibration equation was derived for
warmer dwarfs and giant stars and a separate calibration equation for dwarfs with $(b-y)_0 \geq 0.42$.
Calibrations of $m_1$ and $c_1$ for cooler giants are determined independently from calibrations applied
to bluer dwarfs or calibrations applied to cooler main sequence stars. All photoelectric standards, field
stars and cluster stars alike, were used to determine slopes and color terms for the calibration equations, summarized in Table
1. An independent zero-point was determined for each calibration equation on each night, based on field
star standards, with the exception that stars in NGC 6633 were used as H$\beta$ standards for the one
photometric night contributing to the calibration of H$\beta$ indices.
As is our usual procedure, we extend the calibration equations to the indices for NGC 6819 stars based on merged
profile-fit photometry
by determining the average differences between profile-fit indices and indices determined
from aperture photometry in the cluster on each photometric night. These data presented an unusually challenging
case for this aperture correction scheme, caused jointly by moderately poor seeing on some nights and a relatively
crowded cluster core field. It was necessary to determine the average difference between the aperture and
profile-fit indices based on carefully selected -- and not very large -- sets of
stars for which no neighbors would be included inside apertures of $\sim 20$\arcsec\ radius.
With such aperture corrections, the calibration equations from each photometric night may be applied to the aperture photometry
in the program cluster, and then by extension to the profile-fit indices in NGC 6819, with an independent zero-point determined
for each equation from each photometric night.
Several indicators of the precision of the calibration equations' zero-points are presented in Table 1. For each photometric
night, $\sigma_1$ quantifies the dispersion of calibrated values about standard values for field star standards; $\sigma_2$
quantifies the dispersion among zero-points for the several photometric nights. Both $\sigma$ values are standard deviations.
The final calibration equation's zero-point is determined from a weighted sum of the independent night evaluations; the
final statistic labeled ``sem" denotes the standard error of the weighted mean.
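For concreteness, the weighted combination of nightly zero-points and its standard error can be computed as below; the nightly values are invented for illustration, and inverse-variance weighting is assumed.
\begin{verbatim}
import numpy as np

# Hypothetical nightly zero-points and their uncertainties (mag).
zp    = np.array([0.012, 0.018, 0.009, 0.015])
sigma = np.array([0.006, 0.009, 0.005, 0.008])

w = 1.0 / sigma**2                    # inverse-variance weights
zp_mean = np.sum(w * zp) / np.sum(w)  # weighted mean zero-point
sem = np.sqrt(1.0 / np.sum(w))        # standard error of weighted mean
print(f"zero-point = {zp_mean:.4f} +/- {sem:.4f} (sem)")
\end{verbatim}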
Final photometry on the $uvbyCa$H$\beta$ system can be found in Table 2, where most columns are self-explanatory:
(X,Y) CCD positions have been translated to the right ascension and declination coordinate frame of PL. As described further in Section 3, we were able to match
a majority of our photometric sample to the positions and photometry presented in PL; membership probabilities for the nearly 4000 matched stars are included in Table 2 where available. Stars are
included in Table 2 only if they were observed at least twice in both $y$ and $b$ filters to construct $V$ and $b-y$,
twice each in $v$, $Ca$, and $u$ for $m_1$, $hk$, and $c_1$, respectively. Two observations each in the narrow and wide
filter were required for the H$\beta$ index to be retained. Standard errors of the mean for the indices are calculated by combining the errors for
individual filters in quadrature and are defined solely by the internal, i.e. frame-to-frame, precision of the individual filters. The increase in scatter among the indices for the brightest stars is due to the inclusion of two bright, very red giants that exhibit apparent variability and to the reduced number of CCD frames with exposure times short enough to leave the brightest stars unsaturated. A plot of the average {\it sem} for each index as a function of
$V$ can be seen in Fig. 1. The longer, on-line version of Table 2 includes
photometry for 7187 stars, subject to the limitations described above as well as restriction to stars with $\sigma_{b-y} \leq 0.10$.
\subsection{Comparison to Previous Photometry}
Our $V$ magnitudes can be compared directly to
the three comprehensive broad-band surveys to date which include $V$ photometry \citep{RV98, KA01, YA13}
(hereinafter referred to as RV, KA, and YA, respectively) with some surprising results. Beginning with RV,
in the bottom panel of Fig. 2 we show the residuals in $V$, in the sense (RV - Table 2), for 1690 stars matched via the WEBDA (X,Y)
coordinates for RV and our data. No known variables or stars with larger than average errors have been eliminated
from the figure. The overall pattern exhibits a mean offset close to 0.00 at all magnitudes with increasing
scatter as $V$ approaches 18. Somewhat surprising is the dispersion in the residuals among the brighter stars.
One would expect that, given the high precision of both photometric surveys at $V$ = 16 and brighter, the trend
should narrow significantly toward smaller $V$.
After running a number of tests, the source of the scatter emerged as a gradient in the $V$ photometry
with right ascension or X in WEBDA coordinates. Using 566 stars brighter than $V$ = 16 to minimize the impact
of larger photometric scatter, we plot the residuals in $V$ as a function of X CCD position in the top panel of Fig. 2.
The trend is obvious and is confirmed even if stars at all magnitudes are included. Fitting a linear relation to the data, the residual gradient becomes
\noindent
$\Delta V = (0.003 \pm 0.001) - (0.043 \pm 0.002) \times X \;\; \textrm{(kilopixels)}$
\noindent
Applying this to the photometry of RV, the revised residual plot is shown in the middle panel of Figure 2. The improvement is obvious; the mean
residual is, by definition, 0.000 with a dispersion of only $\pm$0.016 for $V\leq 16$ and $\pm$0.029 for all stars in the sample.
Again, using only stars brighter than $V$ = 16, no correlations of significance were found for the residuals with either
$B-V$ color or declination (Y). We can attribute the trend to the photometry of RV because a comparison
between KA and RV shows the same gradient. We also note that position-dependent gradients of similar size have
been identified in the broad-band data of \citet{GI98} for NGC 7789 \citep{TW13}.
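Applying the fitted gradient to place the RV magnitudes on our system amounts to a one-line correction; the function below is only a sketch, with array names chosen for illustration.
\begin{verbatim}
def correct_rv(v_rv, x_kpix):
    """Remove the position-dependent V gradient fitted above.

    v_rv   : RV V magnitudes
    x_kpix : RV CCD X positions in kilopixels
    """
    dv = 0.003 - 0.043 * x_kpix   # fitted residual (RV - Table 2) vs. X
    return v_rv - dv              # corrected RV magnitude
\end{verbatim}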
Turning next to KA, we plot in Fig. 3 the residuals in $V$, in the sense (KA - Table 2), for 1819 stars
common to the two surveys. We emphasize that the residuals have a slightly larger range than in Fig. 2 and no stars
with residuals more negative than -0.10 mag have been excluded from the plot. Two trends are obvious:
(1) There is a distinct dichotomy among the stars above and below $V$ = 14. The mean offset among the brighter sample
is about -0.01 mag but the scatter is excessive, while the fainter sample shows a mean offset closer to 0.02 mag.
(2) Among the fainter stars, the dispersion is highly asymmetric, with a long and well-populated tail toward larger
residuals, virtually independent of the magnitude.
Both of these trends are confirmed through comparisons between RV and KA, with or without a spatial correction to the
data of RV. Among the brighter sample, there is weak evidence for a spatial dichotomy in the photometric zero-point coupled
to declination, with stars in the north being offset from those in the southern half of the field by 0.03 to 0.04 mag, but the
sample is small. For the stars fainter than $V$ = 14, no obvious spatial gradient emerges which would explain the extended tail toward
positive residuals. The asymmetry may be a reflection of the way in which crowded stellar images have been handled by the PSF
software. KA provided no comparisons with the photometry of RV so this issue was never addressed.
The third and final large survey is that of YA in $VI$. While YA include some discussion of the residuals between their
work and earlier studies, noting good agreement, their comparison sample includes fewer than 100 stars, more than an order of
magnitude fewer than we can readily find in common with RV and KA. The reason for this deficiency remains a mystery.
For comparison purposes, the data of Table 2 were cross-matched with those of YA after the CCD coordinates for both samples were
transformed to right ascension and declination, as defined in PL. Using a match radius of 1\arcsec\ and eliminating any
stars with magnitude differences greater than 0.2 mag to minimize mismatches leaves a common sample of 1619 stars to $V$ = 18.5.
The residuals in $V$, in the sense (YA - Table 2), are plotted in Figure 4. The agreement with YA is excellent, with a modest
asymmetry to the positive side of the residuals. From 516 stars brighter than $V$ = 16.0, the mean offset is +0.003 $\pm$ 0.023;
for all 1619 stars, the mean residual is +0.004 $\pm$ 0.028, where the quoted errors are standard deviations.
We conclude that our $V$ photometry is on the same system as YA and RV, corrected for a spatial gradient, to better
than a few millimagnitudes in the mean. By contrast, the sample of KA exhibits a distinct change in zero point at
$V$ $\sim$ 14, with the fainter sample approximately 0.03 mag too faint relative to the other three studies.
\section{The Color-Magnitude Diagram}
The CMD based upon ($V, b-y$) for all stars in Table 2 is shown in Fig. 5.
Stars with {\it sem} errors in $b-y$ below 0.015 are shown as open circles, while stars with errors greater than
this cutoff to an {\it sem} limit of 0.15 mag are plotted as crosses. While some primary features of the CMD, the location of the
turnoff and the red giant clump, are obvious, due to the area of the sky covered by the frames, field star confusion makes
delineation of the giant branch below the clump and the fainter main sequence almost impossible.
As a first attempt to minimize the field star contamination, we plot in Fig. 6 all stars within 5\arcmin\ of the cluster center as defined
by PL. The improvement in isolating the cluster is readily apparent, though the reduction in both field stars and
cluster members for $V \geq 18$ still leaves a respectable level of contamination. The cluster core exhibits a rich population of blue
stragglers, a well-defined red clump and bright giant branch, but a non-negligible degree of confusion among potential subgiants and
first-ascent red giants below the clump.
Fortunately, the astrometric analysis of the cluster by PL supplies proper-motion membership for the majority of the stars in our field.
As stated in Sec. 2, we have transformed our CCD (X,Y) coordinates to the (RA,DEC) system of PL, which should be J2000, epoch 2009.875.
As a first step, all stars' coordinates common to both data sets and coincident within a radial distance of 2\arcsec\ were identified.
Next, the stars common to YA and PL were used to derive a transformation between the $g$, $g-r$ photometry of PL and the $V, V-I$ data of YA.
From 1250 stars brighter than $V$ = 18, we found
\noindent
$ V = g -(0.038 \pm 0.006) - (0.595 \pm 0.018)(g-r) + (0.061 \pm 0.012)(g-r)^2 $
\noindent
$V-I = (0.403 \pm 0.010) - (0.408 \pm 0.029)(g-r) + (0.358 \pm 0.020)(g-r)^2 $
\noindent
The dispersions among the residuals in $\Delta$$V$ and $\Delta$$(V-I)$ are 0.022 mag and 0.029 mag, respectively. With the transformed
$g$ photometry in hand, any star common to PL and Table 2 through RA and DEC which showed a difference in $V$ greater than 0.1 mag was
excluded from the sample. Finally, all stars with membership probabilities less than 50\% were excluded. Three stars brighter than
$V$ = 17 with proper-motion probabilities above 50\% which had been excluded due to an excessive $V$ residual were reinstated. All
other excluded stars from the original match are either fainter than $V$ = 17 and/or have membership probabilities below 50\%. The
resulting ($V, b-y$) CMD for all members within the CCD frames is given in Fig. 7. Symbols
have the same meaning as before, but the sample has been cut at $V$ = 19 to reduce scatter caused by increasing errors in $b-y$.
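For reference, the adopted transformations and membership screen can be expressed compactly; the coefficients are those quoted above, while the variable and function names are illustrative.
\begin{verbatim}
def pl_to_vi(g, gr):
    """Transform PL (g, g-r) to (V, V-I) using the quadratic fits above."""
    v  = g - 0.038 - 0.595 * gr + 0.061 * gr**2
    vi = 0.403 - 0.408 * gr + 0.358 * gr**2
    return v, vi

def is_member(v_pl, v_table2, prob):
    """Keep stars agreeing in V to 0.1 mag with membership prob >= 50%."""
    return abs(v_pl - v_table2) <= 0.1 and prob >= 0.5
\end{verbatim}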
The improvement relative to even Fig. 6 is dramatic. While all the primary features from the core sample CMD remain, a plausible outline
of the subgiant branch and first-ascent giant branch below the clump now emerges. The main sequence is tightly defined, with only a modest
level of scatter to the red, as expected for the band defined by probable binaries extending up to 0.75 mag above the unevolved main sequence.
The unevolved main sequence can now be traced down to the limit of the plot, though it is apparent that some of the background field stars
with proper motions comparable to the cluster still remain near $b-y$ = 0.5, as well as their evolved counterparts in the vertical band between
$b-y$ = 0.7 and 0.8.
\section{Cluster Properties - Reddening and Metallicity}
With the restricted sample of Fig. 7, we can now approach the photometric reddening and metallicity estimate. Reddening on the $uvby$H$\beta$
system is defined by comparison of the predicted $b-y$ color tied to the stellar temperature as derived from the reddening-free H$\beta$
index, adjusted for evolutionary effects and metallicity, to the observed $b-y$ index for each F and early G cluster dwarf. Potential
sources of scatter in the final cluster averages include poor photometry, non-members, composite systems with distorted photometry, and
inadequate correction for the effects of post-main-sequence evolution. As we have done consistently in the past, our approach is to eliminate
any stars which, for any of the reasons noted above, might reduce the reliability or skew the estimate of the final cluster parameters.
The first simple cut is to eliminate all stars brighter than $V$ = 15.75, i.e. stars significantly evolved beyond the main sequence
and/or populating the turnoff in a region likely to be contaminated by the extended binary sequence, and all stars fainter
than $V$ = 17.50, where photometric errors in indices other than $b-y$ begin to grow rapidly with increasing magnitude. An
expanded ($V$, $b-y$) CMD for this sample is shown in Fig. 8.
The delineation of a tight main sequence with a full range of only 0.03 to 0.04 mags in $b-y$ at a given $V$ is encouraging, though some
stars do lie well off the main sequence. To isolate likely single {\it member} stars, we have drawn a mean linear relation through the main sequence and
tagged any star within 0.02 mag at each $V$ as a probable single-star member (open circles). Stars 0.02 to 0.07 mag redder than the mean relation
are defined as probable binaries (crosses); stars at least 0.02 bluer than or $\geq 0.07$ mag redder than the main line
are tagged as probable non-members (filled triangles). This leads to a sample of 382 probable members that are single stars (or binaries with low mass ratios).
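The color-offset classification can be summarized as follows; the mean main-sequence relation is represented by a hypothetical linear fit in $V$.
\begin{verbatim}
def classify(v, by, ms_slope, ms_zero):
    """Classify a star by its b-y offset from the mean main-sequence line."""
    offset = by - (ms_zero + ms_slope * v)   # color residual at this V
    if -0.02 <= offset <= 0.02:
        return "probable single member"
    if 0.02 < offset < 0.07:
        return "probable binary"
    return "probable non-member"   # too blue, or >= 0.07 mag too red
\end{verbatim}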
An immediate question in light of claims for potential reddening variation across the face of the cluster (PL) is whether or not the red
limit eliminates true cluster members with higher than average reddening rather than binaries or non-members. Fortunately, $hk$ photometry
offers an effective resolution. The $hk$ index is designed to supply a metallicity estimate for stars of a given temperature but,
for stars of a given metallicity, it is a strong function of temperature. However, unlike $b-y$, $hk$ is only weakly dependent on reddening,
with $E(hk)$ = -0.16$E(b-y)$. Thus a star which appears too red at a given $V$ in Fig. 8 because it is intrinsically cooler will shift in a
($V, hk$) diagram by an even larger amount. If the star is shifted in $b-y$ due to enhanced reddening, the shift in ($V, hk$) will be
small to negligible and toward the blue, i.e. smaller $hk$. The ($V, hk$) diagram is shown in Fig. 9; symbols have the same meaning as in Fig. 8.
All stars with errors in $hk$ larger than 0.03 mags have been eliminated, reducing the sample by 17 stars, including four stars
classed as singles in Fig. 8. While some stars classed as deviants in Fig. 8 do lie on the main sequence in Fig. 9, the
classifications for the majority are confirmed. Additionally, a half dozen stars below $V$ = 16.9 appear significantly bluer
than expected (filled circles). Based upon H$\beta$ and $m_1$ indices, these are
either metal-deficient background stars or heavily reddened, hotter dwarfs; neither group contains probable cluster members.
Based upon the positions in Fig. 9, we eliminate 12 additional stars which may be binaries or field stars, leaving a sample of 366 stars.
To ensure that only the most precise photometry is included in the analysis, we eliminate all stars with errors in H$\beta$, $m_1$,
and $hk$ larger than 0.02 mag, 0.02 mag, and 0.03 mag, respectively. Finally, color transformations between H$\beta$ and $b-y$
invariably include a correction for evolution, defined via the $c_1$ index, with more evolved stars having larger $c_1$ values than
an unevolved star on the main sequence. While the majority of stars in our restricted sample have measured $c_1$ indices, since we
are extending the data below $V$ = 16.5, the errors in $c_1$ grow rapidly with increasing $V$, as indicated by Fig. 1. To avoid
elimination of a large fraction of the data set, we have chosen to adopt the $c_1$ value at the observed H$\beta$ as defined by
the standard relation as the reddening-corrected $c_1$ index for each star. In short, we have assumed that all stars are
unevolved, in keeping with the restriction imposed by eliminating all stars brighter than $V$ = 15.75.
As in past cluster analyses, use is made of two intrinsic H$\beta$-$(b-y)_0$ relations to define the intrinsic colors. The intrinsic $b-y$
color is derived in iterative fashion, starting with an assumed $E(b-y)$ and metallicity, calculating the cluster metallicity, redefining the
intrinsic color for the calculated metallicity to obtain a new reddening, and repeating the sequence until the change in reddening
is too small to be statistically significant. The first intrinsic color relation, from \citet{OL88}, applies to F stars in the H$\beta$ range from
2.58 to 2.72. From 278 F dwarfs that meet all the criteria, the mean $E(b-y)$ is found to be 0.115 $\pm$ 0.016 (sd). The second
intrinsic color relation is that of \citet{NI88}, a slightly modified version of the original relations derived by \citet{CR75, CR79}
for F and A stars. For the same 278 F dwarfs, the alternate relation implies $E(b-y)$ = 0.119 $\pm$ 0.017 (sd). It should be noted that the
slightly higher reddening for F stars using the \citet{NI88} relation compared to that of \citet{OL88} is a consistent
occurrence from such comparisons \citep{TW06, AT07}. A weighted average of the results leads to $E(b-y)$ = 0.117 $\pm$ 0.002 (sem), or
$E(B-V)$ = 0.160 $\pm$ 0.003 (sem). When combined with the zero-point uncertainties in $b-y$ and H$\beta$, the total uncertainties
in $E(b-y)$ and $E(B-V)$ become $\pm$0.005 mag and $\pm$0.007 mag, respectively. It should be emphasized that the dominant source of
uncertainty in the reddening is the zero-point of the $b-y$ photometry since the $(b-y)$, H$\beta$ relation has a relatively shallow
slope over the color range of interest, as seen in Fig. 11 of \citet{CA11}.
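The iteration just described can be sketched as follows; \texttt{intrinsic\_by} stands in for the \citet{OL88} or \citet{NI88} color calibration and \texttt{metallicity} for the $\delta$$m_1$/$\delta$$hk$ relations, neither of which is reproduced here, and the starting guesses are arbitrary.
\begin{verbatim}
def solve_reddening(by, hbeta, intrinsic_by, metallicity, tol=1e-3):
    """Iterate reddening and metallicity until E(b-y) converges.

    by, hbeta : numpy arrays of indices for the member F dwarfs
    """
    feh, eby = 0.0, 0.1                         # assumed starting guesses
    while True:
        by0 = intrinsic_by(hbeta, feh)          # predicted colors from H-beta
        eby_new = (by - by0).mean()             # cluster-mean color excess
        feh = metallicity(hbeta, by - eby_new)  # abundance at new reddening
        if abs(eby_new - eby) < tol:
            return eby_new, feh
        eby = eby_new
\end{verbatim}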
With the reddening fixed, the next step is the derivation of metallicity, a parameter that can be defined using $hk$ or
$m_1$ coupled to either $b-y$ or H$\beta$ as the primary temperature indicator. In past studies using $uvbyCa$H$\beta$ photometry,
the metallicity from $hk$ tied to H$\beta$ invariably has been given the greatest weight due to the greater sensitivity of $hk$ to
modest metallicity changes, while the H$\beta$-based relations allow decoupling between errors in the two indices and minimize the impact
of potential reddening variations, if any exist. We will follow the same approach with NGC 6819, allowing us to tie our results
directly into the same metallicity scale from past intermediate-band cluster studies.
With $E(b-y)$ = 0.117, the mean $\delta$$m_1$($\beta$) for 282 F dwarf probable members between H$\beta$ = 2.58 and 2.72
is 0.025 $\pm$ 0.001 (sem), where $\delta$$m_1$ = 0.0 is set at the adopted Hyades metallicity of +0.12. On this same scale,
NGC 3680, NGC 5822 and IC 4651 respectively have $\delta$$m_1$ = 0.027 $\pm$ 0.002 (sem) \citep{AT04},
+0.017 $\pm$ 0.003 (sem) \citep{CA11}, and 0.000 $\pm$ 0.002 (sem) \citep{AT00}, implying that NGC 6819 is clearly lower in [Fe/H]
than the Hyades, and almost as deficient as NGC 3680, a somewhat surprising result given the consistent claim that the
cluster is metal-rich, a point we will return to in Sec. 4.2. The $\delta$$m_1$ measure translates to [Fe/H] = -0.116 $\pm$ 0.012 (sem)
on a scale where NGC 3680, NGC 5822, and IC 4651 have [Fe/H] = -0.17, -0.06, and +0.12, respectively. The translation from
index to metallicity is partially dependent upon the mean color/temperature of the sample, which explains the modest
shift in the relative ranking of the clusters in switching from index to [Fe/H]. As an additional reference point,
the photoelectric $uvby$H$\beta$ data of M67 produce [Fe/H] = -0.06 $\pm$ 0.07 \citep{NI87}. Taking into account the uncertainty
in the zero-point of the $m_1$ indices, [Fe/H] = -0.12 $\pm$ 0.10 with internal and external errors combined.
Turning to the $hk$ index for 282 stars, $\delta$$hk$($\beta$) = 0.050 $\pm$ 0.003 (sem), which translates to
[Fe/H] = -0.055 $\pm$ 0.010 (sem), on a scale where [Fe/H] = +0.12, 0.01, and -0.10 for the Hyades, NGC 5822, and NGC 3680.
Taking errors in the $hk$ zero-points into account, [Fe/H] = -0.06 $\pm$ 0.03, internal and external uncertainty combined.
A weighted average of the two metallicity estimates leads to [Fe/H] = -0.06 $\pm$ 0.04, where the errors refer to the
combined internal and external errors from the combined indices.
Before discussing the significance of these results, an obvious question is the impact of variable reddening on our conclusions. For the
[Fe/H] derived from $hk$, the impact is negligible. By definition, H$\beta$ is reddening-independent. If we include a range
of 0.08 in $E(V-I)$ among the stars in the cluster, this translates to a range of 0.007 in $E(hk)$, or a spread of $\sim$0.025 in [Fe/H].
For $m_1$, the greater impact of reddening on the index coupled to a steeper $\delta$$m_1$ - [Fe/H] relation translates the same reddening range to
an [Fe/H] range seven times larger. Fortunately, we can derive the reddening for each star individually, rather than
adopting the cluster mean for all stars, and recalculate the cluster metallicity. If we adopt the photometric reddening estimate from $b-y$, H$\beta$
for each star, the resulting mean [Fe/H] from $m_1$ becomes -0.120
$\pm$ 0.013 while the analogous estimate from $hk$ is [Fe/H] = -0.055 $\pm$ 0.011 (sem), leading to a weighted average [Fe/H] = -0.06 $\pm$ 0.04.
Given the very modest range derived below for $E(b-y)$ and the impact of even small photometric scatter, the lack of a statistically
significant change in the mean abundances is expected.
\subsection{Reddening Variability}
As discussed in Sec. 1, one of the potential sources of parametric scatter for a cluster spread over a field 20\arcmin\ across at a distance of
more than a kiloparsec is variable reddening along the line of sight. Due to the well-defined nature of the CMD at the turnoff plotted
in Fig. 7, we can immediately use the full width of the turnoff in $b-y$ at a fixed $V$ to place
an upper limit of 0.045 mag on the range in $E(b-y)$, 0.060 for $E(B-V)$, without applying any
adjustment for photometric scatter in $b-y$ and/or contamination by a binary sequence. PL have used the $VI$ data of YA for proper-motion
members to define a blue edge in the ($V, V-I$) CMD at the cluster turnoff, the color region where differential reddening effects should
be maximized due to the combined shift in $V$ and $V-I$. The color offset for each star relative to this fiducial relation is adopted
as an estimate of the differential reddening, up to a limit of $\delta$$E(V-I)$ = 0.09, where binaries may begin to dominate. The
positional averages of these values are then used to construct a spatial reddening map (see Fig. 10 of PL), showing that
stars $\geq 2$\arcmin\ east of the cluster center are typically 0.05 to 0.07 mag redder in $E(B-V)$ than the low
reddening region in the southwest quadrant of the cluster, i.e. for stars $\geq 2$\arcmin\ west and $\geq$ 2 \arcmin\ south of the cluster center.
To test this claim, we have used the photometry defining Fig. 7 and isolated two distinct samples. The group expected to be more highly
reddened is composed of all stars $\geq 2$\arcmin\ east of the cluster center while the predicted blue group encompasses stars in the southwest
quadrant, $\geq$ 2\arcmin\ west and south of the cluster center. The CMD for the expanded region near the turnoff is shown in Fig. 10a with
the colors and symbol types indicating which reddening group the star belongs to (blue triangles or red crosses).
The pattern is obvious; there is little doubt that the reddening trend
established by PL is real. We can place a tight constraint on the size of the range between these two regions by shifting the blue stars
appropriately in $b-y$ and $V$ until the CMDs overlap. Fig. 10b shows the impact of adding 0.022 (0.030) mag of reddening in $E(b-y)$ ($E(B-V)$)
to the blue stars of the southwest quadrant. The scatter in the turnoff
is cut in half, reduced almost to what one would expect from photometric scatter alone. Since the mean shift in color between the two
regions should be less than the most extreme variation across the cluster, we estimate that the true spread in $E(B-V)$ lies somewhere
between the 0.03 estimate above and the point-to-point range of 0.06 found by PL or $\Delta$$E(B-V)$ = 0.045 $\pm$ 0.015.
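The overlap test amounts to shifting the southwest-quadrant stars in both color and magnitude for a trial $\Delta$$E(b-y)$; in the sketch below we assume the standard Str\"omgren extinction ratio $A_V \simeq 4.3\,E(b-y)$, and the array names are illustrative.
\begin{verbatim}
def shift_blue_group(v, by, d_eby):
    """Redden the low-extinction (southwest) stars by a trial dE(b-y)."""
    av = 4.3 * d_eby             # assumed ratio A_V ~ 4.3 E(b-y)
    return v + av, by + d_eby    # shifted magnitude and color

# Fig. 10b corresponds to shift_blue_group(v_sw, by_sw, 0.022);
# the turnoff scatter is then roughly halved.
\end{verbatim}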
\subsection{Previous Reddening and Metallicity Estimates}
With the confirmation that NGC 6819 suffers from variable reddening in a range of 0.03 to 0.06 mag in $E(B-V)$, the import of past
reddening estimates is reduced since the values derived will depend in part on where in the cluster the reddening was measured.
A partial summary of derived and adopted reddening estimates for NGC 6819 can be found in \citet{WL14}. With the exception of two early studies
by \citet{AU74} and \citet{LI72}, and those studies which later adopted these flawed estimates \citep{FJ93, TW97}, the range among more
recent analyses is $E(B-V)$ = 0.10 to 0.16. Care must be taken, however, in assigning weight to many of these values. For
example, \citet{KA01} adopts without explanation $E(B-V)$ = 0.10, a value later assumed without explanation by \citet{HO09}.
\citet{AT13} use the mean of the derived literature values between 0.12 and 0.16. \citet{JE13} attempt to
derive the cluster reddening through a differential comparison of the clump stars in M67 and NGC 6819. However, the final value is
built upon the assumption that the clump stars in NGC 6819 have uniform reddening and that NGC 6819 is more metal-rich than M67 by
0.05 dex. \citet{BA13} investigate map-based $A_V$ estimates from the Kepler Input Catalog \citep{BR11} to obtain $E(B-V)$ = 0.189 $\pm$ 0.002
for 564 stars within 12\arcmin\ of the cluster; the mean reddening rises to 0.203 $\pm$ 0.003 for cluster members. However, they
default to 0.15, referring back to RV and the spectroscopic results of \citet{BR01}. \citet{WU14} derive reddening through an isochrone
match to the $BV$ data of \citet{HO09}, arriving at a simultaneous estimate of the reddening, metallicity, and distance with $E(B-V)$
= 0.13 and an unrealistically small uncertainty of $\pm$0.01. In the most recent paper, \citet{WL14} simply default to the mean supplied by
\citet{BR01}. In summary, there are few direct, reliable estimates of the reddening to NGC 6819 which don't presume either uniform
reddening and/or an assumed high metallicity. Only RV derived $E(B-V)$ = 0.16 and [Fe/H] $\sim$-0.05 through isochrone matches to the $BV$ CMD.
\citet{BR01} represents the most cited source for a reddening ($E(B-V)$ = 0.14 $\pm$ 0.04) and metallicity
estimate ([Fe/H] = +0.09 $\pm$ 0.03), but these parameters are defined by 3 red clump giants, one of which is now known to
be a binary \citep{HO09}. \citet{FJ93} derived [Fe/H] = +0.05 from moderate-dispersion spectroscopy, adopting $E(B-V)$ = 0.28 on a scale
where M67 had [Fe/H] = -0.09. The differential was revised by \citet{TW97} using the same excessive reddening value, finding [Fe/H] =
0.07 on a scale where M67 has [Fe/H] = 0.00. If one adopts the correction to [Fe/H] of -0.06 dex for a decline of 0.05 in $E(B-V)$ \citep{FJ93},
a shift from $E(B-V)$ = 0.28 to 0.16 should lower the metallicity of NGC 6819 by 0.14 dex, placing it between the metallicity
of M67 and -0.07 dex lower. However, \citet{FR02} revise the reddening of NGC 6819 to $E(B-V)$ = 0.16 and find [Fe/H] = -0.11 for the cluster
on a scale where M67 is fixed at -0.15. We conclude that NGC 6819 is most probably more metal poor than M67 by about -0.05 dex, rather
than comparable to the Hyades in metallicity.
\section{Summary and Conclusions}
Recently the old open cluster, NGC 6819, has become a high profile object of considerable astrophysical interest due to its location within
the Kepler field. However, apart from this particular circumstance, the cluster still would be invaluable to those focused
on stellar and galactic evolution because of its rich stellar population and an age between 2 and 4 Gyr, a range shared by very few
nearby objects. With the goal of deriving high-dispersion spectroscopic abundances for an extensive
sample of over 300 stars, an intermediate-band photometric program was undertaken to derive the key cluster parameters of
reddening and metallicity as input for estimating individual stellar parameters tied to the cluster age and distance. The need for
such a study is heightened when the literature parameters for the cluster are reviewed; direct reddening estimates
are few in number, low in quality, and/or often dependent upon an assumed high metallicity tied primarily to spectroscopy of 3 red clump
stars \citep{BR01}. Equally important, greater photometric and astrometric scrutiny of the cluster has led to a claim that the
cluster suffers from variable reddening (PL), an effect never included in past derivations of the reddening or metallicity.
Using $uvbyCa$H$\beta$ photometry of a 20\arcmin\ by 20\arcmin\ field of NGC 6819 collected over three years, high precision photometric indices
have been combined with proper-motion membership to isolate a sample of 382 single-star, unevolved F-G dwarfs on the cluster main
sequence. Restricting this sample further to only 278 stars with high precision measures in all indices within the prescribed limits of the photometric
calibrations leads to a mean reddening of $E(b-y)$ = 0.117 $\pm$ 0.005 or $E(B-V)$ = 0.160 $\pm$ 0.007, internal and external uncertainties
included. Somewhat surprisingly, even with the slightly higher than usual reddening, the cluster metallicity, when derived using individual
reddening values for each star or adopting a mean value for all stars, becomes [Fe/H] = -0.06, well below what has become the
canonical value of +0.09. Equally important, the exceptional precision of the ($V, b-y$) photometry supplies a convincing confirmation
that the cluster has higher reddening on the east side and lower reddening in the southwestern quadrant of the field. The minimum range
in variation that we derive as a lower limit (0.03 mag in $E(B-V)$) is somewhat smaller than the maximum range found by PL (0.07 mag), but
this may depend upon the spatial resolution of the mapping and the ability to eliminate the effect of binaries from the main sequence
broadening. We conclude that the range probably lies between 0.03 and 0.06 mag in $E(B-V)$.
We close by estimating the broad impact of these changes on the derivation of the cluster distance and age, leaving a
detailed star-by-star correction for a future paper dealing with the individual spectroscopic abundances \citep{LB14}.
Recent estimates of the cluster distance and age via a variety of techniques typically fall between $(m-M)$ = 12.25 and 12.50 and
1.9 to 2.6 Gyrs, respectively \citep{WL14}. With the exception of RV, the studies regularly used $E(B-V)$ below 0.16 and solar metallicity
or higher, with heavy emphasis on the metal-rich value from \citet{BR01}.
To test the revised reddening and metallicity, we use the $BV$ photometry of RV, cross-identified with PL and selected to only include
members with probabilities above 50\%. We have applied a correction in $V$ to the photometry of RV to adjust for the position-dependent
offset illustrated in Fig. 2. As a quick correction to the variable reddening, we have adjusted all the colors in the eastern
zone discussed in Sec. 4 by -0.015 mag in $B-V$, all the colors in the delineated southwestern zone by +0.015, and left all other stars
unchanged. The resulting CMD is shown in Fig. 11; the main sequence and the giant branch are surprisingly tight given the simple
approach to adjusting the reddening, a confirmation of the small range in reddening across the cluster face. For comparison, we have
superposed a set of $Y^2$ \citep{DE04} isochrones with ages 2.3 (blue, solid line) and 2.5 (red, dashed line) Gyr on the data. The metallicity adopted is [Fe/H] = -0.06,
with $E(B-V)$ = 0.16. The zero-points of the isochrones have been adjusted by -0.03 and +0.008 in $M_V$ and $B-V$, respectively, to
place them on the same system as our consistently adopted solar values. The fit to the data is very good to excellent for an age of 2.3 $\pm$ 0.2
Gyr. The distance modulus is slightly smaller than that found in \citet{AT13} due to the competing effects of raising the reddening, which should
increase the apparent modulus by 0.11 mag, and lowering the metallicity by 0.15 dex, which will decrease the distance modulus by 0.18 mag \citep{TW09}.
Taking into account the estimated uncertainties in the absolute reddening (0.007 mag), the metallicity scale (0.05 dex), and a conservative estimate
of $\pm$0.10 mag in the ideal vertical fit of the isochrones to the data, all else being equal, one arrives at $(m-M)$ = 12.40 $\pm$ 0.12.
It should be noted that this result is virtually identical to the conclusions reached by RV using a different set of isochrones; the slightly
larger age (2.4 Gyr) and smaller distance modulus (12.35) are completely attributable to a bluer adopted solar color to zero their isochrones.
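For transparency, the quoted modulus uncertainty is consistent with adding three error terms in quadrature; the per-term sensitivities below are only inferred from the scalings quoted in this section (0.11 mag per 0.02 mag in $E(B-V)$; 0.18 mag per 0.15 dex in [Fe/H]) and should be treated as approximate.
\begin{verbatim}
import math

err_red = (0.11 / 0.02) * 0.007   # reddening zero-point term (mag)
err_feh = (0.18 / 0.15) * 0.05    # metallicity-scale term (mag)
err_fit = 0.10                    # isochrone vertical-fit term (mag)
total = math.sqrt(err_red**2 + err_feh**2 + err_fit**2)
print(f"(m-M) uncertainty ~ {total:.2f} mag")   # ~0.12
\end{verbatim}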
\acknowledgments
Extensive use was made of the WEBDA database maintained by E. Paunzen at the University of Vienna, Austria (http://www.univie.ac.at/webda).
The filters used in the program were obtained by BJAT and BAT through NSF grant AST-0321247 to the University of Kansas. NSF support for
this project was provided to BJAT and BAT through NSF grant AST-1211621, and to CPD through NSF grant AST-1211699. Observing support by
undergraduate Brian Schafer is gratefully acknowledged.
\section{Introduction}
Since the foundational work of Kodaira-Spencer \cite{9} and Kuranishi \cite{10} on the existence of local moduli space parametrizing all the nearby structures of complex compact manifolds with respect to a given one, many similar existence theorems of such a kind have been proved for various other geometric objects among which we can mention: algebraic varieties \cite{14}, complex compact spaces \cite{6}, isolated singularities \cite{7}, holomorphic vector bundles \cite{12}, etc. This space and the associated family are usually known under the names: Kuranishi space and Kuranishi family, respectively, or sometimes semi-universal deformation for the latter. A natural question to pose is whether the automorphism group of the object under deformation could be lifted to an action on the Kuranishi space, which is compatible with the associated family. This is indeed the case when the associated family is universal. However, if the object in question has non-trivial automorphisms, which often happens in practice, then the family cannot be universal in general.
There are several attempts to answer this question, among which the work of D. S. Rim is outstanding \cite{13}. Namely, he gave an affirmative answer for a large class of local moduli problems (or equivalently for a large class of functors of artinian rings in Schlessinger's language). A vivid corollary of this beautiful work is the existence of an equivariant structure on the semi-universal deformation of projective schemes equipped with linearly reductive actions, unique up to non-canonical equivariant isomorphism. This is all the more striking since a counterexample constructed in \cite{2} or \cite{4} confirms that such an extension can fail, even formally, in the non-reductive case. This is the reason why we shall focus only on reductive group actions.
The main disadvantage of Rim's method is that his constructions are merely formal and work well only in the algebraic world, not in the analytic world, in which we have to deal with deformation problems associated to analytic objects where convergence is naturally needed. This is the principal difference between the two worlds. Therefore, the group extension problem seems even harder on the analytic side. In some specific situations, we can expect to prove the convergence. We can mention the case where the object under deformation is a complex compact manifold equipped with actions of a reductive complex Lie group, for which the main ingredient of the proof is a combination of an equivariant version of Kuranishi's classical construction of local moduli spaces of complex compact manifolds and representations of reductive complex Lie groups (cf. \cite{3}). Inspired by this result, we continue to consider the case where the analytic object under deformation is a holomorphic vector bundle on which a reductive complex Lie group acts holomorphically. It should be noted as well that our main result in this paper is the existence of group operations on the local moduli space for reductive subgroups of the automorphism group of the considered bundle without any further assumption on the bundle (cf. Theorem \ref{t4.1} and Corollary \ref{c4.1}), while N. Buchdahl and G. Schumacher in \cite{1} proved that the whole automorphism group can be lifted to a compatible group action on its local moduli space provided that Kählerianness of the given complex manifold and poly-stability of the holomorphic vector bundle under consideration are assumed (see \cite[Theorem 5]{1}). This suggests once again that the group extension problem is not feasible in general unless some additional hypothesis is imposed on either the group or the considered geometric structure. It would be of great interest if one could find a counterexample in either case.
Let us now outline the organization of this article. First, we give a general description of holomorphic vector bundles and their deformations in $\S$\ref{s2} and $\S$\ref{s3}, respectively. Next, we prove the existence of reductive group actions on Kuranishi spaces of vector bundles in $\S$\ref{s4}. The main techniques are essentially inspired by those in \cite{3}. In $\S$\ref{s5}, we compute the differential graded Lie algebra associated to the deformation problem of holomorphic vector bundles, from which a formal version of reductive group actions on the local moduli space of holomorphic vector bundles, obtained in $\S$\ref{s4}, follows easily. Finally, in $\S$\ref{s6}, we introduce a general philosophy hidden behind our work.
\begin{ackn} We would like to warmly thank Prof. S. Kosarew for many useful discussions and references. We are grateful to Prof. Julien Grivaux for reading the first version of this note and giving several precious comments. Finally, we are especially thankful to the referee, whose dedicated work led to a great amelioration of this paper.
\end{ackn}
\section{Holomorphic vector bundles}\label{s2}
Let $E$ be a differentiable vector bundle of rank $r$ over a compact complex manifold $X$. Let $A^{p,q}(E)$ be the space of $(p,q)$-forms with values in $E$.
\begin{defi} A semi-connection on $E$ is a $\mathbb{C}$-linear map $D:\;A^{0,0}(E) \rightarrow A^{0,1}(E) $ satisfying the Leibniz rule, i.e. $$D(fs)=(\overline{\partial} f)s+f\cdot Ds$$
for $f \in C^{\infty}(X)$ and $s\in A^{0,0}(E) $.
\end{defi}
It is evident that each semi-connection $D$ can be extended to a first order differential operator $D:\;A^{p,q}(E) \rightarrow A^{p,q+1}(E)$. Moreover, if we let $\mathcal{D}(E)$ be the space of semi-connections on $E$, then it is well-known that for a fixed semi-connection $D_0$, $\mathcal{D}(E)$ can be identified with $D_0+ A^{0,1}(\operatorname{End}(E))$, where $\operatorname{End}(E)$ is the space of differentiable endomorphisms of $E$ (inducing the identity on the base manifold), and is thus an affine space.
\begin{defi} A semi-connection $D$ is called a holomorphic structure if
$$D\circ D=0.$$
This condition is called the integrability condition.
\end{defi}
Now, let $\mathcal{H}(E)$ be the subset of $\mathcal{D}(E)$ consisting of holomorphic semi-connections $D$. Then $\mathcal{H}(E)$ is nothing but the set of holomorphic bundle structures on the differentiable complex vector bundle $E$.
If we denote the group of differentiable automorphisms of $E$ (inducing the identity on the base manifold $X$) by $\operatorname{GL}(E)$, then $\operatorname{End}(E)$ can be thought of as the Lie algebra of $\operatorname{GL}(E)$, and an action of $\operatorname{GL}(E)$ on $\mathcal{D}(E)$ is given by $$g.D=g^{-1}\circ D \circ g$$ where $g \in \operatorname{GL}(E)$ and $D\in \mathcal{D}(E)$.
\begin{rem} If $E$ is a holomorphic vector bundle whose holomorphic structure is given by the semi-connection $D$, then $g.D=D$ for any holomorphic automorphism $g$ of $E$.
\end{rem}
\begin{prop}\label{p2.1} Let $D$ be a holomorphic structure on $E$ and $\alpha$, $\alpha_1$, $\alpha_2 \in A^{0,1}(\operatorname{End}(E))$.
\begin{enumerate}
\item[(i)] $D+\alpha$ defines a structure of holomorphic vector bundle on $E$ if and only if
$$\mathcal{P}_{D}(\alpha):= D\alpha +\alpha \wedge \alpha=0$$ where the wedge product $\alpha \wedge \alpha$ is given by the usual wedge product in the form part and the usual composition of endomorphisms in $\operatorname{End}(E)$.
\item[(ii)] Let $g \in \operatorname{GL}(E)$. Then $g.(D+\alpha_2)=D+\alpha_1$ if and only if $$g\circ \alpha_1 -\alpha_2 \circ g -Dg=0.$$
\end{enumerate}
\end{prop}
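For the reader's convenience, part (i) admits a short verification: for $s \in A^{0,0}(E)$, using $D\circ D=0$ and the Leibniz rule,
$$(D+\alpha)^2 s = D(\alpha s)+\alpha\wedge Ds+(\alpha\wedge\alpha)s = (D\alpha)s-\alpha\wedge Ds+\alpha\wedge Ds+(\alpha\wedge\alpha)s = \mathcal{P}_{D}(\alpha)\,s,$$
so the integrability of $D+\alpha$ is equivalent to the vanishing of $\mathcal{P}_{D}(\alpha)$. Part (ii) follows similarly by expanding the condition $g^{-1}\circ(D+\alpha_2)\circ g = D+\alpha_1$.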
\section{Deformation of holomorphic vector bundles}\label{s3}
We first recall some basic definitions in deformation theory of holomorphic vector bundles (cf. \cite{12} for more details). Let $\mathfrak{B}$ be the category of germs of pointed complex spaces $(B,0)$ (a complex space with a reference point) whose associated reduced complex space is a point, and let $X$ be a complex compact manifold. Let $E$ be a holomorphic vector bundle over $X$, whose associated holomorphic semi-connection and underlying differentiable complex vector bundle will be denoted by $D_E$ and $\underline{E}$, respectively. We fix once and for all a sufficiently large integer $k$. Consider the Hilbert space $A^{0,1}(\operatorname{End}(E))_k$ (resp. $A^{0,2}(\operatorname{End}(E))_{k-1}$) obtained by completing $A^{0,1}(\operatorname{End}(E))$ (resp. $A^{0,2}(\operatorname{End}(E))$) with respect to the Sobolev $k$-norm (resp. $(k-1)$-norm). We have an induced analytic map coming from Proposition \ref{p2.1}
\begin{align*}
\mathcal{P}_{D_E} :A^{0,1}(\operatorname{End}(E))_k &\rightarrow A^{0,2}(\operatorname{End}(E))_{k-1}\\
\alpha &\mapsto D_E\alpha +\alpha \wedge \alpha
\end{align*} which then gives a germ of a Banach analytic space $(\mathcal{P}_{D_E}^{-1}(0), 0)$.
\begin{defi}
\begin{enumerate} \label{d3.1}
\item[(i)] \label{d3.1i} A local deformation of $E$ over a germ of pointed complex spaces $(B,0)$ is a pair $(\pi,(B,0))$ where $\pi$ is a holomorphic map from $(B,0)$ to the germ of Banach analytic spaces $(\mathcal{P}_{D_E}^{-1}(0), 0)$ which is of class $C^{\infty}$ on $X\times D$, where $D$ is the ambient space of $(B,0)$.
\item[(ii)]\label{d3.1ii} Two local deformations $(\pi,(B,0))$ and $(\sigma,(B,0))$ of $E$ are equivalent if there exists an analytic map $$\rho:\;(B,0) \rightarrow (\operatorname{GL}(E)_{k+1},\mathrm{Id}_E)$$ which is of class $C^{\infty}$ such that
$$\rho(t)\circ \pi(t) -\sigma(t) \circ \rho(t) -D\rho(t)=0 $$
in $A^{0,1}(\operatorname{End}(E))$.
\end{enumerate}
\end{defi}
\begin{rem}In other words, if $(\pi,(B,0))$ is a local deformation of $E$, then we obtain a family of holomorphic vector bundles $\lbrace E_{\pi(b)} \rbrace_{b \in B}$ varying holomorphically in $b$. Here, $E_{\pi(b)}$ is the differentiable complex vector bundle $\underline{E}$ equipped with the holomorphic structure $D_{E}+\pi(b)$. In addition, it defines a holomorphic vector bundle $\mathcal{E}_{B} \rightarrow X\times B$ whose restriction to $X\times \lbrace 0\rbrace$ is nothing but the initial holomorphic vector bundle $E$. We call $\mathcal{E}_{B} \rightarrow X\times B$ the bundle associated to the local deformation $(\pi,(B,0))$.
\end{rem}
If $(\pi, (B,0))$ is a local deformation of $E$ and $f:\;(S,0)\rightarrow (B,0)$ is an analytic map of germs of complex spaces then the pullback of $(\pi, (B,0))$ by $f$ is defined to be the local deformation $(\pi\circ f,(S,0))$, which we shall denote by $f^*(\pi,(B,0))$.
\begin{defi} A local deformation $(\pi,(B,0))$ of $E$ is semi-universal if any other local deformation $(\rho,(S,0))$ of $E$ is the pullback of $(\pi,(B,0))$ under some holomorphic map from $(S,0)$ to $(B,0)$, whose differential at the reference point is unique.
\end{defi}
The following fundamental theorem is essentially due to M. Kuranishi (\cite[Theorem 1]{12}).
\begin{thm} Let $E$ be a holomorphic vector bundle defined over a compact complex manifold $X$. Then there exists a semi-universal local deformation of $E$, unique up to non-canonical isomorphisms.
\end{thm}
Next, let us take a moment to recall the definition of group actions on complex spaces. For the sake of completeness, we first recall that a mapping $\alpha$ from a real analytic (resp. complex) manifold $W$ to a Fréchet space $F$ over $\mathbb{C}$ is called \textit{real analytic} (resp. \textit{holomorphic}) if for each point $w_0\in W$ there exist an open coordinate neighborhood $N_{w_0}$ and a real analytic (resp. holomorphic) coordinate system $t_1,\ldots,t_n $ in $N_{w_0}$ such that $t_i(w_0)=0$ and for all $w\in N_{w_0}$, we have
that $$\alpha(w)=\sum a_{i_1,\ldots,i_n}t_1^{i_1}(w)\ldots t_n^{i_n}(w) $$ where $a_{i_1,\ldots,i_n} \in F$ and the convergence is absolute with respect to any continuous semi-norm on $F$. Furthermore, by a $C^p$-map, we mean a $p$-times continuously differentiable map. Let $G$ be a real (resp. complex) Lie group and $X$ a complex space. A $G$-action on $X$ is given by a group homomorphism $\Phi:\; G \rightarrow \operatorname{Aut}(X)$, where $\operatorname{Aut}(X)$ is the group of biholomorphisms of $X$.
\begin{defi} The $G$-action determined by $\Phi$ is said to be real analytic (resp. holomorphic) if for each open relatively compact $U \Subset X$ and for each open $V\subset X$, the following conditions are satisfied:
\begin{enumerate}
\item[(i)]$W:=W_{\overline{U},V}:=\lbrace g\in G \mid g\cdot \overline{U}\subset V \rbrace $ is open in $G$,
\item[(ii)]the map \begin{align*}
*:W&\rightarrow \mathcal{O}(U)\\
g &\mapsto f\circ g\mid_U
\end{align*} is real analytic (resp. holomorphic) for all $f\in \mathcal{O}(V)$,
\end{enumerate}
where $\overline{U}$ is the closure of $U$ and $\mathcal{O}(P)$ is the set of holomorphic functions on $P$ for any open subset $P$ of $X$ ($\mathcal{O}(P)$ is equipped with the canonical Fréchet topology).
\end{defi}
Finally, it is time for us to introduce $G$-equivariant deformations, which are the central interest of this article. As before, let $X$ be a compact complex manifold over which a holomorphic vector bundle $E$ is defined. Let $G$ be a subgroup of the group of holomorphic automorphisms of $E$.
\begin{defi} \label{d3.4}
A real analytic (resp.\ holomorphic) $G$-equivariant local deformation of $E$ is a usual local deformation $(\pi,(B,0))$ of $E$ whose associated bundle $\mathcal{E}_B$ can be equipped with a real analytic (resp.\ holomorphic) $G$-action extending the given $G$-action on $E$, and a real analytic (resp.\ holomorphic) $G$-action on $B$, in such a way that the projection $\mathcal{E}_{B} \rightarrow X\times B$ is a $G$-equivariant map with respect to these actions. We call these extended actions a real analytic (resp.\ holomorphic) $G$-equivariant structure on $\mathcal{E}_{B} \rightarrow X\times B$.
\end{defi}
\begin{rem} We make the following convention. Whenever we have a $G$-action on $(B,0)$, the $G$-action on $X\times B$ in Definition \ref{d3.4} is defined by $$g(x,b)=(x,g.b)$$ for $g \in G$ and $(x,b)\in X\times B$. This is exactly the action with respect to which we want the projection $\mathcal{E}_{B} \rightarrow X\times B$ to be $G$-equivariant. Moreover, the restriction of the $G$-action on $\mathcal{E}_B$ to the central fiber $\mathcal{E}_{B,0}$ is nothing but the initial $G$-action on $E$.
\end{rem}
\hfill \break
\begin{center}
\begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1,xscale=1]
\draw (71,235.5) -- (269,235.5) -- (472,235.5) ;
\draw (119,11) -- (421,11) -- (421,151) -- (119,151) -- cycle ;
\draw (351,11) -- (351,151) ;
\draw (270,11) -- (270,151) ;
\draw (191,11) -- (191,151) ;
\draw [line width=1] (195,92) .. controls (196.98,125.66) and (304.32,113.26) .. (345.28,83.89) ;
\draw [shift={(346.5,83)}, rotate = 503.13] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=1] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=1] (195,245) .. controls (193.01,271.89) and (240.52,293.78) .. (342.95,246.71) ;
\draw [shift={(344.5,246)}, rotate = 515.12] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=1] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw[line width=1] [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ] (268,127) .. controls (213.55,157.69) and (209.09,30.58) .. (264.31,53.27) ;
\draw [shift={(266,54)}, rotate = 204.52] [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][line width=1] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [line width=1][color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ] (267,232.5) .. controls (200.33,165.83) and (340.59,165.5) .. (276.97,231.5) ;
\draw [shift={(276,232.5)}, rotate = 314.57] [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][line width=1] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw (172.75,42.15) node [anchor=north west][inner sep=0.75pt] {$\mathcal{E}_{s}$};
\draw (220.75,18.15) node [anchor=north west][inner sep=0.75pt] {$\mathcal{E}_{0} =E$};
\draw (355.75,41.15) node [anchor=north west][inner sep=0.75pt] {$\mathcal{E}_{gs}$};
\draw (444.75,22.15) node [anchor=north west][inner sep=0.75pt] {$\mathcal{E}$};
\draw (184.75,240.15) node [anchor=north west][inner sep=0.75pt] {$s$};
\draw (432.75,210.15) node [anchor=north west][inner sep=0.75pt] {$( S,0)$};
\draw (266.75,240.15) node [anchor=north west][inner sep=0.75pt] {$0$};
\draw (273.25,95.15) node [anchor=north west][inner sep=0.75pt] {$\cong $};
\draw (273.25,110.15) node [anchor=north west][inner sep=0.75pt] {$g$};
\draw (186.75,232.15) node [anchor=north west][inner sep=0.75pt] {$\bullet $};
\draw (346.75,232.15) node [anchor=north west][inner sep=0.75pt] {$\bullet $};
\draw (266.75,232.15) node [anchor=north west][inner sep=0.75pt] {$\bullet $};
\draw (347.25,240.15) node [anchor=north west][inner sep=0.75pt] {$gs$};
\draw (228.75,73.15) node [anchor=north west][inner sep=0.75pt] {$g$};
\draw (209.75,73.15) node [anchor=north west][inner sep=0.75pt] {$\cong $};
\draw (255.75,275.15) node [anchor=north west][inner sep=0.75pt] {$g$};
\draw (264.75,190.15) node [anchor=north west][inner sep=0.75pt] {$g$};
\end{tikzpicture}
\end{center}
As stated above, the main goal of this paper is to construct a real analytic (resp.\ holomorphic) $G$-equivariant semi-universal local deformation of a complex vector bundle with a real analytic (resp.\ holomorphic) $G$-action. Intuitively, the expected extended $G$-action on the ``Kuranishi space" permutes the nearby holomorphic structures and keeps the central one untouched.
\begin{rem}
For simplicity, by $G$-actions (resp.\ $G$-equivariant local deformations), we really mean real analytic $G$-actions (resp.\ real analytic $G$-equivariant local deformations).
\end{rem}
\section{Existence of group operations on local moduli spaces}\label{s4}
In this section, we shall follow strictly the construction given in \cite{12}, into which the $G$-action can be naturally integrated along the way. As usual, let $X$ be a compact complex manifold over which a holomorphic vector bundle $E$ is defined and $G$ a subgroup of the group of holomorphic automorphisms of $E$. The case where $G$ is a compact Lie group shall be treated first. It should be noted that $G$ induces a holomorphic $G$-action on the bundle $\operatorname{End}(E)$ given by
\begin{equation}\label{e4.1} g.\sigma =g^{-1}\circ \sigma \circ g
\end{equation} for $g \in G$ and $\sigma \in \operatorname{End}(E)$. Consequently, we obtain natural $G$-actions on $A^{0,1}(\operatorname{End}(E))$, $A^{0,2}(\operatorname{End}(E))$ and then on $H^{1}(X,\operatorname{End}(E))$. The compactness of $G$ permits us to impose a $G$-invariant Hermitian metric on $\operatorname{End}(E)$, by the unitary trick. This metric induces a $G$-invariant metric on $A^{0,1}(\operatorname{End}(E))$ with respect to which a formal adjoint $$D_{E}^*: A^{p,q}(\operatorname{End}(E)) \rightarrow A^{p,q-1}(\operatorname{End}(E)) $$ of $$D_{E}:A^{p,q-1}(\operatorname{End}(E)) \rightarrow A^{p,q}(\operatorname{End}(E))$$ is provided. Since the $G$-action is holomorphic, the semi-connection $D_E$ is a $G$-equivariant differential operator. The equivariance of $D_{E}^*$ follows from that of $D_E$ and from the $G$-invariance of the imposed metric. As a matter of fact, the Laplace-Beltrami operator associated to $\operatorname{End}(E)$, $$\square_{E}:= D_{E}^* D_{E}+D_{E}D_{E}^*$$ is $G$-equivariant as well. Moreover, the principal part of $\square_{E}$ coincides with that of the usual Laplace-Beltrami operator. The latter is well known to be a strongly elliptic self-adjoint operator of second order. Hence, so is $\square_{E}$. This is where we can make use of the Hodge theory for the bundle $\operatorname{End}(E)$. Namely, if we denote the space of harmonic $(0,1)$-forms with coefficients in $\operatorname{End}(E)$ by $\mathcal{H}^{0,1}$, then $\mathcal{H}^{0,1}$ can be naturally identified with the first cohomology $H^1(X,\operatorname{End}(E))$ and the following orthogonal decomposition is available
\begin{align*}
A^{0,1}(\operatorname{End}(E)) &=\mathcal{H}^{0,1}\oplus \square_{E} A^{0,1}(\operatorname{End}(E)) \\
&= \mathcal{H}^{0,1}\oplus D_E A^{0,0}(\operatorname{End}(E))\oplus D_E^* A^{0,2}(\operatorname{End}(E))
\end{align*} together with two linear operators:
\begin{enumerate}
\item[(i)] The Green operator: $\mathcal{G}:\; A^{0,1}(\operatorname{End}(E)) \rightarrow \square_{E} A^{0,1}(\operatorname{End}(E))$,
\item[(ii)] The harmonic projection: $\mathrm{P}_{\mathcal{H}^{0,1}}:\; A^{0,1}(\operatorname{End}(E)) \rightarrow \mathcal{H}^{0,1}$
\end{enumerate} such that \begin{equation} \label{e4.2} \mathrm{Id}_{A^{0,1}(\operatorname{End}(E))}= \mathrm{P}_{\mathcal{H}^{0,1}}+\square_{E}\mathcal{G}.
\end{equation}
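The splitting (\ref{e4.2}) can be mimicked by a finite-dimensional toy computation, in which the Laplacian is replaced by an arbitrary symmetric positive semi-definite matrix, the Green operator by its pseudo-inverse and the harmonic projection by the projection onto its kernel. The following Python sketch is only a sanity check of the algebra behind $\mathrm{Id}= \mathrm{P}+\square\mathcal{G}$, not a model of the actual analytic situation.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((7, 10))
L = B.T @ B                   # symmetric PSD "Laplacian" with a kernel
G = np.linalg.pinv(L)         # finite-dimensional "Green operator"
P = np.eye(10) - L @ G        # "harmonic projection" onto ker L

assert np.allclose(P + L @ G, np.eye(10))                 # Id = P + Box.G
assert np.allclose(L @ P, np.zeros((10, 10)), atol=1e-8)  # P maps into ker L
\end{verbatim}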
\begin{lem} The operators $\mathrm{P}_{\mathcal{H}^{0,1}}$ and $\mathcal{G}$ are $G$-equivariant.
\end{lem}
Consider the map
\begin{align*}
\mathcal{P}_{D_E} :A^{0,1}(\operatorname{End}(E))_k &\rightarrow A^{0,2}(\operatorname{End}(E))_{k-1}\\
\alpha &\mapsto D_E.\alpha +\alpha \wedge \alpha
\end{align*} defined in the previous section.
\begin{lem} $\mathcal{P}_{D_E}$ is $G$-equivariant.
\end{lem}
\begin{proof} Indeed, it suffices to prove that for $g\in G$ and $\alpha \in A^{0,1}(\operatorname{End}(E))$, we have
$$g.(\alpha\wedge \alpha)=g.\alpha \wedge g.\alpha.$$
However, this follows immediately from the fact that $G$ acts trivially on the form part and acts on the endomorphism part by the rule (\ref{e4.1}).
\end{proof}
Now, we are ready to state our main result.
\begin{thm} \label{t4.1} Let $X$ be a compact complex manifold over which a holomorphic vector bundle $E$ is defined. Let $G$ be a compact real Lie subgroup of the automorphism group of $E$. Then there exists a real analytic $G$-equivariant semi-universal local deformation of $E$.
\end{thm}
\begin{proof} We consider the following set
$$Q:= \lbrace \alpha \in A^{0,1}(\operatorname{End}(E))_k\vert \left \| \alpha \right \|_k < \epsilon,\; D_E^*\alpha=0,\; D_E^*\circ \mathcal{P}_{D_E}(\alpha)=0 \rbrace .$$ Each $\alpha \in Q$ is a solution of the elliptic partial differential equation
\begin{equation}\label{e4.3}\square_{E}\alpha + D_E^*(\alpha\wedge \alpha)=0.
\end{equation} Thus, by elliptic regularity, $Q$ is actually a subset of $A^{0,1}(\operatorname{End}(E))$. We claim further that $Q$ is a finite-dimensional direct submanifold of an open neighborhood of $0$ in $ A^{0,1}(\operatorname{End}(E))_k$. Indeed, let us take into account the following auxiliary analytic map
\begin{align*}
\gamma :A^{0,1}(\operatorname{End}(E))_k &\rightarrow \mathcal{H}^{0,1}\oplus D_E^* A^{0,1}(\operatorname{End}(E))_{k} \oplus D_E^* A^{0,2}(\operatorname{End}(E))_{k-1}\\
\alpha &\mapsto \left( \mathrm{P}_{\mathcal{H}^{0,1}}(\alpha),D_E^*\alpha,D_E^*\circ \mathcal{P}_{D_E}(\alpha) \right)
\end{align*} The differential of $\gamma$ at $0$ is $$d\gamma_0 =(\mathrm{P}_{\mathcal{H}^{0,1}},D_E^*,D_E^*D_E)$$ whose inverse can be explicitly given by
$$(d\gamma_0)^{-1}(\alpha_0,\alpha_1,\alpha_2)=\alpha_0+\mathcal{G}D_E(\alpha_1)+\mathcal{G}(\alpha_2),$$ so that $\gamma$ is a local analytic isomorphism around $0$ due to the inverse mapping theorem for Banach manifolds. Therefore, locally, $$Q=\gamma^{-1}(\mathcal{H}^{0,1}\times 0 \times 0)$$ where $\mathcal{H}^{0,1}$ is known to be a finite-dimensional vector space over $\mathbb{C}$. This justifies the claim for $\epsilon$ small enough. Now, on one hand, note that since each component of $\gamma$ is $G$-equivariant, so is $\gamma$. On the other hand, $\mathcal{H}^{0,1}\times 0 \times 0$ is $G$-invariant. Thus, after shrinking $Q$ (if necessary), we obtain a $G$-action on $Q$. By the $G$-equivariance of $\mathcal{P}_{D_E}$, the germ of Banach analytic spaces $(\mathcal{P}_{D_E}^{-1}(0), 0)$ carries a $G$-action as well. Let $T =Q \cap \mathcal{P}_{D_E}^{-1}(0)$. Then the germ of complex spaces $(T,0)$ and the inclusion
\begin{equation}\label{e4.4}\omega:\; (T,0) \rightarrow (\mathcal{P}_{D_E}^{-1}(0),0)
\end{equation} will determine a semi-universal local deformation $\mathcal{E} \rightarrow X\times (T,0)$ of $E$. This can be carried out in a similar way as in the rest of the proof of \cite[Theorem 1]{12}. What is new here is the fact that the analytic map $\omega$ is further $G$-equivariant.
Now, we would like to equip $\mathcal{E}$ with a compatible $G$-action so that the local deformation $\mathcal{E} \rightarrow X\times (T,0)$ becomes $G$-equivariant in the sense of Definition \ref{d3.4}. First, as a complex space, the bundle $\mathcal{E}$ is nothing but $\underline{E} \times T$ equipped with the complex structure induced by the family of holomorphic semi-connections $\omega$. In addition, each fiber $\mathcal{E}_{s}$ is exactly $\underline{E}$ equipped with a holomorphic vector bundle structure $\omega(s)$, and in particular with a complex manifold structure $J_s$ on the differentiable manifold $\underline{E}$. Thus, the $G$-equivariance of $\omega$ implies that \begin{equation} \label{e4.5} dg.J_s =J_{gs}.dg \end{equation} where $g\in G$ and $dg$ is the differential of $g$. At this point, it should be noted that $g$ is a biholomorphism with respect to the complex structure $J_0$ of the initial holomorphic bundle $E$, and that elsewhere we think of $g$ just as a diffeomorphism of $\underline{E}$. These discussions allow us to define the following $G$-action on $\underline{E} \times T$:
\begin{align*}
g :\underline{E} \times T &\rightarrow \underline{E} \times T\\
(e,t) &\mapsto (ge,gt)
\end{align*}
for each $g \in G$. We claim that the diffeomorphism $g$ defined in this way is actually a biholomorphism of $\mathcal{E}$. This amounts to verifying that the differential of $g$ at the point $(e,t)$ $$dg_{(e,t)}: \;\mathcal{T}_{(e,t)}^{\text{Zar}}\mathcal{E}=\mathcal{T}_e\underline{E}\oplus \mathcal{T}_t^{\text{Zar}} T \rightarrow \mathcal{T}_{g.(e,t)}^{\text{Zar}}\mathcal{E}=\mathcal{T}_{g.e}\underline{E}\oplus \mathcal{T}_{gt}^{\text{Zar}}T $$ is $\mathbb{C}$-linear, where for each complex space $S$ and for each point $s \in S$, $\mathcal{T}_s^{\text{Zar}}S$ denotes the Zariski tangent space of $S$ at $s$. On one hand, $dg_{(e,t)}=(dg_e,dg_t)$ is diagonal. On the other hand, $G$ acts on $T$ by biholomorphisms. Therefore, it reduces to checking that $$dg_e:\;(\mathcal{T}_e\underline{E},J_{e,t}) \rightarrow (\mathcal{T}_{ge}\underline{E},J_{ge,gt})$$ is $\mathbb{C}$-linear. However, this follows immediately from (\ref{e4.5}), the fact that $g$ is biholomorphic on the central fiber, and \cite[Lemma 3.1]{3}. In brief, we have just defined a compatible $G$-action on $\mathcal{E}$ by biholomorphisms, satisfying all the conditions given in Definition \ref{d3.4}.
\end{proof}
\begin{rem} A local chart of $Q$ is given by the harmonic projection $$\mathrm{P}_{\mathcal{H}^{0,1}}:\; Q \rightarrow \mathcal{H}^{0,1}$$ whose target can be identified with $\mathbb{C}^{\dim_{\mathbb{C}} \mathcal{H}^{0,1}}$. This map turns out to be the restriction of the usual Kuranishi map on $Q$
\begin{align*}
\mathcal{K} :Q \subset A^{0,1}(\operatorname{End}(E))_k &\rightarrow A^{0,1}(\operatorname{End}(E))_k\\
\alpha &\mapsto \alpha +\frac{1}{2}D_E^* \mathcal{G}[\alpha,\alpha],
\end{align*} by the definition of $\mathcal{P}_{D_E}$ and by the decomposition (\ref{e4.2}).
\end{rem}
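The claimed equality $\mathcal{K}\vert_Q=\mathrm{P}_{\mathcal{H}^{0,1}}$ can be verified in a few lines, using the standard fact that $\mathcal{G}$ commutes with $D_E$ and $D_E^*$, together with $[\alpha,\alpha]=2\,\alpha\wedge\alpha$ for $\alpha$ of degree one (cf. the bracket (\ref{e5.2}) in $\S$\ref{s5}). For $\alpha \in Q$, the condition $D_E^*\alpha=0$ kills the term $D_ED_E^*\mathcal{G}\alpha$, while $D_E^*\circ \mathcal{P}_{D_E}(\alpha)=0$ gives $D_E^*D_E\alpha=-D_E^*(\alpha\wedge\alpha)=-\frac{1}{2}D_E^*[\alpha,\alpha]$. Hence, by (\ref{e4.2}),
$$\alpha=\mathrm{P}_{\mathcal{H}^{0,1}}(\alpha)+\square_{E}\mathcal{G}\alpha=\mathrm{P}_{\mathcal{H}^{0,1}}(\alpha)+\mathcal{G}D_E^*D_E\alpha=\mathrm{P}_{\mathcal{H}^{0,1}}(\alpha)-\frac{1}{2}D_E^*\mathcal{G}[\alpha,\alpha],$$
so that indeed $\mathcal{K}(\alpha)=\mathrm{P}_{\mathcal{H}^{0,1}}(\alpha)$ on $Q$.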
\begin{coro}\label{c4.1}Let $X$ be a compact complex manifold over which a holomorphic vector bundle $E$ is defined. Let $G$ be a complex reductive Lie subgroup of the automorphism group of $E$. Then there exists a holomorphic $G$-equivariant semi-universal local deformation of $E$ where the extended holomorphic $G$-actions are local.
\end{coro}
\begin{proof} Let $K$ be a connected maximal compact subgroup of $G$ whose complexification is $G$. We repeat the proof of Theorem \ref{t4.1} to obtain a $K$-equivariant analytic map of germs of (Banach) analytic spaces $$\omega:\; (T,0) \rightarrow (\mathcal{P}_{D_E}^{-1}(0),0).$$ By \cite[Theorem 5.1]{3}, we can equip $(T,0)$ and $(\mathcal{P}_{D_E}^{-1}(0),0)$ with local $G$-actions (extending the initial $K$-actions), with respect to which the map $\omega$ is $G$-equivariant. Finally, we can use the same argument as in the proof of Theorem \ref{t4.1} to construct a holomorphic $G$-equivariant semi-universal local deformation $\mathcal{E}$ of $E$ where the extended holomorphic $G$-actions are local.
\end{proof}
\begin{rem} The extended $G$-actions constructed in the proof of Theorem \ref{t4.1} are global while those in that of Corollary \ref{c4.1} are only local.
\end{rem}
Corollary \ref{c4.1} tells us that if the automorphism group $\operatorname{Aut}(E)$ of a holomorphic vector bundle $E$ is reductive, then $\operatorname{Aut}(E)$ acts holomorphically on the base $(T,0)$ of its semi-universal local deformation. A natural question arising from this story is to describe the local structure of the moduli space in terms of Kuranishi spaces. More precisely, if we think of $E$ as a point in the ``moduli space" $\mathcal{M}(\underline{E})$ of holomorphic complex bundle structures on $\underline{E}$, it would be interesting to know whether a neighborhood of $E$ in $\mathcal{M}(\underline{E})$ can be modeled on the quotient $T/\operatorname{Aut}(E)$ in some sense. We refer the curious reader to the papers \cite{16}, \cite{17} of F. Catanese and \cite{18} of L. Meersseman for an analogous discussion, in which the Teichm\"{u}ller space and the Kuranishi space of complex structures on a given differentiable manifold are taken into account.
\section{The associated differential graded Lie algebra} \label{s5} Over a field of characteristic zero, a well-known theorem of Lurie in \cite{11} (obtained independently by J. Pridham in \cite{20}) claims that any reasonable moduli problem is controlled by a differential graded Lie algebra (dgLa). The philosophy hidden behind this theorem is often credited to many big names in the domain: P. Deligne and V. Drinfeld first and foremost,
then M. Kontsevich, J. Stasheff, M. Schlessinger, S. Barannikov, V. Schechtman, V. Hinich, M. Manetti. In this approach, given an object $X$ of which one wishes to study small variations (complex compact manifolds, algebraic schemes, vector bundles, isolated singularities, etc), the philosophy suggests that there exists a dgLa $\mathfrak{g}_X$ governing deformations of $X$ in the sense that the deformation functor which to each local Artin algebra $A$ associates the set of Maurer-Cartan solutions modulo the gauge action (defined by means of $\mathfrak{g}_X$), is isomorphic to the set of isomorphism classes of deformations of $X$ over $\operatorname{Spec}(A)$. One of the classic illustrations of this phenomenon is when $X$ is a complex compact manifold. In this case, the controlling dgLa is nothing but the Dolbeault complex with values in the holomorphic tangent bundle of $X$. This allows us to transform a purely geometric problem to a purely algebraic one in view that the associated dgLa gives almost all information about the initial local moduli problem: its $0^{\text{th}}$, $1^{\text{st}}$ and $2^{\text{nd}}$ cohomology groups are nothing but the space of infinitesimal automorphisms, that of first order infinitesimal deformations (or equivalently, the tangent space) and that of obstructions to the (formal) smoothness, respectively. For more historical details about this direction, the interested reader is referred to the exceptionally beautiful seminar paper \cite{19} of B. To\"{e}n.
Continuing in this spirit, in this section we first translate the deformation problem of vector bundles in $\S$\ref{s3} into the language of functors of artinian rings and then compute the dgLa associated to the local moduli problem of a holomorphic vector bundle. Hence, a formal version of Corollary \ref{c4.1} follows immediately from the mechanism that we developed in \cite{5}. Let us first recall some standard conventions:
\begin{enumerate}
\item $\operatorname{\textbf{Set}}$ is the category of sets.
\item $\operatorname{\textbf{Grp}}$ is the category of groups.
\item $\operatorname{\bf Art _{\mathbb{C}}}$ is the category of local artinian $\mathbb{C}$-algebras with residue field $\mathbb{C}$. For each $A \in \operatorname{\bf Art _{\mathbb{C}}}$, we denote its associated germ of complex spaces and its maximal ideal by $\mathrm{Spec}(A)$ and $\mathfrak{m}_A$, respectively.
\end{enumerate}
In the language of artinian rings, Definition \ref{d3.1} can be read as follows.
\begin{defi} \begin{enumerate}
\item[(i)] \label{d5.1i} A deformation of $E$ over $A \in \mathrm{\operatorname{\bf Art _{\mathbb{C}}}}$ is a section $\alpha \in A^{0,1}(\operatorname{End}(E)) \otimes \mathfrak{m}_A$ such that \begin{equation}D_E.\alpha +\alpha \wedge \alpha =0
\end{equation}
in $ A^{0,2}(\operatorname{End}(E))\otimes\mathfrak{m}_A$.
\item[(ii)]\label{d5.1ii} Two deformations $\alpha_1, \alpha_2$ of $E$ over $A$ are equivalent if there exists a section $$\rho \in A^{0,0}(\operatorname{GL}(E))\otimes \mathfrak{m}_A $$ inducing the identity section on $E$ such that $$\rho^{-1}\circ(D_E+\alpha_2)\circ \rho=D_E+\alpha_1.$$
\end{enumerate}
\end{defi}
Now, consider the Dolbeault complex with values in the endomorphism bundle $\operatorname{End}(E)$ of $E$ $$A^{0,0}(\operatorname{End}(E)) \overset{D_E}{\longrightarrow} A^{0,1}(\operatorname{End}(E)) \overset{D_E}{\longrightarrow} A^{0,2}(\operatorname{End}(E))\overset{D_E}{\longrightarrow}\cdots\overset{D_E}{\longrightarrow}A^{0,n}(\operatorname{End}(E))$$ which can be further equipped with a graded Lie structure by using the following Lie bracket
\begin{equation}\label{e5.2}[\phi\, d\bar{z}_I, \psi\, d\bar{z}_J]=\phi\circ\psi \;d\bar{z}_I \wedge d\bar{z}_J-(-1)^{\left | I \right |.\left | J \right |}\, \psi\circ \phi \;d\bar{z}_J \wedge d\bar{z}_I \end{equation} where $n:=\dim{X}$, $I,J\subset \lbrace 1,\ldots, n \rbrace$ and $z_1,\ldots, z_n$ are local holomorphic coordinates. We denote this dgLa by $\mathfrak{g}_{*}$. Observe that the relation (\ref{d5.1i}) becomes $$D_E\alpha+\frac{1}{2}[\alpha,\alpha]=0$$ which is in the form of a Maurer-Cartan equation. As a matter of fact, the functor of artinian rings corresponding to the local moduli problem of $E$ is given by
\begin{align*}
\operatorname{Def}_E :\operatorname{\bf Art _{\mathbb{C}}} &\rightarrow \operatorname{\textbf{Set}} \\
A &\mapsto \left \{\alpha \in A^{0,1}(\operatorname{End}(E))\otimes \mathfrak{m}_A \mid D_E\alpha+\frac{1}{2}[\alpha,\alpha]=0 \right \}/\sim
\end{align*} where the equivalence relation $\sim$ is given in Definition \ref{d5.1ii}(ii).
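The half-factor in the Maurer-Cartan form of the equation is accounted for by the following computation: writing $\alpha=\sum_i \phi_i\, d\bar{z}_i \in A^{0,1}(\operatorname{End}(E))$ locally, the bracket (\ref{e5.2}) yields
$$[\alpha,\alpha]=\sum_{i,j}\left(\phi_i\circ\phi_j\, d\bar{z}_i\wedge d\bar{z}_j-\phi_j\circ\phi_i\, d\bar{z}_i\wedge d\bar{z}_j\right)=2\sum_{i,j}\phi_i\circ\phi_j\, d\bar{z}_i\wedge d\bar{z}_j=2\,\alpha\wedge\alpha,$$
after relabelling the indices in the second sum, so the integrability condition of the relation (\ref{d5.1i}) coincides with the Maurer-Cartan equation $D_E\alpha+\frac{1}{2}[\alpha,\alpha]=0$.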
Finally, for completeness, we recall the classical deformation functor $\mathbf{\mathrm{MC}}_{\mathfrak{g}_*}$ associated to $\mathfrak{g}_*$, defined via the Maurer-Cartan equation. We have two functors:
\begin{itemize}
\item[(1)] The Gauge functor
\begin{align*}
G_{\mathfrak{g}_*}:\; \operatorname{\bf Art _{\mathbb{C}}} & \rightarrow \operatorname{\textbf{Grp}} \\
A &\mapsto \mathrm{exp}(\mathfrak{g}_0\otimes \mathfrak{m}_A)
\end{align*}
\item[(2)] The Maurer-Cartan functor $MC_{\mathfrak{g}_*}:\; \operatorname{\bf Art _{\mathbb{C}}} \rightarrow \operatorname{\textbf{Set}} $ defined by
\begin{align*}
MC_{\mathfrak{g}_*}:\; \operatorname{\bf Art _{\mathbb{C}}} & \rightarrow \operatorname{\textbf{Set}} \\
A & \mapsto \left \{ x \in \mathfrak{g}_1\otimes \mathfrak{m}_A\mid D_E x+\frac{1}{2}[x,x]=0 \right \}.
\end{align*}
\end{itemize}
For each $A$, the gauge action of $G_{\mathfrak{g}_*}(A)$ on the set $MC_{\mathfrak{g}_*}(A)$ is functorial in $A$ and gives an action of the group functor $G_{\mathfrak{g}_*}$ on $MC_{\mathfrak{g}_*}$. This allows us to define the quotient functor \begin{align*}
\mathbf{\mathrm{MC}}_{\mathfrak{g}_*}:\; \operatorname{\bf Art _{\mathbb{C}}} & \rightarrow \operatorname{\textbf{Set}} \\
A & \mapsto MC_{\mathfrak{g}_*}(A)/G_{\mathfrak{g}_*}(A).
\end{align*}
\begin{thm} There is an isomorphism
$$\mathbf{\mathrm{MC}}_{\mathfrak{g}_*}\cong \operatorname{Def}_E$$ of functors of artinian rings. As a consequence, the differential graded Lie algebra controlling the deformations of $E$ is $$A^{0,0}(\operatorname{End}(E)) \overset{D_E}{\longrightarrow} A^{0,1}(\operatorname{End}(E)) \overset{D_E}{\longrightarrow} A^{0,2}(\operatorname{End}(E))\overset{D_E}{\longrightarrow}\cdots\overset{D_E}{\longrightarrow}A^{0,n}(\operatorname{End}(E))$$ where the differential is given by the semi-connection $D_E$ and the Lie bracket is given by the rule (\ref{e5.2}).
\end{thm}
\begin{proof}
The local isomorphism
$$ \mathrm{exp}:\;(A^{0,0}(\operatorname{End}(E)),0) \rightarrow (A^{0,0}(\operatorname{GL}(E)),\mathrm{Id}_E) $$ and the fact that $(A^{0,0}(\operatorname{GL}(E)),\mathrm{Id}_E)$ acts on $A^{0,1}(\operatorname{End}(E))$ by conjugation permit us to conclude that the equivalence relation $\sim$ given in Definition \ref{d5.1ii}(ii) is the same as the one induced by the gauge action of $G_{\mathfrak{g}_*}(A)$. Therefore, the desired isomorphism follows immediately.
\end{proof}
\begin{coro} \label{c5.1} Let $X$ be a compact complex manifold over which a holomorphic vector bundle $E$ is defined. Let $G$ be a complex reductive Lie subgroup of the automorphism group of $E$. Then there exists a compatible formal $G$-action on the local moduli space of $E$.
\end{coro}
\begin{proof}
The functor $\mathbf{\mathrm{MC}}_{\mathfrak{g}_*}$ can be naturally upgraded to a derived formal moduli problem $F_{\mathfrak{g}_*}$ in Lurie's sense (cf. \cite{11}) via a simplicial version of the Maurer-Cartan equation (see \cite{8} for such a construction). Moreover, the associated dgLa of $F_{\mathfrak{g}_*}$ is nothing but $\mathfrak{g}_*$. Consequently, $F_{\mathfrak{g}_*}$ is a natural extension of $\operatorname{Def}_E$ in the derived world.
Now, note that the action of $G$ on $E$ induces a natural $G$-action on $\mathfrak{g}_*$. By the same argument as in \cite[Lemma 3.1]{5}, we can write $\mathfrak{g}_*$ as a homotopy colimit of ``simple" dgLas, i.e.
$$\mathfrak{g}_*=\mathrm{colim}_i \; \mathfrak{g}(i)_*$$ where \begin{enumerate}
\item[(i)] each $\mathfrak{g}(i)_k$ is finite-dimensional,
\item[(ii)] $\mathfrak{g}(i)_*$ is cohomologically concentrated in $[0,+\infty)$,
\item[(iii)] each $ \mathfrak{g}(i)_*$ carries a $G$-action and the colimit of these $G$-actions gives back the initial $G$-action on $\mathfrak{g}_*$.
\end{enumerate} A remark is in order. Even in the formal aspect, to make the above $G$-approximation of the associated dgLa $\mathfrak{g}_*$ possible, the $G$-equivariant Hodge decomposition $$A^{0,n}(\operatorname{End}(E)) =\mathcal{H}^{0,n}\oplus \square_{E} A^{0,n}(\operatorname{End}(E)),$$ which is purely analytic, still plays a crucial role.
By \cite[Theorem 2.3]{5}, the semi-prorepresentable object of $F_{\mathfrak{g}_*}$ carries a $G$-action. Hence, the restriction of $F_{\mathfrak{g}_*}$ to $\operatorname{\bf Art _{\mathbb{C}}}$ (which is nothing but $\operatorname{Def}_E$) has a semi-universal element whose base is equipped with a compatible $G$-action. This finishes the proof.
\end{proof}
\begin{rem} Corollary \ref{c5.1} reflects the fact that for deformation problems, a formal solution is somehow easy to produce whereas Corollary \ref{c4.1} tells us that among formal solutions, we can extract a convergent one.
\end{rem}
\section{Perspectives}\label{s6}
In this final section, we summarize what we did in this paper and in \cite{3} in a more general setting in terms of associated dgLas (we also refer the reader to \cite{15} for a version without group actions).
To start, we consider the deformation problem of an analytic object $X_0$, whose associated controlling differential graded Lie algebra is $(\mathfrak{g}_*,d)$. As usual, the space of infinitesimal deformations and that of obstructions are the first and the second cohomology of $\mathfrak{g}_*$, i.e. $H^1(\mathfrak{g}_*)$ and $H^2(\mathfrak{g}_*)$, respectively. Let \begin{align*}
MC_{\mathfrak{g}_*} :\mathfrak{g}_1 &\rightarrow \mathfrak{g}_2\\
\alpha &\mapsto d\alpha +\frac{1}{2}[ \alpha, \alpha]
\end{align*} be the Maurer-Cartan map associated to $\mathfrak{g}_*$. Any subgroup $G$ of the automorphism group of $X_0$ induces a natural $G$-action on each component of $\mathfrak{g}_*$ compatible with the differential $d$. We assume further that there are good analytic structures on $\mathfrak{g}_0$, $\mathfrak{g}_1$ and $\mathfrak{g}_2$ where the implicit function theorem is available (for example, Banach analytic spaces) and there exists a $G$-invariant metric on $\mathfrak{g}_*$, with respect to which we are able to compute the formal adjoint $d^*$ of degree $-1$. Let us denote $\square := dd^*+d^*d$. Suppose further that we have a decomposition
\begin{align*}
\mathfrak{g}_1 &=\ker\square \bigoplus \mathrm{Im}\square
\end{align*} together with two linear operators:
\begin{enumerate}
\item[(i)] The ``Green operator": $\mathcal{G}:\; \mathfrak{g}_1 \rightarrow \mathrm{Im}\square $,
\item[(ii)] The ``harmonic projection": $\mathrm{P}_{\ker\square}:\;\mathfrak{g}_1 \rightarrow \ker\square$
\end{enumerate} such that $$\mathrm{Id}_{\mathfrak{g}_1}= \mathrm{P}_{\ker\square}+\square\mathcal{G}$$ and $\ker\square $ can be naturally identified with $H^1(\mathfrak{g}_*)$. Consider the following ``Kuranishi map"
\begin{align*}
\mathcal{K} :\mathfrak{g}_1 &\rightarrow \mathfrak{g}_1\\
\alpha &\mapsto\alpha +\frac{1}{2}d^*\mathcal{G} [\alpha, \alpha].
\end{align*}
\begin{thm} There exists a compatible $G$-action on the local moduli space of $X_0$.
\end{thm}
\begin{proof} Let us denote by $N$ the following space
$$\lbrace \alpha \in \mathfrak{g}_1 \mid \left( \mathcal{K}-\mathrm{P}_{\ker\square}\right)(\alpha)=0 \rbrace.$$
Then it can be checked that the germ of analytic space $$(T,0):=(N,0)\cap(MC_{\mathfrak{g}_*}^{-1}(0),0)$$ is the desired ``Kuranishi space" (see \cite[Theorem 3.1]{15} for such a verification). The existence of group operations on $(T,0)$ follows immediately from the $G$-equivariance of all the maps and of all the operators involved.
\end{proof}
\begin{rem} The key point here is the existence of the $G$-invariant metric and that of the splitting $$\mathrm{Id}_{\mathfrak{g}_1}= \mathrm{P}_{\ker\square}+\square\mathcal{G}.$$ The former is assured if $G$ is a compact Lie group by the unitary trick while the latter can come from the Hodge theory if we deal with complex compact manifolds. In general, we do not have such powerful tools.
\end{rem}
The existence of reductive group operations on the Kuranishi space of complex compact manifolds (cf. \cite{3}) and that on the Kuranishi space of holomorphic vector bundles, dealt with in this paper, can be thought of as living illustrations of the following philosophy.
\textit{\enquote{Reductive subgroups of the automorphism group of the analytic object under deformation can be (at least locally) analytically extended to its semi-universal deformation.}}
In other words, there should be a compatible extended action on the ``Kuranishi space", which permutes nearby complex structures, and the initial group action might be regarded as the stabilizer group with respect to the prescribed complex structure (corresponding to the reference point). The formal aspect of this philosophy was systematically treated in the groundbreaking work of D. S. Rim, as mentioned in the introduction, in which a formal extendability of reductive actions is guaranteed, unique up to non-canonical equivariant isomorphisms, for any homogeneous fibered category in groupoids. However, the convergence of his construction, which is necessarily required in the analytic setting, is extremely hard to prove even in simple cases. Therefore, analytically speaking, a rigorous mathematical formulation of this philosophy might be a good problem to work on.
\bibliographystyle{amsplain}
One of the radical advances that optical astronomy has seen in recent
years is the advent of wide-field CCD-based surveys. On Paranal,
ESO has recently started operating two dedicated survey telescopes:
VISTA in the infra-red wavelength region and the VLT Survey Telescope
(VST) in the optical. The lion's share of the observing time on both
survey telescopes will be invested in a set of Public Surveys. The
largest of the optical surveys is the Kilo-Degree Survey (KiDS), which
will image 1500 square degrees in four filters ($u$,$g$,$r$,$i$) over
a period of 3--4 years. Combined with one of the VISTA surveys,
VIKING, which will observe the same area in ZYJHK, this will provide a
sensitive, 9-band multi-colour survey.
\begin{figure}[ht]
\includegraphics[width=\textwidth]{P157_fig1.ps}
\caption{Lay-out of the KiDS-North (top) and KIDS-South (bottom)
fields, shown by the hatched areas. Also shown are the areas where
2DF spectra are available, indicated by the large dots, and the area
covered by DR7 of the SDSS survey, indicated by the small dots. The
CFHTLS-W2 field and the DS/COSMOS deep field are overplotted on the
top panel.}
\label{fig:areas}
\end{figure}
{\bf Observational set-up.} KiDS will cover 1500 square degrees, which is approximately 7\% of the extragalactic
sky. It consists of two patches that ensure that observations can take
place year-round. The Northern patch lies on the celestial equator,
while the Southern area straddles the South Galactic Pole
(Fig. \ref{fig:areas}). These specific areas were chosen because they
were the target of massive spectroscopic galaxy surveys already: the
2dF redshift survey \citep{2dfgrs} covers almost the same area, and
KiDS-North overlaps with the SDSS spectroscopic and imaging survey
\citep[SDSS, ][]{sdssdr8}. The exposure times for KiDS and VIKING have
been chosen to yield a median galaxy redshift of 0.8, so that the
evolution of the galaxy population and matter distribution over the
last $\sim$ half of the age of the universe can be studied. They are
also well-matched to the natural exposure times for efficient VST and
VISTA operations, and balanced over the astro-climate conditions on
Paranal (seeing and moon phase) so that all bands can be observed at
the same average rate. This strategy makes optimal use of the fact
that all observations are queue-scheduled, allowing the best seeing
time to be used for deep $r$-band exposures, for example, and the
worst seeing for $u$.
{\bf Science drivers.} The main scientific objective of KiDS and VIKING is to map the matter
distribution in the universe through weak gravitational lensing and
photometric redshift measurements. The large numbers of galaxies that
KiDS will detect, with accurate photometric redshifts up to
$z\simeq1.2$, will allow the Baryon Acoustic Oscillations, an
important cosmological standard ruler, to be measured over a large
redshift range, and thus unveil their evolution. Galaxy-galaxy lensing
(GGL) studies into the structure of galaxy halos for various redshifts
and galaxy types, will exploit the excellent image quality of the
OmegaCAM wide-field camera and the VST on the one hand and the sheer
size of the KiDS data set on the other. The deep photometry and
accurate photometric redshifts also will ensure that KiDS data will be
a powerful tool to study the evolution of galaxies and clusters out to
redshifts of $z\simeq1.5$. Additionally, the extensive data set that KiDS will deliver will be useful in a broad range of research areas in astronomy, for example the study of stellar streams
and the Galactic halo.
{\bf Survey data products.} Being a Public Survey, all KiDS data will be made publicly
available. The KiDS catalogue will contain some 100,000 sources per
square degree (150 million sources over the full survey area), and for
each square degree there will be 10 GB of final image data, 15 TB for
the whole survey. A set of basic data products will be made public, both
through ESO and through the \textsf{Astro-WISE} database: calibrated coadded images, weight maps, calibration images, single-band and multi-band catalogues.
In the long-term, we intend to provide more advanced data products, for example images with
gaussianized point-spread-functions, or morphological parameters of
all detected sources.
\section{Data-centric survey handling in \textsf{Astro-WISE}}
The KiDS survey team is an international collaboration with team members at institutes spread around Europe. The Europe-wide hardware resources are pooled within the survey handling system \textsf{Astro-WISE} (Vriend et al. 2012; McFarland et al. 2011). In \textsf{Astro-WISE} the KiDS team members share their work on survey calibration and quality control. \textsf{Astro-WISE} is a data-centric survey handling system: all survey handling is implemented as operations by data objects on other data objects. Any type of survey product, from raw to final, is represented by a class of data objects. Survey products are framed as objects: informational entities consisting of pixel and/or metadata. Metadata is defined here as {\it all} non-pixel data. The objects carrying the information of final survey products also carry the information on how they can be created out of intermediate objects. This backward-chaining procedure is recursively implemented up to the raw data (see left diagram in Figure~\ref{fig:astrowise}). Thus, a request by a KiDS team member for a survey product, a target, triggers a backward information flow, in the direction of the raw data. The net effect is a forward work flow description, towards the target, that is then executed. The backward information flow is implemented as queries to the database initiated by the requested target itself. The database is queried for objects on which the target depends with the right characteristics, including validity and quality. Either they exist and are returned, or the query is 'backwarded' to the next-level objects closer to the raw survey data. In conclusion, in \textsf{Astro-WISE} survey handling is realized by backward information flows that control forward processing steps. The information flows are the mechanism to manage the sharing of the ocean of KiDS survey data, to control its calibration and to control and improve its quality.
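As an illustration of this backward-chaining mechanism, consider the schematic Python sketch below. The class and method names are hypothetical and do not reproduce the actual \textsf{Astro-WISE} API; the sketch only shows how a request for a target queries the database for existing valid dependencies and triggers processing solely for those that are missing.
\begin{verbatim}
class MiniDB:
    """Toy stand-in for the object database."""
    def __init__(self):
        self.objects = []
    def query(self, cls):
        hits = [o for o in self.objects if isinstance(o, cls) and o.is_valid]
        return hits[-1] if hits else None   # "newer is better"
    def store(self, obj):
        self.objects.append(obj)

class Target:
    dependencies = ()        # classes of objects this target is made from

    def __init__(self, db):
        self.db, self.is_valid = db, True

    def make(self):
        # Backward information flow: query for each dependency; only
        # (re)process those that do not yet exist in the database.
        inputs = []
        for dep in self.dependencies:
            obj = self.db.query(dep)
            if obj is None:
                obj = dep(self.db)
                obj.make()
                self.db.store(obj)
            inputs.append(obj)
        self.process(inputs)  # forward processing step on pixel/metadata

    def process(self, inputs):
        pass
\end{verbatim}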
\begin{figure}[ht]
\label{fig:astrowise}
\includegraphics[width=5.1cm]{P157_fig2a.eps}
\includegraphics[width=5.1cm]{P157_fig2b.eps}
\caption{{\bf Left:} Each box in this target diagram represents a class of survey objects. These objects not only contain the survey products denoted by familiar names in wide-field imaging; they also carry the information on how they, as requested targets, can be created out of other objects, as illustrated by the arrows. Underlying is an object model that captures the relationship between the requested information and the physics of the atmosphere-to-detector observational system. {\bf Right:} This diagram shows the survey operational levels at which data can reside within the KiDS project environment in \textsf{Astro-WISE}.
The baseline KiDS survey products reside at 2:PROJECT. These data can be accessed only by KiDS survey team members. Each KiDS team member can experiment in her/his own level 1:MYDB to create improved versions of these baseline products. Survey data at 1:MYDB is only accessible by the single team member. If content, the member promotes the products to 2:PROJECT to share them with the team. The KiDS project manager can publish baseline survey data from 2 to levels 3 to 5. Survey data at 3:ASTRO-WISE can be accessed by all \textsf{Astro-WISE} users. At 4:WORLD, the data become accessible additionally to the astronomical community without an \textsf{Astro-WISE} account (anonymous users). At 5:VO, the data are accessible also from the Virtual Observatory.}
\end{figure}
{\bf Managing the survey data.} The objects of KiDS inside \textsf{Astro-WISE} are managed via 5 different survey operational levels, named privileges levels. \textsf{Astro-WISE}'s data-centric viewpoint leads to the term privileges: an object grants access to a wider group of users as its privileges level increases numerically. The right diagram in Figure~\ref{fig:astrowise} illustrates how data is transferred through these levels for quality control and survey delivery of the Public Survey KiDS. When using the \textsf{Astro-WISE} environment, KiDS members configure the 'Context' for their handling, which includes limiting queries for and by objects to those with certain privileges.
{\bf Survey quality control.} Objects representing survey products verify their own quality via their own verification method. It is automatically executed upon creation and sets the value of a quality flag attribute to indicate if / how the object's quality is compromised. Users also validate each object by invoking an inspect method of the object. The user's verdict is stored as a separate attribute of the object (always named is\_valid). The privileges levels serve to distinguish between experimental and baseline versions of survey data. A KiDS member tests improvements to, e.g., a calibration method at the MYDB level (see Figure~\ref{fig:astrowise}). Bad outcomes are discarded by invalidating the data. Promising outcomes can be shared with the team by publishing the object to the PROJECT level (see Figure~\ref{fig:astrowise}). The fellow team members can then inspect the data and provide feedback. Upon team acceptance the object becomes baseline and can be published further up, eventually for delivery / sharing with the outside world.
{\bf Survey calibration control.} Calibration data is also represented as objects in \textsf{Astro-WISE}. These objects carry a creation date and editable timestamps that mark their validity period. A request for a target generates a database query that returns all valid calibration objects with the required validity period. The newest calibration object is then selected using the survey handling rule ``newer is better". \textsf{Astro-WISE} provides webservices to manipulate this eclipsing of older calibrations by new ones by adjusting timestamps and validity. The calibration scientist uses Context to limit the survey calibration operations under these rules to the pool of calibration data at certain privileges. Calibration objects with privileges level 3 can be accessed by all \textsf{Astro-WISE} users and form a shared pool of calibration data.
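The timestamp logic can be condensed into a few lines of Python; the field names below are hypothetical and only illustrate the rule.
\begin{verbatim}
def select_calibration(calibrations, obs_date):
    """Pick the newest valid calibration whose validity window
    contains the observation date ("newer is better")."""
    applicable = [c for c in calibrations
                  if c["is_valid"]
                  and c["valid_from"] <= obs_date <= c["valid_until"]]
    return max(applicable, key=lambda c: c["creation_date"], default=None)
\end{verbatim}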
KiDS survey operations started on 15 October 2011. The KiDS team will move from 'quick-look' versions of the first survey products towards publishing of the complete KiDS Public Survey, using \textsf{Astro-WISE} as a 'live archive' that captures the accumulation of knowledge about OmegaCAM, the VST and the KiDS survey data.
\begin{acknowledgements}
This work is financially supported by the Netherlands Research School for Astronomy (NOVA) and Target (www.rug.nl/target). Target is supported by Samenwerkingsverband Noord Nederland, European fund for regional development, Dutch Ministry of economic affairs, Pieken in de Delta, Provinces of Groningen and Drenthe. Target operates under the auspices of Sensor Universe.
\end{acknowledgements}
\vspace{-0.5cm}
\begin{document}
\twocolumn[
\aistatstitle{Self-Supervised Visual Representation Learning Using Lightweight Architectures}
\aistatsauthor{ Prathamesh Sonawane\textsuperscript{1}* \And Sparsh Drolia\textsuperscript{1}* \And Saqib Shamsi\textsuperscript{2} \And Bhargav Jain\textsuperscript{2}}
\vspace{0.4cm}
\aistatsaddress{
\textsuperscript{1} Pune Institute of Computer Technology, Maharashtra, India \textsuperscript{2} Whirlpool Corporation \\
(pratt3000, sparshdrolia, shamsi.saqib)@gmail.com, bhargav\[email protected]
}
]
\begin{abstract}
In self-supervised learning, a model is trained to solve a pretext task, using a data set whose annotations are created by a machine. The objective is to transfer the trained weights to perform a downstream task in the target domain. We critically examine the most notable pretext tasks to extract features from image data and further go on to conduct experiments on resource constrained networks, which aid faster experimentation and deployment. We study the performance of various self-supervised techniques keeping all other parameters uniform. We study the patterns that emerge by varying model type, size and amount of pre-training done for the backbone as well as establish a standard to compare against for future research. We also conduct comprehensive studies to understand the quality of representations learned by different architectures.
\end{abstract}
\section{INTRODUCTION}
Self-supervised learning is a class of methods that leverage \textit{pretext} tasks to create labels automatically from the data. These labels are used to learn representations. The representations learned from the pretext task are then used to train the same model on some task of interest called the \textit{downstream} task. Self-supervised learning is prominently used in various tasks including NLP(Natural Language Processing), computer vision and reinforcement learning to learn spatial and temporal features depending on the task, data set and model architecture.
Although we have observed significant progress in self-supervised learning research, it has yet to consistently outperform the traditional way of training models, which relies on human-annotated data sets. Although recent self-supervised techniques have come close to performing on par with supervised training, they appear somewhat unapproachable due to the amount of compute as well as time demanded for experimentation and research. Despite this drawback, self-supervised learning does provide a way of eliminating the label acquisition cost and reducing the time required to create a data set. Moreover, the trained networks are not tied to a specific task like classification or object detection; the model weights remain universal for a variety of downstream tasks.
It is due to these benefits that we are observing an increased amount of research in similar fields like transfer learning, semi-supervised learning, weakly supervised learning and unsupervised learning. In this paper we focus on self-supervised learning. We study the most notable pretext tasks for visual representation learning on lightweight architectures in a resource constrained environment.
Precisely, we aim to answer the rather intriguing question, \textit{"How well do techniques like Rotnet \cite{Rotnet}, BYOL \cite{BYOL}, SimCLR \cite{SimCLR}, etc., which are traditionally trained on computationally heavier architectures like AlexNet\cite{alexnet}, ResNet \cite{resnet} learn features when they are trained on lightweight architectures like an EfficientNet-lite0 \cite{effnetlite}?"} We conduct a comparative analysis across various techniques with a controlled set of hyper parameters, and see how they perform in a low resource setting. We hope to establish a standard against which future research can be conducted and evaluated prematurely. Our aim is to establish a standard for comparison before dedicating huge chunks of time and capital in order to perform pretext tasks on bigger architectures and consequently bigger data sets. Additionally in our study we also found that lightweight architectures could achieve comparable accuracies with significantly less carbon footprints(Tab. \ref{tab:top-validation-acc}).
To measure the computational complexity we have not only used the number of Floating Point Operations (FLOPs) performed by each model, which is an indirect metric, but have also measured the amount of training time, which is a more direct measure \cite{FLOPSvsTIME}. You can find all of the code for our experiments here. [Once the paper gets accepted we will release the link to our code base here]
\section{RELATED WORK}
With the success of deep neural networks, a lot of tasks can be solved very well by collecting a labelled data set and using supervised learning. In order to perform well, the models usually need a large corpus of labeled examples. However, getting labels for data turns out to be expensive and scaling it up is a major challenge. With the vast amount of unlabeled data being generated in the form of text, images, videos, everyday, the goal of self-supervised learning is to get supervision from the data itself to learn useful representations instead of using explicit labels. Once a good representation is learned on a \textit{pretext} task, the model can be fine tuned on a \textit{downstream} task with comparatively fewer data than what would have been required were the model trained from scratch on the downstream task.
The general nature of self-supervised learning allows it to be used across different modalities including images, text, video, audio and even in robotics. The use of context to predict words \cite{word2vec,glove,ulmfit,bert} is a popular technique in the domain of natural language processing. In a similar vein, the temporal information in video can also be leveraged to learn representations \cite{shuffle_and_learn,arrow_of_time}. One could also use different modalities in videos for the same purpose \cite{cross_and_learn,multimodal}.
There has been a lot of work over the years in the field of visual representation learning. Doersch et al. \cite{patches_ssl} proposed a patch based method to predict the placement of patches relative to each other within an image. They motivated a line of patch based methods such as the "jigsaw puzzle" based task \cite{jigsaw}, where nine patches from the full image are used instead of just two patches to make the pretext task more challenging. There have been more works following these two \cite{jigsaw++,learning_to_count}.
In contrast to patch based methods, there have been other methods that use the image information to create a classification pretext task. Notable examples of these include RotNet \cite{Rotnet}, where the authors rotate the images in multiples of 90 degrees and the model performs a 4-way classification predicting the angle the image was rotated by. Another class of methods has focused on generative modeling to learn representations from images. Researchers have relied on tasks like predicting a subset of channels of the image from another subset of the same image, such as in a colorization pretext task \cite{colorful_colorization} and its improvement, Split-Brain Auto Encoders \cite{SplitBrainAEZhang2017}. Pathak et al. used image inpainting as a pretext task for self-supervised learning \cite{inpainting}.
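Pseudo-labels of this kind are generated from the data itself at essentially no cost; for instance, a minimal PyTorch sketch of the RotNet label construction (assuming CHW image tensors; the function name is ours) is the following.
\begin{verbatim}
import torch

def rotnet_views(img):
    """Return the four rotated copies of a CHW image tensor together
    with their pseudo-labels (0, 1, 2, 3 for 0, 90, 180, 270 degrees)."""
    views = torch.stack([torch.rot90(img, k, dims=(1, 2)) for k in range(4)])
    labels = torch.arange(4)
    return views, labels
\end{verbatim}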
Recently there has been a rise in the use of contrastive methods, which have performed extremely well in this domain. Contrastive methods aim to learn a model which transforms an input into an embedding vector such that positive pairs (e.g., augmented views of the same image) have similar embeddings relative to embeddings of negative samples. SimCLR \cite{SimCLR} is a framework which learns representations by maximizing the agreement between different augmented views of the same input in a latent space. Barlow Twins \cite{barlow_twins} learns to make the cross-correlation matrix between the two distorted versions of the same image close to the identity matrix. MOCO \cite{moco} and MOCO-v2 \cite{mocov2} rely on a framework of representation learning from images as a dynamic dictionary look-up. BYOL \cite{BYOL} aims to learn a representation using two neural networks, which are referred to as the online and target networks, without the use of negative samples. The two networks have the same architecture, with the target network having Polyak-averaged weights.
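To make the contrastive objective concrete, below is a minimal PyTorch sketch of the NT-Xent loss used by SimCLR, where \texttt{z1} and \texttt{z2} are the projected embeddings of two augmented views of the same batch; the temperature value is illustrative, not the one used in our experiments.
\begin{verbatim}
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent: each sample's positive is its augmented twin; the other
    2n - 2 samples in the batch act as negatives."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2n, d)
    sim = z @ z.t() / temperature                       # scaled cosine sims
    sim.fill_diagonal_(float("-inf"))                   # exclude self-pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)
\end{verbatim}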
There has been a lot of focus and effort from the research community on coming up with new pretext tasks. We aim to take a complementary approach in the self-supervised research landscape by evaluating various pretext tasks on low resource architectures like MobileNetv2 \cite{mobilenetv2}, ShuffleNetv2 \cite{shufflenetv2}, SqueezeNet \cite{squeezenet} and EfficientNetLite0 \cite{effnetlite} and compare and contrast the performance on a computationally expensive architecture like a ResNet \cite{resnet}. Our work is similar to the study by Kolesnikov et al. \cite{revisiting_ssl} where they investigate how architectural choices affect the performance of various pretext tasks for visual representation learning. However, while they use different variants of Residual Networks \cite{resnet}, which are computationally expensive architectures, the aim of our work is to examine how self-supervised visual representation learning fares on computationally lighter architectures.
\begin{table*}[ht]
\centering
\caption{Max validation accuracy achieved on the downstream classification task for different pretext tasks and backbone architectures. C.T. refers to contrastive techniques; the column mentions whether the respective technique is contrastive or not. CO$_2$ Gen. is the carbon emitted, in kg CO$_2$ equivalent, for training on average on the respective architecture.}
\label{tab:top-validation-acc}
\begin{tabular}{cccccccc}
\cline{1-6} \cline{8-8}
\multicolumn{1}{|c|}{\cellcolor[HTML]{9B9B9B}\textbf{Val. Acc.}} & \multicolumn{1}{c|}{\cellcolor[HTML]{C0C0C0}\textbf{Efficientnetlite0}} & \multicolumn{1}{c|}{\cellcolor[HTML]{C0C0C0}\textbf{Mobilenetv2}} & \multicolumn{1}{c|}{\cellcolor[HTML]{C0C0C0}\textbf{ResNet-18}} & \multicolumn{1}{c|}{\cellcolor[HTML]{C0C0C0}\textbf{Shufflenetv2}} & \multicolumn{1}{c|}{\cellcolor[HTML]{C0C0C0}\textbf{Squeezenet}} & \multicolumn{1}{c|}{.} & \multicolumn{1}{c|}{\cellcolor[HTML]{C0C0C0}\textbf{C.T.}} \\ \cline{1-6} \cline{8-8}
\multicolumn{1}{|c|}{\cellcolor[HTML]{C0C0C0}\textbf{Rotnet}} & \multicolumn{1}{c|}{\cellcolor[HTML]{FFFFFF}\textbf{0.28}} & \multicolumn{1}{c|}{0.31} & \multicolumn{1}{c|}{0.33} & \multicolumn{1}{c|}{0.37} & \multicolumn{1}{c|}{0.30} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{No} \\ \cline{1-6} \cline{8-8}
\multicolumn{1}{|c|}{\cellcolor[HTML]{C0C0C0}\textbf{MOCOv2}} & \multicolumn{1}{c|}{0.47} & \multicolumn{1}{c|}{0.51} & \multicolumn{1}{c|}{\cellcolor[HTML]{FFFFFF}\textbf{0.78}} & \multicolumn{1}{c|}{\cellcolor[HTML]{FFFFFF}0.65} & \multicolumn{1}{c|}{\cellcolor[HTML]{FFFFFF}0.10} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{Yes} \\ \cline{1-6} \cline{8-8}
\multicolumn{1}{|c|}{\cellcolor[HTML]{C0C0C0}\textbf{Split Brain}} & \multicolumn{1}{c|}{0.58} & \multicolumn{1}{c|}{0.57} & \multicolumn{1}{c|}{0.59} & \multicolumn{1}{c|}{0.53} & \multicolumn{1}{c|}{0.45} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{No} \\ \cline{1-6} \cline{8-8}
\multicolumn{1}{|c|}{\cellcolor[HTML]{C0C0C0}\textbf{SimCLR}} & \multicolumn{1}{c|}{0.69} & \multicolumn{1}{c|}{0.65} & \multicolumn{1}{c|}{0.70} & \multicolumn{1}{c|}{0.59} & \multicolumn{1}{c|}{0.60} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{Yes} \\ \cline{1-6} \cline{8-8}
\multicolumn{1}{|c|}{\cellcolor[HTML]{C0C0C0}\textbf{BYOL}} & \multicolumn{1}{c|}{0.51} & \multicolumn{1}{c|}{0.71} & \multicolumn{1}{c|}{\cellcolor[HTML]{FFFFFF}\textbf{0.78}} & \multicolumn{1}{c|}{0.71} & \multicolumn{1}{c|}{0.68} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{Yes} \\ \cline{1-6} \cline{8-8}
. & & & & & & & \\ \cline{1-6}
\multicolumn{1}{|c|}{\cellcolor[HTML]{C0C0C0}\textbf{Avg. FLOPs}} & \multicolumn{1}{c|}{3.90E+08} & \multicolumn{1}{c|}{3.27E+08} & \multicolumn{1}{c|}{1.80E+09} & \multicolumn{1}{c|}{4.01E+07} & \multicolumn{1}{c|}{3.20E+08} & & \\ \cline{1-6}
\multicolumn{1}{|c|}{\cellcolor[HTML]{C0C0C0}\textbf{CO$_2$ Gen.}} & \multicolumn{1}{c|}{1.9} & \multicolumn{1}{c|}{1.8} & \multicolumn{1}{c|}{3.22} & \multicolumn{1}{c|}{1.84} & \multicolumn{1}{c|}{1.99} & & \\ \cline{1-6}
\end{tabular}
\end{table*}
\section{EXPERIMENTAL SETUP}
\label{section:experimental_setup}
\subsection{Dataset}We have used STL-10 \cite{STL10} for all of our experiments. STL-10 is made up of a subset of images from ImageNet \cite{imagenet}. It is composed of labeled and unlabeled subsets. The first set consists of $10$ classes with $500$ training and $800$ validation RGB images per class. The images are of size $96 \times 96$. These are used for downstream training and evaluation. The second set consists of $100,000$ unlabelled RGB images of size $96 \times 96$, which are used for pretext task training.
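Both subsets are directly available through \texttt{torchvision}; a minimal loading sketch is given below (the root path and the bare tensor transform are placeholders, and note that the labeled $800$-images-per-class split is exposed under the name \texttt{test}).
\begin{verbatim}
import torchvision
import torchvision.transforms as T

tf = T.ToTensor()
# 100,000 unlabeled 96 x 96 images for pretext-task training
pretext_set = torchvision.datasets.STL10(
    root="data", split="unlabeled", download=True, transform=tf)
# labeled splits for downstream training and evaluation
train_set = torchvision.datasets.STL10(
    root="data", split="train", download=True, transform=tf)
val_set = torchvision.datasets.STL10(
    root="data", split="test", download=True, transform=tf)
\end{verbatim}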
\begin{figure*}[ht]
\centering
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=\linewidth]{plots/validation acc/Split_brain_Validation_accuracy.png}
\caption{Split-Brain Auto Encoders}
\label{fig:val_acc_graphs1}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=\linewidth]{plots/validation acc/SimCLR_val_accuracy.png}
\caption{SimCLR}
\label{fig:val_acc_graphs2}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=\linewidth]{plots/validation acc/Rotnet_validation_accuracy.png}
\caption{Rotnet}
\label{fig:val_acc_graphs3}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=\linewidth]{plots/validation acc/MOCOv2.jpeg}
\caption{MOCOv2}
\label{fig:val_acc_graphs4}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=\linewidth]{plots/validation acc/BYOL.jpeg}
\caption{BYOL}
\label{fig:val_acc_graphs5}
\end{subfigure}
\caption{The graphs show the maximum validation accuracy achieved on the downstream classification task as a function of the amount of pretraining (in intervals of 10 epochs) on different architectures, for every technique. Each graph represents one of the 5 techniques evaluated. (x-axis: number of epochs for which the model was pretrained. y-axis: maximum validation accuracy achieved on the downstream classification task.)
\label{fig:val_acc_graphs}}
\end{figure*}
\subsection{Architectures}
\label{subsection:architectures}
\subsubsection{ResNet-18}
To compare the results of the different techniques against a computationally expensive architecture, we train a ResNet-18 \cite{resnet} on all of the techniques separately. The network consists of an initial $7 \times 7$ convolutional layer followed by a max pooling operation, and then $16$ $3 \times 3$ convolutional layers with ReLU non-linearities. Downsampling is performed by strided convolutions instead of pooling layers. The network requires $1.8$ GFLOPs (multiply-adds).
\subsubsection{Mobilenet-v2}
We chose Mobilenet-v2 \cite{mobilenetv2} as one of the lightweight architectures for this study. Mobilenet-v2 is built using inverted residual blocks with shortcut connections between thin bottleneck layers. Its design includes an initial fully convolutional layer with 32 filters, followed by 19 residual bottleneck layers. All kernels in the convolutional layers are of size $3 \times 3$.
\subsubsection{Efficientnet-lite0}
We compared the performance of Efficientnet-lite0 \cite{effnetlite} as it is one of the modern state-of-the-art architectures. It is a smaller version of EfficientNet \cite{effnetlite} and uses compound scaling to uniformly scale depth $\alpha$, width $\beta$ and image size $\gamma$. Neural architecture search is used to obtain the baseline model, and bigger architectures are obtained by scaling it up. It consists of convolutions followed by the inverted residual blocks found in MobileNet-v2.
\subsubsection{Squeezenet}
We also trained Squeezenet \cite{squeezenet}, in addition to the architectures mentioned above, on all the techniques separately. It is composed of fire modules, each of which consists of a squeeze convolutional layer (with only $1 \times 1$ filters) that feeds into an expand layer (with a mix of $1 \times 1$ and $3 \times 3$ convolutional filters). The architecture is constructed from a convolutional layer, followed by $9$ fire modules, followed by a final convolutional layer and a Softmax unit. Downsampling in SqueezeNet is performed by pooling layers.
\subsubsection{Shufflenet-v2}
Shufflenet-v2 \cite{shufflenetv2} was our final lightweight architecture for this study. The Shufflenet-v2 unit is a residual block. In its residual branch, a computationally economical $3 \times 3$ depthwise convolution is applied to the bottleneck feature map. The first $1 \times 1$ layer is then replaced with a pointwise group convolution followed by a channel shuffle operation, forming a Shufflenet unit. For example, given an input of size $c \times h \times w$ and $m$ bottleneck channels, a Shufflenet unit requires only $hw(2cm/g + 9m)$ FLOPs, where $g$ is the number of groups for the convolutions.
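As a sanity check on this expression, with illustrative values (not taken from the paper) $c = 24$, $h = w = 28$, $m = 24$ and $g = 3$, one unit costs
\[
hw\left(\frac{2cm}{g} + 9m\right) = 28 \cdot 28 \cdot \left(\frac{2 \cdot 24 \cdot 24}{3} + 9 \cdot 24\right) = 784 \cdot 600 \approx 4.7 \times 10^5 \text{ FLOPs}.
\]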
\subsection{Techniques}
\subsubsection{Rotnet}
The authors proposed rotating an image pseudo-randomly by an angle from a discrete set \cite{Rotnet}. The angle is treated as the label and the rotated image as the input. The paper tested rotating images in intervals of 45, 90 and 180 degrees separately; the best results were observed for the 90 degree interval set and the worst for the 180 degree interval set. We used 90 degree intervals as our labels for training, since they achieved the highest accuracy in the paper.
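A minimal sketch of this pretext task, assuming a PyTorch implementation (the function below is illustrative, not the authors' code):
\begin{verbatim}
import torch

def rotnet_batch(images):
    # images: (B, C, H, W); rotate each image by k * 90 degrees
    # and use k in {0, 1, 2, 3} as its classification label
    k = torch.randint(0, 4, (images.size(0),))
    rotated = torch.stack([torch.rot90(img, int(r), dims=(1, 2))
                           for img, r in zip(images, k)])
    return rotated, k
\end{verbatim}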
\subsubsection{MOCOv2}
The authors introduced a contrastive learning approach, momentum contrast (MOCO) \cite{mocov2}. The key ideas of this method are: 1) the implementation of a queue as the dictionary, which stores a large set of keys; 2) updating the key encoder via a momentum update from the query encoder (without passing the batch through the key encoder). Compared with previous contrastive learning methods based on memory banks or end-to-end learning, MOCOv2 not only supports a large negative sample size but also maintains a consistent key encoding.
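The momentum update can be sketched as follows; this is a simplified illustration (the queue logic is omitted), with the MOCO default $m = 0.999$ assumed.
\begin{verbatim}
import torch

@torch.no_grad()
def momentum_update(key_encoder, query_encoder, m=0.999):
    # The key encoder is an exponential moving average of the
    # query encoder; no gradients flow through this update.
    for pk, pq in zip(key_encoder.parameters(),
                      query_encoder.parameters()):
        pk.data.mul_(m).add_(pq.data, alpha=1.0 - m)
\end{verbatim}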
\subsubsection{SimCLR}
The authors introduced a framework \cite{SimCLR} for contrastive learning of visual representations. SimCLR learns representations by maximising the similarity of two differently augmented views of the same image. It uses a stochastic data augmentation module that randomly modifies a given data instance, producing two correlated views of the same example, which are regarded as a positive pair. SimCLR applies three basic augmentations in sequence: random cropping followed by resizing to the original size, random colour distortion, and random Gaussian blur. The authors argue that random cropping and colour distortion are critical to achieving high performance.
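The augmentation module can be sketched with torchvision transforms as below; the jitter magnitudes and blur kernel size are illustrative assumptions rather than the paper's settings.
\begin{verbatim}
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomResizedCrop(96),  # STL-10 images are 96x96
    transforms.RandomApply(
        [transforms.ColorJitter(0.8, 0.8, 0.8, 0.2)], p=0.8),
    transforms.RandomGrayscale(p=0.2),
    transforms.GaussianBlur(kernel_size=9),
    transforms.ToTensor(),
])

def two_views(pil_image):
    # Two independent augmentations of one image: a positive pair
    return augment(pil_image), augment(pil_image)
\end{verbatim}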
\begin{figure*}[ht]
\centering
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=\linewidth]{plots/architecture_wise_graphs/Efficientnetlite0.jpeg}
\caption{Efficientlite0}
\label{fig:arch_wise_graphs1}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=\linewidth]{plots/architecture_wise_graphs/mobilenet.jpeg}
\caption{Mobilenetv2}
\label{fig:arch_wise_graphs2}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=\linewidth]{plots/architecture_wise_graphs/Resnet.jpeg}
\caption{ResNet-18}
\label{fig:arch_wise_graphs3}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=\linewidth]{plots/architecture_wise_graphs/shufflenet.jpeg}
\caption{Shufflenet}
\label{fig:arch_wise_graphs4}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=\linewidth]{plots/architecture_wise_graphs/squeezenet.jpeg}
\caption{Squeezenet}
\label{fig:arch_wise_graphs5}
\end{subfigure}
\caption{The graphs show the maximum validation accuracy achieved on the downstream classification task as a function of the amount of pretraining, for the different techniques on each architecture. Each graph represents one of the 5 architectures evaluated. (x-axis: number of epochs for which the model was pretrained. y-axis: maximum validation accuracy achieved on the downstream classification task.)}
\label{fig:arch_wise_graphs}
\end{figure*}
\subsubsection{BYOL}
BYOL \cite{BYOL} is a contrastive learning approach which uses two encoder networks with the same architecture, referred to as the online and target networks, to learn representations, and minimises a loss between the representations learned by the two networks. Unlike other contrastive learning methods, BYOL does not use any negative samples. The output of the target network serves as a regression target for the online network: BYOL trains its online network on an augmented view of an image to predict the target network's representation of another augmented view of the same image. The authors use a ResNet-50 as the encoder network. For the projection MLP, the 2048-dimensional feature vector is first projected onto a 4096-dimensional space, with batch norm followed by a ReLU non-linearity; the resulting vector is then reduced to a 256-dimensional feature vector. The same architecture is used for the predictor network.
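The projection and predictor heads described above can be sketched as follows (a hedged reading of the description; in BYOL the batch norm is preceded by a linear layer):
\begin{verbatim}
import torch.nn as nn

def byol_mlp(in_dim=2048, hidden_dim=4096, out_dim=256):
    # linear -> batch norm -> ReLU, then project down to 256
    return nn.Sequential(
        nn.Linear(in_dim, hidden_dim),
        nn.BatchNorm1d(hidden_dim),
        nn.ReLU(inplace=True),
        nn.Linear(hidden_dim, out_dim),
    )
\end{verbatim}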
\subsubsection{Split-Brain Autoencoders}
Split-Brain Autoencoders are a spin on the traditional autoencoder architecture \cite{SplitBrainAEZhang2017}. The method adds a split to the network, creating two sub-networks. The image is divided into two subsets of channels, and each sub-network predicts one subset of channels from the other. For this study, we use the \textit{Lab} colour space and divide the image into the perceptual lightness channel \textit{L} and the colour channels \textit{ab}. Both sub-networks are trained for classification using a cross-entropy objective. When predicting \textit{L} from \textit{ab}, the output space is quantized into 50 bins of size 2. When predicting \textit{ab} from \textit{L}, the output space is quantized into 313 bins of size 100.
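The channel split can be sketched as below; the quantization of the prediction targets into the bins described above is omitted for brevity.
\begin{verbatim}
def split_lab(lab_image):
    # lab_image: tensor of shape (3, H, W) in Lab colour space
    L = lab_image[:1]   # lightness channel for one sub-network
    ab = lab_image[1:]  # colour channels for the other sub-network
    # Each sub-network predicts a quantized encoding of the
    # channels it does not see.
    return L, ab
\end{verbatim}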
\section{EXPERIMENTS}
We evaluated the performance of five of the most cited techniques: Rotnet, MOCOv2, Split-Brain Autoencoders, SimCLR and BYOL, across five different architectures: Efficientnetlite0, Mobilenetv2, ResNet-18, Shufflenetv2 and Squeezenet. All of them are relatively small architectures, except for ResNet-18, which acts as a point of comparison for the four lightweight architectures. All of the techniques and architectures are described in Section \ref{section:experimental_setup} above.
Every architecture was trained on each pretext task, using the unlabeled subset of STL-10, until convergence. We saved the weights of the network every 5 epochs during pretraining. These checkpoints were later used on the downstream classification task to evaluate performance as a function of the amount of pretraining the networks were exposed to.
We used categorical cross-entropy loss for the downstream training and Adam \cite{adam} as the optimizer. For the pretext tasks, all configuration was kept as stated in the respective papers; only the model architectures were replaced.
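In PyTorch terms, the downstream configuration amounts to the following sketch; the helper name is hypothetical, and the optimizer hyperparameters are left at library defaults (an assumption, as the text does not state them).
\begin{verbatim}
import torch

def downstream_setup(model):
    criterion = torch.nn.CrossEntropyLoss()   # categorical CE
    optimizer = torch.optim.Adam(model.parameters())
    return criterion, optimizer
\end{verbatim}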
To inspect the quality of the learned representations and understand which parts of an image most affect the predictions made by the models, we used saliency maps \cite{Saliency}. Saliency maps show the degree of importance of each pixel of an image to the model's prediction. Our approach was to find the best true positives and false negatives for each experiment and then examine those for observable trends.
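A vanilla gradient-based saliency map in the spirit of \cite{Saliency} can be sketched as follows (illustrative PyTorch code, not the authors' implementation):
\begin{verbatim}
import torch

def saliency_map(model, image, label):
    # Gradient of the class score w.r.t. the input pixels; the
    # channel-wise maximum of |gradient| gives a (H, W) heat map.
    image = image.clone().requires_grad_(True)
    score = model(image.unsqueeze(0))[0, label]
    score.backward()
    return image.grad.abs().max(dim=0).values
\end{verbatim}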
Another technique we used to understand the models' performance is $k$-nearest neighbours search (kNN) \cite{KNN}. We use it to find the closest images in the space of representations learned by the models, with Euclidean distance as the distance measure.
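Concretely, a query can be sketched as below, assuming feature vectors have already been extracted (the function name is ours):
\begin{verbatim}
import torch

def knn_query(query_feat, gallery_feats, k=5):
    # Euclidean distances from one representation (D,) to a
    # gallery (N, D); return the indices of the k closest images.
    dists = torch.cdist(query_feat.unsqueeze(0),
                        gallery_feats).squeeze(0)
    return torch.topk(dists, k, largest=False).indices
\end{verbatim}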
\begin{figure*}[ht]
\centering
\begin{subfigure}{.28\textwidth}
\centering
\includegraphics[width=\linewidth]{plots/Saliency_maps/Bird.jpeg}
\caption{Bird}
\label{fig:Sal_Bird}
\end{subfigure}
\hspace{1em}
\begin{subfigure}{.28\textwidth}
\centering
\includegraphics[width=\linewidth]{plots/Saliency_maps/Horse.jpeg}
\caption{Horse}
\label{fig:Sal_Horse}
\end{subfigure}
\hspace{1em}
\begin{subfigure}{.28\textwidth}
\centering
\includegraphics[width=\linewidth]{plots/Saliency_maps/plane.jpeg}
\caption{Aeroplane}
\label{fig:Sal_Plane}
\end{subfigure}
\hspace{1em}
\begin{subfigure}{.28\textwidth}
\centering
\includegraphics[width=\linewidth]{plots/Saliency_maps/ship.jpeg}
\caption{Ship}
\label{fig:Sal_ship}
\end{subfigure}
\hspace{1em}
\begin{subfigure}{.28\textwidth}
\centering
\includegraphics[width=\linewidth]{plots/Saliency_maps/truck.jpeg}
\caption{Car}
\label{fig:Sal_truck}
\end{subfigure}
\\
\hspace{1em}
\begin{subfigure}{.28\textwidth}
\centering
\includegraphics[width=\linewidth]{plots/Saliency_maps/saliency_scale.jpeg}
\caption{Saliency heat map scale. Blue showing the least focused regions and red showing the most focused regions}
\label{fig:sal_scale}
\end{subfigure}
\caption{Saliency maps for BYOL with the Mobilenetv2 backbone. Although only a few examples are shown, these saliency maps represent a trend we observed in most of our experiments. In Fig \ref{fig:Sal_Bird} the model focuses on the surroundings rather than the bird; accordingly, bird images were the most misclassified throughout our experiments. In Fig \ref{fig:Sal_Horse} the model focuses on the horse with some regard for its surroundings as well, which may be why horse images were classified correctly in most instances. In Fig \ref{fig:Sal_Plane} and Fig \ref{fig:Sal_ship} the model focuses primarily on the blue pixels, and thus tends to confuse planes and ships in practice. In Fig \ref{fig:Sal_truck} the model focuses on the red pixels, which explains why red coloured cars and trucks are misclassified as each other by the models.}
\label{fig:saliencymaps}
\end{figure*}
\section{Results}
\subsection{General}
As is evident from Fig \ref{fig:val_acc_graphs}, ResNet-18 is the best performer in Fig \ref{fig:val_acc_graphs2}, Fig \ref{fig:val_acc_graphs4} and Fig \ref{fig:val_acc_graphs5}, with the difference in performance increasing with the amount of pretraining. This was expected, as it is significantly heavier than the other architectures. The highest validation accuracy in all our experiments was achieved using ResNet-18, twice (Table \ref{tab:top-validation-acc}). For Split-Brain Autoencoders and Rotnet, ResNet-18 has the second-best performance. Split-Brain appears to be unstable when trained on ResNet-18 (as we can see from Fig \ref{fig:val_acc_graphs1}). For Rotnet, after 100 epochs of pretraining all architectures reach about the same validation accuracy (Fig \ref{fig:val_acc_graphs3}), so it is unclear which architecture performs best. Despite all this, it is safe to say that ResNet-18 outperforms the lighter models overall. On the other hand, we can also observe in Fig \ref{fig:val_acc_graphs} that Squeezenet consistently appears among the two worst performing models.
Another key takeaway from Fig \ref{fig:val_acc_graphs4} and Fig \ref{fig:val_acc_graphs5} is that while Shufflenetv2 has trouble learning with little pretraining, it eventually overtakes the other architectures as the amount of pretraining increases. Hence, while Shufflenetv2 learns more slowly with respect to the pretraining done, it learns better feature representations over time, unlike the other four architectures.
We can observe from Fig \ref{fig:arch_wise_graphs} that BYOL is the best performing technique for 4 out of 5 model architectures. From Fig \ref{fig:val_acc_graphs} we can observe that it is also the most stable among the techniques.
Split-Brain Autoencoders and Rotnet require relatively little pretraining to reach their maximum validation accuracies, regardless of the architecture (Fig \ref{fig:val_acc_graphs1} and Fig \ref{fig:val_acc_graphs3}). Rotnet is stable while doing so, but the same cannot be said about Split-Brain Autoencoders.
Another observation is that excessive pretraining had neither detrimental nor beneficial effects on performance on the downstream task.
\subsection{Saliency Maps and kNN}
We performed $k$-nearest neighbours search for each architecture and technique. One common trend we observed was that most of the techniques misclassified aeroplanes as ships (Fig \ref{fig:KNN_Plane_main}, Fig \ref{fig:KNN_Plane_rest}). This could be attributed to the prevalence of blue pixels in both classes of images. Moreover, this assumption is strengthened by visualizing such images through saliency maps (Fig \ref{fig:Sal_Plane}), which showed that most models classified those images primarily based on the blue pixels rather than the object.
Another observation is that images with a substantial amount of red pixels are directly classified as trucks by most models (Fig \ref{fig:KNN_Car_main}, Fig \ref{fig:KNN_Car_rest}). This may be due to skewed data in the truck class and a general lack of other red coloured objects. Consequently, the truck label was often wrongly predicted across most techniques. The saliency map of a truck in Fig \ref{fig:Sal_truck} further supports this idea: we observe that the model classifies the image as a truck primarily based on the red coloured region.
Models were most consistently accurate when predicting horse images and least consistently accurate when predicting bird images (Fig \ref{fig:KNN_Horse_main}, Fig \ref{fig:KNN_Horse_rest}, Fig \ref{fig:KNN_Bird_main}, Fig \ref{fig:KNN_Bird_rest}). For instance, we observed in Fig \ref{fig:Sal_Horse} that the model focused on the horse with some regard for its surroundings. On the contrary, in Fig \ref{fig:Sal_Bird} there is very little focus on the bird, and the model classifies the image more on the basis of its surroundings. These observations are representative of the many other observations we made during our experiments with saliency maps and kNN.
\section{Assumptions and Limitations}
\subsection{Architectures without skip connections}
After conducting most of our experiments, we realized that all the architectures used for evaluation intrinsically incorporate skip connections. Conducting experiments on architectures without skip connections would have resulted in a more comprehensive study.
\subsection{Heavier Architectures}
Our experiments cover 4 lightweight architectures and 1 moderately heavy architecture, but evaluation on even bigger architectures (e.g.\ ResNet-101) could have provided a more concrete comparison benchmark. We were unable to conduct these experiments due to the high compute requirement, as well as the large amount of time required to conduct a single experiment across all the tasks.
\subsection{Dataset}
We used a single dataset for all of our experiments. While this removed the chance of inherent variation in results due to varying data, it also limited our observation scope. Conducting all the experiments on a different dataset could have helped to more concretely establish the model performances.
\section{CO$_2$ Emissions Related to Experiments}
Experiments were conducted using Google Cloud Platform in the asia-southeast1 region, which has a carbon efficiency of 0.42 kgCO$_2$eq/kWh. A cumulative total of 250 hours of computation was performed on Tesla P100 hardware (TDP of 250W).
Total emissions are estimated to be 35 kgCO$_2$eq, 100\% of which was directly offset by the cloud provider.
Estimations were conducted using the Machine Learning Impact calculator presented in \cite{lacoste2019quantifying}.
\begin{figure*}[ht]
\centering
\begin{subfigure}{.08\textwidth}
\centering
\includegraphics[width=\linewidth]{plots/KNN/plane_main.jpeg}
\caption{Plane}
\label{fig:KNN_Plane_main}
\end{subfigure}
\hspace{3em}
\begin{subfigure}{.62\textwidth}
\centering
\includegraphics[width=\linewidth]{plots/KNN/plane_rest.jpeg}
\caption{Nearest neighbour results for Shufflenetv2 trained with BYOL as the pretext task.}
\label{fig:KNN_Plane_rest}
\end{subfigure}
\begin{subfigure}{.08\textwidth}
\centering
\includegraphics[width=\linewidth]{plots/KNN/Bird_main.jpeg}
\caption{Bird}
\label{fig:KNN_Bird_main}
\end{subfigure}
\hspace{3em}
\begin{subfigure}{.62\textwidth}
\centering
\includegraphics[width=\linewidth]{plots/KNN/Bird_rest.jpeg}
\caption{Nearest neighbour results for SqueezeNet trained with BYOL as the pretext task.}
\label{fig:KNN_Bird_rest}
\end{subfigure}
\begin{subfigure}{.08\textwidth}
\centering
\includegraphics[width=\linewidth]{plots/KNN/Car_main.jpeg}
\caption{Truck}
\label{fig:KNN_Car_main}
\end{subfigure}
\hspace{3em}
\begin{subfigure}{.62\textwidth}
\centering
\includegraphics[width=\linewidth]{plots/KNN/Car_rest.jpeg}
\caption{Nearest neighbour results for Mobilenetv2 trained with BYOL as the pretext task.}
\label{fig:KNN_Car_rest}
\end{subfigure}
\begin{subfigure}{.08\textwidth}
\centering
\includegraphics[width=\linewidth]{plots/KNN/Horse_main.jpeg}
\caption{Horse}
\label{fig:KNN_Horse_main}
\end{subfigure}
\hspace{3em}
\begin{subfigure}{.62\textwidth}
\centering
\includegraphics[width=\linewidth]{plots/KNN/Horse_rest.jpeg}
\caption{Nearest neighbour results for Efficientnetlite0 trained with BYOL as the pretext task.}
\label{fig:KNN_Horse_rest}
\end{subfigure}
\caption{The figure shows query images (left) with mapped images (right) whose feature vectors have the smallest Euclidean distances from the query image; the Euclidean distance increases from left to right. The kNN search shown here is for the best performing pretext technique, BYOL, but similar trends were observed for the other techniques too. In Fig \ref{fig:KNN_Plane_main} and Fig \ref{fig:KNN_Plane_rest} we can observe that the model groups together images with blue pixels and thus tends to misclassify boats and aeroplanes; evidence for this can be seen in Fig \ref{fig:Sal_Plane}, where the model classifies such images on the basis of the blue regions. In Fig \ref{fig:KNN_Bird_main} and Fig \ref{fig:KNN_Bird_rest} the mapped images appear somewhat random, which may be because, as in Fig \ref{fig:Sal_Bird}, the model does not focus on the bird but vaguely on the background. In Fig \ref{fig:KNN_Car_main} and Fig \ref{fig:KNN_Car_rest} the model groups together images of cars as well as trucks, probably due to the red pixels; from Fig \ref{fig:Sal_truck} we can observe that the model focuses on the red coloured pixels only. In Fig \ref{fig:KNN_Horse_main} and Fig \ref{fig:KNN_Horse_rest} the model has grouped together all horse images accurately, which may be a result of the model focusing on the horse, as seen in Fig \ref{fig:Sal_Horse}.}
\label{fig:KNN_images}
\end{figure*}
\section{Conclusion}
In conclusion, we found that while having a heavier architecture does help, having a good pretext technique has a bigger impact on the performance of the models. In our experiments, BYOL was the best performing technique and Rotnet was consistently among the worst performing techniques. Contrastive techniques performed better than non-contrastive techniques most of the time, even in the realm of lightweight architectures.
Aeroplanes and ships are mislabelled as each other because the models focus on the blue pixels in the image rather than the object. Similarly, cars and trucks are mislabelled as each other because the models focus on the red pixels. This could be tackled by using a more diverse dataset. Bird images were the most misclassified and horse images the most successfully classified, which we attributed to how much focus the models place on the object versus its environment.
\section{Future Work}
Some directions that could be explored in the future include using more diverse and balanced datasets. Experiments on other lightweight architectures, such as CondenseNet \cite{condensenet} and ThiNet \cite{thinet}, could also be conducted. Computationally heavier architectures than ResNet-18, such as ResNet-50, ResNet-101 and DenseNet-121, could also be used to gauge the performance of the various pretext tasks. Incorporating more self-supervised techniques is another important goal. Finally, a more diverse set of downstream tasks, such as object detection using the pretext-trained models as backbones, would enable a fairer study of the effectiveness of architectural and method choices.
\part{Computation of Adams differentials}\label{part:computation}
\section{Overview}
The goal of this \namecref{part:computation} is to use the computer calculations of \Cref{part:secondary} to compute differentials in the Adams spectral sequence.
The computer algorithm automatically gives us all $d_2$ differentials. To compute longer differentials, we introduce the notion of a hidden extension on the $E_k$ page. Essentially by definition, hidden extensions on the $E_3$ page can be read off from the computer-calculated $\Mod_{C\tau^2}$ composition products. Equipped with these hidden extensions, a generalized Leibniz rule then lets us relate differentials of different lengths.
After introducing this machinery in \Cref{section:diff-hidden}, we proceed to perform two sets of computations.
In \Cref{section:old-diff}, we compute the first 35 stems of the Adams spectral sequence. Of course, all of these results are well-known; the goal is to illustrate the techniques in more familiar territory.
In \Cref{section:new-diff}, we resolve previously unknown differentials in the Adams spectral sequence. In particular, we compute all $d_2$, $d_3$, $d_4$ and $d_5$ differentials up to the 95\textsuperscript{th} stem that are listed as unknown in \cite{more-stable-stems}. Since this \namecref{section:new-diff} builds upon the results of \cite{more-stable-stems}, we assume the reader is already familiar with that work.
\section{Differentials and hidden extensions}\label{section:diff-hidden}
The arguments of this \namecref{section:diff-hidden} are quite generally applicable, and we shall present them in more generality than is needed for our calculations. In this \namecref{section:diff-hidden}, we work in $\Syn_E$ for some fixed Adams type spectrum $E$, and $X$ and $Y$ will be arbitrary synthetic spectra, not necessarily of the form $\nu (-)$. To streamline the presentation, we shall adopt the following conventions:
\begin{itemize}
\item ``The Adams spectral sequence of $X$'' will mean ``the $\tau$-Bockstein spectral sequence of $X$ with the change of sign'' (recall that for any spectrum $X$, the $\tau$-Bockstein spectral sequence of $\nu X$ agrees with the Adams spectral sequence of $X$ up to a sign \cite[Theorem A.1]{manifold-synthetic}).
\item We will write $X/\tau^k$ for $C\tau^k \otimes X$.
\item We will omit all suspensions $\Sigma^{a, b}$ in $\Syn_E$; they can be inferred from context if necessary.
\end{itemize}
\begin{notation}
Define maps $r_m, r_{n, m}, \delta_m, \delta_{n, m}$ by the cofiber sequences
\[
\begin{tikzcd}[row sep=tiny]
X \ar[r, "\tau^n"] & X \ar[r, "r_m"] & X / \tau^m \ar[r, "\delta_m"] & X \\
X/\tau^{n - m} \ar[r, "\tau^m"] & X/\tau^n \ar[r, "r_{n,m}"] & X / \tau^m \ar[r, "\delta_{n, m}"] & X/\tau^{n - m}.
\end{tikzcd}
\]
Note that $\tau^m$ will always denote a map $X/\tau^{n - m} \to X/\tau^n$, as opposed to the endomorphism of $X/\tau^n$ of the same name. In particular, $\tau^m$ is non-zero on $X/\tau$.
\end{notation}
\begin{notation}
If $x \in \pi_{*, *}X/\tau^m$ and $y \in \pi_{*, *}X/\tau^n$ are such that $r_{m, k} x = r_{n, k} y$, we say $x \equiv y \mod \tau^k$. Note in particular that $x$ and $y$ may live in different groups.
\end{notation}
One immediately sees the following:
\begin{lemma}\pushQED{\qed}
For any $n, k > m$, we have a commutative diagram
\[
\begin{tikzcd}
X \ar[r, "\tau^n"] \ar[d, "\tau^{n - m}"] & X \ar[r, "r_n"] \ar[d, equals] & X/\tau^n \ar[r, "\delta_n"] \ar[d, "r_{n, m}"] & X \ar[d, "\tau^{n - m}"] \\
X \ar[r, "\tau^m"] \ar[d, equals] & X \ar[r, "r_m"] \ar[d, "\tau^{k - m}"] & X/\tau^m \ar[r, "\delta_m"] \ar[d, "\tau^{k - m}"] & X \ar[d, equals] \\
X \ar[r, "\tau^k"] \ar[d, "r_{\ell - k}"] & X \ar[r, "r_k"] \ar[d, "r_\ell"] & X/\tau^k \ar[r, "\delta_k"] \ar[d, equals] & X \ar[d, "r_{\ell - k}"]\\
X/ \tau^{\ell - k} \ar[r, "\tau^k"] & X / \tau^\ell \ar[r, "r_{\ell, k}"] & X/\tau^k \ar[r, "\delta_{\ell, k}"] & X/ \tau^{\ell - k}.
\end{tikzcd}\qedhere
\]
\end{lemma}
The following are standard properties of Bockstein spectral sequences, whose proofs are left to the reader.
\begin{lemma}\pushQED{\qed}
Let $x \in \pi_{*, *} X/\tau$.
\begin{enumerate}
\item For any representative of $d_{k + 1}(x)$ on the $E_2$ page, there is a lift of $x$ to $[x] \in \pi_{*, *} X/\tau^k$ such that $\delta_k [x] \equiv -d_{k + 1}(x) \mod \tau$.
\item If $\tau^k x = 0$, then $x$ is the target of a $d_{k + 1}$ differential.
\item If $\delta x = \tau^{k - 2} y$ for some $y \in \pi_{*, *} X$, then $x$ survives to the $E_k$ page, and $y \equiv -d_k(x) \mod \tau$.\qedhere
\end{enumerate}
\end{lemma}
We now define hidden extensions. Classically, they are defined for classes on the $E_\infty$ page in terms of multiplication in homotopy groups. For our purposes, we need to generalize this to potentially non-surviving classes. Such a notion was first introduced by Cooley in his thesis \cite[pp.\ 18--21]{cooley-thesis}, together with a version of \Cref{thm:hidden-ext} \cite[Theorem 1.24]{cooley-thesis}. While we believe our definition agrees with Cooley's, we shall make no attempts to compare them.
Fix a map of synthetic spectra $\alpha\colon X \to Y$.
\begin{defi}\label{defi:hidden-ext}
Let $x\in \pi_{*, *} X/\tau$ and $y \in \pi_{*, *} Y / \tau$. Suppose $x$ survives to the $E_r$ page and $s < r - 1$. We say there is a hidden $\alpha$-extension by $s$ from $x$ to $y$ on the $E_r$ page if there is a lift $\{x\}$ of $x$ to $\pi_{*, *} X / \tau^{r - 1}$ and $\{y\}$ of $y$ to $\pi_{*, *} Y / \tau^{r - 1 - s}$ such that
\[
\alpha \{x\} = \tau^s \{y\}.
\]
Alternatively, this says $\alpha \{x\}$ is $\tau^s$-divisible, and a $\tau^s$ division of $\alpha\{x\}$ is equal to $y$ mod $\tau$.
We say this hidden extension is maximal if $\alpha\{x\}$ is not $\tau^{s + 1}$ divisible. This is automatic if $r = s + 2$. In case $r = \infty$ and $\alpha\{x\}$ is $\tau^s$ divisible for all $s$ (e.g.\ it is zero), we say there is a maximal hidden extension by $\infty$ to $0$.
\end{defi}
In particular, a hidden extension by $0$ is a regular, non-hidden extension.
\begin{remark}
The jump $s$ is redundant information given $x$, $y$ and $\alpha$, and we omit it when no confusion can arise.
\end{remark}
\begin{remark}
After fixing an $\{x\}$, the value of $y$ is well-defined up to images of $d_2, \ldots, d_{s + 1}$, and we shall consider $y$ as an element in this quotient. It is, however, inaccurate to say it is well-defined on the $E_{s + 2}$ page; it may not survive that long.
Of course, different lifts $\{x\}$ give different values of $y$, and in general they can belong to different filtrations. However, this is not an issue when $s = 1$; there is a hidden extension by $1$ iff $\alpha x = 0$ on the $E_2$ page, and the indeterminacy in $y$ is exactly $\alpha$-multiples of classes in the bidegree right above $x$ on the $E_2$ page.
\end{remark}
\begin{remark}
Let $s + 1 < q < r$. If there is a hidden $\alpha$-extension by $s$ from $x$ to $y$ on the $E_r$ page, then there is a hidden $\alpha$-extension from $x$ to $y$ on the $E_q$ page. The converse holds if there is no indeterminacy (and $x$ survives long enough).
\end{remark}
\begin{thm}[Generalized Leibniz rule]\label{thm:extend-diff}
Let $x \in \pi_{*, *} X/\tau$ survive to the $E_r$ page. Fix a representative of $d_r(x)$ on the $E_2$ page. Then there is a differential from a maximal $\alpha$-extension of $x$ on the $E_r$ page to a maximal $\alpha$-extension of $d_r(x)$ on the $E_\infty$ page.
\end{thm}
\begin{proof}
Pick a lift $\{x\}$ of $x$ to $\pi_{*, *}X/\tau^{r - 1}$ such that $\delta_{r - 1}\{x\}$ is a lift of $-d_r(x)$. Then we have
\[
\alpha \{x\} = \tau^{s} y,\quad \alpha \delta_{r - 1}\{x\} = \tau^{s'} z
\]
for some $s, s' \geq 0$ and some $y$ and $z$ whose reductions mod $\tau$ are maximal hidden $\alpha$-extensions of $x$ and $-d_r(x)$ respectively. Then
\[
\delta_{r - s - 1} y = \delta_{r - 1} \tau^s y = \delta_{r - 1} \alpha \{x\} = \alpha \delta_{r - 1} \{x\} = \tau^{s'} z.
\]
So
\[
\delta (r_{r - s - 1, 1} y) = \tau^{r - s - 2} \delta_{r - s - 1} y = \tau^{r - s + s' - 2} z.\qedhere
\]
\end{proof}
\begin{remark}
There are also differentials between non-maximal extensions, but they all vanish since they are pre-empted by shorter differentials.
\end{remark}
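As a sanity check on the indices (our own bookkeeping, not part of the original statement): if $x$ lies in filtration $f$ and $\alpha$ raises filtration by $b$, then the maximal extensions of $x$ and $d_r(x)$ lie in filtrations $f + b + s$ and $f + r + b + s'$ respectively, so the differential produced by the theorem has length
\[
(f + r + b + s') - (f + b + s) = r - s + s',
\]
matching the exponent $\tau^{r - s + s' - 2}$ in the proof.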
We end the \namecref{section:diff-hidden} with a result that identifies hidden extensions with differentials in the cofiber, which can be useful if we want to compute longer hidden extensions by hand. Define $C\alpha, \iota_\alpha, \delta_\alpha$ by the cofiber sequence
\[
\begin{tikzcd}
X \ar[r, "\alpha"] & Y \ar[r, "\iota_{\alpha}"] & C\alpha \ar[r, "\delta_{\alpha}"] & X.
\end{tikzcd}
\]
\begin{thm}\label{thm:hidden-ext}
Let $x \in \pi_{*,*} X/\tau$ be such that $d_{k + 1}(x) = 0$. Suppose $\bar{x} \in \pi_{*, *} C\alpha / \tau$ is such that $\delta_\alpha \bar{x} = x$, and suppose $y \in \pi_{*, *} Y/\tau$ is such that $\iota_\alpha y = d_{k + 1} \bar{x}$ on the $E_k$ page. Then there is a hidden $\alpha$ extension from $x$ to $y$ on the $E_{k + 2}$ page.
\end{thm}
\begin{proof}
Consider the cofiber sequences
\[
\begin{tikzcd}[row sep = tiny]
X \ar[r, "\alpha"] & Y \ar[r, "\iota_{\alpha}"] & C\alpha \ar[r, "\delta_{\alpha}"] & X \\
\S/\tau \ar[r, "\tau^k"] & \S/\tau^{k + 1} \ar[r, "r_{k + 1, k}"] & \S/\tau^k \ar[r, "\delta_{k + 1, k}"] & \S/\tau
\end{tikzcd}
\]
Taking the tensor product of these cofiber sequences gives
\[
\begin{tikzcd}
Y/\tau^{k + 1} \ar[d, "r_{k + 1, k}"] \ar[r, "\iota_\alpha"] & C\alpha/\tau^{k + 1} \ar[d, "r_{k + 1, k}"] \ar[r, "\delta_\alpha"] & X/\tau^{k + 1} \ar[d, "r_{k + 1, k}"] \\
Y/\tau^k \ar[r, "\iota_\alpha"] \ar[d, "\delta_{k + 1, k}"] & C\alpha/\tau^k \ar[d, "\delta_{k + 1, k}"] \ar[r, "\delta_\alpha"] & X/\tau^k \ar[d, "\delta_{k + 1, k}"] \\
Y/\tau \ar[r, "\iota_\alpha"] & C\alpha/\tau \ar[r, "\delta_\alpha"] & X/\tau.
\end{tikzcd}
\]
Since $d_{k + 1} \bar{x} = \iota_\alpha y$ on the $E_k$ page, we can pick a lift $\{\bar{x}\} \in \pi_{*, *} C\alpha/\tau^k$ of $\bar{x}$ such that
\[
\delta_{k + 1, k} \{\bar{x}\} = -\iota_\alpha y.
\]
By \cite[Section 6]{may-additivity} (see also \cite[Lemma 9.3.2]{inverting-hopf}), there is an $\{x\} \in \pi_{*, *} X/\tau^{k + 1}$ such that
\[
r_{k + 1, k} \{x\} = \delta_\alpha \{\bar{x}\},\quad \alpha \{x\} = \tau^k y.
\]
The first condition tells us
\[
\{x\} \equiv \delta_\alpha \{\bar{x}\} \equiv \delta_\alpha \bar{x} = x \mod \tau.
\]
So $\{x\}$ is a lift of $x$ to $X/\tau^{k + 1}$, and the result follows.
\end{proof}
\section{Computation of old differentials}\label{section:old-diff}
\input{ass.tex}
To illustrate how one can make use of the computer-generated data, we compute all differentials in the Adams spectral sequence up to the 35\textsuperscript{th} stem at the prime $2$ and resolve most hidden extensions. The resulting Adams charts are displayed in \Cref{fig:e2,fig:diff,fig:einfty} (the dashed hidden extensions in the $E_\infty$ page are those we do not prove). Many of the arguments can be simplified if we are willing to use other tools, but we restrict ourselves to ``straightforward'' manipulations using the computer data.
\afterpage{\clearpage}
\subsection*{Conventions}
We assume the reader is familiar with the names of classes in the homotopy groups of spheres and the classical Adams $E_2$ page, as well as the translation between the two. For convenience, we label the relevant $E_2$ page names in the Adams charts as well.
We adopt the following naming conventions:
\begin{enumerate}
\item If $\alpha \in \pi_* \S$ is an element in the classical homotopy groups of the sphere, we use the same name to denote the corresponding element in the homotopy groups of the \emph{synthetic} sphere. By this we mean an element in $\pi_{*, *} \S$ whose $\tau$ inversion gives the original class $\alpha$, and has maximum Adams filtration amongst such elements. While this is potentially ambiguous, the ambiguity is irrelevant in all cases of interest in this \namecref{section:old-diff}.
To avoid any confusion, we shall never refer to the classical homotopy groups of the sphere in this \namecref{section:old-diff}. All such names always refer to the synthetic version.
\item If $a \in \Ext_\A(\F_2, \F_2)$ is a permanent cycle, we use $\{a\}$ to denote any lift of $a$ to $\pi_{*, *} \S$. Again the ambiguities end up being irrelevant.
\item If $a \in \Ext_\A(\F_2, \F_2)$ survives the $d_2$ differential, we use $[a]$ to denote the specific lift to $\pi_{*, *} C\tau^2$ constructed in \Cref{thm:lift} using the minimal resolution generated by our program. Of course, the precise choice of lift is irrelevant; what matters is that $[a]$ refers to the same lift throughout the whole dataset.
Note that in general, $[a + b] \not= [a] + [b]$. Instead, there is a correction term as specified in \Cref{thm:lift}.
\end{enumerate}
\subsection{Differentials in stems \texorpdfstring{$0$}{0} to \texorpdfstring{$28$}{28}}
We first look at the differentials in the first $28$ stems, which are relatively straightforward.
\begin{lemma}
We have
\[
d_3(h_0 h_4) = h_0 d_0.
\]
\end{lemma}
\begin{proof}
In the computer data, we see that
\[
[d_0][h_0 h_4] = \tau k.
\]
Since $[d_0]$ detects $\kappa$, this means there is a hidden $\kappa$-extension from $h_0 h_4$ to $k$. So this follows from dividing the differential $d_2(k) = h_0 d_0^2$.
\end{proof}
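Explicitly (our own bookkeeping, phrased via the generalized Leibniz rule with $\alpha = \kappa$, $r = 3$, $s = 1$ and $s' = 0$): the hidden $\kappa$-extension from $h_0 h_4$ to $k$ and the differential $d_2(k) = h_0 d_0^2$ combine to give
\[
\kappa \cdot d_3(h_0 h_4) = d_2(k) = h_0 d_0^2 = \kappa \cdot h_0 d_0,
\]
and dividing by $\kappa$ yields $d_3(h_0 h_4) = h_0 d_0$.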
\begin{cor}
We have $\delta h_4 = \tilde{2} \sigma^2$.
\end{cor}
\begin{proof}
We know that $\pi_{14, 3} \S$ is spanned by $\tilde{2} \sigma^2$ and $\tau \kappa$ as an $\F_2$-module. Since $\delta h_4 = \tilde{2} \sigma^2 \mod \tau$, we know that $\delta h_4 = \tilde{2} \sigma^2 + a \tau \kappa$ for some $a \in \F_2$. By computer calculation, $\tilde{2}^2 \sigma^2 = \tau \tilde{2} \kappa$. So we get
\[
\delta h_0 h_4 = \tilde{2}^2 \sigma^2 + a \tau \tilde{2} \kappa = (a + 1) \tau \tilde{2} \kappa.
\]
Since $d_3 (h_0 h_4) = h_0 d_0$, this expression must also equal $\tau \tilde{2} \kappa$. So we must have $a = 0$, as desired.
\end{proof}
\begin{remark}
One can prove these two results in the opposite order. First, using the fact that $\tau \tilde{2} \sigma^2 = 2 \sigma^2 = 0$, we learn that we must have $\delta h_4 = \tilde{2} \sigma^2$. Then $\delta h_0 h_4 = \tilde{2}^2 \sigma^2 = \tau \tilde{2}\kappa$. So $d_3(h_0 h_4) = h_0 d_0$.
\end{remark}
\begin{cor}
$h_1 h_4$, $h_2 h_4$ and $c_0 h_4$ are permanent.
\end{cor}
\begin{proof}
We have $\delta (h_1h_4) = \eta \delta h_4 = \eta \tilde{2} \sigma^2 = 0$ since $\tilde{2} \eta = 0$. The others follow similarly with $\nu \sigma = 0$ and $\tilde{2} \epsilon = 0$.
\end{proof}
\begin{remark}
This is the same proof as the one via Moss's convergence theorem, constructing $[h_1 h_4]$ as $\langle \sigma^2, 2, \eta \rangle$. When applying Moss's convergence theorem, one has to verify that the product vanishes in homotopy and work out the indeterminacy. In the synthetic proof, this translates to keeping track of the higher $\tau$-divisible terms that can show up in the products and in $\delta$. In particular, knowing the full value of $\delta$, instead of just the Adams differential, is often extremely useful for later computations.
\end{remark}
\begin{lemma}
$g$ is permanent.
\end{lemma}
\begin{proof}
If $g$ supported a differential, then so would $Pg = d_0^2$, since $P$ acts injectively on all potential targets with no indeterminacy. But $d_0^2$ is permanent.
\end{proof}
\subsection{Hidden extensions in stems \texorpdfstring{$0$}{0} to \texorpdfstring{$28$}{28}}
\begin{lemma}
We have
\[
\nu^3 + \eta^2 \sigma = \tau \eta \epsilon.
\]
\end{lemma}
\begin{proof}
The computer data gives
\[
[h_2]^3 = [h_1^2 h_3],\quad [h_1]^2 [h_3] = [h_1^2 h_3] + \tau h_1 c_0.\qedhere
\]
\end{proof}
\begin{lemma}
$\delta e_0 = \eta^2 \kappa$. Thus, $\tau \eta^2 \kappa = 0$.
\end{lemma}
\begin{proof}
Since $d_2(e_0) = h_1^2 d_0$, the only other possibility is $\delta e_0 = \eta^2 \kappa + \tau \{Pc_0\}$. This would imply that
\[
\tau \eta^2 \kappa = \tau^2 \{Pc_0\}.
\]
Multiplying by $\eta$ gives
\[
\tau^2 \eta \{Pc_0\} = \tau \eta^3 \kappa = \tau \tilde{2}^2 \nu \kappa = 0,
\]
which is a contradiction.
\end{proof}
\begin{remark}
One can similarly show that $\delta f_0 = \tilde{2} \nu \kappa$.
\end{remark}
\begin{lemma}
We have
\[
\nu^3 \kappa = \tau^2 \eta \{Pd_0\}.
\]
\end{lemma}
Note that this hidden extension jumps by $2$ filtrations, and we are able to compute this by iterating hidden extensions by $1$.
\begin{proof}
Since $\nu^3 = \eta^2 \sigma + \tau \eta\epsilon$, multiplying by $\kappa$ gives
\[
\nu^3 \kappa = \eta^2 \sigma \kappa + \tau \eta \epsilon \kappa.
\]
By computer calculation, we know that
\[
\epsilon \kappa = \tau \{Pd_0\}.
\]
So it remains to show that the first term vanishes. Since the 23\textsuperscript{rd} stem has no $\tau$-torsion, it suffices to show that $\tau \eta^2 \sigma \kappa = 0$. But we have already seen that $\tau \eta^2 \kappa = \tau \delta e_0 = 0$. So we are done.
\end{proof}
\begin{cor}
We have
\[
\tilde{2}^2 \nu \kappabar = \tau^2 \eta \{Pd_0\},\quad \eta \kappabar = \tau^2 \{Pd_0\}.
\]
\end{cor}
\begin{proof}
The first follows from the identity
\[
\tilde{2}^2 \kappabar = \nu^2 \kappa.
\]
The second follows from $\eta^3 = \tilde{2}^2 \nu$.
\end{proof}
\begin{lemma}
We have
\[
\sigma \{Ph_1\} = \eta^2 \kappa + \tau \{Pc_0\}.
\]
Thus, in the (classical) stable homotopy groups of spheres, there is a hidden $\sigma$ extension from $Ph_1$ to $Pc_0$.
\end{lemma}
\begin{proof}
This follows from the computer data, since there are no higher filtration terms.
\end{proof}
\subsection{Stems \texorpdfstring{$29$}{29} to \texorpdfstring{$35$}{35}}
\begin{lemma}\label{lemma:h03-h5}
$d_3(h_0^3 h_5) = h_0 \Delta h_2^2$ and $d_4(h_0^8 h_5) = h_0 P^2 d_0$.
\end{lemma}
\begin{proof}
To compute $d_3(h_0^3 h_5)$, the obvious approach of starting with $d_2(h_0^2 h_5) = h_0^3 h_4^2$ and then computing a hidden $\tilde{2}$ extension does not work, since $h_0 \Delta h_2^2$ is in the indeterminacy. Instead, we start with $d_2(h_0 h_5) = h_0^2 h_4^2$ and compute a hidden $\tilde{2}^2$ extension. Indeed, computer calculation gives
\[
\begin{aligned}
\relax[h_0] [h_0^2 h_4^2] &= [h_0^3 h_4^2]\\
\relax[h_0] [h_0^3 h_4^2] &= \tau [h_0 \Delta h_2^2].
\end{aligned}
\]
So there is a hidden $\tilde{2}^2$ extension from $h_0^2 h_4^2$ to $h_0 \Delta h_2^2$ with no indeterminacy, and the $d_3$ follows. The $d_4$ is similar.
\end{proof}
\begin{lemma}
$d_3(d_0 e_0) = h_0^5 \Delta h_2^2$ and $d_4(d_0e_0 + h_0^7 h_5) = P^2 d_0$.
\end{lemma}
\begin{proof}
We have
\[
\delta (d_0 e_0) = \kappa \delta e_0 = \eta^2 \kappa^2,
\]
which computer calculation tells us is $\tau h_0^5 \Delta h_2^2$ mod $\tau^2$. The next differential follows from $h_0$-division in a purely classical manner, since $h_0 d_0 e_0 = 0$ on the $E_3$ page.
\end{proof}
\begin{remark}
Here it is important for us to precisely identify the value of $\delta e_0$. A simple hidden extension argument would not work since $h_0^5 \Delta h_2^2 = d_0 Pc_0$ is in the indeterminacy.
\end{remark}
\begin{cor}
$d_3(\Delta h_2^2) = h_1 d_0^2$.
\end{cor}
\begin{proof}
The source and target have hidden $\eta$ extensions to $d_0e_0$ and $h_0^5 \Delta h_2^2$ respectively.
\end{proof}
\begin{cor}
We have
\[
\delta h_5 = \tilde{2} \{h_4^2\}.
\]
\end{cor}
\begin{proof}
From our calculations, $\pi_{30, 3} \S = \F_2$ and is generated by $\tilde{2} \{h_4^2\}$.
\end{proof}
\begin{cor}
$h_1 h_5$ and $p$ are permanent and $d_3(h_2 h_5) = h_0 p$.
\end{cor}
\begin{proof}
The first follows from $\tilde{2} \eta = 0$. The rest follow from the hidden $\nu$ extension from $h_4^2$ to $p$.
\end{proof}
To prove that the remaining elements are permanent, we have to look beyond what our charts in \Cref{fig:e2,fig:diff,fig:einfty} cover. The reader can instead refer to Isaksen's charts at \cite{charts}.
\begin{lemma}
$d_1$ and $\Delta h_1 h_3$ are permanent.
\end{lemma}
\begin{proof}
These elements can only hit $h_0^{15} h_5$, and it is easy to check that $Ph_0^{15} h_5$ cannot be hit by an element of filtration at least $8$.
\end{proof}
\begin{lemma}
$h_0 h_2 h_5$ is permanent.
\end{lemma}
\begin{proof}
Since $[h_0] [h_0 p] = 0$, we know $h_0 h_2 h_5$ does not hit $h_1 \Delta h_1 h_3$. So the only potential targets are $h_1 P^3 c_0$ and $P^4 h_1$. We again rule this out by Adams periodicity. We have
\[
P h_0 h_2 h_5 = h_0 P h_2 h_5.
\]
Since $P [h_0 p] = 0$ with no indeterminacy in $\Mod_{C\tau^2}$, we know that $d_4(P h_2 h_5) = 0$. So the shortest differential $P h_2 h_5$ can support hits $h_1 P^4 c_0$ or $P^5 h_1$, both of which preclude a differential on $h_0 P h_2 h_5$.
\end{proof}
\section{Computation of new differentials}\label{section:new-diff}
We now turn to the computation of new differentials. These differentials are listed in \Cref{table:new-diff}, with the proofs indicated in the last column. Many of these new differentials are easy consequences of the generalized Leibniz rule using hidden $\tilde{2}$, $\eta$, $\nu$ and $\sigma$ extensions, which are listed in \Cref{table:hidden-two,table:hidden-eta,table:hidden-nu,table:hidden-sigma}. We shall not provide further explanation for these differentials. The remainder of the differentials require extra arguments, and are explained in \Cref{section:hard-diff}.
Throughout the \namecref{section:new-diff}, we shall use the names of \cite{more-stable-stems}. The identifications of their classes in our basis are listed in \Cref{table:class-ident} with brief justifications. We encourage the reader to refer to the charts at \cite{charts} when reading this \namecref{section:new-diff}.
\begin{remark}
Some of these new differentials have been independently computed in unpublished work of Burklund--Isaksen--Xu. Specifically, they have computed the differentials on $\Delta^2 g_2$, $h_1 \Delta^2 g_2$, $h_0^3 \Delta h_2^2 h_6$, $x_{95, 7}$, $\Delta^2 Mh_1$ and $\Delta^2 M h_1^2$. Their arguments are similar to ours, except they had to compute hidden extensions by hand.
\end{remark}
\subsection{Computation of new differentials}\label{section:hard-diff}
\begin{lemma}\label{lemma:h07-h6}
$d_3(h_0^7 h_6) = \Delta^2 h_0 h_3^2$.
\end{lemma}
\begin{proof}
As in \Cref{lemma:h03-h5}, we start with $d_2(h_0^5h_6) = h_0^6 h_5^2$, and observe that there is a hidden $\tilde{2}^2$ extension from $h_0^6 h_5^2$ to $\Delta^2 h_0 h_3^2$. Indeed, we have
\begin{align*}
[h_0] [h_0^6 h_5^2] &= [h_0^7 h_5^2] + \tau (h_1 \Delta x + \Delta^2 h_3^2)\\
[h_0] [h_0^7 h_5^2] &= 0.\qedhere
\end{align*}
\end{proof}
\begin{lemma}\label{lemma:h0-g-b4}
$d_3(h_0 g B_4) = h_0 \Delta^2 d_0 e_0 + \Delta h_1 e_0^2 g$.
\end{lemma}
\begin{proof}
First, $h_0 g B_4$ is $h_2$-divisible with
\[
h_0 g B_4 = h_2 e_0 B_4,\quad d_2 (e_0 B_4) = h_0 M d_0 e_0.
\]
There is a hidden $\nu$-extension from $h_0 M d_0 e_0$ to $\Delta h_1 e_0^2 g$ with indeterminacy $h_0 \Delta^2 d_0 e_0$. So we know that
\[
d_3(h_0 g B_4) = \Delta h_1 e_0^2g + ?h_0 \Delta^2 d_0 e_0.
\]
To determine the indeterminacy, we consider further $\nu$-multiplication. There is a hidden $\nu$-extension from $h_0 g B_4$ to $h_0 \Delta^2 m$. Thus, we find that
\[
h_2 d_3(h_0 g B_4) = d_2(h_0 \Delta^2 m) = h_2 (h_0 \Delta^2 d_0 e_0).
\]
Since $h_2 \Delta h_1 e_0^2 g = 0$, the result follows.
\end{proof}
\begin{remark}\label{remark:err}
Our calculations have uncovered two incorrect $d_3$'s in \cite{more-stable-stems}. Write $\tau_\mot$ for the $\tau$ in \cite{more-stable-stems}, which is $\tau^2$ in $\Syn_{BP}$.
\cite[Lemma 5.21]{more-stable-stems} claims that $d_3(h_0 g B_4) = \Delta^2 h_0 d_0 e_0$ (note that $d_0 B_5 = h_0 g B_4$ in the classical Adams spectral sequence). Their argument neglects the possibility that in the motivic Adams spectral sequence, $d_3(\tau_\mot^2 d_0 B_5) = \Delta^2 h_0 d_0 e_0 + \tau_\mot^3 \Delta h_1 e_0^2 g$, which would make $\Delta^2 h_0 d_0 e_0$ $\tau_\mot$-divisible in the $E_\infty$-page. Indeed, our argument shows this is exactly what happens.
\cite[Lemma 5.26]{more-stable-stems} claims that $d_3(M h_0 d_0 k) = P \Delta^2 h_0 d_0 e_0$. This is a clerical error; in $\mathrm{mmf}$, there is a $d_2$ hitting $\Delta h_1 d_0^2 e_0^2 + \tau_\mot^3 P \Delta h_1 dg^2$, so in the $E_3$ page we have $\Delta h_1 d_0^2 e_0^2 = \tau_\mot^3 P \Delta h_1 dg^2$. Since $\tau_\mot^2 h_0 M d_0 k$ has trivial image in $\mathrm{mmf}$, its $d_3$ must be $\Delta h_1 d_0^2 e_0^2 + \tau_\mot^3 P \Delta h_1 dg^2$.
\end{remark}
\begin{lemma}\label{lemma:delta3-h1-h3}
$d_3(\Delta^3 h_1 h_3) = \Delta h_1 e_0^2 g$.
\end{lemma}
\begin{proof}
We first show that $d_3(\Delta^3 h_1 h_3)$ is non-zero. This is the argument of \cite[Lemma 5.20]{more-stable-stems}. Since $Ph_1 \cdot \Delta^3 h_1 h_3$ supports a $d_4$, we know $\Delta^3 h_1 h_3$ supports a differential of length at most $4$. The target bidegree of a potential $d_4$ is zero, and computer calculation shows the element does not support a $d_2$, so it must support a $d_3$.
Next, we observe that there is a hidden $\nu$-extension by $1$ from $\Delta^3 h_1 h_3$ to $0$. So $d_3(\Delta^3 h_1 h_3)$ must be killed by $h_2$. This leaves $\Delta h_1 e_0^2 g$ as the only possibility.
\end{proof}
\begin{lemma}\label{lemma:h1-d2-g2}
$d_6(h_1 \Delta^2 g_2) = 0$.
\end{lemma}
\begin{proof}
We have to show that $\delta (h_1 \Delta^2 g_2) = 0$ mod $\tau^5$. To do so, we use the $E_2$ page relation $h_1 \Delta^2 g_2 = h_3 \Delta^2 e_1$.
Since $d_3(\Delta^2 e_1) = h_2^2 \Delta^2 n$, we can write
\[
\delta (\Delta^2 e_1) = \tau \nu^2 \{\Delta^2 n\} + {?} \tau^2 \eta \{M \Delta h_1 d_0\} + {?} \tau^3 \{\Delta h_1 g^3\}.
\]
We shall show that all terms are trivial mod $\tau^5$ after multiplication by $\sigma$.
\begin{enumerate}
\item Since $\nu \sigma = 0$, the first term vanishes completely.
\item By computer calculation, we know
\[
[h_3] [M \Delta h_1 d_0] = \tau h_0^6 x_{91, 11}.
\]
Note that $h_0^6 x_{91, 11}$ is itself an $h_3$-multiple, hence is in the indeterminacy. The only term in the bidegree above $h_0^6 x_{91, 11}$ is $h_0^7 x_{91, 11}$. So we get
\[
\sigma \{M \Delta h_1 d_0\} = {?} \tau h_0^6 x_{91, 11} + {?} \tau^2 h_0^7 x_{91, 11} \mod {\tau^3},
\]
where the coefficients are potentially different from the one above. However, the left-hand side is permanent, while the terms on the right support differentials. So the coefficients must in fact vanish.
\item Since hidden $\sigma$-extensions by $1$ vanish identically at bidegree $(85, 17)$, we know that
\[
\sigma \{\Delta h_1 g^3\} = 0 \mod \tau^2.\qedhere
\]
\end{enumerate}
\end{proof}
\begin{lemma}\label{lemma:x-94-8}
$d_3(x_{94, 8}) = h_1 x_{92, 10}$.
\end{lemma}
\begin{proof}
This follows from \cite[Remark 5.2]{more-stable-stems} since $d_2(x_{94, 8}) = 0$.
\end{proof}
\subsection{Tables}\label{section:tables}
This \namecref{section:tables} contains the following tables:
\begin{itemize}
\item \Cref{table:new-diff} contains all the newly computed differentials and their proofs. The grey rows consist of old differentials we include for reference.
\item \Cref{table:hidden-two,table:hidden-eta,table:hidden-nu,table:hidden-sigma} contain the hidden extensions that we use to compute the differentials. The first four columns are lifted straight out of computer-generated data, while the last two columns identify the names of the classes.
\item \Cref{table:class-ident} gives the identification between the names in \cite{more-stable-stems} and our basis, together with a brief justification for each. We omit cases where the group is one-dimensional.
\end{itemize}
\begin{longtabu}{cccccc}
\caption{Newly computed differentials}\label{table:new-diff} \\
\toprule
$n$ & $s$ & $r$ & source & target & proof \\
\midrule
\endfirsthead
\caption{Newly computed differentials (continued)} \\
\toprule
$n$ & $s$ & $r$ & source & target & proof \\
\midrule
\endhead
\bottomrule
\endfoot
63 & 8 & 3 & $h_0^7 h_6$ & $\Delta^2 h_0 h_3^2$ & \Cref{lemma:h07-h6} \\
69 & 8 & 2 & $D_3'$ & $0$ & - \\
69 & 8 & 3 & $D_3'$ & $h_2 M g$ & $\tilde{2}$ division \\
\rowfont{\color{gray!70!black}} 69 & 10 & 2 & $P(A + A')$ & $h_0 h_2 M g$ & - \\
80 & 14 & 3 & $h_0 g B_4$ & $h_0 \Delta^2 d_0 e_0 + \Delta h_1 e_0^2 g$ & \Cref{lemma:h0-g-b4} \\
80 & 14 & 3 & $\Delta^3 h_1 h_3$ & $\Delta h_1 e_0^2 g$ & \Cref{lemma:delta3-h1-h3} \\
\rowfont{\color{gray!70!black}} 85 & 17 & 2 & $M d_0 j$ & $h_0 MPd_0 e_0$ & - \\
\rowfont{\color{gray!70!black}} 87 & 17 & 2 & $\Delta^3 h_1 d_0$ & $e_0^3 m$ & - \\
88 & 18 & 3 & $\Delta^3 h_1^2 d_0$ & $\Delta h_1 d_0^2 e_0^2$ & $\eta$ multiplication\\
88 & 18 & 3 & $h_2 M d_0 j$ & $\Delta h_1 d_0^2 e_0^2 + h_0 P \Delta^2 d_0 e_0$ & $\nu$ multiplication\\
92 & 12 & 5 & $\Delta^2 g_2$ & $0$ & $\eta$ division \\
93 & 9 & 5 & $h_0^2 \Delta h_2^2 h_6$ & $h_0^2 \Delta^2 g_2$ & \cite{more-stable-stems} \\
93 & 10 & 6 & $h_0^3 \Delta h_2^2 h_6$ & $M\Delta h_2^2 e_0$ & $\tilde{2}$ multiplication \\
93 & 13 & 6 & $h_1 \Delta^2 g_2$ & $0$ & \Cref{lemma:h1-d2-g2} \\
94 & 8 & 2 & $x_{94, 8}$ & $0$ & - \\
94 & 8 & 3 & $x_{94, 8}$ & $h_1 x_{92, 10}$ & \Cref{lemma:x-94-8} \\
94 & 15 & 3 & $\Delta^2 M h_1$ & $M d_0 e_0^2$ & $\tilde{2}$ division \\
\rowfont{\color{gray!70!black}} 94 & 17 & 3 & $M d_0 m$ & $MP\Delta h_1^2 d_0$ & $\eta$ division \\
95 & 7 & 2 & $x_{95, 7}$ & $h_0 x_{94, 8}$ & - \\
95 & 16 & 4 & $\Delta^2 M h_1^2$ & $M P \Delta h_0^2 e_0$ & $\eta$ multiplication \\
\rowfont{\color{gray!70!black}} 95 & 19 & 2 & $x_{95, 19, 0}$ & $MP\Delta h_1^3 d_0$ & - \\
\end{longtabu}
\begin{longtable}{cccccc}
\caption{Selected hidden $\tilde{2}$-extensions}\label{table:hidden-two} \\
\toprule
$n$ & $s$ & source & target & name of source & name of target \\
\midrule
\endfirsthead
\caption{Selected hidden $\tilde{2}$-extensions (continued)} \\
\toprule
$n$ & $s$ & source & target & name of source & name of target \\
\midrule
\endhead
\bottomrule
\endfoot
69 & 8 & $[1, 0]$ & $[1]$ & $D_3'$ & $P(A + A')$ \\
92 & 14 & $[0, 1, 0]$ & $[1]$ & $h_0^2 \Delta^2 g_2$ & $M\Delta h_2^2 e_0$ \\
93 & 18 & $[1]$ & $[1, 0]$ & $Md_0 e_0^2$ & $MP\Delta h_1^2 d_0$ \\
94 & 15 & $[1]$ & $[1]$ & $\Delta^2 M h_1$ & $Md_0 m$ \\
\end{longtable}
\begin{longtable}{cccccc}
\caption{Selected hidden $\eta$ extensions}\label{table:hidden-eta} \\
\toprule
$n$ & $s$ & source & target & name of source & name of target\\
\midrule
\endfirsthead
\caption{Selected hidden $\eta$ extensions (continued)} \\
\toprule
$n$ & $s$ & source & target & name of source & name of target\\
\midrule
\endhead
\bottomrule
\endfoot
86 & 19 & $[1]$ & $[0, 1, 1]$ & $e_0^3 m$ & $\Delta h_1 d_0^2 e_0^2$ \\
91 & 17 & $[0, 0, 1]$ & $[0, 1]$ & $Md_0 \ell + h_0^6 x_{91, 11}$ & $MP\Delta h_1 d_0$ \\
93 & 18 & $[1]$ & $[0, 1]$ & $M d_0 e_0^2$ & $MP\Delta h_0^2 e_0$\\
94 & 17 & $[1]$ & $[1, 0]$ & $M d_0 m$ & $x_{95, 19, 0}$ \\
\end{longtable}
\begin{longtable}{cccccc}
\caption{Selected hidden $\nu$ extensions}\label{table:hidden-nu} \\
\toprule
$n$ & $s$ & source & target & name of source & name of target \\
\midrule
\endfirsthead
\caption{Selected hidden $\nu$ extensions (continued)} \\
\toprule
$n$ & $s$ & source & target & name of source & name of target \\
\midrule
\endhead
\bottomrule
\endfoot
76 & 15 & $[1]$ & $[1, 1, 1]$ & $h_0 M d_0 e_0$ & $h_0 \Delta^2 d_0 e_0 + \Delta h_1 e_0^2 g$ \\
80 & 14 & $[1, 0]$ & $[0, 1]$ & $h_0 g B_4$ & $h_0 \Delta^2 m$\\
80 & 14 & $[0, 1]$ & $[0, 0]$ & $\Delta^3 h_1 h_3$ & $0$\\
84 & 19 & $[1, 1]$ & $[0, 0, 1]$ & $h_0 MPd_0 e_0$ & $\Delta h_1 d_0^2 e_0^2 + h_0 P \Delta^2 d_0 e_0$ \\
\end{longtable}
\begin{longtable}{cccccc}
\caption{Selected hidden $\sigma$ extensions}\label{table:hidden-sigma} \\
\toprule
$n$ & $s$ & source & target & name of source & name of target \\
\midrule
\endfirsthead
\caption{Selected hidden $\sigma$ extensions (continued)} \\
\toprule
$n$ & $s$ & source & target & name of source & name of target \\
\midrule
\endhead
\bottomrule
\endfoot
84 & 15 & $[1, 1]$ & $[1, 0, 0]$ & $M \Delta h_1 d_0$ & $h_0^6 x_{91, 11}$ \\
85 & 17 & $[1, 0, 0]$ & $[0, 0]$ & $x_{85, 17, 0}$ & $0$ \\
85 & 17 & $[0, 1, 1]$ & $[0, 0]$ & $x_{85, 17, 1} + x_{85, 17, 2}$ & $0$ \\
\end{longtable}
\begin{longtable}{ccccc}
\caption{Identification of classes}\label{table:class-ident} \\
\toprule
$n$ & $s$ & class & name & identification\\
\midrule
\endfirsthead
\caption{Identification of classes (continued)} \\
\toprule
$n$ & $s$ & class & name & identification\\
\midrule
\endhead
\bottomrule
\endfoot
62 & 10 & $[1, 0, 0]$ & $h_1 \Delta x$ & $h_1$-divisible \\
62 & 10 & $[0, 1, 0]$ & $\Delta^2 h_3^2$ & $h_1$-torsion \\
68 & 12 & $[1, 0]$ & $h_0 h_2 Mg$ & $h_2$-divisible \\
69 & 8 & $[1, 0]$ & $D_3'$ & $h_0$-torsion \\
79 & 17 & $[1, 0, 0]$ & $h_0 \Delta^2 d_0 e_0$ & $h_0$-divisible \\
79 & 17 & $[0, 1, 1]$ & $\Delta h_1 e_0^2 g$ & $h_0$-torsion \\
80 & 14 & $[0, 1]$ & $\Delta^3 h_1 h_3$ & $h_0$-torsion \\
80 & 14 & $[1, 0]$ & $h_0 g B_4$ & $h_0$-divisible \\
83 & 16 & $[0, 1]$ & $h_0 \Delta^2 m$ & $h_0$-divisible \\
84 & 15 & $[1, 1]$ & $M\Delta h_1 d_0$ & $h_0$-torsion \\
84 & 19 & $[1, 1]$ & $h_0 MPd_0 e_0$ & $h_2$-divisible \\
85 & 17 & $[?, 1, 0]$ & $Md_0 j$ & $d_0$-divisible \\
87 & 21 & $[0, 1, 0]$ & $h_0 P \Delta^2 d_0 e_0$ & $h_0$-divisible \\
87 & 21 & $[0, 1, 1]$ & $\Delta h_1 d_0^2 e_0^2$ & $h_0$-torsion \\
91 & 17 & $[1, 0, 0]$ & $h_0^6 x_{91, 11}$ & $h_0$-divisible \\
91 & 17 & $[1, 0, 1]$ & $Md_0 \ell$ & $d_0$-divisible \\
92 & 14 & $[0, 1, 0]$ & $h_0^2 \Delta^2 g_2$ & $h_0$-divisible \\
92 & 19 & $[1, 1]$ & $e_0 g^2 m$ & $g$-divisible \\
93 & 20 & $[1, 0]$ & $MP\Delta h_1^2 d_0$ & $h_1$-divisible \\
94 & 9 & $[1, 0, 0]$ & $h_0 x_{94, 8}$ & $h_0$-divisible \\
94 & 20 & $[0, 1]$ & $MP\Delta h_0^2 e_0 + ? e_0^3 g$ & $h_0$-non-torsion \\
95 & 7 & $[?, 1]$ & $x_{95, 7}$ & non-$h_6$-divisible \\
\end{longtable}
\section{Introduction}
One of the most fundamental problems in homotopy theory is the computation of stable homotopy groups of spheres. While simple to define, they have proved to be extremely difficult to compute.
The standard way to compute stable homotopy groups is the Adams spectral sequence \cite{structure-applications}, which seeks to compute the stable homotopy groups of a finite spectrum $X$ from its cohomology
\[
H^* X = \bigoplus_k \pi_{-k} F(X, \HFp).
\]
The crucial observation is that $H^*X$ is not just a group, but supports the action of cohomology operations. To capture this action, we define the algebra of all stable cohomology operations
\[
\A = \bigoplus_k \pi_{-k} \End(\HFp).
\]
This algebra is known as the Steenrod algebra, and can be described explicitly in terms of generators and relations. Then $H^*X$ is naturally a module over $\A$, and the Adams spectral sequence takes the form
\[
E^{s, t}_2 = \Ext_\A^{s, t}(H^*X, \F_p) \Rightarrow \pi_{t - s} X^\wedge_p.
\]
Unfortunately, even when $X$ is the sphere, this spectral sequence is highly non-trivial --- the $E_2$ page does not admit a simple description, and the differentials are hard to compute.
In practice, the first problem does not present a huge obstacle. Using a computer, one can iteratively construct a minimal free $\A$-resolution of $H^*X$ in a fairly efficient manner. This not only lets us read off the $\Ext$ groups; it also lets us compute the composition product of $\Ext$. Since the Adams spectral sequence is multiplicative, this lets us apply the Leibniz rule effectively, which massively simplifies the work involved in computing differentials. Similarly, we can compute Massey products and apply Moss' convergence theorem \cite{moss} to obtain new differentials.
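As a standard classical illustration of the Leibniz rule: Adams' differential $d_2(h_4) = h_0 h_3^2$, combined with the relation $h_0 h_1 = 0$ in $\Ext$, gives
\[
d_2(h_1 h_4) = h_1 \, d_2(h_4) = h_0 h_1 h_3^2 = 0,
\]
so $h_1 h_4$ survives to the $E_3$ page (it in fact detects $\eta_4 \in \pi_{16}$).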
With a (practically) full description of the Adams $E_2$ page available, much subsequent work focused on developing techniques to compute differentials by hand. In \cite{baues-e3}, Baues and Jibladze tackled the Adams spectral sequence from a different approach --- they discovered an algorithm that computes all $d_2$ differentials in the Adams spectral sequence of the sphere, thereby obtaining a computer description of the $E_3$ page.
The key insight of their algorithm is that just as the Adams $E_2$ page is controlled by the Steenrod algebra, the Adams $E_3$ page is controlled by the secondary Steenrod algebra, the algebra of all secondary cohomology operations.
Recall that secondary cohomology operations are defined by relations between cohomology operations. For example, the relation $\beta \beta = 0$ gives rise to a secondary cohomology operation $\Phi$, which is defined on elements $x \in H^* X$ such that $\beta x = 0$. To construct the action, represent $x$ as a map $\tilde{x} \colon X \to \HFp$. We then have a sequence
\[
\begin{tikzcd}
X \ar[r, "\tilde{x}"] & \HFp \ar[r, "\beta"] & \Sigma \HFp \ar[r, "\beta"] & \Sigma^2 \HFp
\end{tikzcd}
\]
where any successive composition is trivial. The secondary cohomology operation is then defined as the Toda bracket
\[
\Phi x = \langle \beta, \beta, \tilde{x} \rangle.
\]
Just as $\beta$ detects $p$, this cohomology operation $\Phi$ detects $p^2$. For example, it acts non-trivially on the cohomology of $\Z/p^2$.
Thus, to construct the secondary Steenrod algebra, we need to know not only all cohomology operations, but also homotopies between them. This suggests the definition
\[
\A^{(2)} = \bigoplus_k \tau_{[0, 1]} \Sigma^k \End(\HFp).
\]
We similarly define the secondary cohomology functor by
\[
\H^{(2)} X = \bigoplus_k \tau_{[0, 1]} \Sigma^k F(X, \HFp).
\]
One then sees that the action of $\A^{(2)}$ on $\H^{(2)}X$ lets us recover all secondary cohomology operations acting on $H^*X$.
While the Steenrod algebra is an actual algebra, $\A^{(2)}$ is \emph{a priori} only a graded $\E_1$-ring. Nevertheless, in \cite{baues-book}, Baues showed that $\A^{(2)}$ is in fact a differential graded algebra over $\Z/p^2$, and explicitly computed this differential graded algebra.
Equipped with this computation, Baues and Jibladze showed that we can compute all Adams $d_2$ differentials of a spectrum $X$ algorithmically given $\H^{(2)} X$. Together with a computation of $\H^{(2)}\S$, they were able to implement an algorithm to compute all $d_2$ differentials for the sphere. (Unfortunately, their implementation only managed to reach $t = 40$, and the theory was widely considered impractical by the computational homotopy theory community. As our implementation shows, it is not.)
The main thesis of this paper is that knowledge of the secondary Steenrod algebra in fact gives us ``full control'' of the Adams $E_3$ page, not just the $E_3$ page as a group. While the $E_3$ page of the Adams spectral sequence inherits a multiplication from the $E_2$ page, this is not the full picture; the ``$E_3$ page product'' ought to know about products up to one filtration higher. For example, it should be able to detect hidden extensions that jump by one filtration, as well as relations of the form
\[
\nu^3 = \eta^2 \sigma + \eta \epsilon.
\]
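Concretely for this relation (a sketch): $\nu$, $\eta$, $\sigma$ and $\epsilon$ are detected by $h_2$, $h_1$, $h_3$ and $c_0$ respectively, and the relation $h_2^3 = h_1^2 h_3$ already holds in $\Ext$; the summand $\eta \epsilon$ lies one filtration higher, and is recorded by the $\tau$-linear term in the $\Mod_{C\tau^2}$ product:
\[
[h_2]^3 = [h_1^2 h_3] + \tau [h_1 c_0].
\]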
This knowledge is extremely useful for computing the Adams spectral sequence. Consider a hypothetical Adams chart as in \Cref{figure:fake-adams}.
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale = 0.5]
\draw [opacity = 0.1] (-1.9, -0.9) grid (6.9, 4.9);
\node [left] at (-1, 2) {$x$};
\node [left] at (0, 4) {$y$};
\node [left] at (0, 0) {$z$};
\node [right] at (1, 1) {$h_1 z$};
\draw (0, 0) -- (1, 1);
\draw [dashed] (-1, 2) -- (0, 4);
\foreach \x/\y in {-1/2, 0/4, 0/0, 1/1} {
\draw [fill] (\x, \y) circle (0.1);
}
\draw [blue, ->] (0, 0) -- (-1, 2);
\node [left] at (4, 3) {$c$};
\node [right] at (5, 4) {$h_1 c$};
\node [left] at (5, 0) {$a$};
\node [left] at (6, 2) {$b$};
\draw (4, 3) -- (5, 4);
\draw [dashed] (5, 0) -- (6, 2);
\foreach \x/\y in {4/3, 5/4, 5/0, 6/2} {
\draw [fill] (\x, \y) circle (0.1);
}
\draw [blue, ->] (6, 2) -- (5, 4);
\end{tikzpicture}
\caption{An example Adams chart with hidden extensions}\label{figure:fake-adams}
\end{figure}
In this diagram, there are hidden $\eta$ extensions from $x$ to $y$ and from $a$ to $b$. Using a generalized version of the Leibniz rule, we can deduce that
\[
d_3(h_1 z) = \text{``}\eta x\text{''} = y.
\]
Similarly, we can divide the differential $d_2(b) = h_1 c$ along $\eta$ to learn that $d_3(a) = c$. Crucially, this lets us relate differentials of different lengths, and in particular differentials on pages beyond the $E_2$ page.
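In the synthetic language developed in \Cref{part:n-ary}, the first deduction reads as follows (a schematic sketch, suppressing bidegrees): the $d_2$ on $z$ says that the boundary $\delta z$ is detected by $x$, say $\delta z = \tilde{x}$ mod $\tau$, and the hidden $\eta$ extension says $\eta \tilde{x} = \tau \tilde{y}$; hence
\[
\delta(h_1 z) = \eta \, \delta z = \eta \tilde{x} = \tau \tilde{y} \mod \tau^2,
\]
and a boundary divisible by exactly $\tau$ is precisely a $d_3$.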
\subsection*{Main results}
The paper has three main results, and is divided into three parts accordingly.
\subsubsection*{\ref{part:n-ary}\quad The \texorpdfstring{$n$}{n}-ary Steenrod algebra}
Our first result is to formalize the relationship between the secondary Steenrod algebra and the Adams $E_3$ page using the language of synthetic spectra \cite{synthetic} \cite[Appendix A]{manifold-synthetic}. Recall that for any Adams type spectrum $E$, the category $\Syn_E$ of $E$-based synthetic spectra is a symmetric monoidal category that interpolates between $\Comod_{E_*E}$ and $\Sp$. Specifically, there is an endomorphism $\tau$ of the unit $\S$ such that
\[
\Mod_{C\tau} \cong \Comod_{E_*E},\quad \Mod_{\tau^{-1}\S} \cong \Sp.
\]
Further, there is a synthetic analogue functor $\nu \colon \Sp \to \Syn_E$ such that
\[
C\tau \otimes \nu X \cong E_* X,\quad \tau^{-1} \nu X \cong X
\]
under the respective isomorphisms, and the $\tau$-Bockstein spectral sequence
\[
\pi_{*, *} C\tau \otimes \nu X = \Ext_{E_*E} (E_*, E_* X) \Rightarrow \pi_{*, *} \tau^{-1} \nu X = \pi_* X
\]
of $\nu X$ is exactly the $E$-based Adams spectral sequence for $X$, at least up to a sign.
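Concretely, this is the spectral sequence associated to the $\tau$-adic tower
\[
\cdots \overset{\tau}\longrightarrow \Sigma^{0, -2} \nu X \overset{\tau}\longrightarrow \Sigma^{0, -1} \nu X \overset{\tau}\longrightarrow \nu X,
\]
whose associated graded consists of bigraded suspensions of $C\tau \otimes \nu X$.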
In the language of synthetic spectra, $\Mod_{C\tau}$ is the category controlling the Adams $E_2$ page. Similarly, $\Mod_{C\tau^n}$ fully captures information about the Adams $E_{n + 1}$ page. In particular, the ``$E_{n + 1}$ page product'' we alluded to is simply the composition product in $\Mod_{C\tau^n}$.
Using this, we reinterpret and extend Baues and Jibladze's result as
\begin{thm}
Define the $n$-ary Steenrod algebra by
\[
\A^{(n)} = \bigoplus_k \tau_{[0, n)} \Sigma^k \End(\HFp)
\]
and let $E = \HFp$. Then there is a cocontinuous functor
\[
\H^{(n)} \colon \Mod_{C\tau^n} \to \Mod_{\A^{(n)}}^\op
\]
that is fully faithful when restricted to the full stable subcategory generated by objects of the form $C\tau^n \otimes \nu X$ where $X$ is a finite type spectrum.
\end{thm}
We will prove this in \Cref{part:n-ary} of the paper, as well as compatibility results between different $n$'s. Of course, we are mostly interested in the $n = 2$ case; the higher $n$ result is not practically useful without a computation of the $n$-ary Steenrod algebra itself.
\subsubsection*{\ref{part:secondary}\quad Computing \texorpdfstring{$E_3$}{E3} page data}
In \Cref{part:secondary}, we specialize to the case $n = 2$, where the secondary Steenrod algebra $\A^{(2)}$ was explicitly computed by Baues. Since $\A^{(2)}$ is a differential graded algebra over $\Z/p^2$, standard homological algebra gives us a model category presentation of $\Mod_{\A^{(2)}}$, which we can use to perform computations in $\Mod_{C\tau^2}$.
After describing the explicit algorithms to perform these computations, we implement them at the prime $2$ and compute the following data up to the $140$\textsuperscript{th} stem:
\begin{enumerate}
\item all $d_2$ differentials;
\item all $\Mod_{C\tau^2}$ products with $E_3$ page indecomposables up to the 39\textsuperscript{th} stem; and
\item select $\Mod_{C\tau^2}$ Massey products, including the Adams periodicity operator.
\end{enumerate}
This part primarily documents the details of the algorithm itself, and is largely aimed at an audience interested in implementing the algorithm. The reader is encouraged to skip this part entirely if they are only interested in the mathematical underpinnings of the algorithm (and are satisfied with ``we have a model category presentation so we can compute anything''). Those who are interested in using the results to perform Adams spectral sequence calculations should read \Cref{section:data}, which explains how to retrieve and interpret our generated data.
\subsubsection*{\ref{part:computation}\quad Computing Adams differentials}
Finally, in \Cref{part:computation}, we use the computer generated data to compute new Adams differentials. We first formally define our notion of a hidden extension and prove a generalized version of the Leibniz rule. Using this, we resolve various unknown differentials in \cite{more-stable-stems}. In particular, we resolve all remaining unknown $d_2$, $d_3$, $d_4$ and $d_5$ differentials up to the 95\textsuperscript{th} stem.
\subsection*{Acknowledgements}
I would like to thank Christian Nassau, Dan Isaksen, Haynes Miller, John Rognes, Martin Frankland, Mike Hopkins, Piotr Pstr\k{a}gowski, Robert Bruner and Robert Burklund for helpful discussions related to this paper.
\part{The \texorpdfstring{$n$}{n}-ary Steenrod algebra}\label{part:n-ary}
\section{Overview}
The goal of this \namecref{part:n-ary} is to construct the comparison functor
\[
\H^{(n)} \colon \Mod_{C\tau^n} \to \Mod_{\A^{(n)}}^\op
\]
and show that it is an equivalence on finite type objects.
We begin by introducing the category $\Mod_{C\tau^n}$ in \Cref{section:ctaun}. After proving basic categorical properties of the category, we move on to study duals in this category. The goal is to show that despite not being dualizable, the object $C\tau^m \in \Mod_{C\tau^n}$ for $m < n$ still behaves as if it were dualizable in many circumstances. For example, the natural map to its double dual is an equivalence.
We next introduce the $n$-ary Steenrod algebra in \Cref{section:steenrod}. After constructing $\A^{(n)}$ as a graded $\E_1$-ring, we show that it in fact strictifies to an algebra over a suitable quotient of the sphere. In the case $n = 2$, this recovers Baues' result that $\A^{(2)}$ is a differential graded algebra over $\Z/p^2$. We end by computing the secondary $\A(0)$ as a primer to the full secondary Steenrod algebra introduced in \Cref{section:nassau}.
In \Cref{section:comparison}, we construct the comparison functor $\H^{(n)}$, show that it is an equivalence on finite type objects, and prove naturality in $n$.
Our main result implies that the composition product in $\Mod_{C\tau^n}$ can be computed as the composition product in $\Mod_{\A^{(n)}}$. However, this is not quite true for the composition of bigraded mapping groups $[\Sigma^{a, b} X, Y]$; they only agree up to a sign! This is the infamous discrepancy between the product in the Adams $E_2$ page and the product in $\Ext$ \cite[p. 196]{structure-applications}. We will understand this in \Cref{section:bigraded} by carefully keeping track of the coherence data defining locally bigraded categories. Note that this is important even when $p = 2$, since we are now working over $\Z/4$, not $\Z/2$!
\subsection*{Conventions}
\begin{notation}
If $\mathcal{C}$ is a spectrally enriched category and $X, Y \in \mathcal{C}$, we use $\mathcal{C}(X, Y)$ to denote the mapping space and $F_{\mathcal{C}}(X, Y)$ for the mapping spectrum. Thus, $\mathcal{C}(X, Y) = \Omega^\infty F_{\mathcal{C}}(X, Y)$. We will write $F_\Sp$ as $F$.
If $\mathcal{C}$ is presentably symmetric monoidal, we write $\underline{\mathcal{C}}(X, Y)$ for the internal Hom.
\end{notation}
\begin{notation}
We always use $DX$ to mean the weak dual of $X$ in the appropriate category. That is, $DX = \underline{\mathcal{C}}(X, \mathbf{1})$.
\end{notation}
\begin{notation}
We write $\nu_n \colon \Sp \to \Mod_{C\tau^n}$ for the functor $X \mapsto C\tau^n \otimes \nu X$.
\end{notation}
\begin{notation}
We define bigraded suspension in $\Syn_E$ by
\[
(\Sigma^{a, b} X)(P) = \Sigma^{-b} X(\Sigma^{-a - b} P).
\]
In particular, categorical suspension is $\Sigma^{1, -1}$, while $\nu \Sigma = \Sigma^{1, 0} \nu$. This is chosen to be compatible with the Adams spectral sequence, and differs from \cite{synthetic}.
\end{notation}
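For instance, one checks directly from the formula (using that $\Sigma^{-1}$ is an equivalence of the site) that
\[
(\Sigma^{1, -1} X)(P) = \Sigma X(P), \qquad (\nu \Sigma Y)(P) \simeq (\nu Y)(\Sigma^{-1} P) = (\Sigma^{1, 0} \nu Y)(P),
\]
verifying the two assertions above.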
\section{Modules over \texorpdfstring{$C\tau^n$}{Ctaun}}\label{section:ctaun}
The goal of this \namecref{section:ctaun} is to construct and understand the category $\Mod_{C\tau^n}$.
Recall that the Adams $d_m$ differential is given by the connecting homomorphism of
\[
\Sigma^{0, -m + 1} C\tau \to C\tau^m \to C\tau^{m - 1},
\]
which can be lifted to $\Mod_{C\tau^n}$ if $m \leq n$. In our arguments, we would like to manipulate $C\tau^m$ as if it were dualizable. However, if $m < n$, then $C\tau^m$ is not dualizable in $\Mod_{C\tau^n}$. Nevertheless, $C\tau^m$ is ``finite enough'' that in the situations of interest, it behaves as if it were dualizable. Specifically, we shall show that
\begin{thm}\label{thm:almost-compact}
Let $X \in \Mod_{C\tau^n}$ be such that the underlying object in $\Syn_E$ is dualizable. Then
\begin{enumerate}
\item If $Y \in \Sp$ is a filtered colimit of objects in $\Sp_E^{fp}$, then the map
\[
DX \otimes \nu_n Y \to \underline{\Mod_{C\tau^n}}(X, \nu_n Y)
\]
is an equivalence.
\item The map $X \to DDX$ is an equivalence.
\end{enumerate}
\end{thm}
We will prove these in \Cref{thm:almost-dualizable,thm:double-dual} respectively.
\subsection{The category \texorpdfstring{$\Mod_{C\tau^n}$}{ModCtaun}}
To define $\Mod_{C\tau^n}$ at all, we have to construct $C\tau^n$ as an $\E_\infty$-ring, which does not follow from its definition as the cofiber of $\tau^n$. To make it a ring, we need an alternative description of $C\tau^n$.
Recall that $\Syn_E$ comes with a natural $t$-structure compatible with the symmetric monoidal structure \cite[Propositions 2.16, 2.29]{synthetic}. By \cite[Lemma 4.29]{synthetic}, we can write
\[
C\tau^n = \tau_{<n} \S.
\]
This immediately gives
\begin{cor}\pushQED{\qed}
There is a sequence of $\E_\infty$-rings
\[
\S \to \cdots \to C\tau^n \to \cdots \to C\tau^3 \to C\tau^2 \to C\tau.\qedhere
\]
\end{cor}
This allows us to define the categories $\Mod_{C\tau^n}$, which are symmetric monoidal. As expected, these come with a natural $t$-structure.
\begin{lemma}
Let $(\Mod_{C\tau^n})_{\geq 0}$ and $(\Mod_{C\tau^n})_{\leq 0}$ be the full subcategories of $\Mod_{C\tau^n}$ consisting of modules whose underlying object in $\Syn_E$ is connective and co-connective respectively. Then these form a right-complete $t$-structure compatible with filtered colimits and the symmetric monoidal structure.
\end{lemma}
\begin{proof}
By \cite[Proposition 1.4.4.11]{ha}, there is a $t$-structure whose connective part is generated by $\{\nu_n P\}_{P \in \Sp_E^{fp}}$, and standard arguments (e.g.\ \cite[Lemma 5.3.2.12.3]{ha}) show that the connective part is $(\Mod_{C\tau^n})_{\geq 0}$. It follows from the adjunction that the co-connective objects are those whose underlying object is co-connective.
The right-completeness and compatibility with filtered colimits follow from the same properties of the $t$-structure on $\Syn_E$. Compatibility with the symmetric monoidal structure follows from the bar construction model of the tensor product.
\end{proof}
Our original motivation was to study the cofiber sequence
\[
\Sigma^{0, -m+1} C\tau \to C\tau^m \to C\tau^{m - 1}
\]
whose connecting map is the Adams differential. This cofiber sequence is easy to construct in $\Syn_E$, and we can lift it uniquely to one in $\Mod_{C\tau^n}$ by virtue of
\begin{lemma}\label{lemma:unique-lift}
For any $k \in \Z$, any diagram in $(\Syn_E)_{[k, k + n)}$ lifts uniquely to a diagram in $\Mod_{C\tau^n}$.
\end{lemma}
\begin{proof}
It suffices to prove this for the $k = 0$ case, and we have to show that $(\Mod_{\mathbf{1}_{<n}})_{[0, n)} \to (\Syn_E)_{[0, n)}$ is an equivalence. This follows from the general fact that given a compatible localization functor on a symmetric monoidal category, the category of local objects is equivalent to the category of local modules over the localization of the unit.
\end{proof}
\begin{cor}\label{cor:lift-cofib}
For $k \leq m \leq n$ and $X \in \Sp$, the object $C\tau^m \otimes \nu X$ has a unique $C\tau^n$-structure. Further, the cofiber sequence
\[
\Sigma^{0, -k} C\tau^{m - k} \otimes \nu X \to C\tau^m \otimes \nu X \to C\tau^k \otimes \nu X
\]
has a unique lift to $\Mod_{C\tau^n}$.\fakeqed
\end{cor}
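For example, taking $n = m = 2$ and $k = 1$ gives the lift
\[
\Sigma^{0, -1} C\tau \otimes \nu X \to C\tau^2 \otimes \nu X \to C\tau \otimes \nu X
\]
to $\Mod_{C\tau^2}$, whose connecting map encodes the Adams $d_2$; this is the case of primary interest in this paper.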
\subsection{Almost dualizable objects}
When $m < n$, the object $C\tau^m \in \Mod_{C\tau^n}$ is not dualizable. However, it does have the redeeming quality of being \emph{almost} dualizable.
\begin{defi}
Let $\D$ be a presentably stable symmetric monoidal $\infty$-category with a compatible $t$-structure. We say $X \in \D$ is \emph{almost dualizable} if we can write $X = \colim X_k$ where
\begin{enumerate}
\item $X_k$ is dualizable; and
\item $X_k \to X$ is a $k$-equivalence (i.e.\ it is an equivalence after $\tau_{\leq k}$).
\end{enumerate}
\end{defi}
\begin{remark}
In favorable circumstances, one can show that this agrees with the notion of almost compactness of \cite[Definition 7.2.4.8]{ha}.
\end{remark}
\begin{eg}
A spectrum is almost dualizable iff it is finite type, i.e.\ it is bounded below and $H_*(X; \Z)$ is finitely generated in each degree.
\end{eg}
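For the forward direction (a sketch): if $X$ is finite type, choose a cell structure with finitely many cells in each degree and let $X_k$ be the $k$-skeleton. Each $X_k$ is a finite spectrum, hence dualizable, and $X_k \to X$ is a $k$-equivalence.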
\begin{lemma}
Let $X \in \Mod_{C\tau^n}$ be almost dualizable and $Y \in \Sp$. Suppose $Y$ can be written as a filtered colimit of objects in $\Sp_E^{fp}$. Then the natural map
\[
DX \otimes \nu_n Y \to \underline{\Mod_{C\tau^n}}(X, \nu_n Y)
\]
is an equivalence.
\end{lemma}
\begin{proof}
First observe that if $Y$ is in fact in $\Sp_E^{fp}$, then $\nu_n Y$ is dualizable, and the result holds unconditionally for all $X$.
Let $X = \colim X_k$ as in the definition of almost dualizable. Let $X^k$ be the cofiber of $X_k \to X$. Then $X^k$ is $k$-connected. By \cite[Lemma 4.29]{synthetic}, we know $\nu_n Y$ is $n$-coconnected.
We can write our map as
\[
DX \otimes \nu_n Y \to \lim \underline{\Mod_{C\tau^n}}(X_k, \nu_n Y) = \lim (DX_k \otimes \nu_n Y),
\]
whose fiber is $\lim (DX^k \otimes \nu_n Y)$.
By right-completeness, it suffices to show that $DX^k \otimes \nu_n Y$ is $(n - k)$-coconnected. Since the $t$-structure is compatible with filtered colimits and $\nu_n$ preserves filtered colimits, we may assume $Y \in \Sp_E^{fp}$. Then $DX^k \otimes \nu_n Y = \underline{\Mod_{C\tau^n}}(X^k, \nu_n Y)$, and the result follows.
\end{proof}
\begin{thm}\label{thm:almost-dualizable}
Let $X \in \Mod_{C\tau^n}$. If the underlying object of $X$ is dualizable, then $X$ is almost dualizable.
\end{thm}
\begin{proof}
By shifting $X$, we may assume that $X$ is connective. Let $X_\bullet$ be the bar construction on $X$ as a $C\tau^n$-module. Then we can write
\[
X = \colim X_\bullet = \colim_m \left(\colim_{\Delta^\op_{< m}} X_\bullet\right).
\]
Since $X_\bullet = (C\tau^n)^\bullet \otimes X$ is free on a dualizable object, it is dualizable. Further, when $k > m$, the cofiber of
\[
\colim_{\Delta^\op_{< m}} X_\bullet \to \colim_{\Delta^\op_{< k}} X_\bullet
\]
is $m$-connected \cite[Proposition 1.2.4.5.4]{ha}, so $\colim_{\Delta^\op_{< m}}X_\bullet \to X$ is an $m$-equivalence.
\end{proof}
\subsection{Weak duals in \texorpdfstring{$\Mod_{C\tau^n}$}{ModCtaun}}
Finally, we compute the weak dual of $C\tau^m \in \Mod_{C\tau^n}$, and show that the natural map $C\tau^m \to DDC\tau^m$ is an equivalence. We begin by computing the (strong) dual in $\Syn_E$.
\begin{lemma}
In $\Syn_E$, the dual of $\tau \colon \Sigma^{0, -1} \S \to \S$ is $\Sigma^{0, 1} \tau \colon \S \to \Sigma^{0, 1} \S$.
Thus, we have
\[
D C\tau^n = \Sigma^{-1, n + 1} C\tau^n.
\]
\end{lemma}
\begin{proof}
For the first part, the map $\tau$ is constructed by starting with the following diagram in $\Sp_E^{fp}$:
\[
\begin{tikzcd}
\S^{-1} \ar[r] \ar[d] & * \ar[d] \\
* \ar[r] & \S
\end{tikzcd}
\]
applying $\nu$ to get
\[
\begin{tikzcd}
\S^{-1, 0} \ar[r] \ar[d] & * \ar[d] \\
* \ar[r] & \S
\end{tikzcd}
\]
and then taking the induced map $\S^{0, -1} = \Sigma \S^{-1, 0} \to \S$. Since $\nu$ is symmetric monoidal and the dual of the first diagram is a suspension of the original diagram, the result follows.
The second part follows immediately from the first.
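Explicitly, dualizing the defining cofiber sequence $\Sigma^{0, -n} \S \overset{\tau^n}\longrightarrow \S \to C\tau^n$ gives a fiber sequence
\[
D C\tau^n \to \S \overset{\tau^n}\longrightarrow \Sigma^{0, n} \S.
\]
The cofiber of the second map is $\Sigma^{0, n} C\tau^n$, and categorical desuspension is $\Sigma^{-1, 1}$, so $D C\tau^n = \Sigma^{-1, 1} \Sigma^{0, n} C\tau^n = \Sigma^{-1, n + 1} C\tau^n$.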
\end{proof}
If $X \in \Mod_{C\tau^n}$, then its weak dual is defined to be $\underline{\Mod_{C\tau^n}}(X, C\tau^n)$. The key to understanding the weak dual is the observation that $C\tau^n$ is not just a free $C\tau^n$-module, but a cofree one as well.
Let $F \dashv U \colon \Mod_{C\tau^n} \rightleftharpoons \Syn_E$ be the free-forgetful adjunction. Since $U$ preserves all colimits, it has a right adjoint $C$.
\begin{lemma}
We have
\[
C \S = \Sigma^{-1, n + 1} C\tau^n.
\]
\end{lemma}
\begin{proof}
Since $UC$ is right adjoint to $UF$, we find that
\[
UC(X) = DC\tau^n \otimes X.
\]
So the result follows from \Cref{cor:lift-cofib}.
\end{proof}
\begin{remark}
In fact, one can show that $C \cong \Sigma^{-1, n + 1} F$.
\end{remark}
\begin{cor}
There is a natural equivalence of functors
\[
UD \cong \Sigma^{1, -n - 1} DU.
\]
\end{cor}
\begin{proof}
This follows from the more general relation
\[
U \underline{\Mod_{C\tau^n}}(X, CY) = \underline{\Syn_E}(UX, Y),
\]
whose proof is formal abstract nonsense using the projection formula
\[
U(FZ \otimes X) \cong Z \otimes UX.\qedhere
\]
\end{proof}
\begin{cor}
In $\Mod_{C\tau^n}$, if $m \leq n$, then $DC\tau^m \cong \Sigma^{0, m - n} C\tau^m$.\fakeqed
\end{cor}
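Unwinding the equivalences (a quick check):
\[
U D C\tau^m \cong \Sigma^{1, -n - 1} D U C\tau^m = \Sigma^{1, -n - 1} \Sigma^{-1, m + 1} C\tau^m = \Sigma^{0, m - n} C\tau^m,
\]
and the identification lifts to $\Mod_{C\tau^n}$ by \Cref{lemma:unique-lift}.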
\begin{thm}\label{thm:double-dual}
Let $X \in \Mod_{C\tau^n}$ be such that $UX$ is dualizable. Then the natural map $X \to DD X$ is an equivalence.
\end{thm}
\begin{proof}
Since the equivalence $UD = \Sigma^{1, -n - 1}DU$ preserves the co-evaluation map, this follows from the conservativity of $U$.
\end{proof}
\section{The \texorpdfstring{$n$}{n}-ary Steenrod algebra}\label{section:steenrod}
\subsection{Constructing the \texorpdfstring{$n$}{n}-ary Steenrod algebra}\label{section:n-ary-algebra}
Let $n \in [0, \infty]$. Informally, we can define the $n$-ary Steenrod algebra as
\[
\A^{(n)} = \bigoplus_k \tau_{[0, n)} \Sigma^k \End(\HFp).
\]
While this is easy to write down as a graded spectrum, the $\E_1$-ring structure requires performing a categorical dance.
\begin{defi}
The category of graded spectra is $\Sp^\gr = \Sp^\Z$, where $\Z$ is viewed as a discrete abelian group. This is a symmetric monoidal category under Day convolution.
We give this a $t$-structure by declaring the connective part to be $\Sp_{\geq 0}^\Z$.
\end{defi}
\begin{defi}
We define the bigraded spheres $\S^{a, b} \in \Sp^\gr$ to be $\S^a$ in degree $b$ and $0$ elsewhere. We define $[k] \colon \Sp^\gr \to \Sp^\gr$ to be $\S^{0, k} \otimes (-)$.
\end{defi}
The first step in constructing $\A^{(n)}$ as an $\E_1$-ring in $\Sp^\Z$ is to construct the $\E_1$-ring that is $\End(\HFp)$ in every degree. This follows from the functoriality of Day convolution.
\begin{lemma}[{\cite[Corollary 3.8]{day-functorial}}]
Let $f \colon \C \to \C'$ be symmetric monoidal and $\mathcal{D}$ a presentably symmetric monoidal category. Then
\begin{enumerate}
\item $f^* \colon \mathcal{D}^{\C'} \to \mathcal{D}^\C$ is lax symmetric monoidal; and
\item $f_* \colon \mathcal{D}^{\C} \to \mathcal{D}^{\C'}$ is symmetric monoidal,
\end{enumerate}
where $f^*$ is the restriction functor and $f_*$ is the left adjoint to $f^*$.\fakeqed
\end{lemma}
In our case, we have symmetric monoidal functors
\[
\{0\} \overset\iota\hookrightarrow \Z \overset{\Delta}\twoheadrightarrow \{0\}
\]
which result in four functors between $\Sp^\gr$ and $\Sp$:
\begin{itemize}
\item $\iota^* X_\bullet = X_0$.
\item $(\iota_* X)_0 = X$ and vanishes in non-zero degrees.
\item $(\Delta^* X)_n = X$ for all $n$.
\item $\Delta_* X_\bullet = \bigoplus_n X_n$.
\end{itemize}
Then $\Delta^* \End(\HFp)$ is the $\E_1$-ring that is $\End(\HFp)$ in every degree. We next need to apply the degreewise shifts $\Sigma^k$.
\begin{lemma}
There is a cocontinuous $\E_1$-monoidal functor $\Phi \colon \Sp^\gr \to \Sp^\gr$ that sends $\{X_k\}$ to $\{\Sigma^k X_k\}$.
\end{lemma}
\begin{proof}
By \cite[Proposition 4.8.1.10]{ha}, such a functor is equivalent to an $\E_1$-monoidal functor $\Z \to \Sp^\gr$, which we choose to send $k$ to $\S^{k, k}$. Then $\Phi$ is the unique cocontinuous functor extending this, and must be of the given form.
\end{proof}
\begin{defi}
We define $\Phi^{(n)} \colon \Sp \to \Sp^\gr$ by $\Phi^{(n)} = \tau_{[0, n)} \Phi \Delta^*$.
\end{defi}
This is a lax $\E_1$-monoidal functor and $\Phi^{(n)} \Sigma X = (\Phi^{(n)} X)[-1]$.
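Indeed, degreewise we have
\[
(\Phi^{(n)} \Sigma X)_k = \tau_{[0, n)} \Sigma^k \Sigma X = \tau_{[0, n)} \Sigma^{k + 1} X = (\Phi^{(n)} X)_{k + 1} = ((\Phi^{(n)} X)[-1])_k.
\]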
\begin{defi}
The $n$-ary Steenrod algebra is the $\E_1$-ring in $\Sp^\gr$ given by
\[
\A^{(n)} = \Phi^{(n)} \End(\HFp).
\]
The $n$-ary cohomology functor
\[
\H^{(n)} \colon \Sp \to \Mod_{\A^{(n)}}^\op
\]
is given by
\[
\H^{(n)}(X) = \Phi^{(n)} F(X, \HFp).
\]
\end{defi}
The following lemmas are immediate from the definition:
\begin{lemma}
$\H^{(n)}$ sends sums to products and $\H^{(n)}(\Sigma X) = \H^{(n)}(X)[1]$.\fakeqed
\end{lemma}
\begin{lemma}\label{lemma:hn-homotopy}\pushQED{\qed}
Let $\tau \in \pi_{1, 1} \A^{(n)}$ be the class of the identity map in $\Sigma \End(\HFp)$. Then there is an isomorphism of rings
\[
\pi_{*, *} \A^{(n)} = \A [\tau] / \tau^n,
\]
where $\A$ is the ordinary Steenrod algebra. Further, as a $\pi_{*, *} \A^{(n)}$-module, we have
\[
\pi_{*, *} \H^{(n)} (X) = H^*(X) [\tau] / \tau^n.
\]
Thus, we have
\[
\H^{(n - 1)} (X) = \A^{(n - 1)} \otimes_{\A^{(n)}} \H^{(n)} (X).\qedhere
\]
\end{lemma}
\begin{remark}
$\A^{(\infty)}$ is a \emph{shift algebra} in the sense of \cite{abstract-goerss-hopkins}, and $\H^{(\infty)} (X)$ is a periodic $\A^{(\infty)}$-module.
This lets us apply the results of \cite[Section 4]{abstract-goerss-hopkins}. In particular, $\H^{(n)}(X)$ is a potential $(n - 1)$-stage, and there is an obstruction theory for the space of possible values of $\H^{(n)}(X)$ given $H^*(X)$. In many cases of interest (e.g.\ the sphere), this space is connected, so any $\A^{(n)}$-module with the right homotopy groups must be $\H^{(n)}(X)$.
\end{remark}
\begin{remark}
It follows from the descriptions of the homotopy groups that if $A \to B \to C$ induces a short exact sequence on cohomology, then $\H^{(n)} (A) \to \H^{(n)} (B) \to \H^{(n)} (C)$ is a cofiber sequence.
\end{remark}
In order to connect $\Mod_{\A^{(n)}}$ to $\Mod_{C\tau^n}$, we will need an alternative description of $\Mod_{\A^{(n)}}$. Recall the $P_\Sigma$ construction from \cite[Definition 5.5.8.8]{htt}, and write $P_\Sigma^\Sp(\C)$ for the stabilization of $P_\Sigma(\C)$. By \cite[Remark C.1.5.9]{sag}, $P_\Sigma^\Sp(\C)$ is the full subcategory of $\Fun(\C^\op, \Sp)$ of (finite) product-preserving functors.
\begin{defi}
We let $\Free_{\A^{(n)}}$ be the full subcategory of $\Mod_{\A^{(n)}}$ consisting of finite direct sums of modules of the form $\A^{(n)}[k]$.
\end{defi}
\begin{lemma}
There is an equivalence of categories
\[
P_\Sigma^\Sp(\Free_{\A^{(n)}}) \cong \Mod_{\A^{(n)}}
\]
with inverse given by the spectral Yoneda embedding.
\end{lemma}
\begin{proof}
The inclusion of $\Free_{\A^{(n)}}$ gives a cocontinuous map
\[
F\colon P_\Sigma(\Free_{\A^{(n)}}) \to \Mod_{\A^{(n)}}.
\]
We claim this is fully faithful with essential image given by $(\Mod_{\A^{(n)}})_{\geq 0}$. By \cite[Proposition 5.5.8.22]{htt}, we need to show that the inclusion of $\Free_{\A^{(n)}}$ is fully faithful with image given by compact projective generators. The first part is clear and the second follows from \cite[Corollary 7.1.4.14]{ha}.
Since $\Mod_{\A^{(n)}}$ is the stabilization of its connective part, the stabilization $\tilde{F}$ of $F$ is an equivalence of categories.
Let $G$ and $\tilde{G}$ be the right adjoints to $F$ and $\tilde{F}$ respectively. By combining \cite[Corollary 5.2.6.5, Proposition 5.5.8.10]{htt}, we know $G$ is given by the Yoneda embedding.
For the spectral version, note that we must have $\Omega^\infty \tilde{G} = G$, since both are right adjoints to $F = \tilde{F} \Sigma^\infty_+$. By \cite[Corollary 1.4.2.23]{ha}, any two left exact functors $\Mod_{\A^{(n)}} \to P_\Sigma(\Free_{\A^{(n)}})$ that agree after applying $\Omega^\infty$ must in fact agree. So $\tilde{G}$ must be the spectral Yoneda embedding.
\end{proof}
\begin{remark}
Given a presheaf $X \colon \Free_{\A^{(n)}}^\op \to \Sp$, the underlying $\A^{(n)}$-module of $X$ is given by
\[
X_k = X(\A^{(n)} [k]).
\]
If $\alpha \in \pi_{0, \ell} \A^{(n)} = \A_\ell$, then its action on $X$ is given by applying the presheaf to
\[
\alpha \colon \A^{(n)}[\ell] \to \A^{(n)}.
\]
This is all standard; the less-obvious part is the action of $\tau$. Informally, we expect this to be given by $X$ acting on $\tau \colon \Sigma \A^{(n)}[1] \to \A^{(n)}$. However, $\Sigma \A^{(n)}[1]$ is not an object in $\Free_{\A^{(n)}}$. Nevertheless, the map $\tau$ is represented by a commutative diagram
\[
\begin{tikzcd}
\A^{(n)}[1] \ar[r] \ar[d] & * \ar[d] \\
* \ar[r] & \A^{(n)}.
\end{tikzcd}
\]
Applying $X$ to this diagram then gives a map $\Sigma X \to X[-1]$, which is the action of $\tau$.
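In more detail (a sketch): applying the presheaf $X$ to the square yields a commuting square
\[
\begin{tikzcd}
X(\A^{(n)}) \ar[r] \ar[d] & * \ar[d] \\
* \ar[r] & X(\A^{(n)}[1]),
\end{tikzcd}
\]
and hence a map from $X_0 = X(\A^{(n)})$ to the pullback $\Omega X(\A^{(n)}[1]) = \Omega X_1$, whose adjoint $\Sigma X_0 \to X_1$ is the asserted map in each degree.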
\end{remark}
The category $\Free_{\A^{(n)}}$ in turn admits a more direct definition in terms of Eilenberg--MacLane spectra.
\begin{defi}
Let $\M_{\HFp}$ be the full subcategory of $\Sp$ consisting of finite sums of shifts of $\HFp$.
\end{defi}
\begin{lemma}\label{cor:free-an}
The $n$-ary cohomology functor gives an equivalence of categories
\[
h_n \M_{\HFp} \overset\sim\to \Free_{\A^{(n)}}^\op.
\]
Under the isomorphism $\Mod_{\A^{(n)}} \cong P_\Sigma^\Sp(h_n \M_{\HFp}^\op)$, the $n$-ary cohomology $\H^{(n)} X$ of a spectrum $X$ corresponds to the presheaf
\[
M \mapsto \tau_{[0, n)} F(X, M).
\]
\end{lemma}
This is an immediate consequence of the following more general lemma:
\begin{lemma}
Let $X \in \Sp$ and $M \in \M_{\HFp}$. Then the natural map
\[
\H^{(n)}\colon \Sp(X, M) \to \Mod_{\A^{(n)}}(\H^{(n)}(M), \H^{(n)}(X))
\]
is $n$-truncation.
\end{lemma}
\begin{proof}
Since $\H^{(n)}$ preserves shifts and direct sums, we may assume $M = \HFp$, so that $\H^{(n)}(M) = \A^{(n)}$. Then the right-hand side is
\begin{multline*}
\Mod_{\A^{(n)}}(\A^{(n)}, \H^{(n)}(X)) = \Sp^\gr(\iota_* \S, \H^{(n)}(X)) = \Sp(\S, \iota^* \H^{(n)}(X)) \\
= \Sp(\S, \tau_{[0, n)} F(X, \HFp)) = \tau_{<n} \Sp(X, \HFp).
\end{multline*}
\end{proof}
We end with a lemma on the naturality of this isomorphism.
\begin{lemma}\label{lemma:psigma-nat}
Let $m < n$. Under the isomorphism $\Mod_{\A^{(n)}} \cong P_\Sigma^\Sp(h_n \M_{\HFp}^\op)$, the forgetful functor $\Mod_{\A^{(m)}} \to \Mod_{\A^{(n)}}$ corresponds to restriction along $h_n \M_\HFp^\op \to h_m \M_\HFp^\op$.
\end{lemma}
\begin{proof}
It suffices to show that they have the same left adjoint. By construction, the left adjoint to restriction along $h_n \M_\HFp^\op \to h_m \M_\HFp^\op$ is the unique stable cocontinuous functor $P_\Sigma^\Sp(h_n \M_\HFp^\op) \to P_\Sigma^\Sp(h_m \M_\HFp^\op)$ that extends the map $h_n \M_\HFp^\op \to h_m \M_\HFp^\op$. Since $\A^{(m)} \otimes_{\A^{(n)}}(-)$ also fits this description, we are done.
\end{proof}
\begin{remark}
The equivalence $\Mod_{\A^{(n)}} \cong P_\Sigma^\Sp(h_n \M_\HFp^\op)$ lets us directly construct the category of modules over $\A^{(n)}$ without constructing $\A^{(n)}$ itself. A version of this was studied by \cite{mapping-algebra} using the language of model categories. They were then able to prove directly that it encodes information about the Adams $E_{n + 1}$ page.
\end{remark}
\subsection{Strictifying the \texorpdfstring{$n$}{n}-ary Steenrod algebra}
\emph{A priori}, the algebra $\A^{(n)}$ is a ring over $\S$. In the $n = 1$ case, we know it is in fact a ring over $\F_p$, which is much easier to work with. In the $n = 2$ case, Baues \cite[Section 5]{baues-book} showed that $\A^{(2)}$ is a ring over $\Z/p^2$, which also allows us to employ homological algebra machinery to perform computations. In general, $\A^{(n)}$ is a ring over a suitable truncation of the sphere.
\begin{defi}
For $E$ an Adams type homology theory and $n \geq 1$, define the truncation functor
\[
\tau^E_{<n} = (-)^E_{<n} \colon \Sp_{\geq 0} \to \Sp_{\geq 0}
\]
by
\[
\tau^E_{<n} X = X^E_{<n} = F_{\Syn_E}(\S, C\tau^n \otimes \nu X).
\]
\end{defi}
\begin{lemma}
The functor $X \mapsto X^E_{<n}$ is lax symmetric monoidal and natural in $n$. Moreover, there is a natural transformation of lax symmetric monoidal functors $X \to X^E_{<n}$. On homotopy groups, this kills elements whose image in the $E$-based Adams spectral sequence has $t$-coordinate at least $n$.\fakeqed
\end{lemma}
\begin{remark}
The definition of $\tau^E_{<n}$ makes sense for non-connective spectra as well, but the effect on negative homotopy groups is more subtle.
\end{remark}
\begin{eg}
$\S^{\HFp}_{<1} = \F_p$, $\S^{\HFp}_{<2} = \Z/p^2$ and $\pi_* \S^{\F_2}_{<3} = \Z/8 [\eta] / \eta^2$.
\end{eg}
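For instance, unwinding the $t$-coordinate description: $p^k \in \pi_0 \S$ has Adams filtration $k$ and $t$-coordinate $k$, giving $\pi_0 \S^{\HFp}_{<n} = \Z/p^n$; at $p = 2$, $\eta$ has $(s, t) = (1, 2)$ and so survives in $\S^{\F_2}_{<3}$, while $\eta^2$ has $(s, t) = (2, 4)$ and is killed.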
By construction, we know that
\begin{lemma}
$F(C\tau^n, -) \colon \Mod_{C\tau^n} \to \Sp$ lifts to a lax symmetric monoidal functor $\Mod_{C\tau^n} \to \Mod_{\S^{\HFp}_{<n}}$.\fakeqed
\end{lemma}
\begin{thm}
$\A^{(n)}$ lifts to an $\E_1$-algebra in $\Mod_{\S^{\HFp}_{<n}}^\Z$.
\end{thm}
\begin{proof}
By \cite[Construction C.17]{r-motivic}, there is a symmetric monoidal functor $\Z \to \Syn_{\HFp}$ that sends $n$ to $\S^{0, -n}$. Call this functor $\S^{0, -\bullet}$.
Consider the composite
\[
\Sp \overset{\nu}\longrightarrow \Syn_{\HFp} \overset{\Delta^*}\longrightarrow \Syn_{\HFp}^\Z \overset{\otimes \S^{0, -\bullet}}\longrightarrow \Syn_{\HFp}^\Z \overset{C\tau^n \otimes }\longrightarrow \Mod_{C\tau^n}^\Z \overset{F(C\tau^n, -)}\longrightarrow \Sp^\Z\overset{\Phi}\longrightarrow \Sp^\Z.
\]
All the functors are lax symmetric monoidal, and the last two maps naturally lift to $\Mod_{\S_{<n}^\HFp}^\Z$. So it suffices to show that $\A^{(n)}$ is the image of $\End(\HFp)$ under this map.
Let this composite be $F_{\nu \otimes C\tau^n}$. We consider the variations $F_Y$ and $F_\nu$ defined as follows:
\begin{itemize}
\item $F_\nu$ is obtained by replacing the fourth and fifth maps of $F_{\nu \otimes C\tau^n}$ with $\Syn_{\HFp}^\Z \overset{F(\S, -)}\longrightarrow \Sp^\Z$.
\item $F_Y$ is obtained by replacing the first map of $F_\nu$ with $Y\colon \Sp \to \Syn_{\HFp}$ (recall that $Y = \tau^{-1} \nu$ is the spectral Yoneda embedding).
\end{itemize}
We then have natural transformations
\[
\begin{tikzcd}
F_\nu \ar[r] \ar[d] & F_{\nu \otimes C\tau^n} \\
F_Y
\end{tikzcd}
\]
of lax symmetric monoidal functors. It suffices to show that
\begin{enumerate}
\item $F_Y \cong \Phi \Delta^*$;
\item $F_\nu \End(\HFp) = \tau_{\geq 0} F_Y \End(\HFp)$; and
\item $F_{\nu \otimes C\tau^n} \End(\HFp) = \tau_{< n} F_\nu \End(\HFp)$.
\end{enumerate}
The last two can be checked on homotopy groups. To prove the first, note that on $\tau$-invertible spectra, $F(\S, -)$ is canonically equivalent to $\tau^{-1} \colon \Syn_{\HFp} \to \Sp$ as a lax symmetric monoidal functor. Further, $\S^{0, -\bullet}$ is constructed so that after $\tau$-inversion, it is the symmetric monoidal functor that is constantly the unit. So we are done.
\end{proof}
\subsection{The secondary \texorpdfstring{$\A(0)$}{A(0)}}\label{section:a0}
Before we move on, it is prudent to give some intuition for what $\A^{(n)}$ looks like. In \Cref{section:nassau}, we are going to give a full description of $\A^{(2)}$. However, this description is fairly complex and it is easy to get lost in the details. To provide a simpler example, we instead look at the secondary $\A(0)$, defined by
\[
\A(0)^{(2)} = \bigoplus_{k \in \Z} \tau_{[0, 1]} \Sigma^{k} \End_\Z(\F_p).
\]
To compute this, we use the following explicit presentation of $\F_p \in \Mod_\Z$:
\[
\F_p =
\left(
\begin{tikzcd}
\Z \{x_1\} \ar[d, "p"] \\ \Z \{x_0\}
\end{tikzcd}
\right).
\]
Then $\End_\Z(\F_p)$ is given by
\[
\End_\Z(\F_p) = \F_p \otimes \F_p^* = \left(
\begin{tikzcd}
\Z \{x_1 \otimes x_0^*\} \ar[d, "{(p, -p)}"] \\
\Z\{x_0 \otimes x_0^*, x_1 \otimes x_1^*\} \ar[d, "{(p, p)}"] \\
\Z\{x_0 \otimes x_1^*\}.
\end{tikzcd}
\right).
\]
As is well-known, $\pi_* \End_\Z(\F_p) = \F_p\{1, \beta\}$, with explicit representatives given by
\[
1 = x_0 \otimes x_0^* - x_1 \otimes x_1^*,\quad \beta = x_0 \otimes x_1^*.
\]
By definition, $\A(0)^{(2)}$ is the sum of truncations
\[
\begin{tikzcd}[column sep=0.5em, row sep=tiny]
\color{gray} k = 0 & \color{gray} k = 1 & \color{gray} k = 2 \\
\Z\{x_1 \otimes x_0^*\} \ar[dd, "p"] & \Z\{x_1 \otimes x_1^*\} \oplus \F_p\{x_0 \otimes x_0^* - x_1 \otimes x_1^*\}\ar[dd, "{(p, 0)}"] & \F_p\{x_0 \otimes x_1^*\} \ar[dd] \\
\vphantom{x} \\
\Z\{x_0 \otimes x_0^* - x_1 \otimes x_1^*\} & \Z\{x_0 \otimes x_1^*\} & 0
\end{tikzcd}
\]
In $\A(0)^{(2)}$, we let $1$ and $\beta$ be the corresponding classes in cohomological degree $0$, and define the following classes in cohomological degree $1$:
\[
\mu_0 = x_1 \otimes x_0^*,\quad \tau = x_0 \otimes x_0^* - x_1 \otimes x_1^*.
\]
Thus, $\mu_0$ is the null-homotopy of $p$, while $\tau$ detects the copy of $1$ in cohomological degree $1$. We can then write $\A(0)^{(2)}$ as the chain complex
\[
\A(0)^{(2)} = \left(
\begin{tikzcd}
\Z\{\mu_0, \mu_0 \beta\} \oplus \F_p\{\tau, \tau \beta\} \ar[d, "d"] \\
\Z \{1, \beta\}
\end{tikzcd}
\right),\quad\sidedeg
\]
As for the algebra structure, $\tau$ acts centrally, while we have the crucial relation
\[
\beta \mu_0 = \mu_0 \beta + \tau.
\]
This relation encodes the fact that $\beta$ detects $p$.
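As a consistency check (up to signs): since $d(\mu_0) = p$ and $\beta$ is a cycle, we have
\[
d(\beta \mu_0 - \mu_0 \beta) = \beta p - p \beta = 0,
\]
so $\beta \mu_0 - \mu_0 \beta$ is a cycle in cohomological degree $1$, and the relation identifies it with the cycle $\tau$.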
We now have a full description of $\A(0)^{(2)}$. However, this is a differential graded algebra over $\Z$, instead of the promised $\Z/p^2$. To remedy this, observe that as a chain complex, $\A(0)^{(2)}$ is in fact equivalent to one over $\F_p$ --- we can simply quotient out the $\mu_0$ factors and end up with $\A(0)$ in cohomological degrees $0$ and $1$. However, this quotienting does not respect the algebra relation $\beta \mu_0 = \mu_0 \beta + \tau$. Nevertheless, since $p \tau = 0$, we can quotient out $p \mu_0$, and get our final presentation
\[
\A(0)^{(2)} = \left(
\begin{tikzcd}
\F_p\{\mu_0, \mu_0 \beta\} \oplus \F_p\{\tau, \tau \beta\} \ar[d, "d"] \\
\Z/p^2 \{1, \beta\}
\end{tikzcd}
\right),\quad\sidedeg
\]
Equipped with a presentation of $\A(0)^{(2)}$, we can now compute the secondary cohomology of various $\Z$-modules. The simplest $\Z$-module is, of course, $\Z$ itself. Tracing through the definitions gives the following presentation of the secondary cohomology of $\Z$:
\[
\kk = \left(\begin{tikzcd}
\F_p\{\mu_0\} \oplus \F_p\{\tau\} \ar[d, "d"] \\
\Z/p^2
\end{tikzcd}\right),\quad\sidedeg
\]
The only non-trivial $\A(0)^{(2)}$ action is given by
\[
\beta \mu_0 = \tau.
\]
Alternatively, this can be described as $\A(0)^{(2)} / (\A(0)^{(2)} \beta)$.
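Explicitly (a quick check): the right ideal $\A(0)^{(2)} \beta$ has basis $\{\beta, \mu_0 \beta, \tau \beta\}$, so the quotient has basis $\{1, \mu_0, \tau\}$ with $d(\mu_0) = p$, and the relation $\beta \mu_0 = \mu_0 \beta + \tau$ reduces to $\beta \mu_0 = \tau$, recovering the description of $\kk$ above.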
\begin{remark}
While there is a ring map $\kk \to \A(0)^{(2)}$, this does not map to the literal center of $\A(0)^{(2)}$. Instead, there is a chain homotopy between the left and right multiplication maps.
\end{remark}
More interestingly, we can look at two-cell complexes. The first example is $\Z/p$. Since this has cells in degrees $0$ and $1$, as a chain complex, we have
\[
\H^{(2)} (\Z/p) = \kk \{a\} \oplus \kk \{b\},\quad |a| = 0,\quad |b| = 1.
\]
However, there is a non-trivial $\A(0)^{(2)}$ action given by $\beta a = b$. This distinguishes it from $\H^{(2)}(\Z \oplus \Z[1])$, which has the same underlying chain complex but with $\beta a = 0$. Of course, this difference already manifests itself on the level of ordinary cohomology, without having to go to the secondary level.
On the other hand, $\Z/p^2$ and $\Z \oplus \Z[1]$ \emph{do} have the same ordinary cohomology groups. We can compute that $\H^{(2)} (\Z/p^2)$ is again $\kk\{a\} \oplus \kk \{b\}$, but now the $\A(0)^{(2)}$ action is given by
\[
\beta a = p b.
\]
Since $pb$ is null-homotopic, this is not visible on the level of ordinary cohomology. Instead, this is detected by the secondary cohomology operation associated to the equation $\beta \beta = 0$. Indeed, we can compute
\[
\langle \beta, \beta, a\rangle = 0 \cdot a + \beta \cdot \mu_0 b = \tau b
\]
with no indeterminacy.
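Unwinding the last step, using the relation $\beta \mu_0 = \mu_0 \beta + \tau$ and $\beta b = 0$:
\[
\beta \cdot \mu_0 b = (\mu_0 \beta + \tau) b = \mu_0 (\beta b) + \tau b = \tau b.
\]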
\section{The comparison functor}\label{section:comparison}
Set $E = \HFp$. In this \namecref{section:comparison}, we will construct the comparison functor
\[
\H^{(n)} \colon \Mod_{C\tau^n} \to \Mod_{\A^{(n)}}^\op
\]
and show that it has the desired properties. The constructions will work when $n = \infty$ as well, in which case we set $\Mod_{C\tau^n}$ to be $\widehat{\Syn_\HFp}$, the category of hypercomplete synthetic spectra, and $C\tau^n \otimes (-)$ is the hypercompletion functor.
\subsection{Constructing the comparison functor}
The comparison functor will be a natural extension of the $n$-ary cohomology functor $\H^{(n)} \colon \Sp \to \Mod_{\A^{(n)}}^\op$ along $\nu_n \colon \Sp \to \Mod_{C\tau^n}$. To construct this functor, we use the isomorphism
\[
\Mod_{\A^{(n)}} \cong P_\Sigma^\Sp(h_n \M_{\HFp}^\op).
\]
By \Cref{cor:free-an}, the $n$-ary cohomology functor can then be described by
\[
\H^{(n)}(X)(M) = \tau_{[0, n)} F(X, M).
\]
To extend $\H^{(n)}$ along the map $\nu_n$, we need to express $\tau_{[0, n)} F(X, M)$ in terms of $\nu_n X$. This follows from the following lemma:
\begin{lemma}\label{lemma:truncation}
Let $M$ be a homotopy ${\HFp}$-module. Then $\nu M = \tau_{\geq 0} F(-, M)$. Thus, for any $X \in \Sp$, we have
\[
F_{\Syn_{\HFp}}(\nu X, \nu M) = \tau_{\geq 0} F(X, M).
\]
Further, the functor $C\tau^n \otimes(-)$ exhibits $F_{\Mod_{C\tau^n}}(\nu_n X, \nu_n M)$ as the $n$-truncation of $F_{\Syn_{\HFp}}(\nu X, \nu M)$.
\end{lemma}
\begin{proof}
By construction, $\nu M$ is the sheafification of $\tau_{\geq 0} F(-, M)$. Thus, we have to show that $\tau_{\geq 0} F(-, M)$ is already a sheaf. By \cite[Theorem 2.8]{synthetic}, we have to show that if $A \to B \to C$ is a cofiber sequence in $\Sp_{\HFp}^{fp}$ with the second map being an $(\HFp)_*$-surjection, then
\[
\tau_{\geq 0} F(C, M) \to \tau_{\geq 0} F(B, M) \to \tau_{\geq 0} F(A, M)
\]
is a fiber sequence.\footnote{\cite[Theorem 2.8]{synthetic} states this for presheaves of spaces instead of spectra. However, the proof reduces it to the case of spectra and proves it for spectra.} Since this is a fiber sequence before applying $\tau_{\geq 0}$, it suffices to show that
\[
[B, M] \to [A, M] \to 0
\]
is exact. Since ${\HFp}$ is Adams type, this is given by
\[
\Hom_{\F_p}((\HFp)_* B, M_*) \to \Hom_{\F_p}((\HFp)_* A, M_*) \to 0.
\]
Since $(\HFp)_* A \to (\HFp)_* B$ splits, the result follows.
To prove the second part, note that $\nu$ preserves filtered colimits, so it suffices to prove this when $X$ is finite, which follows from Yoneda's lemma.
The last part follows from the construction of $\tau$ (for $n = \infty$, use that $\nu M$ is already hypercomplete).
\end{proof}
\begin{cor}
The composite
\[
\begin{tikzcd}[column sep=large]
\M_{\HFp} \ar[r, hook] & \Sp \ar[r, "\nu"] & \Syn_{\HFp} \ar[r, "C\tau^n \otimes (-)"] & \Mod_{C\tau^n}
\end{tikzcd}
\]
identifies the image with $h_n \M_{\HFp}$.
\end{cor}
This allows us to define the comparison functor as follows:
\begin{defi}
We define $\H^{(n)}\colon \Mod_{C\tau^n} \to P_\Sigma^\Sp(h_n \M_\HFp^\op)^\op \cong \Mod_{\A^{(n)}}^\op$ by
\[
\H^{(n)}(X)(M) = F_{\Mod_{C\tau^n}}(X, \nu_n M).
\]
\end{defi}
It is easy to see that $\H^{(n)}$ preserves the two suspension functors (but see \Cref{section:bigraded} for crucial details), and a little diagram chase shows that $\H^{(n)}$ sends $\tau$ to $\tau$.
For the rest of the \namecref{section:comparison}, we will use $\H^{(n)}$ to refer to this extended functor on $\Mod_{C\tau^n}$ instead of the $n$-ary cohomology functor.
\begin{remark}
A useful property of the comparison functor $\H^{(n)}$ is that it is cocontinuous, unlike the $n$-ary cohomology functor. The trade-off is that $\nu_n$ is, of course, not cocontinuous.
\end{remark}
\subsection{Fully faithfulness of the comparison functor}
We shall show that $\H^{(n)}$ is fully faithful when restricted to the full subcategory of finite type objects.
Recall that a spectrum $X$ is finite type if it is bounded below and $H_*(X; \Z)$ is finitely generated in each degree. In other words, it is a bounded below spectrum built with finitely many cells in each degree. We let $\Sp^{ft} \subseteq \Sp$ be the full subcategory of finite type spectra. By the K\"unneth formula, this is closed under tensor products.
\begin{defi}
Let $\Mod_{C\tau^n}^{ft} \subseteq \Mod_{C\tau^n}$ be the full stable subcategory generated by $\{\nu_n P\}_{P \in \Sp^{ft}}$.
\end{defi}
\begin{thm}\label{thm:y-ff}
$\H^{(n)}$ restricts to a fully faithful functor $\Mod_{C\tau^n}^{ft} \to \Mod_{\A^{(n)}}^\op$. In fact, for any $X \in \Mod_{C\tau^n}$ and $Y \in \Mod_{C\tau^n}^{ft}$, the map
\[
\H^{(n)} \colon \Mod_{C\tau^n}(X, Y) \to \Mod_{\A^{(n)}}(\H^{(n)}Y, \H^{(n)} X)
\]
is an equivalence.
\end{thm}
The proof requires some auxiliary lemmas, which we will prove after proving the main theorem.
\begin{proof}
Let $\C \subseteq \Mod_{C\tau^n}$ be the full subcategory consisting of objects $Y$ such that for any $X \in \Mod_{C\tau^n}$, the map
\[
F_{\Mod_{C\tau^n}}(X, Y) \to F_{\Mod_{\A^{(n)}}}(\H^{(n)}Y, \H^{(n)} X)
\]
is an equivalence. Then $\C$ is stable and is closed under limits preserved by $\H^{(n)}$ (that is, limits that are sent to colimits in $\Mod_{\A^{(n)}}$).
By the spectral Yoneda lemma, $\M_{\HFp} \subseteq \C$.
Next, we show that if $P \in \Sp^{ft}$, then $\nu_n (\HFp \otimes P) \in \C$. Indeed, we can write
\[
\HFp \otimes P = \bigoplus \Sigma^{k_i} {\HFp}
\]
where $k_i \to \infty$. By \Cref{cor:sum-prod}, we have
\[
\nu_n (\HFp \otimes P) = \bigoplus \nu_n \Sigma^{k_i} \HFp = \prod \nu_n \Sigma^{k_i} \HFp.
\]
Since $\H^{(n)}$ is cocontinuous, it preserves direct sums. Thus, we have
\[
\H^{(n)} \prod \nu_n \Sigma^{k_i} \HFp = \H^{(n)} \bigoplus \nu_n \Sigma^{k_i} \HFp = \prod \H^{(n)} \nu_n \Sigma^{k_i} \HFp = \bigoplus \H^{(n)} \nu_n \Sigma^{k_i} \HFp.
\]
So the direct product is preserved by $\H^{(n)}$, and $\nu_n (\HFp \otimes P) \in \C$.
Finally, if $P \in \Sp^{ft}$, then by \Cref{lemma:ass-converge}, its Adams spectral sequence converges. That is, we have
\[
\nu_n P = \varprojlim \nu_n (CB^\bullet(\HFp) \otimes P).
\]
By \Cref{lemma:y-tot}, this limit is preserved by $\H^{(n)}$. So $\nu_n P \in \C$.
\end{proof}
We now prove the various lemmas used in the proof.
\begin{lemma}
Let $X$ be $k$-connective. Then $\pi_{a, b} \nu X = \pi_{a, b} \nu_n X = 0$ when $a < k$.
\end{lemma}
\begin{proof}
We first prove the $\nu X$ version. By \cite[Theorem 4.58]{synthetic}, this is true for $b \leq 0$. For $b > 0$, the long exact sequence from \cite[Proposition 4.57]{synthetic} gives
\[
\Ext^{b - 1, a + b}_{\A_*}(\F_p, H_* X) \to \pi_{a, b - 1} \nu X \to \pi_{a, b} \nu X \to \Ext_{\A_*}^{b, a + b}(\F_p, H_* X).
\]
So the result follows from the vertical vanishing line of $\Ext$.
As for the $\nu_n$ version, the $n < \infty$ case follows from the cofiber sequence
\[
\begin{tikzcd}
\Sigma^{0, -n} \nu X \ar[r, "\tau^n"] & \nu X \ar[r] & C\tau^n \otimes \nu X.
\end{tikzcd}
\]
When $n = \infty$, since $X$ is bounded below, we have $\nu_\infty X = \nu X^\wedge_p$. Since $X^\wedge_p$ is also $k$-connective, the result follows.
\end{proof}
\begin{cor}\label{cor:sum-prod}
Let $\{X_i\}_{i \in \mathbb{N}}$ be a sequence of spectra such that $X_i$ is $k_i$-connective and $k_i \to \infty$. Then
\[
\bigoplus X_i = \prod X_i,\quad \nu_n \left(\bigoplus X_i\right) = \bigoplus \nu_n X_i = \prod \nu_n X_i.
\]
\end{cor}
\begin{proof}
The first part is standard. The equality $\nu_n \left(\bigoplus X_i\right) = \bigoplus \nu_n X_i$ follows from $\nu_n$ preserving finite coproducts and filtered colimits, hence infinite coproducts. To show that
\[
\bigoplus \nu_n X_i = \prod \nu_n X_i,
\]
we use the fact that $\Mod_{C\tau^n}$ is generated by shifts of $\{\nu_n P\}_{P \in \Sp_\HFp^{fp}}$ under colimits. So it suffices to show that
\[
\left[ \nu_n P, \bigoplus \nu_n X_i\right]^{*, *} =
\left[ \nu_n P, \prod \nu_n X_i\right]^{*, *}.
\]
Since $\nu_n P$ is compact, this is equivalent to showing that
\[
\bigoplus \left[ \nu_n P, \nu_n X_i\right]^{*, *} =
\prod \left[ \nu_n P, \nu_n X_i\right]^{*, *}.
\]
Since $\nu_n P$ is dualizable and $\nu_n DP \otimes \nu_n X_i = \nu_n (DP \otimes X_i)$, we may assume that $P = \S$. So we have to show that
\[
\bigoplus \pi_{*, *} \nu_n X_i = \prod \pi_{*, *} \nu_n X_i.
\]
This follows from the vanishing line of the previous lemma.
\end{proof}
\begin{lemma}\label{lemma:ass-converge}
If $X$ is any bounded below spectrum, then $\nu_n X$ is $\nu_n \HFp$-nilpotent complete in $\Mod_{C\tau^n}$. That is,
\[
\nu_n X \cong \varprojlim \nu_n (CB^\bullet({\HFp}) \otimes X).
\]
\end{lemma}
\begin{proof}
If $n < \infty$, this is \cite[Lemma A.12]{manifold-synthetic}, since limits in $\Mod_{C\tau^n}$ are computed in $\Syn_{\HFp}$.
If $n = \infty$, then by \cite[Propositions 5.4, 5.6]{synthetic}, we have $\nu_\infty X = \nu X_{\HFp}$. Since $X$ is bounded below, by \cite[Theorem 6.6]{localization-spectra}, we know that $X_{\HFp} = X_{\HFp}^{\wedge}$, the ${\HFp}$-nilpotent completion of $X$. By \cite[Proposition A.11]{manifold-synthetic}, we know that $\nu X_{\HFp}^{\wedge}$ is $\nu {\HFp} = \nu_\infty {\HFp}$-nilpotent complete.
\end{proof}
\begin{lemma}\label{lemma:y-tot}
Let $P \in \Sp^{ft}$. Then $\H^{(n)}$ preserves the limit
\[
\nu_n P \overset\sim\to \varprojlim \nu_n (CB^\bullet({\HFp}) \otimes P).
\]
\end{lemma}
\begin{proof}
Sifted colimits in $P_\Sigma^\Sp(h_n \M_{\HFp}^\op)$ are evaluated pointwise, so we evaluate both sides on $\nu_n M \in h_n \M_{\HFp}$. The left-hand side is
\[
(\H^{(n)} \nu_n P)(\nu_n M) = \tau_{[0, n)} F(P, M),
\]
while the right-hand side is given by
\[
\begin{aligned}
\varinjlim F_{\Mod_{C\tau^n}}(\nu_n CB^\bullet({\HFp}) \otimes P, \nu_n M) &= \varinjlim \tau_{[0, n)} F(CB^\bullet({\HFp}) \otimes P, M) \\
&= \varinjlim \tau_{[0, n)} F(P, F(CB^\bullet({\HFp}), M)).
\end{aligned}
\]
Since $M$ is an ${\HFp}$-module, the augmented simplicial object $F(CB^\bullet({\HFp}), M) \to M$ has extra degeneracies. So we are done.
\end{proof}
\subsection{Naturality of the comparison functor}\label{section:comp-nat}
Our ultimate goal is to use the comparison functor to compute the Adams differential, which is the connecting homomorphism in the long exact sequence associated to the cofiber sequence
\[
\Sigma^{0, -k} C\tau^{m - k} \to C\tau^m \to C\tau^k.\tag{$\dagger$}
\]
More precisely, we want to look at the long exact sequence induced by applying the functor $[X, (-)\otimes_{C\tau^n} Y]_{\Mod_{C\tau^n}}^{*, *}$ to the cofiber sequence, where $X, Y \in \Mod_{C\tau^n}^{ft}$.
Since $C\tau^m \otimes_{C\tau^n} Y$ is not in $\Mod_{C\tau^n}^{ft}$ when $m < n$, we cannot simply apply \Cref{thm:y-ff} to translate this to the world of $\A^{(n)}$-modules. Nevertheless, \Cref{thm:almost-compact} tells us we can instead apply $[D(-) \otimes_{C\tau^n} X, Y]^{*, *}_{\Mod_{C\tau^n}}$ to the sequence ($\dagger$) to obtain the same result.
Thus, we are motivated to compute $\H^{(n)}(DC\tau^m \otimes_{C\tau^n} X)$ in terms of $\H^{(n)}(X)$.
\begin{thm}\label{lemma:shift-ctau}
Let $m < n$. Then there is a natural transformation of $\A^{(n)}$-modules
\[
\eta\colon \A^{(m)} \otimes_{\A^{(n)}} \H^{(n)} X \to \H^{(n)} (DC\tau^m \otimes_{C\tau^n} X)
\]
which is an equivalence on the stable subcategory generated by $\{\nu_n Y\}_{Y \in \Sp}$. Moreover, when $X$ is of the form $\Sigma^a \nu_n Y$, the cofiber sequence induced by \textup{($\dagger$)} corresponds to the cofiber sequence induced by
\[
\begin{tikzcd}
\Sigma^{k, k} \A^{(m - k)} \ar[r, "\tau^k"] & \A^{(m)} \ar[r] & \A^{(k)}.
\end{tikzcd}
\]
\end{thm}
\begin{remark}
We expect the compatibility property to hold unconditionally. However, a proof eludes us.
\end{remark}
The first part naturally breaks into two lemmas.
\begin{lemma}\label{lemma:hn-nat-dual}
Let $m \leq n$. Then there is a natural equivalence of $\A^{(n)}$-modules
\[
\H^{(n)} (DC\tau^m \otimes_{C\tau^n} X) \cong \H^{(m)} (C\tau^m \otimes_{C\tau^n} X).
\]
\end{lemma}
Note that on the left-hand side, we are using the tensor product in $\Mod_{C\tau^n}$, whereas on the right, we are using the base change functor $\Mod_{C\tau^n} \to \Mod_{C\tau^m}$.
\begin{proof}
By \Cref{lemma:psigma-nat}, we can write the right-hand side as the presheaf
\begin{align*}
\H^{(m)}(C\tau^m \otimes_{C\tau^n} X)(\nu_n M) &= F_{\Mod_{C\tau^m}} (C\tau^m \otimes_{C\tau^n} X, C\tau^m \otimes_{C\tau^n} \nu_n M) \\
&= F_{\Mod_{C\tau^n}} (X, C\tau^m \otimes_{C\tau^n} \nu_n M) \\
&= F_{\Mod_{C\tau^n}} (DC\tau^m \otimes_{C\tau^n} X, \nu_n M) \\
&= \H^{(n)} (DC\tau^m \otimes_{C\tau^n} X)(\nu_n M),
\end{align*}
where the third equality uses \Cref{thm:almost-compact}.
\end{proof}
\begin{lemma}\label{lemma:hn-nat}
There is a natural transformation of $\A^{(m)}$-modules
\[
\A^{(m)} \otimes_{\A^{(n)}} \H^{(n)} X \to \H^{(m)} (C\tau^m \otimes_{C\tau^n} X)
\]
that is an equivalence on the stable subcategory generated by $\{\nu_n Y\}_{Y \in \Sp}$.
\end{lemma}
\begin{proof}
Taking the dual of $C\tau^n \to C\tau^m$ gives a map $DC\tau^m \to DC\tau^n = C\tau^n$. Since $\H^{(n)}$ is contravariant, this gives a map of $\A^{(n)}$-modules
\[
\H^{(n)} X \to \H^{(n)}(DC\tau^m \otimes_{C\tau^n} X) \cong \H^{(m)} (C\tau^m \otimes_{C\tau^n} X).
\]
The desired natural transformation is then the adjoint to this map.
One then observes that this is an equivalence when $X = \nu_n Y$, where both sides are the $m$-ary cohomology of $Y$.
\end{proof}
\begin{proof}[Proof of \Cref{lemma:shift-ctau}]
The first part follows from \Cref{lemma:hn-nat-dual,lemma:hn-nat}. As for the second part, tracing through the proof shows that the reduction map $C\tau^m \to C\tau^k$ always corresponds to the natural projection $\A^{(m)} \to \A^{(k)}$. The map $\tau^k \colon \Sigma^{0, -k} C\tau^{m - k} \to C\tau^m$ requires more work.
For brevity, we drop the subscripts in the tensor products. Then we have a commutative diagram
\[
\begin{tikzcd}
\Sigma^{k, k} \A^{(m - k)} \otimes \H^{(n)} X \ar[r, "\tau^k"] \ar[d, dashed] & \A^{(m)} \otimes \H^{(n)} X \ar[r] \ar[d, "\eta"] & \A^{(k)} \otimes \H^{(n)} X \ar[d, "\eta"] \\
\H^{(n)}(\Sigma^{0, k} DC\tau^{m - k} \otimes X) \ar[r, "\tau^k"] & \H^{(n)}(DC\tau^m \otimes X) \ar[r] & \H^{(n)}(DC\tau^k \otimes X)
\end{tikzcd}
\]
where the dashed vertical arrow is induced by the universal property of a cofiber sequence. Our goal is to show that the dashed vertical arrow is in fact $\eta$ when $X = \nu_n Y$.
In this case, $\eta$ is an equivalence, and the leftmost column is the $k$-connective cover of the middle column. Thus, there is a unique choice of the dashed arrow that makes the left-hand square commute. So it suffices to show that $\eta$ also makes the left-hand square commute.\footnote{We are trying to show that selecting the dashed map to be $\eta$ gives a map of cofiber sequences, which is \emph{a priori} stronger than showing that the two squares commute. In this special case, our argument shows that the latter is in fact sufficient.}
The trick is that we know $\H^{(n)}(D(-) \otimes X)$ sends the map
\[
\tau^k \colon \Sigma^{0, -k} C\tau^m \to C\tau^m
\]
to
\[
\tau^k \colon \Sigma^{k, k} \A^{(m)} \to \A^{(m)}.
\]
The maps labelled $\tau^k$ in the diagram above are related to these $\tau^k$ maps by the restriction maps $C\tau^m \to C\tau^{m - k}$ and $\A^{(m)} \to \A^{(m - k)}$, which $\H^{(n)}$ is also known to preserve. So in the diagram
\[
\begin{tikzcd}[column sep=1.4em]
\Sigma^{k, k} \A^{(m)} \otimes \H^{(n)} X \ar[r] \ar[d, "\eta"] & \Sigma^{k, k} \A^{(m - k)} \otimes \H^{(n)} X \ar[r, "\tau^k"] \ar[d, "\eta"] & \A^{(m)} \otimes \H^{(n)} X \ar[d, "\eta"] \\
\H^{(n)}(\Sigma^{0, k} DC\tau^m \otimes X) \ar[r] & \H^{(n)}(\Sigma^{0, k} DC\tau^{m - k} \otimes X) \ar[r, "\tau^k"] & \H^{(n)}(DC\tau^m \otimes X),
\end{tikzcd}
\]
we know both the large rectangle and the left square commute. Moreover, the left-hand square exhibits the middle column as the $(m-1)$-truncation of the leftmost column, and the rightmost column is $(m - 1)$-truncated. So the right-hand square must commute as well, and we are done.
\end{proof}
\section{Locally bigraded categories}\label{section:bigraded}
Famously, the product of the Adams $E_2$ page differs from the (usual) product of the $\Ext$ groups by a sign \cite[p. 196]{structure-applications}. The goal of this \namecref{section:bigraded} is to explain where this sign comes from. Even at the prime $2$, the sign is now important, since the secondary Steenrod algebra is an algebra over $\Z/4$, not $\Z/2$.
The main issue at hand is that $\Mod_{C\tau^n}$ and $\Mod_{\A^{(n)}}$ have \emph{two} suspension functors $\Sigma^{1, 0}$ and $\Sigma^{0, 1}$. To define the composition product, we need to choose natural equivalences $\Sigma^{a, b} \Sigma^{a', b'} \cong \Sigma^{a + a', b + b'}$ in a suitably coherent fashion. While the map $\H^{(n)} \colon \Mod_{C\tau^n} \to \Mod_{\A^{(n)}}$ preserves each suspension functor individually, it does \emph{not} preserve this coherence data.
In this \namecref{section:bigraded}, our goal is to develop a framework to keep track of these coherence data. In \Cref{section:locally-graded}, we warm up on the case where there is only one suspension functor, which is relatively straightforward. In \Cref{section:locally-bigraded}, we follow the template of \Cref{section:locally-graded} to study the bigraded case. In general, it is difficult to show that a functor preserves the coherence data. However, we will show that this is automatic if one of the suspensions is the categorical suspension and the ``obvious'' coherence data is used.
In \Cref{section:sign}, we explain how these choices affect sign rules in the presence of a symmetric monoidal structure. This motivates us to impose a non-obvious choice of coherence data on $\Mod_{C\tau^n}$, which $\H^{(n)}$ then fails to preserve.
\subsection{Locally graded categories}\label{section:locally-graded}
\begin{defi}
A locally graded category is a category $\C$ with an automorphism $[1] \colon \C \to \C$.
\end{defi}
\begin{eg}
The category $\Sp$ of spectra is a locally graded category with automorphism given by $X[1] = \Sigma X$.
\end{eg}
\begin{eg}
The category $\mathrm{Ab}_*$ of graded abelian groups is a locally graded category with automorphism given by $(X[1])_n = X_{n - 1}$.
\end{eg}
The structure of a locally graded category gives rise to graded mapping spaces. Let $\C$ be a locally graded category and $X, Y \in \C$. We can then define
\[
[X, Y]^t = [X[t], Y],
\]
where $[t]$ is the $t$-fold composition of $[1]$ (using the inverse if negative).
The graded mapping spaces inherit a composition operation
\[
[X, Y]^t \times [Y, Z]^s \to [X, Z]^{t + s}.
\]
To define this, given $f\in [X, Y]^t$ and $g \in [Y, Z]^s$, we shift $f$ to get a map
\[
f[s] \colon X[t][s] \cong X[t + s] \to Y[s],
\]
and then compose with $g \colon Y[s] \to Z$ to get a map $X[t + s] \to Z$. Crucially, this involves identifying $X[t][s] \cong X[t + s]$. This is easy, since both are given by iterating the functor $[1]$ $(t + s)$ many times. One might have to be a bit careful when $t$ or $s$ is negative, but it turns out not to be a problem.
Nevertheless, it is worth keeping track of these identifications ``properly'', which will be crucial in the bigraded case. To do so, we define locally graded categories in an ``unbiased'' way. That is, we provide functors $[t] \colon \C \to \C$ for every $t \in \Z$, together with a coherent choice of equivalences $[t] \circ [s] \cong [t + s]$. In other words, we want an $\E_1$-map $\Z \to \Aut(\C)$.
\begin{defi}
The category of locally graded categories is $\Cat^{B\Z}$.
\end{defi}
To reconcile the two definitions, let $S^1$ be the simplicial set given by identifying the endpoints of $\Delta^1$. Then there is an inclusion map $S^1 \hookrightarrow B\Z$ selecting $1 \in \Z$. One can check that the induced map $\Cat^{B\Z} \to \Cat^{S^1}$ is fully faithful with essential image given by those where $S^1$ selects an automorphism of the category. This then recovers our original definition of a locally graded category. Further, this lets us describe a morphism of locally graded categories as a functor $F \colon \C \to \D$ together with a natural equivalence $F[1]_\C \cong [1]_\D F$.
\subsection{Locally bigraded categories}\label{section:locally-bigraded}
A locally bigraded category is one where there are two compatible shift operators. There is now no obvious biased definition, so we head straight to the unbiased one, and then reverse-engineer the biased one afterwards.
\begin{defi}
The category of locally bigraded categories is $\Cat^{B\Z \times B\Z}$. Given a locally bigraded category, we write the action of $(a, b) \in \Z \times \Z$ as $\Sigma^{a, b}$.
\end{defi}
We then have bigraded mapping spaces
\[
[X, Y]^{a, b} = [\Sigma^{a, b} X, Y].
\]
As in the single-graded case, we have a fully faithful embedding $\Cat^{B\Z \times B\Z} \to \Cat^{S^1 \times S^1}$ whose essential image is given by the elements where $S^1 \times S^1$ selects automorphisms.
From this, we see that a local bigrading is given by two automorphisms $\Sigma^{1, 0}$ and $\Sigma^{0, 1}$ together with an equivalence
\[
\Sigma^{1, 0} \Sigma^{0, 1} \simeq \Sigma^{0, 1} \Sigma^{1, 0},
\]
which we call the swapping homotopy.
Given this data, we define $\Sigma^{a, b} = (\Sigma^{1, 0})^a (\Sigma^{0, 1})^b$. Then the identifications $\Sigma^{a, b} \Sigma^{a', b'} \cong \Sigma^{a + a', b + b'}$ are given by
\begin{align*}
\Sigma^{a, b} \Sigma^{a', b'} &\cong \Sigma^{a, 0} \Sigma^{0, b} \Sigma^{a', 0} \Sigma^{0, b'} \\
&\cong \Sigma^{a, 0} \Sigma^{a', 0} \Sigma^{0, b} \Sigma^{0, b'} \\
&\cong \Sigma^{a + a', 0} \Sigma^{0, b + b'} \\
&\cong \Sigma^{a + a', b + b'}
\end{align*}
In this chain, the second identification applies the swap map $a'b$ many times, and the rest are by definition.
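For example, for $(a, b) = (0, 1)$ and $(a', b') = (1, 0)$, the identification $\Sigma^{0, 1} \Sigma^{1, 0} \cong \Sigma^{1, 1}$ applies the swapping homotopy once, whereas the identification $\Sigma^{1, 0} \Sigma^{0, 1} \cong \Sigma^{1, 1}$ applies it zero times. In particular, if two local bigradings share the same shift functors but their swapping homotopies differ by a sign, then the resulting identifications $\Sigma^{a, b} \Sigma^{a', b'} \cong \Sigma^{a + a', b + b'}$ differ by $(-1)^{a'b}$.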
Informally, a morphism of locally bigraded categories is a functor that commutes with the two shifts and preserves the swapping homotopy. In practice, while it is easy to check that a functor is compatible with the shifts, it is difficult to show that it preserves the swapping homotopy --- we have to write down a 3-morphism to show that a certain cube commutes.
Since we need the identification $\Sigma^{a, b} \Sigma^{a', b'} \cong \Sigma^{a + a', b + b'}$ to define composition of bigraded mapping spaces, a functor that fails to preserve this identification will fail to preserve compositions between bigraded mapping spaces. Indeed, this is the source of the mismatch between the product in $\Ext$ and the product in the Adams $E_2$ page.
Fortunately for us, in all cases of interest, the bigrading is of a special form --- one of the shifts is given by the categorical suspension. This can be chosen functorially, which will relieve much of our pain.
To state this formally, let $\Stbl \subseteq \Cat$ be the category of stable $\infty$-categories and exact functors.
\begin{lemma}\label{lemma:canonical-suspension}
The projection $(\Stbl)^{B\Z} \to \Stbl$ has a section $\Sigma \colon \Stbl \to (\Stbl)^{B\Z}$ that selects the categorical suspension functor of each stable $\infty$-category.
\end{lemma}
This argument is due to Piotr Pstr\k{a}gowski.
\begin{proof}
We have to produce an automorphism of $1 \in \Fun(\Stbl, \Stbl)$. Under the Grothendieck construction, the identity functor is classified by the coCartesian fibration ${}_{\Sp^\omega \backslash} \Stbl \to \Stbl$, where $\Sp^\omega$ is the category of finite spectra. The desired automorphism is then given by precomposition with $\Sigma\colon \Sp^\omega \to \Sp^\omega$.
\end{proof}
Given any stable category $\C$ and an automorphism $\Sigma^{0, 1}$, there is a local bigrading where $\Sigma^{1, 0}$ is the categorical suspension and the swapping homotopy is the natural transformation witnessing the exactness of $\Sigma^{0, 1}$. This construction can be made functorial as follows:
\begin{defi}
Let $\Sigma \colon \Stbl \to (\Stbl)^{B\Z}$ be the suspension functor of \Cref{lemma:canonical-suspension}. Applying $(-)^{B\Z}$ to this gives a functor
\[
\Sigma^{B\Z}\colon (\Stbl)^{B\Z} \to (\Stbl)^{B\Z \times B\Z}.
\]
If $\Sigma^{0, 1} \colon \C \to \C$ is an automorphism of a stable category, we call the image under $\Sigma^{B\Z}$ the canonical local bigrading generated by $\Sigma^{0, 1}$.
\end{defi}
The key point is that if $F \colon \C \to \D$ is a morphism between locally graded stable categories, then it is automatically a morphism between the canonical locally bigraded categories. This obviates the need to consider $3$-morphisms.
\begin{eg}\label{eg:moda-bigrade}
Let $A$ be a graded algebra over $\Z$ and $\Mod_A$ be the $\infty$-category of graded modules over $A$. This has a shift functor $[1]\colon \Mod_A \to \Mod_A$ given by shifting the internal grading.
Recall that $\Mod_A$ is presented by the category $\Ch(A)$ of chain complexes over $A$. Then the categorical suspension functor on $\Ch(A)$ is given by shifting cohomological degrees, while the internal shift $[1]$ is given by shifting internal degrees. As functors between $1$-categories, these commute on the nose, and this gives the canonical bigrading.
\end{eg}
If we give both $\Mod_{C\tau^n}$ and $\Mod_{\A^{(n)}}$ the canonical local bigrading, then $\H^{(n)}$ will be a morphism of locally bigraded categories, and everything will be nice. \Cref{eg:moda-bigrade} suggests we should indeed give $\Mod_{\A^{(n)}}$ the canonical local bigrading, since this is what we get when computing with the model structure. In the next \namecref{section:sign}, we will explain why we should \emph{not} give $\Mod_{C\tau^n}$ the canonical local bigrading.
\subsection{Sign rules}\label{section:sign}
Often, the local bigrading comes from a symmetric monoidal structure. Let $\C$ be a symmetric monoidal category, and choose $\S^{1,0}, \S^{0, 1} \in \Pic(\C)$. We can then define bigraded spheres
\[
\S^{a, b} = (\S^{1, 0})^{\otimes a} \otimes (\S^{0, 1})^{\otimes b},
\]
and thus bigraded suspension functors
\[
\Sigma^{a, b} = \S^{a, b} \otimes (-).
\]
To formally define a local bigrading, we choose the two shift maps to be $\Sigma^{1, 0}$ and $\Sigma^{0, 1}$, and choose the swapping homotopy to be the one induced by the symmetric monoidal structure. We call this the symmetric monoidal bigrading.
\begin{lemma}
Suppose $\S^{1, 0} = \Sigma \mathbf{1}_\C$. Then the symmetric monoidal bigrading agrees with the canonical local bigrading generated by $\Sigma^{0, 1}$.
\end{lemma}
\begin{proof}
We have to show that for any $X, Y \in \C$, the following diagram commutes naturally:
\[
\begin{tikzcd}
\Sigma (X \otimes Y) \ar[r] \ar[d] & X \otimes \Sigma Y \ar[d] \\
\Sigma \mathbf{1} \otimes X \otimes Y \ar[r, "\sigma \otimes Y"] & X \otimes \Sigma \mathbf{1} \otimes Y.
\end{tikzcd}
\]
Then taking $X = \S^{0, 1}$, the top map gives the swapping homotopy of the canonical bigrading, while the bottom map gives that of the symmetric monoidal bigrading.
To show this, we show that the two diagonal compositions $\Sigma(X \otimes Y) \to X \otimes \Sigma \mathbf{1} \otimes Y$ are both equal to a third map
\[
g \colon \Sigma(X \otimes Y) \to \Sigma(X \otimes \mathbf{1} \otimes Y) \to X \otimes \Sigma \mathbf{1} \otimes Y.
\]
For the composite through the bottom-left, consider the diagram
\[
\begin{tikzcd}
X \otimes Y \ar[r] \ar[rd] & \mathbf{1} \otimes X \otimes Y \ar[r] \ar[d, "\sigma \otimes Y"] & * \ar[r] \ar[d, equals] & \Sigma \mathbf{1} \otimes X \otimes Y \ar[d, "\sigma \otimes Y"] \\
& X \otimes \mathbf{1} \otimes Y \ar[r] & * \ar[r] & X \otimes \Sigma \mathbf{1} \otimes Y.
\end{tikzcd}
\]
Here the left triangle commutes canonically by the definition of a symmetric monoidal category, while the rest of the diagram is a map of cofiber sequences obtained by applying the natural transformation $(-) \otimes X \otimes Y \to X \otimes (-) \otimes Y$ to the cofiber sequence $\mathbf{1} \to * \to \Sigma \mathbf{1}$.
This diagram gives two commutative diagrams of the form
\[
\begin{tikzcd}
X \otimes Y \ar[d] \ar[r] & * \ar[d] \\
* \ar[r] & X \otimes \Sigma \mathbf{1} \otimes Y,
\end{tikzcd}
\]
one via the top cofiber sequence and the other via the bottom one, which correspond to two maps $\Sigma (X \otimes Y) \to X \otimes \Sigma \mathbf{1} \otimes Y$. The one via the top sequence is the bottom-left composite, while the one via the bottom sequence is the map $g$. Since the diagram of cofiber sequences commutes, it follows that these two maps agree.
The top-right composite follows from a similar argument. Start with $\mathbf{1} \to * \to \Sigma \mathbf{1}$ and tensor with $Y$ on the right to get the commutative diagram of cofiber sequences
\[
\begin{tikzcd}
Y \ar[d] \ar[r] & * \ar[d, equals] \ar[r] & \Sigma Y \ar[d] \\
\mathbf{1} \otimes Y \ar[r] & * \ar[r] & \Sigma \mathbf{1} \otimes Y
\end{tikzcd}
\]
Next we tensor this whole diagram with $X$ on the left to get
\[
\begin{tikzcd}
X \otimes Y \ar[d] \ar[r] & * \ar[d, equals] \ar[r] & X \otimes \Sigma Y \ar[d] \\
X \otimes \mathbf{1} \otimes Y \ar[r] & * \ar[r] & X \otimes \Sigma \mathbf{1} \otimes Y
\end{tikzcd}
\]
Then the map through the top sequence is the top-right composite, while the one via the bottom sequence is $g$.
\end{proof}
\begin{remark}
The proof that the diagram commutes is, of course, entirely formal. In fact, it does not use that the tensor product preserves colimits; it only involves the colimit comparison map. Once one decides to prove the result in this generality, there is only one possible proof to write down.
\end{remark}
One checks that
\begin{lemma}
Suppose the composite
\[
\S^{2, 0} \cong \S^{1, 0} \otimes \S^{1, 0} \overset{\sigma}\to \S^{1, 0} \otimes \S^{1, 0} \cong \S^{2, 0}
\]
is multiplication by $\alpha \in \End(\S^{0, 0})$ and the corresponding one for $\S^{0, 1}$ is $\beta \in \End(\S^{0, 0})$. If we use the symmetric monoidal structure to identify $\S^{a + a', b + b'} \cong \S^{a, b} \otimes \S^{a', b'}$, then the composite
\[
\S^{a + a', b + b'} \cong \S^{a,b} \otimes \S^{a', b'} \overset{\sigma}\to \S^{a', b'} \otimes \S^{a, b} \cong \S^{a + a', b + b'}
\]
is multiplication by $\alpha^{aa'} \beta^{bb'}$.\fakeqed
\end{lemma}
Note that when we identify $\S^{a + a', b + b'} \cong \S^{a, b} \otimes \S^{a', b'}$, we have to move $\S^{a', 0}$ over $\S^{0, b}$, but the swap map $\sigma$ immediately moves it back. When determining sign rules of bigraded homotopy groups, the first move uses the homotopy from the definition of the bigrading, and the second uses the symmetric monoidal structure. For the symmetric monoidal bigrading, these agree, so they cancel out. If the two homotopies differ by $(-1)$, then we pick up an extra sign of $(-1)^{ab' + a'b}$.
\begin{lemma}
For $\Syn_E$, the multipliers for $\S^{1, -1}$ and $\S^{1, 0}$ are both $-1$.
\end{lemma}
\begin{proof}
The former is a general property of categorical suspension. The latter follows from the fact that $\nu \colon \Sp_E^{fp} \to \Syn_E$ is symmetric monoidal and $\nu \S^1 = \S^{1, 0}$.
\end{proof}
Under the canonical bigrading, we get a sign rule of $(-1)^{aa' + a'b + ab'}$, which is bizarre; a more natural sign rule is $(-1)^{aa'}$, which depends only on the stem and not the filtration. For example, under the sign rule of $(-1)^{aa' + a'b + ab'}$, both $h_0$ and $\tau$ multiplications anti-commute with elements in odd stems. To fix the sign rule, we insert a sign:
\begin{defi}[{\cite[Remark 4.10]{synthetic}}]
Viewing $\Syn_E$ as a category of sheaves over $\Sp_E^{fp}$, the Adams bigrading on $\Syn_E$ is generated by
\[
(\Sigma^{1, 0} X)(P) = X(\Sigma^{-1} P),\quad (\Sigma^{1, -1}X)(P) = \Sigma X(P),
\]
where the swap map is given by $(-1)$ times the canonical bigrading.
\end{defi}
This then results in a sign rule of $(-1)^{aa'}$.
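As a quick check, multiplication by $\tau \in [\S^{0, 0}, \S^{0, 0}]^{0, -1}$ has bidegree $(a', b') = (0, -1)$. Under the Adams sign rule $(-1)^{aa'}$ it commutes with every bigraded class, since its stem is zero, whereas under the canonical sign rule $(-1)^{aa' + a'b + ab'}$ it picks up a sign of $(-1)^{a}$ and hence anti-commutes with classes in odd stems, as remarked above.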
If we used the canonical bigrading on $\Syn_E$ (hence $\Mod_{C\tau^n}$), then since the functor $\H^{(n)} \colon \Mod_{C\tau^n} \to \Mod_{\A^{(n)}}^\op$ is exact, it is a map of locally bigraded categories, hence preserves the bigraded composition product. Since we decide to use the Adams bigrading on $\Syn_E$ instead, under the functor $\H^{(n)}$, composition products differ by a sign of $(-1)^{(-b')(a + b)} = (-1)^{s' t}$.
\part{Computing \texorpdfstring{$E_3$}{E3} page data}\label{part:secondary}
\section{Overview}
Following Baues, we write $\A = \A^{(1)}$ and $\B = \A^{(2)}$. The objective of this \namecref{part:secondary} is to understand how to do computations in $\Mod_{C\tau^2}$ via the fully faithful embedding
\[
\Mod_{C\tau^2}^{ft} \to \Mod_{\B}^\op.
\]
These computations will then be used to compute the Adams spectral sequence in \Cref{part:computation}.
We start with \Cref{section:modules}, where we seek to understand $\Mod_{\B}$ via a model category presentation. After describing the differential graded algebra $\B$ and studying some $\B$-modules in depth, we learn how to construct a cofibrant replacement of $\H^{(2)}X$ by lifting a free $\A$-resolution of $H^*X$.
In \Cref{section:e3}, we take this cofibrant replacement and use it to compute the data we sought, namely $d_2$ differentials, $\Mod_{C\tau^2}$ products and $\Mod_{C\tau^2}$ Massey products. The $d_2$ differentials are computed as the obstruction to lifting an $\Ext$ class over $\A$ to an $\Ext$ class over $\B$, while the $\Mod_{C\tau^2}$ products and Massey products are obtained by lifting chain maps and chain homotopies over $\A$ to ones over $\B$. Note that our algorithm to compute $d_2$ differentials ends up being identical to that of \cite{baues-e3}, but our proofs are independent (apart from the computation of $\B$ itself).
Finally, in \Cref{section:data}, we discuss our implementation of the algorithm. We will give an overview of the resulting dataset and provide instructions for reproducing the data. We then discuss the performance characteristics of our implementation to understand how the run time grows with the stem.
\section{The category of secondary Steenrod modules}\label{section:modules}
\subsection{The secondary Steenrod algebra}\label{section:nassau}
In this \namecref{section:nassau}, we explicitly describe the secondary Steenrod algebra as a differential graded algebra at the prime $2$. This was originally computed by Baues in \cite{baues-book}. To make use of his computations, we need to reconcile our definition with his.
\begin{thm}
$\B$ is equivalent to the secondary Steenrod algebra of \cite{baues-book}.
\end{thm}
\begin{proof}
By Morita theory, $\B$ is uniquely characterized by the fact that
\[
\Free_{\B} \cong h_2 \M_{\HFp}^\op.
\]
This was shown for Baues' secondary Steenrod algebra in \cite[Theorem 5.5.6]{baues-book} and ours in \Cref{cor:free-an}.
\end{proof}
Since $\B$ is a differential graded algebra, it admits multiple presentations as chain complexes. Baues' original presentation was large and unwieldy. In \cite{nassau-secondary}, Nassau discovered a smaller and simpler presentation of the secondary Steenrod algebra, which is what we shall regurgitate here.
Recall that the homotopy groups of $\B$ are given by
\[
\pi_* \B = \A[\tau] / \tau^2.
\]
In particular, they are concentrated in cohomological degrees $0$ and $1$. Thus, we can represent $\B$ as a $2$-term chain complex
\[
\B = \left(\begin{tikzcd}
\B_1 \ar[d, "d^\B"] \\
\B_0
\end{tikzcd}\right).
\]
The structure of being a differential graded algebra means $\B_0$ is a ring, $\B_1$ is a $\B_0$-$\B_0$-bimodule, and $d^\B$ is a bimodule homomorphism such that
\[
(d^\B a)b = a(d^\B b) \text{ for all }a, b \in \B_1.
\]
One should think of $\B_0$ as an enlargement of $\A$ so that certain products are not literally zero. For example, if we want the secondary cohomology operation $\langle \beta, \beta, -\rangle$ to ever be non-zero, the product $\beta \beta$ cannot vanish in $\B_0$; it must be killed by a non-trivial homotopy in $\B_1$. The $\B_0$-$\B_0$-bimodule structure on $\B_1$ then encodes various Massey product information.
By definition, $\B_1$ and $\B_0$ fit in a long exact sequence
\[
\begin{tikzcd}
0 \ar[r] & \pi_1 \B = \A\{\tau\} \ar[r] & \B_1 \ar[r, "d^\B"] & \B_0 \ar[r, "\pi^\B"] & \pi_0 \B = \A \ar[r] & 0.
\end{tikzcd}
\]
Our model of $\B$ admits the following crucial simplifying property:
\begin{lemma}\pushQED{\qed}
The long exact sequence splits as
\[
\B_1 \cong \ker \pi^\B \oplus \A\{\tau\},\quad |\tau| = 1.
\]
Further, this splitting is compatible with the right $\B_0$-action and the left $\ker \pi^\B$-action.\qedhere
\end{lemma}
Under this splitting, the left action of $\B_0$ on $\B_1$ necessarily takes the form
\[
a \cdot (r, p) = (ar, A(\pi^\B(a), r) + \pi^\B(a) p)
\]
for some function
\[
A \colon \A \otimes \ker \pi^\B \to \A\{\tau\}.
\]
One should think of this function $A$ as carrying the ``Massey product information'' in $\B$. For example, if $a, b, c \in \A$ are such that $ab = bc = 0$ and $\tilde{b}, \tilde{c} \in \B_0$ are lifts of $b, c$ respectively, then $A(a, \tilde{b}\tilde{c}) \in \langle a, b, c\rangle$.
\begin{eg}
In the secondary $\A(0)$ that we computed in \Cref{section:a0}, we had $\B_0 = \Z/4\{1, \beta\}$ and
\[
A(\beta, p) = \tau.
\]
\end{eg}
\begin{remark}
All the non-trivial information in $\B$ is contained in the function $A$. When choosing $\B_0$, we are mostly just trying to fatten $\A$ enough to make room for the non-trivial $A$. In Baues' original model, it was simply taken to be the free $\Z/4$-algebra on $\{\Sq^n\}_{n > 0}$.
\end{remark}
\begin{remark}
One can show that if $r \in \ker \pi^\B$ is in the center of $\B_0$, then $A(-, r)$ is a derivation on $\A$. When $r = 2$, the derivation $A(-, 2)$ is the Kristensen derivation, which sends $\Sq^n \mapsto \Sq^{n - 1}$.
\end{remark}
In the rest of the \namecref{section:nassau}, we will describe the ring $\B_0$ and the function $A$. We start with $\B_0$, which is in fact a Hopf algebra. As in the ordinary case, its dual admits a nice ``geometric'' description --- it is the Hopf algebra representing power series of the form
\[
f(x) = \sum_{k \geq 0} \xi_k x^{2^k} + \sum_{0 \leq k < l} 2 \xi_{k, l} x^{2^k + 2^l}
\]
under composition mod $4$. This gives a natural inclusion $\A_* \hookrightarrow (\B_0)_*$, whose dual defines our projection $\pi^\B \colon \B_0 \to \pi_0 \B$.
Explicitly, the Hopf algebra $(\B_0)_*$ is given by
\[
(\B_0)_* = \Z/4[\xi_k, 2 \xi_{k, l} \mid 0 \leq k < l, \xi_0 = 1]
\]
with the coproduct
\[
\begin{aligned}
\Delta \xi_n &= \sum_{i + j = n} \xi_i^{2^j} \otimes \xi_j + 2 \sum_{0 \leq k < l} \xi_{n - 1 - k}^{2^k} \xi_{n - 1 - l}^{2^l} \otimes \xi_{k, l}\\
\Delta \xi_{n, m} &= \xi_{n, m} \otimes 1 + \sum_{k \geq 0} \xi_{n - k}^{2^k} \xi_{m - k}^{2^k} \otimes \xi_{k + 1} \\
&\hphantom{= \xi_{n, m} \otimes 1}+ \sum_{0 \leq k < l} (\xi_{n - k}^{2^k} \xi_{m - l}^{2^l} + \xi_{m - k}^{2^k} \xi_{n - l}^{2^l}) \otimes \xi_{k, l}.
\end{aligned}
\]
That is, $(\B_0)_*$ is the sub-Hopf algebra of $\Z/4[\xi_k, \xi_{k, l}]$ generated by the $\xi_k$ and $2\xi_{k, l}$. The ring $\B_0$ is then given by $\Hom((\B_0)_*, \Z/4)$. It is generated by the following elements:
\begin{defi}
Define $\Sq(R)$ and $Y_{k, \ell}$ to be dual to $\xi^R$ and $2\xi_{k, \ell}$ under the monomial basis.\footnote{Our indexing differs slightly from Nassau's.}
\end{defi}
It is easy to check that
\begin{lemma}\pushQED{\qed}
$Y_{k, \ell} \Sq(R)$ is dual to $\xi^R \xi_{k, \ell}$ under the monomial basis. Further,
\[
\pi^\B(Y_{k, \ell}) = 0\text{ and } \pi^\B(\Sq(R)) = \Sq(R).\qedhere
\]
\end{lemma}
\begin{lemma}\pushQED{\qed}\label{lemma:y-prod-zero}
\[
Y_{*, *} Y_{*, *} = 2 Y_{*, *} = 0.\qedhere
\]
\end{lemma}
\begin{remark}
Since $2Y_{*, *} = 0$, we prefer to think of the $\Sq(R)$ in $Y_{k, \ell} \Sq(R)$ as an element of $\A$ instead of $\B_0$. Similarly, there is a left action of $\A$ on the $Y_{*, *} \Sq(*)$ terms.
\end{remark}
To describe the rest of the multiplication, we let $\daleth: \A_* \otimes \A \to \A$ be the contraction operator. In the Milnor basis, we have
\[
\daleth(\xi^R, \Sq(S)) = \Sq(S - R),
\]
where $\Sq(S - R)$ is zero if any entry is negative.
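For instance,
\[
\daleth(\xi_1, \Sq(n)) = \Sq(n - 1), \quad \daleth(\xi_2, \Sq(2, 1)) = \Sq(2), \quad \daleth(\xi_1^2, \Sq(1)) = 0,
\]
where the last contraction vanishes because $(1) - (2)$ has a negative entry.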
\begin{lemma}\pushQED{\qed}
If $a \in \A$, then
\[
a Y_{k, l} = \sum_{i, j \geq 0} Y_{k + i, l + j} \daleth(\xi_i^{2^k} \xi_j^{2^l}, a),
\]
where we set
\[
Y_{k, l} =
\begin{cases}
Y_{l, k} & k > l,\\
2 \Sq(\Delta_{k + 1}) & l = k.
\end{cases}\qedhere
\]
\end{lemma}
Here $\Delta_k$ is the sequence that is $1$ in the $\xi_k$ position and $0$ elsewhere.
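For example, commuting $\Sq(1)$ past $Y_{0, 1}$, the only non-vanishing contractions are those for $(i, j) = (0, 0)$ and $(1, 0)$, so that
\[
\Sq(1) Y_{0, 1} = Y_{0, 1} \Sq(1) + Y_{1, 1} = Y_{0, 1} \Sq(1) + 2 \Sq(\Delta_2).
\]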
It remains to determine the multiplication between the $\Sq(R)$. Recall the following definition in the multiplication of $\A$ under the Milnor basis:
\begin{defi}
Let $X = (x_{ij})$ be a matrix indexed on the non-negative integers. Define
\begin{gather*}
r_i(X) = \sum_j 2^j x_{ij},\quad s_j(X) = \sum_i x_{ij},\quad t_n(X) = \sum_{i + j = n} x_{ij}, \\
R(X) = (r_1(X), r_2(X), \ldots ),\quad S(X) = (s_1(X), \ldots ),\quad T(X) = (t_1(X), \ldots), \\
b(X) = \frac{\prod t_n!}{\prod x_{ij}!} = \prod_n \binom{t_n}{x_{n0}\; \cdots \; x_{0n}} \in \Z.
\end{gather*}
\end{defi}
\begin{thm}[{\cite[Theorem 4b]{milnor-steenrod}}]\pushQED{\qed}
We have
\[
\Sq(R) \Sq(S) = \sum_{\substack{R(X) = R\\ S(X) = S}} b(X) \Sq(T(X)).\qedhere
\]
\end{thm}
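For example, consider $\Sq(1)\Sq(1) = \Sq^1\Sq^1$. The constraints $r_1(X) = s_1(X) = 1$ force $x_{10} = x_{01} = 1$ and all other entries affecting $R$, $S$ and $T$ to vanish, so that $T(X) = (2)$ and
\[
b(X) = \frac{t_1!}{x_{10}!\, x_{01}!} = 2.
\]
Over $\F_2$ this recovers the familiar relation $\Sq^1 \Sq^1 = 0$; the coefficient $2$, however, survives mod $4$.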
Dualizing the secondary coproduct formula gives
\begin{thm}\pushQED{\qed}
\begin{multline*}
\Sq(R) \Sq(S) = \sum_{k \geq 0} \sum_{0 \leq m < n} Y_{m + k, n + k} \daleth(\xi_m^{2^k} \xi_n^{2^k}, \Sq(R)) \daleth(\xi_{k + 1}, \Sq(S))\\
+ \sum_{\substack{R(X) = R\\ S(X) = S}} b(X) \Sq(T(X)).
\end{multline*}\popQED
\end{thm}
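Continuing the example of $\Sq(1)\Sq(1)$, in the first sum only the term with $k = 0$ and $(m, n) = (0, 1)$ contributes, with both contractions $\daleth(\xi_0 \xi_1, \Sq(1))$ and $\daleth(\xi_1, \Sq(1))$ equal to $1$; all other terms vanish for degree reasons. With our indexing conventions, this gives
\[
\Sq(1) \Sq(1) = Y_{0, 1} + 2 \Sq(2)
\]
in $\B_0$. As a consistency check, applying $\pi^\B$ recovers $\Sq^1 \Sq^1 = 0$ in $\A$.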
This completes the description of $\B_0$. It remains to describe the function $A$.
\begin{lemma}\pushQED{\qed}
We have
\[
\begin{aligned}
A(a, 2) &= \daleth(\xi_1, a),\\
A(a, Y_{k, \ell}) &= \sum_{i, j \geq 0} Z_{k + i, \ell + j} \daleth(\xi_i^{2^k} \xi_j^{2^{\ell}}, a),\\
A(a, r \Sq(R)) &= A(a, r) \Sq(R),
\end{aligned}
\]
where
\[
Z_{k, \ell} =
\begin{cases}
0 & k < \ell,\\
\Sq(\Delta_k + \Delta_\ell) & k \geq \ell.
\end{cases}\qedhere
\]
\end{lemma}
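For example, $A(\Sq(n), 2) = \daleth(\xi_1, \Sq(n)) = \Sq(n - 1)$, which is the Kristensen derivation mentioned earlier; in particular, $A(\Sq^1, 2) = 1$, i.e.\ $A(\Sq^1, 2) = \tau$ as an element of $\A\{\tau\}$, matching the computation $A(\beta, p) = \tau$ in the secondary $\A(0)$ example above.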
\subsection{Periodic \texorpdfstring{$\B$}{B}-modules}\label{section:periodic}
Equipped with a description of $\B$, we can now describe the category $\Mod_\B$. This admits the expected model category presentation.
\begin{thm}
Let $A$ be a $\Z$-graded differential graded algebra and $\dgMod_A$ the $1$-category of differential graded modules over $A$. Then there is a model structure on $\dgMod_A$ where
\begin{enumerate}
\item fibrations are epimorphisms;
\item weak equivalences are homology isomorphisms; and
\item if $j \colon M \to N$ is a cofibration of graded $\Z$-chain complexes, then $A \otimes j\colon A \otimes M \to A \otimes N$ is a cofibration.
\end{enumerate}
Further, this model category presents $\Mod_A$, where we view $A$ as a graded $\E_1$-ring in $\Mod_\Z$.
\end{thm}
\begin{proof}
If $A$ is cofibrant as a chain complex over $\Z$, then this is \cite[Theorem 4.3.3.17]{ha}. Otherwise, let $A'$ be a cofibrant replacement of $A$ in $\Alg(\operatorname{Ch}(\Z))$. By \cite[Theorem 3.1]{schwede-shipley}, we know $A'$ is cofibrant as a $\Z$-chain complex. By \cite[Corollary 3.4]{dgm}, the base change adjunction $\dgMod_A \rightleftharpoons \dgMod_{A'}$ is a Quillen equivalence. So we are done.
\end{proof}
The goal of this \namecref{section:periodic} is to understand $\B$-modules of the form $\H^{(2)}X$. Instead of computing $\H^{(2)} X$ directly, our strategy is to start with $H^*X$ and use obstruction theory to classify $\B$-modules that look like potential candidates for $\H^{(2)} X$.
Recall from \Cref{lemma:hn-homotopy} that $\H^{(2)}X$ and $H^*X$ are related by the equations
\[
\pi_0 \H^{(2)} X = H^*X,\quad \pi_* \H^{(2)}X = (\pi_0 \H^{(2)}X) [\tau] / \tau^2.
\]
\begin{defi}
A $\B$-module $M$ is periodic if
\[
\pi_* M = \pi_0 M[\tau] /\tau^2
\]
as a $\pi_* \B$-module. We say $M$ is a lift of the $\A$-module $\pi_0 M$.
\end{defi}
We should think of the category of periodic $\B$-modules as the secondary version of the heart of $\Mod_{\A}$. In particular, it is a $2$-category.
\begin{thm}
Let $\bar{M}$ be an $\A$-module. Then the obstruction to lifting $\bar{M}$ to a periodic $\B$-module lies in $\Ext^{3, 1}_\A(\bar{M}, \bar{M})$.
If the obstruction vanishes, then the set of lifts is a torsor over $\Ext^{2, 1}_\A(\bar{M}, \bar{M})$.
\end{thm}
\begin{proof}
This follows from (the proof of) \cite[Theorem 4.9]{abstract-goerss-hopkins}.
\end{proof}
This obstruction theory has a natural interpretation. One can think of the ordinary cohomology groups $H^*X$ as encoding how one builds $X$ out of spheres, except we only remember maps up to filtration $1$. The secondary cohomology group then seeks to remember these attaching maps up to filtration $2$. If the attaching maps of $\bar{M}$ supported $d_2$ differentials, then it would be impossible to lift to a periodic $\B$-module, and this obstruction is captured in $\Ext^{3, 1}_\A(\bar{M}, \bar{M})$. If these obstructions vanish, then the set of ways to lift the filtration 1 maps to include filtration 2 information is then a torsor over $\Ext^{2, 1}_\A(\bar{M}, \bar{M})$.
In our case, if $\bar{M}$ is the cohomology of a spectrum, then we know a lift exists, namely the secondary cohomology of said spectrum. For many spectra of interest, the group $\Ext^{2, 1}_\A(\bar{M}, \bar{M})$ is trivial, so there is exactly one lift. Thus, any lift one can write down will work. When the group is non-trivial, one can show that if two lifts differ by $\chi \in \Ext^{2, 1}_\A(\bar{M}, \bar{M})$, then the $d_2$ differentials of the lifts differ by multiplication-by-$\chi$ (e.g.\ this follows from inspecting the $d_2$ algorithm we present later). To find the right lift, one will have to manually compute a small number of $d_2$'s.
In \Cref{section:resolution}, we will describe an explicit procedure to lift an $\A$-module by lifting its minimal free resolution. In this \namecref{section:periodic}, we will instead focus on understanding a few key examples of periodic $\B$-modules.
\begin{notation}
Let $M$ be a periodic $\B$-module. Then $M$ can be represented by a $2$-term chain complex. That is, it is zero outside of cohomological degrees $0$ and $1$. We will write this chain complex as
\[
M = \left(\begin{tikzcd}
M_1 \ar[d, "d^M"] \\
M_0
\end{tikzcd}\right).
\]
We write $\pi^M\colon M_0 \to \pi_0 M$ for the natural projection map.
\end{notation}
When manipulating periodic $\B$-modules, we often make use of the following property, which we have already seen for $\B$ itself:
\begin{lemma}
Let $M$ be a periodic $\B$-module. If $a \in \B_1$ and $m \in M_1$, then
\[
(d^\B a) m = a (d^M m).
\]
\end{lemma}
\begin{proof}
Since $am = 0$ for degree reasons, this follows from the Leibniz rule.
\end{proof}
We start with the simplest secondary Steenrod module, namely the secondary cohomology of the sphere. This is the natural analogue of $k = \F_p$.
\begin{defi}
We define $\kk = \H^{(2)}(\S)$.
\end{defi}
\begin{lemma}
We have
\[
\kk = \left(\begin{tikzcd}
\Z/p\{\mu_0\} \oplus \Z/p\{\tau\} \ar[d, "d"] \\
\Z/p^2
\end{tikzcd}\right),\quad\sidedeg
\]
The $\B$ action is given by
\[
\tilde{\beta} \mu_0 = \tau,
\]
where $\tilde\beta \in \B_0$ is any representative of $\beta \in \pi_0 \B = \A$ (as usual, $\beta = \Sq^1$ when $p = 2$). For degree reasons, there are no other possible non-trivial actions.
\end{lemma}
Note that $\mu_0$ is the null-homotopy of $p$, and the action encodes the fact that $\beta$ detects $p$.
\begin{proof}
Since $\Ext^{2, 1}_\A(k, k) = 0$, \emph{any} periodic $\B$-module lifting $k$ must be a model of $\kk$. Thus, the only work is to check that we described a valid $\B$-module structure, which is straightforward.
\end{proof}
The other family of periodic $\B$-modules we are interested in is the cohomology of Eilenberg--MacLane spectra, i.e.\ free $\B$-modules.
\begin{defi}
A free $\B$-module is a direct sum of internal degree shifts of $\B$.
\end{defi}
Note that we consider the choice of generators part of the structure of a free $\B$-module.
Recall that $\B_1$ admits a splitting
\[
\B_1 = \ker \pi^\B \oplus \pi_0 \B\{\tau\}.
\]
Thus, every free module $M$ also comes with a standard\footnote{We shall use ``standard'' to refer to something that results from a concrete but non-canonical choice we have made.} splitting
\[
M_1 \cong \ker \pi^M \oplus \pi_0 M\{\tau\}.
\]
We refer to these two components as the $\ker \pi$ component and the $\tau$ component respectively. Of course, we also have such a splitting in the case of $\kk$ by our explicit description.
We now turn to homomorphisms between periodic $\B$-modules. Since we are going to take free resolutions of periodic $\B$-modules, we are only interested in $\B$-module homomorphisms out of free modules. One immediately sees that
\begin{lemma}
Let $M$ be a free $\B$-module and $N$ any $\B$-module. Then the natural map
\[
[M, N]_{\B} \to [\pi_0 M, \pi_0 N]_{\A}
\]
is a bijection.\fakeqed
\end{lemma}
In other words, given a map $\pi_0 M \to \pi_0 N$, there is a unique lift to a map $M \to N$ up to homotopy. While we do not get a well-defined chain map $M \to N$, we can fix some choices once and for all. We fix sections
\[
\begin{aligned}
\sigma^\B \colon \A = \pi_0 \B &\to \B_0\\
\sigma^\kk \colon k = \pi_0 \kk &\to \kk_0.
\end{aligned}
\]
of $\pi^\B$ and $\pi^\kk$ as functions between sets. These naturally extend to sections for free modules as well.
With these choices, if we have a map $f\colon \pi_0 M \to \pi_0 N$ where $N$ is either free or $\kk$, then we get a standard lift to a chain map $\tilde{f}\colon M \to N$, which we can depict as
\[
\begin{tikzcd}
M_1 \ar[d, "d^M"] \ar[r, "f_1"] & N_1 \ar[d, "d^N"] \\
M_0 \ar[r, "f_0"] & N_0.
\end{tikzcd}
\]
Given chain maps $f, g$, a homotopy between them is a $\B_0$-module map $h\colon M_0 \to N_1$ such that
\[
f_0 - g_0 = d^N h,
\]
i.e.\ a lift of the difference along $d^N$.\footnote{The definition of a chain map requires $f_1 - g_1 = h d^M$ as well. However, if $M$ is free, then this is automatic. Indeed, any element in $M_1$ is of the form $am$, where $a \in \B_1$ and $m \in M_0$. Then
\[
hd^M (am) = h(d^\B(a) m) = d^\B(a) h(m) = a d^N(h(m)) = a(f_0 - g_0)(m) = (f_1 - g_1)(am).
\]
Here the last equality uses the fact that $f$ and $g$ are maps of $\B$-modules, while the third equality uses the crucial relation $a (dm) = (da) m$. This will be a common theme in our future manipulations, where the $(-)_1$ version of the equation we have to satisfy follows formally from the $(-)_0$ version when the source is free. The proof is largely similar and we will not comment further.} In practice, to specify a homotopy, we need not specify all of $h$. Since $d^N$ is injective when restricted to the $\ker \pi$ component of $N_1$, this component of $h$ must be equal to $f_0 - g_0$ itself. The freedom in choosing the homotopy lies in the $\tau$ component. Thus, given our choices, we can identify a homotopy with a map $M_0 \to \pi_0 N\{\tau\}$, which necessarily factors through $\pi^M$ to give a map $\pi_0 M \to \pi_0 N\{\tau\}$.
\begin{remark}
In general, the space of homotopies is canonically a torsor over $\Hom_\A(\pi_0 M, \pi_0 N\{\tau\})$. After making all our choices, we have found a basepoint for this space, namely the homotopy with trivial $\tau$ component. We can then identify the space of homotopies with $\Hom_\A(\pi_0 M, \pi_0 N\{\tau\})$ itself.
\end{remark}
\subsection{Free resolutions}\label{section:resolution}
We are now ready to construct a free resolution of a periodic $\B$-module $M$, which will give us a cofibrant replacement of $M$ in $\dgMod_\B$.
We start by taking a free resolution $\bar{P}^\bullet \to \pi_0 M$ of $\pi_0 M$. As usual, each $\bar{P}^{(s)}$ is a free $\A$-module with a fixed choice of generators.
The previous \namecref{section:periodic} gives us a lift of this to a sequence of free $\B$-modules
\[
\begin{tikzcd}
\cdots \ar[r] & P^{(3)} \ar[r, "\partial^{(3)}"] & P^{(2)} \ar[r, "\partial^{(2)}"] & P^{(1)} \ar[r, "\partial^{(1)}"] & P^{(0)} \ar[r, "\epsilon"] & M
\end{tikzcd}
\]
such that the composites of successive maps are homotopic to zero. This alone does not let us assemble this into a cofibrant replacement in $\dgMod_\B$. What we need is a suitable choice of null-homotopy of each composite $\partial^{(k - 1)} \partial^{(k)}$.
\begin{defi}[{\cite[Definition 3.1]{baues-e3}}]
A secondary chain complex is a sequence
\[
\begin{tikzcd}
\cdots \ar[r] & P^{(3)} \ar[r, "\partial^{(3)}"] & P^{(2)} \ar[r, "\partial^{(2)}"] & P^{(1)} \ar[r, "\partial^{(1)}"] & P^{(0)}
\end{tikzcd}
\]
of periodic $\B$-modules together with specified null-homotopies of $\partial^{(k - 1)} \partial^{(k)}$, such that all three-fold Massey products $\langle \partial^{(k - 2)}, \partial^{(k - 1)}, \partial^{(k)} \rangle$ vanish.
\end{defi}
Writing each module out as a 2-term chain complex itself, we can expand this to a diagram
\[
\begin{tikzcd}[row sep = large, column sep = large]
\cdots \ar[r] \ar[d] & P^{(3)}_1 \ar[r, "\partial^{(3)}_1"] \ar[d, "d^{(3)}"] & P^{(2)}_1 \ar[r, "\partial^{(2)}_1"] \ar[d, "d^{(2)}"] & P^{(1)}_1 \ar[r, "\partial^{(1)}_1"] \ar[d, "d^{(1)}"] & P^{(0)}_1 \ar[d, "d^{(0)}"]\\
\cdots \ar[r] \ar[urr, gray!50!black, pos=0.4] & P^{(3)}_0 \ar[r, "\partial^{(3)}_0"] \ar[urr, "h^{(3)}", gray!50!black, pos=0.4] & P^{(2)}_0 \ar[r, "\partial^{(2)}_0"] \ar[urr, "h^{(2)}", gray!50!black, pos=0.4] & P^{(1)}_0 \ar[r, "\partial^{(1)}_0"] & P^{(0)}_0.
\end{tikzcd}
\]
The condition of being a secondary chain complex is then
\[
\begin{aligned}
d^{(s - 1)} \partial_1^{(s)} &= \partial_0^{(s)} d^{(s)},\\
\partial_0^{(s - 1)} \partial_0^{(s)} = d^{(s - 2)} h^{(s)}, &\quad\;\, \partial_1^{(s - 1)} \partial_1^{(s)} = h^{(s)} d^{(s)},\\
h^{(s - 1)} \partial^{(s)}_0 &= \partial^{(s - 2)}_1 h^{(s)}.
\end{aligned}
\]
These say, respectively, that
\begin{itemize}
\item Each $\partial^{(s)}$ is a chain map;
\item $h^{(s)}$ is a null-homotopy of $\partial^{(s - 1)} \partial^{(s)}$; and
\item The bracket $\langle \partial^{(s - 2)}, \partial^{(s - 1)}, \partial^{(s)}\rangle$ vanishes.
\end{itemize}
Note that when the source is free, the equations $d\partial_1 = \partial_0 d$ and $\partial_1 \partial_1 = hd$ are implied by the others by $\B$-linearity.
Having chosen such homotopies, we can now define
\begin{defi}
Let $P^{\bullet}$ be a secondary chain complex. Then the total chain complex $\Tot(P^{\bullet})$ is the chain complex\footnote{
There are many possible choices of sign when forming the total chain complex. The choice of sign here is motivated by two concerns:
\begin{itemize}
\item The inclusion map $P^{(0)} \hookrightarrow \Tot(P^\bullet)$ should be given by the obvious inclusion. This precludes the last differential from being $\begin{pmatrix} \partial_0^{(1)} & -d^{(0)}\end{pmatrix}$, which is a more common sign convention.
\item Applying $\A \otimes_{\B}(-)$ to the total chain complex should yield the same complex as $\A \otimes_{\B}(-)$ applied to the secondary chain complex. This requires the top-left entry to be $\partial_0^{(s)}$ instead of $-\partial_0^{(s)}$.
\end{itemize}
Of course, these choices are immaterial, but we believe our choice of signs makes it slightly easier to reason about various factors.
}
\[
\setlength\arraycolsep{1pt}
\begin{tikzcd}[column sep = 4.7em, ampersand replacement=\&]
P_0^{(3)} \oplus P_1^{(2)} \ar[r, "{\begin{pmatrix}\partial^{(3)}_0 & d^{(2)} \\ -h^{(3)} & -\partial^{(2)}_1 \end{pmatrix}}"] \& P_0^{(2)} \oplus P_1^{(1)} \ar[r, "{\begin{pmatrix}\partial^{(2)}_0 & d^{(1)} \\ -h^{(2)} & -\partial^{(1)}_1 \end{pmatrix}}"] \& P_0^{(1)} \oplus P^{(0)}_1 \ar[r, "{\begin{pmatrix} \partial_0^{(1)} & d^{(0)} \end{pmatrix}}"] \& P_0^{(0)}.
\end{tikzcd}
\]
\end{defi}
One readily checks that the conditions of being a secondary chain complex translate to the totalization being a chain complex, and that the natural $\B$-module structure gives it a structure of a differential graded module over $\B$.
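Concretely, the composite of two successive differentials of the total complex is
\[
\setlength\arraycolsep{2pt}
\begin{pmatrix}\partial^{(s - 1)}_0 & d^{(s - 2)} \\ -h^{(s - 1)} & -\partial^{(s - 2)}_1 \end{pmatrix}
\begin{pmatrix}\partial^{(s)}_0 & d^{(s - 1)} \\ -h^{(s)} & -\partial^{(s - 1)}_1 \end{pmatrix}
=
\begin{pmatrix}
\partial^{(s - 1)}_0 \partial^{(s)}_0 - d^{(s - 2)} h^{(s)} & \partial^{(s - 1)}_0 d^{(s - 1)} - d^{(s - 2)} \partial^{(s - 1)}_1 \\
\partial^{(s - 2)}_1 h^{(s)} - h^{(s - 1)} \partial^{(s)}_0 & \partial^{(s - 2)}_1 \partial^{(s - 1)}_1 - h^{(s - 1)} d^{(s - 1)}
\end{pmatrix},
\]
and the four entries vanish by exactly the four conditions of a secondary chain complex displayed above.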
\begin{thm}
If each $P^{(k)}$ is cofibrant, then so is $\Tot(P^{\bullet})$.
\end{thm}
\begin{proof}
Let $F_s \Tot(P^\bullet)$ be the subcomplex consisting of the $P^{(k)}$ terms with $k \leq s$. Then $\Tot(P^{\bullet}) = \colim F_s \Tot(P^{\bullet})$. So it suffices to show that $F_{s - 1} \Tot(P^{\bullet}) \to F_s \Tot(P^{\bullet})$ is a cofibration. But this fits in the pushout diagram
\[
\begin{tikzcd}
S^{s - 1} \otimes P^{(s)} \ar[r] \ar[d] & F_{s - 1} \Tot(P^{\bullet}) \ar[d] \\
D^s \otimes P^{(s)} \ar[r] & F_s \Tot(P^{\bullet})
\end{tikzcd}
\]
and the left-hand map is a cofibration. So we are done.
\end{proof}
Filtering by cohomological degree gives a spectral sequence
\begin{lemma}\pushQED{\qed}
If $P^{\bullet}$ is a secondary chain complex, then there is a spectral sequence
\[
E^{p, q}_2 = H^p(\pi_q P^{\bullet}) \implies \pi_{p + q} \Tot(P^\bullet).\qedhere
\]
\end{lemma}
In particular,
\begin{cor}
If $\bar{P}^\bullet \to \pi_0 M$ is a free resolution and $P^\bullet \to M$ a lift to a secondary chain complex, then the induced map $\Tot(P^\bullet) \to M$ is a weak equivalence.\fakeqed
\end{cor}
This leaves the question of how one can construct the secondary chain complex in the first place. We start with the sequence
\[
\begin{tikzcd}
\cdots \ar[r] & P^{(3)} \ar[r, "\partial^{(3)}"] & P^{(2)} \ar[r, "\partial^{(2)}"] & P^{(1)} \ar[r, "\partial^{(1)}"] & P^{(0)} \ar[r, "\epsilon"] & M
\end{tikzcd},
\]
and our goal is to choose homotopies inductively to satisfy
\[
\partial^{(s - 2)}_1 h^{(s)} = h^{(s - 1)} \partial^{(s)}_0 .
\]
To simplify notation, we assume $M_1$ admits a splitting into a $\ker \pi$ component and a $\tau$ component as well; the argument goes through with slight modifications in the general case.
The first potential non-zero homotopy is $h^{(1)}$, which we can choose arbitrarily, since the equation it has to satisfy takes values in the zero group. Inductively, assume we have made valid choices of $h^{(k)}$ for $k < s$.
As previously discussed, the $\ker \pi$ component of $h^{(s)}$ is forced to be $\partial_0^{(s - 1)} \partial_0^{(s)}$, and we have the freedom to choose the $\tau$ component, which we call $h^{(s)}_\tau$.
Let $g$ be a generator of $P^{(s)}$, and write
\[
\bar{\partial}^{(s)} g = \sum \alpha^i g_i,
\]
where $\bar{\partial}$ is the differential of $\bar{P}^\bullet$ and $\{g_i\}$ is a set of generators of $P^{(s - 1)}$. We can then write the $\tau$ component of the equation as
\[
\bar{\partial}^{(s - 2)} h_\tau^{(s)} g = \sum \left(\alpha^i h^{(s - 1)}_\tau (g_i) + A\left(\alpha^i, \partial_0^{(s - 2)} \partial_0^{(s - 1)}g_i\right) \right) \equiv t_g.
\]
Thus, we want to choose $h^{(s)}_\tau g$ to be a lift of $t_g$ along $\bar{\partial}^{(s - 2)}$.
\begin{lemma}
For any valid choice of $h^{(k)}$ for $k < s$, the equation can be solved.
\end{lemma}
Thus, we can choose the homotopies iteratively.
\begin{proof}
By exactness, we need to check that $\bar{\partial}^{(s - 3)} t_g = 0$. Informally, this follows from the Toda bracket manipulation
\[
\partial^{(s - 3)} \langle \partial^{(s - 2)}, \partial^{(s - 1)}, \partial^{(s)}\rangle = \langle \partial^{(s - 3)}, \partial^{(s - 2)}, \partial^{(s - 1)} \rangle \partial^{(s)} = 0.
\]
In more detail, we can identify $\bar\partial^{(s - 3)} t_g$ as the $\tau$ component of $\partial_1^{(s - 3)} h^{(s - 1)} \partial_0^{(s)}$. It is convenient to temporarily pick an arbitrary (and immaterial) value for $h^{(s)}_\tau$, so that $h^{(s)}$ is a null-homotopy of $\partial^{(s - 1)} \partial^{(s)}$. Then we have
\[
\partial_1^{(s - 3)} h^{(s - 1)} \partial_0^{(s)} = h^{(s - 2)} \partial_0^{(s - 1)}\partial_0^{(s)} = h^{(s - 2)} d^{(s - 2)} h^{(s)} = \partial_1^{(s - 3)} \partial_1^{(s - 2)} h^{(s)}.
\]
The $\tau$ component of the right-hand side is $\bar\partial^{(s - 3)} \bar\partial^{(s - 2)} h^{(s)}_\tau$, which vanishes for any choice of $h^{(s)}_\tau$. So we are done.
\end{proof}
In practice, we are rarely provided with a description of $M$ itself, but just the Steenrod module $\bar{M} \equiv \pi_0 M$. In this case, the strategy is to lift the chain complex $\bar{P}^\bullet$ itself, without the augmentation to $\bar{M}$. The above argument applies for most of the chain complex, except at the very beginning, where the chain complex is no longer exact.
Thus, we need to carefully choose $h^{(2)}_\tau$ such that the lift $h^{(3)}_\tau$ can be made. This is not always possible, and one checks that the obstruction to doing so lives in $\Ext^{3, 1}_\A(\bar{M}, \bar{M})$. If we manage to choose such an $h^{(2)}_\tau$, we can perform the rest of the inductive procedure and obtain a secondary chain complex $P^\bullet$. Finally, the spectral sequence implies that $\Tot(P^\bullet)$ is a lift of $\bar M$ to a periodic $\B$-module.
In general, different choices of $h^{(2)}_\tau$ will lead to different lifts of $\bar M$, and one can check directly that the set of choices is a torsor over $\Ext^{2, 1}_\A(\bar{M}, \bar{M})$. In practice, we often work in the case where there is a unique lift, and further, for degree reasons, \emph{any} choice of $h^{(2)}_\tau$ allows for a lift. Then we simply choose $h^{(2)}_\tau = 0$.
\begin{remark}
When implementing this algorithm, the most expensive steps are evaluating $\partial_0^{(s - 2)} \partial_0^{(s - 1)}$, and then applying $A$ to it. Both of these steps are fully parallelizable with no data dependencies, so can be computed in a scalable and distributed fashion. Note that the composite $\partial_0^{(s - 2)} \partial_0^{(s - 1)}$ will be used again when computing products, and should not be discarded after applying $A$.
\end{remark}
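As a proof of concept, the chunked split can be written with scoped threads from the Rust standard library. The helpers below are hypothetical stand-ins for the real routines, which operate on sparse vectors over the Milnor basis:
\begin{verbatim}
use std::thread;

// Stand-ins for the real data types and routines.
type Elt = Vec<u64>;
fn compose_d0_d0(g: usize) -> Elt { vec![g as u64] }
fn apply_a(dd: &Elt) -> Elt { dd.clone() }

// Evaluate (d0 d0 g, A(d0 d0 g)) for every generator g of P^(s),
// splitting the generators across `nthreads` (>= 1) OS threads.
// There are no data dependencies between generators, so a plain
// chunked split suffices.
fn parallel_a_terms(num_gens: usize, nthreads: usize) -> Vec<(Elt, Elt)> {
    let mut out = vec![(Elt::new(), Elt::new()); num_gens];
    let chunk = ((num_gens + nthreads - 1) / nthreads).max(1);
    thread::scope(|s| {
        for (i, slot) in out.chunks_mut(chunk).enumerate() {
            s.spawn(move || {
                for (j, entry) in slot.iter_mut().enumerate() {
                    let dd = compose_d0_d0(i * chunk + j);
                    let a = apply_a(&dd);
                    // Keep dd: it is reused when computing products.
                    *entry = (dd, a);
                }
            });
        }
    });
    out
}
\end{verbatim}
In the actual implementation the unit of work and the data structures are of course different, but the dependency structure is exactly this.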
\section{Computing the Adams \texorpdfstring{$E_3$}{E3} page}\label{section:e3}
Now that we can compute cofibrant replacements, we can, in theory, calculate anything we want in $\Mod_{C\tau^2}^{ft}$. In this \namecref{section:e3}, we will describe in detail the procedure for computing various useful sets of data. To simplify the presentation, we shall focus on obtaining data useful for computing the Adams spectral sequence for $\pi_{*, *} X$. The general case of computing $[X, Y]$ is not too much more difficult, but involves slightly more linear algebra\footnote{While it is true that $[X, Y] = \pi_{*}(DX \otimes Y)$ when $X$ is finite, this trick is not so useful in practice, since we lose the composition product structure under this transformation.}.
In this \namecref{section:e3}, we will assume that the free $\A$-resolution $\bar{P}^\bullet \to H^*X$ is a minimal resolution, i.e.\ $\bar{P}^\bullet \otimes_\A k$ has trivial differential. In this case, $\Hom_\A(\bar{P}^\bullet, k)$ also has trivial differential, so an element in $\Ext^{s, t}_\A(H^*X, k)$ is exactly a map $\bar{P}^{(s)} \to k[t]$. Thus, a choice of generators of each $\bar{P}^{(s)}$ also grants $\Ext^{s, t}_\A(H^*X, k)$ an $\F_p$-vector space basis.
\begin{remark}
This \namecref{section:e3} is better seen as a technical documentation of the algorithm rather than a piece of mathematical exposition. Most of the content involves explicitly writing out large formulas that have to be satisfied and then observing that we can indeed iteratively make choices to satisfy these equations.
\end{remark}
\subsection{Computing \texorpdfstring{$d_2$}{d2}}\label{section:d2}
We begin by computing the $d_2$ differential in the Adams spectral sequence of $X$.
\begin{lemma}
We can read off the $d_2$ differential of $X$ from a minimal free resolution of $\H^{(2)} X$.
\end{lemma}
\begin{proof}
Recall that the $d_2$ differential is given by the connecting homomorphism of the cofiber sequence
\[
\Sigma^{0, -1} C\tau \otimes \nu X \to C\tau^2 \otimes \nu X \to C\tau \otimes \nu X
\]
in $\Syn_\HFp$. More precisely, it is obtained by applying $\pi_{*, *}$ to the connecting homomorphism of this cofiber sequence.
By \Cref{lemma:unique-lift}, the cofiber sequence lifts uniquely to a sequence in $\Mod_{C\tau^2}$, and then the $d_2$ differential is induced by applying $[C\tau^2, -]^{*, *}_{\Mod_{C\tau^2}}$ to the connecting homomorphism.
By \Cref{lemma:shift-ctau}, this is equivalent to applying $[\H^{(2)}X, -]^{*, *}$ to the sequence
\[
\Sigma k[1] \to \kk \to k.
\]
This connecting homomorphism can be computed as in the proof of the snake lemma, i.e.\ as the obstruction to lifting a map $\H^{(2)}X \to k$ along the projection $\kk \to k$.
Explicitly, let $P^{\bullet} \to \H^{(2)}X$ be a minimal free secondary resolution lifting $\bar{P}^\bullet \to H^*X$. An element in $\Ext^{s, t}_\A(H^*X, k)$ is represented by a map $x\colon \bar{P}^{(s)} \to k[t]$, which lifts to a map $\tilde{x} \colon P^{(s)} \to \kk[t]$ using the section $\sigma^\kk$ we have chosen.
We can try to make this into a map of chain complexes
\[
\begin{tikzcd}[column sep=huge, row sep = huge, ampersand replacement=\&]
P_0^{(s + 2)} \oplus P_1^{(s + 1)} \ar[d, "{\begin{pmatrix}\partial^{(s + 2)}_0 & d^{(s + 1)} \\ -h^{(s + 2)} & -\partial^{(s + 1)}_1 \end{pmatrix}}"'] \ar[r] \& 0 \ar[d]\\
P_0^{(s + 1)} \oplus P_1^{(s)} \ar[d, "{\begin{pmatrix}\partial^{(s + 1)}_0 & d^{(s)} \\ -h^{(s + 1)} & -\partial^{(s)}_1 \end{pmatrix}}"'] \ar[r, "{\begin{pmatrix} 0 & x_1 \end{pmatrix}}"] \& \kk_1[t] \ar[d, "d^\kk"] \\
P_0^{(s)} \oplus P^{(s - 1)}_1 \ar[r, "{\begin{pmatrix} x_0 & 0 \end{pmatrix}}"] \& \kk_0[t]
\end{tikzcd}
\]
where $x_0$ and $x_1$ are the components of $\tilde{x}$. The minimality of the resolution ensures that $x_0 \partial_0^{(s + 1)}$ is literally zero as a map of modules, so the bottom square commutes.\footnote{If the resolution is not minimal, then we get a non-trivial map $P_0^{(s + 1)} \to \kk_1[t]$ representing a null-homotopy of $x_0 \partial_0^{(s + 1)}$, and we have to adjust the upcoming argument accordingly.}
On the other hand, the top square need not commute. Again, minimality ensures the $P_1^{(s + 1)}$ component of the map is automatically fine, and the $P_0^{(s + 2)}$ component is the map
\[
-x_1 h^{(s + 2)} \colon P_0^{(s + 2)} \to \kk_1[t].
\]
The commutativity of the bottom square ensures this is in the kernel of $d^\kk$, so it factors through $\pi_1 \kk[t] = k[t + 1]$. Concretely, this map is given by $-x h^{(s + 2)}_\tau \colon P_0^{(s + 2)} \to k[t + 1]$, which represents an element in $\Ext^{s + 2, t + 1}_\A(H^*X, k)$.
This obstruction is exactly the connecting homomorphism of the cofiber sequence $\Sigma k[1] \to \kk \to k$. Hence $d_2(x) = x h_\tau^{(s + 2)}$ in the Adams spectral sequence.
\end{proof}
When working with synthetic spectra, computing the $E_3$ page involves more than computing the $d_2$ differential, which is the connecting homomorphism of
\[
\Sigma^{0, -1} C\tau \otimes \nu X \to C\tau^2 \otimes \nu X \to C\tau \otimes \nu X.
\]
Instead, we want to compute $\pi_{*, *} C\tau^2 \otimes \nu X$. Since we have already computed the connecting homomorphism, it remains to solve the extension problem. Fortunately, this is reasonably straightforward --- multiplication by $p$ is detected by $h_0$.
To state our result, we recall the \emph{carrying cocycle} \cite{arithmetic}:
\begin{notation}
The \emph{carrying cocycle} $x \mathbin{\tilde{+}} y$ of $\sigma^\kk$ is defined by
\[
\sigma^\kk(x) + \sigma^\kk(y) = \sigma^\kk(x + y) + p (x \mathbin{\tilde{+}} y)
\]
for $x, y \in \F_p$. This naturally extends to a function $V \times V \to V$ for any $\F_p$-vector space $V$ with a basis.
\end{notation}
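For instance, if $\sigma^\kk$ sends $x \in \F_p$ to its least non-negative representative (an assumption; any set-theoretic section gives some carrying cocycle), then $x \mathbin{\tilde{+}} y$ is the usual carry digit:
\begin{verbatim}
/// Carrying cocycle of the section F_p -> Z/p^2 that sends x to its
/// representative in {0, ..., p-1}: sigma(x) + sigma(y) equals
/// sigma(x + y) plus p times the carry digit computed here.
/// (Inputs are assumed reduced, i.e. 0 <= x, y < p.)
fn carry(x: u32, y: u32, p: u32) -> u32 {
    (x + y) / p // 1 if x + y overflows past p, and 0 otherwise
}
\end{verbatim}
At $p = 2$ this is the logical AND of the inputs, extended componentwise to a chosen basis.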
\begin{thm}\label{thm:lift}
Fix a minimal free resolution, hence a basis of $\Ext^{s, t}(H^*X, k)$ for every $s, t$. For every element $x \in \Ext^{s, t}(H^* X, k)$ such that $d_2(x) = 0$, there is a standard lift $[x] \in \pi_{*, *} C\tau^2 \otimes \nu X$ with the property that
\[
[x + y] = [x] + [y] + \tau h_0 (x \mathbin{\tilde{+}} y).
\]
\end{thm}
This completely specifies the additive structure of $\pi_{*, *} C\tau^2 \otimes \nu X$.
\begin{proof}
Let $x$ be an element that survives the $E_2$ page. Then the argument above describes a lift of $x$ to an element in $[\H^{(2)}X, \kk]^{s, t}$, or equivalently, $\pi_{t - s, s} C\tau^2 \otimes \nu X$. We call this element $[x]$, which is a well-defined lift once we have fixed all the choices made so far.
This is, of course, not the only lift. If $y \in \Ext^{s + 1, t + 1}_\A(H^*X, k)$, then we can add it to the $\tau$ component of the map $P_0^{(s + 1)} \to \kk_1[t]$ that we originally picked to be zero. It would still be a chain map, and this represents $[x] + \tau y$.
The failure of $[-]$ to be additive arises from the fact that the sum of two standard lifts need not be a standard lift, since $\sigma^\kk$ is not linear. To prove the additive relation claimed, we have to show that if $x \colon \bar{P}^{(s)} \to k[t]$ is any map, then the chain map $P^{\bullet} \to \kk[t]$ given by $p \tilde{x}$ is homotopic to $\tau h_0 x$. This is a straightforward computation, with the homotopy being given by $\mu_0 x_0$. Ultimately, this boils down to the relation $\beta \mu_0 = \tau$.
\end{proof}
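For instance, at $p = 2$, if the section satisfies $\sigma^\kk(0) = 0$ (so that $[0] = 0$), then taking $y = x$ in \Cref{thm:lift} gives
\[
0 = [x + x] = 2[x] + \tau h_0 (x \mathbin{\tilde{+}} x) = 2[x] + \tau h_0 x,
\]
since the carry of $1 + 1$ is $1$ in each coordinate. As $\tau h_0 x$ is $2$-torsion, this says $2[x] = \tau h_0 x$, recovering the earlier remark that multiplication by $p$ is detected by $h_0$.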
\subsection{Computing products}
We now turn to computing composition products in $\Mod_{C\tau^2}$. To simplify matters, we shall only consider the case of computing the $\pi_{*, *} C\tau^2$ action on $\pi_{*, *} C\tau^2 \otimes \nu X$.
Let $P^\bullet \to \H^{(2)} X$ be a minimal free resolution of $\H^{(2)} X$, and let $Q^\bullet \to \kk$ be a minimal free resolution of $\kk$. From the previous \namecref{section:d2}, we know that an element $f\in \pi_{t - s, s} C\tau^2 \otimes \nu X$ is represented by a chain map $\Tot(P^\bullet) \to \Sigma^s \kk[t]$, which we can lift to a map $\Tot(P^\bullet) \to \Sigma^s \Tot(Q^\bullet)[t]$ since the source is cofibrant.
Now an element in $\pi_{t' - s', s'} C\tau^2$ is represented by a chain map $\Tot(Q^\bullet) \to \Sigma^{s'} \kk[t']$. To compute the product with $f$, we simply compose this with the lifted chain map $\Tot(P^\bullet) \to \Sigma^s \Tot(Q^\bullet)[t]$ and read off the composite.
Most of the hard work is in actually writing down the lift of $f$ to $\Sigma^s \Tot(Q^\bullet)[t]$. To simplify notation slightly, we shift $X$ so that $t = 0$. We begin by computing a lift of $f$ over the ordinary Steenrod algebra, i.e.\ a lift to a map $\bar{f} \colon \bar{P}^\bullet \to \Sigma^s \bar{Q}^\bullet$. Our standard splittings then let us lift this to a diagram
\[
\begin{tikzcd}
\vdots \ar[d, "\partial^{(s + 2)}"] \ar[r] & \vdots \ar[d, "\partial^{(2)}"] \\
P^{(s + 1)} \ar[d, "\partial^{(s + 1)}"] \ar[r, "f^{(s + 1)}"] & Q^{(1)} \ar[d, "\partial^{(1)}"] \\
P^{(s)} \ar[r, "f^{(s)}"] \ar[d] & Q^{(0)} \ar[d] \\
P^{(s - 1)} \ar[r] \ar[d] & 0 \ar[d] \\
\vdots \ar[r] & \vdots
\end{tikzcd}
\]
By construction, each of these squares commutes up to homotopy, and again our job is to find a suitable homotopy $H$ that induces a chain map on $\Tot(-)$. To do so, we write down the induced map on the total chain complex:
\[
\begin{tikzcd}[column sep=2.8cm, row sep = huge, ampersand replacement=\&]
\vdots \ar[d] \ar[r] \& \vdots \ar[d] \\
P_0^{(s + k)} \oplus P_1^{(s + k - 1)} \ar[d, "{\begin{pmatrix}\partial^{(s + k)}_0 & d^{(s + k - 1)} \\ -h^{(s + k)} & -\partial^{(s + k - 1)}_1 \end{pmatrix}}"'] \ar[r, "{\begin{pmatrix} f^{(s + k)}_0 & 0 \\ H^{(s + k)} & f^{(s + k - 1)}_1 \end{pmatrix}}"] \& Q_0^{(k)} \oplus Q_1^{(k - 1)} \ar[d, "{\begin{pmatrix}\partial^{(k)}_0 & d^{(k - 1)} \\ -h^{(k)} & -\partial^{(k - 1)}_1 \end{pmatrix}}"]\\
P_0^{(s + k - 1)} \oplus P^{(s + k - 2)}_1 \ar[d] \ar[r, "{\begin{pmatrix} f^{(s + k - 1)}_0 & 0 \\ H^{(s + k - 1)} & f^{(s + k - 2)}_1 \end{pmatrix}}"] \& Q_0^{(k - 1)} \oplus Q^{(k - 2)}_1 \ar[d]\\
\vdots \ar[r] \& \vdots
\end{tikzcd}
\]
Expanding the matrices gives the equations
\[
\begin{aligned}
d^{(k)} f_1^{(s + k)} &= f_0^{(s + k)} d^{(s + k)} \\
H^{(s + k)} d^{(s + k)} &= f_1^{(s + k - 1)} \partial_1^{(s + k)} - \partial_1^{(k)} f_1^{(s + k)} \\
d^{(k - 1)} H^{(s + k)} &= f_0^{(s + k - 1)} \partial^{(s + k)}_0 - \partial_0^{(k)} f_0^{(s + k)} \\
\partial_1^{(k - 1)} H^{(s + k)} + H^{(s + k - 1)} \partial^{(s + k)}_0 &= f_1^{(s + k - 2)} h^{(s + k)} - h^{(k)} f_0^{(s + k)}
\end{aligned}
\]
As usual, the first two are implied by the remaining ones via $\B$-linearity, and the third equation simply says $H^{(s + k)}$ is a homotopy between $f^{(s + k - 1)} \partial^{(s + k)}$ and $\partial^{(k)} f^{(s + k)}$. The main equation of content is the last one. We can interpret this as follows --- there are two natural null-homotopies of $f^{(s + k - 2)} \partial^{(s + k - 1)} \partial^{(s + k)}$:
\begin{itemize}
\item we can use the null-homotopy of $\partial^{(s + k - 1)} \partial^{(s + k)}$ and compose it with $f^{(s + k - 2)}$; or
\item we can homotope to $\partial^{(k - 1)} \partial^{(k)} f^{(s + k)}$ and apply the null-homotopy of $\partial^{(k - 1)} \partial^{(k)}$.
\end{itemize}
The equation asserts that these two null-homotopies agree.
Regardless of how we are supposed to interpret this equation, our job is to choose the $\tau$ part of $H$ so that the equation is satisfied.
Let $g$ be a generator of $P^{(s + k)}$. Write
\[
\bar{\partial} (g) = \sum \alpha^i g_i,\quad \bar{f} (g) = \sum \beta^j g_j.
\]
Then the $\tau$ part of the last equation says
\begin{multline*}
\bar\partial^{(k - 1)} H^{(s + k)}_\tau g \\
+ \sum \left(\alpha^i H^{(s + k - 1)}_\tau g_i + A\left(\alpha^i, \left(f_0^{(s + k - 1)} \partial^{(s + k)}_0 - \partial_0^{(k)} f_0^{(s + k)}\right)g_i\right)\right) \\
= \bar{f}^{(s + k - 2)} h^{(s + k)}_\tau g - \sum \left(\beta^j h_\tau^{(k)} g_j + A\left( \beta^j, \partial_0^{(k - 1)} \partial^{(k)}_0 g_j\right)\right).
\end{multline*}
We again solve this inductively. The first homotopy is $H^{(s + 1)}$. Its equation takes values in the zero group, so it always holds. Changing the $\tau$ part modifies $f$ by a $\tau$-multiple. If we want to set $f = [\bar{f}]$, then we choose the $\tau$ part of $H^{(s + 1)}$ to vanish.
As for the second homotopy $H^{(s + 2)}$, we have chosen the target of the chain map to be a resolution of $\kk$, and we always choose minimal resolutions. So $\bar\partial^{(1)}$ is surjective in positive internal degree. The only term in the equation that is in internal degree zero is the $\bar{f}^{(s + k - 2)} h_\tau^{(s + k)} g$ term. Requiring this to vanish is exactly the requirement that $\bar{f}$ survives the Adams $d_2$. (Note that in $\beta^j h_\tau^{(2)} g_j$, the $h^{(2)}$ is the homotopy of the free resolution of $\kk$, which is zero for degree reasons.)
Afterwards, exactness implies that the lift can always be performed. To see this, as in the case of constructing a free resolution, we have to check that the last equation holds after applying $\partial_1^{(k - 2)}$. Instead of painstakingly tracking through each of the terms, it suffices to observe that by induction, the desired equations always hold after applying the differential in $\Tot(Q^\bullet)$. Since the first three equations always hold, we know that $\partial_1^{(k - 2)}$ must kill the last equation.
Once we have performed this lift, given any class $x \in \Ext^{s', t'}(k, k)$ that survives the $d_2$ differential, we can compute the product $[x]f$ by composing the chain maps. We then end up with
\[
\begin{tikzcd}[column sep=4cm, ampersand replacement=\&]
P_0^{(s + s' + 1)} \oplus P_1^{(s + s')} \ar[d] \ar[r, "{\begin{pmatrix} x_1 H^{(s + s' + 1)} & x_1 f_1^{(s + s')}\end{pmatrix}}"] \& \kk_1[t'] \ar[d, "d^\kk"]\\
P_0^{(s + s')} \oplus P^{(s + s' - 1)}_1 \ar[r, "{\begin{pmatrix} x_0 f_0^{(s + s')} & 0 \end{pmatrix}}"] \& \kk_0[t']
\end{tikzcd}
\]
One has to be extremely careful here, as $x_0 f_0^{(s + s')}$ need not be the standard lift of $x\bar{f}$; one has to use the formula $p = \tau h_0$ to express the result in terms of $[x\bar{f}]$.
The interesting part is, of course, the $x_1 H^{(s + s' + 1)}$ term. Note that this term lives in the kernel of $d^\kk$. Indeed, by the commutativity of the diagram, $d^\kk x_1 H^{(s + s' + 1)} = x_0 f_0^{(s + s')} \partial_0^{(s + s' + 1)}$, but $\partial_0^{(s + s' + 1)}$ only hits decomposables by minimality, which are then killed by $x_0$. The $\tau$ component of $x_1 H^{(s + s' + 1)}$ is $x H^{(s + s' + 1)}_\tau$, so the product $[x] f$ is given by whatever $x_0 f_0$ represents plus $\tau (x H^{(s + s' + 1)}_\tau)$.
In a very imprecise manner, this suggests that $H^{(s + s' + 1)}_\tau$ encodes the hidden extension part of the product. Of course, ``the hidden extension part'' only makes sense after choosing the standard lifts $[-]$; it is not a homotopically meaningful concept.
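In code, this final read-off is plain linear algebra over $\F_p$; a minimal dense sketch (the names are ours, not the \texttt{ext} crate's):
\begin{verbatim}
/// Read off the tau part of the product [x] f: `x` is the row vector
/// of the Ext class on the generators of Q^(s + s' + 1), and `h_tau`
/// is the matrix of the tau component of H^(s + s' + 1), one row per
/// generator of P_0.  All entries are assumed reduced mod p.
fn product_tau_part(x: &[u32], h_tau: &[Vec<u32>], p: u32) -> Vec<u32> {
    h_tau
        .iter()
        .map(|row| row.iter().zip(x).map(|(a, b)| a * b).sum::<u32>() % p)
        .collect()
}
\end{verbatim}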
\subsection{Massey products}
We finally turn to Massey products, which require lifting chain homotopies. This actually contains two kinds of information. Firstly, we get to compute hidden extensions in Massey products that jump by one filtration, which is always useful. But we also get to compute Massey products of the form $\langle x, \tau y, z \rangle$ when $xy$ and $yz$ are not zero in the $E_2$ page but are hit by a differential. This is exactly the $E_3$ page Massey product as described in Moss' convergence theorem \cite{moss}. While these are extremely easy to compute in the $E_3$ page, our synthetic approach gives us the answer mod $\tau^2$ instead of mod $\tau$, which is extra information one can capitalize on. (It also provides a very useful test case for the algorithm, since we can verify the answers by hand.)
This is more involved than the previous cases, and is largely unpleasant. In particular, we apologize in advance for the overwhelming number of objects called $h$.
We begin with a general remark on chain homotopies, which is rather important. Suppose we have a chain map between (ordinary) chain complexes over $\A$ of the form
\[
\begin{tikzcd}
\vdots \ar[d, "\partial^{(s + 2)}"] \ar[r] & \vdots \ar[d, "\partial^{(2)}"] \\
P^{(s + 1)} \ar[d, "\partial^{(s + 1)}"] \ar[r, "f^{(s + 1)}"] & Q^{(1)} \ar[d, "\partial^{(1)}"] \\
P^{(s)} \ar[r, "f^{(s)}"] \ar[d] & Q^{(0)} \ar[d] \\
P^{(s - 1)} \ar[r] \ar[d] & 0 \ar[d] \\
\vdots \ar[r] & \vdots
\end{tikzcd}
\]
Null-homotopies of this chain map consist of maps $\H^{(s + k)} \colon P^{(s + k)} \to Q^{(k + 1)}$ satisfying the equation
\[
\partial^{(k + 1)} \H^{(s + k)} + \H^{(s + k - 1)} \partial^{(s + k)} = f^{(s + k)}.
\]
The first possible non-zero map is $\H^{(s - 1)}$. A useful trick is that if these are minimal resolutions and $Q^\bullet$ resolves $k$, then we can always choose $\H^{(s - 1)} = 0$. Indeed, since $f$ is null and the resolution is minimal, $f^{(s)}$ can only hit positive degree elements. Since $\partial^{(1)}$ is surjective in positive internal degree, the lifting problem for $\H^{(s)}$ can always be solved. Subsequent lifting problems can then be solved by exactness.
On the other hand, this is not true in general, and in particular not true when working in the secondary setting.
Indeed, over the ordinary Steenrod algebra, the set of null-homotopies is a torsor over the appropriate $\Ext$ group, which acts by modifying $\H^{(s - 1)}$. While we can always choose $\H^{(s - 1)} = 0$, this should not be thought of as having a special status.
In the secondary setting, it is no longer a torsor over the full $\Ext$ group; we can only add elements that survive the Adams $d_2$. Then the homotopy with $\H^{(s - 1)} = 0$ need not be a valid basepoint of this torsor. Part of our job when lifting chain homotopies is to find a choice of $\H^{(s - 1)}$.
Let $P^\bullet \to \H^{(2)} X$ and $Q^\bullet \to \kk$ be minimal free resolutions. Suppose we have a chain map between them of the form
\[
\begin{tikzcd}[column sep=3cm, row sep = huge, ampersand replacement=\&]
P_0^{(s + k + 1)} \oplus P_1^{(s + k)} \ar[d, "{\begin{pmatrix}\partial^{(s + k + 1)}_0 & d^{(s + k)} \\ -h^{(s + k + 1)} & -\partial^{(s + k)}_1 \end{pmatrix}}"'] \ar[r, "{\begin{pmatrix} f^{(s + k + 1)}_0 & 0 \\ H^{(s + k + 1)} & f^{(s + k)}_1 \end{pmatrix}}"] \& Q_0^{(k + 1)} \oplus Q_1^{(k)} \ar[d, "{\begin{pmatrix}\partial^{(k + 1)}_0 & d^{(k)} \\ -h^{(k + 1)} & -\partial^{(k)}_1 \end{pmatrix}}"]\\
P_0^{(s + k)} \oplus P^{(s + k - 1)}_1 \ar[r, "{\begin{pmatrix} f^{(s + k)}_0 & 0 \\ H^{(s + k)} & f^{(s + k - 1)}_1 \end{pmatrix}}"] \& Q_0^{(k)} \oplus Q^{(k - 1)}_1
\end{tikzcd}
\]
that is null-homotopic. Then the induced map on $\A \otimes_\B(-)$ is also null-homotopic and admits null-homotopies, say $\H^{(s + k)} \colon P^{(s + k)} \to Q^{(k + 1)}$. These lift to maps
\[
\begin{pmatrix}
\H^{(s + k)}_0 & 0\\
-\eta^{(s + k)} & -\H^{(s + k - 1)}_1
\end{pmatrix}
\colon
P_0^{(s + k)} \oplus P_1^{(s + k - 1)} \to Q_0^{(k + 1)} \oplus Q_1^{(k)}.
\]
Expanding the definition of a chain homotopy gives the equations
\[
\begin{aligned}
d^{(k)} \eta^{(s + k)} &= \partial_0^{(k + 1)} \H^{(s + k)}_0 + \H_0^{(s + k - 1)} \partial_0^{(s + k)} - f_0^{(s + k)}\\
\partial_1^{(k)} \eta^{(s + k)} &= \eta^{(s + k - 1)} \partial_0^{(s + k)} + h^{(k + 1)} \H_0^{(s + k)} - \H_1^{(s + k - 2)} h^{(s + k)} - H^{(s + k)}.
\end{aligned}
\]
The first equation simply states that $\eta$ witnesses the equation $\partial \H + \H \partial \simeq f$, and the second is some complex compatibility condition that we have to solve iteratively.
Note that the equation for $\eta^{(s + k)}$ takes values in $Q_1^{(k - 1)}$. Thus, it is automatically satisfied for \emph{both} $k = -1$ and $k = 0$. The $k = 1$ step is the only step where it might be impossible, and all higher $k$ can be solved by exactness.
When $k = 1$, since $\bar{\partial}^{(1)}$ is surjective in positive internal degrees, we can only fail to lift in internal degree $0$. We shall now analyze which terms can contribute to the $\tau$ part.
\begin{itemize}
\item In $\eta^{(s)} \partial_0^{(s + 1)}$, the only possible contribution comes from the $f_0$ part of $\eta$. The idea is that minimality ensures $\partial_0$ introduces an algebra element of positive degree. On the other hand, when multiplying with a homotopy, the $A$ function shows up, and lowers degree by 1. So as long as two $\partial_0$'s show up, we are clear.
\item The $h^{(2)} \H_0^{(s + 1)}$ term cannot contribute. Indeed, $\H_0^{(s + 1)}$ takes values in $Q_0^{(2)}$, whose generator of lowest degree is $2$. While $h^{(2)}$ can lower degree by $1$, we still have $2 - 1 > 0$.
\item The $\H_1^{(s - 1)} h^{(s + 1)}$ and $H^{(s + 1)}$ terms can contribute.
\end{itemize}
Our goal is then to choose $\H_1^{(s - 1)}$ appropriately so that the right-hand side vanishes in degree $0$. This has a very natural interpretation. In degree $0$, the map $f_0$ is necessarily $p$ times some $\Ext$ class, and one checks that the term $\eta^{(s)} \partial_0^{(s + 1)}$ picks out $h_0$ times said $\Ext$ class. The term $H^{(s + 1)}$ is the $\tau$ part of $f$ itself. So these two terms combined give us the $\tau$ part of $f$ after normalizing the degree $0$ part to exactly $0$. The equation then tells us $\H_1^{(s - 1)}$ should be the class whose $d_2$ kills the $\tau$ part of $f$, witnessing the fact that $f$ is indeed null.
In general, finding this $\H_1^{(s - 1)}$ is low-dimensional linear algebra, and is relatively easy. While the equation it has to satisfy comes from the $k = 1$ case, it ends up not depending on the $k = -1$ and $k = 0$ data, so it can be computed before we start the lifting process.
Once we manage to lift chain homotopies, computing Massey products becomes relatively straightforward. Unfortunately, the sign conventions surrounding Massey products are rather confusing and not well-documented in the literature. Instead of tackling this problem, we are content with computing Massey products up to a sign.
\section{The computer implementation}\label{section:data}
\subsection{The generated data}
We ran our algorithm on the sphere at the prime $2$ up to the $140$\textsuperscript{th} stem. The output of the algorithm is available at \cite[\texttt{d2-data.zip}]{raw_data}, and the contents of each file are described in \Cref{table:data}. In the table, the first three sets of files are the data generated by the new algorithm, while the rest are auxiliary data to assist the user in interpreting the data.
\begin{table}[ht]
\centering
\caption{List of data files}\label{table:data}
\vspace{0.4cm}
\begin{tabularx}{\textwidth}{lX}
\toprule
Filename & Description\\
\midrule
\texttt{d2} & $d_2$ differentials. \\
\texttt{product\_a} & The product of all elements with \texttt{[a]}. We have computed products with all indecomposables up to the $39$\textsuperscript{th} stem, and the names are listed in \Cref{table:prod-name}. Note that these products include the twist of $(-1)^{s't}$. \\
\texttt{massey\_a\_b} & The Massey product \texttt{<-, [b], [a]>} up to a sign. The only exception is \texttt{massey\_P}, which contains the Adams periodicity operator $\langle -, [h_0^4], [h_3]\rangle$. \\
\midrule
\texttt{change\_of\_basis} & Translation between our basis and the basis of the Bruner--Rognes dataset \cite{bruner-rognes}. \\
\texttt{filtration\_one} & All ($E_2$ page) filtration one products. This is useful for identifying elements by hand. \\
\texttt{charts.pdf} & Adams charts displaying the $E_2$ and $E_3$ pages. When a bidegree has more than one basis element, they are ordered bottom-to-top, left-to-right.\\
\texttt{clean\_charts.pdf} & The same charts as above but without $h_2$ products. \\
\texttt{differentials.gz} & Differentials in our minimal resolution. This contains information to uniquely identify all of our basis elements and lifts, but is most likely not of much use to humans. \\
\bottomrule
\end{tabularx}
\end{table}
\begin{table}[ht]
\centering
\caption{Names of products}\label{table:prod-name}
\vspace{0.4cm}
\begin{tabular}{cccc}
\toprule
$n$ & $s$ & class & name \\
\midrule
0 & 1 & $[1]$ & \verb|h_0| \\
1 & 1 & $[1]$ & \verb|h_1| \\
3 & 1 & $[1]$ & \verb|h_2| \\
7 & 1 & $[1]$ & \verb|h_3| \\
8 & 3 & $[1]$ & \verb|c_0| \\
9 & 5 & $[1]$ & \verb|Ph_1| \\
11 & 5 & $[1]$ & \verb|Ph_2| \\
14 & 4 & $[1]$ & \verb|d_0| \\
15 & 2 & $[1]$ & \verb|h_0h_4| \\
16 & 2 & $[1]$ & \verb|h_1h_4| \\
16 & 7 & $[1]$ & \verb|Pc_0| \\
17 & 9 & $[1]$ & \verb|P^2h_1| \\
18 & 2 & $[1]$ & \verb|h_2h_4| \\
19 & 3 & $[1]$ & \verb|c_1| \\
19 & 9 & $[1]$ & \verb|P^2h_2| \\
20 & 4 & $[1]$ & \verb|g| \\
22 & 8 & $[1]$ & \verb|Pd_0| \\
23 & 4 & $[1]$ & \verb|h_4c_0| \\
23 & 9 & $[1,1]$ & \verb|h_0^2i| \\
24 & 11 & $[1]$ & \verb|P^2c_0| \\
25 & 13 & $[1]$ & \verb|P^3h_1| \\
27 & 13 & $[1]$ & \verb|P^3h_2| \\
30 & 2 & $[1]$ & \verb|h_4^2| \\
30 & 6 & $[1]$ & \verb|Dh_2^2| \\
\bottomrule
\end{tabular}
\hspace{1em}
\begin{tabular}{cccc}
\toprule
$n$ & $s$ & class & name \\
\midrule
30 & 12 & $[1]$ & \verb|P^2d_0| \\
31 & 4 & $[1]$ & \verb|h_0^3h_5| \\
31 & 5 & $[0,1]$ & \verb|n| \\
31 & 8 & $[1,1]$ & \verb|d_0e_0| \\
32 & 2 & $[1]$ & \verb|h_1h_5| \\
32 & 4 & $[1]$ & \verb|d_1| \\
32 & 6 & $[1]$ & \verb|Dh_1h_3| \\
32 & 15 & $[1]$ & \verb|P^3c_0| \\
33 & 4 & $[1]$ & \verb|p| \\
33 & 17 & $[1]$ & \verb|P^4h_1| \\
34 & 2 & $[1]$ & \verb|h_2h_5| \\
34 & 8 & $[1]$ & \verb|e_0^2| \\
35 & 17 & $[1]$ & \verb|P^4h_2| \\
36 & 6 & $[1]$ & \verb|t| \\
37 & 5 & $[1]$ & \verb|x| \\
37 & 8 & $[0,1]$ & \verb|e_0g| \\
38 & 2 & $[1]$ & \verb|h_3h_5| \\
38 & 4 & $[1,0]$ & \verb|e_1| \\
38 & 16 & $[1]$ & \verb|P^3d_0| \\
39 & 4 & $[1]$ & \verb|h_5c_0| \\
39 & 9 & $[1]$ & \verb|Dh_1d_0| \\
39 & 12 & $[1]$ & \verb|Pd_0e_0| \\
39 & 17 & $[1,1]$ & \verb|h_0^2P^2i| \\
\\
\bottomrule
\end{tabular}
\end{table}
In all files, the results are expressed in terms of our $E_2$ page basis. We adopt the following naming conventions:
\begin{enumerate}
\item \verb|x_(n, s, i)| is the $i$\textsuperscript{th} basis element in filtration $s$ of the $n$\textsuperscript{th} stem.
\item We use $[-]$ to denote the standard lift to $\Mod_{C\tau^2}$ as in \Cref{thm:lift}. Note again that $[a + b] \not= [a] + [b]$ in general.
\item If an element is in a known degree (e.g.\ it is the value of a product), we will write an element in vector form under our basis, e.g.\ as $[1, 0]$. We shall not put an extra pair of brackets around the vector to denote the secondary lift. It should be clear from context whether we mean the $E_2$ page element or its secondary lift.
\item We use $\tau$ to denote multiplication by $\tau$ (our files are UTF-8 encoded).
\end{enumerate}
\subsection{Generating the data}
The code used for the calculation is available at \cite{ext_rs}, and the latest version of this software is available at \url{https://github.com/SpectralSequences/sseq}. This is a monorepo, and we will work in the \texttt{ext/} subdirectory throughout. This repository comes with a reasonable amount of documentation, and the \texttt{README} in \texttt{ext/} contains instructions for accessing said documentation.
The commands used to generate the data are packaged into a script, which is available at \cite[\texttt{script.sh}]{raw_data}. This should be run in the \texttt{ext/} directory of the repository. The save files for the computations are at \cite[\texttt{S\_2\_milnor.tar}]{raw_data}.
To assist the reader in further exploring the resolution, we illustrate the full interactive session that generates the data we need in \Cref{figure:session}. Assuming Rust is installed, running the commands as indicated in any subdirectory of \texttt{ext/} will compute the $d_2$ differentials for $S_2$, the product with $g$ as well as the Adams periodicity operator.\footnote{As mentioned in the documentation, when resolving to larger stems, one ought to supply the \texttt{-{}-release}, \texttt{-{}-features concurrent} and \texttt{-{}-no-default-features} flags after \texttt{cargo run} for much improved performance.} This guide is written for the version in \cite{ext_rs} but should work with future versions with little modification.
\begin{figure}[ht]
\begin{shell}
$ cargo run --example secondary > d2
\prompt{Module (default: S_2):} S_2
\prompt{Module save directory (optional):} S_2_milnor
\prompt{Max n (default: 30):} 40
\prompt{Max s (default: 15):} 20
$ cargo run --example secondary_product > product_g
\prompt{Module (default: S_2):} S_2
\prompt{Module save directory (optional):} S_2_milnor
\prompt{Max n (default: 30):} 40
\prompt{Max s (default: 15):} 20
\prompt{Name of product:} g
\prompt{n of Ext class g:} 20
\prompt{s of Ext class g:} 4
\prompt{Input ext class:} [1]
$ cargo run --example secondary_massey > massey_P
\prompt{We are going to compute <-, b, a> for all (-), where a is an}
\prompt{element in Ext(M, k) and b and (-) are elements in Ext(k, k).}
\prompt{Module (default: S_2):} S_2
\prompt{Module save directory (optional):} S_2_milnor
\prompt{Max n (default: 30):} 40
\prompt{Max s (default: 15):} 20
\prompt{n of a:} 7
\prompt{s of a:} 1
\prompt{Name of Ext part of a:} h_3
\prompt{Input Ext class h_3:} [1]
\prompt{Name of τ part of a:}
\prompt{n of b:} 0
\prompt{s of b:} 4
\prompt{Name of Ext part of b:} h_0^4
\prompt{Input Ext class h_0^4:} [1]
\prompt{Name of τ part of b:}
\end{shell}
\caption{Interactive session to generate the dataset. The grey text is the computer's prompt and the black text is the user's input.}\label{figure:session}
\end{figure}
\subsection{Runtime performance}
We ran the program on a computational server of the Harvard University Mathematics Department. It has two \texttt{Xeon E5-2690 v2} CPUs (10 cores/20 threads each) and 125 GiB of memory. Using all 40 threads of the machine, computing the secondary resolution up to the 140\textsuperscript{th} stem took 3.3 hours and 7.8 GiB of memory.\footnote{We will mostly focus on analyzing the performance of computing the secondary resolution itself. For the products, computing the $h_0$ product generally takes twice as long as computing the resolution, while computing $\langle -, [h_1], [h_0]\rangle$ took $2.6\times$ as long. The time taken decreases rapidly as the stem of the multiplicand increases. For example, if we want to compute the product with $[g]$, to stay within the range, we would only multiply $[g]$ with elements up to the 120\textsuperscript{th} stem, and we only need to lift the chain map for 120 stems, as opposed to 140 for $h_0$.}
To understand the asymptotic complexity of the algorithm, recall that for each generator, to compute $h_\tau^{(s)}g$, we have to solve the equation
\[
\bar{\partial}^{(s - 2)} h_\tau^{(s)} g = \sum \left(\alpha^i h_\tau^{(s - 1)}g_i + A\left(\alpha^i, \partial_0^{(s - 2)} \partial_0^{(s - 1)} g_i\right)\right).
\]
We break this up into two steps --- we first compute $A\left(\alpha^i, \partial_0^{(s - 2)} \partial_0^{(s - 1)} g_i \right)$, and then solve the rest of the lifting problem. The first step is fully parallelizable, as there are no dependencies between different generators, while the second step requires the value of $h_\tau^{(s - 1)} g_i$, so must be computed in some specific order.
In practice, the second step is much faster than the first step even after parallelization. Moreover, the cost of the second step is exactly the cost of computing a single product; if it takes too long, we have bigger problems to deal with.
Thus, we shall focus on the cost of the first step. Our objective is to understand how this grows with the stem. There are two separate questions we can ask:
\begin{enumerate}
\item What is the maximum time it takes to perform the first step for a single generator?
\item How much time does it take in total to compute up to a stem?
\end{enumerate}
The first question is relevant in a situation where we have an extremely large number of cores/machines that can parallelize the computation, in which case the bottleneck is the slowest generator. The second question is relevant where we have a fixed, small(ish) number of cores that will be saturated throughout the process.
To answer these questions, we timed the computation for each generator and generated three charts:
\begin{enumerate}
\item \Cref{figure:max} shows the time taken by the slowest generator in each stem.
\item \Cref{figure:cumulative} shows the time taken to compute up to each stem.
\item \Cref{figure:scatter} shows the time taken by the slowest generator in each bidegree.
\end{enumerate}
Again, these figures only include the time taken by the first part, and measure CPU time as opposed to wall time (so the actual time needed to compute up to a stem is around $\frac{1}{40}$ of the time indicated).
\begin{figure}[t]
\centering
\input{max.pgf}
\caption{Time taken by the slowest generator in each stem}\label{figure:max}
\end{figure}
\begin{figure}[t]
\centering
\input{cumulative.pgf}
\caption{Time taken to compute up to each stem}\label{figure:cumulative}
\end{figure}
\begin{figure}[t]
\centering
\input{scatter.pgf}
\caption{Time taken by the slowest generator in each bidegree}\label{figure:scatter}
\end{figure}
The most obvious feature that stands out is that the time tends to grow exponentially in the stem (the time axis uses a log scale). Fitting a simple linear regression on the datapoints beyond the 50\textsuperscript{th} stem, we see that the maximum time increases by a factor of 3 every 10 stems, while the cumulative time increases by a factor of roughly $3.85$ every 10 stems. For comparison, the cumulative time of Nassau's algorithm for computing the minimal resolution increases by a factor of 2.8 every 10 stems \cite[Abbildung 2.13]{nassau}.
From the scatter plot, we see that for each stem, the slowest bidegrees are the ones with the lowest filtrations, apart from a few exceptions in very low filtrations. This is expected, since the Adams vanishing line suggests there will be more and lower degree generators in low filtrations, hence the resolution is larger in these bidegrees. This also explains the irregularities one observes in the graphs. In \Cref{figure:max}, the dips coincide with the stems where there are no low filtration elements. Conversely, the jumps in the cumulative time occur in stems near $2^n$, where there is a higher density of low filtration elements.
\subsection{Future work}
There are a few obvious improvements one can make to the dataset:
\begin{enumerate}
\item Compute further out in the stems. This mostly requires more computational power. The ``secondary'' part of the process is fully parallelizable, and the code supports distributing the work across multiple machines.
\item Compute \emph{all} products by indecomposables, not just the indecomposables up to the $39$\textsuperscript{th} stem. The remaining products are extremely fast to compute, since the cost depends on the stem of the multiplicand, which now only goes up to at most 100. The main bottleneck is enumerating the indecomposables, which has been done manually so far. To push the product computation further, we ought to automate the process of finding all indecomposables.
\item Compute more Massey products. However, computing \emph{all} potential Massey products seems prohibitively expensive.
\item Have a dataset expressed in terms of ``human names'' of the classes. The main blocker is coming up with a reasonable database of names.
\item Have a program to propagate differentials back and forth using the Leibniz rules and all available products.
\end{enumerate}
The history of Genocchi numbers can be traced back to the Italian mathematician
Angelo Genocchi (1817-1889). From Genocchi to the present time, Genocchi
numbers have been extensively studied in many different contexts, in such
branches of mathematics as, for instance, elementary number theory, complex
analytic number theory, homotopy theory (stable homotopy groups of spheres),
differential topology (differential structures on spheres), theory of
modular forms (Eisenstein series), $p$-adic analytic number theory ($p$-adic
$L$-functions), and quantum physics (quantum groups). The works on Genocchi
numbers and their combinatorial relations have received much attention \cite{araci 3}, \cite{Araci 4}, \cite{araci 2}, \cite{Araci 5}, \cite{Araci 6},
\cite{Araci 7}, \cite{Jolany 1}, \cite{Jolany 2}, \cite{Kim 8}, \cite{Kim 10}. To show the value of this type of numbers and polynomials, we list
some of their applications.
In the complex plane, the Genocchi numbers, named after Angelo Genocchi, are
a sequence of integers defined by the exponential generating function
\begin{equation}
\frac{2t}{e^{t}+1}=e^{Gt}=\sum_{n=0}^{\infty }G_{n}\frac{t^{n}}{n!},\text{ }\left( \left\vert t\right\vert <\pi \right)  \label{Equation 1}
\end{equation}
where the usual convention of replacing $G^{n}$ by $G_{n}$ is used. When
we multiply the left hand side of Eq. (\ref{Equation 1}) by $e^{xt}$, then we have
\begin{equation}
\sum_{n=0}^{\infty }G_{n}\left( x\right) \frac{t^{n}}{n!}=\frac{2t}{e^{t}+1}e^{xt},\text{ }\left( \left\vert t\right\vert <\pi \right)  \label{Equation 28}
\end{equation}
where the $G_{n}\left( x\right) $ are called Genocchi polynomials. It follows from
(\ref{Equation 28}) that
$G_{1}=1$, $G_{2}=-1$, $G_{3}=0$, $G_{4}=1$, $G_{5}=0$, $G_{6}=-3$, $G_{7}=0$, $G_{8}=17,\cdots $,
and $G_{2n+1}=0$ for $n\in \mathbb{N}$ (for details, see \cite{araci 3}, \cite{Araci 4}, \cite{araci 2}, \cite{Araci 5}, \cite{Jolany 1}, \cite{Jolany 2}, \cite{Kim 8}, \cite{Kim 10}).
Differentiating both sides of (\ref{Equation 28}) with respect to $x$, we
have the following
\begin{equation}
\frac{d}{dx}G_{n}\left( x\right) =nG_{n-1}\left( x\right) \text{ \textit{and} }\deg G_{n+1}\left( x\right) =n. \label{Equation 2}
\end{equation}
On account of (\ref{Equation 1}) and (\ref{Equation 2}), we can easily
derive the following
\begin{equation}
\int_{b}^{a}G_{n}\left( x\right) dx=\frac{G_{n+1}\left( a\right)
-G_{n+1}\left( b\right) }{n+1}\text{.} \label{Equation 3}
\end{equation}
By (\ref{Equation 1}), we get
\begin{equation}
G_{n}\left( x\right) =\sum_{k=0}^{n}\binom{n}{k}G_{k}x^{n-k}.
\label{Equation 4}
\end{equation}
Thanks to (\ref{Equation 3}) and (\ref{Equation 4}), we acquire the
following equation (\ref{Equation 5})
\begin{equation}
\int_{0}^{1}G_{n}\left( x\right) dx=-2\frac{G_{n+1}}{n+1}\text{.}
\label{Equation 5}
\end{equation}
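For example, for $n=1$, equation (\ref{Equation 5}) reads
\begin{equation*}
\int_{0}^{1}G_{1}\left( x\right) dx=1=-2\frac{G_{2}}{2}\text{,}
\end{equation*}
which agrees with $G_{1}\left( x\right) =1$ and $G_{2}=-1$.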
It is not difficult to see that
\begin{eqnarray}
e^{tx} &=&\frac{1}{2t}\left( \frac{2t}{e^{t}+1}e^{\left( 1+x\right) t}+\frac{2t}{e^{t}+1}e^{xt}\right)  \label{Equation 6} \\
&=&\frac{1}{2t}\sum_{n=0}^{\infty }\left( G_{n}\left( x+1\right) +G_{n}\left( x\right) \right) \frac{t^{n}}{n!}\text{.}  \notag
\end{eqnarray}
By expression of (\ref{Equation 6}), we have
\begin{equation}
2x^{n}=\frac{G_{n+1}\left( x+1\right) +G_{n+1}\left( x\right) }{n+1}  \label{Equation 7}
\end{equation}
(see \cite{Kim 8}, \cite{Jolany 2}).
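To illustrate (\ref{Equation 7}), take $n=1$: since $G_{2}\left( x+1\right) =2x+1$ and $G_{2}\left( x\right) =2x-1$, we indeed have $\frac{G_{2}\left( x+1\right) +G_{2}\left( x\right) }{2}=2x$.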
Let $\mathcal{P}_{n}=\left\{ p\left( x\right) \in \mathbb{Q}\left[ x\right] \mid \deg p\left( x\right) \leq n\right\} $ be the $\left( n+1\right) $-dimensional vector space over $\mathbb{Q}$. Probably, $\left\{ 1,x,x^{2},\cdots ,x^{n}\right\} $ is the most natural
basis for $\mathcal{P}_{n}$. From this, we note that $\left\{ G_{1}\left( x\right) ,G_{2}\left( x\right) ,\cdots ,G_{n+1}\left( x\right) \right\} $ is
also a good basis for the space $\mathcal{P}_{n}$.
In \cite{Kim 5}, Kim $et$ $al$. introduced the following integrals
\begin{equation}
I_{m,n}=\int_{0}^{1}B_{m}\left( x\right) x^{n}dx\text{ \ and \ }J_{m,n}=\int_{0}^{1}E_{m}\left( x\right) x^{n}dx  \label{Equation 8}
\end{equation}
where $B_{m}\left( x\right) $ and $E_{m}\left( x\right) $ are called
Bernoulli polynomials and Euler polynomials, respectively. They are
defined by the following generating functions
\begin{eqnarray}
e^{B\left( x\right) t} &=&\sum_{n=0}^{\infty }B_{n}\left( x\right) \frac{t^{n}}{n!}=\frac{t}{e^{t}-1}e^{xt},\text{ }\left\vert t\right\vert <2\pi ,  \label{Equation 9} \\
e^{E\left( x\right) t} &=&\sum_{n=0}^{\infty }E_{n}\left( x\right) \frac{t^{n}}{n!}=\frac{2}{e^{t}+1}e^{xt},\text{ }\left\vert t\right\vert <\pi  \label{Equation 11}
\end{eqnarray}
with $B^{n}\left( x\right) :=B_{n}\left( x\right) $ and $E^{n}\left( x\right) :=E_{n}\left( x\right) $, symbolically. By substituting $x=0$ in
(\ref{Equation 9}) and (\ref{Equation 11}), we readily see that
\begin{eqnarray}
\frac{t}{e^{t}-1} &=&\sum_{n=0}^{\infty }B_{n}\left( 0\right) \frac{t^{n}}{n!},  \label{Equation 10} \\
\frac{2}{e^{t}+1} &=&\sum_{n=0}^{\infty }E_{n}\left( 0\right) \frac{t^{n}}{n!}.  \label{Equation 12}
\end{eqnarray}
Here $B_{n}\left( 0\right) :=B_{n}$ and $E_{n}\left( 0\right) :=E_{n}$ are
called Bernoulli numbers and Euler numbers, respectively. Thus, Bernoulli
and Euler numbers and polynomials satisfy the following identities
\begin{equation}
B_{n}\left( x\right) =\sum_{k=0}^{n}\binom{n}{k}B_{k}x^{n-k}\text{ and }E_{n}\left( x\right) =\sum_{k=0}^{n}\binom{n}{k}E_{k}x^{n-k}.  \label{Equation 13}
\end{equation}
(for details, see \cite{Acikgoz}, \cite{Bayad}, \cite{araci 1}, \cite{Cangul}, \cite{Kim 11}, \cite{Kim 9}, \cite{Luo}). By (\ref{Equation 10}) and (\ref{Equation 12}), we have the following recurrence relations of Bernoulli and
Euler numbers:
\begin{equation}
B_{0}=1,\text{ }B_{n}\left( 1\right) -B_{n}=\delta _{1,n}\text{ and }E_{0}=1,\text{ }E_{n}\left( 1\right) +E_{n}=2\delta _{0,n}  \label{Equation 14}
\end{equation}
where $\delta _{n,m}$ is the Kronecker symbol, which is defined by
\begin{equation}
\delta _{n,m}=\left\{
\begin{array}{cc}
1, & \text{if }n=m \\
0, & \text{if }n\neq m
\end{array}
\right.  \label{Equation 15}
\end{equation}
In the complex plane, we can write the following
\begin{equation}
\sum_{n=0}^{\infty }G_{n}\frac{\left( it\right) ^{n}}{n!}=it\frac{2}{e^{it}+1}=it\sum_{n=0}^{\infty }E_{n}\frac{\left( it\right) ^{n}}{n!}\text{.}  \label{Equation 37}
\end{equation}
By (\ref{Equation 37}), we have
\begin{equation*}
\sum_{n=0}^{\infty }\left( \frac{G_{n+1}}{n+1}\right) \frac{\left( it\right) ^{n}}{n!}=\sum_{n=0}^{\infty }E_{n}\frac{\left( it\right) ^{n}}{n!},
\end{equation*}
and by comparing coefficients on both sides of the above equality, we have
\begin{equation}
\frac{G_{n+1}}{n+1}=E_{n},\text{ (see \cite{Kim 8}).}  \label{Equation 29}
\end{equation}
Via equation (\ref{Equation 29}), our results in the present paper can
be extended to Euler polynomials.
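For instance, $E_{1}=\frac{G_{2}}{2}=-\frac{1}{2}$, $E_{2}=\frac{G_{3}}{3}=0$ and $E_{3}=\frac{G_{4}}{4}=\frac{1}{4}$.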
From Eqs. (\ref{Equation 8})-(\ref{Equation 15}), Kim $et$ $al$. derived
some new formulae for products of two and several Bernoulli and Euler
polynomials (for details, see [21-26]). In \cite{He}, He and Wang also gave
formulae for products of the Apostol-Bernoulli and Apostol-Euler polynomials.
Motivated by the above, we write this paper. We give
some interesting properties which are derived from the Genocchi basis.
From our methods, we obtain some new identities involving Bernoulli and
Euler polynomials. Also, by using (\ref{Equation 29}), we derive our results
in terms of Euler polynomials.
\section{\textbf{On the Genocchi numbers and polynomials}}
In this section, we introduce the following integral: for $m,n\geq 1$,
\begin{equation}
T_{m,n}=\int_{0}^{1}G_{m}\left( x\right) x^{n}dx\text{.}  \label{Equation 16}
\end{equation}
Integrating by parts, (\ref{Equation 16}) becomes
\begin{equation*}
T_{m,n}=-\frac{G_{m+1}}{m+1}-\frac{n}{m+1}\int_{0}^{1}G_{m+1}\left( x\right)
x^{n-1}dx\text{.}
\end{equation*}
Thus, we have the recurrence formula
\begin{equation*}
T_{m,n}=-\frac{G_{m+1}}{m+1}-\frac{n}{m+1}T_{m+1,n-1};
\end{equation*}
by continuing with the above recurrence relation, we derive that
\begin{equation*}
T_{m,n}=-\frac{G_{m+1}}{m+1}+\left( -1\right) ^{2}\frac{n}{\left( m+1\right) \left( m+2\right) }G_{m+2}+\left( -1\right) ^{2}\frac{n\left( n-1\right) }{\left( m+1\right) \left( m+2\right) }T_{m+2,n-2}.
\end{equation*}
Now also, we develop the following for the sequel of this paper:
\begin{equation}
T_{m,n}=\frac{1}{n+1}\sum_{j=1}^{n}\left( -1\right) ^{j}\frac{\binom{n+1}{j
}{\binom{m+j}{m}}G_{m+j}+2\frac{\left( -1\right) ^{n+1}G_{n+m+1}}{\left(
n+m+1\right) \binom{n+m}{m}}. \label{Equation 17}
\end{equation}
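As a quick check of (\ref{Equation 17}), take $m=n=1$: directly, $T_{1,1}=\int_{0}^{1}xdx=\frac{1}{2}$, while the right hand side gives $\frac{1}{2}\left( -1\right) \frac{\binom{2}{1}}{\binom{2}{1}}G_{2}+2\frac{\left( -1\right) ^{2}G_{3}}{3\binom{2}{1}}=\frac{1}{2}$, using $G_{2}=-1$ and $G_{3}=0$.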
Let us now introduce the polynomial
\begin{equation*}
p\left( x\right) =\sum_{l=0}^{n}G_{l}\left( x\right) x^{n-l}\text{, with }n\in \mathbb{N}.
\end{equation*}
Taking the $k$-th derivative of the above equality, we have
\begin{eqnarray}
p^{\left( k\right) }\left( x\right) &=&\left( n+1\right) n\left( n-1\right) \cdots \left( n-k+2\right) \sum_{l=k}^{n}G_{l-k}\left( x\right) x^{n-l}  \label{Equation 18} \\
&=&\frac{\left( n+1\right) !}{\left( n-k+1\right) !}\sum_{l=k}^{n}G_{l-k}\left( x\right) x^{n-l}\text{ }\left( k=0,1,2,\cdots ,n\right) .  \notag
\end{eqnarray}
\begin{theorem}
\label{Theorema}The following equality holds true
\begin{gather*}
\sum_{l=0}^{n}G_{l}\left( x\right) x^{n-l} \\
=\sum_{k=1}^{n-1}\left( \sum_{j=1}^{n-k}\left( -1\right) ^{j}\frac{\binom{n-k+1}{j}}{\left( n-k+1\right) \binom{k+j}{k}}G_{k+j}+2\frac{\left( -1\right) ^{n-k+1}G_{n+1}}{\left( n+1\right) \binom{n}{k}}-2\frac{G_{k+1}}{k+1}\right) \\
+\sum_{k=1}^{n}\left( \frac{\binom{n+2}{k}}{n+2}\sum_{l=k-1}^{n-1}\left(
2-G_{l-k+1}-G_{n-k+1}\right) \right) B_{k}\left( x\right) \text{.}
\end{gather*}
\end{theorem}
\begin{proof}
On account of the properties of the Genocchi basis for the space of
polynomials of degree less than or equal to $n$ with coefficients in $\mathbb{Q}$, $p\left( x\right) $ can be written as follows
\begin{equation}
p\left( x\right) =\sum_{k=0}^{n}a_{k}B_{k}\left( x\right) =a_{0}+\sum_{k=1}^{n}a_{k}B_{k}\left( x\right) \text{.}  \label{Equation 19}
\end{equation}
Therefore, by (\ref{Equation 19}), we obtain
\begin{align}
a_{0}& =\int_{0}^{1}p\left( x\right) dx=\sum_{k=1}^{n}\int_{0}^{1}G_{k}\left( x\right) x^{n-k}dx=\sum_{k=1}^{n}T_{k,n-k}=\sum_{k=1}^{n-1}T_{k,n-k}+T_{n,0}  \label{Equation 30} \\
& =\sum_{k=1}^{n-1}\frac{1}{n-k+1}\sum_{j=1}^{n-k}\left( -1\right) ^{j}\frac{\binom{n-k+1}{j}}{\binom{k+j}{k}}G_{k+j}+2\frac{\left( -1\right) ^{n-k+1}G_{n+1}}{\left( n+1\right) \binom{n}{k}}-2\frac{G_{k+1}}{k+1}\text{.}  \notag
\end{align}
From expression of (\ref{Equation 18}), we get
\begin{eqnarray}
a_{k} &=&\frac{1}{k!}\left( p^{\left( k-1\right) }\left( 1\right) -p^{\left( k-1\right) }\left( 0\right) \right)  \label{Equation 31} \\
&=&\frac{\left( n+1\right) !}{k!\left( n-k+2\right) !}\sum_{l=k-1}^{n}\left( G_{l-k+1}\left( 1\right) -0^{n-l}G_{l-k+1}\right)  \notag \\
&=&\frac{\binom{n+2}{k}}{n+2}\sum_{l=k-1}^{n-1}\left( 2-G_{l-k+1}-G_{n-k+1}\right) \text{.}  \notag
\end{eqnarray}
Substituting equations (\ref{Equation 30}) and (\ref{Equation 31}) into (\ref{Equation 19}), we arrive at the desired result.
\end{proof}
By using (\ref{Equation 29}) and Theorem \ref{Theorema}, we get the
following corollary, which is stated in terms of Euler polynomials.
\begin{corollary}
For any $n\in \mathbb{N}$, we have
\begin{gather*}
\sum_{l=0}^{n}G_{l}\left( x\right) x^{n-l} \\
=\sum_{k=1}^{n-1}\left( \sum_{j=1}^{n-k}\left( -1\right) ^{j}\frac{\left( k+j\right) \binom{n-k+1}{j}}{\left( n-k+1\right) \binom{k+j}{j}}E_{k+j-1}+2\frac{\left( -1\right) ^{n-k+1}E_{n}}{\binom{n}{k}}-2E_{k}\right) \\
+\sum_{k=1}^{n}\left( \frac{\binom{n+2}{k}}{n+2}\sum_{l=k-1}^{n-1}\left( 2-\left( l-k+1\right) E_{l-k}-\left( n-k+1\right) E_{n-k}\right) \right) B_{k}\left( x\right) \text{.}
\end{gather*}
\end{corollary}
\begin{theorem}
\label{Theorem 2}The following nice identity
\begin{gather*}
\sum_{l=0}^{n}G_{l}\left( x\right) x^{n-l} \\
=\sum_{k=0}^{n}\left( \left( n+1\right) \binom{n}{k}-\frac{\binom{n+1}{k}}{2}\sum_{l=k}^{n-1}\left( G_{l-k}-G_{n-k}\right) \right) E_{k}\left( x\right)
\end{gather*}
is true.
\end{theorem}
\begin{proof}
Let us now consider the polynomial $p\left( x\right) $ in terms of Euler
polynomials as follows
\begin{equation*}
p\left( x\right) =\sum_{k=0}^{n}b_{k}E_{k}\left( x\right) \text{.}
\end{equation*}
In \cite{Kim 5}, Kim $et$ $al$. gave the coefficients $b_{k}$ by utilizing
the definition of Bernoulli polynomials. Now also, we give the
coefficients $b_{k}$ by using the definition of Genocchi polynomials, as
follows
\begin{eqnarray*}
b_{k} &=&\frac{1}{2k!}\left( p^{\left( k\right) }\left( 1\right) +p^{\left( k\right) }\left( 0\right) \right) \\
&=&\frac{\left( n+1\right) !}{2k!\left( n-k+1\right) !}\sum_{l=k}^{n}\left( G_{l-k}\left( 1\right) +0^{n-l}G_{l-k}\right) \\
&=&\left( n+1\right) \binom{n}{k}-\frac{\binom{n+1}{k}}{2}\sum_{l=k}^{n-1}\left( G_{l-k}-G_{n-k}\right) \text{.}
\end{eqnarray*}
After the above applications, we complete the proof of the theorem.
\end{proof}
By employing (\ref{Equation 29}) and Theorem \ref{Theorem 2}, we have the
following corollary, which involves sums of products of two Euler polynomials.
\begin{corollary}
For each $n\in \mathbb{N}$, we have
\begin{gather*}
\sum_{l=0}^{n}G_{l}\left( x\right) x^{n-l} \\
=\sum_{k=0}^{n}\left( \left( n+1\right) \binom{n}{k}-\frac{\binom{n+1}{k}}{2}\sum_{l=k}^{n-1}\left( \left( l-k\right) E_{l-k-1}-\left( n-k\right) E_{n-k-1}\right) \right) E_{k}\left( x\right) \text{.}
\end{gather*}
\end{corollary}
We now present the following theorem, which will be an interesting and
worthwhile result for study in analytic number theory.
\begin{theorem}
The following equality holds:
\begin{gather*}
\sum_{l=0}^{n}\frac{1}{l!\left( n-l\right) !}G_{l}\left( x\right) x^{n-l} \\
=\sum_{l=1}^{n}\frac{2^{l-2}}{l!}\sum_{j=l-1}^{n}\frac{\left( 2-G_{j-l+1}\right) G_{l}\left( x\right) }{\left( j-l+1\right) !\left( n-j\right) !}+\frac{2^{l-2}}{l!\left( n-l+1\right) !}G_{n-l+1}G_{l}\left( x\right) \text{.}
\end{gather*}
\end{theorem}
\begin{proof}
It is proved by using the following polynomial $p\left( x\right) $:
\begin{equation}
p\left( x\right) =\sum_{l=0}^{n}\frac{1}{l!\left( n-l\right) !}G_{l}\left( x\right) x^{n-l}=\sum_{l=0}^{n}a_{l}G_{l}\left( x\right) \text{.}  \label{Equation 33}
\end{equation}
It is not difficult to indicate the following
\begin{equation}
p^{\left( k\right) }\left( x\right) =2^{k}\sum_{l=k}^{n}\frac{1}{\left( l-k\right) !\left( n-l\right) !}G_{l-k}\left( x\right) x^{n-l}\text{.}  \label{Equation 20}
\end{equation}
Then, we see that for $k=1,2,\cdots ,n$,
\begin{align}
a_{l}& =\frac{1}{2l!}\left( p^{\left( l-1\right) }\left( 1\right) +p^{\left( l-1\right) }\left( 0\right) \right)  \label{Equation 32} \\
& =\frac{2^{l-2}}{l!}\sum_{j=l-1}^{n}\frac{1}{\left( j-l+1\right) !\left( n-j\right) !}\left( G_{j-l+1}\left( 1\right) +0^{n-j}G_{j-l+1}\right)  \notag \\
& =\frac{2^{l-2}}{l!}\sum_{j=l-1}^{n}\frac{\left( 2-G_{j-l+1}\right) }{\left( j-l+1\right) !\left( n-j\right) !}+\frac{2^{l-2}}{l!\left( n-l+1\right) !}G_{n-l+1}.  \notag
\end{align}
By (\ref{Equation 33}) and (\ref{Equation 32}), we arrive at the desired
result.
\end{proof}
\begin{theorem}
\label{theorem 3}The following identity
\begin{gather}
\sum_{l=0}^{n}\frac{1}{l!\left( n-l\right) !}G_{l}\left( x\right) x^{n-l}  \label{Equation 23} \\
=-2\frac{G_{n+1}}{n+1}+\sum_{l=1}^{n-1}\sum_{j=1}^{n-l}\frac{\left( -1\right) ^{j}}{l!\left( n-l+1\right) !}\frac{\binom{n-l+1}{j}}{\binom{l+j}{l}}G_{l+j}+2\frac{\left( -1\right) ^{n-l+1}G_{n+1}}{\left( n+1\right) \binom{n}{l}}  \notag \\
+\sum_{k=1}^{n}\left( \frac{2^{k-1}}{k!}\sum_{l=k-1}^{n}\frac{\left( 2-G_{l-k+1}\right) }{\left( l-k+1\right) !\left( n-l\right) !}-\frac{2^{k-1}}{k!\left( n-k+1\right) !}G_{n-k+1}\right) B_{k}\left( x\right)  \notag
\end{gather}
is true.
\end{theorem}
\begin{proof}
Now also, let us take the polynomial in terms of Bernoulli polynomials as
\begin{equation}
p\left( x\right) =\sum_{k=0}^{n}a_{k}B_{k}\left( x\right) .  \label{Equation 34}
\end{equation}
By using the above identity, we proceed as follows:
\begin{align}
a_{0}& =\int_{0}^{1}p\left( x\right) dx=\sum_{l=0}^{n}\frac{1}{l!\left( n-l\right) !}\int_{0}^{1}G_{l}\left( x\right) x^{n-l}dx  \label{Equation 35} \\
& =\sum_{l=0}^{n}\frac{1}{l!\left( n-l\right) !}T_{l,n-l}=T_{n,0}+\sum_{l=1}^{n-1}\frac{1}{l!\left( n-l\right) !}T_{l,n-l}  \notag \\
& =-2\frac{G_{n+1}}{n+1}+\sum_{l=1}^{n-1}\sum_{j=1}^{n-l}\frac{\left( -1\right) ^{j}}{l!\left( n-l+1\right) !}\frac{\binom{n-l+1}{j}}{\binom{l+j}{l}}G_{l+j}+2\frac{\left( -1\right) ^{n-l+1}G_{n+1}}{\left( n+1\right) \binom{n}{l}}.  \notag
\end{align}
By (\ref{Equation 20}), we compute the coefficients $a_{k}$ as follows:
\begin{eqnarray}
a_{k} &=&\frac{1}{k!}\left( p^{\left( k-1\right) }\left( 1\right) -p^{\left( k-1\right) }\left( 0\right) \right)  \label{Equation 36} \\
&=&\frac{2^{k-1}}{k!}\sum_{l=k-1}^{n}\frac{1}{\left( l-k+1\right) !\left( n-l\right) !}\left( G_{l-k+1}\left( 1\right) -0^{n-l}G_{l-k+1}\right)  \notag \\
&=&\frac{2^{k-1}}{k!}\sum_{l=k-1}^{n}\frac{\left( 2-G_{l-k+1}\right) }{\left( l-k+1\right) !\left( n-l\right) !}-\frac{2^{k-1}}{k!\left( n-k+1\right) !}G_{n-k+1}\text{.}  \notag
\end{eqnarray}
When we substitute (\ref{Equation 35}) and (\ref{Equation 36}) into (\ref{Equation 34}), the proof of the theorem is completed.
\end{proof}
By using equation (\ref{Equation 29}) and Theorem \ref{theorem 3}, we obtain
the following corollary.
\begin{corollary}
For any $n\in \mathbb{N}$, we have
\begin{gather*}
\sum_{l=0}^{n}\frac{1}{l!\left( n-l\right) !}G_{l}\left( x\right) x^{n-l} \\
=-2E_{n}+\sum_{l=1}^{n-1}\sum_{j=1}^{n-l}\frac{\left( -1\right) ^{j}}{l!\left( n-l+1\right) !}\frac{\left( l+j\right) \binom{n-l+1}{j}}{\binom{l+j}{l}}E_{l+j-1}+2\frac{\left( -1\right) ^{n-l+1}E_{n}}{\binom{n}{l}} \\
+\sum_{k=1}^{n}\left( \frac{2^{k-1}}{k!}\sum_{l=k-1}^{n}\frac{\left( \frac{2}{l-k+1}-E_{l-k}\right) }{\left( l-k\right) !\left( n-l\right) !}-\frac{2^{k-1}}{k!\left( n-k\right) !}E_{n-k}\right) B_{k}\left( x\right)
\end{gather*}
\end{corollary}
In \cite{Kim 8}, it is well known that
\begin{equation}
G_{n}\left( x+y\right) =\sum_{k=0}^{n}\binom{n}{k}G_{k}\left( x\right) y^{n-k}\text{.}  \label{Equation 21}
\end{equation}
For $x=y$ in (\ref{Equation 21}), we then have the following
\begin{equation}
\frac{1}{n!}G_{n}\left( 2x\right) =\sum_{k=0}^{n}\frac{1}{k!\left( n-k\right) !}G_{k}\left( x\right) x^{n-k}.  \label{Equation 22}
\end{equation}
By comparing equations (\ref{Equation 23}) and (\ref{Equation 22}),
we readily derive the following corollary.
\begin{corollary}
\begin{equation*}
\frac{1}{n!}G_{n}\left( 2x\right) =\text{the right-hand side of the equation in Theorem \ref{theorem 3}.}
\end{equation*}
\end{corollary}
\begin{theorem}
\label{theorem 4}The following equality
\begin{gather*}
\sum_{k=1}^{n-1}\frac{1}{k\left( n-k\right) }G_{k}\left( x\right) x^{n-k} \\
=\sum_{k=0}^{n}\left( \frac{\binom{n}{k}}{2\left( n-k+1\right) }\left(
H_{n-1}-H_{n-k}\right) -\frac{\binom{n}{k}}{2n}\sum_{l=k}^{n-1}\frac{\left(
2-G_{l-k+1}\right) }{\left( n-l\right) \left( l-k+1\right) }\right)
G_{k}\left( x\right)
\end{gather*
holds true.
\end{theorem}
\begin{proof}
To prove this theorem, we introduce the following polynomial $p\left(
x\right) :$
\begin{equation*}
p\left( x\right) =\sum_{k=1}^{n-1}\frac{1}{k\left( n-k\right) }G_{k}\left(
x\right) x^{n-k}\text{.}
\end{equation*
Then, we derive $k$-th derivative of $p\left( x\right) $ is given b
\begin{equation}
p^{\left( k\right) }\left( x\right) =C_{k}\left( x^{n-k}+G_{n-k}\left(
x\right) \right) +\left( n-1\right) \left( n-2\right) \cdots \left(
n-k\right) \sum_{l=k+1}^{n-1}\frac{G_{l-k}\left( x\right) x^{n-l}}{\left(
n-l\right) \left( l-k\right) }, \label{Equation 24}
\end{equation
wher
\begin{equation*}
C_{k}=\frac{\sum_{j=1}^{k}\left( n-1\right) ...\left( n-j+1\right) \left(
n-j-1\right) ...\left( n-k\right) }{n-k}\text{ }\left( k=1,2,...,n-1\right)
\text{, }C_{0}=0\text{.}
\end{equation*
We want to note tha
\begin{equation*}
p^{\left( n\right) }\left( x\right) =\left( p^{\left( n-1\right) }\left(
x\right) \right)
{\acute{}
=C_{n-1}\left( x+G_{1}\left( x\right) \right) =C_{n-1}=\left( n-1\right)
!H_{n-1},
\end{equation*
where $H_{n-1}$ are called Harmonic numbers, which are defined b
\begin{equation*}
H_{n-1}=\sum_{j=1}^{n-1}\frac{1}{j}\text{.}
\end{equation*
With the properties of Genocchi basis for the space of polynomials of degree
less than or equal to $n$ with coefficients in
\mathbb{Q}
$, $p\left( x\right) $ is introduced b
\begin{equation}
p\left( x\right) =\sum_{k=0}^{n}a_{k}G_{k}\left( x\right) \text{.}
\label{Equation 25}
\end{equation
From (\ref{Equation 25}), we obtain that
\begin{eqnarray*}
a_{k} &=&\frac{1}{2k!}\left( p^{\left( k-1\right) }\left( 1\right)
+p^{\left( k-1\right) }\left( 0\right) \right) \\
&=&\frac{C_{k-1}}{2k!}\left( 1+2\delta _{1,n-k+1}\right) +\frac{\left(
n-1\right) !}{2k!\left( n-k\right) !}\sum_{l=k}^{n-1}\frac{\left(
G_{l-k+1}\left( 1\right) +0^{n-l}G_{l-k+1}\right) }{\left( n-l\right) \left(
l-k+1\right) } \\
&=&\frac{C_{k-1}}{2k!}-\frac{\binom{n}{k}}{2n}\sum_{l=k}^{n-1}\frac{\left(
2-G_{l-k+1}\right) }{\left( n-l\right) \left( l-k+1\right) }.
\end{eqnarray*}
As a result,
\begin{equation*}
a_{n}=\frac{1}{2n!}\left( p^{\left( n\right) }\left( 1\right) +p^{\left(
n\right) }\left( 0\right) \right) =\frac{C_{n-1}}{n!}=\frac{H_{n-1}}{n}\text{.}
\end{equation*}
In \cite{Kim 5}, it is well-known that
\begin{equation}
\frac{C_{k-1}}{k!}=\frac{\binom{n}{k}}{\left( n-k+1\right) }\left(
H_{n-1}-H_{n-k}\right) \text{.} \label{Equation 26}
\end{equation}
Combining (\ref{Equation 24}), (\ref{Equation 25}) and (\ref{Equation 26}), we
arrive at the desired result.
\end{proof}
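The coefficient formula in the proof can be checked symbolically as well; a
minimal sketch (again via $G_{k}\left( x\right) =kE_{k-1}\left( x\right) $;
since $\deg G_{k}=k-1$, a polynomial of degree $d$ uses $G_{1},\ldots ,G_{d+1}$):
\begin{verbatim}
from sympy import symbols, euler, diff, factorial, expand, Rational

x = symbols('x')

def genocchi_poly(k, t):
    return k * euler(k - 1, t) if k >= 1 else 0

p = x**5 + 3*x**2 + 1              # an arbitrary test polynomial, degree 5

recon = 0
for k in range(1, 7):              # k = 1, ..., deg(p) + 1
    d = diff(p, x, k - 1)
    a_k = Rational(1, 2) * (d.subs(x, 1) + d.subs(x, 0)) / factorial(k)
    recon += a_k * genocchi_poly(k, x)

assert expand(recon) == expand(p)
\end{verbatim}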
From (\ref{Equation 29}) and Theorem \ref{theorem 4}, we obtain the following corollary.
\begin{corollary}
The following identity holds
\begin{gather*}
\sum_{k=1}^{n-1}\frac{1}{k\left( n-k\right) }G_{k}\left( x\right) x^{n-k} \\
=\sum_{k=1}^{n}\left( \frac{\binom{n}{k}}{2\left( n-k+1\right) }\left(
H_{n-1}-H_{n-k}\right) -\frac{\binom{n}{k}}{2n}\sum_{l=k}^{n-1}\frac{\left(
\frac{2}{l-k+1}-E_{l-k}\right) }{\left( n-l\right) }\right) kE_{k-1}\left(
x\right)
\end{gather*}
\end{corollary}
\section{\textbf{Further Remarks}}
Let $\mathcal{P}_{n}=\left\{ \sum_{j=0}^{n}a_{j}x^{j}\mid a_{j}\in
\mathbb{Q}
\right\} $ be the space of polynomials of degree less than or equal to $n.$
In this final section, we will give the matrix formulation of Genocchi
polynomials. Let us now consider the polynomial $p\left( x\right) \in
\mathcal{P}_{n}$ as a linear combination of Genocchi basis polynomials with
\begin{equation*}
p\left( x\right) =C_{1}G_{1}\left( x\right) +C_{2}G_{2}\left( x\right)
+\cdots +C_{n+1}G_{n+1}\left( x\right) \text{.}
\end{equation*}
We can write the above as the product of a row vector and a column vector:
\begin{equation}
p\left( x\right) =\left(
\begin{array}{cccc}
G_{1}\left( x\right) & G_{2}\left( x\right) & \cdots & G_{n+1}\left( x\right)
\end{array}
\right) \left(
\begin{array}{c}
C_{1} \\
C_{2} \\
\vdots \\
C_{n+1}
\end{array}
\right) . \label{Equation 27}
\end{equation}
From (\ref{Equation 27}), we consider the following equation:
\begin{equation*}
p\left( x\right) =\left(
\begin{array}{ccccc}
1 & x & x^{2} & \cdots & x^{n}
\end{array}
\right) \left(
\begin{array}{cccc}
g_{1,1} & g_{1,2} & \cdots & g_{1,n+1} \\
0 & g_{2,2} & \cdots & g_{2,n+1} \\
0 & 0 & \cdots & g_{3,n+1} \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & g_{n+1,n+1}
\end{array}
\right) \left(
\begin{array}{c}
C_{1} \\
C_{2} \\
C_{3} \\
\vdots \\
C_{n+1}
\end{array}
\right) ,
\end{equation*}
where $g_{i,j}$ are the coefficients of the respective Genocchi polynomials
in the power basis. We list the first few Genocchi polynomials as follows:
\begin{equation*}
G_{1}\left( x\right) =1,\text{ }G_{2}\left( x\right) =2x-1,\text{ }
G_{3}\left( x\right) =3x^{2}-3x,\text{ }G_{4}\left( x\right)
=4x^{3}-6x^{2}+1,\cdots .
\end{equation*}
In the quadratic case ($n=2$), the matrix representation is
\begin{equation*}
p\left( x\right) =\left(
\begin{array}{ccc}
1 & x & x^{2
\end{array
\right) \left(
\begin{array}{ccc}
1 & -1 & 0 \\
0 & 2 & -3 \\
0 & 0 & 3
\end{array}
\right) \left(
\begin{array}{c}
C_{1} \\
C_{2} \\
C_{3}
\end{array}
\right) \text{.}
\end{equation*}
In the cubic case ($n=3$), the matrix representation is
\begin{equation*}
p\left( x\right) =\left(
\begin{array}{cccc}
1 & x & x^{2} & x^{3}
\end{array}
\right) \left(
\begin{array}{cccc}
1 & -1 & 0 & 1 \\
0 & 2 & -3 & 0 \\
0 & 0 & 3 & -6 \\
0 & 0 & 0 & 4
\end{array}
\right) \left(
\begin{array}{c}
C_{1} \\
C_{2} \\
C_{3} \\
C_{4}
\end{array}
\right) \text{.}
\end{equation*}
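The coefficient matrix $\left( g_{i,j}\right) $ can also be generated
programmatically; a short sketch (Python with SymPy, via $G_{k}\left( x\right)
=kE_{k-1}\left( x\right) $) whose output for $n=3$ reproduces the cubic matrix
above:
\begin{verbatim}
from sympy import symbols, euler, Matrix, Poly

x = symbols('x')

def genocchi_poly(k, t):
    return k * euler(k - 1, t) if k >= 1 else 0

n = 3
# column j holds the power-basis coefficients of G_{j+1}(x)
G = Matrix(n + 1, n + 1, lambda i, j:
           Poly(genocchi_poly(j + 1, x), x).as_dict().get((i,), 0))
print(G)  # [[1, -1, 0, 1], [0, 2, -3, 0], [0, 0, 3, -6], [0, 0, 0, 4]]
\end{verbatim}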
The identities for Genocchi polynomials obtained throughout this paper thus
lend themselves naturally to such a matrix formulation.
\section*{Acknowledgments}
This work was supported in part by the National Key R\&D Program of China (No. 2018YFB1403001).
\bibliographystyle{IEEEtran}
\section{Introduction}\label{sec-intro}
Deep Neural Networks (DNNs) have achieved remarkable progress in various applications such as computer vision \cite{howard2017mobilenets}, sales forecasting \cite{chen2019much}, and recommender systems \cite{zhu2019dtcdr}, due to their powerful ability to learn hierarchical representations.
Training a good DNN model often requires a large amount of data. However, in practice, integrated data are always held by different parties.
Traditionally, when multiple parties want to build a DNN model (e.g., a fraud detection model) together, they need to aggregate their data and train the model with the plaintext data, as is shown in Figure \ref{compare-exam-nn}.
With all kinds of national data protection regulations coming into force, data isolation has become a serious problem currently.
As a result, different organizations (data holders) are reluctant to, or simply cannot, share sensitive data with others, and each data holder has to train DNN models using its own data.
Therefore,
such a \textit{data isolation problem} has constrained the power of DNN, since DNN usually achieves better performance with more high-quality data.
\begin{figure*}
\centering
\subfigure [\emph{Plaintext DNN}]{ \label{compare-exam-nn} \includegraphics[height=3.5cm]{figures/example-nn}} \hspace{.5cm}
\subfigure[\emph{Algorithmic DNN \cite{vepakomma2018split}}] { \label{compare-exam-split} \includegraphics[height=3.5cm]{figures/example-split}} \hspace{.5cm}
\subfigure [\emph{Cryptographic DNN \cite{mohassel2017secureml}}]{ \label{compare-exam-sml} \includegraphics[height=3.5cm]{figures/example-secureml}}
\caption{Comparison of existing approaches. Here, we assume there are only two data holders ($\mathcal{A}$ and $\mathcal{B}$), $\mathcal{A}$ has partial feature ($X_\mathcal{A}$, shown in yellow dots) and label ($y$, shown in yellow squares), and $\mathcal{B}$ has partial feature ($X_\mathcal{B}$, shown in green dots). }
\label{compare-exam}
\end{figure*}
\subsection{Existing methods and Shortcomings}
Existing research solves the above problem from either an \textit{algorithmic} perspective or a \textit{cryptographic} perspective.
\textbf{Algorithmic methods} build privacy preserving DNN by splitting the computation graphs of DNN from an algorithmic perspective \cite{gupta2018distributed,vepakomma2018split,osia2019hybrid,gu2019securing,hu2019fdml}. Their common idea is to let each data holder first use a partial neural network (i.e., an encoder) to encode the raw input individually and then send the encoded representations to another data holder (or a server) for the rest of the model training.
Although such algorithmic methods are efficient, they have two shortcomings.
First, the efficiency of those methods usually comes at the expense of model performance, as data holders train partial neural networks individually and the correlation between their data is not captured.
Second, data privacy is not fully protected since the raw labels need to be shared with server during model training, as shown in Figure \ref{compare-exam-split}. Meanwhile the encoded representations may unintentionally reveal sensitive information (e.g., membership and property information~\cite{wu_sgld,collabrative_leakge,ganju2018property}).
\textbf{Cryptographic methods} focus on using pure cryptographic techniques, e.g., homomorphic encryption \cite{gilad2016cryptonets} or secure multi-party computation \cite{mohassel2017secureml,demmler2015aby}, for multi-parties to build privacy-preserving neural networks, as is shown in Figure \ref{compare-exam-sml}.
Although such cryptographic methods have a strong privacy guarantee, they are difficult to scale to deep structures and large datasets due to the high communication and computational complexity of the cryptographic techniques.
However, real-world applications always have two characteristics:
(1) datasets are large due to the massive data held by big companies; and
(2) high-performance neural network models are always deep and wide, which come with huge computational costs.
Therefore, efficiency becomes the main shortcoming when applying existing cryptographic methods in practice.
\subsection{Our solution}
\nosection{Methodology}
In this paper, we propose a Scalable and Privacy-preserving deep Neural Network (SPNN) learning paradigm,
which combines the advantages of existing algorithmic methods and cryptographic methods.
First, from the \textit{algorithmic perspective}, for scalability concerns, we split the computation graph of a given DNN model into two parts.
The computations related to private data are performed by data holders and the rest heavy computations are delegated to a computation-efficient server.
Here, private data refers to the input and output of DNN models, i.e., features and labels.
Second, from the \textit{cryptographic perspective}, for accuracy and privacy concerns, we let data holders securely calculate the first hidden layer collaboratively.
More specifically, data holders firstly adopt cryptographic techniques, e.g., secret sharing and homomorphic encryption, to perform private feature related computations cooperatively. They generate the first hidden layer of DNN and send it to server.
Then, the server performs the successive hidden layer related computations, gets the final hidden layer, and sends it to the data holder who has labels.
The data holder who has labels conducts private label related computations and gets predictions based on the final hidden layer.
The backward computations are performed reversely.
In summary, private data and corresponding model are held by data holders, and the heavy non-private data related computations are done by the server.
Our proposed SPNN~only involves cryptographic techniques for the first hidden layer, and therefore enjoys good scalability.
To prevent privacy leakage of the hidden features on server, we further propose to inject moderate noises into the gradient during training.
A typical way is to use differentially private stochastic gradient descent (DP-SGD). However, in practice it leads to significant model accuracy drop~\cite{dp_sgd}.
In light of some recent works~\cite{wu_sgld}, we propose using Stochastic Gradient Langevin Dynamics (SGLD) to reduce the potential information leakage.
\nosection{Implementation}
We implement SPNN~in a decentralized network with three kinds of computation nodes, i.e., a coordinator, a server, and a group of clients.
The coordinator splits the computation graph and controls the start and termination of SPNN~based on a certain condition, e.g., the number of iterations.
The clients are data holders who are in charge of private data related computations, and the server is responsible for hidden layer related computations which can be adapted to existing deep learning backends such as PyTorch.
Communications between the clients and the server make sure the model parameters are correctly updated.
Moreover, our implementation supports user-friendly APIs similar to PyTorch's. Developers can easily build any privacy preserving deep neural network model without complicated cryptography knowledge.
\nosection{Results}
We conduct experiments on real-world fraud detection and distress prediction datasets. Results demonstrate that our proposed SPNN~has comparable performance with traditional neural networks trained on plaintext data. Moreover, experimental results also show that SPNN~significantly outperforms existing algorithmic methods and cryptographic approaches.
\nosection{Contributions} We summarize our main contributions as follows:
\begin{itemize} [leftmargin=*] \setlength{\itemsep}{-\itemsep}
\item We propose SPNN, a novel learning framework for scalable privacy preserving deep neural network, which not only has good scalability but also preserves data privacy.
\item We implement SPNN~on decentralized network settings, which not only has user-friendly APIs but also can be adapted to existing deep learning backends such as PyTorch.
\item Our proposal is verified on real-world datasets and the results show its superiority.
\end{itemize}
\section{The Proposed Method}\label{model}
In this section, we first describe the problem and then present an overview of SPNN.
Next, we present the sub-modules of SPNN~in detail, and finally describe how SPNN~is learned.
\subsection{Problem Description}\label{model-problem}
We start from a concrete example.
Suppose there are two financial companies, i.e., $\mathcal{A}$ and $\mathcal{B}$, who both need to detect fraud users.
As is shown in Figure \ref{framework},
$\mathcal{A}$ has some user features ($\textbf{X}_A$, shown in yellow dots) and labels ($\textbf{y}$, shown in yellow squares), and $\mathcal{B}$ has features ($\textbf{X}_B$, shown in green dots) for the same batch of users.
Although $\mathcal{A}$ can build a Deep Neural Network (DNN) for fraud detection using its own data, the model performance can be improved by incorporating features of $\mathcal{B}$.
However, these two companies cannot share data with each other, since leaking users' private data is against regulations.
This is a classic data isolation problem.
It is challenging for both parties to build scalable privacy preserving neural networks collaboratively without compromising their private data.
In this paper, we only consider the situation where two data holders have the same sample set, one of them ($\mathcal{A}$) has partial features and labels, and the other ($\mathcal{B}$) has the rest partial features.
Our proposal can be naturally extended to more than two parties.
\subsection{Proposal Overview}\label{model-overview}
We propose a novel scalable and privacy-preserving deep neural network learning framework (SPNN) for the above challenge.
As described in Section \ref{pre-nn}, DNN can be defined as a layer-wise representation function.
Motivated by the existing work \cite{gupta2018distributed,vepakomma2018split,osia2019hybrid,gu2019securing}, we propose to decouple the computation graph of DNN into two types, i.e., the computations related to private data are performed by data holders using cryptographic techniques, and the rest computations are delegated to a server with high computation ability.
Here, the private data are the input and output of the neural network, which corresponds to the private features and labels from data holders.
Specifically, we divide the model parameters ($\theta$) into three parts, (1) \textit{the computations that are related to private features on both data holders} ($\theta_\mathcal{A}$ and $\theta_\mathcal{B}$), (2) \textit{the rest heavy hidden layer related computations on server} ($\theta_\mathcal{S}$), and (3) \textit{the computations related to private labels on the data holder who has label} ($\theta_y$).
As shown in Figure \ref{framework}, the first part involves private data related computations and is therefore performed by data holders themselves using secret sharing or homomorphic encryption techniques, while the second part is delegated to the server, which has rich computation resources. We summarize the forward propagation in Algorithm \ref{algo}.
We will describe each part in details in the following subsections.
\textit{Threat Model.}
Our proposed model relies on two kinds of participants, i.e., data holders and server. We consider a static adversary who controls one of the participants or the server at a time. We allow an adversary who corrupts the server to be malicious but it does not collude with any data holders.
That is, the corrupted server will try to infer as much information as possible using all intermediate computation results it has, i.e., mainly the hidden layers from data holders. We claim this is a reasonable assumption as the server is usually a well-established company or government and does not want to jeopardize its reputation by colluding with others \cite{abadi2016vd}. Also, the non-colluding assumption is well-accepted and widely used
in the literature \cite{abadi2016vd,gordon2015multi}. We will present how to defend the corrupted server using Bayesian neural network learning.
\begin{algorithm}[t]
\caption{The forward propagation of SPNN}\label{algo}
\KwIn {Features of $\mathcal{A}$ ($\textbf{X}_A$), features of $\mathcal{B}$ ($\textbf{X}_B$), server ($\mathcal{S}$), and the number of iteration ($T$)}
\KwOut{Predictions ($\hat{\textbf{y}}$) on $\mathcal{A}$}
$\mathcal{A}$ initializes $\bm{\theta}_A$ and $\bm{\theta}_y$, $\mathcal{B}$ initializes $\bm{\theta}_B$, and the server initializes $\bm{\theta}_S$ \\
\For{$t=1$ to $T$}
{
\For{each mini-batch in training datasets}
{
\# private feature related computations by $\mathcal{A}$ and $\mathcal{B}$ (Section \ref{model-feature}) \\
$\mathcal{A}$ and $\mathcal{B}$ collaboratively learn the first hidden layer ($\textbf{h}_1$) using secret sharing (Algorithm \ref{model-feature-ss}) or homomorphic encryption (Algorithm \ref{model-feature-he}), i.e., $\textbf{h}_1 = f(\textbf{X}_A, \textbf{X}_B; \bm{\theta}_A, \bm{\theta}_B)$ \\
\# hidden layer related computations by Server (Section \ref{model-hidden}) \\
$\mathcal{S}$ calculates the final hidden layer by $\textbf{h}_L = f(\textbf{h}_1; \bm{\theta}_S)$ \\
\# private label related computations by $\mathcal{A}$ (Section \ref{model-label}) \\
$\mathcal{A}$ makes prediction by $\hat{\textbf{y}} = f(\textbf{h}_L; \bm{\theta}_y)$
}
}
\Return Predictions ($\hat{\textbf{y}}$) on $\mathcal{A}$
\end{algorithm}
\begin{figure}[t]
\centering
\includegraphics[width=8cm]{figures/cement}
\caption{The proposed SPNN, which is trained by using SGLD. The bottom blue part stands for private feature related computations which
are performed by data holders collaboratively and securely (Section \ref{model-feature}).
The middle pink part is hidden layer related computations that are conducted on by server (Section \ref{model-hidden}).
The top green part is private label related computations that are done by the data holder who has label (Section \ref{model-label}).}
\label{framework}
\end{figure}
\begin{algorithm}[t]
\caption{Data holders $\mathcal{A}$ and $\mathcal{B}$ securely calculate the first hidden layer using arithmetic sharing}\label{model-feature-ss}
\KwIn {features of $\mathcal{A}$ and $\mathcal{B}$ ($\textbf{X}_A$ and $\textbf{X}_B)$, current models of $\mathcal{A}$ and $\mathcal{B}$ ($\bm{\theta}_A$ and $\bm{\theta}_B)$, and Server ($\mathcal{S}$)}
\KwOut{The first hidden layer $\textbf{h}_1$ on $\mathcal{S}$}
$\mathcal{A}$ and $\mathcal{B}$ locally generate $\left\langle\textbf{X}_A\right\rangle_1$ and $\left\langle\textbf{X}_A\right\rangle_2$, and $\left\langle\textbf{X}_B\right\rangle_1$ and $\left\langle\textbf{X}_B\right\rangle_2$, respectively \label{algo-ss-1}\\
$\mathcal{A}$ and $\mathcal{B}$ locally generate $\left\langle\bm{\theta}_A\right\rangle_1$ and $\left\langle\bm{\theta}_A\right\rangle_2$, and $\left\langle\bm{\theta}_B\right\rangle_1$ and $\left\langle\bm{\theta}_B\right\rangle_2$, respectively\\
$\mathcal{A}$ distributes $\left\langle\textbf{X}_A\right\rangle_2$ and $\left\langle\bm{\theta}_A\right\rangle_2$ to $\mathcal{B}$ \\
$\mathcal{B}$ distributes $\left\langle\textbf{X}_B\right\rangle_1$ and $\left\langle\bm{\theta}_B\right\rangle_1$ to $\mathcal{A}$ \label{algo-ss-4}\\
$\mathcal{A}$ locally calculates $\left\langle\textbf{X}\right\rangle_1 = \left\langle\textbf{X}_A\right\rangle_1 \oplus \left\langle\textbf{X}_B\right\rangle_1$, $\left\langle\bm{\theta}\right\rangle_1 = \left\langle\bm{\theta}_A\right\rangle_1 \oplus \left\langle\bm{\theta}_B\right\rangle_1$, and $\left\langle\textbf{X}\right\rangle_1 \cdot \left\langle\bm{\theta}\right\rangle_1$ \label{algo-ss-ct1}\\
$\mathcal{B}$ locally calculates $\left\langle\textbf{X}\right\rangle_2 = \left\langle\textbf{X}_A\right\rangle_2 \oplus \left\langle\textbf{X}_B\right\rangle_2$, $\left\langle\bm{\theta}\right\rangle_2 = \left\langle\bm{\theta}_A\right\rangle_2 \oplus \left\langle\bm{\theta}_B\right\rangle_2$, and $\left\langle\textbf{X}\right\rangle_2 \cdot \left\langle\bm{\theta}\right\rangle_2$ \label{algo-ss-ct2}\\
$\mathcal{A}$ and $\mathcal{B}$ calculate $\left\langle\textbf{X}\right\rangle_1 \cdot \left\langle\bm{\theta}\right\rangle_2$ and $\left\langle\textbf{X}\right\rangle_2 \cdot \left\langle\bm{\theta}\right\rangle_1$ using arithmetic sharing matrix multiplication, $\mathcal{A}$ get $\left\langle\textbf{X}_1 \cdot \bm{\theta}_2\right\rangle_A $ and $\left\langle\textbf{X}_2 \cdot \bm{\theta}_1\right\rangle_A $, $\mathcal{B}$ gets $\left\langle\textbf{X}_1 \cdot \bm{\theta}_2\right\rangle_B $ and $\left\langle\textbf{X}_2 \cdot \bm{\theta}_1\right\rangle_B $ \label{algo-ss-smm}\\
$\mathcal{A}$ locally calculates $\left\langle\textbf{X} \cdot \bm{\theta}\right\rangle_A = \left\langle\textbf{X}\right\rangle_1 \cdot \left\langle\bm{\theta}\right\rangle_1 + \left\langle\textbf{X}_1 \cdot \bm{\theta}_2\right\rangle_A + \left\langle\textbf{X}_2 \cdot \bm{\theta}_1\right\rangle_A$ \label{algo-ss-rc1}\\
$\mathcal{B}$ locally calculates $\left\langle\textbf{X} \cdot \bm{\theta}\right\rangle_B = \left\langle\textbf{X}\right\rangle_2 \cdot \left\langle\bm{\theta}\right\rangle_2 + \left\langle\textbf{X}_1 \cdot \bm{\theta}_2\right\rangle_B + \left\langle\textbf{X}_2 \cdot \bm{\theta}_1\right\rangle_B$ \label{algo-ss-rc2}\\
$\mathcal{A}$ and $\mathcal{B}$ send $\left\langle\textbf{X} \cdot \bm{\theta}\right\rangle_A$ and $\left\langle\textbf{X} \cdot \bm{\theta}\right\rangle_B$ to $\mathcal{S}$ \\
$\mathcal{S}$ calculates $\textbf{h}_1=\left\langle\textbf{X} \cdot \bm{\theta}\right\rangle_A + \left\langle\textbf{X} \cdot \bm{\theta}\right\rangle_B$
\end{algorithm}
\subsection{Private Feature Related Computations}\label{model-feature}
Private feature related computations refer to the data holders collaboratively calculating the first hidden layer of a DNN using their private features.
Here, data holders want to (1) calculate a common function, i.e., $\textbf{h}_1 = f(\textbf{X}_A, \textbf{X}_B; \bm{\theta}_A, \bm{\theta}_B)$, collaboratively and (2) keep their features, i.e., $\textbf{X}_A$ and $\textbf{X}_B$, private.
Mathematically, $\mathcal{A}$ and $\mathcal{B}$ have partial features ($\textbf{X}_A$ and $\textbf{X}_B$) and partial model parameters ($\bm{\theta}_A$ and $\bm{\theta}_B$), respectively, and they want to compute the output of the first hidden layer collaboratively.
That is, $\mathcal{A}$ and $\mathcal{B}$ want to compute
$\textbf{h}_1 = \textbf{X}_A \cdot \bm{\theta}_A + \textbf{X}_B \cdot \bm{\theta}_B = (\textbf{X}_A \oplus \textbf{X}_B) \cdot (\bm{\theta}_A \oplus \bm{\theta}_B)$,
where $\oplus$ denotes concatenation operation.
Note that we omit the activation function here, since activation can be done by server after it receives $\textbf{h}_1$ from data holders.
The above secure computation problem can be done by cryptographical techniques.
As we described in Section \ref{pre}, arithmetic sharing and additive HE are popularly used due to their high efficiency. We will present two solutions based on arithmetic sharing and additive HE, respectively.
\subsubsection{Arithmetic sharing based solution}
We first present how to solve the above secure computation problem using arithmetic sharing.
The main technique is secret sharing based matrix addition and multiplication on fixed-point numbers, please refer to Section~\ref{pre-ss} for more details.
We summarize the secure protocol in Algorithm \ref{model-feature-ss}.
As Algorithm~\ref{model-feature-ss} shows, data holders first secretly \textit{share} their features and models (Lines \ref{algo-ss-1}-\ref{algo-ss-4}), and then concatenate the feature and model shares (Lines \ref{algo-ss-ct1}-\ref{algo-ss-ct2}).
After that, data holders calculate $\textbf{h}_1= \textbf{X} \cdot \bm{\theta}$ by using the distributive property, i.e., $\textbf{X} \cdot \bm{\theta} = (\left\langle\textbf{X}\right\rangle_1 + \left\langle\textbf{X}\right\rangle_2) \cdot (\left\langle\bm{\theta}\right\rangle_1 + \left\langle\bm{\theta}\right\rangle_2)$ (Line \ref{algo-ss-smm}).
Next, each data holder sums up its intermediate shares as shares of the hidden layer (Lines \ref{algo-ss-rc1}-\ref{algo-ss-rc2}).
To this end, $\mathcal{A}$ and $\mathcal{B}$ each obtains a partial share of the hidden layer, i.e., $\langle \textbf{h}_1 \rangle _A = \left\langle\textbf{X} \cdot \bm{\theta}\right\rangle_A$ and $\langle \textbf{h}_1 \rangle _B = \left\langle\textbf{X} \cdot \bm{\theta}\right\rangle_B$.
Finally, the server \textit{reconstructs} the first hidden layer by
$\textbf{h}_1 = \langle \textbf{h}_1 \rangle _A + \langle \textbf{h}_1 \rangle _B$.
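To make the data flow concrete, the following is a minimal single-machine simulation of Algorithm \ref{model-feature-ss} in NumPy. It is only a correctness sketch: the fixed-point encoding is omitted, and the cross terms of Line \ref{algo-ss-smm}, which would be computed with Beaver triples in the real protocol, are computed in the clear.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
U64_MAX = np.iinfo(np.uint64).max          # shares live in Z_{2^64}

def share(v):
    """Additively share an integer matrix modulo 2^64 (uint64 wraps)."""
    r = rng.integers(0, U64_MAX, size=v.shape, dtype=np.uint64, endpoint=True)
    return v - r, r

# toy integer-encoded inputs; A holds XA, tA and B holds XB, tB
XA = rng.integers(0, 10, size=(4, 3)).astype(np.uint64)
XB = rng.integers(0, 10, size=(4, 2)).astype(np.uint64)
tA = rng.integers(0, 10, size=(3, 5)).astype(np.uint64)
tB = rng.integers(0, 10, size=(2, 5)).astype(np.uint64)

X1, X2 = share(np.concatenate([XA, XB], axis=1))   # <X>_1, <X>_2
t1, t2 = share(np.concatenate([tA, tB], axis=0))   # <theta>_1, <theta>_2

# cross terms in the clear for brevity (Beaver triples in the real protocol)
h_A = X1 @ t1 + X1 @ t2        # share of X . theta held by A (simulated)
h_B = X2 @ t2 + X2 @ t1        # share of X . theta held by B (simulated)

h1 = h_A + h_B                 # server reconstructs the first hidden layer
assert np.array_equal(
    h1, np.concatenate([XA, XB], 1) @ np.concatenate([tA, tB], 0))
\end{verbatim}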
\subsubsection{Additive HE based solution}
We then present how to solve the secure computation problem using additive HE. We summarize the protocol in Algorithm \ref{model-feature-he}, where we first rely on the server to generate key pair and decryption (Line \ref{algo-he-keygen}), then let data holders calculate the encrypted hidden layer (Lines \ref{algo-he-a}-\ref{algo-he-b}), and finally let the server decrypt to get the plaintext hidden layer (Line \ref{algo-he-s}).
Arithmetic sharing and additive HE have their own advantages.
Arithmetic sharing does not need time-consuming encryption and decryption operations, however, it has higher communication complexity.
In contrast, although additive HE has lower communication complexity, it relies on time-consuming encryption and decryption operations. We will empirically study their performance under different network settings in Section \ref{sec-exp-speed}.
\begin{algorithm}[t]
\caption{Data holders $\mathcal{A}$ and $\mathcal{B}$ securely calculate the first hidden layer using additive HE}\label{model-feature-he}
\KwIn {features of $\mathcal{A}$ and $\mathcal{B}$ ($\textbf{X}_A$ and $\textbf{X}_B)$, current models of $\mathcal{A}$ and $\mathcal{B}$ ($\bm{\theta}_A$ and $\bm{\theta}_B)$, and Server ($\mathcal{S}$)}
\KwOut{The first hidden layer $\textbf{h}_1$ on $\mathcal{S}$}
$\mathcal{S}$ generates key pair $(pk, sk)$ and distributes public key $pk$ to $\mathcal{A}$ and $\mathcal{B}$ \label{algo-he-keygen} \\
$\mathcal{A}$ calculates $\textbf{X}_A \cdot \bm{\theta}_A$, encrypts it with $pk$, and sends $\llbracket \textbf{X}_A \cdot \bm{\theta}_A \rrbracket$ to $\mathcal{B}$ \label{algo-he-a} \\
$\mathcal{B}$ calculates $\textbf{X}_B \cdot \bm{\theta}_B$, encrypts it with $pk$, calculates $\llbracket \textbf{X}_A \cdot \bm{\theta}_A + \textbf{X}_B \cdot \bm{\theta}_B \rrbracket = \llbracket \textbf{X}_A \cdot \bm{\theta}_A \rrbracket + \llbracket \textbf{X}_B \cdot \bm{\theta}_B \rrbracket$, and sends it to $\mathcal{S}$ \label{algo-he-b} \\
$\mathcal{S}$ decrypts $\llbracket \textbf{X}_A \cdot \bm{\theta}_A + \textbf{X}_B \cdot \bm{\theta}_B \rrbracket$ using $sk$ and gets $\textbf{h}_1 = \textbf{X}_A \cdot \bm{\theta}_A + \textbf{X}_B \cdot \bm{\theta}_B$ \label{algo-he-s} \\
\Return $\textbf{h}_1$ on $\mathcal{S}$
\end{algorithm}
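For illustration, a minimal single-machine sketch of Algorithm \ref{model-feature-he} can be written with the python-paillier library (\texttt{phe}); the local matrix products are replaced by toy vectors here:
\begin{verbatim}
import numpy as np
from phe import paillier

pk, sk = paillier.generate_paillier_keypair(n_length=1024)   # Line 1, on S

xa_ta = np.array([0.7, -1.2])       # A's local product X_A . theta_A
xb_tb = np.array([0.4,  0.9])       # B's local product X_B . theta_B

enc_a = [pk.encrypt(float(v)) for v in xa_ta]                # Line 2, on A
enc_sum = [c + pk.encrypt(float(v))                          # Line 3, on B
           for c, v in zip(enc_a, xb_tb)]
h1 = np.array([sk.decrypt(c) for c in enc_sum])              # Line 4, on S
assert np.allclose(h1, xa_ta + xb_tb)
\end{verbatim}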
\subsection{Hidden Layer Related Computations}\label{model-hidden}
After $\mathcal{A}$ and $\mathcal{B}$ obtain the shares of the first hidden layer, they send them to the server for hidden layer related computations, i.e., $\textbf{h}_{L} = f (\textbf{h}_1, \bm{\theta}_{S})$.
This is the same as the traditional neural networks.
Given $l$-th hidden layer $\textbf{h}_l$, where $1 \le l \le L-1$ and $L$ be the number of hidden layers, the $(l+1)$-th hidden layer can be calculated by
\begin{equation}\label{hdl}
\textbf{h}_{l+1} = f_l (\textbf{h}_l, \bm{\theta}_{l}),
\end{equation}
where $\bm{\theta}_{l}$ is the parameters in $l$-th layer, and $f_l$ is the active function of the $l$-th layer.
These are the most time-consuming computations, because many nonlinear operations, e.g., max pooling, are not cryptographically
friendly. We leave these heavy computations to a server with strong computation power.
In this way, our model can scale to large datasets.
Moreover, one can easily implement any kinds of deep neural network models using the existing deep learning platforms such as TensorFlow (https://tensorflow.org/)
and PyTorch (https://pytorch.org/).
As a comparison, for the existing privacy preserving neural network approaches such as SecureML \cite{mohassel2017secureml} and ABY \cite{demmler2015aby}, one needs to design specific protocols for different deep neural network models, which significantly increases the development cost for deep learning practitioners.
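For instance, the server-side computation is just an ordinary PyTorch module applied to the reconstructed first hidden layer; a sketch with hypothetical layer dimensions:
\begin{verbatim}
import torch
import torch.nn as nn

# hypothetical stack of server-side hidden layers mapping h_1 to h_L
server_net = nn.Sequential(
    nn.Linear(8, 8), nn.Sigmoid(),
    nn.Linear(8, 8), nn.Sigmoid(),
)

h1 = torch.randn(32, 8)    # reconstructed first hidden layer, batch of 32
hL = server_net(h1)        # final hidden layer, sent to the label holder
\end{verbatim}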
\subsection{Private Label Related Computations}\label{model-label}
After the server finishes the hidden layer related computations, it sends the final hidden layer $\textbf{h}_L$ to the data holder who has the label, i.e., $\mathcal{A}$ in this case, for computing predictions. That is,
\begin{equation}\label{eq-predict}
\hat{\textbf{y}} = \delta (\textbf{h}_L, \bm{\theta}_y),
\end{equation}
where $\delta$ is designed based on different prediction tasks, e.g., $\delta$ is the softmax function for classification tasks.
\subsection{Learning Model Parameters}\label{model-learn}
To prevent the information leakage caused by the hidden features, i.e., $\{\mathbf{h}_l\}_{l\ge 1}$, we propose using SGLD instead of SGD to optimize the parameters of SPNN. The formal description of SGLD is given in Equation~\eqref{eq:sgld}, and we refer the reader to the prior work~\cite{sgld_original} for more details.
The gradient is computed using back propagation, which is similar to the forward propagation procedure in Algorithm \ref{algo}.
Both forward computation and backward computation need communication between $\mathcal{A}$, $\mathcal{B}$, and the server, in a decentralized manner.
During training, all the private data ($\textbf{X}_A$, $\textbf{X}_B$, and $\textbf{y}$) and private data related model parameters ($\bm{\theta}_A$, $\bm{\theta}_B$, and $\bm{\theta}_y$) are kept by data holders. Therefore, data privacy is preserved to a large extent.
It is worth noting that our proposal can be generalized to multiple parties and to situations where the data holders collaboratively calculate the first $i$ ($1 \le i \le L$) hidden layers instead of the first hidden layer only. Therefore, the existing method \cite{mohassel2017secureml} is a special case of ours, i.e., $\mathcal{A}$ and $\mathcal{B}$ collaboratively calculate the whole neural network using cryptographic techniques, without the aid of the server.
\section{Conclusion and Future Work}\label{conlu}
In this paper, we have proposed SPNN~---~a scalable privacy preserving deep neural network learning framework.
Our motivation is to design SPNN~from both algorithmic perspective and cryptographic perspective.
From algorithmic perspective, we split the computation graph of DNN models into two parts, i.e., the private data related computations that are performed by data holders, and the rest heavy computations which are delegated to a server with high computation ability.
From cryptographic perspective, we proposed two kinds of cryptographic techniques, i.e., secret sharing and homomorphic encryption, for the isolated data holders to conduct private data related computations privately and cooperatively.
We implemented SPNN~in a decentralized setting and presented its user-friendly APIs.
Our model has achieved promising results on real-world fraud detection dataset and financial distress dataset.
In the future, we would like to deploy our proposal in real-world applications.
\section{Preliminaries}\label{pre}
In this section, we first briefly describe the data partition setting,
and then present background knowledge on deep learning, secret sharing, and homomorphic encryption.
\subsection{Data Partition}\label{pre-setting}
There are usually two types of data partition settings in the literature, i.e., \textit{horizontal data partitioning} and \textit{vertical data partitioning}.
The former indicates that each participant has a subset of the samples with the same features, while the latter denotes that each party has the same samples but different features \cite{hall2011secure,yang2019federated}.
In practice, the latter is more common due to the fact that most users are active on multiple platforms for different purposes, e.g., on Facebook for social networking and on Amazon for shopping. Therefore, we focus on vertical data partitioning in this paper.
In practice, before building privacy preserving machine learning models under the vertical data partitioning setting, the first step is to align samples between participants, e.g., align users when each sample consists of user features and a label.
Taking a fraud user detection scenario as an example, assume two companies both have a batch of users with different user features, and they want to build a better fraud detection system collaboratively and securely. To train a fraud detection model such as a neural network, they need to match the overlapping users and align them as training samples.
This can be done efficiently using the existing \textit{private set intersection} technique \cite{de2010practical,pinkas2014faster}. In this paper, we assume participants have already aligned samples and are ready for building privacy preserving neural network.
\subsection{Deep Neural Network}\label{pre-nn}
Deep Neural Network (DNN) has been showing great power in various machine learning tasks, since it can learn complex functions by composing multiple non-linear modules that transform representations from low-level raw inputs to high-level abstractions \cite{gu2019securing}.
Mathematically, the forward procedure of a DNN can be defined as a representation function $f$ that maps an input $\textbf{X}$ to an output $\hat{y}$, i.e., $\hat{y}=f(\textbf{X}, \bm\theta)$, where $\bm\theta$ is model parameter.
Assume a DNN has $L$ layers, then $f$ is composed of $L$ sub-functions $f_{l|{l\in[1,L]}}$, which are connected in a chain. That is, $f(\textbf{X})=f_L(...f_2(f_1(\textbf{X}, \bm\theta_0), \bm\theta_1)..., \bm\theta_{L-1})$, as is shown in Figure \ref{compare-exam-nn}.
\subsubsection{Bayesian neural network learning}
The above model parameter can be learnt by using mini-batch gradient descent.
Let $\mathcal{D}=\{(\textbf{x}_i, y_i)\}_{i=1}^n$ be the training dataset, where $n$ is the sample size, $\textbf{x}_i$ is the feature of $i$-th sample, and $y_i$ is its corresponding label.
The loss function of DNN is built by minimizing the losses over all the training samples, that is, $\mathcal{L}=\sum_i^n l(y_i, \hat{y_i})$. Here $l(y_i, \hat{y_i})$ is defined based on different tasks, e.g., softmax for classification tasks.
After it, DNN can be learnt efficiently by minimizing the losses using mini-batch Stochastic Gradient Descent (SGD) and its variants.
Take mini-batch gradient descent for example, let $\textbf{B}$ be samples in each batch, $|\textbf{B}|$ be the batch size, $\textbf{X}_B$ and $\textbf{Y}_B$ be the features and labels in the current batch, then the model of DNN can be updated by:
\begin{equation}\label{batch-update}
\boldsymbol{\theta} \leftarrow \boldsymbol{\theta} - \frac{\alpha}{|\textbf{B}|} \cdot \frac{\partial \mathcal{L}}{\partial \boldsymbol{\theta}} ,
\end{equation}
where $\alpha$ is the learning rate.
The model gradient $\frac{\partial \mathcal{L}}{\partial \boldsymbol{\theta}}$ is usually calculated by back propagation \cite{goodfellow2016deep}.
As discussed in the introduction, the hidden features can expose some sensitive information. To reduce the information leakage, in this paper, we propose to use SGLD~\cite{sgld_original}, a Bayesian learning approach. Specifically, SGLD can be seen as a noisy version of the conventional SGD algorithm. To reduce the leakage, SGLD injects an isotropic
Gaussian noise vector into the gradients. Formally, this process can be represented as:
\begin{equation}
\boldsymbol{\theta} \leftarrow \boldsymbol{\theta} - (\frac{\alpha_t}{2}\frac{\partial \mathcal{L}}{\partial \boldsymbol{\theta}}+\eta_t), \eta_t\sim\mathcal{N}(0, \alpha_t \mathbf{I}),
\label{eq:sgld}
\end{equation}
where $\alpha_t$ denotes the learning rate at the $t$-th iteration and $\mathcal{N}(0,\alpha_t\mathbf{I})$ is the Gaussian distribution.
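A minimal PyTorch implementation of the update in Equation \eqref{eq:sgld} (a sketch; \texttt{sgld\_step} is called after \texttt{loss.backward()} in place of a conventional optimizer step):
\begin{verbatim}
import torch

def sgld_step(params, lr):
    """theta <- theta - (lr/2 * grad + eta), with eta ~ N(0, lr * I)."""
    with torch.no_grad():
        for p in params:
            if p.grad is None:
                continue
            noise = torch.randn_like(p) * lr ** 0.5   # std = sqrt(lr)
            p.add_(-(0.5 * lr * p.grad + noise))
\end{verbatim}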
\subsection{Arithmetic Secret Sharing}\label{pre-ss}
Assume there are two parties ($\mathcal{P}_0$ and $\mathcal{P}_1$), $\mathcal{P}_0$ has an $\ell$-bit secret $a$ and $\mathcal{P}_1$ has an $\ell$-bit secret $b$.
To secretly share ($\textbf{Shr}(\cdot)$) the secret $a$, party $\mathcal{P}_0$ generates an integer $r \in \mathds{Z}_{2^\ell}$ uniformly at random, sends $r$ to party $\mathcal{P}_1$ as one share $\left\langle a \right\rangle_1$, and keeps $\left\langle a \right\rangle_0 = a- r$ mod $2^\ell$ as the other share. $\mathcal{P}_1$ can share $b$ with $\mathcal{P}_0$ similarly, so that $\mathcal{P}_1$ keeps $\left\langle b \right\rangle_1$ and $\mathcal{P}_0$ receives $\left\langle b \right\rangle_0$.
We will describe how to perform addition and multiplication and how to support decimal numbers and vectors in the following subsections.
\subsubsection{Addition and multiplication}
Suppose $\mathcal{P}_0$ and $\mathcal{P}_1$ want to secretly calculate $a+b$ using Arithmetic sharing, $\mathcal{P}_0$ locally calculates $\left\langle c \right\rangle_0=\left\langle a \right\rangle_0 + \left\langle b \right\rangle_0$ mod $2^\ell$ and $\mathcal{P}_1$ locally calculates $\left\langle c \right\rangle_1=\left\langle a \right\rangle_1 + \left\langle b \right\rangle_1$ mod $2^\ell$.
To reconstruct ($\textbf{Rec}(\cdot, \cdot)$) a secret, one party sends its share to the other party, who reconstructs the plaintext by $c=\left\langle c \right\rangle_0 + \left\langle c \right\rangle_1$, which equals $a+b$.
To secretly calculate $a \cdot b$ using Arithmetic sharing, Beaver’s multiplication triples \cite{beaver1991efficient} are usually involved.
Specifically, to multiply two secretly shared values ($a$ and $b$), $\mathcal{P}_0$ and $\mathcal{P}_1$ first need to collaboratively generate a triple $\langle u \rangle$, $\langle v \rangle$, and $\langle w \rangle$, where $u, v$ are uniformly random values in $\mathds{Z}_{2^\ell}$ and $w=u \cdot v$ mod $2^\ell$.
Then, $\mathcal{P}_0$ locally computes $\langle e \rangle _0 = \langle a \rangle _0 - \langle u \rangle _0$ and $\langle f \rangle _0 = \langle b \rangle _0 - \langle v \rangle _0$, and $\mathcal{P}_1$ locally computes $\langle e \rangle _1 = \langle a \rangle _1 - \langle u \rangle _1$ and $\langle f \rangle _1 = \langle b \rangle _1 - \langle v \rangle _1$.
Next, they reconstruct $e$ and $f$ by $\textbf{Rec}(\langle e \rangle _0, \langle e \rangle _1)$ and $\textbf{Rec}(\langle f \rangle _0, \langle f \rangle _1)$, respectively.
Finally, $\mathcal{P}_0$ gets $\langle c \rangle _0 = f \cdot \langle a \rangle _0 + e \cdot \langle b \rangle _0 + \langle w \rangle _0$ and $\mathcal{P}_1$ gets $\langle c \rangle _1 = -e \cdot f + f \cdot \langle a \rangle _1 + e \cdot \langle b \rangle _1 + \langle w \rangle _1$, where $\langle c \rangle _0 + \langle c \rangle _1 = a \cdot b$.
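The whole multiplication protocol is easy to simulate on a single machine with plain Python integers; in the sketch below the triple comes from a trusted dealer, whereas in practice it is produced by an offline generation phase:
\begin{verbatim}
import random
MOD = 2 ** 64

def share(v):
    r = random.randrange(MOD)
    return (v - r) % MOD, r

# Beaver triple w = u * v (mod 2^64), dealt offline in this sketch
u, v = random.randrange(MOD), random.randrange(MOD)
u0, u1 = share(u); v0, v1 = share(v); w0, w1 = share(u * v % MOD)

a, b = 12345, 67890                    # the two secrets to be multiplied
a0, a1 = share(a); b0, b1 = share(b)

# the parties open e = a - u and f = b - v
e = (a0 - u0 + a1 - u1) % MOD
f = (b0 - v0 + b1 - v1) % MOD

c0 = (f * a0 + e * b0 + w0) % MOD            # computed locally by P0
c1 = (-e * f + f * a1 + e * b1 + w1) % MOD   # computed locally by P1
assert (c0 + c1) % MOD == (a * b) % MOD
\end{verbatim}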
\subsubsection{Supporting decimal numbers and vectors}
The above protocols only work in a finite field, since they require sampling uniformly in $\mathds{Z}_{2^\ell}$. However, in neural networks, both features and model parameters are usually decimal vectors.
To support this, we adopt the existing fixed-point representation to approximate decimal arithmetics efficiently \cite{mohassel2017secureml}.
Simply speaking, we use at most $l_F$ bits to represent the fractional part of decimal numbers.
Specifically, suppose $a$ and $b$ are two decimal numbers with at most $l_F$ bits in the fractional part. To do fixed-point multiplication, we first transform them to integers by letting $a'=2^{l_F}a$ and $b'=2^{l_F}b$, and then calculate $c=a'b'$. Finally, we truncate the last $l_F$ bits of $c$ so that it has at most $l_F$ bits representing the fractional part. It has been proven that this truncation technique also works when $c$ is secret shared \cite{mohassel2017secureml}.
We set $l_F=16$ in this paper.
After this, it is easy to vectorize the addition and multiplication protocols under Arithmetic sharing setting.
We will present how participants use arithmetic sharing to calculate the first hidden layer cooperatively in Section \ref{model-feature}.
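The encoding and truncation steps look as follows (a sketch with $l_F=16$; names are illustrative):
\begin{verbatim}
L_F = 16                       # number of fractional bits
SCALE = 1 << L_F

def encode(x):                 # decimal -> fixed-point integer
    return int(round(x * SCALE))

def fp_mul(a, b):              # multiply, then drop the extra l_F bits
    return (encode(a) * encode(b)) >> L_F

print(fp_mul(1.25, -2.5) / SCALE)   # -3.125, up to rounding error
\end{verbatim}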
\subsection{Additive Homomorphic Encryption}\label{pre-he}
Additive Homomorphic Encryption (HE) is a kind of encryption scheme which allows a third party (e.g., cloud, service provider) to perform addition on the encrypted data while preserving the features of addition operation
and format of the encrypted data \cite{acar2018survey}.
Suppose there is a server with key generation ability and a number of participants with private data. Under such setting, the use of additive HE mainly has the following steps \cite{acar2018survey}:
\begin{itemize}[leftmargin=*] \setlength{\itemsep}{-\itemsep}
\item \textbf{Key generation. } The server generates the public and secret key pair $(pk, sk)$ and distributes public key $pk$ to the participants.
\item \textbf{Encryption. } Given a plaintext $x$ on any participant, it is encrypted using $pk$ and a random $r$, i.e., $\llbracket x \rrbracket=\textbf{Enc}(pk; x, r)$, where $\llbracket x \rrbracket$ denotes the ciphertext and $r$ makes sure the ciphertexts are different in multiple encryptions even with the same plaintexts.
\item \textbf{Homomorphic addition. } Given two ciphertext ($\llbracket x \rrbracket$ and $\llbracket y \rrbracket$) on participants, addition can be done by $\llbracket x+y \rrbracket=\llbracket x \rrbracket+\llbracket y \rrbracket$.
\item \textbf{Decryption. } Given a ciphertext $\llbracket x \rrbracket$ on server, it can be decrypted by $x=\textbf{Dec}(sk; \llbracket x \rrbracket)$.
\end{itemize}
In this paper, we choose Paillier \cite{paillier1999public} to do additive HE, which is popularly used due to its high efficiency. We will present how participants use additive HE to calculate the first hidden layer cooperatively in Section \ref{model-feature}.
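These four steps map directly onto, e.g., the python-paillier library; a minimal sketch:
\begin{verbatim}
from phe import paillier

pk, sk = paillier.generate_paillier_keypair()    # key generation on server
cx = pk.encrypt(3.5)                             # randomized encryption
cy = pk.encrypt(1.5)
cz = cx + cy                                     # homomorphic addition
assert abs(sk.decrypt(cz) - 5.0) < 1e-9          # decryption on server
\end{verbatim}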
\section{Empirical Study}\label{exp}
In this section, we conduct comprehensive experiments to study the accuracy and efficiency of SPNN~by comparing it with the state-of-the-art algorithmic privacy preserving DNN methods and cryptographic privacy preserving DNN approaches.
\subsection{Experimental Settings}
\nosection{Datasets}
To test the effectiveness of our proposed model, we choose two real-world datasets, both of which are binary classification tasks. The first one is a fraud detection dataset \cite{dal2014learned}, where there are 28 features and 284,807 transactions. The other one is financial distress dataset \cite{findata}, where there are 83 features and 3,672 transactions. After we encode the categorical features, there are 556 features in total.
We assume these features are held by two parties, each holding an equal share of the features. Moreover, we randomly split the fraud detection dataset into two parts: 80\% as training dataset and the rest as test dataset.
We also randomly split the financial distress dataset into 70\% and 30\%, as suggested by the dataset owner.
We repeat experiments five times and report their average results.
\nosection{Metrics}
We adopt Area Under the receiver operating characteristic Curve (AUC) as the evaluation metric, since both datasets are binary classification tasks.
In practice, AUC is equivalent to the probability that the classifier will rank a randomly chosen positive instance higher than a randomly chosen negative instance, and therefore, the higher the better.
\nosection{Hyper-parameters}
For the fraud detection dataset, we use a multi-layer perceptron with 2 hidden layers whose dimensions are (8,8). We choose Sigmoid as the activation function \cite{jun1995sigmod} and use gradient descent as the optimizer. We set the learning rate to 0.001.
For the financial distress dataset, we use a multi-layer perceptron with 3 hidden layers with dimensions (400, 16, 8); we choose ReLU as the activation function \cite{HRrelu2000} in the last layer and the Sigmoid function in the other layers, and set the learning rate to 0.006.
\nosection{Comparison methods}
To study the effectiveness and efficiency of SPNN, we compare it with the following three kinds of approaches.
\begin{itemize} [leftmargin=*] \setlength{\itemsep}{-\itemsep}
\item Plaintext Neural Network (\textbf{NN}) builds DNN using the plaintext data and therefore cannot protect data privacy.
\item Split Neural Network (\textbf{SplitNN}) \cite{vepakomma2018split} builds privacy preserving DNN by splitting the computation graph of DNN from an algorithmic perspective, where each data holder trains a partial deep network model using its own features individually, and then the partial models are concatenated and sent to a server who has labels to train the rest of the model.
\item \textbf{SecureML} \cite{mohassel2017secureml} designs an end-to-end privacy preserving DNN model using secret sharing protocols from a cryptographic perspective. It also uses piece-wise functions or polynomials to approximate the non-linear activation functions in DNN.
\end{itemize}
Moreover, our proposed SPNN~has two implementations, i.e., SS and HE, and therefore we have \textbf{SPNN-SS} and \textbf{SPNN-HE}.
\subsection{Accuracy Comparison}
We first study the accuracy (AUC) of SPNN.
For accuracy comparison, we use SPNN~to denote both SPNN-SS and SPNN-HE, since they have the same AUC. Note that we use SGD as the optimizer during comparison.
\subsubsection{Comparison results of two data holders}
We first assume there are only two parties, and report the comparison AUC performances on both datasets in Table \ref{compare_AUC}.
From it, we can see that SPNN~achieves almost the same prediction performance as NN, and the differences come from the fixed-point representation of decimal numbers.
We also observe that SPNN~has better performance than SplitNN and SecureML. This is because our proposed SPNN~uses cryptographic techniques for data holders to learn the first hidden layer collaboratively. In contrast, for SplitNN, the data holders learn partial hidden layers individually, which causes information loss. For SecureML, it has to approximate the non-linear activation functions, which damages its accuracy.
\begin{figure}[t]
\centering
\includegraphics[width=6cm]{figures/cement-client-num}
\caption{Effect of the number of participants.}
\label{fig-effect-num}
\end{figure}
\subsubsection{Effects of the number of data holders}
We then study how the number of data holders affects each model's performance. Figure \ref{fig-effect-num} shows the comparison results with respect to different number of data holders, where we choose the fraud detection dataset. From it, we find that both SecureML and SPNN~achieve the same performance with the change of number of data holders. On the contrary, the performance of SplitNN tends to decline with the increase of number of participants. This is because, for both SecureML and SPNN, the data holders collaboratively learn all the layers or the first layer using cryptographic technique. As a contrast, for SplitNN, the data holders learn partial hidden layer individually, and the more data holders, the more information is lost.
\subsubsection{Training and test losses}
Besides, to study whether SPNN~suffers from over-fitting, we examine the average training loss and average test loss of SPNN~w.r.t. the number of iterations.
We report the average training loss and average test loss of SPNN~on both datasets in Figure \ref{loss1} and Figure \ref{loss2}, respectively.
From them, we can see that SPNN~converges steadily without over-fitting problem.
The above experiments demonstrate that SPNN~consistently achieves the best performance regardless of the number of participants, which shows its practicality.
\begin{table}
\centering
\caption{Comparison results on two datasets in terms of AUC.}
\label{compare_AUC}
\begin{tabular}{|c|c|c|c|c|}
\hline
AUC & NN & SplitNN & SecureML & SPNN \\
\hline
Fraud Detection & 0.8772 & 0.8624 & 0.8558 & 0.8637 \\
\hline
Financial Distress & 0.9379 & 0.9032 & 0.9092 & 0.9314 \\
\hline
\end{tabular}
\end{table}
\begin{figure}[t]
\centering
\subfigure [\emph{Training loss}]{ \includegraphics[width=4.2cm]{figures/mpc_loss_train1}}~~~
\subfigure[\emph{Test loss}] { \includegraphics[width=4.2cm]{figures/mpc_loss_test1}}
\caption{Average loss of SPNN~on fraud detection dataset.}
\label{loss1}
\end{figure}
\begin{figure}[t]
\centering
\subfigure [\emph{Training loss}]{ \includegraphics[width=4.2cm]{figures/mpc_loss_train2}}~~~
\subfigure[\emph{Test loss}] { \includegraphics[width=4.2cm]{figures/mpc_loss_test2}}
\caption{Average loss of SPNN~on financial distress dataset.}
\label{loss2}
\end{figure}
\subsection{Leakage Reduction of Hidden Features}
In this part, we empirically demonstrate the effectiveness of replacing SGD with SGLD to reduce the information leakage of hidden features (layers). To do this, we first introduce the property attack used in the prior work~\cite{ganju2018property}. This attack aims to infer whether a hidden feature has a specific property or not. For our specific task of fraud detection, we select `amount' as the target
property. That is, we try to infer the amount of each transaction given the hidden features.
For simplicity, we binarize the value of `amount' based on its median, i.e., values bigger than the median are taken as 1 and the others as 0. Therefore, the attack becomes a binary classification task.
\nosection{Attack Model}
To quantify the information leakage of SPNN, we borrow the \textit{shadow training} attack technique from \cite{shokri2017membership}. First, we create a ``shadow model'' that
imitates the behavior of SPNN, but for which we know the training datasets and thus the ground truth about `amount' in these datasets. We then train the attack model
on the labeled inputs and outputs of the shadow models. In our experiments, we assume the attacker somehow gets the `amount' (label) from the original dataset and the corresponding hidden features, with which the attacker tries to train the attack model.
For this task, we use the fraud detection dataset, from where we randomly split 50\% as the shadow dataset, 25\% as the training dataset and 25\% as the test dataset.
Here, we build a simple logistic regression model for property attack task. After this, to compare the effects of leakage reduction of different training methods, we perform the property attack on SPNN~trained by using SGD and SGLD and use the AUC to evaluate the attack performance.
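The attack itself is a plain supervised learner on (hidden feature, property) pairs; a sketch with scikit-learn, where random placeholder arrays stand in for the real shadow data:
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# placeholders: hidden features seen by the server, and the binarized
# `amount' property (1 if above the median) the attacker tries to recover
H_shadow = rng.normal(size=(1000, 8)); y_shadow = rng.integers(0, 2, 1000)
H_test = rng.normal(size=(500, 8));    y_test = rng.integers(0, 2, 500)

attack = LogisticRegression(max_iter=1000).fit(H_shadow, y_shadow)
auc = roc_auc_score(y_test, attack.predict_proba(H_test)[:, 1])
print(f"attack AUC: {auc:.4f}")
\end{verbatim}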
We report the comparison results in Table~\ref{tab:attack_results} using the fraud detection dataset.
As Table~\ref{tab:attack_results} shows, SGLD significantly reduces the information leakage compared with conventional SGD in terms of attack AUC, i.e., 0.5951 vs. 0.8223. More surprisingly, SGLD also boosts the model performance in terms of AUC. Compared with SGD, SGLD boosts the task AUC from 0.9118 to 0.9313. We infer that the performance boost may be caused by the regularization effect introduced by SGLD (i.e., SGLD can improve the generalization ability of the model).
\begin{table}[t]
\centering
\caption{Evaluation of information leakage on fraud detection dataset.}
\begin{tabular}{|c|c|c|}
\hline
Optimizer & Task AUC & Attack AUC \\
\hline
SGD & 0.9118 & 0.8223 \\
\hline
SGLD & 0.9313 & 0.5951 \\
\hline
\end{tabular}
\label{tab:attack_results}
\end{table}
\subsection{Scalability Comparison}\label{sec-exp-speed}
We now study the scalability (efficiency) of our proposed SPNN, including its training time comparison with NN, SplitNN, and SecureML, the running time comparison of SPNN-SS and SPNN-HE, and the running time of SPNN~with different training data sizes, where the running time refers to the running time per epoch.
\subsubsection{Comparison of training time}
First, we compare the training time of SPNN-SS with NN, SplitNN, and SecureML on both datasets.
The results are summarized in Table \ref{compare_time}, where we set batch size to 5,000 and the network bandwidth to 100Mbps.
From it, we find that NN and SplitNN are the most efficient ones since they do not involve any time-consuming cryptographic techniques. SPNN~is slower than NN and SplitNN since it adopts secret sharing technique for data holders to collaboratively calculate the first hidden layer. SecureML is much slower than SPNN~and is the slowest one since it uses secure multi-party computation techniques to calculate all the neural networks, and the speedup of SPNN~against SecureML will be more significant when the network structure is deeper.
The results demonstrate the superior efficiency of SPNN.
\begin{table}[t]
\centering
\caption{Comparison of training time per epoch (in seconds) on both datasets.}
\label{compare_time}
\begin{tabular}{|c|c|c|c|c|}
\hline
Training time & NN & SplitNN & SecureML & SPNN-SS \\
\hline
Fraud detection & 0.2152 & 0.7427 & 960.30 & 37.22 \\
\hline
Financial distress & 0.0507 & 0.4541 & 751.29 & 21.84 \\
\hline
\end{tabular}
\end{table}
\begin{figure}[t]
\centering
\subfigure [\emph{Fraud detection}]{ \includegraphics[width=4.2cm]{figures/cement-he-ss-1}}~~~
\subfigure[\emph{Financial distress}] { \includegraphics[width=4.2cm]{figures/cement-he-ss-2}}
\caption{Efficiency comparison of SPNN-SS and SPNN-HE.}
\label{fig-ss-he-compare}
\end{figure}
\subsubsection{Comparison of SPNN-SS and SPNN-HE}
We then compare the efficiency of SPNN-SS and SPNN-HE, which implement SPNN~using SS and HE, respectively. From Figure \ref{fig-ss-he-compare}, we find that the efficiency of SPNN-SS is significantly affected by the network bandwidth, while SPNN-HE tends to be stable with respect to the change of bandwidth. When the network status is good (i.e., high bandwidth), SPNN-SS is much more efficient than SPNN-HE. However, SPNN-SS becomes less efficient than SPNN-HE when the network status is poor (i.e., low bandwidth), e.g., bandwidth=100Kbps on the financial distress dataset. The results indicate that our proposed two implementations are efficient and suitable for different network conditions.
\subsubsection{Running time with different data size}
Finally, we study the running time of SPNN-SS and SPNN-HE with different data sizes, where we fix the network bandwidth to 100Mbps.
We do this by varying the proportion of training data size using the fraud detection dataset, and report the running time of SPNN-SS and SPNN-HE in Figure \ref{mpc_time_data}.
From it, we find that the running time of SPNN-SS and SPNN-HE scales linearly with the training data size.
The results illustrate that our proposed SPNN~ can scale to large datasets.
\begin{figure}[t]
\centering
\subfigure [\emph{SPNN-SS}]{ \includegraphics[width=4.2cm]{figures/cement-ss-data}}~~~
\subfigure[\emph{SPNN-HE}] { \includegraphics[width=4.2cm]{figures/cement-he-data}}
\caption{Effect of training data size on SPNN.}
\label{mpc_time_data}
\end{figure}
\section{Implementation}\label{imple}
In this section, we present the implementation of SPNN~and showcase its user-friendly APIs by an example.
\begin{figure}[t]
\centering
\includegraphics[width=5cm]{figures/imple}
\caption{Implementation framework of SPNN.}
\label{imple-framework}
\end{figure}
\subsection{Overview}\label{imple-overview}
We implement SPNN~in a decentralized network, where there are three kinds of computation nodes, i.e., a coordinator, a server, and a group of clients, as is shown in Figure \ref{imple-framework}.
\textit{The coordinator} controls the start and termination of SPNN.
When SPNN~starts, the coordinator splits the computation graph into three parts, sends each part to the corresponding clients and server, and notifies the clients and server to begin training/prediction.
Meanwhile, the coordinator monitors the status and terminates the program if it reaches a certain pre-defined condition, e.g., the number of iterations.
\textit{The clients} are data holders who are in charge of private data related computations, and \textit{the server} is responsible for hidden layer related computations which can be adapted to existing deep learning backends such as PyTorch. We will describe the details of data holders and server below.
\subsection{Implementation Details}\label{imple-detail}
The detailed implementation mainly includes the computations on data holders, the computations on server, and communications.
\subsubsection{Computations on data holders}
We implement the forward and backward computations by clients (data holders) using Python and PyTorch.
First, for the private feature related computations performed collaboratively by the clients: when the clients receive orders from the coordinator, they initialize their model parameters, load their own private features, and then perform the calculations following Algorithm \ref{model-feature-ss} and Algorithm \ref{model-feature-he}.
We implement the private feature related computations in Python.
Second, for the private label related computations by the client who holds the labels: when this client receives the last hidden layer from the server, it initializes its model parameter and makes predictions based on Eq. \eqref{eq-predict}.
The private label related computations are handled by PyTorch automatically.
\subsubsection{Computations on server}
For the heavy hidden layer related computations on the server, we also use PyTorch as the backend for the forward and backward computations.
Specifically, after the server receives the first hidden layer from the clients, it feeds this layer into a PyTorch network to perform the hidden layer related computations and obtain the last hidden layer on the server, based on which the client who holds the labels makes predictions.
Both the forward and backward computations are performed automatically by PyTorch.
Note that the private label related computations on the client and the heavy hidden layer related computations on the server are coupled using the ``model parallel'' mechanism in PyTorch \cite{paszke2019pytorch}.
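As a concrete illustration of this mechanism, the following minimal sketch shows the generic PyTorch model-parallel pattern, independent of SPNN; the device names and toy dimensions are ours, and we use \texttt{cpu} twice so that the sketch runs anywhere, whereas in practice the two parts would live on distinct devices.
\begin{lstlisting}[basicstyle=\ttfamily\footnotesize, language=Python]
import torch
import torch.nn as nn

# Minimal PyTorch "model parallel" pattern: sub-modules live on different
# devices, and activations are moved across the boundary explicitly.
dev0, dev1 = torch.device('cpu'), torch.device('cpu')

class TwoDeviceNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.part0 = nn.Sequential(nn.Linear(64, 256), nn.ReLU()).to(dev0)
        self.part1 = nn.Linear(256, 5).to(dev1)

    def forward(self, x):
        h = self.part0(x.to(dev0))
        return self.part1(h.to(dev1))  # move activations between devices

net = TwoDeviceNet()
out = net(torch.randn(8, 64))
out.sum().backward()  # autograd routes gradients back across devices
\end{lstlisting}
Autograd transparently propagates gradients back across the device boundary, which is what allows the label-side output layer and the server-side hidden layers to be trained jointly.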
\subsubsection{Communications}
Communications between the coordinator, server, and clients make sure the model parameters are correctly updated.
We adopt Google's gRPC\footnote{https://grpc.io/} as the communication protocol.
Before training/prediction, we configure the detailed parameters for the clients and the coordinator, such as the IP addresses, gateways, and dataset locations.
At the beginning of training/prediction, the coordinator shakes hands with the clients and the server to establish connections.
After that, they exchange data to finish model training/prediction as described above.
\subsection{User-friendly APIs}\label{imple-api}
Our implementation provides user-friendly APIs, similar to those of PyTorch. Developers can easily build privacy preserving deep neural network models without complicated cryptography knowledge.
Figure \ref{examcode} shows example code for SPNN, a neural network with network structure $(64, 256, 512, 256, 64, 5)$. Here, we assume two clients (A and B), each holding 32-dimensional input features, with A also holding the 5-class labels.
From Figure \ref{examcode}, we can see that using SPNN~is much the same as using PyTorch; the main difference is that the forward and backward computations of the first hidden layer are performed by the clients using cryptographic techniques (the calls to \texttt{execute\_algorithm\_2} and \texttt{update\_client\_models}).
\begin{figure}[t]
\centering
\begin{lstlisting}[basicstyle=\ttfamily\footnotesize, language=cement]
import torch
import torch.nn as nn
import torch.optim as optim

class ToyModel(nn.Module):
    def __init__(self):
        super(ToyModel, self).__init__()
        # server makes hidden layer related computations
        self.hidden1 = nn.Sequential(nn.Linear(256, 512), nn.ReLU()).to('server')
        self.hidden2 = nn.Sequential(nn.Linear(512, 256), nn.Sigmoid()).to('server')
        self.hidden3 = nn.Sequential(nn.Linear(256, 64), nn.Sigmoid()).to('server')
        # A makes private label related computations
        self.output = nn.Linear(64, 5).to('client_a')

    def forward(self, first_hidden):
        last_hidden = self.hidden3(self.hidden2(self.hidden1(first_hidden)))
        return self.output(last_hidden.to('client_a'))

# A and B initialize their model parameters
theta_a = client_a.init(32, 32)
theta_b = client_b.init(32, 32)
# clients load data
x_a = client_a.load_features('xa_location')
y = client_a.load_labels('y_location')
x_b = client_b.load_features('xb_location')

model = ToyModel()
loss = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.001)

for iter in range(max_iter):
    # clients make private feature related forward computations using Python
    first_hidden = execute_algorithm_2(x_a, theta_a, x_b, theta_b)
    # server makes forward-backward computations in PyTorch
    optimizer.zero_grad()
    outputs = model(first_hidden)
    loss(outputs, y).backward()
    optimizer.step()  # update server- and label-side parameters
    # clients make private feature related backward computations using Python
    first_hidden_gradient = first_hidden.grad.data
    update_client_models(first_hidden_gradient, x_a, theta_a, x_b, theta_b)
\end{lstlisting}
\caption{Example code of SPNN~for a neural network.}
\label{examcode}
\end{figure}
\section{Related Work}\label{background}
In this section, we briefly review two popular types of privacy preserving DNN models.
\subsection{Algorithmic methods}
These methods build privacy preserving DNNs by splitting the computation graph of the DNN from an algorithmic perspective \cite{gupta2018distributed,vepakomma2018split,osia2019hybrid,gu2019securing,hu2019fdml}.
A common approach is to let each data holder train a partial neural network individually and then send the hidden layers to another data holder (or a server) for the rest of the model training \cite{gupta2018distributed}.
For example, \cite{gu2019securing} proposed to enclose the sensitive computations in a trusted execution environment, e.g., Intel Software Guard Extensions, to mitigate input information disclosure, and to delegate the non-sensitive workloads to hardware-assisted deep learning acceleration.
\cite{vepakomma2018split} proposed SplitNN, where each data holder trains a partial deep network model on its own features individually; the partial models are then concatenated and sent to a server, which holds the labels and trains the rest of the model, as shown in Figure \ref{compare-exam-split}.
However, the above-mentioned methods may suffer from accuracy and privacy problems, as we analyzed in Section \ref{sec-intro}.
First, since the data holders train their partial neural networks individually, the correlation between the data held by different parties is not captured \cite{vepakomma2018split}, and the accuracy is therefore limited.
Second, during model training, the labels need to be shared with other participants such as a data holder or the server \cite{gupta2018distributed,vepakomma2018split}; therefore, data privacy is not fully protected.
In this paper, our proposed SPNN~differs from existing algorithmic methods in mainly two aspects.
First, we use cryptographic techniques for data holders to calculate the hidden layers collaboratively rather than compute them based on their plaintext data individually.
By doing so, SPNN~can not only prevent a server from obtaining the individual information from each participant, but also capture feature interactions from the very beginning of the neural network, and therefore achieve better performance, as we will show in experiments.
Second, SPNN~assumes both private feature and label data are held by participants themselves. Therefore, SPNN~can protect both feature and label privacy.
\subsection{Cryptographic Methods}
Cryptographic methods are of two types, i.e., (1) customized methods for privacy preserving neural network, and (2) general frameworks that can be used for privacy preserving neural network.
\subsubsection{Customized methods}
This type of methods designs specific protocols for privacy preserving neural networks using cryptographic techniques such as secure Multi-Party Computation (MPC) techniques and Homomorphic Encryption (HE).
For example, several existing works build privacy preserving neural networks using HE \cite{yuan2013privacy,gilad2016cryptonets,hesamifard2017cryptodl,xu2019cryptonn}. To use these methods, the participants first encrypt their private data and then outsource the encrypted data to a server, which trains the neural network using HE techniques.
However, these methods have an inherent drawback: they suffer from the data abuse problem, since the server can perform arbitrary computations on the encrypted data in hand.
There are also works that build privacy preserving neural networks using MPC techniques such as secret sharing and garbled circuits \cite{juvekar2018gazelle,rouhani2018deepsecure,agrawal2019quotient,wagh2019securenn}.
\subsubsection{General frameworks}
Besides the above customized privacy preserving neural network methods, there are also some general multi-party computation frameworks that can be used to build privacy preserving neural network models, e.g., SPDZ \cite{damgaard2012multiparty}, ABY \cite{demmler2015aby}, SecureML \cite{mohassel2017secureml}, ABY$^3$ \cite{mohassel2018aby}, and PrivPy \cite{li2018privpy}.
Taking SecureML, a general privacy preserving machine learning framework, as an example: it provides three types of secret sharing protocols, i.e., Arithmetic sharing, Boolean sharing, and Yao sharing, and it also allows efficient conversions between plaintext and the three types of sharing.
However, all the above methods suffer from the scalability problem, because deep neural networks contain many nonlinear activation functions that are not cryptographically friendly.
For example, existing works use polynomials \cite{chen2018logistic} or piece-wise functions \cite{mohassel2017secureml,chen2019secure} to approximate continuous activation functions such as sigmoid and tanh, while piece-wise activation functions such as ReLU \cite{glorot2011deep} rely on time-consuming secure comparison protocols.
The polynomial and piece-wise approximations not only reduce accuracy but, more importantly, significantly affect efficiency.
Therefore, these models are difficult to scale to deep networks and large datasets due to the high communication and computation complexity of the cryptographic techniques.
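To give a feeling for the accuracy cost of such approximations, the following plaintext Python sketch (an illustration of ours, not a secure protocol) compares the sigmoid with the three-piece replacement used in SecureML \cite{mohassel2017secureml}:
\begin{lstlisting}[basicstyle=\ttfamily\footnotesize, language=Python]
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def piecewise_sigmoid(x):
    # Three-piece MPC-friendly replacement from SecureML:
    # 0 for x < -1/2, x + 1/2 for |x| <= 1/2, 1 for x > 1/2.
    return np.clip(x + 0.5, 0.0, 1.0)

x = np.linspace(-4.0, 4.0, 2001)
err = np.abs(sigmoid(x) - piecewise_sigmoid(x))
print(f"max |sigmoid - piecewise| = {err.max():.3f}")  # largest near |x|=1/2
\end{lstlisting}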
In this paper, we propose SPNN~to combine the advantages of algorithmic methods and cryptographic methods.
We use cryptographic techniques for the data holders to calculate the first hidden layer securely, and delegate the heavy hidden layer related computations to the server.
Therefore, our proposal enjoys much better scalability compared with the existing cryptographic methods, as reported in the experiments.
\section{The effect of a pulsed optical excitation on the density matrix of a quantum dot-cavity system: Analytic solution}\label{sec:excitation}
Let us consider an excitation of a quantum dot (QD)-cavity system
by a sequence of ultrashort optical pulses or by an extended finite wave packet of light. The master equation describing the time evolution of the density matrix (DM) is given by Eq.\,(1),
\begin{equation} i\dot{\rho}(t)= \left[\hL+\hat{\cal L}(t)\right]\rho(t)\,,
\label{eqn:ME}\end{equation}
with the time-independent Lindblad operator $\hL$ defined by Eq.\,(2) and
a time-dependent operator $\hat{\cal L}(t)$ defined as
\begin{equation} \hat{\cal L}(t)\rho(t)= [ V(t),\rho(t)]\,,
\label{eqn:L}\end{equation}
where $V(t)$ is given by Eq.\,(4) as
\begin{equation} V(t)=-\pmb{\mu}\cdot \mathbfcal{E}(t) \ensuremath{a^{\dagger}} - \pmb{\mu}^\ast\!\!\cdot \mathbfcal{E}^\ast(t) a\,.
\label{equ:V} \end{equation}
The formal solution of \Eq{eqn:ME} can be written as
\begin{equation} \rho(t)=T\exp\left\{ -i\int_{t_0}^t\left[\hL+\hat{\cal L}(\tau)\right]d\tau\right\} \rho(t_0)
= T\prod_{j=0}^{J-1} Q_j \rho(t_0)\,,
\label{equ:split} \end{equation}
where $T$ is the standard time-ordering operator. In the second part of \Eq{equ:split}, the full time evolution of the DM, between $t_0$ and $t$, is split into a time-ordered product of $J$ operators
$$ Q_j=T\exp\left\{ -i\int_{t_j}^{t_{j+1}}\left[\hL+\hat{\cal L}(\tau)\right]d\tau\right\}\,,$$
obtained by dividing the full time interval (from $t_0$ to $t$) into $J$ pieces, which are not necessarily equal:
$$ t_0<t_1<\dots <t_j<t_{j+1}<\dots < t_J=t\,.$$
Assuming that the time steps $\Delta t_j=t_{j+1}-t_j$ are small enough, these operators may be approximated as
\begin{equation} Q_j\approx T\exp\left\{ -i\int_{t_j}^{t_{j+1}}\hL d\tau \right\} T\exp\left\{-i\int_{t_j}^{t_{j+1}} \hat{\cal L}(\tau) d\tau \right\}\,,
\label{eqn:Q}\end{equation}
with an error scaling as $(\Delta t_j)^2$~\cite{Suzuki1976}. While the first operator in \Eq{eqn:Q} can be written as $e^{-i\hL\Delta t_j}$ due to the time-independent $\hL$, the second operator requires integration of the time-dependent field $\mathbfcal{E}(t)$ exciting the system. Using its definition \Eq{eqn:L}, the action of the second operator in \Eq{eqn:Q} on the DM can be evaluated as
\begin{equation} T\exp\left\{ -i\int_{t_j}^{t_{j+1}}\hat{\cal L}(\tau)d\tau\right\}\rho(t_j)= U_j \rho(t_j) U^\dagger_j\,,
\label{equ:pulse}\end{equation}
where
$$ U_j=T\exp\left\{ -i\int_{t_j}^{t_{j+1}}V(\tau)d\tau\right\} $$
is the standard evolution operator due to a time-dependent interaction $V(t)$.
Using the explicit form of $V(t)$, given by \Eq{equ:V}, from which it follows in particular that the commutator
$[V(t),V(t')]=0$ vanishes for any $t$ and $t'$, we obtain
\begin{equation} U_j=e^{i(E_j\ensuremath{a^{\dagger}} +E_j^\ast a)} \,,
\label{equ:U}\end{equation}
where
\begin{equation} E_j= \int_{t_j}^{t_{j+1}} \pmb{\mu}\cdot \mathbfcal{E}(\tau) d\tau\,.
\label{equ:Ej}\end{equation}
Combining the results, we find
\begin{equation} Q_j\rho(t_j)\approx e^{-i\hL\Delta t_j}U_j \rho(t_j) U^\dagger_j\,,
\label{equ:Qj}\end{equation}
where $U_j$ is given by \Eq{equ:U}.
Note that the full time evolution of the DM described by Eqs.\,(\ref{equ:split}) and (\ref{equ:U})--(\ref{equ:Qj}) becomes exact if the excitation field is represented by a sequence of $\delta$ pulses, given by Eq.\,(5):
\begin{equation} \pmb{\mu}\cdot \mathbfcal{E}(t)=\sum_j E_j \delta(t-t_j)\,.
\label{eqn:delta}\end{equation}
In fact, \Eq{eqn:delta} is equivalent to the rectangular rule of numerical integration of a finite wave packet
$$ \int_{-\infty}^\infty \pmb{\mu}\cdot \mathbfcal{E}(t) dt = \sum_j E_j=\sum_j \pmb{\mu}\cdot \mathbfcal{E}(t_j) \Delta t_j\,,$$
where $E_j$ are the pulse areas corresponding to the time intervals $\Delta t_j$\,.
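As a simple numerical illustration of this discretization (a Python sketch of ours, with arbitrary pulse parameters), the areas $E_j$ of a Gaussian wave packet can be tabulated with the rectangular rule:
\begin{verbatim}
import numpy as np

# Discretize a Gaussian pulse mu.E(t) = E0*exp(-(t/tau)^2) into
# delta-pulse areas E_j = mu.E(t_j)*dt (rectangular rule).
E0, tau = 1.0, 0.5                     # arbitrary amplitude and duration
t = np.linspace(-3 * tau, 3 * tau, 200)
dt = t[1] - t[0]
E_j = E0 * np.exp(-(t / tau) ** 2) * dt

# The summed areas approximate the integral of the wave packet:
print(E_j.sum(), E0 * tau * np.sqrt(np.pi))  # nearly equal
\end{verbatim}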
\bigskip Let us now consider the effect of a single $\delta$ pulse on the DM, which is given by \Eq{equ:pulse}\,. Dropping the index $j$ for brevity, we first transform the evolution operator
$$ U(E)=e^{i(E\ensuremath{a^{\dagger}} +E^\ast a)}=e^{-|E|^2/2}e^{iE\ensuremath{a^{\dagger}}}e^{iE^\ast a}\,,$$
using the fact~\cite{Suzuki1976} that $e^{A+B}=e^A e^B e^C$, if $C=-\frac{1}{2}[A,B]$ commutes with both operators $A$ and $B$, which is true in the present case. When acting on the ground state $|0\rangle$ of the optical cavity (with a single cavity mode), this operator generates a Glauber coherent state $|\alpha \rangle$ with the eigenvalue $\alpha=iE$. In fact,
$$ |\alpha \rangle= U(E)|0 \rangle = e^{-|E|^2/2}e^{iE\ensuremath{a^{\dagger}}} |0 \rangle = e^{-|E|^2/2} \sum_{n=0}^\infty \frac{(iE)^n (a^\dagger)^n}{n!} |0 \rangle = e^{-|E|^2/2} \sum_{n=0}^\infty \frac{(iE)^n}{\sqrt{n!}} |n \rangle\,,$$
so that
$$ a|\alpha \rangle = e^{-|E|^2/2} \sum_{n=1}^\infty \frac{(iE)^n\sqrt{n}}{\sqrt{n!}} |n-1 \rangle
= e^{-|E|^2/2} \sum_{n=0}^\infty \frac{(iE)^{n+1}}{\sqrt{n!}} |n \rangle = iE|\alpha \rangle\,.$$
This result is useful if the system is initially in its ground state, so that the density matrix before the $\delta$ pulse is given by $|0 \rangle \langle 0|$. In general, this is not the case, and the density matrix before the pulsed excitation is given by Eq.\,(8), or \Eq{equ:rhofull} in \Sec{Sec:Lindblad} below. We therefore need to evaluate the effect of a $\delta$ pulse on an arbitrary state $|m \rangle$ of the cavity, which is given by a matrix $U_{nm}(E)$ defined by
$$ U(E)|m \rangle =\sum_{n=0}^\infty |n \rangle U_{nm}(E)\,, $$
where
\begin{eqnarray} U_{nm}(E)= \langle n| U(E)|m \rangle &=& e^{-|E|^2/2}\sum_{k=0}^\infty \langle n| e^{iE\ensuremath{a^{\dagger}}}|k \rangle \langle k| e^{iE^\ast a}|m \rangle \label{equ:Unm}\\
&=& e^{-|E|^2/2}\sum_{k=0}^l \frac{(iE)^{n-k}}{(n-k)!}\sqrt{\frac{n!}{k!}} \frac{(iE^\ast)^{m-k}}{(m-k)!} \sqrt{\frac{m!}{k!}}\,,\nonumber\end{eqnarray}
with $l=\min(n,m)$. Introducing the phase $\varphi$ of the excitation pulse, via $E=|E|e^{i\varphi}$, \Eq{equ:Unm} becomes
\begin{equation} U_{nm}(E)=i^{n-m} e^{i\varphi(n-m)} |E|^{n-m}\sqrt{\frac{m!}{n!}} e^{-|E|^2/2}
\sum_{k=0}^m\frac{ (-|E|^2)^{m-k} n!}{(n-k)! (m-k)! k!}
\label{equ:Unm1}\end{equation}
for $n\geqslant m$, and
\begin{equation} U_{nm}(E)=i^{m-n} e^{i\varphi(n-m)} |E|^{m-n}\sqrt{\frac{n!}{m!}} e^{-|E|^2/2}
\sum_{k=0}^n\frac{ (-|E|^2)^{n-k} m!}{(n-k)! (m-k)! k!}
\label{equ:Unm2}\end{equation}
for $n\leqslant m$. Comparing the series in \Eqs{equ:Unm1}{equ:Unm2} with the associated Laguerre polynomials~\cite{Gradshtein}, given by a series
$$ L^\alpha_p(x)=\sum_{j=0}^p \frac{(-x)^j (p+\alpha)!}{(p-j)!(\alpha+j)!j!} $$
for $p\geqslant0$, we find that for any values of $n$ and $m$\,,
$$ U_{nm}(E) =e^{i\varphi(n-m)} C_{nm}(|E|) $$
with $C_{nm}(|E|)$ expressed in terms of the Laguerre polynomials:
\begin{equation} C_{nm}(|E|)=i^\alpha |E|^\alpha\sqrt{\frac{p!}{(p+\alpha)!}}L^\alpha_{p}(|E|^2) e^{-|E|^2/2}\,,
\label{Cnm}\end{equation}
where $\alpha=|n-m|$ and $p=\min(n,m)$. Using the property
$$ L_m^{n-m}(x)=L_n^{m-n}(x)\frac{n!}{m!} (-x)^{m-n}\,,$$
\Eq{Cnm} can be written more explicitly as Eq.\,(11)
$$ C_{nm}(|E|)=i^{n-m} |E|^{n-m} \sqrt{\frac{m!}{n!}}L^{n-m}_{m}(|E|^2) e^{-|E|^2/2}\,. $$
Finally, applying the operators $U(E)$ and $U^\dagger(E)$, respectively, on the left and right hand sides of the DM, in accordance with \Eq{equ:pulse}, we arrive at Eq.\,(10) of the main text.
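As an independent numerical cross-check of this result (a Python sketch of ours, relying on SciPy's generalized Laguerre polynomials), one can build the truncated matrix $C_{nm}(|E|)$ of \Eq{Cnm} and verify that the pulse operator is unitary on the low-lying Fock states:
\begin{verbatim}
import numpy as np
from scipy.special import eval_genlaguerre, gammaln

def C(n, m, absE):
    # Eq. (Cnm): alpha = |n - m|, p = min(n, m), so the Laguerre
    # upper index is always nonnegative.
    a, p = abs(n - m), min(n, m)
    norm = np.exp(0.5 * (gammaln(p + 1) - gammaln(p + a + 1)))
    return (1j ** a * absE ** a * norm
            * eval_genlaguerre(p, a, absE ** 2)
            * np.exp(-absE ** 2 / 2))

dim, absE = 60, 0.8
U = np.array([[C(n, m, absE) for m in range(dim)]
              for n in range(dim)])
block = (U @ U.conj().T)[:10, :10]
print(np.abs(block - np.eye(10)).max())  # tiny on the low rungs
\end{verbatim}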
\section{Analytic Diagonalization of the Lindblad Operator for the QD-cavity system}
\label{Sec:Lindblad}
The evolution of the QD-cavity system between and after excitation pulses is described by the master equation
\begin{equation} i\dot{\rho}= \hL\rho\,.
\label{equ:ME}\end{equation}
The action of the Lindblad operator on the DM can be conveniently expressed as
\begin{equation} \hat{L}\rho=H\rho-\rho H^\ast +2i\gamma_X d\rho \ensuremath{d^{\dagger}}+2i\gamma_C a\rho \ensuremath{a^{\dagger}}\,,
\label{equ:Lindblad2}\end{equation}
where the Jaynes-Cummings (JC) Hamiltonian and its complex conjugate are, respectively, given by
\begin{equation} \begin{split}
H&=\omega_X\ensuremath{d^{\dagger}} d+\omega_C\ensuremath{a^{\dagger}} a+g(\ensuremath{a^{\dagger}} d+\ensuremath{d^{\dagger}} a)\,,\\
H^\ast&=\omega^\ast_X\ensuremath{d^{\dagger}} d+\omega^\ast_C\ensuremath{a^{\dagger}} a+g(\ensuremath{a^{\dagger}} d+\ensuremath{d^{\dagger}} a)\,.
\end{split} \nonumber \end{equation}
Here, $\omega_X=\Omega_X-i\gamma_X$ and $\omega_C=\Omega_C-i\gamma_C$ are the complex frequencies of the QD exciton and the cavity mode, respectively. Note that \Eq{equ:Lindblad2} is equivalent to Eq.\,(2).
In the basis of Fock states of the QD-cavity system, the full DM is given by Eq.\,(8):
\begin{equation} \rho=\sum_{\nu\nu' n n'}\rho^{\nu\nu'}_{nn'}|\nu,n\rangle \langle\nu',n'|\,,
\label{equ:rhofull}\end{equation}
where $\nu$ and $\nu'$ refer to the exciton and $n$ and $n'$ to the cavity indices. Let us consider a group of elements of the DM describing the coherence between rungs $N+\ensuremath{{\cal S}}$ and $N$ of the JC ladder, where $\ensuremath{{\cal S}}$ is the separation between rungs. These are the elements with
$$ \nu+n=N+\ensuremath{{\cal S}}\equiv \ensuremath{{N_{{\cal S}}}}\ \ \ {\rm and} \ \ \ \nu'+n'=N $$
in \Eq{equ:rhofull}. The corresponding part of the DM is given by
\begin{eqnarray}
\rho(\ensuremath{{N_{{\cal S}}}};N)&=&\phantom{+}\rho_1^{(N)} |1,\ensuremath{{N_{{\cal S}}}}-1 \rangle \langle 1, N-1| \nonumber\\
&&+\rho_2^{(N)} |1,\ensuremath{{N_{{\cal S}}}}-1 \rangle \langle 0, N| \nonumber\\
&&+\rho_3^{(N)} |0,\ensuremath{{N_{{\cal S}}}} \rangle \langle 1, N-1| \nonumber\\
&&+\rho_4^{(N)} |0,\ensuremath{{N_{{\cal S}}}} \rangle \langle 0, N|\,,
\label{equ:rhoN}
\end{eqnarray}
where for convenience we have introduced new notations for the DM elements: $\rho_1^{(N)}=\rho^{11}_{\ensuremath{{N_{{\cal S}}}}-1,N-1}$, $\rho_2^{(N)}=\rho^{10}_{\ensuremath{{N_{{\cal S}}}}-1,N}$, $\rho_3^{(N)}=\rho^{01}_{\ensuremath{{N_{{\cal S}}}},N-1}$, and $\rho_4^{(N)}=\rho^{00}_{\ensuremath{{N_{{\cal S}}}},N}$. For the elements involving the ground state ($N=0$ or $\ensuremath{{N_{{\cal S}}}}=0$), the DM reduces to only two elements:
\begin{equation} \rho(\ensuremath{{\cal S}};0)=\rho_1^{(0)} |1,\ensuremath{{\cal S}}-1 \rangle \langle 0, 0| +\rho_2^{(0)} |0,\ensuremath{{\cal S}} \rangle \langle 0, 0|
\label{equ:rho0}\end{equation}
with $\ensuremath{{\cal S}}>0$ taken for definiteness. We use these new notations, in order to form a vector $\vec{\rho}$ consisting of the elements of the DM which appear in \Eqs{equ:rhoN}{equ:rho0}, for all rungs and a fixed $\ensuremath{{\cal S}}$:
\begin{equation}
\vec{\rho}=
\begin{bmatrix}
\vec{\rho}^{\:(0)}\\
\vec{\rho}^{\:(1)}\\
\vec{\rho}^{\:(2)}\\
\vdots
\end{bmatrix}
,
\ \ \ {\rm where}\ \ \
\vec{\rho}^{\:(0)}=
\begin{bmatrix}
\rho_1^{(0)}\\
\rho_2^{(0)}
\end{bmatrix}
, \ \mbox{and} \ \
\vec{\rho}^{\:(N)}=
\begin{bmatrix}
\rho_1^{(N)}\\
\rho_2^{(N)}\\
\rho_3^{(N)}\\
\rho_4^{(N)}
\end{bmatrix}
\ \mbox{for} \ N>0\, .
\label{equ:vecs}\end{equation}
The master equation (\ref{equ:ME}) then takes the matrix form $i\dot{\vec{\rho}}= \hL\vec{\rho}$, where $\hat{L}$ is a matrix consisting of the blocks
\begin{align}\label{equ:L}
\hat{L}&= \begin{bmatrix}
L_0 & M_{01} & \zero & \hdots \\
\zero & L_1 & M_{12} & \hdots\\
\zero & \zero & L_2 & \hdots\\
\vdots & \vdots & \vdots & \ddots \end{bmatrix}
\,,\end{align}
where $\zero$ denotes blocks of zero elements. It is convenient at this point to introduce a $2\times2$ matrix of the $N$-th rung of the JC Hamiltonian, as in Eq.\,(13),
\begin{align}\label{equ:HN}
H_N&= \begin{bmatrix}
\omega_X+(N-1)\omega_C & \sqrt{N}g \\
\sqrt{N}g & N\omega_C \end{bmatrix}
\,.\end{align}
The diagonal blocks of $\hL$
are produced by the first two terms of the Lindblad operator \Eq{equ:Lindblad2}
and are given by
\begin{eqnarray}
\label{equ:L0}
L_0&=& H_{\ensuremath{{\cal S}}}\,,\\
L_N&=& G_{\ensuremath{{N_{{\cal S}}}}}- F_N^\ast\ \ \ \mbox{for} \ N>0\,,
\label{equ:LN}
\end{eqnarray}
where
\begin{equation} G_{N}= \begin{bmatrix}
\omega_X+(N-1)\omega_C & 0 & \sqrt{N}g & 0 \\
0 & \omega_X+(N-1)\omega_C & 0 & \sqrt{N}g \\
\sqrt{N}g & 0 & N\omega_C & 0 \\
0 & \sqrt{N}g & 0 & N\omega_C \end{bmatrix}
\label{equ:G} \end{equation}
consists of the four elements of $H_N$, contributing twice, one time distributed over the first and third rows and columns of $G_{N}$, the other over the second and fourth rows and columns. The other matrix,
$F_{N}^\ast$, is the complex conjugate of
\begin{equation} F_N=\begin{bmatrix}
H_{N}& \zero\\
\zero &H_{N} \end{bmatrix}\,,
\label{equ:F}\end{equation}
which in turn consists of $2\times 2$ diagonal blocks $H_{N}$, given by \Eq{equ:HN}, and $2\times 2$ zero matrices $\zero$ occupying its off-diagonal blocks. The off-diagonal blocks of $\hL$ are due to the last two terms of the Lindblad operator \Eq{equ:Lindblad2} and take the form
\begin{eqnarray} M_{01}&= &\begin{bmatrix}
0 & 2i\gamma_C \sqrt{\ensuremath{{\cal S}}}& 0 & 0 \\
2i\gamma_X & 0 & 0 & 2i\gamma_C \sqrt{\ensuremath{{\cal S}}+1} \end{bmatrix} \,,
\label{equ:Mnn1}
\\
M_{N,N+1}&= &\begin{bmatrix}
2i\gamma_C \sqrt{\ensuremath{{N_{{\cal S}}}} N}& 0 & 0 &0 \\
0& 2i\gamma_C \sqrt{\ensuremath{{N_{{\cal S}}}} (N+1)} & 0 &0 \\
0& 0 & 2i\gamma_C \sqrt{(\ensuremath{{N_{{\cal S}}}}+1)N} & 0 \\
2i\gamma_X & 0 & 0 & 2i\gamma_C \sqrt{(\ensuremath{{N_{{\cal S}}}}+1)(N+1)}
\nonumber
\end{bmatrix}\,.
\end{eqnarray}
An analytic diagonalization of the matrix $\hL$ presented below is based on the eigenvalues and eigenvectors of the Hamiltonian matrix $H_N$ of the $N$-th rung of the JC ladder. This $2\times2$ matrix, playing the role of a building block for the diagonalization of $\hL$, is diagonalized as
\begin{equation}
\label{equ:evN}
H_N Y_N=Y_N\Lambda_N\,,
\end{equation}
where the transformation matrix $Y_N$ and the eigenvalue matrix $\Lambda_N$ are given by
\begin{align}
Y_N&= \begin{bmatrix}
\alpha_N & \beta_N \\
-\beta_N & \alpha_N \end{bmatrix}
\quad\text{and}\quad\Lambda_N=
\begin{bmatrix}
\lambda^-_N & 0 \\
0 & \lambda^+_N \end{bmatrix},
\end{align}
respectively, with
\begin{align}
\label{equ:lam}
\lambda^\pm_N&=N\omega_C+\delta/2\pm\Delta_N \,,\\
\alpha_N&=\frac{\Delta_N-\delta/2}{D^-_N}=\frac{\sqrt{N}g}{D^+_N} \,,\ \ \ \ \beta_N=\frac{\sqrt{N}g}{D^-_N}=\frac{\Delta_N+\delta/2}{D^+_N}\,,\nonumber\\
\Delta_N&=\sqrt{(\delta/2)^2+Ng^2}\,,\ \ \ \ \ \ \ \ \ \, D^\pm_N=\sqrt{(\Delta_N\pm\delta/2)^2+Ng^2}\,,
\label{equ:Dgam}
\end{align}
where $\delta=\omega_X-\omega_C$ is the complex frequency detuning, and constants $D^\pm_N$ are normalizing the eigenvectors of $H_N$ in such a way that
\begin{equation}
\label{equ:norm}
\alpha_N^2+\beta_N^2=1\,.
\end{equation}
Note that $\Delta_N$ and $D^\pm_N$ are also complex-valued and expressed by \Eq{equ:Dgam} in terms of square roots, each having two values, or two branches. The choice of the sign (i.e. the square root branch) can be arbitrary in each case. However, this choice has to be used consistently in all the equations containing $\Delta_N$ and $D^\pm_N$, with the sign of $\Delta_N$ being independent from those of $D^\pm_N$, while the signs of $D^+_N$ and $D^-_N$ are linked together (however, only one of these two constants, either $D^+_N$ or $D^-_N$, is required in calculations). Owing to the normalization \Eq{equ:norm}, the transformation matrix $Y_N$ is orthogonal, i.e.
$$ Y^{-1}_N=Y^{\rm T}_N\,,$$
where $Y^{\rm T}_N$ is the transpose of $Y_N$, so that \Eq{equ:evN} can also be written as $Y^{\rm T}_N H_N Y_N=\Lambda_N$.
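These formulas are easy to verify numerically; the following Python sketch (with illustrative parameter values of ours) checks \Eq{equ:evN} and the complex orthogonality of $Y_N$:
\begin{verbatim}
import numpy as np

g, omega_C, omega_X, N = 1.0, 1.0 - 0.05j, 1.2 - 0.02j, 3
delta = omega_X - omega_C

H_N = np.array([[omega_X + (N - 1) * omega_C, np.sqrt(N) * g],
                [np.sqrt(N) * g, N * omega_C]])

Delta_N = np.sqrt((delta / 2) ** 2 + N * g ** 2)
lam = [N * omega_C + delta / 2 - Delta_N,
       N * omega_C + delta / 2 + Delta_N]

Dm = np.sqrt((Delta_N - delta / 2) ** 2 + N * g ** 2)
alpha, beta = (Delta_N - delta / 2) / Dm, np.sqrt(N) * g / Dm
Y = np.array([[alpha, beta], [-beta, alpha]])

print(np.abs(H_N @ Y - Y @ np.diag(lam)).max())  # ~machine precision
print(np.abs(Y.T @ Y - np.eye(2)).max())         # complex orthogonal
\end{verbatim}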
The diagonal block $L_0$ of the Lindblad matrix $\hL$, which is given by \Eq{equ:L0}, is identical to $H_{\ensuremath{{\cal S}}}$ and is thus diagonalized by $Y_{\ensuremath{{\cal S}}}$:
\begin{equation} Y_{\ensuremath{{\cal S}}}^{\rm T}L_0 Y_{\ensuremath{{\cal S}}}=\Omega_0=\begin{bmatrix}
\lambda^-_{\ensuremath{{\cal S}}} & 0 \\
0 & \lambda^+_{\ensuremath{{\cal S}}} \end{bmatrix}\,.
\label{equ:Omega0}\end{equation}
To diagonalize any other diagonal block $L_N$ with $N>0$, which is given by \Eq{equ:LN}, we introduce two $4\times4$ matrices
\begin{equation} A_N=\begin{bmatrix}
\alpha_N & 0 & \beta_N & 0 \\
0 & \alpha_N & 0 & \beta_N \\
-\beta_N & 0 & \alpha_N & 0 \\
0 & -\beta_N & 0 & \alpha_N \end{bmatrix}
\,,\ \ \ B_N= \begin{bmatrix}
\alpha_N & \beta_N & 0 & 0 \\
-\beta_N & \alpha_N & 0 & 0 \\
0 & 0 & \alpha_N & \beta_N \\
0 & 0 & -\beta_N & \alpha_N \end{bmatrix}\,.
\label{equ:AB}\end{equation}
Clearly, matrix $B_N$ is block-diagonal, consisting of two identical blocks of $Y_N$. Matrix $A_N$ can be obtained from $B_N$ by simultaneously swapping the 2nd and 3rd rows and columns. Note that exactly the same link exists between matrices $G_N$ and $F_N$ contributing to $L_N$ and consisting of zero elements and the elements of $H_N$, see \Eqs{equ:G}{equ:F}. Consequently, matrices $A_N$ and $B_N$ are orthogonal, i.e. $A_N^{-1}=A_N^{\rm T}$ and $B_N^{-1}=B_N^{\rm T}$, and diagonalize matrices $G_N$ and $F_N$, respectively. At the same time, owing to the structure of these matrices, the following commutation relations hold:
\begin{equation} [A_{\ensuremath{{N_{{\cal S}}}}},B^\ast_N]=[A_{\ensuremath{{N_{{\cal S}}}}},F^\ast_N]=[G_{\ensuremath{{N_{{\cal S}}}}},B^\ast_N]=0\,.
\label{equ:com}\end{equation}
Owing to the above properties, the matrix
\begin{equation} S_N=A_{\ensuremath{{N_{{\cal S}}}}} B^\ast_N
\label{equ:SAB}\end{equation}
is also orthogonal, $S_N^{-1}=S_N^{\rm T}$, and diagonalizes $L_N$, a diagonal block of the Lindblad matrix $\hL$:
\begin{equation} S_N^{\rm T}L_N S_N=\Omega_N\,.
\label{equ:SLS}\end{equation}
In fact, matrix $B^\ast_N$ diagonalizes $F^\ast_N$ while keeping $G_{\ensuremath{{N_{{\cal S}}}}}$ untouched, due to \Eq{equ:com}. Similarly, $A_\ensuremath{{N_{{\cal S}}}}$ diagonalizes $G_{\ensuremath{{N_{{\cal S}}}}}$ while keeping $F^\ast_N$ untouched. The diagonal matrix $\Omega_N$ of the eigenvalues of $L_N$ then takes the form:
\begin{equation} \Omega_N=\begin{bmatrix}
\lambda^-_{\ensuremath{{N_{{\cal S}}}}}-{\lambda^-_N}^\ast & 0 & 0 & 0 \\
0 & \lambda^-_{\ensuremath{{N_{{\cal S}}}}}-{\lambda^+_N}^\ast & 0 & 0 \\
0 & 0 & \lambda^+_{\ensuremath{{N_{{\cal S}}}}}-{\lambda^-_N}^\ast & 0 \\
0 & 0 & 0 & \lambda^+_{\ensuremath{{N_{{\cal S}}}}}-{\lambda^+_N}^\ast \end{bmatrix}\,,
\label{equ:OmegaN}\end{equation}
where ${\lambda^\pm_N}$ are given by \Eq{equ:lam}. The eigenvalues $\Omega_N$ are considered in more detail in \Sec{Sec:frequencies}, where limiting cases of large and zero detuning, and of large rung number $N$ are analyzed.
Let us now diagonalize the full matrix $\hat{L}$, finding matrices $\hat{U}$ and $\hat{V}$ of right and left eigenvectors, respectively:
\begin{equation} \hat{L}\hat{U}=\hat{U}\hat{\Omega}\,, \qquad \hat{V}\hat{L}=\hat{\Omega}\hat{V}\,.
\label{equ:ev}\end{equation}
Due to the block form of $\hat{L}$, the diagonal matrix $\hat{\Omega}$ consists of the eigenvalue matrices $\Omega_N$ found above, and $\hat{U}$ and $\hat{V}$ are the block-triangular matrices:
\begin{equation} \hat{\Omega}= \begin{bmatrix}
\Omega_0 & \zero & \zero & \hdots \\
\zero & \Omega_1 & \zero & \hdots \\
\zero & \zero & \Omega_2 & \hdots \\
\vdots & \vdots & \vdots & \ddots \end{bmatrix}
\,,\ \ \
\hat{U}=\begin{bmatrix}
U_{00} & U_{01} & U_{02} & \hdots \\
\zero & U_{11} & U_{12} & \hdots \\
\zero& \zero & U_{22} & \hdots \\
\vdots & \vdots & \vdots & \ddots \end{bmatrix}
\,,\ \ \
\hat{V}=\begin{bmatrix}
V_{00} & V_{01} & V_{02} & \hdots \\
\zero & V_{11} & V_{12} & \hdots \\
\zero& \zero & V_{22} & \hdots \\
\vdots & \vdots & \vdots & \ddots \end{bmatrix}
\,.
\label{equ:OUV}\end{equation}
Here, $\Omega_0$, $U_{00}$ and $V_{00}$ are $2\times 2$ blocks, $U_{0N}$ and $V_{0N}$ with $N>0$ are
$2\times 4$ blocks, and $U_{NK}$ and $V_{NK}$ with both $N,\,K>0$ are $4\times 4$ matrices. Substituting $\hat{U}$ and $\hat{V}$ into the eigenvalue equations (\ref{equ:ev}), we find series of recursive relations for all blocks $U_{NK}$ and $V_{NK}$ and explicit analytic expressions for their elements.
Let us first consider the right eigenvectors $\hat{U}$. Substituting $\hat{\Omega}$ and $\hat{U}$ from \Eq{equ:OUV} and $\hat{L}$ from \Eq{equ:L} into the first eigenvalue equation (\ref{equ:ev}), we obtain
for any fixed $N$ the matrix equation $ L_N U_{NN}=U_{NN} \Omega_N$, so that $U_{NN}=S_N$, having the explicit form given by \Eqs{equ:AB}{equ:SAB}. For any $0\leqslant K<N$, we then find a matrix equation linking $U_{KN}$ to $U_{K+1,N}$:
$$ L_K U_{KN}+M_{K,K+1}U_{K+1,N}=U_{KN}\Omega_N\,.$$
Multiplying this equation from the left with $S_K^{\rm T}$, and using $S_K^{\rm T}L_K=\Omega_K S_K^{\rm T}$, we obtain
\begin{equation} \Omega_K \tilde{U}_{KN}+\tilde{M}_{K,K+1}\tilde{U}_{K+1,N}=\tilde{U}_{KN}\Omega_N\,,
\label{equ:left}\end{equation}
where
\begin{equation} \tilde{U}_{KN}=S_K^{\rm T}U_{KN}\,,\ \ \ \tilde{M}_{K,K+1}= S_K^{\rm T}M_{K,K+1} S_{K+1}\,.
\label{equ:Utilde}\end{equation}
As $\Omega_K$ is a diagonal matrix, \Eq{equ:left} results in the following explicit form of the matrix elements of
$\tilde{U}_{KN}$:
\begin{equation} (\tilde{U}_{KN})_{ij}=\frac{(\tilde{M}_{K,K+1}\tilde{U}_{K+1,N})_{ij}}{(\Omega_N)_{jj}-(\Omega_K)_{ii}}\,.
\label{equ:UKN}\end{equation}
For each $N$, we use $\tilde{U}_{NN}=S_N^{\rm T}U_{NN} =S_N^{\rm T}S_N =\one$ (here $\one$ is the identity matrix) as a starting point and calculate $\tilde{U}_{KN}$ from \Eq{equ:UKN} sequentially, for $K=N-1,N-2,...,0$. Note that the index $i$ ($j$) of the matrix elements takes the values of 1 or 2 for $K=0$ ($N=0$) and 1,\,2,\,3 or 4 for $K>0$ ($N>0$), due to the sizes of the corresponding blocks.
Finally, the blocks of the right eigenvector matrix $\hat{U}$ are found from the matrix multiplication $U_{KN}=S_K\tilde{U}_{KN}$, which is the inverse transformation compared to \Eq{equ:Utilde}. Figure~\ref{fig:U} illustrates the above algorithm.
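In code, the recursion \Eq{equ:UKN} is a short loop; the Python sketch below (ours, with random stand-ins for the eigenvalue vectors $\Omega_K$ and the coupling blocks $\tilde{M}_{K,K+1}$) fills the blocks $\tilde{U}_{KN}$ for one column of $\hat{U}$, and the left blocks $\tilde{V}_{NK}$ introduced below follow from an analogous loop:
\begin{verbatim}
import numpy as np

def utilde_blocks(Omega, Mtilde, N):
    # (tilde-U_{KN})_{ij} = (Mtilde_{K,K+1} tilde-U_{K+1,N})_{ij}
    #                       / (Omega_N[j] - Omega_K[i])
    U = {N: np.eye(len(Omega[N]), dtype=complex)}
    for K in range(N - 1, -1, -1):
        num = Mtilde[K] @ U[K + 1]
        denom = Omega[N][None, :] - Omega[K][:, None]
        U[K] = num / denom
    return U

rng = np.random.default_rng(0)
sizes = {K: (2 if K == 0 else 4) for K in range(4)}
Omega = {K: rng.standard_normal(sizes[K])
            - 1j * rng.random(sizes[K]) for K in range(4)}
Mtilde = {K: 1j * rng.standard_normal((sizes[K], sizes[K + 1]))
          for K in range(3)}
blocks = utilde_blocks(Omega, Mtilde, N=3)
print(blocks[0].shape)  # (2, 4): the K = 0 block of the N = 3 column
\end{verbatim}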
\begin{figure}
\centering
\includegraphics[height=6cm,clip]{MatrixUdiagram.png}
\caption{Scheme illustrating the algorithm of the analytic calculation of matrix $\hat{U}$ of right eigenvectors. The diagonal blocks of $\hat{U}$ are found by iterating over $N$ which changes from 0 to $\infty$. Nonzero off-diagonal blocks are found by fixing $N$ and iterating over $K$ which changes from the diagonal value $K=N$ to $K=0$. }
\label{fig:U}
\end{figure}
The procedure of finding the left eigenvector matrix $\hat{V}$ is similar. We first obtain matrix equations $V_{NN}L_N = \Omega_N V_{NN}$ for the diagonal blocks of $\hat{V}$, concluding that $V_{NN}=S_N^{\rm T}$\,. Then, for any $K>N$, we have
$$
V_{NK}L_K +V_{N,K-1}M_{K-1,K}=\Omega_N V_{NK}\,.
$$
Multiplying this equation with matrix $S_K$ from the right, and using the fact that $L_K S_K=S_K\Omega_K$, we find
$$
\tilde{V}_{NK}\Omega_K+\tilde{V}_{N,K-1}\tilde{M}_{K-1,K}=\Omega_N\tilde{V}_{NK}\,,
$$
where $\tilde{V}_{NK}=V_{NK}S_K$ and $\tilde{M}_{K-1,K}$ is defined in \Eq{equ:Utilde}. This again allows us to obtain an explicit form of the matrix elements:
\begin{equation} (\tilde{V}_{NK})_{ij}=\frac{(\tilde{V}_{N,K-1}\tilde{M}_{K-1,K})_{ij}}{(\Omega_N)_{ii}-(\Omega_K)_{jj}}\,.
\label{equ:VNK}\end{equation}
For a given fixed $N$, one can generate sequentially, starting from $\tilde{V}_{NN}=\one$ and using \Eq{equ:VNK}, all the matrices $\tilde{V}_{NK}$ for $K=N+1,N+2,...$. Matrices $V_{NK}$ can then be found, using the inverse transformation, as $V_{NK}=\tilde{V}_{NK} S_K^{\rm T}$. This algorithm of reconstructing the full matrix $\hat{V}$ is illustrated by Figure~\ref{fig:V}.
\begin{figure}
\centering
\includegraphics[height=6cm,clip]{MatrixVdiagram}
\caption{As \Fig{fig:U} but for matrix $\hat{V}$ of left eigenvectors. }
\label{fig:V}
\end{figure}
Note that the left and right eigenvectors are orthogonal,
$$ \hat{V}\hat{U}=\hat{U}\hat{V}=\hat{\mathbb{1}}\,,$$
which results for $N'\geqslant N$ in the relations
\begin{equation} \sum_{K=N}^{N'}V_{NK}U_{KN'}=\sum_{K=N}^{N'}U_{NK}V_{KN'}=\mathbb{1}\delta_{NN'}\,,
\label{equ:orth}\end{equation}
where $\mathbb{1}$ is the identity matrix and $\delta_{NN'}$ is the Kronecker delta. Equation (\ref{equ:orth}) can also be written as
$$\sum_{K=N}^{N'}\tilde{V}_{NK}\tilde{U}_{KN'}=\sum_{K=N}^{N'}\tilde{U}_{NK}\tilde{V}_{KN'}=\mathbb{1}\delta_{NN'}\,.
$$
\bigskip
To conclude this section, let us illustrate the analytic diagonalization presented above on a $6\times6$ Lindblad matrix corresponding to the lowest order of the standard FWM polarization treated in~\cite{KasprzakNMa10}:
$$ \hat{L}= \begin{bmatrix}
L_0 & M_{01} \\
\mathbb{0} & L_1 \end{bmatrix}\,,
\quad
\hat{U}= \begin{bmatrix}
S_0 & U_{01} \\
\mathbb{0} & S_1 \end{bmatrix}\,,
\quad
\hat{V}= \begin{bmatrix}
S^{\rm T}_0 & V_{01} \\
\mathbb{0} & S^{\rm T}_1 \end{bmatrix}\,.$$
We find $U_{01}=S_0 \tilde{U}_{01}$ and $V_{01}=\tilde{V}_{01}S^{\rm T}_1$, where
\begin{equation} (\tilde{U}_{01})_{ij}=\frac{(\tilde{M}_{01})_{ij}}{(\Omega_1)_{jj}-(\Omega_0)_{ii}} \ , \quad
(\tilde{V}_{01})_{ij}=\frac{(\tilde{M}_{01})_{ij}}{(\Omega_0)_{ii}-(\Omega_1)_{jj}}=-(\tilde{U}_{01})_{ij}\,,
\label{equ:UeqV} \end{equation}
and $\tilde{M}_{01}=S^{\rm T}_0 M_{01}S_1$.
The orthogonality of $\hat{V}$ and $\hat{U}$ can then be verified:
$$ \hat{U}\hat{V}=\begin{bmatrix}
S_0 & U_{01} \\
\mathbb{0} & S_1 \end{bmatrix}
\begin{bmatrix}
S^{\rm T}_0 & V_{01} \\
\mathbb{0} & S^{\rm T}_1 \end{bmatrix}
=\begin{bmatrix}
S_0S^{\rm T}_0 & S_0V_{01}+U_{01}S^{\rm T}_1 \\
\mathbb{0} & S_1 S^{\rm T}_1 \end{bmatrix}
=\hat{\mathbb{1}}\,,$$
using
$$ S_0V_{01}+U_{01}S^{\rm T}_1=S_0\tilde{V}_{01}S^{\rm T}_1+
S_0\tilde{U}_{01}S^{\rm T}_1
=S_0(\tilde{V}_{01}+\tilde{U}_{01})S^{\rm T}_1=\mathbb{0}\,, $$
which follows from \Eq{equ:UeqV}\,.
\section{Transition frequencies}
\label{Sec:frequencies}
By fixing the distance $\ensuremath{{\cal S}}$ between rungs of the JC ladder, contributing to the left and right parts of the DM, we isolate a specific component of the coherent dynamics of the QD-cavity system, corresponding to a selected phase combination of the optical pulses exciting it. The time dependence of this polarization is given by Eq.\,(14) which contains the transition frequencies $\omega_r$ between rungs. These frequencies are the eigenvalues of the reduced Lindblad matrix $\hat{L}$, isolated from the full Lindblad operator by fixing the $\ensuremath{{\cal S}}$. The eigenvalues are given by the diagonal matrix $\hat{\Omega}$, \Eq{equ:OUV}, which consists of blocks $\Omega_N$ described by \Eqs{equ:Omega0}{equ:OmegaN}. For $N>0$, these diagonal blocks can be written as
\begin{equation}
\Omega_N= (\bar{\Omega} -i\gamma_N)\one +
\begin{bmatrix}
-\Delta^i_N & 0 & 0 & 0 \\
0 & -\Delta^o_N & 0 & 0 \\
0 & 0 & \Delta^o_N & 0 \\
0 & 0 & 0 & \Delta^i_N \end{bmatrix}\,,
\label{equ:four}\end{equation}
where
\begin{equation}
\Delta^{o,i}_N=\sqrt{(\delta/2)^2+\ensuremath{{N_{{\cal S}}}} g^2}\pm\sqrt{(\delta^\ast/2)^2+Ng^2}\,.
\label{equ:delpm}\end{equation}
The complex frequencies of all four transitions, which occur between the two pairs of quantum levels of rungs $N$ and $\ensuremath{{N_{{\cal S}}}}=N+\ensuremath{{\cal S}}$, have the same dominant contribution, described by the first term of \Eq{equ:four}. It consists of the average frequency distance between the rungs,
\begin{equation} \bar{\Omega}=\ensuremath{{\cal S}}\Omega_C\,,
\label{equ:Om}\end{equation}
which is the same for all pairs of rungs separated by $\ensuremath{{\cal S}}$, and
the average damping,
\begin{equation} \gamma_N=(N+\ensuremath{{N_{{\cal S}}}}-1)\gamma_C+\gamma_X\,,
\label{equ:gam}\end{equation}
showing a linear increase with $N$, as the dampings of both rungs add up.
The second term in \Eq{equ:four} describes a fine structure of the transitions, given by the splittings $\Delta^{o,i}_N$, which depend on the detuning $\delta=\omega_X-\omega_C$, the coupling constant $g$, and the rung number $N$. Below we analyze this fine structure in more detail, providing simple asymptotic expressions for limiting cases of (i) large and (ii) small or zero detuning, in the latter case paying attention to the limit of large $N$.
\subsection{Large detuning}
Assuming $|\delta/2|\gg \sqrt{\ensuremath{{N_{{\cal S}}}}}g$ and $|\delta/2|\gg \sqrt{N}g$, we find from \Eq{equ:delpm}
$$ \Delta^{o,i}_N\approx\frac{\delta\pm\delta^\ast}{2}+g^2\left(\frac{\ensuremath{{N_{{\cal S}}}}}{\delta}\pm\frac{N}{\delta^\ast}\right)\,,$$
so that
$$ \Delta^o_N\approx \delta' + g^2\frac{(N+\ensuremath{{N_{{\cal S}}}})\delta'-i\ensuremath{{\cal S}}\delta''}{|\delta|^2}\,,\\
\quad \Delta^i_N\approx i\delta'' + g^2\frac{\ensuremath{{\cal S}}\delta'-i(N+\ensuremath{{N_{{\cal S}}}})\delta''}{|\delta|^2}\,,$$
where the complex detuning $\delta=\delta'+i\delta''$ is split into the real and imaginary parts, given by $\delta'=\Omega_X-\Omega_C$ and $\delta''=\gamma_C-\gamma_X$, respectively. Furthermore, for $|\delta'|\gg|\delta''|$, the above equations simplify to
$$ \Delta^o_N\approx \delta' +g^2\frac{N+\ensuremath{{N_{{\cal S}}}}}{|\delta|}\ \ \ \ {\rm and}\ \ \ \
\Delta^i_N\approx i\delta''+g^2\frac{\ensuremath{{\cal S}}}{|\delta|}\,,$$
giving the approximate frequencies $\pm\Delta^o_N$ and $\pm\Delta^i_N$ of the ``outer'' and ``inner'' transition doublets, respectively, to be considered on top of the lead frequency and damping common to all four transitions, described by \Eqs{equ:Om}{equ:gam}, respectively. We see that the splitting between the outer transitions is dominated by $2\delta'$, with a correction proportional to $g^2$ and growing linearly with $N$. At the same time, the inner transitions have a small splitting $2g^2\ensuremath{{\cal S}}/|\delta|$, which is independent of $N$. The damping of both outer transitions is given by $\gamma_N$. For the inner transitions ($\bar{\Omega}\pm\Delta^i_N$), the dampings are different, $\gamma_N\mp\delta'' $, so that the lower-frequency transition is broader than the higher-frequency one for $\gamma_C>\gamma_X$.
\subsection{Small detuning}
Assuming $|\delta/2|\ll \sqrt{\ensuremath{{N_{{\cal S}}}}}g$ and $|\delta/2|\ll\sqrt{N}g$, we find from \Eq{equ:delpm}
$$ \Delta^{o,i}_N\approx \left(\sqrt{\ensuremath{{N_{{\cal S}}}}}\pm \sqrt{N} \right)g+\frac{\delta^2}{8\ensuremath{{N_{{\cal S}}}} g^2}\pm
\frac{{\delta^\ast}^2}{8Ng^2}\,.$$
For $N\gg\ensuremath{{\cal S}}$, this result further simplifies to
$$\Delta^o_N\approx 2\sqrt{N}g+\frac{\ensuremath{{\cal S}}}{2\sqrt{N}} g +\frac{(\delta')^2-(\delta'')^2}{4Ng^2} \ , \quad
\Delta^i_N\approx \frac{\ensuremath{{\cal S}}}{2\sqrt{N}} g + i\frac{\delta'\delta''}{2Ng^2}\,.$$
Finally, for zero detuning, $\Omega_X=\Omega_C$, we obtain in leading order of $\ensuremath{{\cal S}}/N$
$$ \Delta^o_N\approx 2\sqrt{N}g \ \ \ \ {\rm and} \ \ \ \ \Delta^i_N\approx \frac{\ensuremath{{\cal S}}}{2\sqrt{N}} g \,,$$
from where we find also the change of the transition frequencies of outer and inner doublets with rung number $N$,
\begin{equation} \Delta^o_{N+1}-\Delta^o_N\approx \frac{g}{\sqrt{N}} \ \ \ \ {\rm and} \ \ \ \
\Delta^i_{N+1}-\Delta^i_N\approx -\frac{\ensuremath{{\cal S}} g}{4N\sqrt{N}} \,.
\label{equ:freq}\end{equation}
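These asymptotics are readily checked numerically; a Python sketch of ours, with illustrative values of $g$ and $\ensuremath{{\cal S}}$:
\begin{verbatim}
import numpy as np

g, S = 1.0, 1
N = np.arange(1, 2001, dtype=float)
# Exact splittings at zero detuning, from Delta^{o,i}_N:
Do = (np.sqrt(N + S) + np.sqrt(N)) * g
Di = (np.sqrt(N + S) - np.sqrt(N)) * g
# Leading-order asymptotics for N >> S:
Do_asym = 2 * np.sqrt(N) * g
Di_asym = S * g / (2 * np.sqrt(N))

print(Do[-1] - Do_asym[-1])      # residual of order S*g/sqrt(N)
print(Di[-1] / Di_asym[-1] - 1)  # ratio tends to 1
\end{verbatim}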
\section{Degenerate ${\cal N}$-wave mixing in the low-damping limit}\label{sec:lowdamp}
In this section, we consider all possible phase channels in the optical polarization when the system is excited by two laser pulses of arbitrary strength. We focus on the situation when both pulses arrive simultaneously (i.e. the time delay is zero) and call the optical response to this excitation the {\it degenerate} ${\cal N}$-wave mixing (${\cal N}$WM) polarization, where ${\cal N}$ determines the detected phase channel. We first treat rigorously the change of the DM due to the pulsed excitation, concentrating on two important special cases in which the pulse area of one of the two pulses is small enough to be accounted for in the lowest order, while the pulse area of the other pulse can be arbitrarily large. Then we consider the coherent dynamics after the pulses in the limit of small exciton and cavity damping, so that the off-diagonal blocks of the Lindblad matrix can be neglected. Finally, we treat analytically the limit of a large average number of photons $n_{\rm ph}$ excited in the cavity, $n_{\rm ph}\gg 1$, corresponding to a large pulse area of one of the pulses, which allows us to develop a closed-form solution for the ${\cal N}$WM polarization in both the time and frequency domains.
\subsection{Two-pulse excitation}
The so-called ${\cal N}$WM mentioned above describes a mixing of ${\cal N}$ waves which produce an optical response of the system with a phase
\begin{equation} \Phi=\ensuremath{{\cal S}}_1 \varphi_1+\ensuremath{{\cal S}}_2 \varphi_2\,,
\label{equ:phase}\end{equation}
where $\varphi_j=\arg(E_j)$ and
\begin{equation} |\ensuremath{{\cal S}}_1|+|\ensuremath{{\cal S}}_2|+1={\cal N}\,.
\label{equ:N}\end{equation}
For example, the standard FWM corresponds to $\ensuremath{{\cal S}}_1=-1$ and $\ensuremath{{\cal S}}_2=2$; therefore ${\cal N}=4$ and
$\Phi=2\varphi_2-\varphi_1$\,. Starting from the DM of a fully relaxed system before the excitation,
\begin{equation}
\rho(0_-)=|0 \rangle\langle 0|\,,
\label{equ:DM0}
\end{equation}
where $|0\rangle$ is its absolute ground state and $0_-$ is a negative infinitesimal, we consider below the effect on the DM of the two pulses both arriving at $t=0$ and having pulse areas $E_1$ and $E_2$, focusing on the two limiting cases mentioned above.
\subsubsection*{Case 1: $E_1$ is small, $E_2$ is arbitrary.}
While we are assuming that the pulses arrive simultaneously at $t=0$, it is convenient to consider first the effect of the smaller pulse. Using the general Eq.\,(10) with the QD exciton indices dropped for brevity and the unexcited DM in the form of \Eq{equ:DM0}, the DM straight after the first pulse takes the form
$$ \rho_{kk'}(0)= \left[\hat{X}(E_1) \rho(0_-)\right]_{kk'}=e^{i\varphi_1(k-k')} C_{k0}(|E_1|) C^\ast_{k'0}(|E_1|)\,.$$
From \Eq{equ:phase} it follows that $k-k'=\ensuremath{{\cal S}}_1$. Then, concentrating on the lowest-order response, we find the following two options for $k$ and $k'$:
$$ \begin{array}{cllllll}
(i)&k=\ensuremath{{\cal S}}_1 &{\rm and} & k'=0 && {\rm for} & \ensuremath{{\cal S}}_1\geqslant 0\,,\\
(ii)&k=0 &{\rm and} & k'=-\ensuremath{{\cal S}}_1 && {\rm for} & \ensuremath{{\cal S}}_1\leqslant 0\,.
\end{array}$$
Since the pulse area $E_2$ of the second pulse can be arbitrarily large, we take into account its effect rigorously in all orders, which results in the following DM after the pulses:
\begin{eqnarray} \rho_{nn'}(0_+)= \left[\hat{X}(E_2) \rho(0)\right]_{nn'}&=&e^{i\varphi_2(n-n'-\ensuremath{{\cal S}}_1)} C_{nk}(|E_2|) C^\ast_{n'k'}(|E_2|) \rho_{kk'}(0)
\nonumber\\
&=& e^{i\Phi} C_{nk}(|E_2|) C^\ast_{n'k'}(|E_2|) C_{k0}(|E_1|) C^\ast_{k'0}(|E_1|) \,,
\nonumber\end{eqnarray}
according to Eq.\,(10). From \Eq{equ:phase} we find $ n-n'=\ensuremath{{\cal S}}_1+\ensuremath{{\cal S}}_2=\ensuremath{{\cal S}}$, and then from Eq.\,(11) obtain
\begin{equation} \rho_{n+\ensuremath{{\cal S}} N,n}(0_+)=e^{i\Phi} i^{\ensuremath{{\cal S}}} |E_1|^{|\ensuremath{{\cal S}}_1|} |E_2|^{|\ensuremath{{\cal S}}_2|} R_n\,,
\label{equ:rhonn}\end{equation}
where
\begin{equation} R_n=\frac{\lambda^n e^{-\lambda}}{\sqrt{n!(n+\ensuremath{{\cal S}})!}} \tilde{R}_n
\label{equ:Rn}\end{equation}
with $\lambda=|E_2|^2$ and
\begin{equation}
\tilde{R}_n = \left\{
\begin{array}{rlll}
\medskip
\lambda^m L_{\ensuremath{{\cal S}}_1}^{n+\ensuremath{{\cal S}}_2}(\lambda) && {\rm for} & \ensuremath{{\cal S}}_1\geqslant 0\,,\\
\lambda^{m+\ensuremath{{\cal S}}_1} L_{-\ensuremath{{\cal S}}_1}^{n+\ensuremath{{\cal S}}_2}(\lambda) && {\rm for} & \ensuremath{{\cal S}}_1\leqslant 0\,.
\end{array}
\right.
\label{equ:Rtilde}
\end{equation}
Here $L^k_p(x)$ are the Laguerre polynomials, and
\begin{equation} m = \left\{\begin{array}{clll}\medskip
0 && {\rm for} & \ensuremath{{\cal S}}_2\geqslant 0\,,\\
\ensuremath{{\cal S}}_2&& {\rm for} & \ensuremath{{\cal S}}_2\leqslant 0\,.\end{array}\right.
\label{equ:m}\end{equation}
For ${\cal N}$WM, we have in particular $\ensuremath{{\cal S}}_1=1-{\cal N}/2$ and $\ensuremath{{\cal S}}_2={\cal N}/2$, so that $\ensuremath{{\cal S}}=1$, in accordance with \Eq{equ:N}, and \Eqss{equ:rhonn}{equ:m} reduce to
\begin{equation} \rho_{n+1,n}(0_+)=e^{i\Phi} i |E_1|^{|\ensuremath{{\cal S}}_1|} |E_2|^{|\ensuremath{{\cal S}}_2|} R_n\,,
\label{equ:rhonnN}\end{equation}
where
\begin{equation} R_n=\frac{\lambda^n e^{-\lambda}}{n!\sqrt{n+1}} \tilde{R}_n
\label{equ:RnN}\end{equation}
and
\begin{equation} \tilde{R}_n= \lambda^{1-{\cal N}/2} L_{{\cal N}/2-1}^{n+1-{\cal N}/2}(\lambda)\,.
\label{equ:NWM1} \end{equation}
The last equation simplifies to
\begin{equation}
\tilde{R}_n= \frac{1}{\lambda} L_{1}^{n-1}(\lambda)
\label{equ:FWM1}
\end{equation}
for the standard FWM, in which case ${\cal N}=4$, $\ensuremath{{\cal S}}_1=-1$, and $\ensuremath{{\cal S}}_2=2$.
\subsubsection*{Case 2: $E_2$ is small, $E_1$ is arbitrary.}
To address this case we use the fact that for simultaneous pulses, the pulse operators $\hat{X}(E_1)$ and $\hat{X}(E_2)$ commute:
\begin{equation}
\hat{X}(E_1)\hat{X}(E_2)\rho=\hat{X}(E_2)\hat{X}(E_1)\rho\,.
\end{equation}
Technically, this is easy to see from the definition of $\hat{X}(E)$, Eq.\,(7), and the fact that the commutator
$[E_1\ensuremath{a^{\dagger}} +E_1^\ast a,E_2\ensuremath{a^{\dagger}} +E_2^\ast a]= E_1^\ast E_2-E_1 E_2^\ast $ is a constant.
Physically, this means that for an infinitesimal delay between the pulses exciting the cavity, the time-ordering of the pulses does not matter. It would matter, however, if the QD-cavity system was excited via the QD, since the QD exciton is described by Fermionic operators obeying anti-commutation relations, and therefore the corresponding pulse operators do not commute.
The result obtained for {\it Case 1}, given by \Eqss{equ:rhonn}{equ:m}, can therefore be used for {\it Case 2} by swapping $E_1\leftrightarrow E_2$ and $\ensuremath{{\cal S}}_1\leftrightarrow\ensuremath{{\cal S}}_2$. Then the DM after the pulses is described by the same \Eqs{equ:rhonn}{equ:Rn} with $\lambda=|E_1|^2$ and $\tilde{R}_n$ now given by
\begin{equation}
\tilde{R}_n = \left\{\begin{array}{rlll}\medskip
\lambda^m L_{\ensuremath{{\cal S}}_2}^{n+\ensuremath{{\cal S}}_1}(\lambda) && {\rm for} & \ensuremath{{\cal S}}_2\geqslant 0\,,\\
\lambda^{m+\ensuremath{{\cal S}}_2} L_{-\ensuremath{{\cal S}}_2}^{n+\ensuremath{{\cal S}}_1}(\lambda) && {\rm for} & \ensuremath{{\cal S}}_2\leqslant 0\,,\end{array}\right.
\label{equ:Rtilde2}\end{equation}
where
\begin{equation}
m = \left\{\begin{array}{clll}\medskip
0 && {\rm for} & \ensuremath{{\cal S}}_1\geqslant 0\,,\\
\ensuremath{{\cal S}}_1&& {\rm for} & \ensuremath{{\cal S}}_1\leqslant 0\,.\end{array}\right.
\label{equ:m2}\end{equation}
Again, for ${\cal N}$WM, \Eqs{equ:rhonnN}{equ:RnN} remain the same as in {\it Case 1}, while
\Eqs{equ:Rtilde2}{equ:m2} simplify to
\begin{equation}
\tilde{R}_n= \lambda^{1-{\cal N}/2} L_{{\cal N}/2}^{n+1-{\cal N}/2}(\lambda)\,,
\label{equ:NWM2}
\end{equation}
which reduces for the standard FWM with $\ensuremath{{\cal S}}_1=-1$ and $\ensuremath{{\cal S}}_2=2$ to
\begin{equation} \tilde{R}_n= \frac{1}{\lambda} L_{2}^{n-1}(\lambda)\,.
\label{equ:FWM2}\end{equation}
Note that in the ${\cal N}$WM, the difference between the two {\it Cases} is only in the lower index of the Laguerre polynomials; compare \Eqs{equ:NWM1}{equ:NWM2} and similarly \Eqs{equ:FWM1}{equ:FWM2}.
\subsection{Coherent dynamics after the pulses}
Now, omitting the factor
\begin{equation} e^{i\Phi} i^{\ensuremath{{\cal S}}} |E_1|^{|\ensuremath{{\cal S}}_1|} |E_2|^{|\ensuremath{{\cal S}}_2|}
\label{equ:factor} \end{equation}
in \Eq{equ:rhonn}, which is common for all elements of the DM, we write the initial DM straight after the pulses in vector form,
$$ \vec{\rho}^{\:(0)}(0_+)=\begin{bmatrix}
0\\
R_0 \end{bmatrix}
, \ \ \
\vec{\rho}^{\:(n)}(0_+)= \begin{bmatrix}
0\\
0\\
0\\
R_n \end{bmatrix}\,,$$
where $n\geqslant 1$ and the exciton components have been restored; for definition of the basis, see \Eqss{equ:rhoN}{equ:vecs} in \Sec{Sec:Lindblad}.
In the limit of small damping of both the QD exciton and the cavity mode, $\gamma_X,\gamma_C\ll g$, one can neglect the off-diagonal blocks $M_{n,n+1}$ of the Lindblad matrix $\hat{L}$, see \Eq{equ:Mnn1}. The remaining diagonal blocks of $\hat{L}$ are diagonalized according to \Eq{equ:SLS},
\begin{equation} L_n=S_n\Omega_n S_n^{\rm T}\,,
\label{equ:SOS}\end{equation}
where matrices $S_n$ and $\Omega_n$ are given, respectively, by \Eqs{equ:SAB}{equ:OmegaN}. The time evolution is then described as
$$ \vec{\rho}^{\:(n)}(t) = e^{-iL_n t}\vec{\rho}^{\:(n)}(0_+)=S_ne^{-i\Omega_n t}S_n^{\rm T}\vec{\rho}^{\:(n)}(0_+)\,.$$
Using the general form Eq.\,(9) of the optical polarization, we find
$$ P(t)= \sum_{n=0}^\infty \vec{a}^{\:(n)}\cdot \vec{\rho}^{\:(n)}(t)\,,$$
where $\vec{a}^{\:(n)}$ is the vector representation of the photon annihilation operator $a$:
$$ \vec{a}^{\:(0)}= \begin{bmatrix}
0\\
1 \end{bmatrix}
, \ \ \
\vec{a}^{\:(n)}= \begin{bmatrix}
\sqrt{n}\\
0\\
0\\
\sqrt{n+1} \end{bmatrix}\,,$$
in accordance with the basis defined in \Eqs{equ:rhoN}{equ:rho0}. Now, using the explicit form of the matrices $S_n$ and $\Omega_n$ provided in \Eqs{equ:AB}{equ:SAB}, and \Eqs{equ:four}{equ:delpm}, respectively, we find
$$P(t)= e^{-i\bar{\Omega}t} \sum_{\sigma=i,o} \sum_{s=\pm} P_{\sigma s}(t)\,,$$
where
\begin{equation} P_{\sigma s}(t)= \sum_{n=0}^\infty R_n C_n^{\sigma s} e^{-i(s\Delta_n^\sigma -i\gamma_n) t}\,.
\label{equ:Pss}\end{equation}
The frequencies $\Delta_n^i$ and $\Delta_n^o$ of, respectively, the inner and outer transitions are given by \Eq{equ:delpm}, and the damping $\gamma_n$ by \Eq{equ:gam}.
Using the matrices $A_{n+1}$ and $B_n^\ast$ [\Eq{equ:AB}] forming the transformation matrix $S_n$, we find the coefficients $C_n^{\sigma s}$ for an arbitrary detuning:
\begin{eqnarray}
C_n^{i+}&=&\alpha_n^\ast \alpha_{n+1} \left(\alpha_n^\ast \alpha_{n+1} \sqrt{n+1} +\beta_n^\ast \beta_{n+1}\sqrt{n}\right)\,,\nonumber \\
C_n^{i-}&=&\beta_n^\ast \beta_{n+1} \left(\beta_n^\ast \beta_{n+1} \sqrt{n+1} +\alpha_n^\ast \alpha_{n+1}\sqrt{n}\right)\,,\nonumber \\
C_n^{o+}&=&\beta_n^\ast \alpha_{n+1} \left(\beta_n^\ast \alpha_{n+1} \sqrt{n+1} -\alpha_n^\ast \beta_{n+1}\sqrt{n}\right)\,,\nonumber \\
C_n^{o-}&=&\alpha_n^\ast \beta_{n+1} \left(\alpha_n^\ast \beta_{n+1} \sqrt{n+1} -\beta_n^\ast \alpha_{n+1}\sqrt{n}\right)\,.
\label{equ:CC}
\end{eqnarray}
For a detuning much smaller than the energy splitting of the $n$-th rung, $|\delta|\ll \sqrt{n}g$,
which is relevant to the case of large excitation pulse area treated below, they
take approximate forms
\begin{equation}
C_n^{i\pm}=\frac{1}{4} \left(\sqrt{n+1} +\sqrt{n}\right) \quad \mbox{and} \quad
C_n^{o\pm}=\frac{1}{4} \left(\sqrt{n+1} -\sqrt{n}\right)
\label{equ:Cio}\end{equation}
for $n\geqslant1$, as well as $C_0^{i+}=C_0^{o-}=1/2$ and $C_0^{o+}=C_0^{i-}=0$, using $\alpha_n=\beta_n=1/\sqrt{2}$ for $n\geqslant1$, and $\alpha_0=1$ and $\beta_0=0$. For zero detuning, $\delta=0$, \Eq{equ:Cio} is exact. The general property $C_n^{\sigma -}= (C_n^{\sigma +})^\ast$ is fulfilled for \Eq{equ:CC} only approximately but becomes strict at zero detuning, since all the coefficients in \Eq{equ:Cio} are real.
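As a numerical sanity check (a Python sketch of ours with illustrative parameters), one can evaluate, e.g., $C_n^{i+}$ from \Eq{equ:CC} at a small complex detuning and compare it with the limit \Eq{equ:Cio}:
\begin{verbatim}
import numpy as np

def alpha_beta(n, g, delta):
    D = np.sqrt((delta / 2) ** 2 + n * g ** 2)
    Dm = np.sqrt((D - delta / 2) ** 2 + n * g ** 2)
    return (D - delta / 2) / Dm, np.sqrt(n) * g / Dm

g, delta, n = 1.0, 0.05 - 0.02j, 20
an, bn = alpha_beta(n, g, delta)
an1, bn1 = alpha_beta(n + 1, g, delta)

C_i_plus = np.conj(an) * an1 * (np.conj(an) * an1 * np.sqrt(n + 1)
                                + np.conj(bn) * bn1 * np.sqrt(n))
print(C_i_plus)                           # close to the limit below
print((np.sqrt(n + 1) + np.sqrt(n)) / 4)  # small-detuning limit
\end{verbatim}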
\subsection{Large pulse area}
In the limit of large pulse area ($\lambda=|E_2|^2\gg 1$ in {\it Case 1} or $\lambda=|E_1|^2\gg 1$ in {\it Case 2}), the excited system contains a large number of photons, $n_{\rm ph}\approx \lambda\gg1$. The Poisson distribution in \Eq{equ:RnN} then becomes Gaussian, with the mean rung number $\langle n \rangle = \lambda$ and the variance $\langle n^2 \rangle - \langle n \rangle ^2 = \lambda$. To achieve this limit mathematically, we replace in \Eq{equ:RnN}
$$ n!\approx \sqrt{2\pi n} e^{-n} n^n$$
and, introducing a small quantity $\varepsilon\ll 1$, which is defined in such a way that
$ n=\lambda(1+\varepsilon)\,, $
we further approximate
\begin{eqnarray} e^{-n} n^n&=& e^{-n} \lambda^n (1+\varepsilon)^{\lambda(1+\varepsilon)}=
e^{-n} \lambda^n e^{\lambda(1+\varepsilon)\ln(1+\varepsilon)}\nonumber\\\nonumber
&\approx & e^{-n} \lambda^n e^{\lambda(1+\varepsilon)(\varepsilon- \varepsilon^2/2)}
\approx e^{-\lambda-\lambda\varepsilon} \lambda^n e^{\lambda(\varepsilon+ \varepsilon^2/2)}
=e^{-\lambda} \lambda^n e^{\lambda \varepsilon^2/2}\,.\end{eqnarray}
Equation (\ref{equ:RnN}) then becomes
\begin{equation} R_n=\frac{\lambda^n e^{-\lambda}}{n!\sqrt{n+1}} \tilde{R}_n\approx
\frac{\lambda^n e^{-\lambda}}{\sqrt{2\pi \lambda}e^{-\lambda} \lambda^n e^{\lambda \varepsilon^2/2}\sqrt{\lambda}} \tilde{R}_n=\frac{e^{-z^2}}{\sqrt{2\pi} \lambda}\tilde{R}_n\,,
\label{equ:RnNa} \end{equation}
where we have introduced for convenience a new variable
\begin{equation} z=\frac{n-\lambda}{\sqrt{2\lambda}}\,,
\label{equ:zdef}\end{equation}
such that $\langle z \rangle =0$ and $\langle z^2 \rangle =1/2$. The Laguerre polynomials in \Eqs{equ:NWM1}{equ:NWM2} for $\tilde{R}_n$ are approximated as
\begin{equation} L^{n-m}_m(\lambda) \approx \frac{1}{m!} \left(\frac{\lambda}{2}\right)^{\frac{m}{2}} H_m(z)\,,
\label{equ:LH}\end{equation}
where $ H_m(z)$ are Hermite polynomials. To prove \Eq{equ:LH}, we use the recursive relation~\cite{Gradshtein}
\begin{equation} m L^{n-m}_m(\lambda)=(n-\lambda) L^{n-m+1}_{m-1}(\lambda) -n L^{n-m+2}_{m-2}(\lambda)+n L^{n-m+2}_{m-3}(\lambda)\,.
\label{equ:Lrec}\end{equation}
The first few polynomials in this sequence have the following form:
\begin{eqnarray}
L^{n}_{0}(\lambda) &=&1\,, \nonumber\\
L^{n-1}_{1}(\lambda) &=&n-\lambda= \left(\frac{\lambda}{2}\right)^{\frac{1}{2}} 2z \,, \nonumber\\
L^{n-2}_{2}(\lambda) &=&\frac{1}{2} \left(-\lambda+(n-\lambda)^2\right)= \frac{1}{2}\,\frac{\lambda}{2}(4z^2-2)\,,
\label{equ:L123}\end{eqnarray}
demonstrating the strict validity of \Eq{equ:LH} for $m=0$, 1, and 2. To prove \Eq{equ:LH} for higher $m$, we note that $L^{n-m}_m(\lambda) \sim \lambda^{\frac{m}{2}}$, which is clear from \Eq{equ:L123} and the recursive formula \Eq{equ:Lrec}. In fact, all terms in \Eq{equ:Lrec} except the last one are of order $\lambda^{\frac{m}{2}}$, while the last term is of order $\lambda^{\frac{m-1}{2}}$ and thus can be neglected for large $\lambda$. For the same reason, $L^{n-m}_m(\lambda)\approx L^{n+1-m}_m(\lambda)$, so that \Eq{equ:LH} can be used for both {\it Cases} in the ${\cal N}$WM, described by \Eqs{equ:NWM1}{equ:NWM2}. Finally, substituting \Eq{equ:LH} into \Eq{equ:Lrec} and dropping the last term in \Eq{equ:Lrec}, in accordance with the above discussion, results in a recursive relation
\begin{equation} H_m(z)=2z H_{m-1}(z) - 2(m-1) H_{m-2}(z) \,,
\label{equ:Hrec}\end{equation}
which generates the Hermite polynomials~\cite{Gradshtein}, starting from $H_0(z)=1$ and $H_1(z)=2z$. Note that the latter are the two lowest-order polynomials which appear in \Eq{equ:L123}.
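The approximation \Eq{equ:LH} can also be tested directly (a Python sketch of ours using SciPy; the values of $\lambda$, $m$, and $n$ are illustrative):
\begin{verbatim}
import numpy as np
from scipy.special import eval_genlaguerre, eval_hermite, factorial

lam, m = 400.0, 4
for n in (380, 400, 420):
    z = (n - lam) / np.sqrt(2 * lam)
    lhs = eval_genlaguerre(m, n - m, lam)
    rhs = (lam / 2) ** (m / 2) * eval_hermite(m, z) / factorial(m)
    print(n, lhs / rhs)  # ratios approach 1 as lam grows
\end{verbatim}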
We further approximate the eigenfrequencies $\Delta_n^\sigma$, the damping $\gamma_n$, and the transition amplitudes $C_n^{\sigma s}$ in \Eq{equ:Pss} for large excitation pulse area ($\lambda\gg1$):
\begin{align}
&\Delta_n^o\approx 2\sqrt{\lambda}g+\sqrt{2}g z\,,
&&\Delta_n^i\approx \frac{g}{2\sqrt{\lambda}}-\frac{g}{2\sqrt{2}\lambda} z\,,\nonumber\\
&\gamma_n\approx(2\lambda+1)\gamma+2\sqrt{2\lambda}\gamma z\,,\nonumber\\
&C_n^{o \pm} \approx \frac{1}{8\sqrt{\lambda}}\,,
&&C_n^{i \pm} \approx \frac{\sqrt{\lambda}}{2}\,,
\label{equ:Capp}
\end{align}
with $z$ defined by \Eq{equ:zdef}.
Again, the approximation is valid for relatively small ($|\delta|\ll \sqrt{\lambda} g$) or zero detuning ($\delta=0$, so that $\gamma_X=\gamma_C=\gamma$).
Finally, switching in \Eq{equ:Pss} from summation to integration,
\begin{equation}
\sum_{n=0}^\infty \to \sqrt{2\lambda} \int_{-\infty}^\infty dz\,,
\end{equation}
and using the approximations \Eqsss{equ:RnNa}{equ:LH}{equ:Capp}, we obtain
\begin{equation}
P_{\sigma s}(t)\approx \frac{i^m}{2} A^{(m)}_\sigma e^{-i\omega_{\sigma s} t}
\frac{1}{\sqrt{\pi}} \int_{-\infty}^\infty dz e^{-z^2} e^{-i\gamma_{\sigma s} tz} H_m(z)\,,
\label{equ:Pss2}
\end{equation}
where
\begin{align}
&\omega_{o s}= s 2\sqrt{\lambda}g -i(2\lambda+1)\gamma\,,
&&\omega_{i s}=s\frac{g}{2\sqrt{\lambda}}-i(2\lambda+1)\gamma\,,\nonumber\\
&\gamma_{o s}= s\sqrt{2}g -i 2\sqrt{2\lambda }\gamma\,,
&&\gamma_{i s}= -s\frac{g}{2\sqrt{2}\lambda} -i2\sqrt{2\lambda }\gamma\,,\nonumber\\
&A^{(m)}_o =\frac{(-i)^m}{4 m! (\sqrt{2\lambda})^m}\,,
&&A^{(m)}_i =4\lambda A^{(m)}_o \,,
\label{equ:Adef}
\end{align}
and $s=\pm1$. The amplitudes $A^{(m)}_\sigma$ of the ${\cal N}$WM polarization are given in \Eq{equ:Adef} for {\it Case 2}, for which $m={\cal N}/2$. Note, however, that \Eqs{equ:Pss2}{equ:Adef} also describe the ${\cal N}$WM polarization in {\it Case 1}, provided that all $P_{\sigma s}(t)$ are divided by $\lambda$ and $m={\cal N}/2-1$ is used.
Now, performing the integration in \Eq{equ:Pss2} we find
\begin{equation} P_{\sigma s}(t)\approx \frac{1}{2} A^{(m)}_\sigma(\gamma_{\sigma s} t)^m \exp\left\{{-i\omega_{\sigma s} t}-(\gamma_{\sigma s} t)^2/4\right\} \,,
\label{equ:Pss3}\end{equation}
using the analytic integral
\begin{eqnarray} I_m(p)&=&\int_{-\infty}^\infty e^{ipz}H_m(z) e^{-z^2} dz \nonumber\\
&=&\int_{-\infty}^\infty e^{ipz}\left[2z H_{m-1}(z) - 2(m-1) H_{m-2}(z)\right] e^{-z^2} dz \nonumber\\
&=&\int_{-\infty}^\infty e^{ipz}\left[ip H_{m-1}(z) + H'_{m-1}(z)- 2(m-1) H_{m-2}(z)\right] e^{-z^2} dz \nonumber\\
&=& ip I_{m-1}=(ip)^m\sqrt{\pi} e^{-p^2/4}\,.
\label{equ:Im}\end{eqnarray}
Note that in deriving \Eq{equ:Im} we have used the recursive relation \Eq{equ:Hrec}, integration by parts, the property of Hermite polynomials
$$ H'_m(z)=2m H_{m-1}(z)\,,$$
where the prime indicates the derivative with respect to the argument, and the Fourier transform of the Gaussian function
$$ I_0(p)=\int_{-\infty}^\infty e^{ipz} e^{-z^2} dz =\sqrt{\pi} e^{-p^2/4}\,.$$
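The closed form of $I_m(p)$ in \Eq{equ:Im} can also be verified directly by quadrature; a minimal sketch, assuming SciPy is available:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_hermite

def I_m(m, p):
    # real and imaginary parts of the defining integral, done separately
    f = lambda z, part: part(p*z)*eval_hermite(m, z)*np.exp(-z**2)
    re = quad(f, -np.inf, np.inf, args=(np.cos,))[0]
    im = quad(f, -np.inf, np.inf, args=(np.sin,))[0]
    return re + 1j*im

p = 1.3
for m in range(5):
    print(m, I_m(m, p), (1j*p)**m*np.sqrt(np.pi)*np.exp(-p**2/4))
\end{verbatim}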
Finally, to obtain the ${\cal N}$WM spectrum, using $\bar{\Omega}$ as the zero of frequency for convenience, we Fourier transform the time-dependent optical polarization:
\begin{eqnarray}
\tilde{P}_{\sigma s}(\omega)&= &\int_0^\infty e^{i\omega t} P_{\sigma s}(t) dt\nonumber\\
&=&A^{(m)}_\sigma \frac{1}{2} \int_0^\infty (\gamma_{\sigma s} t)^m \exp\left\{{i(\omega-\omega_{\sigma s}) t}-(\gamma_{\sigma s} t)^2/4\right\} dt\nonumber\\
&=& \frac{A^{(m)}_\sigma}{\gamma_{\sigma s}} w_m\left(\frac{\omega- \omega_{\sigma s}}{\gamma_{\sigma s}}\right)\,,
\label{equ:Pw}
\end{eqnarray}
for $|\arg(\gamma_{\sigma s})|<\pi/4$. Otherwise, $\gamma_{\sigma s}$ must be replaced with $-\gamma_{\sigma s}$ and a sign factor $(-1)^m$ added; see below for more details. This is the case for the $o-$ and $i+$ transitions, for which Re\,$\gamma_{o-}<0$ and Re\,$\gamma_{i+}<0$. However, this can be conveniently dealt with by using the spectral symmetry:
\begin{equation} \tilde{P}(\omega)=\sum_{\sigma=i,o} \sum_{s=\pm} \tilde{P}_{\sigma s}(\omega)=\bar{P}(\omega)+\bar{P}^\ast(-\omega)\,,
\label{equ:Pw0}\end{equation}
where
\begin{equation} \bar{P}(\omega)=\tilde{P}_{o+}(\omega)+\tilde{P}_{i-}(\omega)=
A^{(m)}_o\left[\frac{1}{\gamma_{o+}} w_m\left(\frac{\omega- \omega_{o+}}{\gamma_{o+}}\right)
+\frac{4\lambda}{\gamma_{i-}} w_m\left(\frac{\omega- \omega_{i-}}{\gamma_{i-}}\right) \right]\,.
\label{equ:Pw1}\end{equation}
The function $w_m(z)$ in \Eqs{equ:Pw}{equ:Pw1} is defined as
\begin{equation} w_m(z)= \frac{1}{2} \int_0^\infty t^m e^{izt}e^{-t^2/4} dt\,,
\label{equ:wn}\end{equation}
and can be expressed in terms of the Faddeeva function, $w(z)=2w_0(z)/\sqrt{\pi}$, via its $m$-th derivative
$$ w_m(z)=(-i)^m\frac{d^m}{dz^m} w_0(z)\,. $$
It is, however, more practical to use a recursive formula, obtained by integrating \Eq{equ:wn} by parts, which gives
\begin{equation} w_m(z)=2izw_{m-1}(z) + 2(m-1) w_{m-2}(z)
\label{equ:wn2}\end{equation}
for $m\geqslant 2$,
$$ w_1(z)=1+2izw_0(z)$$
for $m=1$, and
\begin{equation} w_0(z)=G(z)+iD(z) =\frac{\sqrt{\pi}}{2} w(z)
\label{equ:w0}\end{equation}
for $m=0$. Here, $G(z)$ is the Gaussian function,
$$ G(z)= \frac{\sqrt{\pi}}{2} e^{-z^2}\,,$$
$D(z)$ is the standard Dawson's integral,
$$ D(z)= \frac{1}{2} \int_0^\infty e^{-t^2/4}\sin(zt) dt = e^{-z^2} \int_0^z e^{t^2} dt\,,$$
and $w(z)$ is the Faddeeva function. The latter is well known through its real part, which describes a Voigt profile for complex $z$ (a Gaussian profile for real $z$).
The integral $w_m(z)$ in \Eq{equ:wn} can also be written explicitly using the Faddeeva function, Hermite polynomials and associated polynomials $Q_m(z)$ satisfying the recursive relation \Eq{equ:Hrec} of Hermite polynomials,
\begin{equation} Q_m(z)=2z Q_{m-1}(z) - 2(m-1) Q_{m-2}(z)\,,
\label{equ:Qrec}\end{equation}
but starting from $Q_1(z)=1$ and $Q_2(z)=2z$ instead. The functions $w_m(z)$ then take the form
$$ w_m(z)=i^m H_m(z) w_0(z) +i^{m-1} Q_m(z)$$
with $w_0(z)$ given by \Eq{equ:w0} and polynomials
\begin{align}
&H_0(z)=1\,, && Q_0(z)=0\,, \nonumber\\
&H_1(z)=2z\,, && Q_1(z)=1\,, \nonumber\\
&H_2(z)=4z^2-2\,, && Q_2(z)=2z\,, \nonumber\\
&H_3(z)=8z^3-12z\,, && Q_3(z)=4z^2-4\,, \nonumber\\
&H_4(z)=16z^4-48z^2+12\,, && Q_4(z)=8z^3-20z\,, \nonumber\\
&H_5(z)=32z^5-160z^3+120z\,, && Q_5(z)=16z^4-72z^2+32\,,
\nonumber\end{align}
listed above for the first few $m$.
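In numerical work, $w_m(z)$ is conveniently generated by seeding the recursion \Eq{equ:wn2} with a standard Faddeeva routine; the sketch below (ours, assuming SciPy's \texttt{wofz}) also cross-checks the result against direct quadrature of \Eq{equ:wn} for real $z$:
\begin{verbatim}
import numpy as np
from scipy.special import wofz
from scipy.integrate import quad

def w_m(m, z):
    # recursion (wn2), seeded with w_0 = sqrt(pi)/2 * w(z), w_1 = 1 + 2iz w_0
    w = [np.sqrt(np.pi)/2*wofz(z), 0]
    w[1] = 1 + 2j*z*w[0]
    for k in range(2, m + 1):
        w.append(2j*z*w[k-1] + 2*(k-1)*w[k-2])
    return w[m]

def w_m_quad(m, z):
    # direct quadrature of the defining integral (real z assumed here)
    f = lambda t, part: 0.5*t**m*np.exp(-t**2/4)*part(z*t)
    return (quad(f, 0, np.inf, args=(np.cos,))[0]
            + 1j*quad(f, 0, np.inf, args=(np.sin,))[0])

z = 0.7
for m in range(5):
    print(m, w_m(m, z), w_m_quad(m, z))
\end{verbatim}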
Note also that we have reduced the integral in \Eq{equ:Pw} to the Faddeeva function in the following way
\begin{eqnarray} \int_0^\infty e^{iat}e^{-(bt)^2/4} dt &=& e^{-(a/b)^2} \int_0^\infty e^{-b^2(t-t_0)^2/4} dt \nonumber\\
&=& e^{-(a/b)^2} \left[ \int_0^{t_0} e^{-b^2t^2/4} dt + \int_0^\infty e^{-b^2 t^2/4} dt \right] \nonumber\\
&=& e^{-(a/b)^2} \left[\frac{2i}{b} \int_0^{a/b} e^{t^2} dt + \frac{\sqrt{\pi}}{b} \right] =\frac{2}{b} w_0(a/b)\,,
\label{equ:b}\end{eqnarray}
where $t_0=2ia/b^2$. While the initial integral is invariant with respect to a sign change of $b$ and only requires Re\,$(b^2)>0$ for convergence, the Gaussian term in the last line of \Eq{equ:b}, containing the factor ${\sqrt{\pi}}/{b}$, is valid only if $|\arg(b)|<\pi/4$. This leads to the requirement introduced above that $|\arg(\gamma_{\sigma s})|<\pi/4$; otherwise $\gamma_{\sigma s}$ should be taken with the opposite sign.
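The reduction \Eq{equ:b} is likewise easy to confirm numerically for a generic complex $b$ with $|\arg(b)|<\pi/4$; a minimal sketch, assuming SciPy's \texttt{wofz}:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import wofz

a, b = 1.7, 1.0 - 0.3j            # |arg(b)| < pi/4, Re(b^2) > 0
f = lambda t: np.exp(1j*a*t - (b*t)**2/4)
lhs = (quad(lambda t: f(t).real, 0, np.inf)[0]
       + 1j*quad(lambda t: f(t).imag, 0, np.inf)[0])
rhs = (2/b)*(np.sqrt(np.pi)/2)*wofz(a/b)   # (2/b) w_0(a/b)
print(lhs, rhs)
\end{verbatim}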
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{SM_FWMcomp}
\caption{ Exact FWM spectrum (black and green lines) for $|E_1|=6$, $|E_2|=0.001$, zero detuning $\delta=0$, so that $\gamma_C=\gamma_X=\gamma$, with the values of $\gamma$ as given, in comparison with the analytic approximation \Eqs{equ:Pw0}{equ:Pw1} (red lines), and the full sum \Eq{equ:Pss} (blue lines). Left and right panels show the spectral regions of, respectively, inner and outer transitions (for positive frequencies). The spectra are shown without the factor \Eq{equ:factor}. }
\label{fig:FWMcomp}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.85\columnwidth]{SM_NWMcomparison.pdf}
\caption{ Analytic approximation \Eqs{equ:Pw0}{equ:Pw1} (red lines) for the outer-transition sideband of the ${\cal N}$WM spectrum with ${\cal N}=2,\,4,\,6,\,8,$ and 10, $|E_1|=10$, $|E_2|=0.001$, zero detuning, and $\gamma=\gamma_C=\gamma_X=0$, in comparison with the exact calculation with $\gamma=0.001g$ (blue lines). The horizontal bars show the spectral linewidth of $4g$. The left, middle, and right panels show, respectively, the real part, the imaginary part, and the absolute value of $\tilde P(\omega)$. All spectra are shown without the factor \Eq{equ:factor} and are multiplied by $|E_1|^{{\cal N}/2}$. The 2WM contains the linear response, creating a spectral tail of the inner doublet. The dotted lines show the exact result minus this tail, $10ig/\omega$.}
\label{fig:NWM12}
\end{figure}
Figure~\ref{fig:FWMcomp} compares FWM spectra calculated using the exact solution, the analytic formulas \Eqs{equ:Pw0}{equ:Pw1}, and the sum \Eq{equ:Pss} without converting it to an integral, with coefficients taken in the form of \Eq{equ:Cio}. For a damping of $\gamma=0.001g$, the sideband (right panels) shows the contributions of individual outer transitions, both in the sum and in the full spectrum. These are visible because the difference between the transition frequencies, $g/\sqrt{n}$ [see \Eq{equ:freq}], is larger than the damping $\gamma_n=(2n+1)\gamma$ (here $n\sim 36$). The pattern of oscillations seen in the spectral profile can be understood from the modulation of the Poisson distribution by the Laguerre polynomial $L_2^{n-1}(\lambda)$ specific to this nonlinearity channel, see \Eq{equ:FWM2}. In fact, $L_2^{n-1}(\lambda)$ is a parabola in $n$, which is clearly seen in the amplitude of the oscillations, with nodes at around $\omega/g=11$ and 13. The frequency difference between neighboring inner transitions, $-g/(4n\sqrt{n})$, is in turn much smaller than the damping, so that similar oscillations in the peak of the central band (left panel) are not seen even for a 10 times smaller damping. The analytic approximation (red curves) shows no oscillations, since the conversion of the sum into an integral used in its derivation effectively introduces a continuum of transitions. Interestingly, the analytic approximation agrees somewhat better with the full calculation when it is taken with zero damping instead of the correct $\gamma=0.001g$.
We further look at the spectral profile for higher nonlinearities, concentrating on the outer transitions. Figure~\ref{fig:NWM12} shows the real and imaginary parts, as well as the absolute value, of the ${\cal N}$WM spectrum for all even ${\cal N}$ from 2 to 10. The number of oscillations in the real and imaginary parts grows linearly with ${\cal N}$, and the real part of the ${\cal N}$WM spectrum is very similar to the imaginary part of the $({\cal N}-2)$WM spectrum, a property of the generalized Faddeeva function $w_m$ determining the spectra. The absolute value, however, shows no oscillations and has the same linewidth of around $4g$, essentially independent of ${\cal N}$. The right panels demonstrate excellent agreement between the analytic approximation and the exact calculation for all spectra, apart from the case ${\cal N}=2$, which contains the linear response. Here, an extended spectral tail scaling with the inverse frequency remains in the full calculation, which is not reproduced by the analytic solution. Subtracting this tail, good agreement is found.
\section{FWM power versus pulse area}
\begin{figure}
\centering
\includegraphics[width=0.7\columnwidth]{E12powGcomp}
\caption{Power of FWM response for varying $|E|=|E_2|=|E_1|$, with $\delta=0$ and various $\gamma=\gamma_X=\gamma_C$ as indicated. Inset: the FWM power for $|E|=0.1$ versus $\gamma/g$. The scaling $\propto g/\gamma$ is given as dashed line.}
\label{fig:E12powGcomp}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.7\columnwidth]{E1powGcomp}
\caption{As \Fig{fig:E12powGcomp}, but for varying $|E_1|$, with $|E_2|=0.001$. }
\label{fig:E1powGcomp}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.7\columnwidth]{E2powGcomp}
\caption{As \Fig{fig:E12powGcomp}, but for varying $|E_2|$, with $|E_1|=0.001$.}
\label{fig:E2powGcomp}
\end{figure}
The scaling of the FWM power, $\int_0^\infty |P(t)|^2 dt$, versus pulse area $|E|=|E_2|=|E_1|$ is shown in \Fig{fig:E12powGcomp} for zero detuning and zero delay between the pulses. In the perturbative (i.e. low-excitation) regime, the expected scaling $\propto |E|^6$ is observed, and in the high pulse area regime a saturation of the power is seen. The scaling of the FWM power versus pulse area $|E_1|$ with a fixed $|E_2|=0.001$ is shown in \Fig{fig:E1powGcomp}. In the perturbative regime, the expected scaling $\propto |E_1|^2$ is observed. In the high pulse area regime, a reduction of the power is observed, and for $\gamma/g$ of order one, Rabi oscillations are seen.
The scaling of the FWM power versus pulse area $|E_2|$ with a fixed $|E_1|=0.001$ is shown in \Fig{fig:E2powGcomp}. In the perturbative regime, the expected scaling $\propto |E_2|^4$ is observed.
In the high pulse area regime, the behaviour is somewhat similar to the case of varying $|E_1|$; however, almost no Rabi oscillations are seen, in contrast to the previous case.
\clearpage
\section{More results on the FWM spectra}\label{sec:moreresults}
This section contains a collection of results similar to Fig.\,1 of the main text, investigating the effect of varying system parameters on the transition amplitudes and the FWM polarization, presented to provide the reader with a broader picture of possible responses. We vary, in particular, the exciton and cavity dampings, which are assumed equal, $\gamma_X=\gamma_C$, taking the values $g$, $g/5$, and $g/20$. We also vary the detuning $\delta =\Omega_X-\Omega_C$ between the exciton and cavity-mode transition frequencies: results are shown for $\delta=0$ and $g$. Finally, for each parameter set, we vary the pulse strength $|E_1|$ while keeping $|E_2|$ small, or $|E_2|$ while keeping $|E_1|$ small, or both, using $|E_1|=|E_2|$.
The resulting 18 figures listed in \Tab{tab:sim} show a number of effects on the FWM polarization.
By varying the excitation pulse strength within each figure, we demonstrate the formation of the QMQ for each excitation condition, damping, and detuning. By reducing the damping we reveal a fine structure, both in the inner and outer doublets. By increasing the damping up to the critical value $\gamma_C=g$, we show how the QMQ gradually disappears. This happens not only because of the spectral broadening but also due to population relaxation down the ladder, which reduces the outer doublet splitting. Varying the detuning from 0 to $g$ results in a spectral asymmetry: the low-intensity strong-coupling doublet shifts towards positive frequencies, and the shape of the QMQ changes.
\setlength{\tabcolsep}{10pt}
\begin{table}
\renewcommand*{\arraystretch}{1.1}
\begin{tabular}{l|l|l|l|l}
\hline
Figure & $\delta/g$ & $\gamma_C/g$ & $|E_1|$ & $|E_2|$ \\
\hline
\Fig{fig:e1d0g1} & 0 & 1 & 0-10 & 0.001\\
Fig\,1 & 0 & 1/2 & 0-10 & 0.001\\
\Fig{fig:e1d0g5} & 0 & 1/5 & 0-10 & 0.001\\
\Fig{fig:e1d0g20} & 0 & 1/20 & 0-10 & 0.001\\
\hline
\Fig{fig:e1d1g1} & 1 & 1 & 0-10 & 0.001\\
\Fig{fig:e1d1g5} & 1 & 1/5 & 0-10 & 0.001\\
\Fig{fig:e1d1g20} & 1 & 1/20 & 0-10 & 0.001\\
\hline
\Fig{fig:e2d0g1} & 0 & 1 & 0.001 & 0-10\\
\Fig{fig:e2d0g5} & 0 & 1/5 & 0.001 & 0-10\\
\Fig{fig:e2d0g20} & 0 & 1/20 & 0.001 & 0-10\\
\hline
\Fig{fig:e2d1g1} & 1 & 1 & 0.001 & 0-10\\
\Fig{fig:e2d1g5} & 1 & 1/5 & 0.001 & 0-10\\
\Fig{fig:e2d1g20} & 1 & 1/20 & 0.001 & 0-10\\
\hline
\Fig{fig:e12d0g1} & 0 & 1 & 0-10 & 0-10\\
\Fig{fig:e12d0g5} & 0 & 1/5 & 0-10 & 0-10\\
\Fig{fig:e12d0g20} & 0 & 1/20 & 0-10 & 0-10\\
\hline
\Fig{fig:e12d1g1} & 1 & 1 & 0-10 & 0-10\\
\Fig{fig:e12d1g5} & 1 & 1/5 & 0-10 & 0-10\\
\Fig{fig:e12d1g20} & 1 & 1/20 & 0-10 & 0-10\\
\hline
\end{tabular}
\caption{Overview of available simulation results.}
\label{tab:sim}
\end{table}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{e1d0g1.pdf}
\caption{As Fig.\,1, with alternate parameters $\delta=0$, $\gamma_C=g$.}
\label{fig:e1d0g1}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{e1d0g5.pdf}
\caption{As Fig.\,1, with alternate parameters $\delta=0$, $\gamma_C=g/5$.}
\label{fig:e1d0g5}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{e1d0g20.pdf}
\caption{As Fig.\,1, with alternate parameters $\delta=0$, $\gamma_C=g/20$.}
\label{fig:e1d0g20}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{e1d1g1.pdf}
\caption{As Fig.\,1, with alternate parameters $\delta=g$, $\gamma_C=g$.}
\label{fig:e1d1g1}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{e1d1g5.pdf}
\caption{As Fig.\,1, with alternate parameters $\delta=g$, $\gamma_C=g/5$.}
\label{fig:e1d1g5}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{e1d1g20.pdf}
\caption{As Fig.\,1, with alternate parameters $\delta=g$, $\gamma_C=g/20$.}
\label{fig:e1d1g20}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{e2d0g1.pdf}
\caption{As Fig.\,1 with alternate parameters $\delta=0$, $\gamma_C=g$, $|E_1|=0.001$ and varying $|E_2|$.}
\label{fig:e2d0g1}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{e2d0g5.pdf}
\caption{As \Fig{fig:e2d0g1} with alternate parameters $\delta=0$, $\gamma_C=g/5$.}
\label{fig:e2d0g5}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{e2d0g20.pdf}
\caption{As \Fig{fig:e2d0g1} with alternate parameters $\delta=0$, $\gamma_C=g/20$.}
\label{fig:e2d0g20}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{e2d1g1.pdf}
\caption{As \Fig{fig:e2d0g1} with alternate parameters $\delta=g$, $\gamma_C=g$.}
\label{fig:e2d1g1}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{e2d1g5.pdf}
\caption{As \Fig{fig:e2d0g1} with alternate parameters $\delta=g$, $\gamma_C=g/5$.}
\label{fig:e2d1g5}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{e2d1g20.pdf}
\caption{As \Fig{fig:e2d0g1} with alternate parameters $\delta=g$, $\gamma_C=g/20$.}
\label{fig:e2d1g20}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{e12d0g1.pdf}
\caption{As Fig.\,1 with alternate parameters $\delta=0$, $\gamma_C=g$, and varying $|E_1|=|E_2|$.}
\label{fig:e12d0g1}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{e12d0g5.pdf}
\caption{As \Fig{fig:e12d0g1} with alternate parameters $\delta=0$, $\gamma_C=g/5$.}
\label{fig:e12d0g5}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{e12d0g20.pdf}
\caption{As \Fig{fig:e12d0g1} with alternate parameters $\delta=0$, $\gamma_C=g/20$.}
\label{fig:e12d0g20}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{e12d1g1.pdf}
\caption{As \Fig{fig:e12d0g1} with alternate parameters $\delta=g$, $\gamma_C=g$.}
\label{fig:e12d1g1}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{e12d1g5.pdf}
\caption{As \Fig{fig:e12d0g1} with alternate parameters $\delta=g$, $\gamma_C=g/5$.}
\label{fig:e12d1g5}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{e12d1g20.pdf}
\caption{As \Fig{fig:e12d0g1} with alternate parameters $\delta=g$, $\gamma_C=g/20$.}
\label{fig:e12d1g20}
\end{figure}
\clearpage
\section{Numerical Convergence and use of multiprecision}\label{sec:convergence}
In this section we discuss the numerical convergence and the need for multiprecision arithmetic when diagonalizing the Lindblad matrix for our system.
The condition number of a matrix $A$ is defined as ${\rm cond}_p(A)=\norm{A}_p\norm{A^{-1}}_p$ for any $p$-norm $\norm{\cdot}_p$, and provides an estimate of the precision lost when the matrix is used in numerical calculations. Specifically, $\log_{2(10)}({\rm cond}(A))$ estimates the number of bits (digits) of numerical precision additionally required in the calculation with respect to the precision of the final result. We use the 1-norm here, which is the maximum over the columns of a square matrix of the sum of the absolute values of the matrix elements in each column.
We calculate the condition number of the eigenvectors of the Lindblad matrix, ${\rm cond}_1(U)=\norm{U}_1\norm{V}_1$ for varying rung truncation using the analytic form of $U$ and its inverse $V$ (see \Sec{Sec:Lindblad}). Our code uses 1000 bits (301 digits) of precision for all data presented in the main text and in this supplement.
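As an illustration of this procedure (the function names below are ours, not those of the production code), the condition number can be evaluated at this precision with \texttt{mpmath}:
\begin{verbatim}
from mpmath import mp, mnorm

mp.prec = 1000                  # 1000 bits, about 301 decimal digits

def cond1(U, V):
    # 1-norm condition number ||U||_1 ||U^{-1}||_1 of the eigenvector
    # matrix U, with V = U^{-1}; mnorm(A, 1) returns the maximum
    # absolute column sum of an mpmath matrix A
    return mnorm(U, 1)*mnorm(V, 1)
\end{verbatim}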
\begin{figure}[H]
\centering
\includegraphics[width=11cm]{Lcondition.pdf}
\caption{Upper bound for the number of additional digits required in the numerics, expressed in terms of the logarithm of the condition number, $\log_{10}({\rm cond}_1(U))$, as function of number of rungs considered. }
\label{fig:Lcondition}
\end{figure}
The result shown in \Fig{fig:Lcondition} demonstrates that for each rung included, about 0.6 digits of extra precision are needed. Standard double precision arithmetic, which provides about 15 digits of precision, is therefore expected to fail for more than 20 rungs, and we have indeed observed such behaviour. We therefore moved to multiprecision calculations.
We show in \Fig{fig:bop_convergence} the calculated FWM polarization with $E_1=10$, $E_2=10^{-3}$, $\gamma_{C,X}=10^{-3}$, $\delta=0$ as a function of the number of bits of precision. Convergence is found at $b=325$ bits of precision. Notably, at lower precision the spectra diverge exponentially, proportional to $2^{-b}$, with a nearly constant shape.
\begin{figure}[H]
\centering
\includegraphics[width=12cm]{bop_convergence.pdf}
\caption{Calculated FWM spectrum $|\tilde{P}(\omega)|$ for $E_1=10$, $E_2=10^{-3}$, $\gamma_{C}=\gamma_{X}=10^{-3}$, $\delta=0$ and 500 rungs, as function of the number of bits precision as indicated.}
\label{fig:bop_convergence}
\end{figure}
Assuming enough digits for numerical accuracy, the only additional limit to accuracy is the number of rungs considered. Clearly, all rungs which are significantly occupied will need to be taken into account, and this number is always more than the average number of photons, which in turn is roughly given by the square of the largest excitation pulse area. Here we give some examples for the convergence of the ${\cal N}$WM\ polarization for different input parameters, and show the required number of rungs for a polarization curve to converge.
To study the convergence we evaluate the change in the spectrum when increasing the number of rungs $\eta$ included in the calculation by one. We do this by introducing the relative root mean square error
\begin{equation} \sigma_\eta=\sqrt{\frac{\int |\tilde{P}_{\eta+1}(\omega)-\tilde{P}_{\eta}(\omega)|^2 d\omega }{\int |\tilde{P}_{\eta+1}(\omega)|^2 d\omega }}\,.
\label{eqn:sigmaeta}\end{equation}
We say the result is converged if $\sigma_\eta<10^{-5}$. Examples for the dependence of $\sigma_\eta$ on the number of rungs $\eta$ are shown in \Fig{fig:convergence_main} for FWM and 10WM, $|E_1|=6$ or 10, with $\gamma_C=\gamma_X=10^{-3}g$ or $g$, and zero detuning $\delta=0$.
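A minimal sketch of this convergence test (ours; \texttt{spectrum} stands for a hypothetical routine returning $\tilde{P}(\omega)$ on a fixed frequency grid \texttt{omega} for a given rung truncation):
\begin{verbatim}
import numpy as np

def sigma_eta(spectrum, eta, omega):
    # relative rms change of the spectrum when one more rung is included
    P0, P1 = spectrum(eta), spectrum(eta + 1)
    num = np.trapz(np.abs(P1 - P0)**2, omega)
    den = np.trapz(np.abs(P1)**2, omega)
    return np.sqrt(num/den)

def converged_rungs(spectrum, omega, eta0=2, tol=1e-5, eta_max=1000):
    eta = eta0
    while eta < eta_max and sigma_eta(spectrum, eta, omega) >= tol:
        eta += 1
    return eta
\end{verbatim}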
\begin{figure}
\centering
\includegraphics[width=16cm]{convergence_gamma.pdf}
\caption{Error $\sigma_\eta$ versus number of rungs $\eta$. a) FWM, $|E_1|=6$, $\gamma_C=\gamma_X=10^{-3}g$;
b) FWM, $|E_1|=6$, $\gamma_C=\gamma_X=g$; c) FWM, $|E_1|=10$, $\gamma_C=\gamma_X=10^{-3}g$; d) FWM, $|E_1|=10$, $\gamma_C=\gamma_X=g$; e) 10WM, $|E_1|=10$, $\gamma_C=\gamma_X=10^{-3}g$. The insets show $|\tilde{P}(\omega)|$ normalized to a maximum of $1$ at three $\eta$ as given, corresponding to convergence (red), 2/3 of convergence (orange) and 1/3 of convergence (black).}
\label{fig:convergence_main}
\end{figure}
We find that the error tends to a monotonic exponential decay with $\eta$ for $\eta$ above two to three times the average number of photons (given by $|E_1|^2$ here), reaching convergence at about four to five times $|E_1|^2$.
Increasing $\gamma_C$ and $\gamma_X$, and the order of the nonlinearity, the required number of rungs also increases.
The spectra shown in the insets for 1/3, 2/3 and 3/3 of the $\eta$ at convergence demonstrate a variety of behaviours depending on the parameters. We found that the inner doublet is the feature in the spectrum requiring the largest number of rungs to converge.
To optimize the numerical complexity of the simulations, it is important to choose a rung number that is as low as possible while still high enough to ensure convergence. For example, for Fig.\,1 we used a minimum number of rungs of 10 and a maximum of 510. Linear interpolation between these two values over the 501 curves (the vertical resolution of panel a)) corresponds to increasing the rung truncation by 1 per curve. The absolute minimum number of rungs required for convergence in the low excitation regime $|E|\ll1$ is $1+{\cal N}/2$. \Fig{fig:convergence_lowg_rungreq} shows the number of rungs required for convergence for arbitrary $|E_1|$ in the low damping regime.
\begin{figure}
\centering
\includegraphics[width=10cm]{reqrung4convergence.pdf}
\caption{Number of rungs $\eta$ at which convergence for FWM is reached, as function of $|E_1|$, for $E_2=10^{-3}$, $\gamma_{C,X}=10^{-3}$, and $\delta=0$. The data (points) is described well by $10+3.5|E_1|^2$ shown as line.}
\label{fig:convergence_lowg_rungreq}
\end{figure}
\section{Introduction}
Consider the canonical Gaussian measure on $\R^n$, $\gamma_n$. Given $k\in \N$ and $k$ disjoint measurable subsets of $\R^n$, each of $\gamma_n$ measure $1/k$, we can compute the $(n-1)$-dimensional Gaussian measure of the union of the boundaries of these $k$ sets. Below we shall make clear what exactly we mean by the $(n-1)$-dimensional Gaussian measure, but in particular our normalization will be such that the $(n-1)$-dimensional Gaussian measure of a hyperplane at distance $t$ from the origin will be $e^{-t^2/2}$. The question we are interested in is what is the minimal value that this quantity can take when ranging over all such partitions of $\R^n$. As is well known, the Gaussian isoperimetric inequality (\cite{bo,st}) implies that, for $k=2$, the answer is $1$ and is attained when the two sets are half spaces. The answer is also known for $k=3$ and $n\ge 2$ and is given by $3$ $2\pi/3$-sectors in $\R^2$ (product with $\R^{n-2}$) (\cite{cch}). The value in question is then $3/2$.
If the $k$ sets are nice enough (for example if, with respect to the $(n-1)$-dimensional Gaussian measure, almost every point in the union of the boundaries of the $k$ sets belongs to the boundary of only two of the sets) then the quantity in question is bounded from below by $c\sqrt{\log k}$ for some absolute $c>0$. This was pointed out to us by Elchanan Mossel. Indeed, by the Gaussian isoperimetric inequality, the boundary of each of the sets has measure at least $e^{-t^2/2}$ where $t$ is such that $\frac{1}{\sqrt{2\pi}}\int_t^\infty e^{-s^2/2}ds=1/k$. If $k$ is large enough, $t$ satisfies
\[
\frac{e^{-t^2/2}}{\sqrt{2\pi}2t}<\frac{1}{k}<\frac{e^{-t^2/2}}{\sqrt{2\pi}t}
\]
which implies $\sqrt{\log k}\le t\le \sqrt{2\log k}$, and so the boundary of each of the $k$ sets has $(n-1)$-dimensional Gaussian measure at least $e^{-t^2/2}\ge \sqrt{2\pi}t/k\ge\sqrt{2\pi\log k}/k$. Under the assumption that the sets are nice, we then get a lower bound of order $\sqrt{2\pi\log k}$ for the quantity we are after.
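These elementary bounds on $t$ are easy to confirm numerically; a minimal sketch, assuming SciPy's inverse survival function \texttt{norm.isf} for the standard Gaussian:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

for k in [10, 100, 1000, 10**6]:
    t = norm.isf(1.0/k)      # P(g >= t) = 1/k for standard Gaussian g
    print(k, np.sqrt(np.log(k)), t, np.sqrt(2*np.log(k)))
    # sqrt(log k) <= t <= sqrt(2 log k) once k is large enough
\end{verbatim}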
Of course the minimality of the boundary of each of the $k$ sets cannot occur simultaneously for even 3 of the $k$ sets (as the minimal configuration is a set bounded by an affine hyperplane), so it may come as a surprise that one can actually achieve a partition with that order of the size of the boundary. To show this is the main purpose of this note. It is natural to conjecture that, for $k-1\le n$, the minimal configuration is that given by the Voronoi cells of the $k$ vertices of a simplex centered at the origin of $\R^n$, so it would be nice to compute, or at least estimate well, what one gets in this situation. This seems an unpleasant computation to do. However, in Corollary \ref{co:1} below we compute such an estimate for a similar configuration: for even $k$ with $k/2\le n$, we look at the $k$ cells obtained as the Voronoi cells of $\pm e_i$, $i=1,\dots,k/2$, and show that the $(n-1)$-dimensional Gaussian measure of the boundary is of order $\sqrt{\log k}$, and we deduce the main result of this note:
\medskip
\noindent {\bf Main Result} {\em Given even $k$ with $k\le 2n$, the minimal $(n-1)$-dimensional Gaussian measure of the union of the boundaries of $k$ disjoint sets of equal Gaussian measure in $\R^n$ whose union is $\R^n$ is of order $\sqrt {\log k}$.}
\medskip
In Corollary \ref{co:2} we deduce analogue estimates for the Haar measure on the sphere $S^{n-1}$.
This note benefitted from discussions with Elchanan Mossel and Robi Krauthgamer. I first began to think of the subject after Elchanan and I spent some time trying (alas, in vain) to use symmetrization techniques to gain information on the (say, Gaussian) ``$k$-bubble'' conjecture and some variant of it (see Conjecture 1.4 in \cite{im}). Robi asked me specifically the question that is solved here, with some possible applications to designing an algorithm in mind (but apparently the solution turned out to be no good for that purpose). I thank Elchanan and Robi also for several remarks on a draft of this note. I also had a third motivation to deal with this question. It is related to the computation of the dependence on $\e$ in (the probabilistic version of) Dvoretzky's theorem. It is too long to explain here, especially since it does not seem to lead to any specific result.
\section{Approximate isoperimetry for $k$ sets}
We begin with a simple inequality.
\begin{lm}\label{lm:1}
For all $\e>0$ if $C$ is large enough (depending on $\e$) then for all $k\in\N$
\[
\frac{1}{\sqrt{2\pi}}\int_{\sqrt{2\log \frac{k}{C}}
-1}^{\sqrt{2\log Ck}}\Big(\frac{1}{\sqrt{2\pi}}\int_{-s}^se^{-t^2/2}dt\Big)^{k-1}e^{-s^2/2}ds\ge\frac{1}{(2+\e)k}
\]
\end{lm}
\pf Let $g_1,g_2,\dots,g_k$ be independent identically distributed $N(0,1)$ variables. Then
\begin{equation}\label{eq:1}
\frac{1}{\sqrt{2\pi}}\int_0^\infty\Big(\frac{1}{\sqrt{2\pi}}\int_{-s}^se^{-t^2/2}dt\Big)^{k-1}e^{-s^2/2}ds=P(g_1\ge |g_2|,\dots,|g_k|)=\frac{1}{2k}.
\end{equation}
Also,
\begin{align}\label{eq:2}
\frac{1}{\sqrt{2\pi}}\int_0^{\sqrt{2\log \frac{k}{C}}
-1}\Big(\frac{1}{\sqrt{2\pi}}&\int_{-s}^se^{-t^2/2}dt\Big)^{k-1}e^{-s^2/2}ds\nonumber\\
&=&\frac{1}{2k}\Big(\frac{2}{\sqrt{2\pi}}\int_0^se^{-t^2/2}dt\Big)^k\Big]_{s=0}^{\sqrt{2\log \frac{k}{C}}
-1}\\
&\le \frac{1}{2k}(1-\frac{2}{\sqrt{2\pi}}e^{-\log\frac{k}{C}})^k\le \frac{1}{2k}e^{-\frac{2C}{\sqrt{2\pi}}},\nonumber
\end{align}
and, for $C$ large enough,
\begin{align}\label{eq:3}
\frac{1}{\sqrt{2\pi}}\int_{\sqrt{2\log Ck} }^\infty&\Big(\frac{1}{\sqrt{2\pi}}\int_{-s}^se^{-t^2/2}dt\Big)^{k-1}e^{-s^2/2}ds\nonumber\\
&=&\frac{1}{2k}\Big(\frac{2}{\sqrt{2\pi}}\int_0^se^{-t^2/2}dt\Big)^k\Big]_{s=\sqrt{2\log Ck}}^\infty\\
&\le \frac{1}{2k}\Big(1-\Big(1-\frac{2}{\sqrt{2\pi}}\int_{\sqrt{2\log Ck}}^\infty e^{-s^2/2}ds\Big)^k\Big)\le \frac{1}{2k}(1-e^{-1/C}).\nonumber
\end{align}
The Lemma now follows from (\ref{eq:1}),(\ref{eq:2}) and (\ref{eq:3}). \endpf
The next proposition is the main technical tool of this note. The statement involves the $(k-1)$-dimensional Gaussian measure of a certain subset of $\R^k$. We have not formally defined this notion for general sets yet (see Definition \ref{df:1} below), but the set we are talking about here is a subset of a hyperplane (through the origin of $\R^k$), and for such sets it just coincides with the canonical Gaussian measure, associated with this subspace, of the set in question.
\begin{pr}\label{pr:1}
For each $\e>0$ there is a $C$ such that for all $k\ge 2$, the $(k-1)$-dimensional Gaussian measure of the set $\{(t_1,t_2,\dots,t_k);\ t_1=t_2\ge |t_3|,\dots,|t_k|\}$ is bounded between $\frac{\sqrt{\pi\log \frac{k}{C}}-1}{(1+\e)2k(k-1)}$ and $\frac{(1+\e)\sqrt{\pi\log Ck}}{2k(k-1)}$.
\end{pr}
\pf
The measure in question is
\[
\frac{1}{\sqrt{2\pi}}\int_0
^\infty\Big(\frac{1}{\sqrt{2\pi}}\int_{-s}^se^{-t^2/2}dt\Big)^{k-2}e^{-s^2}ds.
\]
Integration by parts (with parts $\Big(\frac{2}{\sqrt{2\pi}}\int_0^se^{-t^2/2}dt\Big)^{k-2}e^{-s^2/2}$ and $e^{-s^2/2}$) gives that it is equal to
\begin{equation}\label{eq:4}
\frac{1}{2(k-1)}\int_0
^\infty\Big(\frac{2}{\sqrt{2\pi}}\int_0^se^{-t^2/2}dt\Big)^{k-1}se^{-s^2/2}ds.
\end{equation}
Now,
\begin{align}\label{eq:5}
\int_{\sqrt{2\log Ck}}
^\infty&\Big(\frac{2}{\sqrt{2\pi}}\int_0^se^{-t^2/2}dt\Big)^{k-1}se^{-s^2/2}ds\\
&=-\int_s
^\infty\Big(\frac{2}{\sqrt{2\pi}}\int_0^ue^{-t^2/2}dt\Big)^{k-1}e^{-u^2/2}du s\Big]_{s=\sqrt{2\log Ck}}^\infty\nonumber\\
&\phantom{==}+\int_{\sqrt{2\log Ck}}
^\infty\int_s^\infty\Big(\frac{2}{\sqrt{2\pi}}\int_0^ue^{-t^2/2}dt\Big)^{k-1}e^{-u^2/2}duds\nonumber\\
&\le\frac{\sqrt{2\pi}}{2k}(1-e^{-1/C})\sqrt{2\log Ck}+\int_{\sqrt{2\log Ck}}
^\infty\frac{\sqrt{2\pi}}{2k}(1-e^{-ke^{-s^2/2}})ds\label{eq:6},
\end{align}
where the estimate for the first term in (\ref{eq:6}) follows from (\ref{eq:3}) and that for the second term follows from a computation similar to (\ref{eq:3}). Now (\ref{eq:6}) is at most
\begin{align}\label{eq:7}
\frac{\sqrt{2\pi}}{2Ck}\sqrt{2\log Ck}+\int_{\sqrt{2\log Ck}}
^\infty\frac{\sqrt{2\pi}}{2}e^{-s^2/2}ds\le \frac{\sqrt{2\pi}(\sqrt{2\log Ck}+1)}{2Ck}
\end{align}
and we conclude that
\begin{align}\label{eq:8}
\int_{\sqrt{2\log Ck}}
^\infty\Big(\frac{2}{\sqrt{2\pi}}\int_0^se^{-t^2/2}dt\Big)^{k-1}se^{-s^2/2}ds\le
\frac{\sqrt{2\pi}(\sqrt{2\log Ck}+1)}{2Ck}.
\end{align}
On the other hand
\begin{align}\label{eq:9}
&\int_0^{\sqrt{2\log Ck}}
\Big(\frac{2}{\sqrt{2\pi}}\int_0^se^{-t^2/2}dt\Big)^{k-1}se^{-s^2/2}ds\\
&\phantom{==}\le
\sqrt{2\log Ck}\int_0^\infty
\Big(\frac{2}{\sqrt{2\pi}}\int_0^s e^{-t^2/2}dt\Big)^{k-1}e^{-s^2/2}ds=\frac{\sqrt{2\pi}\sqrt{2\log Ck}}{2k}.\nonumber
\end{align}
Now, (\ref{eq:4}), (\ref{eq:8}) and (\ref{eq:9}) give the required upper bound. The lower bound (which also follows from the Gaussian isoperimetric inequality) is easier. By Lemma \ref{lm:1},
\begin{align}\label{eq:10}
\frac{1}{2(k-1)}&\int_0
^\infty\Big(\frac{2}{\sqrt{2\pi}}\int_0^se^{-t^2/2}dt\Big)^{k-1}se^{-s^2/2}ds\\
&\ge \frac{1}{2(k-1)}\int_{\sqrt{2\log \frac{k}{C}}
-1}^{\sqrt{2\log Ck}}\Big(\frac{2}{\sqrt{2\pi}}\int_0^se^{-t^2/2}dt\Big)^{k-1}se^{-s^2/2}ds\\
&\ge \frac{{\sqrt{2\log \frac{k}{C}}-1}}{2(k-1)}\int_{\sqrt{2\log \frac{k}{C}}
-1}^{\sqrt{2\log Ck}}\Big(\frac{2}{\sqrt{2\pi}}\int_0^se^{-t^2/2}dt\Big)^{k-1}e^{-s^2/2}ds\\
&\ge \frac{{\sqrt{\pi\log \frac{k}{C}}-1}}{(1+\e)2k(k-1)}.
\end{align}
\endpf
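As a numerical illustration of Proposition \ref{pr:1} (not needed for the proof), note that $\frac{1}{\sqrt{2\pi}}\int_{-s}^s e^{-t^2/2}dt={\rm erf}(s/\sqrt{2})$, so the measure in question can be evaluated by quadrature and compared with $\sqrt{\pi\log k}/(2k(k-1))$; a small Python sketch, assuming SciPy:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

def M(k):
    # the (k-1)-dimensional Gaussian measure from the Proposition
    f = lambda s: erf(s/np.sqrt(2))**(k - 2)*np.exp(-s**2)/np.sqrt(2*np.pi)
    return quad(f, 0, np.inf)[0]

for k in [10, 100, 1000, 10000]:
    # should stay bounded between constants of order 1 as k grows
    print(k, 2*k*(k - 1)*M(k)/np.sqrt(np.pi*np.log(k)))
\end{verbatim}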
To formulate Corollary \ref{co:1}, which is the main result here, in the most general setting we need to define the $(n-1)$-dimensional Gaussian measure of the boundary of a partition of $\R^n$ into $k$ sets.
\begin{df}\label{df:1}
Let $A_1,A_2,\dots,A_k$ be a partition of $\R^n$ into $k$ measurable sets. Put $A=\{A_1,A_2,\dots,A_k\}$ and denote
\[
\partial_\e A=\cup_{i=1}^k ((\cup_{j\not=i}A_j)_\e\setminus \cup_{j\not=i}A_j)
\]
(where $B_\e$ denotes the $\e$-neighborhood of the set $B$). We shall call $\partial_\e A$ the {\em $\e$-boundary} of $A$.
{\em The $(n-1)$-dimensional Gaussian measure of the boundary of $A$} will be defined and denoted by
\[
\gamma_{n-1}(\partial A)=\liminf_{\e\to 0}\frac{\gamma_n(\partial_\e A)}{\sqrt{2/\pi}\,\e}.
\]
\end{df}
Note that we do not define the boundary of the partition, only the measure of the boundary. We would like however that in simple cases, when the boundary and its $(n-1)$-dimensional Gaussian measure are well understood, this definition will coincide with the classical one. In particular notice that if the partition is into two sets which are separated by a hyperplane at distance $t$ from the origin the definition says that the $(n-1)$-dimensional Gaussian measure of the boundary is $e^{-t^2/2}$ and in particular when $t=0$ the measure is 1 which coincides with what we understand as the classical $\gamma_{n-1}$ measure of a hyperplane through 0. This is why the factor $\sqrt{2/\pi}$ is present in the definition above.
\begin{co}\label{co:1} For some universal constants $0<c<C<\infty$ and all $k=2,3,\dots$,\\
(1) If $A=\{A_1,A_2,\dots,A_k\}$ is a partition of $\R^n$ into $k$ measurable sets each of $\gamma_n$ measure $1/k$. Then $\gamma_{n-1}(\partial A)\ge c \sqrt{\log k}$.\\
(2) If $k\le n$, there is a partition $A=\{A_1,A_2,\dots,A_{2k}\}$ of $\R^n$ into $2k$ measurable sets each of $\gamma_n$ measure $1/2k$ such that $\gamma_{n-1}(\partial A)\le C \sqrt{\log k}$.
\end{co}
(1) follows very similarly to the argument in the introduction, except that there is no need for the boundary to be nice anymore: By the Gaussian isoperimetric inequality, for each $\e>0$ and each $i=1,\dots,k$,
\[
\gamma_n((\cup_{j\not=i}A_j)_\e\setminus \cup_{j\not=i}A_j)\ge\frac{1}{\sqrt{2\pi}}\int_t^{t+\e}e^{-s^2/2}ds,
\]
where $t$ is such that $\frac{1}{\sqrt{2\pi}}\int_t^\infty e^{-s^2/2}ds=1/k$. If $\e$ is small enough, the argument in the introduction gives that the integral in question is of order $\e\frac{\sqrt{\log k}}{k}$. Since the $k$ sets $(\cup_{j\not=i}A_j)_\e\setminus \cup_{j\not=i}A_j$ are disjoint, we deduce (1).
(2) follows directly from Proposition \ref{pr:1} since the boundary of the partition into the Voronoi cells corresponding to $\{\pm e_i\}_{i=1}^k$ is contained in the union of $k(k-1)$ hyperplanes through zero, and thus $\gamma_{n-1}(\partial A)$ coincides with the classical $\gamma_{n-1}(\partial A)$, which is what is estimated in Proposition \ref{pr:1}.
A similar result to Corollary \ref{co:1} holds on the sphere $S^{n-1}\subset\R^n$ with its normalized Haar measure $\sigma_n$. One defines the $\e$-boundary of a partition $A$ of the sphere in a similar way to the first part of Definition \ref{df:1} (using, say, the geodesic distance to define the $\e$-neighborhood of a set). Then one defines the {\em $(n-1)$-dimensional Haar measure of the boundary of $A$} by
\[
\sigma_{n-1}(\partial A)=\liminf_{\e\to 0}\frac{\sigma_n(\partial_\e A)}{\sqrt{2n/\pi}\,\e}.
\]
The choice of the normalization constant $\sqrt{2n/\pi}$ was made so that if the partition is into two sets separated by a hyperplane then the measure of the boundary (which ``is'' $S^{n-2}$) will be 1. The proof can be obtained from that of Corollary \ref{co:1} by a standard reduction, using the fact that if $(g_1,\dots,g_n)$ is a standard Gaussian vector then the distribution of $(\sum g_i^2)^{-1/2}(g_1,\dots,g_n)$ is $\sigma_n$.
\begin{co}\label{co:2} For some universal constants $0<c<C<\infty$ and all $k=2,3,\dots$,\\
(1) If $A=\{A_1,A_2,\dots,A_k\}$ is a partition of $S^{n-1}$ into $k$ measurable sets each of $\sigma_n$ measure $1/k$. Then $\sigma_{n-1}(\partial A)\ge c \sqrt{\log k}$.\\
(2) If $k\le n$, there is a partition $A=\{A_1,A_2,\dots,A_{2k}\}$ of $S^{n-1}$ into $2k$ measurable sets each of $\sigma_n$ measure $1/2k$ such that $\sigma_{n-1}(\partial A)\le C \sqrt{\log k}$.
\end{co}
\begin{re}
It may be interesting to investigate what happens when $k\gg n$. In particular, if $k=2^n$ then the partition of $\R^n$ into its $k=2^n$ quadrants satisfies that the $\gamma_{n-1}$ measure of its boundary (consisting of the coordinate hyperplanes) is $n=\log_2 k$. Is that the best (order) that can be achieved?
\end{re}
\section{Introduction}
\noindent The Phase Field Crystal (PFC) model was introduced as a mesoscale description of a nonequilibrium crystalline phase, valid at the molecular length scale, but only over long, diffusive time scales \cite{re:elder02}. By eliminating the need to resolve the time scale associated with lattice vibration, the Phase Field Crystal model has become a widely used computational tool capable of describing a wide variety of phenomena in materials science \cite{re:emmerich12}. One of the strengths of the formulation is the ease in the description of defected solids, including, for example, dislocation dissociation, stacking fault formation, grain boundary motion, and coarsening of polycrystalline configurations. Further spatial coarse graining has also been undertaken, leading to models in which the characteristic spatial variation is also slow compared with the molecular length scale \cite{re:goldenfeld05,re:elder10,re:yeon10,re:praetorius19}.
The model begins with the introduction of a phenomenological, non convex free energy functional, $\Phi_{sh}[\psi]$, of a phase field $\psi(\mathbf{x},t)$ and its gradients. Although we will not explicitly use this functional form below, we mention the widely used form
\begin{equation}
\Phi_{sh}[\psi] = \int_{\Omega} d \bm{x} \; \varphi_{sh} = \int_{\Omega} d \bm{x} \; \left[ \frac{1}{2} \left[ (\nabla^{2} + q_{0}^{2}) \psi \right]^{2} - \frac{\epsilon}{2} \psi^{2} + \frac{1}{4}\psi^{4} \right],
\end{equation}
where in these dimensionless units $q_{0} = 1$ (we retain the notation $q_{0}$ for ease of discussion below), and $0 < \epsilon \ll 1$ is the dimensionless control parameter of the bifurcation between the ground states $\psi = 0$ and $\psi$ periodic. We also introduce $\overline{\psi}$, the conserved spatial average of $\psi$, as a control parameter. The combination of gradients is chosen so as to produce ground states that are spatially periodic, with characteristic wavenumber $q_{0}$. The choice of nonlinearity and the value of $\overline{\psi}$ determine the symmetry of the resulting ground state lattice. While the bulk of the early work focused on two dimensional hexagonal lattices, research has also considered three dimensional systems, including fcc and bcc lattices \cite{re:elder10}, and specific materials such as, for example, Fe \cite{re:pisutha-arnond13} or graphene \cite{re:huter16}. We will assume below that $\Phi_{sh}[\psi]$ is given for a three dimensional system, but will not focus on its specific properties, which have been extensively studied elsewhere (e.g., in Ref. \cite{re:huter16}).
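For orientation, a minimal pseudo-spectral sketch (ours, for illustration only; assuming NumPy and periodic boundary conditions) evaluating $\Phi_{sh}[\psi]$ on a cubic grid is:
\begin{verbatim}
import numpy as np

def phi_sh(psi, eps, L):
    # Swift-Hohenberg free energy on a periodic N^3 grid of box size L;
    # (nabla^2 + q0^2) psi with q0 = 1 is applied in Fourier space
    N = psi.shape[0]
    k = 2*np.pi*np.fft.fftfreq(N, d=L/N)
    kx, ky, kz = np.meshgrid(k, k, k, indexing='ij')
    lap = -(kx**2 + ky**2 + kz**2)
    op = np.fft.ifftn((lap + 1.0)*np.fft.fftn(psi)).real
    f = 0.5*op**2 - 0.5*eps*psi**2 + 0.25*psi**4
    return f.mean()*L**3          # integral of the energy density over the box
\end{verbatim}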
Phase Field Crystal model free energies have been derived by using Density Functional Theory methods, with the expectation of obtaining functionals that capture the long time diffusive evolution of the mass density as the relevant order parameter \cite{re:elder07,re:huang10}. The free energies obtained provide a reasonable description of the freezing phase transition \cite{re:archer19}. However, extensions to include the momentum density in the set of slow or hydrodynamic variables have not been considered to the same extent, except for colloidal systems \cite{re:archer09}, and, more recently, in the so called hydrodynamic formulation of the Phase Field Crystal \cite{re:heinonen16}. In this latter case, both mass and momentum conservation are considered at the mesoscale (still involving spatial variations that are slow compared with the lattice spacing $1/q_{0}$). For weak distortions around the ground state, a smooth displacement field can be introduced, resulting in a dynamical dispersion relation that includes both phonon propagation and damping, in agreement with standard theory. Notably, the dispersion at large wavenumber becomes entirely diffusive as diffusion of the phase field controls the local relaxation of a weakly distorted configuration. Although the study does not address how to explicitly incorporate topological constraints necessary to describe a defected configuration, results are given for grain rotation and shrinkage in a two dimensional, hexagonal phase. Grain radius is seen to decay with time as $t^{-1/2}$, as expected. The amplitude of the decay rate increases with increasing Newtonian viscosity in the momentum equation. In the limit of large viscosity, the results of the overdamped model of Ref. \cite{re:heinonen14} for the grain size as a function of time are recovered. Since the boundary of the grain comprises a periodic array of dislocations, this example indicates that the theory is capable of describing the evolution of an initially defected configuration.
However, the Phase Field Crystal model has some important shortcomings that point to its incompleteness. In most research to date, the mass density and lattice distortion of the crystalline phase are generally described by the same scalar field $\psi$. If this is the case, then their variations are not independent. Consider, for example, that $\psi$ is a conserved mass density. Then, its local variation through distortion is $\delta \psi = - \partial_{k} (\psi \delta u_{k})$, where $u_{k}$ is the $k$-th component of the displacement vector, the phase of $\psi$. From this relation, the variation $\delta \Phi_{sh}/\delta u_{k} = \psi \partial_{k} (\delta \Phi_{sh}/\delta \psi)$. Since the stress is defined through $\partial_{j} T_{ij} = - \delta \Phi_{sh}/\delta u_{i}$, then $\partial_{j} T_{ij} = - \psi \partial_{i}(\delta \Phi_{sh}/\delta \psi)$.
This relation is correct in equilibrium where both sides of the equation vanish, but not in general outside of equilibrium. Furthermore, if both variations are not considered to be independent, then lattice distortions can only relax diffusively, which is unphysical. This difficulty has been recognized for a long time, and a number of modified models have been introduced to allow for relaxation of the phase field in a time scale faster than diffusion \cite{re:stefanovic06,re:majaniemi07,re:heinonen14,re:zhou19}, including the hydrodynamic formulation alluded to above \cite{re:heinonen16}.
Despite modifications to the Phase Field Crystal model in order to accelerate the relaxation of elastic distortions, restricting the model to a single field $\psi$ still leads to difficulties or inconsistencies. One such difficulty involves the definition of physical system boundaries, and the imposition of boundary conditions involving domain shape or traction. The specification of boundary traction, for example, needs to be done indirectly through manipulation of the phase field. In their study of the motion of a single dislocation under an imposed strain, Berry et al. \cite{re:berry06} rigidly displaced a small layer of sites at the boundary. The resulting distortion propagated into the bulk system slowly (diffusively), thus preventing direct control of the stress field in the defect region other than readjusting the displacement of the boundary layer, and waiting for a long time until the bulk stress would readjust. The ensuing motion of the dislocation is quite different from what would be expected from classical elasticity and the Peach-K\"{o}hler force \cite{re:skaugen18b,re:salvalaglio20}. A second issue concerns the recent result that the ground state of the Phase Field Crystal appears to be, in fact, under a large pressure. For example, for the model parameters that are employed to describe bcc Fe, the ground state pressure is as large as $1.8 \times 10^{6}$ atm at melting \cite{re:pisutha-arnond13}. Whether or not this state of pressure is taken into account in the determination of the linear elastic constants from the phase field free energy, it is possible to predict either a decrease or an increase in their values as a function of $\overline{\psi}$ (related to average density or pressure) \cite{re:wang18}. The proper definition of strain from the phase field has been further discussed in Ref. \cite{re:huter16}, which suggests holding the value of $\overline{\psi}$ constant under volume change, which implies that it is not related to the mass density. Finally, modeling plastic motion of defects within the Phase Field Crystal leads to another class of difficulties. Elastic and plastic distortions are independent, and ordinarily relax over widely different time scales. While it is well understood that mass and lattice defect velocities are independent quantities \cite{re:kosevich79}, they are simultaneously described by a single scalar quantity $\psi$ in the Phase Field Crystal model.
The approach that we propose here is based on the realization that the PFC/Swift-Hohenberg functional does not possess intrinsic elasticity; indeed, the Swift-Hohenberg functional, despite its elegance and immense generality, contains no information on the forces that hold matter together, either based on Quantum Mechanics or on macroscopic elastic response, and this is borne out by the fitting it requires, see, e.g., \cite[Eqn. (65)]{re:huter16}. Therefore, we use it as a mathematical device or indicator function that (i) describes the symmetry of a crystalline lattice even when locally deformed, (ii) serves to locate topological defects and provide for their topological index, and (iii) allows topological charge to be conserved in close-to-equilibrium processes involving defect motion, through its `phase' being constrained to equal a field (described below) whose mechanics explicitly satisfies a conservation law for (signed) topological charge, allowing defect nucleation and annihilation. We introduce a configurational distortion tensor $\bm{P}$, a pointwise functional of the phase field $\psi$, which coincides with the inverse elastic distortion tensor of the medium $\bm{W}$ only in equilibrium. Away from equilibrium, we allow relative fluctuations between both such that the elastic response is captured by $\bm{W}$, and the diffusive relaxation by $\bm{P}$. Section \ref{sec:theory} describes our theory in general for nonlinear distortions, whereas Sec. \ref{sec:theory-linear} considers the approximation of small elastic distortions so as to compare our results with existing models.
The fully nonlinear (geometric and material) dynamics of the inverse elastic distortion field is governed by the partial differential equation based model of Field Dislocation Mechanics (FDM) \cite{acharya2001model, acharya2004constitutive, acharya2006size,
acharya2011microcanonical, arora2020dislocation, arora2020finite,arora2020unification}. It completes the program of the theory of continuously distributed dislocations \cite[and earlier references therein]{kazuo1963non}, \cite{bilby1955continuous,kroener1971continuum,kroner1981continuum,mura1963continuous,fox1966continuum,willis1967second,re:kosevich79} extended from its origins in linear elasticity and links between differential geometry and defect kinematics to a full-fledged nonlinear theory of continuum mechanics accounting for equations of balance, evolution, large irreversible material deformations (plasticity), material inertia and dissipation, geometric and material nonlinearity in finite bodies of arbitrary elastic anisotropy subjected to general boundary and initial conditions, and understood at a level of granularity suitable for computer implementation to obtain approximate solutions \cite{arora2020finite,zhang2015single}, \cite[and following works for the geometrically linear model]{roy2005finite}. FDM is `fluid-like' in its description of the behavior of solids with defects in not relying on the existence of a reference configuration or a plastic distortion tensor, while predicting physically observed large, irreversible plastic deformation of the body due to the motion of dislocations (as well as recoverable elastic deformation and residual stress). The coupled FDM-PFC model we propose shares all of these important properties.
In closing this Introduction, we mention the Phase Field models of dislocations \cite{wang2001nanoscale,koslowski2002phase,shen2003phase, rodney2003phase,denoual2004dynamic,re:levitas12,re:mianroodi15,mianroodi2016theoretical} that have been quite successful in solving a variety of problems related to dislocation mechanics close to equilibrium. These models are restricted to small deformation kinematics and the notion of plastic strains from a fixed reference configuration (that is not physically determinable from an internally stressed defected initial state). More importantly, Phase Field models require the definition of the so-called `crystalline energy' or the `Generalized Stacking Fault energy' that has to be defined creatively from the a-priori knowledge of slip-systems of a material and an atomistic $\gamma$-surface procedure first introduced by Vitek in \cite{vitek1968intrinsic}. As a consequence, the number of independent fields included in the model is related to the number of slip systems identified and considered \cite{rodney2003phase,re:levitas12}, and dislocation combination rules need to be adapted accordingly \cite{shen2003phase}. This is different from PFC which \emph{predicts} material symmetry and consequent defect motions on preferred planes and directions dictated by that symmetry \cite{re:yamanaka17}. Furthermore, the dynamics of phase field models rely on an Allen-Cahn gradient flow for a set of non-conserved scalar `disorder' fields (or non-convex incremental energy minimization with highly non-unique solutions as in \cite{koslowski2002phase}), with one consequence being that a spatially homogeneous phase field can evolve based on the levels of stress and energy density fields. This is in contrast to FDM where evolution of the elastic distortion (beyond `convection') can only occur at a field point where a dislocation exists (i.e., the curl of the distortion does not vanish), regardless of the level of stress or energy density at that point. This `thermodynamic driving force' property follows from the second law of thermodynamics constrained by an explicit condition of conservation of Burgers vector (topological charge) during the evolution of elastic distortion - and is a feature that is consistent with the form of the Peach-K\"{o}hler force of classical dislocation theory.
\section{Finite deformation phase field crystal theory of dislocation motion}
\label{sec:theory}
\subsection{Choice of fields}
\label{sec:variables}
We focus on an isothermal system and consider a simply-connected body (even in the presence of line defects) at all times. The following set of independent variables is introduced: the continuum mass density $\rho$, the material velocity $\bm{v}$, the inverse elastic distortion $\bm{W}$, and the phase field $\psi$. The tensor field $\bm{W}$ maps the (linear approximation to the) deformed elastic lattice pointwise to the undeformed lattice (the latter assumed known). In the absence of line defects, $\mathop{\rm curl}\nolimits \bm{W} = \mathbf{0}$ (compatible elasticity), and a potential field $\bm{X}$ defining a reference configuration exists in which the undeformed lattice can be embedded: $d X_{i} = \frac{\partial X_{i}}{\partial x_{j}} dx_{j} = F^{-1}_{ij} dx_{j}$, with $\bm{F}^{-1} = \bm{W}$. In terms of a displacement field $\bm{u}$ of the reference (which exists in the compatible case), the tensor $U_{ij} = \partial_{i} u_{j} = \partial_{i}(x_{j} - X_{j}) = \delta_{ij} - F^{-1}_{ij}$, so that $\bm{W} = \bm{F}^{-1} = \bm{I} - \bm{U}$. Even in the incompatible case, defining $\bm{W}^{-1} - \bm{I} = \bm{U}$ and assuming $|\bm{U}| \ll 1$, $\bm{W} \approx \bm{I} - \bm{U}$.
The key ingredient of our model is a new (two-point) second rank tensor $\bm{P}$ (standing for \emph{phase}) with the same symmetry properties under rotation as $\bm{W}$. Its value at each point in the material is a functional of the phase field $\psi$, and is defined so as to describe the distortion of the surfaces of constant $\psi$. After averaging the phase field over a scale on the order of $q_{0}^{-1}$ \cite{re:skaugen18}, one can define a triad of local wavevectors $\bm{q}^{n}$, different than those of the ground state of $\Phi_{sh}[\psi]$, the latter denoted by $\bm{q}_{0}^{n}$. Then we define $\bm{q}_{0}^{n} = \bm{P}^{-T} \bm{q}^{n}$.
The tensor $\bm{P}$ describes a local configurational distortion that can be associated with the field $\psi$, without endowing the phase field with any elastic properties.
Note that $\mathop{\rm curl}\nolimits \bm{W} \neq \mathbf{0}$ in general, and $\mathop{\rm curl}\nolimits \bm{P}$ will not vanish at defects in the phase-field equivalent lattice.
\subsection{Balance equations}
The density $\rho$ satisfies mass conservation
\begin{equation}
\dot{\rho} + \rho \mathop{\rm div}\nolimits \bm{v} = 0
\label{eq:mass_con}
\end{equation}
where $\dot{(~)}$ represents a material time derivative, $\bm{v}$ is the material velocity (the center of mass velocity of a volume element), and all spatial differential operators at any given time are on the configuration occupied by the body at that time. Momentum conservation is written as
\begin{equation}
\rho \dot{\bm{v}} = \mathop{\rm div}\nolimits \bm{T} + \rho \bm{b}
\label{eq:mom_con}
\end{equation}
where $\bm{T}$ is the stress tensor, which in the present context, is symmetric, and $\bm{b}$ is a specified body force density (per unit mass). For quasi-static motions of the body, we simply write $ \mathop{\rm div}\nolimits \bm{T} + \rho \bm{b} = \mathbf{0}$.
If the medium contains dislocation lines, the inverse elastic distortion is incompatible, and we write \cite{willis1967second}
\begin{equation}
\mathop{\rm curl}\nolimits \bm{W} = \mathop{\rm curl}\nolimits \bm{P} = - \bm{\alpha},
\label{eq:alpha_def}
\end{equation}
where $\bm{\alpha}$ is the dislocation density tensor. The integral of this tensor over a surface equals the sum of the Burgers vectors of the dislocation lines that thread the surface. Motion of the dislocation lines induces a change in the distortion tensor given by \cite{acharya2004constitutive,acharya2011microcanonical}
\begin{equation}
\dot{\bm{W}} + \bm{W} \bm{L} = \bm{\alpha} \times \bm{V}
\label{eq:Vdef}
\end{equation}
where we have introduced the tensor $\bm{L} = \mathop{\rm grad}\nolimits \bm{v}$, and the local dislocation line velocity $\bm{V}$ relative to the local mass velocity. This equation is implied by topological charge conservation under defect motion (up to a gradient of a vector field that can be assumed to vanish for microscopic defect motions) \cite{acharya2015dislocation} and, conversely, enforces such conservation when operative.
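For concreteness, we record the index form of the curl and cross product operations on second rank tensor fields used above; we assume one common convention of the field dislocation mechanics literature, with $e_{jkl}$ the alternating symbol:
\begin{equation}
\left( \mathop{\rm curl}\nolimits \bm{A} \right)_{ij} = e_{jkl} \partial_{k} A_{il}, \qquad \left( \bm{A} \times \bm{V} \right)_{ij} = e_{jkl} A_{ik} V_{l},
\end{equation}
so that Eq. (\ref{eq:Vdef}) reads, componentwise, $\dot{W}_{ij} + W_{ik} L_{kj} = e_{jkl} \alpha_{ik} V_{l}$.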
\subsection{Free energy, dissipation inequality, and governing equations}
We next consider the free energy density of the system $\varphi$ to be a function not only of $\rho$, $\bm{W}$ and $\psi$, but also of $\bm{P}$, treated as an independent variable,
\begin{eqnarray}
\int_\Omega d \bm{x} \; \rho \varphi(\rho,\bm{W},\psi,\bm{P}) = \int_\Omega d\bm{x} \; \rho \varphi_{e}(\rho,\bm{W},\bm{P}) & + & C_{sh} \Phi_{sh}[\psi] + \nonumber \\ \frac{C_{w}}{2} \int_\Omega d \bm{x} \; \rho | \bm{W} - \bm{P} |^{2} & + & \frac{C_{\rho}}{2} \int_\Omega d \bm{x} \; \rho (\rho-\psi)^{2}.
\label{eq:fe}
\end{eqnarray}
The first term on the right hand side of Eq. (\ref{eq:fe}) is the standard elastic energy. We allow a dependence on $\bm{P}$ only to express the fact that the functional form of the elastic constant matrix depends on the symmetry of the lattice, and potentially on linear elastic constants that themselves depend on that symmetry and on the local state of distortion of the phase field. For the simplest extension of linear elasticity to rotationally invariant nonlinear elasticity, for example, one would write
\begin{equation}
\varphi_{e} = \frac{1}{2 \rho_{0}} \bm{E}:\bm{C}(\bm{P}):\bm{E},
\end{equation}
where $\bm{C}$ is the tensor of elastic moduli, possibly dependent on $\bm{P}$, and $\bm{E}$ is the symmetric strain tensor $\bm{E} = \frac{1}{2} \left( \bm{F}^{eT}\bm{F}^e - \bm{I} \right)$, with $\bm{F}^e := \bm{W}^{-1}$.
For simplicity, we introduce the notation
\begin{equation}
\Phi_{wp} = \int_\Omega d\bm{x} \; \rho \varphi_{e}(\rho,\bm{W},\bm{P}) + \frac{C_{w}}{2} \int_\Omega d \bm{x} \; \rho | \bm{W} - \bm{P} |^{2},
\label{eq:phiwp}
\end{equation}
which is also, implicitly, a functional of the phase field $\psi$ (through $\bm{P}$). The coupling constants $C_{sh}, C_{w}$ and $C_{\rho}$ are nonnegative, and we will typically focus on the case in which $C_{w}$ is large, $C_{w} \gg | \bm{C} |$.
Motivated by Eq. \eqref{eq:phiwp} and the evolution of $\bm{P}$ necessary for response due to a superposed rigid motion on a given motion of a body in which $\psi$ does not evolve, we assume that
\begin{equation}
\int_\Omega d\bm{x} \; \frac{\delta \Phi_{wp}}{\delta \psi} \dot{\psi} = \int_\Omega d\bm{x} \; \rho \frac{\partial}{\partial \bm{P}} \left( \varphi_{e} + C_{w} \varphi_{wp} \right) : \left[ \dot{\bm{P}} + \bm{P} \bm{L} \right],
\label{eq:sh_diss}
\end{equation}
where we have defined $\varphi_{wp} = \frac{1}{2} |\bm{W}-\bm{P}|^{2}$.
With the explicit form of the conservation laws, and the form of the free energy introduced, we can use a dissipation inequality to derive the kinetic laws governing the evolution of the fields. We write the Second Law of Thermodynamics in the form
\begin{equation}
\int_{\partial \Omega} \left( \bm{T} \cdot \hat{\bm n} \right) \cdot \bm{v} \, dS + \int_{\Omega} d \bm{x}\, \rho \bm{b} \cdot \bm{v} \ge \frac{d}{dt} \int_{\Omega} d \bm{x} \; \rho \varphi + \frac{d}{dt} \int_{\Omega} d \bm{x} \; \frac{1}{2} \rho |\bm{v}|^{2},
\end{equation}
so that the power expended by external agencies (applied tractions on the outer boundary and applied body forces) is greater than or equal to the rate of change of the free energy plus the kinetic energy. Integrating this relation by parts and using the balance of linear momentum, we write
\begin{equation}
\int_{\Omega} d \bm{x} \; \bm{T}:\bm{L} - \frac{d}{dt} \int_{\Omega} d \bm{x} \; \rho \varphi \ge 0.
\label{eq:second_law}
\end{equation}
By explicit substitution of Eq. (\ref{eq:fe}), one finds
\begin{eqnarray}
\int_{\Omega} d \bm{x} \; \bm{T}:\bm{L} & - & \int_{\Omega} d \bm{x} \; \rho \left( \frac{\partial \varphi_{e}}{\partial \bm{W}} + C_{w} \frac{\partial \varphi_{wp}}{\partial \bm{W}}\right) : (- \bm{W} \bm{L} + \bm{\alpha}\times\bm{V}) \nonumber \\
&-& \int_{\Omega} d \bm{x}\; \left[ \varphi_{e} + C_{w} \varphi_{wp} + C_{\rho}\varphi_{\rho} + C_{\rho} (\rho - \psi) \right] \left( - \rho {\rm Tr}(\bm{L}) \right) - \label{eq:second_law_2} \\
&-& \int_{\Omega} d\bm{x} \; \left[ C_{sh} \frac{\delta \Phi_{sh}}{\delta \psi} +C_{\rho} \rho (\rho - \psi) \right] \dot{\psi} - \int_{\Omega} d\bm{x} \; \rho \left[ \frac{\partial \varphi_{e}}{\partial \bm{P}} + C_{w} \frac{\partial \varphi_{wp}}{\partial \bm{P}} \right] : \dot{\bm{P}} \ge 0 \nonumber
\end{eqnarray}
By using Eq. (\ref{eq:sh_diss}), the last term in the L.H.S. of Eq. (\ref{eq:second_law_2}) can be written as
$$
- \int_{\Omega} d\bm{x} \; \rho \left[ \frac{\partial \varphi_{e}}{\partial \bm{P}} + C_{w} \frac{\partial \varphi_{wp}}{\partial \bm{P}} \right] : (-\bm{P} \bm{L}) - \int_{\Omega} d \bm{x} \; \frac{\delta \Phi_{wp}}{\delta \psi} \dot{\psi}
$$
This equation can be further rewritten to highlight products of thermodynamic forces and currents as
\begin{eqnarray}
& & \int_{\Omega} d \bm{x} \; \left[ \bm{T} + \rho \bm{W}^{T} \left( \frac{\partial \varphi_{e}}{\partial \bm{W}} + C_{w} \frac{\partial \varphi_{wp}}{\partial \bm{W}} \right) + \rho a
\bm{I} \right] : \bm{L} \nonumber \\
& - & \int_{\Omega} d \bm{x} \; \rho \left( \frac{\partial \varphi_{e}}{\partial \bm{W}} + C_{w} \frac{\partial \varphi_{wp}}{\partial \bm{W}} \right) : (\bm{\alpha}\times \bm{V}) \nonumber \\
& + & \int_{\Omega} d \bm{x}\; \rho \bm{P}^{T} \left( \frac{\partial \varphi_{e}}{\partial \bm{P}} + C_{w} \frac{\partial \varphi_{wp}}{\partial \bm{P}} \right) : \bm{L} \label{eq:second_law_3} \\
& - & \int_{\Omega} d \bm{x}\; \left[ C_{sh}\frac{\delta \Phi_{sh}}{\delta \psi} + C_{\rho} \rho(\rho - \psi) + \frac{\delta \Phi_{wp}}{\delta \psi} \right] \dot{\psi} \ge 0 \nonumber
\end{eqnarray}
where we have defined $a = \varphi_{e} + C_{w} \varphi_{wp} + C_{\rho} \varphi_{\rho} + C_{\rho} (\rho - \psi)$, with $\varphi_{\rho} = \frac{1}{2} (\rho - \psi)^{2}$.
This expression can be further simplified since the free energy density $\varphi$ is invariant under rotation. In that case, the antisymmetric (or skew) part
$$
\left( \bm{W}^{T} \frac{\partial \varphi}{\partial \bm{W}} + \bm{P}^{T} \frac{\partial \varphi}{\partial \bm{P}} \right)_{\rm skew} = \mathbf{0}.
$$
Therefore, of the terms proportional to $\bm{L}$ in Eq. (\ref{eq:second_law_3}), only those proportional to the symmetric part of the velocity gradient, $\bm{D} = (\bm{L}+\bm{L}^{T})/2$, contribute. We combine them into
\begin{equation}
\int_{\Omega} d \bm{x}\; \left\{ \bm{T} + \rho \left[ \bm{W}^{T} \left( \frac{\partial \varphi_{e}}{\partial \bm{W}} + C_{w} \frac{\partial \varphi_{wp}}{\partial \bm{W}} \right) + \bm{P}^{T} \left( \frac{\partial \varphi_{e}}{\partial \bm{P}} + C_{w} \frac{\partial \varphi_{wp}}{\partial \bm{P}} \right) + a \bm{I} \right] \right\} : \bm{D}
\label{eq:second_law_4}
\end{equation}
This completes our calculation of the dissipation inequality. One can now identify the reversible parts of the various currents, followed by the introduction of the respective dissipative currents in order to respect the inequality. The symmetric reversible stress follows directly from Eq. (\ref{eq:second_law_4}),
\begin{equation}
\bm{T}^{R} = - \rho \left[ \bm{W}^{T} \left( \frac{\partial \varphi_{e}}{\partial \bm{W}} + C_{w} \frac{\partial \varphi_{wp}}{\partial \bm{W}} \right) + \bm{P}^{T} \left( \frac{\partial \varphi_{e}}{\partial \bm{P}} + C_{w} \frac{\partial \varphi_{wp}}{\partial \bm{P}} \right) + a \bm{I} \right]
\label{eq:stress_r}
\end{equation}
Since our formulation applies not only to crystalline phases, but also to other phases with broken symmetries still described by a phase field, we mention that it is possible to introduce a dissipative stress as $\bm{T}^{D} = \bm{\eta}:\bm{D}$, where $\bm{\eta}$ is a fourth rank viscosity tensor. The number of independent components of the elastic constant and viscosity tensors depends on the symmetry of the system, and these have been enumerated for several important cases \cite{re:martin72}.
We will restrict our analysis to dissipative defect velocities only. In order to ensure positivity of dissipation, we write
\begin{equation}
\bm{V} = - \bm{M} \, \bm{X} : \left[ \rho \left( \frac{\partial \varphi_{e}}{\partial \bm{W}} + C_{w} \frac{\partial \varphi_{wp}}{\partial \bm{W}} \right)^{T} \bm{\alpha}\right]
\label{eq:VD_def}
\end{equation}
where $\bm{M}$ is a positive definite mobility tensor and $\bm{X}$ denotes the third order alternating tensor, $X_{ijk} = e_{ijk}$. For $C_w = 0$, it can be shown that the driving force in the above relation corresponds to the exact generalization of the form of the Peach--Koehler force to the fully nonlinear setting \cite{acharya2004constitutive}.
Finally, we identify the reversible and irreversible currents of the phase field $\psi$. The condition for reversible motion is simply $\dot{\psi} = 0$, that is, advection of the phase field. The dissipative component is chosen to enforce positivity, leading to an order parameter equation,
\begin{equation}
\dot{\psi} = - L \left[ C_{sh}\frac{\delta \Phi_{sh}}{\delta \psi} + C_{\rho} \rho(\rho - \psi) + \frac{\delta \Phi_{wp}}{\delta \psi} \right]
\label{eq:op_evol}
\end{equation}
where the constant $L > 0$ is the phase field mobility. Importantly, although mass is a conserved quantity, the phase field that describes the broken symmetry is not. On this point, our model differs from implementations of the Phase Field Crystal model based on density functional theory, in which the order parameter is chosen to be the mass density.
In summary, the complete set of equations includes mass (Eq. (\ref{eq:mass_con})), momentum (Eq. (\ref{eq:mom_con})), and topological charge (Eq. (\ref{eq:Vdef})) conservation, along with the definition of the dislocation density tensor, Eq. (\ref{eq:alpha_def}). The phenomenological currents that follow from the dissipation inequality and the model free energy, Eq. \eqref{eq:fe}, are the stress, Eq. (\ref{eq:stress_r}), the defect velocity, Eq. (\ref{eq:VD_def}), and the evolution equation for the phase field, Eq. (\ref{eq:op_evol}).
Before considering the small deformation limit of the model, we outline several qualitative features of the evolution of a defected phase as given by the governing equations. An initially defected configuration will be described by an order parameter field $\psi$. Topological defects will be located in regions of nonzero curl of $\bm{P}$, with $\bm{P}$ defined by a pointwise oriented triad in reciprocal space obtained from $\psi$ \cite{re:skaugen18b}, compared with the same object for the ground state of $\Phi_{sh}$. For $C_{w}, C_{sh}$ large and of comparable magnitude, the order parameter will relax quickly (and diffusively) to a local minimum of
$$
C_{sh} \Phi_{sh} + \frac{ C_{w}}{2} \int d \bm{x} \; \rho |\bm{W}-\bm{P}|^{2}
$$
relatively independently of the resulting changes induced in the elastic energy $\varphi_{e}$ and in the mass density fluctuations. This process will be accompanied by the relaxation of the elastic distortion on phonon lifetime scales, also quickly if the quasistatic elastic limit is invoked. Further evolution will be slow, driven by the Peach--Koehler force in Eq. (\ref{eq:VD_def}), which is dominated by the elastic stress term $\partial \varphi_{e}/\partial \bm{W}$. If the configuration is not defected, but subjected to body forces, tractions, and/or velocity boundary conditions, the solution of the elasticity problem will yield $\bm{W}$, which will, if $C_{w}$ and $C_{sh}$ are large, quickly modify $\psi$. In this case of no defects, $\psi$ becomes a passive indicator function mediating nonlinear anisotropic elastic response up to homogeneous nucleation of defects.
\section{Small deformation limit}
\label{sec:theory-linear}
In the small deformation or geometrically linear limit, we consider a fixed simply connected reference configuration for the body and assume that the deforming body remains close to this configuration at all times so that all spatial derivatives can be written w.r.t. this fixed reference configuration. As is customary, it is also formally assumed that various distortion measures are `small' in magnitude. In this case, as mentioned in Sec. \ref{sec:variables}, the inverse elastic distortion is $\bm{W} = \bm{I} - \bm{U}$ and we treat $\bm{U}$ as the fundamental measure of elastic distortion. We note that $\mathop{\rm curl}\nolimits \bm{U} \neq \mathbf{0}$ in the presence of defects, when it cannot be written as a gradient of a displacement field. We will also consider the symmetrized elastic distortion $\bm{\epsilon} = \bm{U}_{sym}$, $\epsilon_{ij} = (1/2)(U_{ij}+U_{ji})$. Analogously, we define $\bm{Q} = \bm{I} - \bm{P}$.
From Eqs. (\ref{eq:alpha_def}) and (\ref{eq:Vdef}), the equations defining the dislocation density tensor and defect motion are now
\begin{equation}
\mathop{\rm curl}\nolimits \bm{U} = \bm{\alpha}, \quad\quad \bm{L} = \dot{\bm{U}} + \bm{\alpha} \times \bm{V}
\label{eq:kinematics_sd}
\end{equation}
where we have neglected the quadratic term $\bm{U}\bm{L}$. These equations are the classical equations of plastic motion \cite{re:kosevich79, mura1963continuous}. Here, $\bm{L}$ is still the velocity gradient, but now with respect to the fixed reference configuration.
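As a minimal numerical illustration of the first of Eqs. (\ref{eq:kinematics_sd}), the Python sketch below (not part of the solution procedures cited in the literature) evaluates the nonzero components of $\bm{\alpha}$ by finite differences on a regular grid, assuming straight defect lines along the $z$ axis and the index convention recorded above.
\begin{verbatim}
import numpy as np

def dislocation_density(U, dx, dy):
    # U[i, j] holds the distortion component U_{ij} sampled on an
    # (nx, ny) grid; for line defects along z, the only nonzero
    # components of alpha = curl U are
    #   alpha_{i3} = d_x U_{i2} - d_y U_{i1}.
    alpha3 = np.empty((U.shape[0],) + U.shape[2:])
    for i in range(U.shape[0]):
        alpha3[i] = (np.gradient(U[i, 1], dx, axis=0)
                     - np.gradient(U[i, 0], dy, axis=1))
    return alpha3
\end{verbatim}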
In analogy to Eq. (\ref{eq:fe}) we write the free energy density as
\begin{eqnarray}
\int_{\Omega} d \bm{x} \; \varphi(\rho,\bm{U},\psi,\bm{Q}) = \int_{\Omega} d\bm{x} \; \varphi_{e}(\rho,\bm{U},\bm{Q}) & + & C_{sh} \Phi_{sh}[\psi] + \nonumber \\ \frac{C_{w}}{2} \int_{\Omega} d \bm{x} \; | \bm{U} - \bm{Q} |^{2} & + & \frac{C_{\rho}}{2} \int_{\Omega} d \bm{x} \; (\rho-\psi)^{2}.
\label{eq:fesd}
\end{eqnarray}
In the small deformation regime, the dissipation inequality is written as
\begin{equation}
\int_{\Omega} d \bm{x}\; \bm{T} : \bm{L} - \int_{\Omega} d \bm{x}\; \dot{\varphi} \ge 0.
\label{eq:second_lawsd}
\end{equation}
As in Sec. \ref{sec:theory}, we define
\begin{equation}
\Phi_{uq} = \int_{\Omega} d \bm{x}\; \varphi_{e}(\bm{U},\bm{Q}) + \frac{C_{w}}{2} \int_{\Omega} d \bm{x}\; |\bm{U}-\bm{Q}|^{2}.
\end{equation}
The second term of Eq. (\ref{eq:second_lawsd}) can now be written as
$$
\int_{\Omega} d \bm{x}\; \dot{\varphi} = \int_{\Omega} d \bm{x}\; \frac{\partial \varphi_{e}}{\partial \bm{U}} : (\bm{L} - \bm{\alpha}\times\bm{V}) + \int_{\Omega} d \bm{x}\; \frac{\delta \Phi_{uq}}{\delta \psi} \dot{\psi} + C_{sh} \int_{\Omega} d \bm{x}\; \frac{\delta \Phi_{sh}}{\delta \psi} \dot{\psi}
$$
\begin{equation}
+ C_{\rho} \int_{\Omega} d \bm{x}\; \left( \rho - \psi \right) \left(-\rho {\rm Tr} ( \bm{L}) \right) - C_{\rho} \int_{\Omega} d \bm{x}\; (\rho - \psi)\dot{\psi},
\label{eq:omegadot_sd}
\end{equation}
where we have used the relation, analogous to Eq. (\ref{eq:sh_diss}),
\begin{equation}
\int_{\Omega} d \bm{x}\; \frac{\delta \Phi_{uq}}{\delta \psi} \dot{\psi} = \int_{\Omega} d \bm{x}\; \left[ \frac{\partial \varphi_{e}}{\partial \bm{Q}} + C_{w} \frac{\partial \varphi_{uq}}{\partial \bm{Q}} \right] : \dot{\bm{Q}}.
\label{eq:dissip2}
\end{equation}
Complete invariance under superposed rigid motions is not customarily enforced in the geometrically linear theory, and hence certain nonlinear terms like $\bm{Q}\bm{L}$ in Eq. (\ref{eq:dissip2}) do not appear in Eq. (\ref{eq:omegadot_sd}).
Since the stress tensor is symmetric, and (infinitesimal) rotational invariance requires that the dependence of $\varphi_{e}$ on $\bm{U}$ be only through the symmetrized distortion $\bm{\epsilon}$, the dissipation relation Eq. (\ref{eq:second_lawsd}) can be written as,
$$
\int_{\Omega} d \bm{x} \left[ \bm{T} - \frac{\partial \varphi_{e}}{\partial \bm{\epsilon}} + C_{\rho} \rho (\rho - \psi) \bm{I} \right] : \bm{L}_{sym} + \int_{\Omega} d \bm{x}\; \frac{\partial \varphi_{e}}{\partial \bm{\epsilon}} : (\bm{\alpha} \times \bm{V}) +
$$
\begin{equation}
+ \int_{\Omega} d \bm{x}\; \left[ - \frac{\delta \Phi_{uq}}{\delta \psi} - C_{sh} \frac{\delta \Phi_{sh}}{\delta \psi} + C_{\rho} \frac{\delta \Phi_{\rho\psi}}{\delta \psi} \right] \dot{\psi} \ge 0,
\end{equation}
where we have used the notation
$$
\Phi_{\rho \psi}= \frac{1}{2} \int_{\Omega} d \bm{x}\; (\rho - \psi)^{2}.
$$
With this form of the dissipation inequality, we can identify the stress and the remaining quantities. The reversible part of the stress is
\begin{equation}
\bm{T}^{R} = \frac{\partial \varphi_{e}}{\partial \bm{\epsilon}} - C_{\rho} \rho(\rho-\psi) \bm{I},
\label{eq:stress_sd}
\end{equation}
with the dissipative part nominally given by the same expression as in Sec. \ref{sec:theory}. The defect velocity is driven by the standard Peach--Koehler force,
\begin{equation}
\bm{V} = \bm{M} \bm{X} :\left[ \left( \frac{\partial \varphi_{e}}{\partial \bm{\epsilon}} \right)^{T} \bm{\alpha} \right]
\label{eq:pk}
\end{equation}
with $\bm{M}$ a mobility tensor, positive definite. Finally, as in Sec. \ref{sec:theory}, the reversible part of the evolution of the order parameter is $\dot{\psi} = 0$. Adding the dissipative contribution, we arrive at the equation governing the evolution of the phase field,
\begin{equation}
\dot{\psi} = L \left[ -C_{sh} \frac{\delta \Phi_{sh}}{\delta \psi} - \frac{\delta \Phi_{uq}}{\delta \psi} + C_{\rho} \frac{\delta \Phi_{\rho\psi}}{\delta \psi} \right].
\label{eq:pf_sd}
\end{equation}
The constant $L > 0$ is a scalar mobility.
The complete set of equations includes mass and momentum conservation, Eqs. (\ref{eq:mass_con}) and (\ref{eq:mom_con}), the simpler kinematic laws valid for small deformations (\ref{eq:kinematics_sd}), and the phenomenological currents in Eqs. (\ref{eq:stress_sd}), (\ref{eq:pk}), and (\ref{eq:pf_sd}).
\section{Discussion and conclusions}
We have reformulated the Phase Field Crystal model to account for the necessary microscopic independence between the phase field, reflecting the symmetry of the phase, and both the mass density and the elastic distortion. Although these quantities are related in equilibrium through a macroscopic equation of state, they are independent variables in the free energy, and can be independently varied in evaluating the dissipation functional that expresses the Second Law. We have therefore introduced an independent configurational distortion tensor $\bm{P}$ which is a pointwise functional of the phase field $\psi$, but independent of the elastic distortion $\bm{W}$. It captures the local state of distortion of $\psi$, including any topological defects. The latter are located in regions in which $\mathop{\rm curl}\nolimits \bm{P} \neq \bm{0}$, in analogy with the incompatibility condition for the elastic distortion, $\mathop{\rm curl}\nolimits \bm{W} = - \bm{\alpha}$. In addition, we explicitly include a mass density $\rho$ which is independent of the phase field $\psi$. These considerations assume that the phase field $\psi$ is a non-conserved, broken symmetry variable that reflects the symmetry of the system under study, but that is independent of both mass and distortion.
In order to realistically model defect motion in a crystalline phase, choices need to be made for the magnitudes of the coupling terms in the free energy linking the phase variable $\psi$ on the one hand, and $\bm{W}$ and $\rho$ on the other. Given a material dependent magnitude of the elastic constant tensor $|\bm{C}|$, we assume that $C_{sh} \sim C_{w} \gg |\bm{C}|$. These conditions ensure fast diffusive relaxation of the phase field to accommodate the existing elastic distortion and topology constraints. As discussed in Sec. \ref{sec:theory}, this is accomplished by having the phase field relax to a local minimum of $C_{sh}\Phi_{sh}+ \frac{ C_{w}}{2} \int d \bm{x}\; \rho |\bm{W}-\bm{P}|^2$, so that the resulting elastic energy and density fluctuations will then decay on their respective time scales. The tensor difference $(\bm{W}-\bm{P})$ plays the role of the compatible strain $\bm{\epsilon}^{\delta}$ of Refs. \cite{re:skaugen18b,re:salvalaglio20}. It is zero in equilibrium, but allows $\psi$ and $\bm{W}$ to be independent otherwise.
Allowing the mass density $\rho$ to be independent of the phase field $\psi$ allows for permeation, the independent motion of mass and lattice. In the case of a monocomponent crystalline solid, for example, this dissipative mode is to be understood as vacancy diffusion. Equation (\ref{eq:op_evol}) (or Eq. (\ref{eq:pf_sd}) in the small deformation limit) can be interpreted as a permeation equation, as its right hand side equals the normal projection of $\bm{v}-\bm{v}_{\psi}$ along the surface of constant $\psi$, where $\bm{v}_{\psi}$ is the local velocity of such a surface. If $C_{\rho}$ is chosen sufficiently large, then $\rho$ and $\psi$ will locally coincide. However, the ability to separate mass density and phase field is necessary in the treatment of dislocation climb, for example.
The model also naturally incorporates mechanical boundary conditions, either applied directly to the material velocity field $\bm{v}$, or as tractions involving the stress tensor at the boundary, $\bm{T} \hat{\bm{n}}$. The phase field, also with its own natural boundary conditions, will adjust dynamically in the bulk \cite{re:skaugen18}. Solution procedures for the dislocation mechanics part of the problem at small and finite deformations are detailed in \cite{arora2020finite,roy2005finite}; these are non-standard systems taking into account the nonlinear transport of the dislocation density field and the calculation of nonlinear stress fields of dislocation distributions. The numerical computation of the coupled model presented here is material for future work.
We close by noting that the formulation developed is applicable not only to crystalline solids, but also to other broken symmetry phases such as colloidal, columnar, and smectic phases.
\section*{Acknowledgments}
JV's research has been supported by the National Science Foundation, Grant No. DMR-1838977.
\bibliographystyle{ieeetr}
\section*{Generative Machine Learning}
Many machine learning applications are concerned with pattern
recognition problems in the supervised setting. Recently, however, a
rather different set of problems has received significant
attention, where the goal is to \textit{generate} patterns rather than
\textit{recognize} them. Application domains are numerous and diverse
and very often involve the generation of data for multimedia
environments. Examples include natural
images~\cite{radford_unsupervised_2015},
videos~\cite{vondrick_generating_2016},
paintings~\cite{elgammal_can:_2017},
text~\cite{yu_seqgan:_2016},
and music~\cite{yang_midinet:_2017,yu_seqgan:_2016,boulanger-lewandowski_modeling_2012}.
Pattern generation is closely related to unsupervised learning, where
a dataset $\{x^{(1)},\dots,x^{(n)}\}$ of patterns, sampled from an
unknown distribution $p$, is given as input to a learning algorithm
whose task is to estimate $p$ or to extract useful information about
the structure of $p$ such as clusters (i.e.\, groups of similar
patterns) or support (i.e.\, regions of high density, especially when
it consists of a low-dimensional manifold). In pattern generation,
however, we are specifically interested in \textit{sampling} new
patterns from a distribution that matches $p$ as well as possible. Two
important techniques for pattern generation are generative adversarial
networks and (variational) autoencoders, briefly reviewed in the
following.
\subsection*{Generative Adversarial Networks}
Generative Adversarial Networks
(GANs)~\cite{goodfellow_generative_2014} consist of a pair of neural
networks: a \textit{generator}, $\mathcal{G}:\mathds{R}^d\mapsto\mathds{R}^m$,
parameterized by weights $w_g$, and a \textit{discriminator},
$\mathcal{D}:\mathds{R}^m\mapsto \{0,1\}$, parameterized by weights
$w_d$. The generator receives as input a vector $z\in\mathds{R}^d$ sampled
from a given distribution $q$ and outputs a corresponding pattern
$\mathcal{G}(z)\in\mathds{R}^m$. We can interpret $z$ as a low-dimensional
code for the generated pattern, or as a tuple of coordinates within
the manifold of patterns. The discriminator is a binary classifier,
trained to separate \textit{true} patterns belonging to the training
dataset (positive examples) from \textit{fake} patterns produced by
the generator (negative examples). Training a GAN is based on an
adversarial game where the generator tries to produce fake patterns
that are as hard to distinguish from true patterns as possible, while
the discriminator tries to detect fake patterns with the highest
possible accuracy. At the end of training we hope to reach a game
equilibrium where the generator produces realistic patterns as
desired. The discriminator is no longer useful after
training. Equilibrium is sought by minimizing the following objective
functions, for the discriminator and for the generator, respectively:
\begin{IEEEeqnarray}{lCl}
\label{eq:discriminator-obj}
J_d(w_d) &=& \mathbb{E}_{x\sim p} \left[ L(\mathcal{D}(x),1) \right] +
\mathbb{E}_{z\sim q} \left[ L(\mathcal{D}(\mathcal{G}(z)),0) \right]\\
\label{eq:generator-obj}
J_g(w_g) &=& \mathbb{E}_{z\sim q} \left[ L(\mathcal{D}(\mathcal{G}(z)),1) \right]
\end{IEEEeqnarray}
where $L$ denotes the binary cross-entropy loss, and $q$ is either a
uniform distribution on a compact subset of $\mathds{R}^d$ or,
alternatively, a Gaussian distribution with zero mean and unit
variance. Since $p$ is not accessible, the expectation in
Eq.~(\ref{eq:discriminator-obj}) is replaced by its empirical value on
the training sample; expectations over $q$ are likewise approximated
by sampling noise vectors. Optimization typically proceeds by
stochastic gradient descent or related algorithms, where a balanced
minibatch of real and fake examples is generated at each optimization
step.
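To fix ideas, a minimal sketch of one such optimization step for the
two objectives is given below; it assumes the PyTorch library, and the
networks \texttt{G} and \texttt{D} and the two optimizers are
placeholders to be supplied by the caller (architectural details are
given later).
\begin{verbatim}
import torch
import torch.nn.functional as F

def gan_step(G, D, opt_g, opt_d, x_real, d):
    n = x_real.size(0)
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)
    # Discriminator step: minimize J_d on a balanced minibatch.
    z = torch.randn(n, d)              # q: standard Gaussian noise
    x_fake = G(z).detach()             # no gradient into the generator
    loss_d = (F.binary_cross_entropy(D(x_real), ones)
              + F.binary_cross_entropy(D(x_fake), zeros))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator step: minimize J_g (fool the discriminator).
    z = torch.randn(n, d)
    loss_g = F.binary_cross_entropy(D(G(z)), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
\end{verbatim}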
\subsection*{Autoencoders and Variational Autoencoders}
Autoencoders also consist of a pair of networks: an encoder,
$\mathcal{E}$, parameterized by weights $w_e$, that maps an input
pattern $x\in\mathds{R}^m$ into a latent code vector
$z=\mathcal{E}(x)\in\mathds{R}^d$, and a decoder, $\mathcal{D}$,
parameterized by weights $w_d$, mapping latent vectors $z\in\mathds{R}^d$
back to the pattern space $\mathds{R}^m$. In this case, the two networks
are stacked one on the top of the other to create a composite function
$\mathcal{D} \circ \mathcal{E} :\mathds{R}^m\mapsto\mathds{R}^m$, and the
overall model is trained to reproduce its own inputs at the
output. Since typically $d\ll m$, the model is forced to develop a
low-dimensional representation that captures the manifold of patterns
associated with the data distribution $p$. Training is
performed by minimizing the objective
\begin{IEEEeqnarray}{lCl}
\label{eq:autoencoder-obj}
J(w_e,w_d) &=& \mathbb{E}_{x\sim p} \left[ L(\mathcal{D}(\mathcal{E}(x)),x) \right]
\end{IEEEeqnarray}
where the parameters $w_e$ and $w_d$ are optimized jointly and $L$ is
an appropriate reconstruction loss.
Variational autoencoders (VAEs)~\cite{kingma_auto-encoding_2013} also
consist of an encoder and a decoder, but they bear a probabilistic
interpretation. To generate a pattern, we first sample a vector
$z\in\mathds{R}^d$ from a prior distribution $p(z)$ (usually a multivariate
Gaussian with zero mean and unit variance), and we then apply $z$ as
input to the decoder, in order to obtain $p(x|z)$. The encoder in this
case produces an approximation $q(z|x)$ to the intractable posterior
$p(z|x)$. Specifically, $q(z|x)$ is a multivariate Gaussian
whose mean $\mu(x)$ and diagonal covariance $\sigma(x)$ are computed
by the encoder network $\mathcal{E}$ receiving a pattern $x$ as
input. A VAE is then trained to minimize the difference between the
Kullback-Leibler divergence
\begin{IEEEeqnarray}{lCl}
\label{eq:vae-loss1}
\mathrm{KL}(q(z|x)||p(z)) & = & \int q(z|x) \log \frac{q(z|x)}{p(z)} dz\\
& = & -\frac{1}{2} \sum_{j=1}^d
\left( 1 + \log \sigma^2_j(x) - \mu^2_j(x) - \sigma^2_j(x) \right) \nonumber
\end{IEEEeqnarray}
and the log conditional likelihood
\begin{IEEEeqnarray}{lCl}
\label{eq:vae-loss2}
\log p(x|z) = - \mathbb{E}_{x\sim p} \left[ L(x,\mathcal{D}(z)) \right].
\end{IEEEeqnarray}
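For reference, a minimal sketch of the resulting training loss (the
negative evidence lower bound) with the usual reparameterization trick
is given below; it assumes PyTorch and an encoder returning the pair
$(\mu(x), \log \sigma^2(x))$.
\begin{verbatim}
import torch
import torch.nn.functional as F

def vae_loss(encoder, decoder, x):
    mu, log_var = encoder(x)                    # parameters of q(z|x)
    z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
    x_hat = decoder(z)                          # logistic output units
    recon = F.binary_cross_entropy(x_hat, x, reduction='sum')
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return recon + kl                  # KL(q||p) + reconstruction loss
\end{verbatim}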
\subsection*{Deep and Recurrent Networks}
All the above networks (generator and discriminator for GANs, decoder
and encoder for VAEs) can be constructed by stacking several
neural network layers. In particular, our encoder for the VAE was
based on three bidirectional long-short-term-memory
(LSTM)~\cite{hochreiter_long_1997} recurrent layers with tanh
nonlinearities, followed by four fully connected layers with ReLU
nonlinearities, ending in a representation of size $d=4$. LSTM
layers were used to capture the temporal structure of the data and, in
particular, the correlations among note-on MIDI events within a drum
pattern. Convolutional layers could have also been employed and we
found that they produce similar reconstruction errors during
training. We developed a slight aesthetic preference towards LSTM
layers in our preliminary listening sessions during the development of
the VAE, although differences compared to convolutional layers were
not very strong.
The decoder simply consisted of five fully connected
layers with ReLUs. We used logistic units on the last layer of the
decoder and a binary cross-entropy loss for comparing reconstructions
against true patterns, where MIDI velocities were converted into
probabilities by normalizing them in [0,1]. Details on the
architecture are visible in Figure~\ref{fig:architecture}.
The discriminator and the generator networks for the GAN had
essentially the same architectures as the encoder and the decoder for
the VAE, respectively, except that the GAN discriminator terminates
with a single logistic unit, and that for the GAN we used a slightly
smaller (two-dimensional) noise space, in order to exploit the
``swirling'' explorer described below in the ``autonomous
drumming'' subsection.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=.49\textwidth]{LSTM-encoder}
~\\
~\\
\includegraphics[width=.3\textwidth]{LSTM-decoder2}
\caption{\label{fig:architecture}
Architecture of the variational autoencoder used to
interpolate drum patterns. Top: Encoder; Bottom: Decoder.}
\end{center}
\end{figure}
\subsection*{Electronic Dance Music Dataset}
One of the authors, who is a professional musician, used his in-depth
knowledge of EDM to compose a collection of drum patterns
representative of three genres: Electro, Techno, and Intelligent Dance
Music (IDM). In all patterns, the following six instruments of a
Roland TR-808 Rhythm composer drum machine were used: bass drum, snare
drum, closed hi-hat, open hi-hat, rimshot, and cowbell. The TR-808
(together with its sisters, the TR-606 and TR-909) was integral to the
development of electronic dance music, and these six instrument sounds
are still widely used in EDM genres today, which makes them suitable
for our interpolation approach. All patterns are one measure (4 bars)
long, and quantized to 1/16th note on the temporal scale. At the
intended tempo of 129 BPM, it takes 7.44s to play one
measure. Patterns were constructed with the help of the Ableton Live
music production software, and delivered in the form of standard
MIDI files. After checking for duplicates, a data set consisting of
1782 patterns resulted, which is summarized in
Table~\ref{tab:patterns}.
Each drum pattern was represented as a two-dimensional array whose
first and second axes are associated with the six selected drum
instruments and the temporal position at which a MIDI note-on event
occurs, respectively. Note durations were not included in the
representation as they are irrelevant for our choice of percussive
instruments. The duration of four measures results in a $6\times 64$
array for each pattern. Values (originally in the integer range [0,127], then
normalized in [0,1]) correspond to MIDI velocities and were used
during dataset construction mainly to represent dynamic accents or
ghost (echoing) notes that may be present in some musical styles. In
our representation, a zero entry in the array indicates the absence of
a note-on event.
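Building the arrays is straightforward; the sketch below constructs
one pattern from a hypothetical, made-up list of note-on events (the
event tuples are purely illustrative).
\begin{verbatim}
import numpy as np

# Hypothetical events: (instrument 0-5, 1/16th-note step 0-63, velocity)
events = [(0, 0, 127), (2, 4, 90), (1, 16, 110)]

pattern = np.zeros((6, 64), dtype=np.float32)
for instrument, step, velocity in events:
    pattern[instrument, step] = velocity / 127.0  # normalize to [0, 1]
# pattern[i, t] == 0 encodes the absence of a note-on event
\end{verbatim}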
\begin{figure*}[!t]
\begin{center}
\includegraphics[width=.99\textwidth]{patterns}
\caption{\label{fig:patterns} Ten sample drum patterns in
the EDM dataset. Instruments from the top are
(1): bass drum, (2): snare drum, (3): closed hi-hat, (4): open
hi-hat, (5): rimshot, (6): cowbell. Pixel intensities correspond
to MIDI velocities. Top row: Electro-Funk; mid two rows: IDM;
bottom two rows: Techno.}
\end{center}
\end{figure*}
\begin{table}[htp]
\caption{Electronic Dance Music Dataset}
\label{tab:patterns}
\begin{center}
\begin{tabular}{lS[table-format = 5.0]S[table-format = 5.0]l}
{\bf Style} & {\bf \# of patterns} & \multicolumn{2}{c}{\bf Playing time} \\
\hline
IDM & 608 & 4,525s &(1h 15m 25s) \\
Electro & 690 & 5,135s &(1h 25m 35s) \\
Techno & 484 & 3,602s &(1h 0m 2s) \\
\hline
Total & 1,782 & 13,261s &(3h 41m 1s) \\
\end{tabular}
\end{center}
\end{table}
\section*{Generating interpolations}
\begin{figure}[!t]
\begin{center}
\includegraphics[width=.5\textwidth]{Interpolations2}
\caption{\label{fig:interpolations}
Building transitions by
interpolating drum patterns in their representation space.}
\end{center}
\end{figure}
Both techniques discussed above were used to generate sequences of
drum patterns that interpolate between genres.
\subsection*{Using VAEs for start-goal interpolations}
When using VAEs, it is straightforward to create an interpolation
between a starting pattern $x_s$ and a goal pattern $x_g$ as follows
(see also Figure~\ref{fig:interpolations}; a code sketch is given after the list):
\begin{enumerate}
\item Apply the encoder $\mathcal{E}$ to the endpoint patterns to obtain the
associated coordinates in the manifold space of the autoencoder:
$z_s=\mathcal{E}(x_s)$ and $z_g=\mathcal{E}(x_g)$;
\item For a given interpolation length, $L$, construct a sequence of
codes in the manifold space:
$\langle z_0=z_s, z_1, \dots, z_L=z_g\rangle$
\item Apply the decoder $\mathcal{D}$ to each element of this
sequence, to obtain a sequence of patterns:
$\langle \mathcal{D}(z_0), \dots, \mathcal{D}(z_L)\rangle$; note
that (unless the autoencoder underfits the dataset)
$\mathcal{D}(z_0)\approx x_s$ and $\mathcal{D}(z_L)\approx x_g$.
\end{enumerate}
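In code, the three steps above can be sketched as follows;
\texttt{encoder} and \texttt{decoder} denote the trained networks, and
\texttt{mix} is any interpolation rule in code space, such as those of
the next subsection.
\begin{verbatim}
def interpolate(encoder, decoder, x_start, x_goal, L, mix):
    z_s, z_g = encoder(x_start), encoder(x_goal)           # step 1
    codes = [mix(z_s, z_g, i / L) for i in range(L + 1)]   # step 2
    return [decoder(z) for z in codes]                     # step 3
\end{verbatim}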
\subsection*{Linear and spherical interpolation}
In the case of linear interpolation (LERP), the sequence of codes is
defined as
\begin{equation}
\label{eq:lerp}
z_i = (1-\mu_i) z_s + \mu_i z_g
\end{equation}
for $\mu_i=i/L$, $i=0, \dots, L$. In the case of spherical
interpolation (SLERP), the sequence is
\begin{equation}
\label{eq:slerp}
z_i = \frac{z_s \sin(\theta(1-\mu_i)) + z_g \sin(\theta\mu_i)}{\sin(\theta)}
\end{equation}
where $\theta=\arccos\left(\frac{z_s^{\top} z_g}{\|z_s\|\,\|z_g\|}\right)$.
\cite{white_sampling_2016} offers a thorough discussion of the
benefits of SLERP in the case of image generation. We found that SLERP
interpolations produced musically more adventurous and expressive
results and thus we used them in our experimental evaluation.
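Both rules are short in code; the NumPy sketch below implements
Eqs. (\ref{eq:lerp}) and (\ref{eq:slerp}), falling back to LERP when
the codes are nearly collinear and SLERP is ill-conditioned.
\begin{verbatim}
import numpy as np

def lerp(z_s, z_g, mu):
    return (1 - mu) * z_s + mu * z_g

def slerp(z_s, z_g, mu):
    cos = np.dot(z_s, z_g) / (np.linalg.norm(z_s) * np.linalg.norm(z_g))
    theta = np.arccos(np.clip(cos, -1.0, 1.0))
    if np.isclose(np.sin(theta), 0.0):
        return lerp(z_s, z_g, mu)      # degenerate case: fall back
    return (np.sin((1 - mu) * theta) * z_s
            + np.sin(mu * theta) * z_g) / np.sin(theta)
\end{verbatim}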
\subsection*{Crossfading vs.\ interpolation in the representation space}
\begin{figure*}[!t]
\begin{center}
\includegraphics[width=0.63\textwidth]{mnist.pdf}
\caption{\label{fig:interpolating-vs-xfading} Top: Interpolation
in the pattern space (i.e., crossfading) between two MNIST characters; Bottom:
interpolation in the representation space.}
\end{center}
\end{figure*}
We remark on the significance of performing the interpolation in the
representation space: rather than generating a weighted average of two
patterns (as it would happen with crossfading, which consists of a
linear combination as in Eq.~\ref{eq:lerp} but using identity
functions instead of $\mathcal{E}$ and $\mathcal{D}$), we generate at
each step $i$ a \textit{novel drum pattern} from the learned
distribution. To help the reader with a visual analogy, we show in
Figure~\ref{fig:interpolating-vs-xfading} the difference between
interpolation in pattern space (crossfading) and in representation
space using two handwritten characters from the MNIST dataset.
\subsection*{Pattern novelty}
A quantitative measure of quality and novelty of patterns generated by
models such as VAEs or GANs is not readily available. We observed,
however, that several of the patterns produced by interpolating between
start and goal patterns in our dataset are genuinely new. In
Figure~\ref{fig:PCA} we visualize the result of two-dimensional principal components analysis
(PCA) showing all training set patterns and those generated by
interpolating between a subset of them. It can be seen that
trajectories tend to respect the distribution of the training data but
include new data points, showing that novel patterns are indeed
generated in the transitions.
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.45\textwidth]{PCA-transitions-lstm_goodsimp4-slerp.png}
\end{tabular}
\caption{PCA plot of training data (black dots) and a set of
possible start-goal interpolations obtained with a deep LSTM VAE
(labeled by the genres of the start and goal patterns).}
\label{fig:PCA}
\end{center}
\end{figure}
\subsection*{A software instrument for start-goal interpolations}
\label{subsec:interface}
The trained VAE (in the form of a Tensorflow model) was embedded as a
plugin in Ableton Live Suite 9 for Mac OS, a program that is widely
used by performing and producing musicians in EDM, and that enables
the construction of software instruments via the programming
environment \textit{Max for Live}.
During performance, musicians first specify a start and a goal pattern
(chosen from the dataset), and the length of the interpolation. This
can be conveniently done within the Live user interface. The
controller (a small Python script) then produces the required sequence
of patterns using the VAE and the resulting MIDI notes are sent to
\textit{Live} to be rendered in audio with a user-specified
soundset. The whole process is fast enough for real-time usage.
\subsection*{Using GANs for autonomous drumming}
In the case of GANs, Step 1 of the procedure we used to create
start-goal interpolations with VAEs is not readily available. We
attempted to ``invert'' the generator network using the procedure
suggested in~\cite{creswell_inverting_2016} but our success was
limited since training patterns are largely not reproducible by the
generator.
Although unsuitable for start-goal interpolations, we found that GANs
are very effective for creating an autonomous drummer by exploring the
noise space in a smooth way. Exploration can be designed in many ways
and here we propose a very simple approach based on the following
complex periodic function
\begin{equation}
\label{eq:swirl}
f(t,\omega_1,\omega_2,\omega_3,\omega_4) \doteq
e^{\omega_1 jt} -
\frac{e^{\omega_2 jt}}{2} +
\frac{je^{\omega_3 jt}}{3} + \frac{e^{\omega_4 jt}}{4}
\end{equation}
for $t\in[0,2\pi]$ and constants $\omega_1=2$, $\omega_2=19$,
$\omega_3=-20$, $\omega_4=20$. Using a GAN with $d=2$, the real and
the imaginary part of $f$ are used to form the two components of
vector $z$. The resulting ``swirl'' in noise space is illustrated in
Figure~\ref{fig:swirl}.
\begin{figure}
\begin{center}
\includegraphics[width=0.35\textwidth]{swirlm2}
\caption{Swirl in GAN noise space associated with Eq.~\ref{eq:swirl}.}
\label{fig:swirl}
\end{center}
\end{figure}
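Evaluating Eq.~(\ref{eq:swirl}) is straightforward; the NumPy sketch
below generates a sequence of noise vectors tracing one full sweep,
each of which is then fed to the trained generator.
\begin{verbatim}
import numpy as np

def swirl(t, w=(2, 19, -20, 20)):
    f = (np.exp(1j * w[0] * t) - np.exp(1j * w[1] * t) / 2
         + 1j * np.exp(1j * w[2] * t) / 3 + np.exp(1j * w[3] * t) / 4)
    return np.stack([f.real, f.imag], axis=-1)  # 2-D noise-space points

t = np.linspace(0.0, 2.0 * np.pi, 512)
Z = swirl(t)   # Z[k] is the input z for the k-th generated pattern
\end{verbatim}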
\section*{Evaluation experiments}
Although patterns generated by VAEs and GANs are novel, we still need
to establish that they do add something new to the current practice of
EDM and that they are of interest to its practitioners. To this end,
we designed three experiments where we asked professional musicians
to assess the quality of the generated patterns. The
\textit{identification experiment} aims to verify if practitioners are
able to tell start-goal interpolations apart from start-goal
crossfades; the \textit{task experiment} aims to assess how much
musicians appreciated and were able to make use of the drum
interpolation as a compositional tool; the \textit{robot experiment}
aims to rate the aesthetic quality of the autonomous drumming produced by the GAN
when generating patterns by swirling in its noise space.
The goal was to answer the following questions:
\noindent
\textbf{Q1}: Are musicians able to tell interpolations and crossfades
between genres apart during listening sessions?
\noindent
\textbf{Q2}: How do practitioners rate the novelty, adequacy, and
style of the ``instrument'' for creating interpolations between
genres?
\noindent
\textbf{Q3}: Are the drum tracks generated by moving or interpolating
smoothly in the representation space of VAEs and GANs useful as a
material for musicians in composition and performance?
\subsection*{Identification experiment}
The goal of the experiment was to answer Q1.
Subjects were asked to listen to pairs of transitions, a crossfade
and an interpolation. Both \textit{straight} and \textit{mixed} pairs
were formed, in which starting and goal patterns were identical or
different, respectively. Three drum patterns for each of the three
genres were chosen from the dataset. Nine different transitions using
these patterns were specified in a design that includes a transition
for each possible pair of genres in both directions, as well a
transition within each of the three genres. Interpolations and
crossfades had a length of 6 measures (24 bars, 44.7s playing
time). For interpolations, the endpoints were the VAE's
reconstructions of the start and goal pattern. Crossfades were
produced using a standard function (equal power) of Logic Pro X.
The difference between an interpolation and a crossfade was explained
to the subjects in the visual domain using an animated version of
Figure~\ref{fig:interpolating-vs-xfading}. Every subject was asked to
tell apart 6 pairs, preceded by one practice pair to get acquainted
with the procedure, and received no feedback on the correctness of
their answers.
\subsection*{Task experiment}
The goal of the experiment was to answer Q2 and Q3. We used the
creative product analysis model (CPAM)~\cite{besemer1999confirming},
that focuses on the following three \textit{factors}: Novelty,
Resolution, and Style. Each factor is characterized by a number of
facets that further describe the product. For each facet, there is a
7-point scale built on a semantic differential: subjects are asked to
indicate their position on the scale between two bipolar words (also
referred to as anchors). Novelty involves two facets: Originality and
Surprise. Resolution considers how well the product does what it is
supposed to do and has four facets: Logicality, Usefulness, Value, and
Understandability. Style considers how well the product presents
itself to the customer and has three facets: Organicness,
Well-craftedness, and Elegance. In this experiment, subjects were
allowed to choose start and goal patterns from those available in the
dataset in order to create their own interpolations using our Ableton
Live interface.
\subsection*{Robot experiment}
The goal of the experiment was to answer Q3. We used in this case the
Godspeed questionnaire~\cite{bartneck2009godspeed}, a well-known set of
instruments designed to measure the perceived quality of robots, based
on subjects' observations of a robot's behavior in a social setting.
They consist of 5-point scales based on semantic differentials. In
our case, observation is limited to hearing the artificial agent drum
and thus we chose to measure only two factors: Animacy (three facets:
Lively, Organic, Lifelike) and Perceived Intelligence (three facets:
Competent, Knowledgeable, Intelligent).
A long interpolation of 512 bars (128 measures) was generated using
the trained GAN, by ``sweeping'' the noise space with the complex
periodic function of Eq.~(\ref{eq:swirl}).
Six segments of 60 bars each were selected from the MIDI file, each consisting of 14
measures preceded and followed by half a measure (2 bars) for leading
in and out. These MIDI files were rendered into sound using an
acoustic drum soundset in Logic Pro X (Drum Designer/Smash kit), where
the parts of the rimshot and cowbell were transposed to be played by
toms. Acoustic rather than electronic drum sounds were used to
facilitate the comparison with human drumming.
Subjects were instructed that they were going to listen to an
improvisation by an algorithmic drummer, were presented with one of
the six audio files (distributed evenly over the subject population),
and were asked to express a judgment on animacy and perceived
intelligence.
\subsection*{Experimental procedure}
The experiments were conducted with subjects active in the wider field
of electronic music (DJs, producers, instrumentalists, composers,
sound engineers), who were familiar with the relevant genres of
EDM\@. Their experience in electronic music ranged from 2--30 years
(median 7 years, average 8.75). They were recruited by the authors
from educational institutes and the local music scenes in Krakow (PL),
Cuneo and the wider Firenze area (IT), and Eindhoven (NL). Experiments
took place in a class room or music studio setting, where subjects
listened through quality headphones or studio monitors. All audio
materials in the experiment were prepared as standard stereo files
(44.1 kHz, 16 bits).
\begin{figure*}
\begin{tabular}{cc}
\begin{tikzpicture}
\begin{axis}
[
ytick={1,2,3,4,5,6,7,8,9},
yticklabels={{\small Originality}, {\small Surprise}, {\small Logicality}, {\small Usefulness},{\small Value}, {\small Understandability}, {\small Organicness}, {\small Well-craftedness}, {\small Elegance}},
]
\addplot+[
boxplot prepared={
median=6,
upper quartile=7,
lower quartile=5.75,
upper whisker=7,
lower whisker=5
}, fill,draw=black,
] coordinates {};
\addplot+[
boxplot prepared={
median=6,
upper quartile=6.25,
lower quartile=6,
upper whisker=7,
lower whisker=4
}, fill,draw=black,
] coordinates {};
\addplot+[
boxplot prepared={
median=6,
upper quartile=7,
lower quartile=5.75,
upper whisker=7,
lower whisker=4
}, fill,draw=black,
] coordinates {};
\addplot+[
boxplot prepared={
median=6,
upper quartile=6,
lower quartile=6,
upper whisker=7,
lower whisker=5
}, fill,draw=black,
] coordinates {};
\addplot+[
boxplot prepared={
median=7,
upper quartile=7,
lower quartile=6,
upper whisker=7,
lower whisker=5
}, fill,draw=black,
] coordinates {};
\addplot+[
boxplot prepared={
median=6,
upper quartile=6.25,
lower quartile=5,
upper whisker=7,
lower whisker=5
}, fill,draw=black,
] coordinates {};
\addplot+[
boxplot prepared={
median=6,
upper quartile=6.25,
lower quartile=5.75,
upper whisker=7,
lower whisker=4
}, fill,draw=black,
] coordinates {};
\addplot+[
boxplot prepared={
median=6,
upper quartile=7,
lower quartile=5.75,
upper whisker=7,
lower whisker=4
}, fill,draw=black,
] coordinates {};
\addplot+[
boxplot prepared={
median=6,
upper quartile=6,
lower quartile=6,
upper whisker=7,
lower whisker=5
}, fill,draw=black,
] coordinates {};
\end{axis}
\end{tikzpicture}
&
\begin{tikzpicture}
\begin{axis}
[
ytick={1,2,3,4,5,6},
yticklabels={{\small Lively}, {\small Organic}, {\small Lifelike}, {\small Competent},{\small Knowledgeable},{\small Intelligent}},
]
\addplot+[
boxplot prepared={
median=4,
upper quartile=5,
lower quartile=3,
upper whisker=5,
lower whisker=2
}, fill,draw=black,
] coordinates {};
\addplot+[
boxplot prepared={
median=4,
upper quartile=4,
lower quartile=3,
upper whisker=5,
lower whisker=1
}, fill,draw=black,
] coordinates {};
\addplot+[
boxplot prepared={
median=3,
upper quartile=4,
lower quartile=2,
upper whisker=5,
lower whisker=1
}, fill,draw=black,
] coordinates {};
\addplot+[
boxplot prepared={
median=4,
upper quartile=5,
lower quartile=4,
upper whisker=5,
lower whisker=2
}, fill,draw=black,
] coordinates {};
\addplot+[
boxplot prepared={
median=4,
upper quartile=5,
lower quartile=4,
upper whisker=5,
lower whisker=2
}, fill,draw=black,
] coordinates {};
\addplot+[
boxplot prepared={
median=4,
upper quartile=4,
lower quartile=3,
upper whisker=5,
lower whisker=2
}, fill,draw=black,
] coordinates {};
\end{axis}
\end{tikzpicture}\\
(a) & (b)
\end{tabular}
\caption{(a): Task experiment box plots (n=16, 7-point scale); (b):
Robot experiment box plots (n=38, 5-point scale).}
\label{fig:task-robot}
\end{figure*}
\section*{Results}
We now present and discuss the experimental results.
\subsection*{Identification experiment}
This experiment was conducted with 19 subjects using 18 distinct
stimulus pairs. Thirteen identification errors were made in 114 pairs. For
each pair correctly identified by a subject, 1 point was awarded (0 for
a miss). Subjects achieved an average score of $2.68\pm 0.8$ and
$2.63\pm 0.58$ (out of 3) for straight and mixed interpolations,
respectively. In total they achieved a score of $5.32\pm 1.03$ (out of
6). A Chi-squared test confirms that participants scored better than
chance $\chi^2 (19) = 25.92 $ (critical value $5.99$). Clearly,
subjects are able to tell interpolations and crossfades apart in a
musical context.
\subsection*{Task experiment}
Fifteen subjects with knowledge of the means of EDM production were
invited to construct an interpolation with the Ableton Live interface
as described above (six of them had previously participated in the
Identification experiment). We asked them to rate their experience
(process and result) on the CPAM scales. Figure~\ref{fig:task-robot}(a)
summarizes the results in a set of box plots, one for each of the
facets. Median scores for all facets are 6 (for \textit{Value} even
7). The average scores for the facets of the factor Resolution
(\textit{Logicality} 6; \textit{Usefulness} 6.13; \textit{Value} 6.5;
\textit{Understandability} 5.8) are generally slightly higher than
those for the factors Novelty (\textit{Originality} 6.13;
\textit{Surprise} 5.94) and Style (\textit{Organicness} 5.82;
\textit{Well-craftedness} 6.06; \textit{Elegance} 5.88). Although we
did not use the CPAM to compare different solutions for generating
transitions between drum tracks, subjects judged the process for
creating interpolations and its results against their background
knowledge of existing techniques such as crossfades. The relatively
high scores on all facets indicate that developing the current
prototype into an interpolation instrument will be of value to
practitioners in the field.
\subsection*{Robot experiment}
We asked 38 subjects to listen to a drum track produced by the trained
GAN and to rate the robotic drummer on the scales for Animacy and
Perceived Intelligence. Figure~\ref{fig:task-robot}(b) summarizes the result
in a set of box plots for the aspects. The median score on all aspects
is 4, with the exception of \textit{Lifelike} where it is 3. Average
scores are higher for the aspects of Perceived Intelligence
(\textit{Competent} 4.24; \textit{Knowledgeable} 3.95;
\textit{Intelligence} 3.84) than for those of Animacy (\textit{Lively}
3.89; \textit{Organic} 3.45; \textit{Lifelike} 3.13). Comments written
by the subjects indicate that they judged Perceived Intelligence
mainly with respect to the construction and evolution of the patterns,
whereas for Animacy the execution of the patterns was more prominent:
absence of small variations in timing and timbre of the drum hits
pushed their judgments towards the anchors Stagnant, Mechanical, and
Artificial. This could be addressed with standard techniques to
``humanize'' sequenced drum patterns by slightly randomizing the note
onsets and velocities, and rotating between multiple samples for each
of the instruments, but for this experiment we used the patterns
output by the GAN without such alterations. Even though this
measurement just sets a first benchmark for further development, the
high scores for \textit{Competent} and \textit{Knowledgeable} are
encouraging as they suggest that the deep learning process has
captured the genres in the dataset to a large extent.
\section*{Conclusion}
\label{sec:conclusion}
Our tool already has potential applications. First, it can be used to
improve the process of producing (and delivering) libraries of drum
patterns as the trained network can generate a large number of
patterns in the style represented by the training data. Second, it
can support the workflows of dance musicians in new ways. Generated
interpolation tracks can be recorded inside the tool to create
fragments to be used in post-production or during live performance as
a foundation on which a DJ or instrumentalist can layer further
musical elements. In addition, VAEs or GANs can be trained on
materials created by individual users, providing users with a highly
customized software instrument that ``knows'' their personal style and
is able to generate new drum tracks in this style for post-production
or in performance.
There are several directions that can be followed to further enrich
the drumming space, including the generation of tempo information for
tracks whose tempo varies over time, and the generation of additional
information for selecting drum sounds from a wide soundset. A more
ambitious direction is to extend our approach for generating whole
sets of instruments (bass lines, leads, pads, etc.) in EDM, which
involves not only note onsets but also pitch and duration.
\section{Introduction}
\label{sec:introduction}
Mathematical models of the real world are typically nonlinear;
examples in medical or biological applications can be found, for
instance, in \citet{lind:2001} or \citet{jone:plan:slee:2010}. Setting
up prior distributions in a statistical analysis of nonlinear models,
however, often remains a challenge. If external, numerical or
non-numerical information exists, one can try to quantify it into a
probability distribution, see for example the works of O'Hagan et
al. (2006), \nocite{ohag:2006}\citet{born:icks:2009b}, and
\citet{neue:capk:bran:2010}. The classical approach in the absence of
substantive information is Jeffreys prior distribution (or variants),
given by $p\left(\bm \theta \right) \propto \sqrt{{\det(I(\bm
\theta))}}$, where $\bm \theta \in \bm \Theta \subset
\mathbb{R}^p$ is the parameter, and $I\left(\bm \theta \right)$ the
Fisher information matrix of the underlying statistical model. See
\citet{kass:wass:1996}, \citet[ch. 5]{ghos:dela:sama:2006} or
\citet{berg:bern:sun:2009} for this approach and generalizations. A
serious drawback is the fact that this prior can depend on observed
covariates. In the case of nonlinear regression analysis, the prior
depends on the design points and relative allocations to these points
and thus violates the likelihood principle. Apart from the
foundational issues this raises (see, \textit{e.g.},
\citet[ch. 3]{ohag:fors:2004}) it also has undesirable practical
consequences. For Bayesian optimal design calculations in nonlinear
regression models, for example, Jeffreys prior cannot be used, because
it depends on the design points, which is what we want to calculate in
the optimal design problem. In the context of adaptive dose-finding
clinical trials, patients are allocated dynamically to the doses
available (see the works of \citet{muel:berr:grie:2006} or Dragalin et
al. (2010)\nocite{drag:2010}) so that the sequential analysis of the
data will differ from the analysis combining all data, when using
Jeffreys rule. In summary, the main issue with the Jeffreys prior
distribution is that one cannot state it before data collection, which
is crucial in some applications. Surprisingly few proposals have been
made to overcome this situation. In current practice often uniform
distributions for $\bm \theta$ on a reasonable compact subset of the
parameter space are used. This approach is however extremely sensitive
to the chosen parametrization (which might be more or less arbitrary)
and can be much more informative than one would expect intuitively.
To illustrate the point, we will use a simple example. Suppose one
would like to analyse data using the exponential model $\exp(-\theta
x)$, here with $x\in [0,10]$, which could be the mean function in a
regression analysis. Assume that no historical data or practical
experiences related to the problem are available.
\begin{figure}
\centerline{%
\includegraphics[width=0.9\textwidth]{expo1.eps}}
\caption{(i) Display of the uniform distribution on the $\theta$ scale;
(ii) display of the regression function $\exp(-\theta x)$
for $\theta=0$, $\theta = 5$, and the $\theta$ values corresponding to the
$i/10$ quantiles, $i=1,\ldots,9$, of the uniform distribution.}
\label{fig:expo1}
\end{figure}
A first pragmatic approach in this situation is to use a uniform
distribution on $\theta$ values leading to a reasonable shape coverage
of the underlying regression function $\exp(-\theta x)$; for example,
the interval $\theta \in [0,5]$ covers the underlying shapes almost
entirely. The consequences of assuming a uniform prior on $[0,5]$ can
be observed in Figure \ref{fig:expo1} (ii). While the prior is uniform
in $\theta$ space, it places most of its prior probability mass on the
functional shapes that decrease quickly towards zero, and we end up
with a very informative prior distribution in the space of functional
shapes. This is highly undesirable when limited prior information
regarding the shape is available. In addition it depends crucially on
the upper bound selected for $\theta$, and a uniform distribution in
an alternative parametrization would lead to an entirely different prior
in the space of shapes. One way to overcome these problems is to use a
distribution that is uniform in the space of functional shapes of the
underlying nonlinear function. This will be uninformative from the
functional viewpoint and will not depend on the selected
parameterization.

In finite-dimensional situations it is a standard approach to use
distributions that are uniform in an interpretable parameter
transformation, when it is difficult to use the classical default
prior distributions. In the context of Dirichlet process mixture
modelling, one can use a uniform distribution on the probability that
two observations cluster into one group and then transfer this into a
prior distribution for the precision parameter of the Dirichlet
process. In the challenging problem of assigning a prior distribution
for variance parameters in hierarchical models, \citet{dani:1999}
assumes a uniform distribution on the shrinkage coefficient and then
transfers this to a prior distribution for the variance parameter. In
these cases the standard change of variables theorem can be used to
derive the necessary uniform distributions. When we want to impose a
uniform distribution in the space of functional shapes of an
underlying regression function, however, it is not entirely obvious
how to construct such a distribution. In the next section we will
review a methodology that allows one to construct uniform distributions on
general metric spaces. In Section \ref{sec:nonreg} we will adapt this
Section \ref{sec:appl} we test the priors for nonlinear regression on
a data set from a dose-finding trial, a simulation study and an
optimal design problem.
\section{Methodology}
\label{sec:methodology}
\subsection{General Approach}
\label{sec:general-approach}
Suppose one would like to find a prior distribution for a parameter
$\bm \theta$ in a compact subspace $\bm \Theta \subset
\mathbb{R}^p,\,p < \infty$. The approach proposed in this paper is to
map the parameter $\bm \theta$ from $\bm \Theta$ into another compact
metric space $(M,d)$, with metric $d$, using a differentiable
bijective function $\varphi: \bm \Theta \rightarrow M$,
so that $\varphi(\bm \theta) = \bm \phi \in M$. The metric $d$ should
ideally define a reasonable measure of closeness and distance between
the parameters, and its choice will of course be model and application
dependent. In the exponential regression example, for instance, it
seems adequate to measure the distance between two parameter values
$\theta'$ and $\theta''$ by a distance between the resulting functions
$\exp(-x\theta')$ and $\exp(-x\theta'')$, rather than the Euclidean
distance between the plain parameter values. In this metric space
$(M,d)$, one then imposes a uniform distribution, reflecting the
appropriate notion of distance of the metric space $(M,d)$, and
transforms this distribution back to the parameter scale.

The construction of a uniform distribution in general metric spaces
has been described by \citet{demb:1990}, using the notion of packing
numbers. \citet{ghos:ghos:rama:1997} apply this result for two
particular Bayesian applications (derivation of Jeffreys prior for
parametric problems and nonparametric density estimation). In the
following we review and adapt this theory to our situation. Some basic
mathematical notions are needed to present the ideas: define an
$\epsilon$-net as a set $S_\epsilon \subset M$ such that $d(\bm
\phi',\bm \phi'') \geq \epsilon$ holds for all distinct $\bm \phi',\bm
\phi'' \in S_\epsilon$, and such that the addition of any further
point to $S_\epsilon$ destroys this property. An $\epsilon$-lattice
$S_\epsilon^m$ is an $\epsilon$-net with maximum possible
cardinality. Dembski defines the uniform
distribution on $M$ as the limit of a discrete uniform distribution on
an $\epsilon$-lattice on $M$, when $\epsilon \rightarrow 0$.
\begin{defn}
\label{eqn:def}
The uniform distribution $\Pi$ on $M$ is defined as
$$\Pi(A)=\underset{\epsilon \rightarrow 0}{\lim}\,\Pi_\epsilon(A),$$
for $A\subset M$, where $\Pi_\epsilon(A)$ is the discrete uniform
distribution supported on the points in $S_\epsilon^m$, \textit{i.e.}
$\Pi_\epsilon(A)=\frac{1}{|S_\epsilon^m|}\underset{\bm \phi \in
S_\epsilon^m}{\sum} \delta_{\bm \phi}(A)$, with $|S_\epsilon^m|$ the
cardinality of $S_\epsilon^m$.
\end{defn}
Loosely speaking the uniform distribution is hence defined as the
limit of a discrete uniform distribution on an equally spaced grid,
where the notion of ``equally spaced'' is determined by the distance
metric underlying $(M,d)$. Even though this definition is intuitive, it
is not constructive. Apart from special cases, generating an
$\epsilon$-lattice is computationally difficult in a general metric space;
calculating the limit of $\epsilon$-lattices is even more so. In addition,
it is unclear whether there is just one limiting distribution to which all
$\epsilon$-lattices converge. To overcome these problems
\citet{demb:1990} uses the closely related notion of packing
numbers. The packing number $D(\epsilon,A,d)$ of a subset $A\subset M$ in
the metric $d$ is defined as the cardinality of an $\epsilon$-lattice on
$A$, and packing numbers are known for a number of metric spaces. An
$\epsilon$-pseudo-probability can then be defined as
$P_\epsilon(A)=\frac{D(\epsilon,A,d)}{D(\epsilon,M,d)}$. It is straightforward to
see that $0\leq P_\epsilon(A) \leq 1$ and that $P_\epsilon(M)=1$, but packing
numbers are sub-additive and hence $P_\epsilon$ is not a probability
measure. However, for disjoint sets $A'$ and $A''$ with minimum
distance $> \epsilon$, additivity holds, \textit{i.e.} $P_\epsilon(A'\cup
A'')=P_\epsilon(A')+P_\epsilon(A'')$. \citet{demb:1990} then shows that
if $\underset{\epsilon \rightarrow 0}{\lim}\,P_\epsilon(A)$ exists for
every $A$, the limiting distribution is the unique uniform
distribution on $(M,d)$ (see \citet{demb:1990} or
\citet{ghos:ghos:rama:1997} for details). As packing numbers are
known for a number of metric spaces, this result provides a
constructive way for building uniform distributions, without the need
for explicitly constructing $\epsilon$-lattices.

Subsequently, we consider the practically important case of a finite
number of parameters $p$ and assume that the metric of $(M,d)$, $d(\bm
\phi,\bm \phi_0)=d(\varphi(\bm \theta),\varphi(\bm \theta_0))=d^*(\bm
\theta,\bm \theta_0)$ in terms of $\bm \theta$ can be approximated by
a local quadratic approximation of the form
\begin{equation}
\label{eqn:quad}
d^*(\bm \theta,\bm \theta_0) = c_1\sqrt{c_2(\bm \theta-\bm
\theta_0)^T\bm V(\bm \theta_0)(\bm \theta-\bm \theta_0)+O(||\bm
\theta-\bm \theta_0||^k)},
\end{equation}
where $c_1,c_2>0$ are constants and $k\geq 3$. Equation
\eqref{eqn:quad} implies that $d(.,.)$ can locally be approximated by
a Euclidean metric. This is not a very strong condition: for a
sufficiently often differentiable metric $d(.,\bm \theta_0)$, one can
use a second-order Taylor expansion of $d(.,\bm
\theta_0)^2$ and apply the square root to obtain \eqref{eqn:quad}. The
following theorem calculates the distribution induced on $\bm \theta$
by imposing a uniform distribution in $(M,d)$, when assumption
\eqref{eqn:quad} holds. The proof is only a slight adaptation of earlier
results by \citet{ghos:ghos:rama:1997}; see Appendix A.
\begin{thm}
\label{thm:res}
For a metric space $(M,d)$ and a bijective function $\varphi$
fulfilling \eqref{eqn:quad}, where $\bm V(\bm \theta)$ is a
symmetric matrix with finite, strictly positive eigenvalues for all
$\bm \theta \in \bm \Theta$ and is continuous as a function of $\bm
\theta$, the ratio $P_\epsilon(A)=\frac{D(\epsilon,A,d)}{D(\epsilon,M,d)}$ for $A\subset
\bm \Theta$ converges to
$$\frac{\int_{A}\sqrt{\det(\bm V(\bm \theta))}d\bm \theta}{\int_{\bm
\Theta}\sqrt{\det(\bm V(\bm \theta))}d\bm \theta},\; \mathit{as}\; \epsilon
\rightarrow 0.$$ The density of the
uniform probability distribution is hence given by:
$$p(\bm \theta) = \frac{\sqrt{\det(\bm V(\bm \theta))}}{\int_{\bm
\Theta}\sqrt{\det(\bm V(\bm \theta))}d\bm \theta}.$$
\end{thm}

We note that the last result can also be obtained using
considerations based on Riemannian manifolds, in which case
\eqref{eqn:quad} would be the Riemannian metric: for example,
\citet{penn:2006} explicitly considers uniform distributions on
Riemannian manifolds and obtains the same result. We concentrate on
Dembski's derivation as it seems both more general and more intuitive.

It is important to note that the distribution defined in this way is
independent of the parametrization. This is intuitively clear, as the
space $(M,d)$, where the uniform distribution is imposed, is fixed no
matter which parametrization is used. We illustrate this invariance
property for the special case of a Taylor approximation in the theorem
below; for a proof see Appendix B.
\begin{thm}
\label{thm:invar}
Assume $(M,d)$ with $d(\bm \theta,\bm \theta_0)^2 = \frac{1}{2}(\bm
\theta-\bm \theta_0)^T\bm V(\bm \theta_0)(\bm \theta-\bm \theta_0) +
O(||\bm \theta-\bm \theta_0||^3)$, where $\bm V(\bm
\theta)=\left(\frac{\partial^2 d^2(\bm \theta',\bm \theta)}{\partial
\theta'_i\partial \theta'_j}\right)_{i,j}$ evaluated at $\bm
\theta'=\bm \theta$, which leads to a prior $p(\bm
\theta)\propto \sqrt{\det(\bm V(\bm \theta))}$.\\
When calculating the uniform distribution associated to the
transformed parameter $g(\bm \theta)=\bm \gamma$, with $g:
\mathbb{R}^p\rightarrow \mathbb{R}^p$ a bijective twice
differentiable transformation, one obtains $p(\bm \gamma) \propto
|\det(\bm H(\bm \gamma))|\sqrt{\det(\bm V(h(\bm \gamma)))}$, where $h$
is the inverse of $g$ and $\bm H(\bm
\gamma)=(\frac{\partial}{\partial \gamma_1}h(\bm \gamma), \ldots,
\frac{\partial}{\partial \gamma_p}h(\bm \gamma))$ is the Jacobian
matrix associated with $h$. This is the same result as applying the
change of variables theorem to $p(\bm \theta)$.
\end{thm}

A technical restriction of the theory described in this section is its
restriction to compact parameter spaces $\bm \Theta$. It is, however,
possible to extend the approach by taking limits of a sequence of
growing compact spaces; see the works of \citet{demb:1990} and
\citet{ghos:ghos:rama:1997} for details. Note that the resulting
limiting density need not be integrable.
\subsubsection{Examples}
\label{sec:examples}
\textbf{Non-functional uniform priors}
While the approach outlined in Section \ref{sec:general-approach} is
developed for general metric spaces, it coincides with standard
change-of-variables results when the metric space $M$ is itself a
compact subset of $\mathbb{R}^p$. Suppose one would like to
use a uniform distribution for $\varphi(\bm \theta)$ with $\varphi:
\mathbb{R}^p \rightarrow \mathbb{R}^p$ a bijective,
continuously differentiable function and then back-transform to $\bm
\theta$ scale. Using the standard change of variables theorem one
obtains: $p(\bm \theta)\propto|\det(D(\bm \theta))|$, where $D(\bm
\theta) = (\frac{\partial}{\partial \theta_1}\varphi(\bm \theta),
\ldots, \frac{\partial}{\partial \theta_p}\varphi(\bm \theta))$ is the
Jacobian matrix of the transformation $\varphi(\bm \theta)$. Framed in
the approach of the last section, the metric space $M$ is a compact
subset of $\mathbb{R}^p$ with the Euclidean metric in the transformed
space $d(\varphi(\bm \theta), \varphi(\bm \theta_0)) =
\sqrt{(\varphi(\bm \theta)-\varphi(\bm \theta_0))^T(\varphi(\bm
\theta)-\varphi(\bm \theta_0))}$. A local linear approximation to
$\varphi(\bm \theta)-\varphi(\bm \theta_0)$ is $D(\bm \theta_0)(\bm
\theta-\bm \theta_0)$ with remainder $O(||\bm \theta-\bm
\theta_0||^2)$. Hence, one obtains $d(\varphi(\bm \theta), \varphi(\bm
\theta_0)) = \sqrt{(\bm \theta-\bm \theta_0)^TD(\bm \theta_0)^TD(\bm
\theta_0)(\bm \theta-\bm \theta_0)+O(||\bm \theta-\bm
\theta_0||^3)}$. Applying Theorem \ref{thm:res} one ends up with the
desired distribution $p(\bm \theta) \propto \sqrt{\det(D(\bm
\theta)^TD(\bm \theta))}=|\det(D(\bm \theta))|$.
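
As a small numerical illustration (a sketch only; the transformation
$\varphi(\theta)=\log \theta$ and the interval $[1,10]$ are arbitrary
choices for illustration), sampling uniformly on the $\varphi$ scale
and transforming back reproduces the predicted density
$|\det(D(\theta))| \propto 1/\theta$:
\begin{verbatim}
# Uniform on phi = log(theta); change of variables predicts
# p(theta) proportional to |d phi / d theta| = 1/theta.
import numpy as np

rng = np.random.default_rng(1)
lo, hi = 1.0, 10.0                      # compact parameter space
phi = rng.uniform(np.log(lo), np.log(hi), size=200_000)
theta = np.exp(phi)                     # back-transform

edges = np.linspace(lo, hi, 30)
hist, _ = np.histogram(theta, bins=edges, density=True)
mids = 0.5 * (edges[:-1] + edges[1:])
pred = (1.0 / mids) / np.log(hi / lo)   # normalized 1/theta density
print(np.max(np.abs(hist - pred)))      # small, up to Monte Carlo error
\end{verbatim}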
\textbf{Jeffreys Prior}
Another special case of this general approach is Jeffreys prior
itself. \citet{jeff:1961} described his rule by noting that
\eqref{eqn:quad} approximates the empirical Hellinger distance (as
well as the empirical Kullback-Leibler divergence) between the
residual distributions in a statistical model, when $\bm V(\bm
\theta)$ is the Fisher information matrix. In this situation the
parameters of a statistical model $\bm \theta$ are mapped into the
space of residual densities and this space is used to define the
notion of distance between the $\bm \theta$'s. Applying the machinery
from the last section then leads to a uniform distribution on the
space of residual densities. This interpretation of Jeffreys rule is
rare, but has been noted, among others, by
\citet[ch. 3.6]{kass:wass:1996}. \citet{ghos:ghos:rama:1997} and
\citet{bala:1997} explicitly derive Jeffreys rule from these
principles. From this viewpoint Jeffreys prior is hence useful as a
universal ``default'' prior, because it gives equal weights to all
possible residual densities underlying a statistical model. However,
the underlying metric can depend, for example, on the values of covariates,
which is undesirable in the nonlinear regression application, as
discussed in the introduction.
\textbf{Triangular Distribution}
In this example, Definition \ref{eqn:def} is used directly to
numerically approximate a uniform distribution on a metric space. This
can be done in the case $p=1$, where the construction of
$\epsilon$-lattices is numerically straightforward.
The triangular distribution, with density
\begin{equation}
\label{eq:triang}
p(x|\theta)=\begin{cases} 2x/\theta, & 0 < x \leq
\theta\\ 2(1-x)/(1-\theta), & \theta \leq x < 1 \end{cases},
\end{equation}
for $\theta \in (0,1)$ is a simple, yet versatile distribution, for
which the Jeffreys prior does not exist
\citep{berg:bern:sun:2009}. One possible metric space on which to impose
the uniform distribution is the space of triangular densities or
triangular distribution functions parametrized by $\theta$. Several
metrics might be used; we will consider the Hellinger metric
$d_H(\theta_1, \theta_2)=\left\{\int_0^1
\left(\sqrt{p(x|\theta_1)}-\sqrt{p(x|\theta_2)}\right)^2dx\right\}^\frac{1}{2}$
and the Kolmogorov metric $d_K(\theta_1, \theta_2) = \sup_{y\in [0,1]}
|\int_0^y(p(x|\theta_1)-p(x|\theta_2))dx|$. Numerically calculating
the corresponding $\epsilon$-lattices, one obtains the distributions
displayed in Figure \ref{fig:triang}. Interestingly, the calculated
uniform distribution in the Hellinger metric space is equal to a
$Beta(1/2, 1/2)$ distribution (the reference prior in
\citet{berg:bern:sun:2009}), while the calculated uniform
distribution in the Kolmogorov metric results in the uniform
distribution on $[0,1]$.
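
For illustration, the following sketch (our own simplified variant of
the computation behind Figure \ref{fig:triang}; the grid sizes are
arbitrary choices) constructs such a lattice for the Hellinger metric
by a greedy sweep over a fine $\theta$ grid, which yields a maximal
$\epsilon$-separated subset of the grid:
\begin{verbatim}
# Greedy construction of an epsilon-separated set of triangular
# densities under the Hellinger metric; the empirical distribution
# of the resulting points approximates the uniform distribution on
# (M, d_H), whose density is close to Beta(1/2, 1/2).
import numpy as np

def dens(theta, x):
    # Triangular density p(x | theta) on (0, 1).
    return np.where(x <= theta, 2 * x / theta, 2 * (1 - x) / (1 - theta))

x = np.linspace(1e-6, 1 - 1e-6, 2000)   # integration grid

def hellinger(t1, t2):
    d2 = np.trapz((np.sqrt(dens(t1, x)) - np.sqrt(dens(t2, x))) ** 2, x)
    return np.sqrt(max(d2, 0.0))

def eps_lattice(eps, grid):
    lattice = [grid[0]]
    for t in grid[1:]:                  # accept a point only if it is
        if all(hellinger(s, t) >= eps   # at least eps away from all
               for s in lattice):       # previously accepted points
            lattice.append(t)
    return np.array(lattice)

lat = eps_lattice(0.03, np.linspace(0.001, 0.999, 1000))
print(len(lat), lat[:5])
\end{verbatim}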
\begin{figure}
\centerline{\includegraphics[width=0.9\textwidth]{triang.eps}}
\caption{Numerically calculated uniform distributions in the (i)
Hellinger metric and (ii) the Kolmogorov metric. The solid curves
are based on interpolation of the empirical distribution functions
of 0.005-lattices followed by differentiation; the dots represent
a 0.03-lattice.}
\label{fig:triang}
\end{figure}
\subsection{Nonlinear Regression}
\label{sec:nonreg}
The implicit assumption when employing a nonlinear regression function
$\mu(x, \bm \theta)$ is that for some $\bm \theta$ the shape of the
function $\mu(x, \bm \theta)$ will adequately describe reality. It is
usually unclear, however, which of these shapes is the right one. A
uniform distribution on the functional \textit{shapes} hence seems to
be a reasonable prior. A suitable metric space is consequently the space
of functions $\mu(., \bm \theta)$, with $x\in
\mathcal{X}\subset\mathbb{R}$, $\bm \theta \in K \subset \mathbb{R}^p$
with compact $K$ and metric for example given by the $L_2$ distance
$d(\bm \theta,\bm \theta_0)=\sqrt{\int_{\mathcal X}(\mu(x,\bm
\theta)-\mu(x,\bm \theta_0))^2dx}$. By a first order Taylor
expansion one obtains $\mu(x, \bm \theta)-\mu(x, \bm \theta_0)=J_x(\bm
\theta_0)(\bm \theta - \bm \theta_0) + O(||\bm \theta - \bm
\theta_0||^2)$, where $J_x(\bm \theta_0)=\frac{\partial}{\partial \bm
\theta} \mu(x, \bm \theta)\big|_{\bm \theta=\bm \theta_0}$ is the row
vector of first partial
derivatives. This results in an approximation of the form $(\mu(x,\bm
\theta)-\mu(x,\bm \theta_0))^2=(\bm \theta - \bm \theta_0)^TJ_x(\bm
\theta_0)^TJ_x(\bm \theta_0)(\bm \theta - \bm \theta_0)+O(||\bm \theta
- \bm \theta_0||^3)$. Integrating this with respect to $x$ and taking
the square root leads to an approximation of $d(\bm \theta,\bm
\theta_0)$ of the form $\sqrt{(\bm \theta-\bm \theta_0)^T\bm Z^*(\bm
\theta_0)(\bm \theta-\bm \theta_0)+O(||\bm \theta-\bm
\theta_0||^3)},$ where $\bm Z^*(\bm
\theta)=\int_{\mathcal{X}}J_x(\bm \theta)^TJ_x(\bm
\theta)dx$. Consequently, from Theorem \ref{thm:res} the functional
uniform distribution for $\bm \theta$ equals $p(\bm \theta) \propto
\sqrt{\det(\bm Z^*(\bm \theta))}.$ In the special case of a linear
model $\mu(x,\bm \theta)=f(x)^T\bm \theta$, the functional uniform
distribution collapses to a constant prior distribution, which is the
uniform distribution on $\bm \Theta$ for compact $\bm \Theta$ and
improper when extended to non-compact $\bm \Theta$.
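
When $\bm Z^*(\bm \theta)$ is not available in closed form, the prior
is straightforward to approximate numerically. The following sketch
(our own illustration; the grid size and the finite-difference step
are arbitrary choices) computes the unnormalized log prior for a
generic mean function and evaluates it for the exponential model of
the introduction:
\begin{verbatim}
# Functional uniform prior, numerically: Z*(theta) is approximated
# by trapezoidal quadrature over the design region and central
# finite differences for J_x(theta); the unnormalized log prior is
# 0.5 * log det Z*(theta).
import numpy as np

def fu_logprior(mu, theta, x_grid, h=1e-6):
    theta = np.asarray(theta, dtype=float)
    p = theta.size
    J = np.empty((x_grid.size, p))
    for k in range(p):                  # J[i, k] = d mu(x_i) / d theta_k
        e = np.zeros(p); e[k] = h
        J[:, k] = (mu(x_grid, theta + e) - mu(x_grid, theta - e)) / (2 * h)
    Z = np.trapz(J[:, :, None] * J[:, None, :], x_grid, axis=0)
    sign, logdet = np.linalg.slogdet(Z)
    return 0.5 * logdet

# Exponential model exp(-theta x) on X = [0, 10]; this reproduces,
# up to normalization, the prior derived in closed form below.
xg = np.linspace(0.0, 10.0, 2001)
mu = lambda x, th: np.exp(-th[0] * x)
for t in (0.1, 0.5, 1.0, 2.0):
    print(t, fu_logprior(mu, [t], xg))
\end{verbatim}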

We now revisit the exponential regression example from the
introduction. In this case one obtains $J_x(\theta)=-x\exp(-\theta
x)$; calculating $\int_0^{10}J_x(\theta)^2dx$ and applying the square
root yields $p(\theta)\propto
\exp(-10\theta)\sqrt{\frac{\exp(20\theta)-200\theta^2-20\theta-1}{\theta^3}}$;
normalizing this leads to the prior displayed in Figure
\ref{fig:expo2} (i). On the $\theta$ scale the shape-based functional
uniform density hence leads to a rather non-uniform distribution. In
Figure \ref{fig:expo2} (ii) one can observe that the probability mass
is distributed uniformly over the different shapes, as desired.
\begin{figure}
\centerline{\includegraphics[width=0.9\textwidth]{expo2.eps}}
\caption{(i) Display of the functional uniform distribution on $\theta$ scale;
(ii) Display of the regression function $\exp(-\theta x)$
for $\theta=0$, $\theta = 5$ and the $\theta$ corresponding to the
$i/10$ quantile $i=1,\ldots,9$ of the functional uniform
distribution.}
\label{fig:expo2}
\end{figure}

An advantage of the functional uniform prior over the uniform prior is
that it is independent of the choice of parameterization and not
particularly sensitive to the potential choice of the bounds, provided
all major shapes of the underlying function are covered. In Figure
\ref{fig:expo2} (i) one can see that the density is already rather
small at $\theta=5$ as most of the underlying functional shapes are
already covered. In fact, in this example one can extend the functional
uniform distribution from the compact interval $[0,5]$ to a proper
distribution on $[0, \infty)$.
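
This integrability claim is easy to check numerically; in the
following sketch (using the unnormalized density derived above,
rewritten in a numerically stable form), the mass on $[0,5]$ and on
the tail $[5,\infty)$ are computed, the latter being finite because
the unnormalized density behaves like $\theta^{-3/2}$ for large
$\theta$:
\begin{verbatim}
# Numerical check that the functional uniform density for the
# exponential model is proper on [0, infinity).
import numpy as np
from scipy.integrate import quad

def q(t):
    # proportional to exp(-10 t) sqrt((exp(20 t)-200 t^2-20 t-1)/t^3),
    # with exp(20 t) factored out of the square root for stability
    inner = np.exp(-20 * t) * (200 * t**2 + 20 * t + 1)
    return np.sqrt(max(1.0 - inner, 0.0)) / t**1.5

mass_05, _ = quad(q, 0, 5)
mass_tail, _ = quad(q, 5, np.inf)
print(mass_05, mass_tail, mass_tail / (mass_05 + mass_tail))
\end{verbatim}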

Although the choice of the $L_2$ metric for $d$ seems reasonable in a
variety of situations, other choices are possible. One could, for
example, use a weighted version of the $L_2$ distance when interest is
in particular regions of the design space $\mathcal{X}$. In fact,
Jeffreys prior can be identified as a special case when the assumed
residual model is given by a homoscedastic normal distribution; in
this situation the empirical measure on the design points is used as a
weighting measure. The Jeffreys prior has also been mentioned by
\citet[p. 217]{bate:watts:1988} as a prior that is uniform on the
response surfaces, but the possibility of alternative weighting
measures has not been considered.

One potential obstacle to the use of the proposed functional uniform
prior is the fact that it can be computationally challenging to
calculate. In some situations it might be possible to calculate
$\bm Z^*(\bm \theta)$ analytically; in others one might need to use
numerical integration to approximate the underlying
integrals. Note, however, that the prior only needs to
be calculated once, as it is independent of the observed data
(it only depends on the design region $\mathcal{X}$ and on potential
parameter bounds), and can then be approximated, for example, in terms
of more commonly used distributions. This approximation can then be
reused in different modelling situations.
\section{Applications}
\label{sec:appl}
In this section, we will evaluate the proposed functional uniform
priors for nonlinear regression. One application of nonlinear
regression is in the context of pharmaceutical dose-finding trials. A
challenge in these trials is that the variability in the response is
usually large and the number of doses used fairly small, so that the
underlying inference problem is challenging, despite an often
seemingly large sample size. The priors will first be tested on a real
example; then the frequentist operating characteristics of the
proposed functional uniform priors are assessed more formally in a
simulation study with a binary endpoint. In the last example we will
use the functional uniform distribution for the calculation of a
Bayesian optimal design in the exponential regression example.
\subsection{Irritable Bowel Syndrome Dose-Response Study}
\label{sec:df-regression}
Here the \texttt{IBScovars} data set taken from the
\texttt{DoseFinding} package will be used \citep{DoseFinding}. The
data were part of a dose-ranging trial on a compound for the treatment
of irritable bowel syndrome, with four active doses 1, 2, 3, 4
equally distributed in the dose range $[0, 4]$, and placebo. The
primary endpoint was a baseline adjusted abdominal pain score with
larger values corresponding to a better treatment effect. In total 369
patients completed the study, with nearly balanced allocation across
the doses. Assume a normal distribution is used to model the residual
error and that the hyperbolic Emax model $\mu(x,\bm \theta) =
\theta_0+\theta_1x/(\theta_2+x)$ was chosen to describe the
dose-response relationship. The parameters $\theta_0$ and $\theta_1$
determine the placebo mean and the asymptotic maximum effect, while
the parameter $\theta_2$ determines the dose that gives 50 percent of
the asymptotic maximum effect, so that it determines the steepness of
the curve. In clinical practice vague prior information typically
exists for $\theta_0$ and $\theta_1$, but for illustration here we use
improper constant priors for these two parameters and a prior
proportional to $\sigma^{-2}$ for $\sigma$. For the nonlinear
parameter $\theta_2$ we will use a uniform prior and the functional
uniform prior distribution. When using a uniform distribution for
$\theta_2$ it is necessary to assume bounds, as otherwise an improper
posterior distribution may arise. We will use the bounds $[0.004,6]$
here; the selection of the boundaries is based on the fact that
practically all of the shapes of the underlying model are covered,
taking into account that the dose range is $[0,4]$. For comparability,
the same bounds were used for the functional uniform prior, although
one can extend it to an integrable density on $[0,\infty)$. The
functional uniform prior will be used based on the function space
defined by $x/(\theta_2+x)$. Performing the calculations described in
Section \ref{sec:nonreg}, one obtains
$J^2_x={{x^2}/{\left(x+\theta_2\right)^4}}$; calculating the integral
$\int_0^4J^2_x(\theta_2)\,dx$ and applying the square root leads to
$p(\theta_2)\propto
{1}/{\sqrt{\theta_2^4+12\theta_2^3+48\theta_2^2+64\theta_2}}\,1_{[0.004,6]}(\theta_2)$. Similarly
to the exponential regression example in the introduction, a uniform
distribution on $\theta_2$ space induces an informative distribution
in the space of functional shapes. Larger
values of $\theta_2$ (say $>3$) correspond to almost linear shapes,
while only very small values of $\theta_2$ lead to more pronounced
concave shapes. A uniform prior on $\theta_2$ hence
favors linear shapes over steeply increasing model shapes.
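
A small sketch quantifying this observation (using the unnormalized
density derived above; the threshold $\theta_2=3$ follows the
discussion in the text):
\begin{verbatim}
# Prior mass on near-linear shapes (theta_2 > 3): roughly one half
# under the uniform prior on [0.004, 6], much less under the
# functional uniform prior.
import numpy as np
from scipy.integrate import quad

lo, hi = 0.004, 6.0
f = lambda t: 1.0 / np.sqrt(t**4 + 12*t**3 + 48*t**2 + 64*t)

norm, _ = quad(f, lo, hi)
mass, _ = quad(f, 3.0, hi)
print("uniform:", (hi - 3.0) / (hi - lo),
      "functional uniform:", mass / norm)
\end{verbatim}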
\begin{figure}
\centerline{\includegraphics[width=0.9\textwidth]{IBS.eps}}
\caption{Posterior of the dose-response curve under a uniform and a
  functional uniform prior for $\theta_2$.}
\label{fig:IBS}
\end{figure}
We used importance sampling resampling based on a proposal
distribution generated by the iterated Laplace approximation to
implement the model \citep{born:2011}. In Figure \ref{fig:IBS}
one can observe the posterior uncertainty intervals under the two
prior distributions. As is visible, the bias towards linear shapes
when using a uniform distribution for $\theta_2$ persists in the
posterior distribution. This happens despite the rather large sample
size, and despite the fact that the responses at doses 0 and 4, and
particularly at dose 1, are not very well fitted by a linear shape. So
the posterior seems to be rather sensitive to the uniform prior
distribution. The posterior based on the shape-based functional
uniform prior, in contrast, fits the data better at all doses, and
seems to provide a more realistic measure of uncertainty for the
dose-response curve, particularly for $x\in (0,1)$.
\subsection{Simulations}
\label{sec:simulations}
One might expect that the functional uniform prior distribution works
acceptably no matter which functional shape is the true one. To
investigate this in more detail and to compare this prior to other
prior distributions in terms of their frequentist performance,
simulation studies have been conducted. Here we report results from
simulations in the context of binary nonlinear regression.
For the simulations, the power model $\mu(x,
\theta)=\theta_0+\theta_1x^{\theta_2}$ will be used to model the
response probability depending on $x$. The parameters $\theta_0$ and
$\theta_1$ are hence subject to $\theta_0+\theta_1\leq 1$ and
$\theta_0,\theta_1\geq 0$, as a probability is modelled. Note that
only $\theta_2$ enters the model function non-linearly.
The doses $0, 0.05, 0.2, 0.6, 1$ are to be used with equal allocations
of 20 patients per dose. We use four scenarios in this case: in the
first three cases the power model is used with $\theta_0=0.2$ and
$\theta_1=0.6$, while $\theta_2$ is equal to $0.4$ (Power 1), $1$
(Linear) and $4$ (Power 2). In addition we provide one scenario where
an Emax model $0.2+0.6x/(x+0.05)$ is the truth. The Emax scenario is
added to investigate the behaviour under misspecification of the
model. Each simulation scenario will be repeated 1000 times.
We will compare the functional uniform prior distribution to the
uniform distribution on the parameters and to the Jeffreys prior
distribution. For the uniform prior approach, uniform
prior distributions were assumed for all parameters, and the nonlinear
parameter $\theta_2$ was assumed to be within $[0.05, 20]$ to ensure
integrability. The same bounds are used for the two other approaches
for comparability. For the functional uniform prior approach, uniform
priors are used for $\theta_0$ and $\theta_1$, while for
$\theta_2$ the functional uniform prior will be used on the function
space defined by $x^{\theta_2}$. The prior can be calculated to be
$p(\theta_2)\propto
1/\sqrt{(2\theta_2+1)^{3}}$. For the Jeffreys
prior approach we used a prior proportional to $\sqrt{{\det(I(\bm
\theta))}}$, within the imposed parameter bounds. For analysis we
used MCMC based on the HITRO algorithm, which is an MCMC sampler that
combines the hit-and-run algorithm with the ratio-of-uniforms
transformation. It does not need tuning and is hence well suited for a
simulation study. The sampler is implemented in the \texttt{Runuran}
package \citep{leyd:hoer:2010}; computations were performed with
\texttt{R} \citep{R}. In each case, 10000 MCMC samples from the
corresponding posterior distribution are used, with a burn-in
phase of 1000 and a thinning of 2.
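
As a quick check of the prior formula above (a sketch; the design
region $\mathcal{X}=[0,1]$ matches the dose range used in the
simulations), the integral $\int_0^1 (x^{\theta_2}\log x)^2 dx$ can be
compared with its closed form $2/(2\theta_2+1)^3$, whose square root
gives the prior up to proportionality:
\begin{verbatim}
# J_x(theta_2) = x^{theta_2} log(x) for the function space x^{theta_2};
# numerically integrate J_x^2 over [0, 1] and compare with the
# closed form 2 / (2 theta_2 + 1)^3.
import numpy as np
from scipy.integrate import quad

for t in (0.05, 0.5, 2.0, 10.0):
    num, _ = quad(lambda x: (x**t * np.log(x))**2, 0, 1)
    print(t, num, 2.0 / (2*t + 1)**3)   # the two columns agree
\end{verbatim}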
\begin{table}
\begin{center}
\begin{tabular}{|c|c|cccc|}
\hline
Prior & Model & MAE$_1$ & MAE$_2$ &CP & ILE \\
\hline
Uniform & Linear & 0.082 & 0.062 & 0.819 & 0.259 \\
& Power 1& 0.079 & 0.056 & 0.816 & 0.255 \\
& Power 2& 0.066 & 0.067 & 0.881 & 0.220 \\
& Emax & 0.073 & 0.056 & 0.780 & 0.226 \\ \hline
Jeffreys & Linear & 0.058 & 0.065 & 0.900 & 0.233 \\
& Power 1& 0.054 & 0.056 & 0.901 & 0.220 \\
& Power 2& 0.056 & 0.073 & 0.895 & 0.227 \\
& Emax & 0.056 & 0.055 & 0.845 & 0.196 \\ \hline
Func. Unif. & Linear & 0.060 & 0.060 & 0.892 & 0.240 \\
& Power 1& 0.057 & 0.053 & 0.893 & 0.226 \\
& Power 2& 0.057 & 0.070 & 0.912 & 0.240 \\
& Emax & 0.059 & 0.054 & 0.834 & 0.203 \\
\hline
\end{tabular}
\caption{Estimation of dose-response; MAE$_1$ and MAE$_2$ correspond to the
dose-response estimation error (for the posterior median and mode), CP
denotes the average coverage probability of pointwise 0.9
credibility intervals and ILE denotes the average credibility
interval lengths.}
\label{tab:results4}
\end{center}
\end{table}
In Table \ref{tab:results4} one can observe the estimation results in
terms of the mean absolute estimation error for the dose-response
function, $\mathrm{MAE} = 1/9\sum_{i=0}^8|\mu(i/8)-\hat{\mu}(i/8)|$,
where $\mu(.)$ is the underlying true function and $\hat{\mu}(.)$ is
either the pointwise posterior median (corresponding to MAE$_1$) or
the prediction corresponding to the posterior mode for the parameters
(MAE$_2$); the posterior mode under the uniform prior is equal to the
maximum likelihood estimate. The values displayed in Table
\ref{tab:results4} are the average $\mathrm{MAE}$ over 1000
repetitions. In addition, for each simulation the 0.9 credibility
intervals at the dose-levels $0,1/8,2/8,...,1$ have been
calculated. The number given in the table is $CP = 1/9\sum_{i=0}^8
\hat{P}_{i/8}$, where $\hat{P}_{d}$ is the average coverage
probability of the 0.9 credibility interval at dose $d$ over the 1000
simulation runs. In addition, the average length of the credibility
intervals has been calculated as $ILE= 1/9\sum_{i=0}^8 \hat{L}_{i/8}$,
where $\hat{L}_d$ is the average length of the 0.9 credibility interval at
dose $d$ over the 1000 simulation runs. For estimation of the
dose-response, Jeffreys prior and the functional uniform prior improve
upon the uniform prior distribution; the latter two are close,
with slight advantages for the
Jeffreys prior. In terms of the credibility intervals, the functional
uniform and Jeffreys priors roughly keep their nominal level for the
linear and the power model cases, while the uniform prior
does not. None of the priors achieves the nominal level for the Emax
model, which is probably due to the fact that the Emax model is too
different from the power model. Interestingly, the credibility
intervals of the uniform prior are larger than those of the other two
priors, but lead to a smaller coverage probability.
Table \ref{tab:results2} provides the estimation results with respect
to parameter estimation. The main message here is that all priors
perform roughly equal for estimation of the linear parameters
$\theta_0$ and $\theta_1$. For the nonlinear parameter $\theta_2$, Jeffreys
prior and the functional uniform prior perform better
than the uniform distribution.

In summary, the functional uniform prior performs roughly as
well as the Jeffreys prior in these simulations. However, the
functional uniform prior has the pragmatic and conceptual advantage
that it does not depend on the observed covariates, and can thus be
used, for example, for the calculation of a Bayesian optimal design, or in
sequential situations.
\begin{table}
\centering
{\small
\begin{tabular}{|c|c|cccc|cccc|} \hline
\multicolumn{2}{|c|}{} & \multicolumn{4}{c|}{Uniform Prior} &
\multicolumn{4}{c|}{Functional Uniform Prior} \\ \hline
Scenario & $N$ & MAE$_1$ & MAE$_2$ & CP & ILE & MAE$_1$ & MAE$_2$ & CP & ILE\\ \hline
Sig. Emax 1 & 125 & 0.256 & 0.277 & 0.903 & 1.098 & 0.230 & 0.270 & 0.914 & 1.028\\
Sig. Emax 2 & & 0.278 & 0.283 & 0.895 & 1.144 & 0.258 & 0.278 & 0.909 & 1.089\\
Sig. Emax 3 & & 0.243 & 0.275 & 0.902 & 1.014 & 0.251 & 0.262 & 0.898 & 1.030\\
Linear & & 0.266 & 0.291 & 0.901 & 1.100 & 0.241 & 0.289 & 0.918 & 1.057\\
Quadratic & & 0.272 & 0.278 & 0.880 & 1.109 & 0.242 & 0.276 & 0.898 & 1.038\\ \hline
Sig. Emax 1 & 250 & 0.185 & 0.214 & 0.908 & 0.818 & 0.167 & 0.209 & 0.920 & 0.768\\
Sig. Emax 2 & & 0.196 & 0.206 & 0.908 & 0.850 & 0.187 & 0.201 & 0.910 & 0.811\\
Sig. Emax 3 & & 0.174 & 0.202 & 0.913 & 0.738 & 0.170 & 0.188 & 0.912 & 0.744\\
Linear & & 0.200 & 0.209 & 0.891 & 0.831 & 0.189 & 0.211 & 0.900 & 0.794\\
Quadratic & & 0.201 & 0.215 & 0.881 & 0.839 & 0.185 & 0.216 & 0.886 & 0.782\\\hline
\end{tabular}
\caption{Estimation of dose-response; MAE$_1$ and MAE$_2$ correspond
to the estimation error at the doses 0,1,...,8 (for the posterior
median and the posterior mode), CP denotes the average
coverage probability of pointwise 0.9 credibility intervals and ILE
denotes the average credibility interval lengths.}
\label{tab:results2}
}
\end{table}
\subsection{Bayesian optimal design for exponential regression}
\label{sec:optdes}
In this section we will use the prior distribution for the exponential
regression model derived in Section \ref{sec:nonreg} to calculate a
Bayesian optimal design. When assuming a homoscedastic normal model,
the Fisher information of a design $d$ with design points $x_i$ and
allocation weights $w_i$ is $I(d,\theta)\propto \sum_i w_i x_i^2
\exp(-2\theta x_i)$. Hence minimizing $-\log(I(d,\theta))$ will lead
to the most informative design. Unfortunately the expression
depends on $\theta$, which is of course unknown before the
experiment. One way of dealing with this uncertainty is Bayesian
optimal design, where one optimizes the design criterion averaged
with respect to a prior distribution: $-\int \log(I(d,\theta))
p(\theta)d\theta$. In this situation we will use the uniform and the
functional uniform prior distribution (see Figures \ref{fig:expo1} and
\ref{fig:expo2}) both on the interval $[0,5]$ for calculation of the
optimal design. Restricting the design space to $x \in [0,10]$ and
only performing the optimization with up to 5 design points, one ends up,
for the uniform prior, with the weights $\bm w=(0.956,0.022,0.022)$ on
the design points $x=(0.38,4.04,10)$, while the functional
uniform prior distribution leads to a design of the form $\bm
w=(0.19,0.3,0.51)$ and $x=(0.54,2.35,10)$. The design corresponding to
the functional uniform prior hence spreads its allocation weights more
uniformly on the design range, whereas the uniform prior results in
essentially one major design point.
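
The computation behind such a Bayesian optimal design can be sketched
as follows (our own illustrative implementation, fixing three support
points and approximating the prior expectation by quadrature; the
optimizer settings are arbitrary, so the resulting design will match
the one reported above only approximately):
\begin{verbatim}
# Minimize -E_p[log I(d, theta)] over a 3-point design, with
# I(d, theta) = sum_i w_i x_i^2 exp(-2 theta x_i), a softmax
# parameterization keeping the weights on the simplex, and the
# functional uniform prior on [0, 5] discretized on a grid.
import numpy as np
from scipy.optimize import minimize

tg = np.linspace(1e-3, 5.0, 400)                  # theta grid
inner = np.exp(-20 * tg) * (200 * tg**2 + 20 * tg + 1)
fu = np.sqrt(np.maximum(1.0 - inner, 0.0)) / tg**1.5
prior = fu / np.trapz(fu, tg)                     # swap in a uniform
                                                  # prior to compare
def neg_crit(z):
    x, a = z[:3], z[3:]
    w = np.exp(a) / np.exp(a).sum()               # softmax weights
    info = (w * x**2 * np.exp(-2.0 * np.outer(tg, x))).sum(axis=1)
    return -np.trapz(np.log(info) * prior, tg)

z0 = np.array([0.5, 2.5, 9.0, 0.0, 0.0, 0.0])
bounds = [(0.05, 10.0)] * 3 + [(-8.0, 8.0)] * 3   # design space [0, 10]
res = minimize(neg_crit, z0, method="L-BFGS-B", bounds=bounds)
w_opt = np.exp(res.x[3:]) / np.exp(res.x[3:]).sum()
print(np.round(res.x[:3], 2), np.round(w_opt, 2))
\end{verbatim}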
One way of comparing the two calculated designs is to look at their
efficiency $\mathrm{Eff}(d, \theta)=\exp(\log(I(d,
\theta))-\log(I(d_{opt}(\theta), \theta)))$
with respect to the design $d_{opt}(\theta)$ that is locally
optimal for the parameter value $\theta$, for a range of different
shapes. In Figure \ref{fig:shapeEff} we plot the efficiency for the
different shapes on the functional shape scale.
\begin{figure}
\centerline{\includegraphics[width=0.9\textwidth]{shapeEff.eps}}
\caption{Efficiency of the two designs for different shapes.}
\label{fig:shapeEff}
\end{figure}
One can observe that the uniform prior design is only efficient for
the sharply decreasing shapes with $\theta > 0.71$, but otherwise has
very low efficiency. The functional uniform prior improves
considerably over the uniform prior distribution for most of the
functional shape space, and provides at least reasonable efficiency
for most shapes.
\section{Conclusions}
\label{sec:conclusions}
A main motivation for this work is the practical limitation that the
classical Jeffreys prior cannot be used in nonlinear
regression settings where the prior needs to be specified before data
collection, for example when one wants to calculate a Bayesian optimal
design or in adaptive dose-finding trials. For this purpose the
functional uniform distribution has been introduced, which imposes a
distribution on the parameters that is uniform in the space of
functional shapes underlying the nonlinear regression function. This
was achieved by using a general framework for constructing uniform
distributions based on earlier work by \citet{demb:1990} and
\citet{ghos:ghos:rama:1997}. We investigated the functional uniform
prior for nonlinear regression in a real example, a simulation study,
and an optimal design problem, where it showed very satisfactory
performance.

There is no reason to call the priors proposed in this article
globally uninformative, because one needs to choose the space $M$ and,
in particular, the metric $d$ on which to impose the uniform
distribution. The priors derived from the theory in Section
\ref{sec:general-approach} might then be considered uninformative in
the particular aspect that $(M,d)$ reflects. In the case of nonlinear
regression we argue that the uniform distribution on the space of
functional shapes is often, depending of course on the considered
application, a \textit{reasonable} assumption
when particular prior information is lacking. However, this does
not apply generally: a situation where the functional uniform prior
might not be adequate occurs, for example, when the considered
nonlinear model is extremely flexible, containing virtually all
continuous functions (as, for example, neural network models). In this
case it is often more adequate to concentrate most prior probability
on a reasonable subset of the function space (\textit{e.g.}, smooth
functions), rather than building a uniform distribution on all
potential shapes, including shapes that might be implausible a-priori.

The theory outlined in Section \ref{sec:methodology} might be of
interest for formulating functional uniform priors also for other types of
models with a nonlinear aspect. In quite a few modelling situations
one might be able to find a space $(M,d)$ where imposing a uniform
distribution is plausible, and then back-transform this distribution to
the parameter scale.
\citet{ghos:ghos:rama:1997} employ this idea when $(M,d)$ is a
space of densities to define priors for nonparametric density
estimation. Another application could be the estimation of covariance
matrices: \citet{dryd:kolo:zhou:2009} discuss the use of more
adequate non-Euclidean distance metrics for covariance matrices, which
would in our framework define the metric space for imposing the
uniform distribution. \citet{paul:2005} derives default priors
for Gaussian process interpolation, which are rather time-consuming to
evaluate. In this situation one might choose the space of covariance
functions as $(M,d)$.
\begin{appendix}
\section{Proof of Theorem 1}
\label{sec:the1}
\citet{ghos:ghos:rama:1997} prove a closely related result when the
underlying metric is the Hellinger distance and $M$ is the space of
residual densities. We review their proof, adapt it to metrics of the
form \eqref{eqn:quad}, and proceed in two parts. Part A summarizes the
proof of \citet{ghos:ghos:rama:1997} for completeness, and part B
provides additional lemmas needed in our situation.
\textit{Part A}\\
The proof starts by covering $\bm \Theta$ with hypercubes and inner
hypercubes placed inside these cubes. Let $A_1,\ldots,A_J$ be the
intersections of $A$ with the hypercubes and $A'_1,\ldots,A'_J$ be the
intersections of $A$ with the inner hypercubes. Now separate the
hypercubes and inner hypercubes so that each inner hypercube is at
least $\epsilon$ apart from any other in the $d(.,.)$ metric (this is
possible when the results of Lemma \ref{lemma2} hold). By the
sub-additivity of packing numbers one then has
$\sum_jD(\epsilon,A'_j,d)\leq D(\epsilon, A, d)\leq \sum_jD(\epsilon,
A_j,d)$ and $\sum_jD(\epsilon,\bm \Theta'_j,d)\leq D(\epsilon, \bm
\Theta, d)\leq \sum_jD(\epsilon, \bm \Theta_j,d)$, where $\bm
\Theta_j$ and $\bm \Theta'_j$ denote the intersections of $\bm \Theta$
with the hypercubes and inner hypercubes, respectively.
Now an upper and a lower bound for $D(\epsilon, A_j,d)$ are derived based on
the local Euclidean approximation \eqref{eqn:quad} to the metric
$d$. For a Euclidean metric one can calculate the packing number
explicitly, see \citet{kolm:tiho:1961}. Up to proportionality,
$D(\epsilon,A,||.||)$ is given by $vol(A)\epsilon^{-p}$; consequently, for a
metric of the form $\sqrt{(\bm \theta-\bm \theta')^T\bm V(\bm \theta-\bm
\theta')},$ with $\bm V$ a fixed positive definite matrix, the
packing number is up to proportionality $\sqrt{\mathrm{det}(\bm
V)}vol(A)\epsilon^{-p}$. Using the local Euclidean approximation
\eqref{eqn:quad} and Lemma \ref{lemma3} one can derive lower and upper
bounds for $D(\epsilon, A_j,d)$ and $D(\epsilon, A'_j,d)$ in terms of
$\sqrt{\mathrm{det}(\bm V(\bm \theta_j))}vol(A_j)\epsilon^{-p}$ and
$\sqrt{\mathrm{det}(\bm V(\bm \theta_j))}vol(A'_j)\epsilon^{-p}$, and
similarly for $D(\epsilon,\bm \Theta,d)$ and thus for $P_\epsilon(A)= D(\epsilon,
A,d)/D(\epsilon, \bm \Theta,d)$. As the size of the hypercubes goes to
zero the bounds become sharper (see Lemma \ref{lemma3}) and the lower and
upper bounds on $P_\epsilon(A)$ converge to $\frac{\int_{A}\sqrt{\det(\bm
V(\bm \theta))}d\bm \theta}{\int_{\bm \Theta}\sqrt{\det(\bm V(\bm
\theta))}d\bm \theta}$; see \citet{ghos:ghos:rama:1997} for
the details of this argument.
\textit{Part B}\\
Without loss of generality we set $c_1=c_2=1$ in
\eqref{eqn:quad} for what follows.
\begin{lem}
\label{lemma2}
For symmetric, positive definite $\bm V(\bm \theta)$
there exist $l^*,u^*>0$ such that for all $\bm \theta, \bm \theta' \in \bm \Theta$
$$l^*||\bm \theta-\bm \theta'|| \leq d(\bm \theta,\bm \theta') \leq u^*||\bm \theta-\bm \theta'||.$$
\end{lem}
Proof:\\
$$\frac{d(\bm \theta, \bm \theta')^2}{||\bm \theta-\bm
\theta'||^2}=\frac{{(\bm \theta-\bm \theta')^T\bm V(\bm \theta')(\bm
\theta-\bm \theta')+O(||\bm \theta-\bm \theta'||^k)}}{||\bm
\theta-\bm \theta'||^2}.$$ Now by an eigendecomposition, the
compactness of $\bm \Theta$, and the continuity of $\bm V(\bm \theta)$, one
knows that there exist $l,u>0$ so that $$l(\bm \theta-\bm
\theta')^T(\bm \theta-\bm \theta') \leq (\bm \theta-\bm \theta')^T\bm
V(\bm \theta)(\bm \theta-\bm \theta')\leq u(\bm \theta-\bm
\theta')^T(\bm \theta-\bm \theta').$$ In total we thus obtain an upper
bound with $u^{*2}=u+\kappa$, where $\kappa = \max(\max_{\bm
\theta \in \bm \Theta} O(||\bm \theta-\bm \theta'||^{k-2}),0)$, and
similarly a lower bound. $\Box$
\begin{lem}
\label{lemma3}
For $\bm \theta, \bm \theta',\bm \theta^*$ lying in a
hypercube $Q\subset \bm \Theta$ we obtain
$$k_1(\bm \theta-\bm \theta')^T\bm V(\bm \theta^*)(\bm \theta-\bm \theta')
\leq d^2(\bm \theta,\bm \theta') \leq k_2(\bm \theta-\bm \theta')^T\bm V(\bm
\theta^*)(\bm \theta-\bm \theta'),$$ where $k_1 \rightarrow c_0$ and
$k_2 \rightarrow c_0$ for some $c_0 > 0$ when the side length of $Q$
converges to 0.
\end{lem}
Proof:
\begin{eqnarray*}
\frac{d^2(\bm \theta, \bm \theta')}{(\bm \theta-\bm \theta')^T\bm V(\bm \theta^*)(\bm \theta-\bm \theta')}
&=& \frac{(\bm \theta-\bm \theta')^T\bm V(\bm \theta')(\bm \theta-\bm \theta')}{(\bm \theta-\bm \theta')^T\bm V(\bm \theta^*)(\bm \theta-\bm \theta')}
+\frac{O(||\bm \theta-\bm \theta'||^k)}{(\bm \theta-\bm
\theta')^T\bm V(\bm \theta^*)(\bm \theta-\bm \theta')} \nonumber\\
&=& \frac{(\bm \theta-\bm \theta')^T\bm V(\bm \theta')(\bm \theta-\bm \theta')}{(\bm \theta-\bm \theta')^T\bm V(\bm \theta^*)(\bm \theta-\bm \theta')}
+O(||\bm \theta-\bm \theta'||^{k-2}) \nonumber \\
&=& 1+\frac{(\bm \theta-\bm \theta')^T(\bm V(\bm \theta')-\bm V(\bm \theta^*))(\bm \theta-\bm \theta')}{(\bm \theta-\bm \theta')^T\bm V(\bm \theta^*)(\bm \theta-\bm \theta')}+O(||\bm \theta-\bm \theta'||^{k-2})
\end{eqnarray*}
Now $\bm V(\bm \theta)$ is continuous, so one can lower and upper
bound the second summand on $Q$. It converges to zero, and hence the
bounds converge towards each other, when the size of the hypercube
shrinks (and with it $\bm \theta^* \rightarrow \bm \theta'$). Upper and lower
bounding $O(||\bm \theta-\bm \theta'||^{k-2})$ implies the desired
result. $\Box$
\vspace{0.5cm}
\section{Proof of Theorem 2}
\label{sec:the2}
Consider the distance metric $d^*(\bm \gamma, \bm \gamma_0)=d(h(\bm
\gamma),h(\bm \gamma_0))$. To show invariance of the proposed
procedure, the uniform distribution derived from $d^*$ needs to be
$p(\bm \gamma) \propto |\det(\bm H(\bm \gamma))|\sqrt{\det(\bm V(h(\bm
\gamma)))}$, which is the distribution derived from $p(\bm \theta)
\propto \sqrt{\det(\bm V(\bm \theta))}$ using a change of variables.
A second order Taylor expansion of $d^2(h(\bm \gamma),h(\bm
\gamma_0))$ in $\bm \gamma_0$ leads to an approximation of the form
$(\bm \gamma-\bm \gamma_0)^T\bm M(\bm \gamma_0)(\bm \gamma-\bm
\gamma_0),$ where the $(i,j)$ element of $\bm M(\bm \gamma)$ is given by
$\bm M(\bm
\gamma)_{(i,j)}=\sum_{l=1}^p\sum_{k=1}^p\frac{\partial^2}{\partial
\theta_k \partial \theta_l} m(\bm \theta)\frac{\partial}{\partial
\gamma_i}h_k(\bm \gamma)\frac{\partial}{\partial \gamma_j}h_l(\bm
\gamma)+ \sum_{k=1}^p\frac{\partial}{\partial \theta_k}m(\bm
\theta)\frac{\partial^2}{\partial \gamma_i \partial \gamma_j}h_k(\bm
\gamma)$, where $m(\bm \theta)=d^2(\bm \theta,\bm \theta_0)$ and $\bm
\theta=h(\bm \gamma)$. When evaluating this expression at the expansion point,
the second summand vanishes, as the gradient of $m$ is zero there. Hence one obtains
$\bm M(\bm \gamma)=\bm H(\bm \gamma)^T\bm V(h(\bm \gamma))\bm H(\bm
\gamma)$, which results in the density $p(\bm \gamma) \propto
\sqrt{\det(\bm H(\bm \gamma)^T\bm V(h(\bm \gamma))\bm H(\bm
\gamma))}= |\det(\bm H(\bm \gamma))|\sqrt{\det(\bm V(h(\bm \gamma)))}.$ $\Box$
\end{appendix}
\bibliographystyle{biom}
\section{Introduction}
The COVID-19 pandemic started in March 2020 and has killed 316,844 people in the US as of December 21, 2020\footnote{\url{https://web.archive.org/web/20201222043017/https://covid.cdc.gov/covid-data-tracker/\#cases_casesper100klast7days}}.
Contact tracing is widely known as a key strategy to slow the spread of infectious diseases such as COVID-19.
It involves identifying who may have been exposed to an infected person and helping exposed people take protective measures at the right time~\cite{Prioriti31:online}.
The conventional approach to contact tracing relies on manual investigation, which cannot keep up with the rising cases during the global COVID-19 outbreak~\cite{SurveyOf88:online, ContactT19:online}.
Hence, smartphone-based contact-tracing apps have been proposed as a complementary solution to help scale up the contact tracing process~\cite{troncoso2020decentralized, chan2020pact, rivest2020pact}.
The effectiveness of contact-tracing apps is contingent on a critical fraction of the population installing and using the app~\cite{hinch2020effective, ferretti2020quantifying}.
However, deployed contact-tracing apps have suffered from low adoption rates (from 21.6\% in Australia to 0.2\% in the Philippines)~\cite{COVID1987:online}, with security and privacy concerns blamed as a main culprit~\cite{braithwaite2020automated}.
Although recent research has investigated factors that affect people's willingness to install a contact-tracing app in general~\cite{abeler2020support, redmilesuser, horstmann2020does, walrave2020ready, walrave2020adoption, o2020national, bachtiger2020belief, abuhammad2020covid, von2020covid, hassandoust2020individuals, altmann2020acceptability, saw2020predicting, simko2020covid, kostka2020times, trang2020one, zhang2020americans}, some important aspects remain unclear. In particular, we aim to focus on three fundamental issues:
First, the design of contact-tracing apps lends itself to multiple choices featuring different trade-offs between security/privacy risks and public health benefits~\cite{baumgartner2020mind, redmilesuser, cho2020contact, ahmed2020survey, martin2020demystifying, shubina2020survey}.
Researchers have conducted choice-based conjoint studies to measure user preferences for different configurations of COVID-19 contact-tracing apps for the UK population \cite{horvath2020citizens, wiertz2020predicted} and the U.S. population \cite{li2020decentralized, zhang2020americans}.
This method emulates a situation where users have to choose between app designs, but the nature of contact-tracing apps means that users in a certain region can only choose to install or not install a single designated app\footnote{Currently, every country/region only has one active version of contact-tracing app~\cite{bay2020bluetrace, Howtoget92:online}}.
Second, prior research in contact-tracing apps focuses solely on measuring people's general intentions to \textit{install} the app~\cite{abeler2020support, redmilesuser, horstmann2020does, walrave2020ready, walrave2020adoption, o2020national, bachtiger2020belief, abuhammad2020covid, von2020covid, hassandoust2020individuals, altmann2020acceptability, saw2020predicting, simko2020covid, kostka2020times, trang2020one, zhang2020americans}. However, app installation intention alone is not sufficient for effective contact tracing, because users must also \textit{actively report cases} and \textit{keep the app installed} in the long run~\cite{IrishBattery:online}.
Third, previous research has conducted qualitative studies to identify reasons why people would or would not install a contact-tracing app~\cite{simko2020covid, li2020decentralized, williams2020public, abeler2020support}, with perceived risks and benefits turning out to be recurring themes.
However, there is a lack of quantitative understanding of how perceived risks and benefits vary with different app designs and across people, and how these variations affect app adoption intentions. As a result, it remains unclear which app designs best reconcile the risk-benefit trade-offs and what the rationales are behind the preferences of different sub-populations.
In this paper, we present a national survey experiment ($N=1963{}$) in the U.S.\ to complement prior findings on the impact of app design choices on app adoption intention for contact-tracing apps. We focus primarily on three research questions:
\begin{description}
\item[RQ1] To what extent do app design choices affect people's adoption intentions about a COVID-19 contact-tracing app?
\item[RQ2] To what extent do individual differences affect people's adoption intentions about a COVID-19 contact-tracing app?
\item[RQ3] How do people's perceived risks and benefits about a contact-tracing app mediate the influence of app design choices and individual differences on the app adoption intention?
\end{description}
In our study, we used a between-subjects factorial design, showing each participant only one solution and asking about their intentions to install and use the app.
This is a better approximation of the choice they will actually face and can therefore lead to a more realistic estimation of how app design differences shape adoption intentions, compared to previous studies that have used a within-subjects approach.
We vary design decisions by controlling four variables: proximity-based contact tracing architecture (i.e., decentralized vs. centralized architecture), location use, app provider, and security risk presentation.
The first three correspond to app design choices that were found to be important in prior research in building privacy-preserving contact-tracing apps.
The fourth variable \textit{security risk presentation} allows us to compare participants' adoption intentions when not primed about any security risks and when primed about one of three major security risks of contact-tracing apps: data breach risk, secondary data use risk, and the re-identification risk of COVID-19 positive users.
We also asked participants to answer questions about personal characteristics (prosocialness, COVID-19 risk perceptions, general privacy concerns, and technology readiness) and demographic information (e.g., age, gender) in order to analyze the effects of individual differences on adoption intentions.
Our study resulted in a number of key findings, including:
\begin{itemize}
\item 58.9\% of people reported that they at least somewhat agreed to install the app, which is similar to prior work's estimations~\cite{li2020decentralized, Washingt23:online}. However, only 41.7\% of people reported they at least somewhat agreed that most people would install this app, which shows that U.S.\ people hold an overly pessimistic attitude towards the adoption of contact-tracing apps. 76.2\% of people reported that they at least somewhat agreed to report to the app if they test positive. This suggests that people are more amenable to using contact-tracing apps and contributing their data when they test positive for COVID-19.
\item App design choices had very small effects on all five aspects of app adoption intention (e.g., install the app, report positive cases, keep the app installed). People were significantly more inclined to install apps that collect location than apps that do not, due to the additional benefits from the location data (e.g., for analyzing infection hotspots). Among the three security risks we tested, all three increased users' perceived security and privacy risks, while only the secondary data use risk significantly reduced adoption intention.
\item Individual differences had large effects on all five aspects of app adoption intention. Older people, females, and essential workers were significantly less inclined to install a COVID-19 contact-tracing app, while Hispanics, people with higher household income, frequent public-transit users during the pandemic, and people living in urban areas were significantly more inclined to install a COVID-19 contact-tracing app.
\item Certain app design choices could exacerbate the difference in adoption intention due to individual differences, which could lead to potentially unbalanced adoption for certain sub-populations. For example, people living in urban areas showed similar acceptance of state health authorities as the app provider and of a large tech company as the app provider, while people living in rural areas showed much lower acceptance of a large tech company than of state health authorities.
\item The perceptions about the app's benefits and how much adoption the app can achieve played a more important role in determining one's intention to install a contact-tracing app than perceptions about security and privacy risks.
\end{itemize}
\section{Related Work and Research Questions}
In this section, we present an overview of the contact-tracing app design space that we are studying in the survey experiment, drawing on both research proposals and industry frameworks (e.g., the Google/Apple Exposure Notification API) and review findings of prior work to introduce our research questions.
\subsection{Contact-Tracing App Adoption Challenge}
\label{sec:app_adoption_aspects}
A contact-tracing app needs widespread adoption to work~\cite{hinch2020effective, ferretti2020quantifying}.
Specifically, the installation rate has been widely used as a success metric of contact-tracing apps~\cite{WhyArent70:online}, and previous research has focused on estimating the percentage of people that will install contact-tracing apps~\cite{redmilesuser, Washingt23:online} and the factors that affect people's willingness to install them~\cite{li2020decentralized, zhang2020americans, horvath2020citizens, wiertz2020predicted, simko2020covid}.
However, for continued contact tracing, users need to keep the app installed and actively report if they test positive~\cite{FrancesC22:online}.
Some evidence has demonstrated that long-term use of the app and honest reporting of positive cases could be impeded by usability concerns (e.g., shorter battery life~\cite{Covid19a10:online}) and privacy concerns (e.g., the app could remain as a surveillance tool after the pandemic~\cite{zhang2020americans}).
Note that the usability and privacy issues vary greatly among different app designs.
To provide a more comprehensive understanding of the factors that affect the adoption intentions of contact-tracing apps, we measure five \textit{outcome variables} covering different aspects of adoption in our survey design and analysis: \circled{1}~the general app installation intention, \circled{2}~whether to report to the app if the user tests positive for COVID-19, and whether to keep the app installed \circled{3}~when the battery drains faster, \circled{4}~when COVID-19 cases are steadily decreasing, and \circled{5}~when a vaccine becomes available.
\subsection{Effects of App Design Choices on Contact-Tracing App Adoption Intentions}
\label{sec:design_space_overview}
Many digital technologies have been proposed and deployed to help combat the pandemic.
In this paper, we focus on smartphone contact-tracing apps that users voluntarily install to complement conventional contact tracing~\cite{PrivacyP48:online}.
Contact-tracing apps are inherently privacy-sensitive as they rely on users' sensitive data such as their contact history and location history to function~\cite{cho2020contact, baumgartner2020mind}.
On the other hand, collecting more data can improve the accuracy of the automated contact tracing results~\cite{redmilesuser} and provide more information to health workers~\cite{ivers2020can, horvath2020citizens}.
To tackle this risk-benefit trade-off issue, researchers have proposed technical solutions for privacy-preserving contact tracing.
In the following, we introduce two main design dimensions of contact-tracing apps: Proximity-based contact tracing and Location-based contact tracing. Then we discuss two other factors related to contact-tracing app design: app providers and security risks.
Research questions proposed in this subsection are extensions of RQ1: ``\textit{To what extent do app design choices affect people’s adoption intentions about a COVID-19 contact-tracing app?}''
\subsubsection{Proximity-Based Contact Tracing}
Most contact-tracing apps offer Bluetooth Low Energy (BLE)-based proximity tracking to notify people who have recently come into close contact with people who test positive for COVID-19~\cite{redmilesuser, ahmed2020survey, martin2020demystifying, shubina2020survey}.
In March 2020, Singapore created the first COVID-19 contact-tracing app using a \textit{centralized} architecture which completes the contact-tracing process on the server end~\cite{bay2020bluetrace}.
This approach can lead to severe security risks because users' identities (e.g., phone numbers) are associated with their COVID-19 exposure status~\cite{bay2020bluetrace, cho2020contact}.
Therefore, many researchers have proposed \textit{decentralized} architectures that can fulfill the fundamental need of sending exposure notifications to people who might be infected, with minimal data shared with a central entity~\cite{troncoso2020decentralized, chan2020pact, rivest2020pact}.
This allows users to remain anonymous from the central server, but there is still a risk that other app users can identify the infected user they were exposed to by installing a modded app that logs additional information such as locations~\cite{ahmed2020survey, troncoso2020decentralized} along with the exposure history.
Because the contact-tracing process is completed on the users' phones, the central server does not know how many exposure notifications were sent to users and how users reacted to them.
This makes it difficult to evaluate the efficacy of the system and integrate it with the conventional contact tracing to facilitate further testing and quarantine processes~\cite{ivers2020can, horvath2020citizens}.
That being said, Google and Apple used this architecture in their Google-Apple Exposure Notification (GAEN) framework~\cite{Exposure87:online, PrivacyP48:online}, which has become the most prevalent way of building contact-tracing apps in the U.S.~\cite{Howtoget92:online}.
Researchers have also proposed \textit{privacy-preserving centralized} contact-tracing architectures~\cite{peppptdo31:online, castelluccia2020robert}. Like decentralized architectures, these allow users to remain anonymous from the central server.
Because the contact-tracing process is completed on the server end, the central server can track when and how many exposure notifications are sent out to help measure the performance of the system and integrate with the conventional contact tracing.
However, it is still possible for app providers to infer the identities of users using the anonymized contact history shared with the server~\cite{troncoso2020decentralized}.
This system could also suffer from the re-identification risk under a Sybil attack, namely users of this app can narrow down the scope of infected users they were exposed to by registering multiple accounts~\cite{troncoso2020decentralized}.
\citet{li2020decentralized} conducted a choice-based conjoint study of similar design choices and found that users preferred the centralized architecture.
However, they did not investigate the \textit{privacy-preserving centralized} architecture, which serves as a middle ground between the two extremes. Moreover, their description highlighted the re-identification risk of the decentralized architecture but did not mention other risks, such as data breaches, to which a centralized architecture is more susceptible, which could bias users' decisions.
In our study, we examine users' preferences and feelings about these three mechanisms of proximity-based contact tracing described above.
\begin{description}
\item[RQ1.1] To what extent do different proximity-based contact tracing designs (1. decentralized architecture, 2. centralized architecture using anonymized identifiers, 3. centralized architecture using real identities) affect people's intentions to adopt a COVID-19 contact-tracing app?
\end{description}
\subsubsection{Location Use}
Infected people's location histories are useful for contact tracing, especially for tracing indirect contact (e.g., spread through shared surfaces or aerosols in public spaces), which cannot be captured by proximity-based contact-tracing apps~\cite{culler2020covista, raskar2020apps}.
However, the use of location data in contact-tracing apps has been controversial, and the Google/Apple exposure notification framework even forbids apps built with it from collecting location data~\cite{Exposure87:online, PrivacyP48:online} due to the risks of increased surveillance of all app users and of privacy leaks and stigmatization of infected users~\cite{SouthKor57:online, zhang2020americans}.
Previous research has not reached a consensus on how location use affects users' preferences of contact-tracing apps.
\citet{zhang2020americans} showed that using Bluetooth data for proximity-only contact tracing increases users' acceptance of contact-tracing apps compared to using GPS for location-based tracing, while \citet{li2020decentralized} showed that collecting location data in public areas and providing users with infection hotspot information significantly increased willingness to adopt.
These findings suggest that location data collection may be more acceptable to users when it provides additional benefits over basic proximity-based contact tracing.
Therefore, our study focuses on comparing no location collection (and no additional benefits) with location features that have additional benefits and can still preserve privacy to some extent.
The first feature we study relies on \textit{storing the location data on device} to mitigate privacy risks, such as the Care19 Diary app in South Dakota, USA~\cite{COVIDinS96:online}.
If a user of the app tests positive for COVID-19, they can refer to the location logs tracked by this app to help them recall their recent whereabouts when interviewed by a human contact tracer.
The second feature we study relies on \textit{uploading the location data of infected users} so that infection hotspots recently visited by many infected users can be shared with the public.
Research has shown that users find knowing about infection hotspots useful and may be more willing to install an app that offers this feature~\cite{redmilesuser, li2020decentralized}.
To protect users' privacy, researchers have proposed technologies such as Safe Paths~\cite{raskar2020apps} that enable users to upload anonymized, redacted, and obfuscated location history.
\begin{description}
\item[RQ1.2] To what extent do different location-based contact tracing features (1. no location use, 2. storing location on device as a memory aide, 3. sharing locations with health authorities to analyze infection hotspots) affect adoption intention for a COVID-19 contact-tracing app?
\end{description}
\subsubsection{App Providers}
In addition to different app designs, the organizations that develop and release a contact-tracing app, and therefore have access to users' data, can also have a significant impact on users' intentions to adopt it~\cite{redmilesuser, li2020decentralized, zhang2020americans, horvath2020citizens, wiertz2020predicted, simko2020covid}.
Previous research found that sharing sensitive information such as location and contact history with government agencies in general could lead to a low acceptance of contact-tracing apps~\cite{simko2020covid, horvath2020citizens, wiertz2020predicted}.
In contrast, sharing data with health authorities in particular, such as the CDC in the U.S. and the NHS in the U.K., could improve users' willingness to adopt contact-tracing apps~\cite{li2020decentralized, horvath2020citizens, wiertz2020predicted}.
However, the health-authority-led solution has encountered more challenges in the U.S. than elsewhere.
In the U.S., there is no single national contact-tracing app due to the lack of coordination by the federal government, while the rollout of state-specific apps has been slow due to the lack of technical expertise in state health departments~\cite{WhyArent70:online}. In fact, scholars recommended seeking ``the piecemeal creation of public trust'', and other entities have taken action to help build contact-tracing apps~\cite{blasimme2020s}.
For example, Google and Apple launched the ``Exposure Notifications Express'' project, which integrates contact tracing as an opt-in feature built into their operating systems, removing the need for users to install a contact-tracing app~\cite{AppleGoo94:online}.
Similarly, some U.S. universities have built their own contact-tracing apps to protect their faculty, staff, and students on campus~\cite{CMUCreat95:online, UCCampus71:online, UCBerkel10:online}.
In our study, we examine the impact of the four providers mentioned above on the adoption intention: state-level health authorities, federal-level health authorities, a large tech company (such as Google and Apple), and the users' employer or school.
\begin{description}
\item[RQ1.3] To what extent do different app providers (1. state health authorities, 2. federal health authorities, 3. a large tech company, 4. your employer or school) affect people's adoption intentions of a COVID-19 contact-tracing app?
\end{description}
\subsubsection{Security Risks}
Despite all the technical approaches to protecting users' privacy, the nature of contact-tracing apps means that some security risks are inevitable regardless of the specific app design, though developers rarely mention them in their app descriptions~\cite{baumgartner2020mind, cho2020contact}.
However, very few contact-tracing app studies explicitly explained these security risks to their participants, and those that did focused on a single type of security risk that a particular app design is less protected against.
For example, \citet{li2020decentralized} highlighted the re-identification risk of infected users, to which decentralized apps are more vulnerable, and found that users tended to prefer centralized apps over decentralized ones.
\citet{horvath2020citizens} manipulated whether users were prompted about the data breach risk, to which centralized apps are more vulnerable, and found that the data breach stimuli did not change users' preferences for data storage.
In our research, we want to know how users' awareness of security risks affects their decisions in adopting contact-tracing apps.
Because different app design choices are more vulnerable to different risks, we are also interested in whether they have different levels of impact on adoption intention.
Specifically, we test four conditions, including a baseline condition that does not directly mention any security risk, and three other conditions that prime users about the data breach risk, secondary data use risk, or the re-identification risk.
\begin{description}
\item[RQ1.4] To what extent does priming users about different security risks of a COVID-19 contact-tracing app (1. not priming users about security risks, 2. priming about data breach risks, 3. priming about secondary data use risks, 4. priming about re-identification risks) affect their adoption intentions?
\end{description}
\subsection{Effects of Individual Differences on Contact-Tracing App Adoption Intention}
\label{sec:individual_difference_rqs}
Previous research has demonstrated that individual differences can play an important role in people's willingness to adopt a COVID-19 contact-tracing app.
In our survey, we build upon prior findings to examine how different sub-populations and people who hold different opinions on certain topics in general (e.g., privacy, COVID-19 risks) react to contact-tracing apps.
Research questions proposed in this subsection are extensions of RQ2: ``\textit{To what extent do individual differences affect people’s adoption intentions about a COVID-19 contact-tracing app?}''
\subsubsection{Prosocialness}
Altruism and contributing to the ``greater good'' were identified as important reasons for contact-tracing app supporters~\cite{simko2020covid, williams2020public, redmilesuser}.
Furthermore, \citet{trang2020one} found that emphasizing the societal benefits of the app led to a higher adoption willingness than emphasizing the benefits to users themselves.
Because people who are more prosocial may feel more strongly about contributing to the ``greater good'', marketing these apps to appeal to this aspect could foster adoption and increase the overall usage of contact-tracing apps.
Hence we have the following research question to formally study the effects of prosocialness on adoption intentions:
\begin{description}
\item[RQ2.1] To what extent is one's prosocialness associated with COVID-19 contact-tracing app adoption intentions?
\end{description}
\subsubsection{General Privacy Concerns}
In contrast, the fear of increased surveillance and privacy risks were identified as important reasons for people who did not want to install contact-tracing apps~\cite{simko2020covid, williams2020public, redmilesuser, hassandoust2020individuals}.
As people's perceived privacy risks about contact-tracing apps in particular are likely to be affected by their privacy concerns in general, we have the following research question:
\begin{description}
\item[RQ2.2] To what extent is one's general privacy concern associated with COVID-19 contact-tracing app adoption intentions?
\end{description}
\subsubsection{COVID-19 Risk Perception}
We learned from past pandemics that public perceptions of the risks of a disease have a significant influence on the success of controlling the spread of a highly infectious disease~\cite{dryhurst2020risk, epstein2008coupled}.
However, conspiracy theories about the seriousness of COVID-19 have become barriers to the adoption of measures to control the spread of the disease such as social distancing~\cite{romer2020conspiracy}.
As a result, we have the following research question.
\begin{description}
\item[RQ2.3] To what extent is one's risk perception about COVID-19 associated with COVID-19 contact-tracing app adoption intentions?
\end{description}
\subsubsection{Technology Readiness}
\citet{parasuraman2015updated} divided people into five segments based on their attitudes towards technologies, including \textit{Skeptics}, \textit{Explorers}, \textit{Avoiders}, \textit{Pioneers}, and \textit{Hesitators} and found that they exhibit different intentions and behaviors in adopting new technologies.
Because contact-tracing apps are a new technology designed to complement the conventional manual contact-tracing process, people's intrinsic attitudes towards new technologies could have an essential impact on their adoption of contact-tracing apps.
Therefore, we have the following research question:
\begin{description}
\item[RQ2.4] To what extent is one's attitude towards new technologies associated with COVID-19 contact-tracing app adoption intentions?
\end{description}
\subsubsection{Demographics}
A large body of research has studied the influence of demographic factors such as age~\cite{horstmann2020does, walrave2020ready, hassandoust2020individuals, walrave2020adoption}, gender~\cite{horstmann2020does, walrave2020ready, hassandoust2020individuals, walrave2020adoption}, race~\cite{anderson2020most}, education~\cite{horstmann2020does, walrave2020ready, hassandoust2020individuals, walrave2020adoption}, income~\cite{abuhammad2020covid}, and living area~\cite{abuhammad2020covid} on COVID-19 contact-tracing app adoption intentions in the settings of various countries.
However, their findings are not consistent.
For example, regarding the age factor, some research showed that older people are significantly less willing to adopt contact-tracing apps~\cite{horstmann2020does, von2020covid}, while some found an opposite trend~\cite{hassandoust2020individuals} and some did not find that age had a significant influence~\cite{walrave2020ready, walrave2020adoption, saw2020predicting}.
The difference could be due to differences in culture, political climate, and the stage of the pandemic in different countries when the studies were conducted.
It could also be related to the difference in study design (e.g., within-subjects vs. between-subjects design) and the app description (e.g., a general description vs. a detailed description of the risks and benefits of a specific design).
\begin{description}
\item[RQ2.5] To what extent do demographic factors (e.g., age, gender, race, education, income, living area) correlate with a person's willingness to adopt a COVID-19 contact-tracing app in the U.S.?
\end{description}
Note that certain sub-populations are at higher risk of being exposed to COVID-19, such as essential workers, health workers, and people who need to take public transit frequently during the pandemic.
However, there has been little research about the adoption of contact-tracing apps for these people.
Therefore, our survey asks users to self-report whether they belong to any of the above high-risk sub-populations to answer the following research question:
\begin{description}
\item[RQ2.6] To what extent are people at higher risk of exposure to COVID-19 (e.g., essential workers, health workers, frequent public transit users) willing to install a COVID-19 contact-tracing app?
\end{description}
Although some past work examined people's reactions to different app designs~\cite{horvath2020citizens, wiertz2020predicted, li2020decentralized, zhang2020americans, utz2020apps}, these studies focused on finding designs that are likely to achieve a high adoption rate for the entire population.
We want to take a step further to understand more nuances about how installation intentions of different sub-populations (e.g. men vs. women, older people vs. younger people) are moderated by different app design choices.
Hence, the following research question studies the interaction effect between factors related to app design choices and demographic factors:
\begin{description}
\item[RQ2.7] To what extent do app design choices moderate the intentions to install a COVID-19 contact-tracing app of different sub-populations?
\end{description}
\subsection{Explaining the Effects of App Design Choices and Individual Differences on Installation Intentions Through Risk-Benefit Tradeoffs}
\label{sec:mediation_rqs}
Recent qualitative research has identified the \textit{risks} of increased surveillance and privacy invasion and the \textit{benefits} to society and to the users themselves as two main reasons that explain why a person would install or not install a COVID-19 contact-tracing app~\cite{simko2020covid, li2020decentralized, williams2020public, abeler2020support}.
These findings are in line with the Privacy Calculus theory~\cite{dinev2006extended}, which states that individuals view privacy as a trade-off problem and make data disclosure decisions by weighing the potential risks and potential benefits.
Correspondingly, some prior work has drawn on the Privacy Calculus theory to examine how perceived risks and benefits mediate the relationship between app attributes and adoption intentions.
Specifically, \citet{hassandoust2020individuals} conducted structural equation modeling and found that technical attributes (\textit{anonymity} and \textit{information sensitivity}) could influence adoption intentions by affecting users' risk beliefs.
Despite the theoretical insights, it is hard to link these abstract features to existing app designs and translate the results to practical design recommendations.
In our survey, we use a method similar to the above work~\cite{hassandoust2020individuals} to further explain why certain app design choices and individual differences significantly influence app installation intention.
We also use perceived risks and benefits as mediators, but our independent variables are factors related to app design choices grounded in real-world contact-tracing app designs (Section~\ref{sec:design_space_overview}) rather than abstract features, which can more directly contribute to our understanding of the design space. The following research questions are extensions of RQ3: ``\textit{How do people's perceived risks and benefits about a contact-tracing app mediate the influence of app design choices and individual differences on the app adoption intention?}'':
\begin{description}
\item[RQ3.1] (\textit{Risks}) To what extent do security and privacy risks mediate the relationship between independent variables (i.e., app design choices and individual differences) and the installation intention of a COVID-19 contact-tracing app?
\item[RQ3.2] (\textit{Self benefits}) To what extent does perceived protection to the users themselves mediate the relationship between independent variables (i.e., app design choices and individual differences) and the installation intention of a COVID-19 contact-tracing app?
\item[RQ3.3] (\textit{Societal benefits}) To what extent does perceived effectiveness in slowing the spread of COVID-19 mediate the relationship between independent variables (i.e., app design choices and individual differences) and the installation intention of a COVID-19 contact-tracing app?
\end{description}
Because a contact-tracing app needs widespread adoption to be effective, how much a person believes other people would install the app could affect their perception of the efficacy of the app~\cite{li2020decentralized, utz2020apps}.
Therefore, we also include the factor \textit{perceived adoption} as a potential mediator in our analysis:
\begin{description}
\item[RQ3.4] (\textit{Perceived adoption}) To what extent does perceived adoption of the app mediate the relationship between independent variables (i.e., app design choices and individual differences) and the installation intention of a COVID-19 contact-tracing app?
\end{description}
\section{Methodology}
To answer the research questions and test the hypotheses about factors that affect people's intentions to adopt a COVID-19 contact-tracing app, we conducted a randomized between-subjects survey experiment on a representative sample of the U.S. population ($N=1963$) recruited using a Qualtrics panel.
The sample size was determined before the formal study based on power analysis results (statistical power $1-\beta > 0.8$).
The effect size was estimated using data collected in pilot studies.
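For illustration, the following sketch shows how an a priori power analysis of this kind can be run in Python with \texttt{statsmodels}; it approximates the study as a two-group comparison, and the effect size $d=0.2$ is an assumed value for exposition, not our actual pilot estimate.
\begin{verbatim}
# Sketch of an a priori power analysis for a two-group comparison at
# power 0.8 and alpha 0.05; the effect size d = 0.2 is an assumed
# value for exposition, not the actual pilot estimate.
import math
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.2, alpha=0.05, power=0.8, ratio=1.0)
print(math.ceil(n_per_group))  # about 394 per group
\end{verbatim}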
Our survey was programmed and hosted on Qualtrics.
The data was collected in November 2020.
Our study was reviewed and approved by our institution's IRB.
\subsection{Participants}
We recruited participants based in the U.S. using a Qualtrics online panel.
To obtain a nationally representative sample, we employed a quota-sampling method~\cite{cumming1990probability} for recruiting participants and controlled for gender, age, race, and living region to make the distributions of these variables consistent with U.S. census data.
We required participants to be fluent English speakers, aged 18 or older, and smartphone users.
We obtained 2026 responses that passed all understanding-check and attention-check questions using a Qualtrics online panel\footnote{\url{https://web.archive.org/web/20201120174828/https://www.qualtrics.com/research-services/online-sample/}}.
63 responses were removed because they did not provide a valid ZIP code, yielding a final sample of 1963 unique responses.
The survey was configured to allow a respondent to take the survey only once so they could not re-attempt the survey after failing attention checks.
\subsection{Experiment Design}
\begin{table}[htbp]
\centering
\caption{Summary of the experimental manipulations to participants}
\resizebox{\linewidth}{!}{%
\begin{tabular}{p{0.22\linewidth}p{0.34\linewidth}p{0.5\linewidth}}
\toprule
Manipulations & Conditions & App behaviors and data practices\\
\hline
\multirow{3}{3cm}{Proximity-based contact tracing (RQ1.1)} & Decentralized & Notify exposed users.\newline Contact tracing on device using anonymous IDs. \newline\\
& Anonymized Centralized & Notify exposed users. \newline
Provide health authorities with exposure stats. \newline
Contact tracing on servers using anonymous IDs. \newline\\
& Identified Centralized& Notify exposed users. \newline
Provide health authorities with exposure stats. \newline
Support health workers to contact exposed users. \newline
Contact tracing on servers using real identities.\\
\hline
\multirow{3}{2.8cm}{Location use (RQ1.2)} & No location use & No location history will be collected. \newline\\
& Location on device & Help infected users recall their recent whereabouts. \newline
Location history stored on device. \newline\\
& Location uploaded & Help infected users recall their recent whereabouts. \newline
Help health workers analyze hotspots of infection. \newline
Infected users' location history stored on servers.\\
\hline
\multirow{3}{2.8cm}{App provider (RQ1.3)} & State health authorities & State health authorities built the app. \\
& Federal health authorities & Federal health authorities built the app. \\
& Tech company & A large tech company built the app. \\
& Employer or school & Your employer or school built the app. \\
\hline
\multirow{3}{2.8cm}{Security risk (RQ1.4)} & No security risk & No security risk is mentioned.\\
& Data breach risk & Stored data may be stolen by outside hackers.\\
& Secondary use risk & Data may be stored longer than needed and used for other purposes.\\
& Re-identification risk & Exposed users could guess which infected user led to their exposure.\\
\bottomrule
\end{tabular}}
\label{tab:experimental_design}
\end{table}
As summarized in Table \ref{tab:experimental_design}, our study follows a 3 (Decentralized vs. Anonymized Centralized vs. Identified Centralized) $\times$ 3 (No location use vs. Location on device vs. Location uploaded) $\times$ 4 (State health authorities vs. Federal health authorities vs. Tech company vs. Employer or school) $\times$ 4 (No security risk vs. Data breach risk vs. Secondary data use risk vs. Re-identification risk) factorial design.
Each participant was randomly assigned into one condition and saw the app description created with the selected values of the four variables.
Then they reported their willingness to install and use the app and their perceived risks and benefits of the app.
These manipulations allow us to study the effects of the four factors related to app design choices on the adoption intentions for contact-tracing apps (RQ1.1-1.4, see Section~\ref{sec:design_space_overview}) and how app design choices affect the adoption intentions through perceived risks and benefits (RQ3.1-3.4, see Section~\ref{sec:mediation_rqs}).
We intentionally had each participant see only one app design to emulate the real-world situation when there is only one COVID-19 contact-tracing app available in a region.
This design also reduces the potential fatigue caused by reading and evaluating multiple app designs.
We also asked participants to provide their demographic information, which allows us to study the effects of individual differences on the adoption intentions for contact-tracing apps (RQ2.1-2.4, RQ2.5 and RQ2.6, see Section~\ref{sec:individual_difference_rqs}) and the interaction effects between app design choices and individual differences (RQ2.7, see Section~\ref{sec:individual_difference_rqs}).
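For concreteness, the random assignment can be sketched as follows; the condition labels match Table~\ref{tab:experimental_design}, but the code itself is an illustrative reconstruction (the actual assignment was implemented in Qualtrics).
\begin{verbatim}
# Sketch of the 3x3x4x4 between-subjects assignment: enumerate all
# 144 app-description variants and draw one at random per participant.
import itertools
import random

PROXIMITY = ["Decentralized", "Anonymized Centralized",
             "Identified Centralized"]
LOCATION = ["No location use", "Location on device", "Location uploaded"]
PROVIDER = ["State health authorities", "Federal health authorities",
            "Tech company", "Employer or school"]
RISK = ["No security risk", "Data breach risk",
        "Secondary use risk", "Re-identification risk"]

CONDITIONS = list(itertools.product(PROXIMITY, LOCATION, PROVIDER, RISK))
assert len(CONDITIONS) == 144

def assign_condition(rng=random):
    """Randomly assign one participant to one of the 144 conditions."""
    return rng.choice(CONDITIONS)
\end{verbatim}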
\subsection{Experiment Procedure}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{figures/experiment_procedure.png}
\caption{An illustration of the experiment procedure. Our experiment consists of three main steps. The first step presents the app description and requires participants to correctly answer all quiz questions to proceed. The second step asks participants to report their intentions to install and use the app and their perceived risks, benefits, and community adoption rate of this app. The third step asks questions about the participants themselves, including validated scales that measure personal characteristics such as prosocialness and common demographic questions.}
\label{fig:experiment_procedure}
\end{figure}
Our experiment consisted of three steps as demonstrated in Figure~\ref{fig:experiment_procedure}.
An example of the complete survey can be found at \url{https://github.com/covid19-hcct/HCCT-documents/blob/master/national_survey_design_example.pdf}.
\subsubsection{Step 1: App Description and Quiz Questions} Participants were first presented with a description of the COVID-19 contact-tracing app randomly selected from 144 variations ($3 \times 3 \times 4 \times 4$ factorial design).
We include a screenshot of one of the app descriptions as an example in the appendices (Figure~\ref{fig:example_app_description}).
To ensure participants correctly understood the app's features and data practices, we required participants to answer quiz questions. If the participants gave an incorrect answer, they could go back to read the description again. However, they could not proceed to the next step until they answered all the quiz questions correctly.
This method is borrowed from previous research that had similar experiment design~\cite{wang2020factors}.
All quiz questions were multiple-choice questions except for the questions about security risks, which required participants to type the name of the security risk (ignoring spaces and case differences).
This is because we did not want to prime users in the ``\textit{No security risk}'' condition (control condition) about any security risk from reading the options in the quiz question.
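A minimal sketch of this normalization check (the matching logic is as described above; the function name is illustrative):
\begin{verbatim}
# Sketch of the free-text quiz check: the typed security-risk name is
# matched ignoring spaces and letter case.
def answer_matches(response: str, expected: str) -> bool:
    normalize = lambda s: "".join(s.split()).lower()
    return normalize(response) == normalize(expected)

assert answer_matches("Data Breach Risk", "databreachrisk")
\end{verbatim}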
\subsubsection{Step 2: Questions About the App}
\label{sec:step2}
This step contained two pages, and both pages began with the same app description as in Step 1.
On the first page, participants were asked to answer questions about their intentions to install and use the app.
There were five questions corresponding to the five aspects of app adoption introduced in Section~\ref{sec:app_adoption_aspects}, which covered the general intention to install the app and the intentions to report a positive case to the app and keep the app installed.
On the second page, participants were asked to rate their perceived risks, benefits, and other people's adoption intentions.
We inserted an attention check question after all other questions (``This is an attention check, please go ahead and select strongly agree'').
When clicking on the next page button, the survey would automatically terminate if the participants did not pass the attention check.
At the end of this step, there was an open-ended question that allowed participants to freely express their opinions regarding the contact-tracing app.
\subsubsection{Step 3: Questions About Individual Differences}
After answering app-related questions, participants were asked to fill out validated scales that measure their prosocialness~\cite{caprara2005new}, general privacy concerns~\cite{malhotra2004internet}, technology readiness~\cite{parasuraman2015updated}, and COVID-19 risk perceptions~\cite{dryhurst2020risk}.
The four scales were presented in four different pages in random order.
We inserted an attention check question similar to Step 2 for each scale, and the survey would terminate when participants clicked the next-page button if they had failed the attention check on that page.
Finally, participants were asked to fill out demographic questions (e.g., age, gender, race).
The complete list of demographic factors can be found in Section~\ref{sec:demographics_operationalization}.
\subsection{Operationalization}
\subsubsection{Dependent Variables}
\label{sec:dependent_vars}
We asked participants to report their adoption intentions across five aspects on a 7-point Likert scale (1=strongly disagree, 7=strongly agree) in Step 2 Page 1 (Section~\ref{sec:step2}).
\textbf{Install app:} We asked participants to rate to what extent they agreed or disagreed with the statement ``I will install this app if it becomes available in my area.''
Then we asked participants to assume they had already installed the app and rate to what extent they agreed or disagreed with the following statements:
\textbf{Report positive case}: ``I will report to this app if I test positive.''
\textbf{Shorter battery life}: ``I will keep this app installed even if my phone battery seems to last less long.''
\textbf{Fewer cases}: ``I will keep this app installed even if COVID-19 cases are steadily decreasing in my area.''
\textbf{Vaccine available}: ``I will keep this app installed even if a COVID-19 vaccine becomes widely available.''
\subsubsection{Mediator Variables}
\label{sec:mediator_vars}
We asked participants to rate their perceived risks, benefits, and other people's adoption intentions for the contact-tracing app presented to them on a 7-point Likert scale (1=strongly disagree, 7=strongly agree) in Step 2 Page 2 (Section~\ref{sec:step2}).
The statements for each variable are listed as follows:
\textbf{Security and privacy risks}: ``Installing this app presents a risk to my security and privacy.''
\textbf{Self benefits}: ``Installing this app helps me protect myself against COVID-19.''
\textbf{Societal benefits}: ``This app helps slow the spread of COVID-19 in my area.''
\textbf{Perceived adoption}: ``Most people in my area would install this app if it became available.''
\subsubsection{Independent Variables}
\label{sec:independent_vars}
For factors related to app design choices, each presented contact-tracing app description was coded using four variables.
We chose the condition ``\textit{Decentralized}, \textit{No location use}, \textit{State health authorities} developed, \textit{No security risk} mentioned'' as the reference levels for the four variables because they correspond to how contact-tracing apps were built in the U.S. (as of December 2020): different apps are developed for each state using the Google/Apple Exposure Notification framework, which implements the decentralized architecture and forbids the use of location in the same app.
\textbf{Proximity-based contact tracing}: We operationalize the three types of designs as two indicator variables: \textit{Anonymized Centralized} and \textit{Identified Centralized}, which take the value of 1 for participants in the respective condition and 0 otherwise.
\textbf{Location use}: We operationalize the three types of designs as two indicator variables: \textit{Location on device} and \textit{Location uploaded}, which take the value of 1 for participants in the respective condition and 0 otherwise.
\textbf{App providers}: We operationalize the four app provider options as three indicator variables: \textit{Federal health authorities}, \textit{Tech company}, and \textit{Employer or school}, which take the value of 1 for participants in the respective condition and 0 otherwise.
\textbf{Security risks}: We operationalize the four types of designs as three indicator variables: \textit{Data breach risk}, \textit{Secondary use risk}, and \textit{Re-identification risk} which take the value of 1 for participants in the respective condition and 0 otherwise.
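A minimal sketch of this dummy coding in Python with \texttt{pandas}, assuming illustrative column names and showing two of the four factors:
\begin{verbatim}
# Sketch of the indicator (dummy) coding, shown for two of the four
# factors; column names are illustrative.
import pandas as pd

df = pd.DataFrame({
    "proximity": ["Decentralized", "Anonymized Centralized",
                  "Identified Centralized"],
    "location": ["No location use", "Location on device",
                 "Location uploaded"],
})
# Order each factor's categories so the reference level comes first;
# get_dummies(drop_first=True) then drops exactly that level.
for col, ref in [("proximity", "Decentralized"),
                 ("location", "No location use")]:
    levels = [ref] + sorted(set(df[col]) - {ref})
    df[col] = pd.Categorical(df[col], categories=levels)
dummies = pd.get_dummies(df, drop_first=True)
\end{verbatim}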
For individual differences, we first used validated scales to measure the following personal characteristics of interest:
\textbf{Prosocialness}: We used the 16-item scale developed by \citet{caprara2005new} to measure participants' prosocialness.
The sixteen questions are on a 5-point Likert scale, and a higher score means higher prosocialness.
We define the prosocialness value for each individual as the average rating of the 16 questions so the range of this variable is still $[1, 5]$.
The internal consistency (Cronbach’s alpha) of all 16 questions was 0.94 on our sample which showed high reliability.
\textbf{General Privacy Concerns}: We used the 10-item Internet Users' Information Privacy Concerns (IUIPC) scale developed by \citet{malhotra2004internet} to measure participants' general privacy concerns.
The ten questions are on a 7-point Likert scale, and a higher score means higher privacy concerns.
We define the general privacy concern value for each individual as the average rating of the 10 questions so the range of this variable is still $[1, 7]$.
The internal consistency (Cronbach’s alpha) of all 10 questions was 0.86 on our sample which showed high reliability.
\textbf{COVID-19 Risk Perception}: We developed six questions to measure participants' perceptions about the severity and risks of COVID-19. The questions are adapted from \citet{dryhurst2020risk}'s work on COVID-19 risk perceptions.
The six questions are on a 5-point Likert scale, and a higher score means a higher COVID-19 risk perception.
We define the COVID-19 risk perception value for each individual as the average rating of the six questions, so the range of this variable is still $[1, 5]$.
The internal consistency (Cronbach's alpha) of all six questions was 0.83 on our sample, which showed high reliability.
\textbf{Technology readiness}: We used the 16-item Technology Readiness Index (TRI) 2.0 scales developed by \citet{parasuraman2015updated} to measure participants' predisposition to use new technologies.
The sixteen questions are on a 5-point Likert scale, and higher scores indicate more positive attitudes towards new technologies.
We define the technology readiness value for each individual as the average rating of the 16 questions, so the range of this variable is still $[1, 5]$.
The internal consistency (Cronbach's alpha) of all 16 questions was 0.83 on our sample, which showed high reliability.
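A minimal sketch of the scale scoring and the reliability check, using the standard Cronbach's alpha formula on simulated data:
\begin{verbatim}
# Sketch: each construct score is the mean of its items; reliability
# is checked with Cronbach's alpha (standard formula).
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """items: one column per scale item, one row per participant."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

# e.g., 16 prosocialness items on a 5-point scale (simulated data)
rng = np.random.default_rng(0)
prosocial_items = pd.DataFrame(rng.integers(1, 6, size=(100, 16)))
prosocialness = prosocial_items.mean(axis=1)  # score stays in [1, 5]
alpha = cronbach_alpha(prosocial_items)
\end{verbatim}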
We also asked users to report demographic factors:
\label{sec:demographics_operationalization}
\textbf{Age}: We provided a text input box to allow participants to enter their age.
\textbf{Gender}: We provided five options for participants to select: ``Male'', ``Female'', ``Non-binary'', ``Prefer not to disclose'', and ``Prefer to self-describe''.
In our regression and mediation analysis, we only included participants who identified as ``Male'' or ``Female'' and coded the variable as 1 for ``Female'' and 0 for ``Male'' because the other groups contained too few responses.
\textbf{Race}: We provided 9 options for participants to select: ``American Indian or Alaska Native'', ``Asian'', ``Black or African American'', ``Hispanic or Latino'', ``Middle Eastern'', ``Native Hawaiian or Pacific Islander'', ``White'', ``Prefer not to disclose'', and ``Prefer to self-describe''.
In our regression and mediation analysis, we only modeled the categories \textit{Asian}, \textit{Black or African American}, and \textit{Hispanic or Latino} because the remaining minority groups contained too few responses.
We operationalize this variable using three indicator variables: \textit{Asian}, \textit{Black or African American}, and \textit{Hispanic or Latino}, which take the value of 1 for participants belonging to the corresponding race and 0 otherwise.
\textbf{Education}: We provided 11 options for participants to select: ``No schooling completed'', ``Nursery school to 8th grade'', ``Some high school, no diploma'', ``High school graduate, diploma or the equivalent (for example: GED)'', ``Some college credit, no degree'', ``Trade/technical/vocational training'', ``Associate degree'', ``Bachelor’s degree'', ``Master’s degree'', ``Professional degree'', ``Doctorate degree''.
Because ``Education'' is an ordinal variable, we converted the 11 options to integers 1 to 11, with 1 corresponding to ``No schooling completed'' and 11 to ``Doctorate degree''.
\textbf{Income}: We provided 7 options for participants to select: ``Less than \$25,000'', ``\$25,000 to \$34,999'', ``\$35,000 to \$49,999'', ``\$50,000 to \$74,999'', ``\$75,000 to \$99,999'', ``\$100,000 to \$149,999'', ``\$150,000 or more''.
Because ``Income'' is an ordinal variable, we converted the 7 options to integers 1 to 7, with 1 corresponding to ``Less than \$25,000'' and 7 to ``\$150,000 or more''.
\textbf{Health workers}: We asked participants to self-report whether they were health workers.
This variable takes the value of 1 for participants who answered ``Yes'', and 0 for ``No''.
\textbf{Essential workers}: We asked participants to self-report whether they were essential workers\footnote{We provided a definition of essential worker next to the question: ``workers who conduct operations and services that are essential for critical infrastructure operations, such as health care, food service, and public transportation.''}.
This variable takes the value of 1 for participants who answered ``Yes'', and 0 for ``No''.
\textbf{Transit use}: We asked the question ``How often do you take public transportation \textit{during the pandemic}?'' and provided 5 options: ``Never'', ``Rarely'', ``Monthly'', ``More than once a week'', ``Every day''.
Because ``Transit use'' is an ordinal variable, we converted the 5 options to integers 1 to 5, with 1 corresponding to ``Never'' and 5 to ``Every day''.
\textbf{Urban area percentage}: We asked participants to provide their ZIP code to identify which county they resided in when taking the survey.
Then we used the most recent U.S. Census data (2010)\footnote{\url{https://web.archive.org/web/20201210153214/https://www.census.gov/programs-surveys/geography/guidance/geo-areas/urban-rural/2010-urban-rural.html}} to look up what percentage of the county's area is urbanized and operationalized this variable using that number.
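A minimal sketch of these demographic encodings and the county-level urban-area lookup; all values and table contents below are illustrative placeholders, not the actual study data:
\begin{verbatim}
# Sketch of the ordinal education encoding and the ZIP -> county ->
# urban-percentage lookup; the tables here are in-memory placeholders.
import pandas as pd

EDU_LEVELS = ["No schooling completed", "Nursery school to 8th grade",
              "Some high school, no diploma", "High school graduate",
              "Some college credit, no degree",
              "Trade/technical/vocational training", "Associate degree",
              "Bachelor's degree", "Master's degree",
              "Professional degree", "Doctorate degree"]
edu_to_int = {level: i + 1 for i, level in enumerate(EDU_LEVELS)}

responses = pd.DataFrame({"zip_code": ["15213", "57501"],
                          "education": ["Bachelor's degree",
                                        "Associate degree"]})
zip_to_county = pd.DataFrame({"zip_code": ["15213", "57501"],
                              "county_fips": ["42003", "46065"]})
county_urban = pd.DataFrame({"county_fips": ["42003", "46065"],
                             "urban_pct": [90.1, 45.0]})  # illustrative

responses["education"] = responses["education"].map(edu_to_int)
responses = (responses.merge(zip_to_county, on="zip_code")
                      .merge(county_urban, on="county_fips"))
\end{verbatim}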
\subsection{Statistical Analysis Method}
To answer our research questions, we used two statistical analysis methods: linear regression analysis and mediation analysis.
\subsubsection{RQ1\&2: Linear Regression Analysis}
We created five additive linear regression models to study the main effects of app design choices (RQ1.1-1.4) and individual differences (RQ2.1-2.6) for each outcome variable, and an interactive linear regression model to study the interaction effects between demographic factors and app design choices (RQ2.7) on the installation intentions.
Multicollinearity was not a problem in any of our linear regression analyses because the maximum generalized variance inflation factor ($GVIF^{1/(2 \cdot Df)}$) across our models was 1.21, which is lower than the cutoff value of 2.25.
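For illustration, the sketch below computes ordinary variance inflation factors on a simulated design matrix; the $GVIF^{1/(2 \cdot Df)}$ values we report generalize this diagnostic to factors that span multiple dummy columns.
\begin{verbatim}
# Sketch of a collinearity diagnostic on a simulated design matrix.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
cond = pd.Series(rng.choice(
    ["Decentralized", "AnonCentral", "IdCentral"], size=300))
X = pd.get_dummies(cond).drop(columns="Decentralized").astype(float)
X["privacy_concern"] = rng.normal(4.0, 1.0, 300)
X = sm.add_constant(X)
vifs = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
    index=X.columns)
print(vifs.drop("const"))  # values near 1 indicate no collinearity
\end{verbatim}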
\subsubsection{RQ3: Mediation Analysis Using Structural Equation Modeling}
To answer RQ3 (Section~\ref{sec:mediation_rqs}), we analyzed the mediation effects of the four mediator variables (Section~\ref{sec:mediator_vars}) using structural equation modeling (SEM), following guidelines from prior literature~\cite{rucker2011mediation, preacher2011effect}.
For our mediation analysis, we focus on the main outcome variable ``Install app'' intention rating.
We first selected independent variables that had a significant effect in our additive linear regression model for this outcome variable.
Then we operationalized our mediation analysis using the following regressions:
\resizebox{\linewidth}{!}{
\begin{tabular}{p{0.01\linewidth}p{1.1\linewidth}}
\\
1. & Installation intention rating $\sim$ [Selected independent variables] + Security and privacy risk rating + Self benefit rating + Societal benefit rating + Perceived adoption rating\\
2. & Security and privacy risk rating $\sim$ [Selected independent variables] \\
3. & Self benefit rating $\sim$ [Selected independent variables] \\
4. & Societal benefit rating $\sim$ [Selected independent variables] \\
5. & Perceived adoption rating $\sim$ [Selected independent variables] \\
\end{tabular}}
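A minimal sketch of how the indirect (mediated) effect can be estimated from such regressions via the product-of-coefficients approach, on simulated data with illustrative variable names (our actual analysis fit the regressions jointly via SEM):
\begin{verbatim}
# Sketch of the product-of-coefficients estimate of an indirect
# effect, on simulated data with illustrative variable names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
data = pd.DataFrame({
    "location_uploaded": rng.integers(0, 2, n),
    "self_benefit": rng.normal(5, 1, n),
    "societal_benefit": rng.normal(5, 1, n),
    "perceived_adoption": rng.normal(4, 1, n),
})
data["risk_rating"] = 0.4 * data["location_uploaded"] + rng.normal(4, 1, n)
data["install"] = (-0.3 * data["risk_rating"]
                   + 0.5 * data["self_benefit"] + rng.normal(0, 1, n))

med = smf.ols("risk_rating ~ location_uploaded", data=data).fit()
out = smf.ols("install ~ location_uploaded + risk_rating + self_benefit"
              " + societal_benefit + perceived_adoption", data=data).fit()
a = med.params["location_uploaded"]  # X -> mediator path
b = out.params["risk_rating"]        # mediator -> Y path
indirect = a * b                     # mediated (indirect) effect
direct = out.params["location_uploaded"]
\end{verbatim}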
\section{Results}
\subsection{Descriptive Statistics}
We summarize the demographics of our survey sample ($N=1963$) in Table~\ref{tab:sample_overview}.
Our sample's demographic statistics are consistent with the latest U.S.\ Census data\footnote{
\label{note_us_census}For age and race, we used \url{https://web.archive.org/web/20201220221336/https://www.census.gov/data/tables/time-series/demo/popest/2010s-national-detail.html}.
For education, we used \url{https://web.archive.org/web/20201117011544/https://www.census.gov/content/census/en/data/tables/2019/demo/educational-attainment/cps-detailed-tables.html}.
For income, we used \url{https://web.archive.org/web/20201215160528/https://www.census.gov/data/tables/2020/demo/income-poverty/p60-270.html}.}.
\begin{table}[htbp]
\centering
\caption{Demographic statistics of our survey sample ($N=1963$). Our sample is consistent with the latest U.S. Census results.}
\vspace{0.5em}
\resizebox{\linewidth}{!}{%
\begin{threeparttable}
\begin{tabular}{p{0.49\linewidth}R{0.1\linewidth}R{0.2\linewidth}R{0.2\linewidth}}
\toprule
Demographic Characteristics & N & Sample (\%) & U.S. (\%) \\
\midrule
\textbf{Gender}\\
\hspace{3mm} Female & 994 & 50.6\% & 51.3\%\\
\hspace{3mm} Male & 961 & 49.0\% & 48.7\%\\
\hspace{3mm} Non-binary & 6 & 0.3\% \\
\hspace{3mm} Prefer not to disclose & 1 & $<$0.1\%\\
\hspace{3mm} Prefer to self-describe & 1 & $<$0.1\%\\
\textbf{Age}\\
\hspace{3mm}18--24 & 171 & 8.7\% & 11.7\%\\
\hspace{3mm}25--34 & 473 & 24.1\% & 17.9\%\\
\hspace{3mm}35--44 & 387 & 19.7\% & 16.4\%\\
\hspace{3mm}45--54 & 245 & 12.5\% & 15.6\%\\
\hspace{3mm}55--64 & 285 & 14.5\% & 16.4\%\\
\hspace{3mm}65+ & 402 & 20.5\% & 22.0\%\\
\textbf{Race}\\
\hspace{3mm}American Indian or Alaska Native & 20 & 1.0\% & 1.2\%\\
\hspace{3mm}Asian & 127 & 6.5\% & 6.3\%\\
\hspace{3mm}Black or African American & 235 & 12.0\% & 13.0\%\\
\hspace{3mm}Hispanic or Latino & 243 & 12.4\% & 16.8\%\\
\hspace{3mm}Middle Eastern & 5 & 0.3\%\\
\hspace{3mm}Native Hawaiian or Pacific Islander & 4 & 0.2\% & 0.2\%\\
\hspace{3mm}White & 1289 & 65.7\% & 60.1\%\\
\hspace{3mm}Prefer not to disclose & 11 & 0.5\%\\
\hspace{3mm}Prefer to self-describe & 29 & 1.5\%\\
\textbf{Education}\\
\hspace{3mm} Bachelor's degree or higher & 883 & 45.0\% & 33.3\%\\
\textbf{Household Income}\\
\hspace{3mm}Less than \$25,000 & 377 & 19.2\% & 17.1\%\\
\hspace{3mm}\$25,000 to \$34,999 & 261 & 13.3\% & 8.3\%\\
\hspace{3mm}\$35,000 to \$49,999 & 290 & 14.8\% & 11.7\%\\
\hspace{3mm}\$50,000 to \$74,999 & 365 & 18.6\% & 16.5\%\\
\hspace{3mm}\$75,000 to \$99,999 & 264 & 13.4\% & 12.3\%\\
\hspace{3mm}\$100,000 to \$149,999 & 236 & 12.0\% & 15.5\%\\
\hspace{3mm}\$150,000 or more & 170 & 8.7\% & 18.6\%\\
\bottomrule
\end{tabular}
\begin{tablenotes}[para,flushright]
\item[1] For gender, our source data from the U.S.\ Census only has female and male percentages.\\
\item[2] For race, our source data from the U.S.\ Census does not include Middle Eastern as a separate category.
\end{tablenotes}
\end{threeparttable}}
\label{tab:sample_overview}
\end{table}
\subsubsection{Estimates of Adoption Rate}
For the questions measuring the five aspects of adoption (Section~\ref{sec:dependent_vars}), we grouped the options ``Somewhat agree'', ``Agree'', and ``Strongly agree'' to estimate the percentage of people that would install and use contact-tracing apps.
Table~\ref{tab:adoption_rate_estimates} summarizes the results.
58.9\% of participants reported they at least somewhat agreed that they would install the app, which is close to findings of previous studies with U.S. smartphone users~\cite{li2020decentralized, Washingt23:online}.
When participants were asked about actions they would take if they had installed the app, 76.2\% reported they at least somewhat agreed to report to the app if they tested positive for COVID-19.
Note that this is higher than the estimated install rate, which suggests that some people do not want to be tracked in general but are less concerned about sharing the same information once infected, to facilitate contact tracing.
Then we estimated the long-run install retention rate in three different situations.
The \textit{Fewer cases} situation achieved the highest retention rate (63.7\%) and the \textit{Vaccine} situation the lowest (57.6\%), which matched our expectations.
Nevertheless, it is surprising that more than half of the participants reported they would keep the app installed even when a vaccine becomes widely available.
This may be because some people distrust vaccines, or because they do not view these apps as a big threat and tend not to actively uninstall an app once it is installed.
We also note that the install retention rate if the app drains the battery quickly (58.8\%) is close to the \textit{Vaccine} situation.
This suggests that practical concerns such as the impact on battery life can have a crucial influence on users' decisions, which echoes findings of prior work~\cite{redmilesuser}.
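A minimal sketch of this estimate, collapsing the 7-point ratings into a binary adoption indicator:
\begin{verbatim}
# Sketch: a response counts toward the adoption-rate estimate if the
# 7-point rating is 5 ("Somewhat agree") or higher.
import numpy as np

def adoption_rate(ratings) -> float:
    """ratings: array-like of 7-point Likert responses (1-7)."""
    return float((np.asarray(ratings) >= 5).mean())

print(adoption_rate([7, 6, 5, 4, 2]))  # 0.6
\end{verbatim}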
\begin{table}[htbp]
\caption{Estimates of adoption rate (\%). A participant is considered likely to install and use the app if they chose ``Somewhat agree'', ``Agree'', or ``Strongly agree'' for the corresponding statement (presented in Section~\ref{sec:dependent_vars}). The first column of each variable is the reference condition, and the conditions that have significantly different adoption intentions in our linear regression analyses in Table~\ref{tab:linear_regression_results} are marked in bold.}
\vspace{0.5em}
\resizebox{\linewidth}{!}{%
\begin{threeparttable}
\centering
\begin{tabular}{p{0.15\linewidth} | p{0.05\linewidth}| p{0.07\linewidth}p{0.07\linewidth}p{0.06\linewidth} | p{0.05\linewidth}p{0.055\linewidth}p{0.078\linewidth} | p{0.05\linewidth}p{0.04\linewidth}p{0.04\linewidth}p{0.07\linewidth} | p{0.05\linewidth}p{0.05\linewidth}p{0.05\linewidth}p{0.08\linewidth}}
\toprule
Dependent & \multirow{2}{*}{All} & \multicolumn{3}{c|}{Proximity} & \multicolumn{3}{c|}{Location Use} & \multicolumn{4}{c|}{App Provider} & \multicolumn{4}{c}{Security risk} \\ \cline{3-16}
variable & & Decen. & Ano.C. & Id.C. & None & Local & Upl. & State & Fed. & Tech & Empl. & None & Brea. & 2nd. & Re-id. \\ \midrule
\multicolumn{16}{c}{\% of participants who agreed \textbf{``I will install this app if it becomes available in my area.''}}\\ \midrule
Install & 58.9 & 58.4 & 60.1 & 58.4 & 57.1 & 58.7 & \textbf{61.1}** & 58.9 & 62.0 & 56.5 & 58.3 & 60.1 & 59.4 & \textbf{55.3}* & 60.0 \\ \midrule
\multicolumn{16}{c}{\% of participants who agreed \textbf{``I will report to this app if I test positive.''}}\\ \midrule
Report & 76.2 & 74.6 & 76.1 & 78.0 & 76.0 & 76.0 & 76.6 & 75.5 & 78.4 & 75.0 & 75.9 & 76.9 & 76.5 & 76.7 & 74.5 \\ \midrule
\multicolumn{16}{C{18cm}}{\% of participants who agreed \textbf{``I will keep this app installed} even if [my phone \textbf{battery} seems to last less long/COVID-19 \textbf{cases are steadily decreasing} in my area/a COVID-19 \textbf{vaccine} becomes widely available]''}\\ \midrule
Battery & 58.8 & 58.4 & 60.3 & 57.8 & 58.5 & 58.5 & \textbf{59.4}* & 58.6 & 59.5 & 57.9 & 59.1 & 62.5 & 58.0 & 55.8 & 58.7 \\ \hline
Fewer cases & 63.7 & 62.8 & 66.2 & 62.2 & 63.8 & 63.9 & 63.4 & 64.0 & 65.6 & 63.2 & 61.9 & 66.1 & 64.9 & 59.5 & 64.2 \\ \hline
Vaccine & 57.6 & 57.3 & 58.5 & 57.2 & 56.0 & \textbf{58.4}* & \textbf{58.6}* & 58.6 & 58.9 & 58.4 & 54.6 & 60.4 & 57.8 & 55.1 & 57.1 \\
\bottomrule
\end{tabular}
\begin{tablenotes}[para,flushright]
\item[1]* p$<$ 0.05; ** p$<$0.01;\\
\item[2] Condition names are abbreviated. Decen.: Decentralized; Ano.C.: Anonymized Centralized; Id.C.: Identified Centralized; None: No location use; Local: Location on device; Upl.: Location uploaded; State: State health authorities; Fed.: Federal health authorities; Tech: Tech company; Empl.: Employer or school; None: No security risk; Brea.: Data breach risk; 2nd.: Secondary use risk; Re-id.: Re-identification risk
\end{tablenotes}
\label{tab:adoption_rate_estimates}
\end{threeparttable}}
\end{table}
\begin{table}[htbp]
\caption{Estimates of the percentage of people who at least somewhat agreed that the app has security and privacy risks/self benefits/societal benefits and that other people will install this app (\%). The first column of each variable is the reference condition.}
\vspace{0.5em}
\resizebox{\linewidth}{!}{%
\begin{threeparttable}
\centering
\begin{tabular}{p{0.25\linewidth} | p{0.05\linewidth}| p{0.07\linewidth}p{0.07\linewidth}p{0.06\linewidth} | p{0.05\linewidth}p{0.055\linewidth}p{0.078\linewidth} | p{0.05\linewidth}p{0.04\linewidth}p{0.04\linewidth}p{0.07\linewidth} | p{0.05\linewidth}p{0.05\linewidth}p{0.05\linewidth}p{0.08\linewidth}}
\toprule
Mediator & \multirow{2}{*}{All} & \multicolumn{3}{c|}{Proximity} & \multicolumn{3}{c|}{Location Use} & \multicolumn{4}{c|}{App Provider} & \multicolumn{4}{c}{Security risk} \\ \cline{3-16}
variable & & Decen. & Ano.C. & Id.C. & None & Local & Upl. & State & Fed. & Tech & Empl. & None & Brea. & 2nd. & Re-id. \\ \midrule
\multicolumn{16}{c}{\% of participants who agreed \textbf{``Installing this app presents a risk to my security and privacy.''}}\\ \midrule
S\&P risks & 54.8 & 52.7 & 52.5 & 59.1 & 50.2 & 57.1 & 57.3 & 54.1 & 50.4 & 56.8 & 58.1 & 43.3 & 62.8 & 59.7 & 53.6 \\ \midrule
\multicolumn{16}{c}{\% of participants who agreed \textbf{``Installing this app helps me protect myself against COVID-19.''}}\\ \midrule
Self benefits & 68.2 & 68.6 & 67.0 & 69.0 & 66.2 & 67.7 & 70.8 & 70.2 & 70.9 & 65.4 & 66.4 & 69.8 & 67.1 & 67.2 & 68.9 \\ \midrule
\multicolumn{16}{c}{\% of participants who agreed \textbf{``This app helps slow the spread of COVID-19 in my area.''}}\\ \midrule
Societal benefits & 64.9 & 63.6 & 65.2 & 65.9 & 62.5 & 64.1 & 68.0 & 64.9 & 65.4 & 63.0 & 66.4 & 68.0 & 63.4 & 61.8 & 66.6 \\ \hline
\multicolumn{16}{c}{\% of participants who agreed \textbf{``Most people in my area would install this app if it became available.''}}\\ \midrule
Perceived adoption & 41.7 & 41.4 & 41.3 & 42.4 & 41.3 & 40.7 & 43.2 & 40.0 & 44.7 & 42.4 & 39.7 & 42.9 & 42.7 & 37.4 & 44.1 \\
\bottomrule
\end{tabular}
\begin{tablenotes}[para,flushright]
\item[1] Condition names are abbreviated. Decen.: Decentralized; Ano.C.: Anonymized Centralized; Id.C.: Identified Centralized; None: No location use; Local: Location on device; Upl.: Location uploaded; State: State health authorities; Fed.: Federal health authorities; Tech: Tech company; Empl.: Employer or school; None: No security risk; Brea.: Data breach risk; 2nd.: Secondary use risk; Re-id.: Re-identification risk
\end{tablenotes}
\label{tab:mediator_estimates}
\end{threeparttable}}
\end{table}
\subsubsection{Estimates of Perceived Risks, Benefits, and Community Adoption Rate}
We also calculated estimates of the four mediator variables using the same method as in Table~\ref{tab:adoption_rate_estimates}. Table~\ref{tab:mediator_estimates} presents the result.
More people believed that installing the app could provide benefits to themselves (68.2\%) and to the society (64.9\%) than believed that installing the app would present a risk to their privacy and security (54.8\%).
Interestingly, only 41.7\% of our participants at least somewhat agreed that most people would install this app if it became available, which is much lower than the estimate of their own installation rate (58.9\%).
This suggests people generally hold an overly pessimistic attitude towards the adoption of contact-tracing apps in the U.S.
These estimates also help validate the manipulations of our survey design.
For example, more people assigned to the \textit{identified centralized architecture} condition perceived security and privacy risks than people assigned to the \textit{decentralized architecture} condition (59.1\% vs. 52.7\%);
more people assigned to the two conditions that collect location data perceived security and privacy risks than people assigned to the \textit{no location use} condition (57.1\% and 57.3\% vs. 50.2\%).
More people assigned to the conditions that present one of the three security risks perceived security and privacy risks in the app than people assigned to the \textit{no security risk} condition (data breach risk: 62.8\%, secondary data use risk: 59.7\%, re-identification risk: 53.6\% vs. not priming about risk: 43.3\%).
These results are in line with our expectations when designing the conditions, demonstrating that our between-subjects design effectively conveyed the key characteristics of the app and that participants correctly understood these characteristics before reporting their subjective feelings.
\subsection{Effects of App Design Choices on Adoption Intentions (RQ1)}
\label{sec:RQ1_results}
In RQ1, we are interested in investigating how app design choices such as decentralized vs. centralized architecture, location use, app providers, and the description about security risks in the app affect one's adoption intentions.
According to the linear regression results presented in Table~\ref{tab:linear_regression_results}, location use of the app (RQ1.2) and the disclosure of the secondary data use security risk (RQ1.4) had significant effects on several aspects of adoption intentions.
Conversely, the difference in decentralized vs. centralized architectures (RQ1.1) and app providers (RQ1.3) did not have a significant effect on adoption intentions.
We calculated the $f^2$ scores of factors related to app design choices to measure their effect size for the five outcome variables.
The $f^2$ scores for the five outcome variables are 0.006, 0.004, 0.003, 0.002, and 0.004 respectively, which shows that app design choices had a very small effect on adoption intentions in general\footnote{\label{note:f-squared-rule-of-thumb}The basic rule of thumb for interpreting $f^2$ is: $f^2=0.02$ indicates a small effect; $f^2=0.15$ indicates a medium effect; $f^2=0.35$ indicates a large effect.}.
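To make the effect-size computation concrete, the following is a minimal sketch of Cohen's $f^2$ for a set of predictors, comparing the $R^2$ of the full model with that of a reduced model omitting those predictors; the reduced-model value in the example is hypothetical and serves only to illustrate the calculation.
\begin{verbatim}
# Minimal sketch of Cohen's f^2 for a set of predictors B, given
# R^2 of the full model (A+B) and of the reduced model (A only):
#   f^2 = (R2_full - R2_reduced) / (1 - R2_full)
def cohens_f2(r2_full, r2_reduced):
    return (r2_full - r2_reduced) / (1.0 - r2_full)

# Example with a hypothetical reduced-model R^2:
print(cohens_f2(r2_full=0.329, r2_reduced=0.325))  # ~0.006, very small
\end{verbatim}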
For location use, we can see that the condition \textit{Location on device} and \textit{Location uploaded} both had positive significant effects on some aspects of adoption intentions.
For example, the coefficient of the \textit{Location uploaded} condition for the \textit{Install app} outcome variable is 0.258, which represents an estimated 0.258-point increase in the 7-point installation intention rating when the app collects location data and uploads it to the servers for analyzing infection hotspots, as compared to providing no location features at all.
This suggests that contributing a little more location data in exchange for more useful information could drive more adoption of the app.
We want to note that the two location features follow privacy-by-design principles by only storing and uploading the data if necessary (e.g., only uploading location data of users who test positive rather than all users).
These design considerations should also be taken into consideration when interpreting the positive effects of these features.
For security risk, although prompting participants about all three types of risks consistently increased their perceptions about the risks to their security and privacy caused by installing this app (Table~\ref{tab:mediator_estimates}), only showing the secondary data use risk significantly decreased the installation intentions.
This suggests that one's security and privacy concerns may not be a determinant of their adoption intentions, which is further supported by our mediation analysis (Section~\ref{sec:mediation_results}).
In addition, the difference between the reactions to the secondary data use risk and the other two types of risks provides us with another aspect to view the comparisons between decentralized and centralized architectures.
As centralized architectures require more data to be stored on central servers, app users become more vulnerable to the data breach risk and the secondary data use risk than with decentralized architectures.
Therefore, although there is no significant difference in people's adoption intentions when directly comparing the two architectures, our results suggest that using a decentralized architecture could help reduce security risks that people are more concerned about.
\begin{table}[htbp]
\caption{Linear regression results: The main effects of app design choices and individual differences on app adoption intentions. As described in Section~\ref{sec:independent_vars}, we excluded the data from groups that contained too few responses (e.g., Non-binary gender, Native Hawaiian or Pacific Islander). The sample used for the regression analysis contains 1889{} responses.}
\vspace{0.5em}
\resizebox{\linewidth}{!}{%
\begin{threeparttable}
\centering
\begin{tabular}{p{0.4\linewidth} C{0.25\linewidth}C{0.25\linewidth}C{0.25\linewidth}C{0.25\linewidth}C{0.25\linewidth}}
\toprule
\multirow{3}{*}{Independent variable} & \multirow{2}{*}{Install app} & \multirow{2}{*}{Report positive} &
\multicolumn{3}{c}{Install retention} \\
& & & Battery & Fewer cases & Vaccine\\
& Coef. (S.E.) & Coef. (S.E.)& Coef. (S.E.) & Coef. (S.E.)& Coef. (S.E.)\\\midrule
(Intercept) & -1.663*** (0.428) & -0.531 (0.413) & -2.129*** (0.446) & -1.968*** (0.432) & -2.296*** (0.451) \\
\textbf{Factors related to app design choices} \\
Proximity (Decentralized=0) \\
\hspace{3mm} Anonymized Centralized & -0.067 (0.092) & 0.077 (0.089) & 0.002 (0.096) & -0.014 (0.093) & -0.062 (0.097) \\
\hspace{3mm} Identified Centralized & -0.036 (0.091) & 0.110 (0.087) & -0.002 (0.094) & -0.017 (0.091) & -0.038 (0.095)\\
Location use (No use=0)\\
\hspace{3mm} Location on device & 0.098 (0.092) & 0.065 (0.089) & 0.086 (0.096) & 0.127 (0.093) & 0.210* (0.097)\\
\hspace{3mm} Location uploaded & 0.258** (0.091) & 0.119 (0.088) & 0.189* (0.095)& 0.072 (0.092) & 0.203* (0.096)\\
App provider (State=0) \\
\hspace{3mm} Federal health authorities & 0.132 (0.106) & 0.167 (0.102) & 0.029 (0.110) & 0.084 (0.107)& 0.050 (0.111)\\
\hspace{3mm} Tech company & -0.050 (0.105) & -0.032 (0.101) & -0.038 (0.109) & -0.045 (0.106)&-0.046 (0.111)\\
\hspace{3mm} Employer or school & -0.008 (0.106) & 0.121 (0.103) & 0.091 (0.111)& -0.005 (0.107) & -0.074 (0.112)\\
Security risk (No risk=0) \\
\hspace{3mm} Data breach risk & -0.067 (0.104) & -0.015 (0.100) & -0.039 (0.108) & -0.053 (0.105) & -0.099 (0.109)\\
\hspace{3mm} Secondary use risk & -0.209* (0.104) & -0.046 (0.101) & -0.039 (0.108) & -0.140 (0.106) & -0.197 (0.110)\\
\hspace{3mm} Re-identification risk & -0.141 (0.107) & -0.099 (0.104) & -0.090 (0.112) & -0.085 (0.108) & -0.148 (0.113)\\
\textbf{Factors related to individual differences} \\
Prosocialness & 0.418*** (0.053) & 0.298*** (0.051) & 0.475*** (0.055) & 0.444*** (0.053) & 0.544*** (0.056)\\
General privacy concern & -0.113* (0.047) & -0.003 (0.045) & -0.131** (0.049) & -0.080 (0.047) & -0.125* (0.049)\\
COVID-19 risk perception & 0.682*** (0.047) & 0.681*** (0.046) & 0.659*** (0.049) & 0.718*** (0.048) & 0.697*** (0.050)\\
Technology readiness & 0.689*** (0.067) & 0.561*** (0.064) & 0.607*** (0.070) & 0.623*** (0.067) & 0.596*** (0.070)\\
Age & -0.011*** (0.003) & 0.002 (0.002) & 0.008** (0.003) & 0.001 (0.003) & 0.002 (0.003)\\
Gender (Male=0) \\
\hspace{3mm} Female & -0.362*** (0.082) & -0.184* (0.079) & -0.402*** (0.085) & -0.264** (0.082) & -0.243** (0.086)\\
Race (White=0) \\
\hspace{3mm} Asian & 0.261 (0.155) & 0.256 (0.150) & 0.287 (0.161) & 0.241 (0.156) & 0.415* (0.163)\\
\hspace{3mm} Black/African American & 0.148 (0.119) & 0.114 (0.115) & 0.220 (0.124) & 0.356** (0.120) & 0.512*** (0.125)\\
\hspace{3mm} Hispanic/Latino & 0.276* (0.120) & 0.231* (0.116) & 0.066 (0.125) & 0.346** (0.121) & 0.399** (0.127)\\
Education & 0.034 (0.023) & 0.011 (0.022) & 0.036 (0.024) & 0.048* (0.023) & 0.031 (0.024)\\
Household Income & 0.110*** (0.023) & 0.040 (0.023) & 0.103*** (0.024) & 0.059* (0.024) & 0.057* (0.025)\\
Essential worker & -0.203* (0.098) & -0.316*** (0.094) & -0.225* (0.102) & -0.167 (0.098) & -0.120 (0.103)\\
Health worker & -0.013 (0.151) & -0.202 (0.146) & 0.045 (0.158) & -0.073 (0.153) & -0.007 (0.159)\\
Public transit use & 0.155*** (0.036) & 0.051 (0.035) & 0.205*** (0.038) & 0.129*** (0.037) & 0.219*** (0.038)\\
Urban area percentage & 0.006** (0.002) & 0.002 (0.002) & 0.003 (0.002) & 0.003 (0.002) & 0.004* (0.002)\\\midrule
$R^2$ & 0.329 & 0.226 & 0.278 & 0.282&0.289\\
Adjusted $R^2$ & 0.320 & 0.216 & 0.269 & 0.272& 0.279\\
\bottomrule
\end{tabular}
\begin{tablenotes}[para,flushright]
Note: * p$<$0.05; ** p$<$0.01; *** p$<$0.001
\end{tablenotes}
\label{tab:linear_regression_results}
\end{threeparttable}}
\end{table}
\subsection{Effects of Individual Differences on Adoption Intentions (RQ2)}
\label{sec:RQ2_results}
\subsubsection{Main Effects (RQ2.1-2.6)}
\label{sec:individual_difference_main_effect_results}
In RQ2, we are interested in investigating how individual differences such as demographic factors and other personal characteristics like prosocialness and general privacy concerns affect one's adoption intentions.
Table~\ref{tab:linear_regression_results} presents the results.
The $f^2$ of all factors related to individual differences for the five outcome variables are 0.475, 0.286, 0.380, 0.385, 0.397 respectively, which shows that factors related to individual differences have a very large effect on adoption intentions, especially app installation intentions.
We found that prosocialness, COVID-19 risk perception, and technology readiness all had significant positive effects on the five aspects of adoption intentions.
Conversely, general privacy concern had a significant negative effect on three out of the five aspects of adoption intentions.
These results are consistent with our expectations and answer RQ2.1-2.4.
We found that multiple demographic factors had significant effects on adoption intentions (RQ2.5).
Females had significantly lower intentions to adopt the app in all five aspects, especially for installing the app (Coef. = -0.362, i.e., our model predicts females' 7-point installation intention rating to be 0.362 points lower than males').
High household income had significant positive effects on intentions to install the app and keep the app installed but had no significant effect on intentions to report positive cases.
Higher education had significant positive effects on intentions to keep the app installed when the COVID-19 cases are steadily decreasing.
Older people had significantly lower intentions to install the app, while they had significantly higher intentions to keep the app installed when the battery seems to last less long.
Unlike other demographic factors, the significant effects of race mostly appeared on the intentions to keep the app installed rather than the intentions to install the app.
For example, although only Hispanics had significantly higher intentions to install the app than Whites, Asians, Blacks, and Hispanics all had significantly higher intentions to keep the app installed even if a vaccine becomes widely available.
Note that the causes of the higher intentions to keep the app installed for the three races could be different.
For example, Pew research recently found that Black Americans are less inclined to get vaccinated than other racial and ethnic groups and Asians are the most inclined to get vaccinated~\cite{Intentto82:online}.
This requires further investigation by future work.
Regarding people at higher risk of exposure to COVID-19, we found that frequent public transit users during the pandemic and people who live in more urbanized areas had significantly higher adoption intentions.
To our surprise, essential workers had significantly lower adoption intentions in several aspects, especially for reporting positive cases to the app (Coef.=-0.316).
Our mediation analysis provides more insights into possible causes of this finding (Section~\ref{sec:mediation_results}).
Note that the average app installation intention rating for all essential workers (mean=4.77) is actually slightly higher than that of other participants (mean=4.54).
This may be because essential workers are generally younger (median age=35) than the rest of the sample (median age=49), and we have shown that younger people are more inclined to adopt contact-tracing apps, which counteracted the influence of being an essential worker.
\subsubsection{Interaction Effects (RQ2.7)}
\label{sec:individual_difference_interaction_effect_results}
In RQ2.7, we focus on the \textit{app installation intentions} and study if the same app design could result in different installation intentions for different sub-populations.
This could help us predict the adoption of a certain app design by people who are at different levels of risks of getting exposed to or infected with COVID-19 and analyze the implications of potentially unbalanced app adoption.
To answer this research question, we built an interaction model for the ``\textit{Install app}'' outcome variable, which includes the interactions between factors related to app design choices and demographic factors.
Due to space constraints, we only present the interactions that had significant effects in Figures~\ref{fig:interaction_effects_proximity_demographics}, \ref{fig:interaction_effects_location_demographics}, \ref{fig:interaction_effects_app_provider_demographics}, and \ref{fig:interaction_effects_risk_demographics}.
The complete results can be found in the Appendices (Table~\ref{tab:interaction_regression_results}).
\begin{figure}
\centering
\includegraphics[width=0.45\linewidth]{figures/female_exposure3_interaction.png}
\includegraphics[width=0.45\linewidth]{figures/black_exposure2_interaction.png}
\includegraphics[width=0.45\linewidth]{figures/education_exposure3_interaction.png}
\caption{Significant interaction effects: Proximity-based contact tracing x Demographics. The vertical bars represent the estimated 95\% confidence intervals of the ``\textit{Install app}'' intention rating. Note that we group the eleven education levels into two classes for illustrative purposes.}
\label{fig:interaction_effects_proximity_demographics}
\end{figure}
For the interaction between proximity-based contact tracing design and demographic factors (Figure~\ref{fig:interaction_effects_proximity_demographics}), we found that the effects of different architectures to achieve proximity-based contact tracing are moderated by gender, race, and education level factors.
Specifically, females tended to prefer the identified centralized architecture while males tended to prefer the decentralized architecture (Coef.=0.624, p$<$.01).
The difference in installation intentions between Black and White people was exacerbated when changing from decentralized to anonymized centralized architecture (Coef.=0.662, p$<$.05).
People who received higher education preferred identified centralized architectures to the decentralized architecture (Coef.=0.118, p$<$.05).
\begin{figure}
\centering
\includegraphics[width=0.45\linewidth]{figures/essential_worker_location3_interaction.png}
\includegraphics[width=0.45\linewidth]{figures/health_worker_location2_interaction.png}
\caption{Significant interaction effects: Location use x Demographics. The vertical bars represent the estimated 95\% confidence intervals of the ``\textit{Install app}'' intention rating.}
\label{fig:interaction_effects_location_demographics}
\end{figure}
For the interaction between location use and demographic factors (Figure~\ref{fig:interaction_effects_location_demographics}), we found that the effects of location use are moderated by whether the person is an essential/health worker. Although the ``\textit{Location uploaded}'' feature could drive a significantly higher installation intention rating at the population level, essential workers preferred the ``\textit{No location use}'' condition (Coef.=-0.544, p$<$.05).
Similarly, health workers preferred the ``\textit{No location use}'' condition a lot more than the ``\textit{Location on device}'' condition (Coef.=-0.939, p$<$.05).
\begin{figure}
\centering
\includegraphics[width=0.45\linewidth]{figures/female_controller3_interaction.png}
\includegraphics[width=0.45\linewidth]{figures/urban_area_controller3_interaction.png}
\caption{Significant interaction effects: App provider x Demographics. The vertical bars represent the estimated 95\% confidence intervals of the ``\textit{Install app}'' intention rating. Note that we group the urban area percentage values into two classes for illustrative purposes.}
\label{fig:interaction_effects_app_provider_demographics}
\end{figure}
For the interaction between app provider and demographic factors (Figure~\ref{fig:interaction_effects_app_provider_demographics}), we found that the effects of app provider are moderated by gender and urban area percentage.
Females (Coef.=-0.621, p$<$.01) and people living in less urbanized areas (Coef.=0.0119, p$<$.05) tended to prefer contact-tracing apps provided by the state health authorities to those provided by a large tech company.
\begin{figure}
\centering
\includegraphics[width=0.45\linewidth]{figures/female_risk4_interaction.png}
\includegraphics[width=0.45\linewidth]{figures/health_worker_security3_interaction.png}
\includegraphics[width=0.45\linewidth]{figures/health_worker_security4_interaction.png}
\caption{Significant interaction effects: Security risk presentation x Demographics. The vertical bars represent the estimated 95\% confidence intervals of the ``\textit{Install app}'' intention rating.}
\label{fig:interaction_effects_risk_demographics}
\end{figure}
For the interaction between security risk presentation and demographic factors (Figure~\ref{fig:interaction_effects_risk_demographics}), we found that the effects of security risk are moderated by gender and whether the person is a health worker.
Specifically, females were more discouraged by the secondary data use risk than males (Coef.=-0.471, p$<$.05);
people who are not health workers were more discouraged than health workers by the secondary data use risk (Coef.=0.878, p$<$.05) and the re-identification risk (Coef.=1.05, p$<$.05).
\subsection{Explaining the Effects of App Design Choices and Individual Differences (RQ3)}
\label{sec:mediation_results}
In RQ3, we aim to explain how certain app design choices and individual differences had significant effects on one's app installation intentions through the four mediator variables: security and privacy risks, self benefits, societal benefits, and perceived adoption.
To answer this research question, we conducted a mediation analysis using structural equation modeling and measured the relative magnitude of the indirect effects through the four mediators following methods of previous research~\cite{preacher2011effect}.
Table~\ref{tab:mediation_indirect_effects} presents the ratios of the indirect effects to the total effects for each pair of independent variables and mediator variables.
Figure~\ref{fig:sem} illustrates the significant correlations between independent variables and mediator variables and between mediator variables and the outcome variable installation intention rating.
Our model fit is acceptable according to the Standardized Root Mean Square Residual (SRMR=0.057)\footnote{The rule-of-thumb to interpret SRMR is: SRMR less than 0.05 means the model fits well; SRMR less than 0.08 means the model fit is acceptable.}.
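As a rough illustration of how the ratios in Table~\ref{tab:mediation_indirect_effects} arise, the sketch below computes the proportion mediated from path coefficients; the coefficient values are hypothetical, and the actual estimates come from the fitted structural equation model rather than this simplified single-mediator calculation.
\begin{verbatim}
# Minimal single-mediator sketch (hypothetical coefficients):
# a: IV -> mediator, b: mediator -> DV, c_prime: direct IV -> DV.
def proportion_mediated(a, b, c_prime):
    indirect = a * b
    total = c_prime + indirect
    return indirect / total

print(proportion_mediated(a=0.30, b=0.50, c_prime=0.55))  # ~0.21
\end{verbatim}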
\begin{table}[]
\centering
\caption{This table shows the estimated ratios of indirect effects to the total effect, i.e., what percentage of the total effects of the independent variables on installation intentions were achieved through the four mediator variables. A cell that contains a number indicates that the independent variable (e.g., ``\textit{Location uploaded}'') affects the app installation intentions through the mediator variable (e.g., ``\textit{Self benefits}'') and that the indirect effect is significant. A positive number means the indirect effect is in the same direction as the total effect; a negative number means the indirect effect is in the opposite direction of the total effect, counteracting the other mediators' positive indirect effects. We leave the cell blank (``--'') if the indirect effect is not significant.}
\vspace{0.5em}
\resizebox{\linewidth}{!}{
\begin{tabular}{p{0.35\linewidth} p{0.25\linewidth} p{0.25\linewidth} p{0.25\linewidth} p{0.25\linewidth}}
\toprule
Independent var. & S\&P risks & Self benefits & Societal benefits & Perceived adoption \\
\midrule
Location uploaded & -- & 0.2122 & 0.1778 & --\\
Secondary use risk & 0.1698 & -- & -- & 0.2602\\
Prosocialness & 0.0792 & 0.2242 & 0.1511 & 0.2825\\
COVID-19 risk perception & 0.0428 & 0.2399 & 0.1760 & 0.0910\\
General privacy concern & 0.7689 & -- & -- & --\\
Technology readiness & 0.1548 & 0.2168 & 0.1342 & 0.1333\\
Age & -0.1517 & 0.2077 & -- & 0.2693\\
Female & -- & 0.1535 & 0.1750 & 0.1911\\
Hispanic & -- & -- & -- & --\\
Income & -- & -- & 0.1049 & 0.2161\\
Essential worker & -- & -- & 0.1949 & --\\
Transit use & -0.1329 & 0.1861 & -- & 0.3914\\
Urban area percentage & 0.1136 & 0.1296 & 0.2122 & 0.4582\\
\bottomrule
\end{tabular}}
\label{tab:mediation_indirect_effects}
\end{table}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{figures/sem.pdf}
\caption{An illustration of mediation effects that explain how certain app design choices and individual differences affect people's intentions to install the app. Edges in \textbf{grey} (e.g., \textit{Secondary use risk} $\rightarrow$ \textit{Security risk rating}) indicate positive correlations and edges in \textbf{\color{red} red} (e.g., \textit{Technology readiness} $\rightarrow$ \textit{Security risk rating}) indicate negative correlations. Only edges that have \textit{significant effects} are plotted, and the edge weight and transparency correspond to the standardized coefficients (NOT effect size; effect size is presented in Table~\ref{tab:mediation_indirect_effects}). There is a significant indirect effect between an independent variable and the outcome variable (i.e., install intention rating) through a mediator variable if there is a pathway from the independent variable to the outcome variable through the mediator variable.}
\label{fig:sem}
\end{figure}
Table~\ref{tab:mediation_indirect_effects} shows that when the perceptions of risks and benefits both had significant indirect effects, the effect sizes of the two benefit factors were almost always larger than that of the security and privacy risk factor.
For \textit{Age} and \textit{Transit use}, security risks even had a negative indirect effect, which means that although these two independent factors had significant effects on security risk perceptions, their effects through the other mediator variables were larger and pointed in the opposite direction.
Therefore, we conclude that one's perceptions about the benefits of COVID-19 contact-tracing apps are more powerful determinants of app installation intentions than the perceptions about the security and privacy risks caused by the app.
Furthermore, we learned from Table~\ref{tab:mediation_indirect_effects} that \textit{Perceived adoption} often had an even larger effect size than the two benefit factors.
This result is not surprising, as we already knew that the efficacy of contact-tracing apps largely depends on whether they can achieve widespread adoption.
If a person is not confident that enough people will install a contact-tracing app, they may refrain from installing it themselves.
Figure~\ref{fig:sem} provides more information about two parts of an indirect effect: the correlation between the independent variable (the first-row nodes) and the mediator variables (the four second-row nodes on the right) and the correlation between the mediator variables and the outcome variable \textit{Install app} intention rating.
This could help us gain a better understanding of the results of RQ1 and RQ2.
For example, we can see that the negative correlation between being an essential worker and the installation intention rating could be partly attributed to a decreased perception of societal benefits.
However, we want to note that these four mediators were not able to explain all the effects.
For example, none of them had a significant indirect effect for explaining why Hispanics had significantly higher intentions to install the app than Whites, which requires further investigation by future work.
\section{Discussion}
Our research has several key practical implications on the design, marketing, and deployment of COVID-19 contact-tracing apps in the U.S., many of which could also apply in broader contexts such as strategies to increase adoption for digital technologies to help contain the spread of COVID-19 and building effective contact-tracing apps for infectious diseases in general.
\subsection{Design Contact-Tracing Apps to Match User Preferences}
Overall, our regression analysis showed that app design choices such as decentralized vs. centralized architecture, location use, who provides the app, and disclosures about app security risks had very small effects on participants' adoption intentions of COVID-19 contact-tracing apps (RQ1, Section~\ref{sec:RQ1_results}).
Since the baseline levels in our study represent the current design of contact-tracing apps in the U.S. (State-level, decentralized architecture, location collection not permitted), which features the strictest restrictions in data use, our results convey a positive signal that U.S. mobile users are open to or may even slightly prefer alternative designs that collect more sensitive data in a privacy-friendly way and offer additional benefits.
Participants also showed similar adoption intentions for app providers other than state health authorities, which suggests that using a piecemeal solution that leverages resources from different entities (e.g., Google/Apple OS-level support, apps provided by employers or schools) to complement the systematic yet slow responses from state-level authorities as proposed by~\citet{blasimme2020s} is a viable approach.
The few factors related to app design choices that had significant effects on adoption intentions also point to sweet spots in the current design space of contact-tracing apps for optimizing app adoption.
For the ``\textit{location uploaded}'' feature, although the current GAEN API does not allow collecting location directly in the same app, researchers have proposed creative solutions to gather information about places that infected users visited without logging location traces at an individual level~\cite{culler2020covista}.
The key idea is to treat places as people so the GAEN API could be extended to monitor a place's exposure to infected users and gather anonymized location traces of infected users at an aggregated level.
We consider this work a promising solution as it greatly reduces the security risk when maintaining the benefits that seem to be very attractive to users according to our results.
For the study around the security risk presentation, we learned that people were more concerned about the risk of secondary data use (which is more of an issue for centralized architectures), while less concerned about the risk of re-identification (one of the few security risks that decentralized apps are vulnerable to).
These results provide more empirical evidence to support the current deployment of decentralized architectures for contact-tracing apps.
Furthermore, as our results suggest that priming users about security risks does not reduce their app adoption intentions in most situations, app developers should be more candid about the possible security risks when presenting contact-tracing apps to users to help them make informed decisions.
\subsection{Consider Individual Differences in App Design and Marketing Strategies}
Contrary to the small effects of app design choices, we found individual differences had large effects on adoption intentions of COVID-19 contact-tracing apps.
First, we found that people with higher prosocialness, higher COVID-19 risk perceptions, and higher technology readiness were significantly more inclined to install and use contact-tracing apps.
This shows a marketing opportunity for contact-tracing apps: appeal to people with these characteristics by emphasizing related values such as helping society combat the disease, helping protect oneself and others, and taking advantage of the new technology to alleviate the work of human contact tracers.
Second, we found certain demographic groups had significantly higher or lower adoption intentions than other people regardless of the app design choices (RQ2.1-2.6, Section~\ref{sec:individual_difference_main_effect_results}).
Some of these findings are particularly concerning.
For example, older people had significantly lower intentions to install COVID-19 contact-tracing apps although they are at higher risk for severe illness from COVID-19.
Similarly, essential workers also had significantly lower intentions to install COVID-19 contact-tracing apps although they are at higher risk for exposure to COVID-19.
With our mediation analysis results (Section~\ref{sec:mediation_results}), we speculate that the lower installation intentions of older people could be because they were less tech-savvy and did not feel this technical solution provided much benefit to them.
For essential workers, the mediation analysis only showed a significant indirect effect through a reduction in the perceived societal benefit rating of the app.
We hope future research could conduct qualitative studies regarding the adoption intentions of essential workers in particular to provide better explanations about their preferences and rationales.
Third, we found different demographic groups had different preferences among two app design choices (RQ2.7, Section~\ref{sec:individual_difference_interaction_effect_results}).
Although these interaction effects did not change the general trends of adoption intentions for different demographic groups, we want to caution potential developers of contact-tracing apps about the unequal effects of certain app design choices on different demographic groups.
For example, although introducing location features sometimes increased the adoption intentions of participants in general, many essential workers and health workers seemed to prefer apps that do not collect location over those that do.
We speculate this may be because essential workers face a greater privacy risk, as their jobs require them to go outside and visit more places than other people.
This suggests that if app designers do want to incorporate location features for more public health benefits, enabling these features should be completely voluntary and require users to explicitly opt in.
By protecting these vulnerable groups, we could also help better protect the general population due to the increase in adoption rate of people who are at higher risks of getting exposed.
For people living in rural areas, installation intention was drastically lower for apps developed by a large tech company than for apps developed by their state health authorities.
That is to say, contact-tracing apps developed by a large tech company may not be as effective in rural areas as in urban areas.
Note that in the real world, the app provider may not be as obvious as in the app descriptions of our study, which means that a user's \textit{perceived} app provider could have a similar effect on their adoption intentions as the effects of the app provider tested in our study.
Since current U.S. contact-tracing apps are all built with the GAEN API provided by Google and Apple, it is important for the marketing of the app to clearly convey to users who built the app and who has access to their data.
\subsection{Emphasize Public Health Benefits to Promote Contact-Tracing App Adoption}
The findings of our mediation analysis showed that although both security and privacy risks and public health benefits had significant indirect effects, the indirect effects of perceptions about contact-tracing apps' benefits (i.e., protecting the users themselves and the societal benefit of slowing the spread of COVID-19) were consistently larger than the indirect effects of perceived security and privacy risks.
This suggests that emphasizing the apps' benefits could increase user awareness of these benefits and drive more adoption, while efforts to decrease user awareness of security and privacy risks are likely to have less impact.
This result echoes \citet{trang2020one}'s findings that variations of the app description in terms of the benefits provided by the app had a larger effect size than variations in terms of privacy protection levels.
Accordingly, we derive two recommendations for designing and deploying COVID-19 contact-tracing apps.
First, contact-tracing app designers need to make sure the system works accurately, so that it actually offers key benefits. Opt-in features (e.g., progressive requests of location data) could allow users who are willing to contribute more data to obtain more useful features while enabling users who are more concerned about the security and privacy risks to share only the minimum amount of data.
Second, contact-tracing app design and marketing should also serve an educational purpose and place more emphasis on the public health benefits, both to the users themselves and to society.
In addition to providing clear app descriptions, providing basic statistics using proper visualizations to help users get a better sense of how the app works in real life is also a direction worth exploring.
\subsection{Methodological Limitations}
This research has several limitations.
First, because our study tested hypothetical app designs to achieve a thorough and systematic exploration of the design space, we could only investigate people's adoption intentions rather than their actual behaviors.
Therefore, our findings may not fully represent the corresponding actions people take for a real-world contact-tracing app.
Second, participants read app descriptions that presented more app design and implementation details (even including security risks in some conditions) than they could obtain in real-world situations, which could affect the generalizability of the results. Nevertheless, our findings suggest contact-tracing app providers should be more open about what benefits the app offers, to motivate more adoption, and about what potential risks the app may cause, to give people more transparency without heavily discouraging their interest in using the app.
Third, we only surveyed mobile users aged over 18, so the findings may not generalize to minors or to people who do not use a mobile phone (but who could use other approaches, such as IoT devices or infrastructure, to participate in digital contact tracing~\cite{Polenta_2020,tedeschi2020iotrace,hu2020iotbased}).
Lastly, due to the general limitations of quantitative study methodologies, we could not fully uncover the nuances in people's rationales behind their perceptions and adoption intentions, such as why Hispanic people and Black people had higher adoption intentions in some situations and why essential workers were less willing to install contact-tracing apps.
We hope future work could investigate these aspects specifically.
\section{Conclusion}
In this research, we conducted a national scale survey experiment ($N=1963{}$) in the U.S.\ following a between-subjects factorial design to examine the effects of app design choices and individual differences on the adoption intentions of COVID-19 contact-tracing apps and how participants' perceptions of security and privacy risk, public health benefit, and community adoption rate mediate these effects.
Our results showed that individual differences had a larger impact on participants' app adoption intentions than app design choices, and both app design choices and individual differences affect the adoption intentions more through the perceptions of public health benefit and community adoption rate than perceptions of security and privacy risk.
Based on these findings, we derived practical implications on app design, marketing, and deployment.
Specifically, we identified sweet spots in the contact-tracing design space that could drive higher adoption.
We discussed app design considerations and marketing strategies with regards to individual differences, especially the importance of paying attention to protecting certain vulnerable groups such as essential workers, health workers, and people living in rural areas when designing and promoting the app.
Lastly, we emphasized public health benefit as an effective leverage to promote contact-tracing app adoption.
\section{Acknowledgement}
The authors would like to acknowledge Cori Faklaris, Ruotong Wang, and Laura Dabbish for their help on the study design.
This work is supported in part by CMU Block Center and the National Science Foundation under Grant No. 1801472.
\bibliographystyle{ACM-Reference-Format}
|
2,869,038,155,013 | arxiv | \section{Introduction}
The International Classification of Diseases (ICD) establishes a standardized fine-grained classification system for a broad range of diseases, disorders, injuries, symptoms, and other related health conditions \cite{who}. It is primarily intended for use by healthcare workers, policymakers, insurers and national health program managers. The United States incurs administrative costs in billions of dollars annually arising from a complex billing infrastructure \cite{digital_health}. Specifically, the ICD code assignment is typically a manual process, consuming on average between 25 and 43 minutes per patient depending on the ICD version \cite{perspectives_2014}. It is also prone to errors resulting from inexperienced coders, variation between coders, incorrect grouping of codes or mistakes in the patient discharge summaries. These errors are very costly, with one report estimating that preventable errors in ICD coding cost the Medicare system \$31.6 billion in FY2018 \cite{cmi}.\\\\
Recent work \cite{mullenbach-etal-2018-explainable, Schmaltz2020ExemplarAF, amin} has tried to automate the task of ICD code assignment using deep learning. Typically framing it as a multilabel classification problem, researchers have trained Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), and Transformer models to predict ICD-9 codes from patient discharge summaries. These models have outperformed rule-based approaches and those utilizing conventional algorithms such as Logistic Regression, Support Vector Machines, Random Forests, etc., achieving competitive micro F1-scores in the range of 42\% - 68\%. Amongst these models, those based on CNNs have achieved the best performance.
Neural network models have revolutionized the field of NLP, and state-of-the-art models for various NLP tasks are deep neural networks such as BERT, bidirectional RNNs, or CNN-based methods. Recent works \cite{kurakin2016adversarial,kurakin2016adversarialb,papernot2016practical,zhao2017generating} have shown a particular vulnerability of such deep models to adversarial examples, which are often produced by adding small and imperceptible perturbations to the input data. State-of-the-art NLP models are no exception to such perturbations. \cite{zhang2019adversarial} provides a review of different adversarial attacks and defense strategies in the NLP literature. Based on the granularity of the perturbation, adversarial attack strategies in NLP can be classified into three types: character-level attacks, word-level attacks, and sentence-level attacks. A character-level attack strategy induces noise at the character level. Character-level noise can arise naturally, through typos and misspellings, or through intentional modification by a malicious third party. \cite{Li_2019, eger2019text, ebrahimi2018adversarial} are some of the existing character-level attack strategies in NLP. To accurately model naturally occurring typos, \cite{sun2020advbert} restrict the typo distribution based on the character constraints of a standard English keyboard. We follow this strategy in our work. Furthermore, we assume a white-box setting where the adversary has access to gradients of the loss function with respect to the model inputs. To our knowledge, this is the first work to investigate the effects of adversarial samples in the clinical NLP domain.
\section{Data and Preprocessing}
We used MIMIC-III \cite{mimiciii}, a large open source database comprising information on patients admitted to critical care units of Beth Israel Deaconess Medical Center (Boston, Massachusetts, USA). The database contains de-identified electronic health records with both structured and unstructured data, including diagnostics and laboratory results, medications, and discharge summaries. In this work, we focus on discharge summaries, which encapsulate details pertaining to a patient’s stay. \\\\
Each discharge summary is manually annotated by human coders with multiple ICD-9 codes, describing both the diagnoses and the procedures that the patient underwent. Out of the approx. 13000 possible ICD-9 codes, 8921 (6918 diagnosis, 2003 procedure) are present in our dataset. Following previous work, we merge discharge summaries corresponding to the same patient ID, such that no patient appears twice in our dataset, resulting in 47,427 discharge summaries. This is done to ensure that there is no ‘data leakage’ between train, validation, and test sets. \\\\
The full label setting is quite noisy and suffers from class imbalance. Potential sources of noise include both missed assignments (not annotating all relevant ICD-9 codes) and incorrect assignments (annotating similar but incorrect ICD-9 codes). Consequently, it is relatively trivial to develop an adversarial attack strategy in the full label setting. For instance, one could simply find the keywords corresponding to low frequency labels and then either append or remove them from a discharge summary to alter a machine learning model’s prediction. This strategy will however fail for frequent labels since we expect the model to generalize beyond simply memorizing a few keywords. Therefore, we limited the label set to the 50 most frequent labels and removed discharge summaries which were not annotated with at least one of the labels. The resulting dataset was then split into training, validation and testing sets which contained 8067, 1574, and 1730 discharge summaries, respectively.\\\\
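The sketch below illustrates this label-filtering step; it is an assumed implementation (not our exact code), where \texttt{summaries} pairs each document with its set of ICD-9 codes.
\begin{verbatim}
from collections import Counter

def filter_top_k(summaries, k=50):
    counts = Counter(c for _, codes in summaries for c in codes)
    top = {c for c, _ in counts.most_common(k)}
    kept = []
    for text, codes in summaries:
        codes = codes & top
        if codes:  # keep summaries annotated with >= 1 top-k label
            kept.append((text, codes))
    return kept, top
\end{verbatim}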
We followed the same pre-processing steps as in previous work \cite{mullenbach-etal-2018-explainable}. All tokens without any alphabetic characters were removed. We then lowercased all tokens and replaced those appearing less than three times in the training documents with an ‘UNK’ token.
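A minimal sketch of these preprocessing steps is given below (an assumed implementation, shown for clarity rather than as our exact code).
\begin{verbatim}
import re
from collections import Counter

def tokenize(text):
    # keep tokens with at least one alphabetic character, lowercased
    return [t.lower() for t in text.split() if re.search('[a-zA-Z]', t)]

def build_vocab(train_docs, min_count=3):
    counts = Counter(t for doc in train_docs for t in tokenize(doc))
    return {t for t, c in counts.items() if c >= min_count}

def map_to_vocab(tokens, vocab):
    return [t if t in vocab else 'UNK' for t in tokens]
\end{verbatim}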
\label{headings}
\section{Baseline model}
Our baseline models were the same as \cite{mullenbach-etal-2018-explainable}. Specifically, we used the CNN-based sentence classifier introduced by \cite{kim}, which utilizes a max pooling layer to obtain sentence vector representations; we call this model the Max Pool based CNN. The other model instead utilizes label embeddings to calculate attention weights over word positions. These weights are then used to pool the output of the convolutional layer and calculate the sentence vector representation. This model is referred to as the Attention Pool based CNN.
\label{others}
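For concreteness, the sketch below outlines the Max Pool based CNN in PyTorch; the hyperparameters (embedding size, filter count, kernel size) are illustrative assumptions, not the exact values used in our experiments.
\begin{verbatim}
import torch
import torch.nn as nn

class MaxPoolCNN(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, n_filters=500,
                 kernel_size=4, n_labels=50):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size,
                              padding=kernel_size // 2)
        self.out = nn.Linear(n_filters, n_labels)

    def forward(self, tokens):                # tokens: (batch, seq_len)
        x = self.emb(tokens).transpose(1, 2)  # (batch, emb_dim, seq_len)
        h = torch.tanh(self.conv(x))
        h = h.max(dim=2).values               # max pool over positions
        return self.out(h)                    # one logit per ICD-9 label
\end{verbatim}
The Attention Pool based CNN replaces the max pooling step with per-label attention weights computed from label embeddings over the word positions.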
\section{Adversarial attack strategy}
\label{adv}
We generate adversarial examples based on the following algorithm: given a pre-trained NLP model $f : X \to y$ and a measure of classification quality $q: y \to s$, we are interested in finding perturbations $\delta x$ on the input $X$ such that $q(f(X + \delta x)) \leq q(f(X))$ under the constraint $||\delta x || \leq K$. The latter constraint ensures that the perturbations are small. In our work, we consider perturbations (typos) of four types:
\begin{enumerate}
\item \textbf{Insert} - Insert characters into a word, such as hike $\to$ hlike
\item \textbf{Delete} - Delete characters in a word, such as hike $\to$ hke
\item \textbf{Swap} - Swap two adjacent characters of a word, such as hike $\to$ hkie
\item \textbf{Replace} - Replace a character in a word with any neighboring key on the keyboard, such as hike $\to$ hoke. Here o is a key neighboring i on a standard English keyboard.
\end{enumerate}
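The sketch below enumerates typo candidates of these four types for a given word; the keyboard-neighbour map is a truncated illustrative subset, not the full layout used in our experiments.
\begin{verbatim}
# Assumed, truncated keyboard-neighbour map for illustration:
NEIGHBOURS = {'i': 'uojk', 'o': 'ipkl', 'e': 'wrds'}
LETTERS = 'abcdefghijklmnopqrstuvwxyz'

def typo_candidates(word):
    cands = set()
    for i in range(len(word) + 1):            # insert
        cands.update(word[:i] + c + word[i:] for c in LETTERS)
    for i in range(len(word)):                # delete
        cands.add(word[:i] + word[i+1:])
    for i in range(len(word) - 1):            # swap adjacent characters
        cands.add(word[:i] + word[i+1] + word[i] + word[i+2:])
    for i, ch in enumerate(word):             # replace with neighbouring key
        for c in NEIGHBOURS.get(ch, ''):
            cands.add(word[:i] + c + word[i+1:])
    cands.discard(word)
    return cands
\end{verbatim}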
Given an input sentence $s$ that is tokenized according to the model's tokenizer as $s = (w_1, w_2, \ldots, w_N)$, we compute the partial derivative of the loss with respect to each input token as shown below,
\begin{equation}
\mathcal{G}_{f}\left(w_{i}\right)=\nabla_{w_{i}} \mathcal{L}\left(w_{i}, y\right)
\label{eq1}
\end{equation}
Based on this gradient information, we select an input word $w_i$ to attack. We experiment with two different strategies: a maximum gradient strategy, where we choose the word with the largest gradient magnitude, and a random strategy, where a word is chosen at random. Once a word is chosen, we generate all possible typos of the four types described above and keep the typo that decreases the output score the most, as measured by the score function $q$; here, we use the top5 precision as the score function. The document with the chosen word replaced by its optimal typo is then fed back through this loop, $K$ times in total; a different word is chosen each time to ensure that the final document does not differ from the original by much. We experiment with different choices of $K$. The algorithm is shown in alg.~\ref{alg:adv-ICD9}
\begin{algorithm}[t]
\small
\caption{Adversarial attack for ICD-9 classification}
\label{alg:adv-ICD9}
\begin{algorithmic}[1]
\STATE{\textbf{Input:} Document $X$, ground truth labels $y*$, classifier $f(.)$, budget $K$ and score function $q$}
\STATE{$i \leftarrow 0, X_{best} \leftarrow X$}
\WHILE{$i \leq K$}
\STATE{$c \leftarrow$ Segmentation$(X_{best})$}
\FOR{each token $c_i$ in $c$}
\STATE{Compute gradients of component $c_i$ according to eq.~\ref{eq1}}
\ENDFOR
\STATE{Select the word to attack based on the gradients, according to the chosen strategy}
\STATE{Generate all possible typos for the chosen word}
\STATE{Create a list of documents; each document corresponding to a typo}
\STATE{Find the document instance that decreases the output score the most and assign it to $X_{best}$}
\STATE{$i \leftarrow i+1$}
\ENDWHILE
\end{algorithmic}
\end{algorithm}
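For concreteness, the following Python sketch mirrors Algorithm~\ref{alg:adv-ICD9} for the maximum gradient strategy. The model interface (\texttt{embed}, \texttt{forward\_from\_embeddings}) is an assumption made for illustration rather than a real library API, and \texttt{typo\_candidates} is the generator sketched above.
\begin{verbatim}
def attack(doc_tokens, model, loss_fn, labels, q, budget=10):
    best, attacked = list(doc_tokens), set()
    for _ in range(budget):
        emb = model.embed(best)            # (N, d) token embeddings (assumed)
        emb.retain_grad()                  # keep grads on this non-leaf tensor
        loss_fn(model.forward_from_embeddings(emb), labels).backward()
        grads = emb.grad.norm(dim=1)       # per-token gradient magnitude
        free = [i for i in range(len(best)) if i not in attacked]
        idx = max(free, key=lambda i: grads[i].item())
        attacked.add(idx)
        score = q(model(best), labels)     # current top-5 precision
        for typo in typo_candidates(best[idx]):
            cand = best[:idx] + [typo] + best[idx + 1:]
            s = q(model(cand), labels)
            if s < score:                  # keep the worst-scoring typo
                score, best = s, cand
    return best
\end{verbatim}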
\section{Results}
To the best of our knowledge, \cite{mullenbach-etal-2018-explainable} is the current state-of-the-art for the task of automated ICD-9 code assignment. We re-implemented their best performing models using the AllenNLP framework \cite{allennlp}. The test-set performance of the models for the task of predicting the top-50 most frequent ICD-9 codes from discharge summaries is given in Table~\ref{model-performance}. We found that the Max Pool based CNN outperformed the Attention Pool based CNN on all performance metrics. Further, we found that the computation time for training as well as for generating predictions was much less for the former than for the latter. Therefore, we decided to limit our focus to developing an adversarial attack strategy for the Max Pool based CNN.
We experiment with three different values of the budget, $K = \{10, 20, 30\}$, and two different strategies for selecting the token to attack - maximum gradient and random. The maximum gradient strategy can be used to analyze the robustness of the model to malicious attacks, while the random strategy can be used to simulate natural settings with adversarial examples. Each run on the entire corpus ($1725$ discharge summaries) took $8$ to $16$ hours on a machine with a Tesla K80 GPU. The results are summarized in Table~\ref{adv-results}.
In accordance with our intuition, the max grad strategy performs better than the random strategy. This is because the max grad strategy can produce meaningful perturbations in a large input space (the average size of an input document is $\sim 1400$ tokens). The model's performance doesn't drop much with the random strategy. This suggests that the model is somewhat robust to naturally occurring noise such as typos and misspellings. However, this might change as the budget is increased; due to computational limits, we did not explore budgets beyond $30$. A key result of our work is that, with less than $3\%$ of input tokens modified, the model's performance drops significantly from $0.62$ to $0.377$. This shows the potential vulnerability of this model to malicious attacks. Since only a few tokens are changed, it might be hard to defend against these attacks by training a discriminator to distinguish maliciously modified documents from regular ones.
Tables~\ref{table1} and \ref{table2} show examples of discharge summaries before and after attack, with their top5 labels. It is important to note that, on a few discharge summaries (the last example in both tables), the algorithm increases the top5 precision instead of decreasing it. One can modify the algorithm to ensure that this doesn't happen, which would result in a further drop in precision. Due to time constraints, we were not able to accommodate this modification. Nevertheless, these examples show the brittleness of the baseline model to input tokens.
\begin{table}
\caption{Performance of baseline models on MIMIC-III dataset for predicting the top 50 most frequent ICD-9 codes.}
\label{model-performance}
\centering
\begin{tabular}{lcc}
\toprule
\multicolumn{3}{c}{Model} \\
\cmidrule(r){1-3}
Metric & Max Pool CNN & Label Attention Pool CNN \\
\midrule
\cmidrule(r){1-3}
Macro F1 Score & $0.55$ & $0.49$ \\
Micro F1 Score & $0.63$ & $0.55$ \\
Macro AUC & $0.87$ & $0.83$ \\
Micro AUC & $0.91$ & $0.86$ \\
Top 5 Precision & $0.62$ & $0.54$ \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}
\caption{Results of adversarial attacks on the corpus of discharge summaries of size $1725$.}
\label{adv-results}
\centering
\begin{tabular}{ccc}
\toprule
\multicolumn{3}{c}{Top5 precision} \\
\cmidrule(r){1-3}
Budget & Max grad strategy & Random strategy \\
\midrule
\multicolumn{3}{c}{Baseline ($K = 0 $) $\to 0.62$}\\
\cmidrule(r){1-3}
$10$ & $0.549$ & $0.592$ \\
$20$ & $0.462$ & $0.574$ \\
$30$ & $0.377$ & $0.567$ \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}
\caption{Examples of discharge summaries for budget $10$ under the maximum gradient strategy where the adversarial attack resulted in the largest change in top5 labels. The first two examples cause the predictions to become worse, and the last example shows a case where the adversarial example results in increased top5 precision. Labels in blue are part of the ground-truth labels.}
\label{table1}
\centering
\begin{tabular}{ll}
\toprule
\multicolumn{2}{c}{Maximum gradient strategy, budget $= 10$} \\
\cmidrule(r){1-2}
Top5 precision & Description \\
\midrule
$0.8 \to 0.2$ & \begin{tabular}{@{}l@{}}...unchanged as well. A tracheostomy tube and right subclavian line... \\ ...unchanged as well. A \textcolor{red}{\textbf{tacheostomy ttube}} and right subclavian line...\\ \\ ...performed on. During tracheostomy procedure, pneumothorax occured and \\ chest tube...\\...performed on. During \textcolor{red}{\textbf{tacheostomy} \textbf{proecedure}, \textbf{pneumothroax} \textbf{occurred}} \\ and chest tube...\\ \\\textbf{Top5 labels before attack} - \color{blue} Insertion of Sengstaken tube, \color{red} Pneumonia, \\ \color{blue} Respiratory Ventilation, Venous catheterization, Arterial catheterization \\\\ \textbf{Top5 labels after attack} - \color{red} Pneumonia, Unspecified pleural effusion, \color{blue} Insertion \\ \color{blue} of Sengstaken tube, \color{red} Anemia, Acute post-hemorrhagic anemia\end{tabular}\\
\cmidrule(r){1-2}
$0.8 \to 0.2$ & \begin{tabular}{@{}l@{}}...cholelithiasis complicated hospital course including sepsis w persistent \\ hyperbilirubinemia... \\ ...cholelithiasis complicated hospital course including \textcolor{red}{\textbf{sespis}} w persistent \\ hyperbilirubinemia...\\ \\ ...surgical or invasive procedure - ercp, laparoscopic cholecystectomy, \\ laparoscopic liver biopsy..\\...surgical or invasive \textcolor{red}{\textbf{preocedure}} - \textcolor{red}{\textbf{erccp, laproscopic, cholecysectomy}},\\ laparoscopic liver biopsy..\\\\ ...presentation to hospital1 intubated jaundiced scleral...\\...presentation to hospital1 \textcolor{red}{\textbf{int8bated}} jaundiced scleral...\\ \\\textbf{Top5 labels before attack} - \color{blue} Unspecified acquired hypothyroidism, Insertion \\ \color{blue} of endotracheal tube, Respiratory Ventilation, Enteral infusion of concentrated \\ \color{blue} nutritional substances, \color{red} Continuous invasive mechanical ventilation \\\\\textbf{Top5 labels after attack} - \color{blue} Unspecified acquired hypothyroidism, \color{red} Diagnostic \\ \color{red} ultrasound of heart, Old myocardial infarction, Major depressive disorder,\\ \color{red} Other and unspecified hyperlipidemia.\end{tabular}\\
\cmidrule(r){1-2}
\textcolor{red}{\textbf{$0.2 \to 0.8$}} & \begin{tabular}{@{}l@{}}...higher on tube feeds appreciate nutrition recs tfs changed to...\\ ...higher on \textcolor{red}{\textbf{ttube fees apprciate nutritin res tfts}} changed to..\\ \\ ...for both chf and suspected aspiration pna w iv lasix...\\...for both chf and suspected \textcolor{red}{\textbf{aspirtation}} pna w iv lasix...\\\\ \textbf{Top5 labels before attack} - \color{red} Enteral infusion of concentrated nutritional \\ \color{red} substances \color{blue} Venous catheterization, \color{red} Food / vomit pneumonitis, Urinary tract \\ \color{red} infection, Acute respiratory failure.\\\\ \textbf{Top5 labels after attack} - \color{red} Acute respiratory failure, \color{blue} Venous catheterization, \\ \color{blue} Congestive heart failure, \color{blue} Insertion of endotracheal tube, Unspecified essential \\ \color{blue} hypertension \end{tabular}\\
\bottomrule
\end{tabular}
\end{table}
\begin{table}
\caption{Examples of discharge summaries for budget $20$ under the maximum gradient strategy where the adversarial attack resulted in the largest change in top5 labels. The first two examples cause the predictions to become worse, and the last example shows a case where the adversarial example results in increased top5 precision. Labels in blue are part of the ground-truth labels.}
\label{table2}
\centering
\begin{tabular}{ll}
\toprule
\multicolumn{2}{c}{Maximum gradient strategy, budget $= 20$} \\
\cmidrule(r){1-2}
Top5 precision & Description \\
\midrule
$1.0 \to 0.2$ & \begin{tabular}{@{}l@{}}...cabg, x4, hyperlipidemia, anxiety, hypertension, migraines, gi bleed... \\ ...\textcolor{red}{\textbf{cbg}}, x4, \textcolor{red}{\textbf{hyperlipiddemia, axiety, hypertnesion, migranes}}, gi, \textcolor{red}{\textbf{bleedd}}...\\ \\ ...medical history - coronary artery disease, hyperlipidemia, anxiety...\\...medical history - \textcolor{red}{\textbf{conronary atery}} disease, \textcolor{red}{\textbf{hyperlipdiemia}}, anxiety...\\\\ ...room and underwent coronary artery bypass grafting x4 with left...\\...room and underwent coronary \textcolor{red}{\textbf{bypas gratfting}} x4 with left...\\ \\\textbf{Top5 labels before attack} - \color{blue} Single internal mammary-coronary artery bypass, \\ \color{blue} Extracorporeal circulation auxiliary to open heart surgery, Other and unspecified\\ \color{blue} hyperlipidemia, Atherosclerotic heart disease of native coronary artery without \\ \color{blue} angina pectoris, Unspecified essential hypertension \\\\ \textbf{Top5 labels after attack} - \color{blue} Extracorporeal circulation auxiliary to open heart\\ \color{blue} surgery, \color{red} Enteral infusion \color{red} of concentrated nutritional substances, Transfusion of \\ \color{red} packed cells, Diagnostic ultrasound of heart, \color{red} Atrial fibrillation\end{tabular}\\
\cmidrule(r){1-2}
$1.0\to 0.2$ & \begin{tabular}{@{}l@{}}...to posterior descending artery bronchosccopy reintubated history of present...\\ ...to posterior descending artery bronchosccopy \textcolor{red}{\textbf{reitnubated}} history of present...\\ \\ ...the procedure was hemoptysis requiring intubation he was transferred back...\\...the procedure was hemoptysis requiring \textcolor{red}{\textbf{ibntubation}} he was transferred back...\\\\ ...mitral regurgitation, hypertension, hypercholesterolemia, congestive heart\\ failure, tobacco abuse...\\...\textcolor{red}{\textbf{motral regunrgitation,}} hypertension, hypercholesterolemia, \textcolor{red}{\textbf{congesitve heeart}}\\\textcolor{red}{\textbf{failre, taobacco abusee}} \\\\\\\textbf{Top5 labels before attack} - \color{blue} Extracorporeal circulation auxiliary to open heart\\ \color{blue} surgery, Single internal mammary-coronary artery bypass, Atherosclerotic heart\\ \color{blue} disease of native coronary artery without angina pectoris, Mitral valve disorders,\\ \color{blue} Congestive heart failure \\\\\textbf{Top5 labels after attack} - \color{red} Unspecified essential hypertension, enteral infusion of \\ \color{red} concentrated nutritional substances, \color{blue} Extracorporeal circulation auxiliary to open\\ \color{blue} heart surgery, \color{red} Respiratory Ventilation, Transfusion of packed cells.\end{tabular}\\
\cmidrule(r){1-2}
\textcolor{red}{\textbf{$0.4 \to 1.0$}} & \begin{tabular}{@{}l@{}}...cancer s p resection bilateral renal masses per pcp name...\\ ...cancer s p \textcolor{red}{\textbf{resecton bliateral}} \textcolor{red}{\textbf{reanl mases}} per pcp name..\\ \\ ...morbid obesity, depression, restless leg syndrome...\\...\textcolor{red}{\textbf{mtorbid obestity, deprssion, resltess}} leg syndrome...\\\\ \textbf{Top5 labels before attack} - \color{blue} Congestive heart failure, Chronic obstructive \\ \color{blue} pulmonary disease \color{red} Chronic kidney disease, Hypertensive chronic kidney disease,\\ \color{red} Non-invasive mechanical ventilation\\\\ \textbf{Top5 labels after attack} - \color{blue} Congestive heart failure, Chronic obstructive pulmonary \\ \color{blue} disease, Unspecified essential hypertension, Diabetes mellitus without mention of \\ \color{blue} complication, Urinary tract infection \end{tabular}\\
\bottomrule
\end{tabular}
\end{table}
\section{Discussion}
This work is a first step toward exploring the robustness of NLP models used for automatic ICD-9 code classification. Clinical documents differ from regular documents: they are typically generated in a fast-paced environment and contain more typos and non-standard acronyms than average text. As a result, clinical NLP models are more susceptible to adversarial samples than a regular NLP model trained on a standard English dataset. A key extension of this work would be to use a dictionary learnt from clinical documents and biomedical literature as a defense against these character-level perturbations. Although this might mitigate the decrease in performance, it would not eliminate it. A more rigorous approach would account for typos in the tokenization strategy itself. It is easy to push a word out of vocabulary under fixed-vocabulary embedding schemes such as word2vec and GloVe. Strategies that model words unseen in the training dataset, such as word-piece and byte-pair encoding, also break when typos are introduced, because they learn sub-words from a standard dictionary. Therefore, any defense must address these typos in the fundamental tokenization strategy. An interesting direction would be to learn a word-similarity metric and map an unknown word to a close word in the vocabulary, given the input word and the context in which it appears. Building a robust tokenization strategy would be the first step towards an NLP model robust against character-level adversarial attacks.
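As an illustration of the last idea, the following minimal Python sketch maps a perturbed token to its nearest in-vocabulary word by plain edit distance; the toy vocabulary, the acceptance threshold and the helper names are illustrative assumptions only, not part of the experiments reported here.
\begin{verbatim}
# Toy sketch of a nearest-vocabulary-word defense; the vocabulary,
# threshold and function names are illustrative assumptions only.

def levenshtein(a, b):
    # Classic dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def recover(token, vocab, max_rel_dist=0.34):
    # Map an out-of-vocabulary token to its closest vocabulary word,
    # unless even the best candidate is too far away.
    best = min(vocab, key=lambda w: levenshtein(token, w))
    if levenshtein(token, best) <= max_rel_dist * max(len(token), 1):
        return best
    return None

vocab = {"hyperlipidemia", "anxiety", "hypertension", "migraines"}
for typo in ["hyperlipiddemia", "axiety", "hypertnesion", "migranes"]:
    print(typo, "->", recover(typo, vocab))
\end{verbatim}
Such a purely surface-form mapping ignores context; the context-sensitive variant suggested above would be needed to disambiguate between several equally close candidates.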
\section{Introduction}
Interaction of black holes with other gravitating sources is interesting for purely theoretical reasons (non-linear superposition in a strong-field regime) as well as within models of certain astrophysical sources. A black-hole near field is hard to modify significantly as regards potential and intensity, but its higher derivatives (curvature) may be affected by external sources considerably. Here we try to examine and visualize this effect on a Schwarzschild black hole subjected to the presence of a concentric, static and axially symmetric thin ring described by the Bach--Weyl solution. More specifically, we analyse the behaviour of the simplest invariants given by the metric and its first and second derivatives, in dependence on the parameters of the system, namely the relative mass and radius of the ring. Special attention is given to the black-hole interior, including the vicinity of the central singularity.
In a previous paper \cite{SemerakB-16}, we tried to deform the black-hole field by another black hole, and for that purpose we considered the Majumdar--Papapetrou binary system, made of two extremally charged black holes. Though ``the other black hole" is a very strong source, we found that below the horizon the field is not much deformed within that class of space-times. This is connected with the extreme character of their horizons. Indeed, extreme charges are required as sources of the electrostatic field which just compensates the gravitational attraction; otherwise the holes would fall towards each other or would have to be kept static by even more unphysical struts. Therefore, in the present paper we try to distort a black hole which is far from the extreme state. Without the electrostatic repulsion, the external source has to be supported by pressure (hoop stresses) or by centrifugal force. The simplest configuration of this kind involves a thin ring or disc surrounding the hole in a static and axially symmetric, concentric manner. Such a setting may capture at least some features of the accreting black holes studied in astrophysics, while still allowing for an exact analytical treatment.
In section \ref{metric-functions}, we first compose the total metric and analyse its behaviour at the horizon. Then in section \ref{below-horizon} we extend the metric to the black-hole interior by solving Einstein's equations numerically along null geodesics starting tangentially to the horizon. In section \ref{invariants}, we compute and visualize, on contour plots, the behaviour of the basic invariants in dependence on the parameters of the system, namely the relative mass of the Bach--Weyl ring and its radius. Some more attention is devoted to the Kretschmann scalar and to the regions where it turns negative, in particular to their relation with the Gauss curvature of the horizon (subsection \ref{Kretschmann}). The final section \ref{concluding} concludes with a summary, a brief scan of similar literature, a remark concerning visualization and some further plans. More details on the null geodesics important for extension of the metric inside the black hole are deferred to Appendix \ref{appendix-A} and the question of extension of the Weyl coordinates is treated in Appendix \ref{appendix-B}. Let us stress that when speaking of ``black hole", we everywhere have in mind a section of the 3D horizon given by constant Killing time ($t$).
Note on notation: equations/values valid on the horizon will be denoted by the index `H', $X\stackrel{\rm H}{=} Y$, while expansions valid there will be denoted by an asterisk, $X\stackrel{*}{=} Y$.
The black-hole mass is called $M$, while the ring mass and its Weyl radius are denoted by ${\cal M}$ and $b$, respectively.
The Weyl-radius coordinate will be denoted by $\rho$; below the horizon, where it is pure imaginary, we will introduce $\varrho$ by $\rho=:{\rm i}\varrho$.
We use geometrized units in which $c=1$, $G=1$, index-posed comma/semicolon indicates partial/covariant derivative and usual summation rule is employed. Signature of the space-time metric $g_{\mu\nu}$ is ($-$+++), Riemann tensor is defined according to $V_{\nu;\kappa\lambda}-V_{\nu;\lambda\kappa}={R^\mu}_{\nu\kappa\lambda}V_\mu$
and Ricci tensor by $R_{\nu\lambda}={R^\kappa}_{\nu\kappa\lambda}$.
The cosmological constant is set to zero.
\section{Weyl metric for Schwarzschild plus ring}
\label{metric-functions}
All vacuum static and axially symmetric space-times can be described by the Weyl-type metric
\begin{equation} \label{Weyl-metric}
{\rm d}s^2=-e^{2\nu}{\rm d}t^2+\rho^2 e^{-2\nu}{\rm d}\phi^2
+e^{2\lambda-2\nu}({\rm d}\rho^2+{\rm d}z^2) \,,
\end{equation}
where $t$ and $\phi$ are Killing time and azimuthal coordinates, and the unknown functions $\nu$ and $\lambda$ depend only on cylindrical-type radius $\rho$ and the ``vertical" linear coordinate $z$ which cover the meridional planes (orthogonal to both Killing directions) in an isotropic manner.
Einstein's equations reduce to
\begin{align}
&\nu_{,\rho\rho}+\frac{\nu_{,\rho}}{\rho}+\nu_{,zz}=0 \,, \\
&\lambda_{,\rho}=\rho(\nu_{,\rho})^2-\rho(\nu_{,z})^2 \,, \quad
\lambda_{,z}=2\rho\,\nu_{,\rho}\nu_{,z} \,, \label{Einstein-eqs}
\end{align}
i.e. to the Laplace equation and a simple line integral (which is however only rarely solvable explicitly).
Hence, the potential $\nu$ behaves like in Newtonian theory and adds linearly, whereas the second function $\lambda$ does not ``superpose" that simply. For two sources, with $\nu_{1}$ and $\nu_{2}$ denoting their individual potentials, one can write $\lambda=\lambda_{1}+\lambda_{2}+\lambda_{\rm int}$, where $\lambda_{1}$ and $\lambda_{2}$ describe the first and the second source alone (i.e., they satisfy the above equations with just $\nu_{1}$ and $\nu_{2}$, respectively) and $\lambda_{\rm int}$ is the interaction term which is given by
\begin{align}
\lambda_{{\rm int},\rho}
&=2\rho\left(\nu_{1,\rho}\nu_{2,\rho}-\nu_{1,z}\nu_{2,z}\right), \\
\lambda_{{\rm int},z}
&=2\rho\left(\nu_{1,\rho}\nu_{2,z}+\nu_{1,z}\nu_{2,\rho}\right).
\end{align}
Typically, the potential $\nu$ scales linearly with the source mass, hence $\lambda$ scales with the square of the mass.
We are specifically interested in space-time generated by a Schwarzschild-type black hole surrounded by a thin ring described by the Bach--Weyl solution.
The Schwarzschild solution appears, respectively in the Weyl and Schwarzschild coordinates, as
\begin{align}
\nu_{\rm Schw}&= \frac{1}{2}\,\ln\frac{d_1+d_2-2M}{d_1+d_2+2M} \\
&= \frac{1}{2}\,\ln\left(1-\frac{2M}{r}\right), \\
\lambda_{\rm Schw}&= \frac{1}{2}\,\ln\frac{(d_1+d_2)^2-4M^2}{4d_1 d_2} \\
&= \frac{1}{2}\,\ln\frac{r(r-2M)}{(r-M)^2-M^2\cos^2\theta} \; ,
\end{align}
where
\[d_{1,2}:=\sqrt{\rho^2+(z\mp M)^2}=r-M\mp M\cos\theta \,.\]
Transformation between the coordinates reads
\begin{align}
\rho=\sqrt{r(r-2M)}\,\sin\theta \,, &\quad
z=(r-M)\cos\theta \,; \label{Weyl-Schw} \\
r-M=\frac{d_2+d_1}{2} \,, &\quad
M\cos\theta=\frac{d_2-d_1}{2} \,. \label{Schw-Weyl}
\end{align}
Let us stress that these relations can only be safely used above the horizon (see Appendix \ref{appendix-B}).
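For concreteness, relations (\ref{Weyl-Schw})--(\ref{Schw-Weyl}) are easily checked to be mutually inverse above the horizon; a minimal numerical sketch (the sample point is arbitrary):
\begin{verbatim}
# Round-trip check of the Weyl <-> Schwarzschild relations for r > 2M.
import numpy as np

M = 1.0
r, theta = 3.7, 0.9                    # arbitrary point above the horizon

rho = np.sqrt(r * (r - 2 * M)) * np.sin(theta)
z = (r - M) * np.cos(theta)

d1 = np.sqrt(rho**2 + (z - M)**2)      # = r - M - M*cos(theta) here
d2 = np.sqrt(rho**2 + (z + M)**2)      # = r - M + M*cos(theta) here

print((d2 + d1) / 2 + M - r)                   # ~ 1e-16
print(np.arccos((d2 - d1) / (2 * M)) - theta)  # ~ 1e-16
\end{verbatim}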
It is worth noting that in the case of a Schwarzschild-type centre ($\nu_{1}\equiv\nu_{\rm Schw}$) the field equations for $\lambda$ appear quite simple in Schwarzschild coordinates when expressed in terms of $\lambda_{\rm int}$. Actually, after transforming ($X$ is some quantity)
\begin{align}
X_{,r} &= X_{,\rho}\rho_{,r}+X_{,z}z_{,r}= \nonumber \\
&= X_{,\rho}\;\frac{r-M}{\sqrt{r(r-2M)}}\,\sin\theta+
X_{,z}\cos\theta \,, \nonumber \\
X_{,\theta} &= X_{,\rho}\rho_{,\theta}+X_{,z}z_{,\theta}= \nonumber \\
&= X_{,\rho}\,\sqrt{r(r-2M)}\,\cos\theta-
X_{,z}(r-M)\sin\theta \,, \nonumber \\
\nu_{{\rm Schw},\rho} &= \frac{(d_1+d_2)\,[4M^2-(d_2-d_1)^2]}
{8M\rho\;d_1 d_2} \label{nuSchw,rho} \\
&= \frac{M(r-M)\sin\theta}
{[(r-M)^2-M^2\cos^2\theta]\,\sqrt{r(r-2M)}} \;, \\
\nu_{{\rm Schw},z} &= \frac{d_2-d_1}{2\,d_1 d_2}
= \frac{M\cos\theta}{(r-M)^2-M^2\cos^2\theta} \;,
\label{nuSchw,z}
\end{align}
they lead to
\begin{equation} \label{lambda_int,eqns}
\lambda_{{\rm int},r}=\frac{2M\nu_{2,\rho}}{\rho}\,\sin^2\theta \,,
\quad
\lambda_{{\rm int},\theta}=-2M\nu_{2,z}\sin\theta \,.
\end{equation}
Therefore, if $\nu_2$ depends linearly on the ``external"-source mass (we will call it ${\cal M}$), then $\lambda_{\rm int}$ is linear in it, too, while $\lambda_2$ is quadratic. Hence, in the decomposition of $\lambda$ the ${\cal M}$ parameter appears as
\begin{equation} \label{lambda-decomp}
\lambda=\lambda_{\rm Schw}+\lambda_{\rm int}+\lambda_2
=\lambda_{\rm Schw}+{\cal M}\,\tilde\lambda_{\rm int}+{\cal M}^2\,\tilde\lambda_2 \,,
\end{equation}
where the pure-Schwarzschild term $\lambda_{\rm Schw}$ as well as the tilded functions $\tilde\lambda_2$ and $\tilde\lambda_{\rm int}$ do not depend on ${\cal M}$.
Our ``second" source is a thin ring with Weyl radius $\rho=b$ and mass ${\cal M}$, described by the Bach--Weyl solution
\begin{align}
\nu_{\rm BW} &= -\frac{2{\cal M}K(k)}{\pi l_2} \,, \qquad l_{1,2}:=\sqrt{(\rho\mp b)^2+z^2} \,,
\label{nuBW} \\
\lambda_{\rm BW} &= -\frac{{\cal M}^2}{4\pi^2 b^2\rho} \times \nonumber \\
& \quad\times
\left[(\rho\!+\!b)(E\!-\!K)^2+\frac{(\rho-b)(E\!-\!k'^2 K)^2}{k'^2}\right],
\end{align}
where
\begin{align*}
&K\equiv K(k) := \int_0^{\pi/2}\frac{{\rm d}\alpha}{\sqrt{1-k^2\sin^2\alpha}} \;, \\
&E\equiv E(k) := \int_0^{\pi/2}{\sqrt{1-k^2\sin^2\alpha}}\;\,{\rm d}\alpha
\end{align*}
are complete elliptic integrals of the 1st and the 2nd kind,
with modulus and complementary modulus
\[k^2:=1-\frac{(l_1)^2}{(l_2)^2}=\frac{4b\rho}{(l_2)^2}\;, \qquad
k'^2:=1-k^2=\frac{(l_1)^2}{(l_2)^2} \;.\]
Especially on the axis $\rho=0$, one has $k=0$, $K=E=\pi/2$, so
$\nu_{\rm BW}=-\frac{{\cal M}}{\sqrt{z^2+b^2}}\,$ and $\lambda_{\rm BW}=0$
(the latter must actually hold for {\em any} Weyl solution should the axis be regular).
The solution was derived by \cite{BachW-22} and more recently studied e.g. by \cite{Hoenselaers-95,SemerakZZ-99,DAfonsecaLO-05}.\footnote
{We thank our colleague Pavel \v{C}\'{\i}\v{z}ek for pointing out that we did not give $\lambda_{\rm BW}$ properly in \cite{SemerakZZ-99} and for suggesting a correct form.}
Due to linearity of the Laplace equation, the partial potentials $\nu_{\rm Schw}$ and $\nu_{\rm BW}$ can simply be added, while the total $\lambda$ function has to be found from the total $\nu$ by quadrature.
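For reference, the Bach--Weyl potential (\ref{nuBW}) is straightforward to evaluate numerically. A minimal sketch follows (the parameters are arbitrary and we assume scipy's convention that \texttt{ellipk} takes the parameter $m=k^2$); it also checks the axis value $\nu_{\rm BW}(\rho\!=\!0)=-{\cal M}/\sqrt{z^2+b^2}$ quoted above.
\begin{verbatim}
# Sketch: Bach--Weyl potential outside the horizon, checked against
# its axis limit nu_BW(rho=0,z) = -Mring/sqrt(z^2+b^2).
import numpy as np
from scipy.special import ellipk       # K(m) with parameter m = k^2

def nu_BW(rho, z, b=1.0, Mring=1.0):
    l2 = np.sqrt((rho + b)**2 + z**2)
    m = 4.0 * b * rho / l2**2
    return -2.0 * Mring * ellipk(m) / (np.pi * l2)

print(nu_BW(1e-12, 2.0))               # -> -0.4472... = -1/sqrt(5)
print(-1.0 / np.sqrt(2.0**2 + 1.0**2))
\end{verbatim}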
In Schwarzschild coordinates, the total metric reads \cite{SemerakZZ-99}
\begin{align}
{\rm d}s^2
=& -e^{2\nu}{\rm d}t^2+r(r-2M)\,e^{-2\nu}\sin^2\theta\,{\rm d}\phi^2+ \nonumber \\
& +\left[(r\!-\!M)^2\!-\!M^2\cos^2\theta\right]
e^{2\lambda-2\nu}\!
\left[\frac{{\rm d}r^2}{r(r\!-\!2M)}+{\rm d}\theta^2\right] \nonumber \\
=& -\!\left(1-\frac{2M}{r}\right)e^{2\nu_{\rm ext}}{\rm d}t^2
+\frac{e^{2\lambda_{\rm ext}-2\nu_{\rm ext}}}{1-\frac{2M}{r}}\,{\rm d}r^2+ \nonumber \\
& +r^2 e^{-2\nu_{\rm ext}}\!
\left(e^{2\lambda_{\rm ext}}{\rm d}\theta^2+\sin^2\theta\,{\rm d}\phi^2\right),
\label{metric}
\end{align}
where in our case $\nu_{\rm ext}(\equiv\nu_1)\equiv\nu_{\rm BW}$,
while $\lambda_{\rm ext}:=\lambda-\lambda_{\rm Schw}=\lambda_{\rm BW}+\lambda_{\rm int}$.
Given that
\[\nu_{\rm BW}=-\frac{2{\cal M}}{\pi M}\,\frac{K(k)}{l_2/M} \;,
\qquad
\frac{\partial\nu_{\rm BW}}{\partial\rho}=\frac{1}{M}\,\frac{\partial\nu_{\rm BW}}{\partial(\rho/M)}\]
(and similarly for derivatives with respect to $z$ and $r$), we can now add to the decomposition (\ref{lambda-decomp}), on the basis of equations (\ref{lambda_int,eqns}), that $\lambda_{\rm int}$ scales with $M$ and ${\cal M}$ as
\begin{align}
&\lambda_{\rm int}\!
\left(\frac{\rho}{M},\frac{z}{M};\,\frac{b}{M};\,M,{\cal M}\right)= \nonumber \\
&=\frac{{\cal M}}{M}\;\lambda_{\rm int}
\!\left(\frac{\rho}{M},\frac{z}{M};\,\frac{b}{M};\,M=1,{\cal M}=1\right).
\end{align}
Thanks to this property, one can find the $\lambda$-field for a given system (given $M$, ${\cal M}$, $b$) by simple scaling of its form obtained for $M=1$ and ${\cal M}=1$ (and the given $b$).
\subsection{Behaviour on the horizon}
Our main interest is to learn how the external source affects the geometry inside the black hole, which requires extending the metric below the horizon. It will thus be useful to know how the metric functions behave on the horizon. In the Weyl coordinates, the horizon is given by $\rho=0$, $|z|\leq M$. The black-hole potential has a logarithmic divergence there, while the exterior potential is regular,\footnote
{Asterisk / index `H' denote expansions/values valid at the horizon.}
\[\nu_{\rm Schw}\stackrel{*}{=}\ln\frac{\rho}{2\,\sqrt{M^2-z^2}}+O(\rho^2),
\quad
\nu_{\rm BW}\stackrel{\rm H}{=} -\frac{{\cal M}}{\sqrt{z^2+b^2}} \;,\]
so the total potential $\nu=\nu_{\rm Schw}+\nu_{\rm BW}$ expands there as
\begin{equation} \label{nu,expand}
\nu\stackrel{*}{=}\ln\frac{\rho}{2\,\sqrt{M^2-z^2}}-\frac{{\cal M}}{\sqrt{z^2+b^2}}+O(\rho^2) \,,
\end{equation}
which implies, for example,
\begin{align}
\rho^2 e^{-2\nu} &\stackrel{\rm H}{=} 4(M^2-z^2)\exp\left(\frac{2{\cal M}}{\sqrt{z^2+b^2}}\right), \\
\lambda_{,\rho}-\nu_{,\rho} &= \rho(\nu_{,\rho})^2-\rho(\nu_{,z})^2-\nu_{,\rho}\stackrel{*}{=} O(\rho) \,.
\label{lambda,rho-nu,rho}
\end{align}
On any static (in fact even stationary) horizon, $\lambda(z)\stackrel{\rm H}{=} 2\nu(z)-2\nu(z\!=\!M)$ (see e.g. \cite{Will-74}, eq. (24)), therefore, applying this for the total as well as pure-Schwarzschild metric, one finds
\begin{align}
\lambda-\nu &\stackrel{\rm H}{=} \lambda_{\rm Schw}-\nu_{\rm Schw}+\nu_{\rm BW}(z)-2\nu_{\rm BW}(z\!=\!M)\stackrel{\rm H}{=} {}
\nonumber \\
&\stackrel{\rm H}{=} \ln\frac{2M}{\sqrt{M^2-z^2}}-\frac{{\cal M}}{\sqrt{z^2+b^2}}
+\frac{2{\cal M}}{\sqrt{M^2+b^2}} \;. \label{lambda-nu;H}
\end{align}
Using (\ref{lambda,rho-nu,rho}) and (\ref{lambda-nu;H}),
\begin{align}
\lambda-\nu &= (\lambda-\nu)_{\rm H}+\int_0^\rho(\lambda_{,\rho}-\nu_{,\rho})\,{\rm d}\rho
\stackrel{*}{=} {} \nonumber \\
&\stackrel{*}{=} \ln\frac{2M}{\sqrt{M^2-z^2}}-\frac{{\cal M}}{\sqrt{z^2+b^2}}
+\frac{2{\cal M}}{\sqrt{M^2+b^2}}+O(\rho^2) \label{lambda-nu,expand}
\end{align}
and, by subtraction of (\ref{nu,expand}) from (\ref{lambda-nu,expand}),
\begin{equation}
\lambda-2\nu\stackrel{*}{=}\ln\frac{4M}{\rho}+\frac{2{\cal M}}{\sqrt{M^2+b^2}}+O(\rho^2) \,.
\end{equation}
\section{Extension of the metric below horizon}
\label{below-horizon}
The interior of a black hole deformed by an external source is known to remain regular, except for the central singularity which however keeps its point-like character \cite{GerochH-82}. In order to extend the metric explicitly, let us first allow the spheroidal radius $r$ to go below $r=2M$. The Schwarzschild potential $\nu_{\rm Schw}$ acquires the imaginary part ${\rm i}\pi/2$ there, because the lapse squared $e^{2\nu}$ is negative below the horizon. More seriously, the potential induced by the external source has to be continued there, since originally it is not defined in that region at all.
\subsection{External potential inside the black hole}
For $r<2M$, the Weyl radius $\rho=\sqrt{r(r-2M)}\,\sin\theta$ turns pure imaginary, which makes the $l_{1,2}$ distances and the modulus of the $K(k)$ integral complex. However, this need not lead to complex $\nu_{\rm BW}$ since the latter is even in $\rho$, as seen, for example, from the known identity
\[K(k)=\frac{2}{1+k'}\,K\!\left(\frac{1-k'}{1+k'}\right)\]
which in our case ($k'=l_1/l_2$) implies
\begin{equation} \label{K(k)-formula}
-\frac{\pi}{2{\cal M}}\,\nu_{\rm BW}
=\frac{K(k)}{l_2}=\frac{2}{l_2+l_1}\,K\!\left(\frac{l_2-l_1}{l_2+l_1}\right).
\end{equation}
This is symmetrical with respect to the exchange $l_1\leftrightarrow l_2$. But such an exchange is equivalent to the change of the sign of $\rho$, so $\nu_{\rm BW}(\rho)$ is even.
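This identity (a descending Landen transformation) is also easy to confirm numerically; a small sketch with an arbitrary sample modulus (again assuming scipy's parameter convention $m=k^2$):
\begin{verbatim}
# Check of K(k) = 2 K((1-k')/(1+k')) / (1+k') for a sample modulus.
import numpy as np
from scipy.special import ellipk

k = 0.63
kp = np.sqrt(1.0 - k**2)               # complementary modulus
lhs = ellipk(k**2)
rhs = 2.0 / (1.0 + kp) * ellipk(((1.0 - kp) / (1.0 + kp))**2)
print(lhs - rhs)                       # ~ 1e-16
\end{verbatim}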
Now, if $\nu_{\rm BW}$ is even in $\rho$, it should remain real when $\rho$ becomes pure imaginary. However, the behaviour of $K(k)$ for complex $k^2$ involves a feature which leaves this conclusion only partially valid. Let $\rho$ be pure imaginary, $\rho=:{\rm i}\varrho$, where $\varrho>0$. From the explicit form of the modulus
\begin{equation}
k^2=\frac{4b\rho}{(l_2)^2}=\frac{4{\rm i}\,b\varrho}{-\varrho^2+b^2+z^2+2{\rm i}\,b\varrho}
\end{equation}
it is seen that inside the black hole there is a surface $\varrho^2=b^2+z^2$ where $k^2$ is pure real, $k^2=2$. But $K(k)$ has a branch cut along the real axis at $1<k^2<\infty$, so it is discontinuous on the above surface. More specifically, when crossing the cut from the $\Im(k^2)<0$ to the $\Im(k^2)>0$ side (which means from the $\varrho^2>b^2+z^2$ to the $\varrho^2<b^2+z^2$ side of the surface), the integral jumps from $K(k)$ to $K(k)+2{\rm i}\,K(k')$, hence in our case it jumps from $K(\sqrt{2})\doteq 1.311(1-{\rm i})$ to the complex conjugate $K(\sqrt{2})+2{\rm i}\,K({\rm i})\doteq 1.311(1+{\rm i})$. In addition, the same surface also marks the location where $\Re(l_2)=\Im(l_2)$, with $\Re(l_2)<\Im(l_2)$ on its $\varrho^2>b^2+z^2$ side and $\Re(l_2)>\Im(l_2)$ on its $\varrho^2<b^2+z^2$ side.
Due to these two circumstances, the expression $K(k)/l_2$ changes from pure real to pure imaginary when crossing the surface from $\varrho^2<b^2+z^2$ to $\varrho^2>b^2+z^2$.
A possible solution of this issue is offered by the above formula (\ref{K(k)-formula}). Actually, when writing the potential as
\begin{equation} \label{nuBW,alt}
\nu_{\rm BW}
=-\frac{4{\cal M}}{\pi\,(l_2+l_1)}\,K\!\left(\frac{l_2-l_1}{l_2+l_1}\right)
\end{equation}
rather than in the usual form $\nu_{\rm BW}=-2{\cal M}K(k)/(\pi l_2)$, it is real for both real and imaginary $\rho$, it smoothly crosses the horizon and coincides with the original form in the outer region.
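This behaviour is easy to verify numerically: with $\rho={\rm i}\varrho$, the distances $l_1$ and $l_2$ become complex conjugates, $(l_2-l_1)/(l_2+l_1)$ is purely imaginary, hence the corresponding parameter $m=k^2$ is real and non-positive, and (\ref{nuBW,alt}) stays real on both sides of the surface $\varrho^2=b^2+z^2$. A minimal sketch (illustrative parameters; mpmath's \texttt{ellipk} is used since it accepts an arbitrary parameter):
\begin{verbatim}
# Sketch: the symmetric form of nu_BW evaluated inside the horizon,
# where rho = i*varrho; the result is real and reduces to
# -Mring/sqrt(b^2+z^2) on the axis varrho = 0.
import mpmath as mp

def nu_BW_in(varrho, z, b=1.0, Mring=1.0):
    l1 = mp.sqrt((1j * varrho - b)**2 + z**2)
    l2 = mp.sqrt((1j * varrho + b)**2 + z**2)
    m = ((l2 - l1) / (l2 + l1))**2     # real and <= 0 (up to rounding)
    val = -4.0 * Mring / (mp.pi * (l1 + l2)) * mp.ellipk(m)
    return complex(val).real           # imaginary part ~ rounding error

print(nu_BW_in(1e-12, 0.5))            # -> -1/sqrt(1.25) = -0.8944...
print(nu_BW_in(2.0, 0.5))              # real also where varrho^2 > b^2+z^2
\end{verbatim}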
Interior solution -- in particular in the region $\varrho^2>b^2+z^2$ where direct extension of the original exterior potential to imaginary $\rho$ did not bring a real result -- can also be checked by returning to the field equations and by solving them once again for $\rho=:{\rm i}\varrho$. The equations then read
\begin{align}
&\nu_{,\varrho\varrho}+\frac{\nu_{,\varrho}}{\varrho}-\nu_{,zz}=0 \,, \\
&\lambda_{,\varrho}=\varrho(\nu_{,\varrho})^2+\varrho(\nu_{,z})^2 \,, \quad
\lambda_{,z}=2\varrho\,\nu_{,\varrho}\nu_{,z} \,, \label{E-equations,inside}
\end{align}
so in comparison with (\ref{Einstein-eqs}) there appear sign changes in the first two equations. In particular, the first equation is the wave equation in the ``interior meridional plane" $(\varrho,z)$. Its solution, appropriate for our situation, is given by infinite series involving the Legendre functions $P_{n-1/2}\,$:
\begin{align*}
\nu_{\rm BW}^{\rm in}=
&-\frac{{\cal M}}{\sqrt{b^2+\varrho^2}} \,\times \\
&\times \sum\limits_{n=0}^\infty
\frac{(-1)^n(2n)!}{2^{2n}(n!)^2}\,
P_{n-\frac{1}{2}}\!\left(\frac{b^2-\varrho^2}{b^2+\varrho^2}\right)
\frac{z^{2n}}{(b^2+\varrho^2)^n} \;.
\end{align*}
This sum is really an expansion of (\ref{nuBW,alt}) valid inside the horizon.\footnote
{However, it only converges uniformly within $z^2<b^2+\varrho^2$, elsewhere the convergence is just point-wise and slow.}
In particular, on the horizon (more precisely, on the whole axis $\varrho=0\Leftrightarrow\rho=0$) it correctly yields
\begin{align}
\nu_{\rm BW}^{\rm in}(\varrho=0)
&=-\frac{{\cal M}}{\sqrt{b^2+z^2}}=\nu_{\rm BW}(\rho=0) \,,\\
\left.\frac{\partial\nu_{\rm BW}^{\rm in}}{\partial\varrho}\right|_{\varrho=0}
&= 0 =\left.\frac{\partial\nu_{\rm BW}}{\partial\rho}\right|_{\rho=0} \,,\\
\left.\frac{\partial^2\nu_{\rm BW}^{\rm in}}{\partial\varrho^2}\right|_{\varrho=0}
&=\frac{{\cal M}}{2}\,\frac{b^2-2z^2}{(b^2+z^2)^{5/2}}
=-\left.\frac{\partial^2\nu_{\rm BW}}{\partial\rho^2}\right|_{\rho=0}.
\end{align}
An example of the ring-potential behaviour inside the black hole is given in figure \ref{nuBW-plot,inside}.
\subsection{Function $\lambda$ on the axis and at the horizon}
The last function needed in order to complete the metric (\ref{metric}) is $\lambda_{\rm ext}\equiv\lambda-\lambda_{\rm Schw}$. Its extension below the horizon is given by field equations (\ref{E-equations,inside}) which can be rewritten for $\lambda_{\rm ext}$ as
\begin{align}
\lambda_{{\rm ext},\varrho}
&\equiv \lambda_{,\varrho}-\lambda_{{\rm Schw},\varrho}
= \varrho(\nu_{,\varrho})^2+\varrho(\nu_{,z})^2-\lambda_{{\rm Schw},\varrho}= \nonumber \\
&= \varrho\big[(\nu_{{\rm BW},\varrho})^2+(\nu_{{\rm BW},z})^2 \nonumber \\
&\qquad +2\nu_{{\rm Schw},\varrho}\nu_{{\rm BW},\varrho}
+2\nu_{{\rm Schw},z}\nu_{{\rm BW},z}\big],
\label{lambda_ext,varrho} \\
\lambda_{{\rm ext},z}
&\equiv \lambda_{,z}-\lambda_{{\rm Schw},z}
= 2\varrho\,\nu_{,\varrho}\nu_{,z}-\lambda_{{\rm Schw},z}= {} \nonumber \\
&= 2\varrho\left(\nu_{{\rm Schw},\varrho}\nu_{{\rm BW},z}+\nu_{{\rm Schw},z}\nu_{{\rm BW},\varrho}
+\nu_{{\rm BW},\varrho}\nu_{{\rm BW},z}\right).
\label{lambda_ext,z}
\end{align}
Transforming to the Schwarzschild-type coordinates,
\[\varrho=\sqrt{r(2M-r)}\,\sin\theta \,, \qquad z=(r-M)\cos\theta \,,\]
while now using
\begin{align}
X_{,r} &= X_{,\varrho}\varrho_{,r}+X_{,z}z_{,r} \nonumber \\
&= X_{,\varrho}\;\frac{M-r}{\sqrt{r(2M-r)}}\,\sin\theta+
X_{,z}\cos\theta \,, \nonumber \\
X_{,\theta} &= X_{,\varrho}\varrho_{,\theta}+X_{,z}z_{,\theta} \nonumber \\
&= X_{,\varrho}\,\sqrt{r(2M-r)}\,\cos\theta+
X_{,z}(M-r)\sin\theta \,, \nonumber \\
\nu_{{\rm Schw},\varrho} &= \frac{(d_1+d_2)\,[4M^2-(d_2-d_1)^2]}
{8M\varrho\;d_1 d_2} \\
&= \frac{M(r-M)\sin\theta}
{[(r-M)^2-M^2\cos^2\theta]\,\sqrt{r(2M-r)}} \;,
\label{nu_Schw,varrho} \\
\nu_{{\rm Schw},z} &= \frac{d_2-d_1}{2\,d_1 d_2}
= \frac{M\cos\theta}{(r-M)^2-M^2\cos^2\theta}
\label{nu_Schw,z}
\end{align}
(these formulas are the same as (\ref{nuSchw,rho})--(\ref{nuSchw,z}) valid outside, only $\rho$ is changed for $\varrho$),
the equations assume the form
\begin{align}
\lambda_{{\rm ext},r}&=
\frac{2\nu_{{\rm BW},\varrho}\sin\theta}{\sqrt{r(2M-r)}}\,
\left[\,r(2M-r)\,\nu_{{\rm BW},z}\cos\theta-M\right]
\nonumber \\ & {}
+(M-r)\left[(\nu_{{\rm BW},\varrho})^2+(\nu_{{\rm BW},z})^2\right]\sin^2\theta \;, \\
\lambda_{{\rm ext},\theta}&=
2\nu_{{\rm BW},z}\sin\theta \;\times \nonumber \\
&\quad\times
\left[(M\!-\!r)\sqrt{r(2M\!-\!r)}\,\nu_{{\rm BW},\varrho}\sin\theta\!-\!M\right]
\nonumber \\ & {}
+r(2M-r)\left[(\nu_{{\rm BW},\varrho})^2+(\nu_{{\rm BW},z})^2\right]\sin\theta\cos\theta \;.
\end{align}
Note that in Schwarzschild coordinates all the expressions are ``ready to use", whereas if using Weyl coordinates (below the horizon), one has to choose the signs of $d_1$ and $d_2$ properly (``by hand") -- see Appendix \ref{appendix-B}.
The first of these reduces, for $\sin\theta=0$, just to
\begin{equation}
\left(\lambda_{{\rm ext},r}\right)_{\sin\theta=0}=0 \,,
\end{equation}
hence the $\lambda_{\rm ext}$ function is constant along the $\sin\theta=0$ axis. Given that on the Weyl axis ($\rho=0$, $|z|>M$) one has $\lambda=\lambda_{\rm Schw}=\lambda_{\rm ext}=0$ ($z={\rm const}$ surfaces are required to be regular there), one thus finds that
\begin{equation}
\left(\lambda_{\rm ext}\right)_{\sin\theta=0}
=\left(\lambda_{\rm Schw}\right)_{\sin\theta=0}
=0
\end{equation}
holds {\em everywhere} on the (Schwarzschild) axis, including the black-hole interior.
Notice now that the second equation for $\lambda_{\rm ext}$ reduces to the same relation at the singularity $r=0$ and on the horizon $r=2M$,
\begin{align}
\left(\lambda_{{\rm ext},\theta}\right)_{r=0}
&= -2M\left(\nu_{{\rm BW},z}\right)_{r=0}\sin\theta \,, \\
\left(\lambda_{{\rm ext},\theta}\right)_{r=2M}
&= -2M\left(\nu_{{\rm BW},z}\right)_{r=2M}\sin\theta \,.
\end{align}
But $\varrho(r\!=\!2M)=0=\varrho(r\!=\!0)$, $z(r\!=\!2M)=M\cos\theta=-z(r\!=\!0)$ and $\nu_{\rm BW}$ is even in $z$ (hence $\nu_{{\rm BW},z}$ is odd in $z$), so we have
\[\left(\nu_{\rm BW}\right)_{r=0}=\left(\nu_{\rm BW}\right)_{r=2M}, \quad
\left(\nu_{{\rm BW},z}\right)_{r=0}=-\left(\nu_{{\rm BW},z}\right)_{r=2M}\]
and, therefore,
\begin{equation}
\left(\lambda_{{\rm ext},\theta}\right)_{r=0}=-\left(\lambda_{{\rm ext},\theta}\right)_{r=2M} \,,
\end{equation}
namely the latitudinal dependence of $\lambda_{\rm ext}$ is just opposite at the singularity and on the horizon.
However, on the horizon we have $\lambda(\theta)\stackrel{\rm H}{=} 2\nu(\theta)-2\nu(\theta\!=\!0)$ for the total metric as well as for pure Schwarzschild, so the same must also hold for $\lambda_{\rm ext}\equiv\lambda-\lambda_{\rm Schw}$, hence
\begin{align}
&\left(\lambda_{\rm ext}\right)_{r=0}=-\left(\lambda_{\rm ext}\right)_{r=2M}= {} \nonumber \\
&= -2\nu_{\rm BW}(r\!=\!2M)+2\nu_{\rm BW}(r\!=\!2M,\theta\!=\!0)= {} \nonumber \\ {}
&= \frac{2{\cal M}}{\sqrt{M^2\cos^2\theta+b^2}}-\frac{2{\cal M}}{\sqrt{M^2+b^2}} \;.
\end{align}
Note that the ``duality" between the horizon and the singularity was already observed by \cite{FrolovS-07}.
\subsection{Function $\lambda$ inside the black hole}
It has thus been possible to find $\lambda$ along the $\sin\theta=0$ axis and on the horizon. One would however like to know its behaviour everywhere inside the black hole. For such a purpose, it has proved advantageous to subtract equations (\ref{lambda_ext,varrho}), (\ref{lambda_ext,z}) and rewrite the result
\[\lambda_{{\rm ext},\varrho}\mp\lambda_{{\rm ext},z}
=\varrho\,(\nu_{,\varrho}\mp\nu_{,z})^2
-\varrho\,(\nu_{{\rm Schw},\varrho}\mp\nu_{{\rm Schw},z})^2\]
in terms of the derivatives
\[\frac{\partial}{\partial\eta_\mp}
:=\frac{\partial}{\partial\varrho}\mp\frac{\partial}{\partial z} \;:\]
\begin{align}
\lambda_{{\rm ext},\eta_\mp}
&=\varrho\left[(\nu_{,\eta_\mp})^2-(\nu_{{\rm Schw},\eta_\mp})^2\right]= \nonumber \\
&=\varrho\left[(\nu_{{\rm Schw},\eta_\mp}+\nu_{{\rm ext},\eta_\mp})^2
-(\nu_{{\rm Schw},\eta_\mp})^2\right]= \nonumber \\
&=\varrho\,\nu_{{\rm ext},\eta_\mp}
\left(2\nu_{{\rm Schw},\eta_\mp}+\nu_{{\rm ext},\eta_\mp}\right), \label{lambda_ext,eta}
\end{align}
where, from (\ref{nu_Schw,varrho}) and (\ref{nu_Schw,z}),
\begin{align}
&\nu_{{\rm Schw},\eta_\mp}
= 2M\;
\frac{d_1\left[\varrho\mp(z+M)\right]+d_2\left[\varrho\mp(z-M)\right]}
{d_1 d_2\left[(d_1+d_2)^2-4M^2\right]} = \nonumber \\
&\quad = -\frac{M}{2r(2M-r)}\left[\frac{\varrho\mp(z+M)}{d_2}+\frac{\varrho\mp(z-M)}{d_1}\right].
\end{align}
Given that
\begin{align*}
d_{1,2}&=\sqrt{(z\mp M)^2+\rho^2}
=\sqrt{(z\mp M)^2-\varrho^2}= \\
&=\sqrt{(z\mp M+\varrho)(z\mp M-\varrho)} \;,
\end{align*}
the above can also be written
\begin{align*}
&\nu_{{\rm Schw},\eta_\mp}= \\
&=\pm\frac{M}{2r(2M-r)}
\left(\sqrt{\frac{z+M\mp\varrho}{z+M\pm\varrho}}
+\sqrt{\frac{z-M\mp\varrho}{z-M\pm\varrho}}\right).
\end{align*}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{null-geodesics-inside.eps}
\caption
{An elegant pattern of null geodesics just tangent to the horizon (red) and spanning the black-hole interior, in the Schwarzschild coordinates given by $r\!=\!M\left[1\pm\cos(\theta\!-\!\theta_0)\right]$ and depicted in $r\sin\theta$, $r\cos\theta$ axes (scaled by $M$); the geodesics given by $\theta_0=0$ and $\theta_0=\pi$ are emphasized (blue). We proceed along these characteristics when integrating the Einstein equations inside the horizon.}
\label{null-geodesics-inside}
\end{figure}
Equation (\ref{lambda_ext,eta}) has now to be integrated toward the black-hole interior.
This is best performed along the family of curves given by
\[d_{1,2}=0 \quad \Leftrightarrow \quad \varrho=|z\mp M|
\quad \Leftrightarrow \quad r=M(1\pm\cos\theta) \,,\]
namely
\begin{align}
&r=M\left[1\pm\cos(\theta-\theta_0)\right], \quad
\theta\in\langle \theta_0,\theta_0+\pi\rangle \,, \label{null-geodesics} \\
&{\rm where} \quad
\theta_0={\rm const}\in\langle 0,\pi\rangle \,. \nonumber
\end{align}
These curves are null geodesics starting tangentially to the horizon and descending toward the central singularity (see figure \ref{null-geodesics-inside} and appendix \ref{appendix-A}); they represent characteristics of the Einstein equations.
Multiplying equations (\ref{lambda_ext,eta}) by the tangent vector ${\rm d}\eta_\mp/{\rm d}\sigma$ of the respective curves, where $\sigma$ is some parameter, one obtains an ordinary differential equation suitable for integration,
\begin{equation}
\frac{{\rm d}\lambda_{\rm ext}}{{\rm d}\sigma}
= \varrho\,\nu_{{\rm ext},\eta_\mp}\,
\frac{{\rm d}(2\nu_{\rm Schw}+\nu_{\rm ext})}{{\rm d}\sigma}
\;.
\end{equation}
The main benefit of the latter is that it no longer contains $\nu_{{\rm Schw},\eta_\mp}$ (which does not behave nicely below the horizon).
However, the formulation we have found the most advantageous still requires one more transformation.
\subsection{Horizon angles fixed by characteristics and a trapezoid rule}
It is seen in figure \ref{null-geodesics-inside} that any two null geodesics which ``counter-inspiral" (with respect to each other) from the horizon to the singularity intersect at a certain point $(r,\theta)$ inside the black hole. Let us denote by $\theta_+$ and $\theta_-$ the angles on the horizon from where these geodesics start, assuming $0<\theta_-<\theta_+<\pi$, and make the transformation
\begin{align*}
&r=M\left(1+\cos\frac{\theta_+-\theta_-}{2}\right),
\quad
\theta=\frac{\theta_++\theta_-}{2} \;, \\
&\varrho=\frac{M}{2}\,(\cos\theta_--\cos\theta_+) \,,
\quad
z=\frac{M}{2}\,(\cos\theta_-+\cos\theta_+) \,.
\end{align*}
In terms of these angles, the metric reads (notice that it is no longer diagonal)
\begin{align}
{\rm d}s^2=&-\left(1-\frac{2M}{r}\right)e^{2\nu_{\rm ext}}{\rm d}t^2 \nonumber \\
&+r^2 e^{-2\nu_{\rm ext}}
(e^{2\lambda_{\rm ext}}{\rm d}\theta_-{\rm d}\theta_++\sin^2\theta\,{\rm d}\phi^2)\,,
\label{metric,theta12}
\end{align}
where $r=r(\theta_-,\theta_+)$, and Einstein equations have the form
\begin{align}
& 2(\cos\theta_-\!-\!\cos\theta_+)\,\frac{\partial^2\nu_{\rm ext}}{\partial\theta_-\partial\theta_+}
=\frac{\partial\nu_{\rm ext}}{\partial\theta_+}\,\sin\theta_-
\!-\!\frac{\partial\nu_{\rm ext}}{\partial\theta_-}\,\sin\theta_+ \,, \\
& \frac{\partial\lambda_{\rm ext}}{\partial\theta_-}\,\sin\theta_-
=\left[2\sin\theta\!-\!(\cos\theta_-\!-\!\cos\theta_+)\,\frac{\partial\nu_{\rm ext}}{\partial\theta_-}\right]
\frac{\partial\nu_{\rm ext}}{\partial\theta_-} \;, \label{lambda,theta-} \\
& \frac{\partial\lambda_{\rm ext}}{\partial\theta_+}\,\sin\theta_+
=\left[2\sin\theta\!+\!(\cos\theta_-\!-\!\cos\theta_+)\,\frac{\partial\nu_{\rm ext}}{\partial\theta_+}\right]
\frac{\partial\nu_{\rm ext}}{\partial\theta_+} \;. \label{lambda,theta+}
\end{align}
To solve the first equation, it is sufficient to know the axis values $\nu_{\rm ext}(0,z)$,
\begin{equation} \label{nuext,integral}
\nu_{\rm ext}(\varrho,z)=
\frac{1}{\pi}\int\limits_0^\pi \nu_{\rm ext}(0,z+\varrho\cos\alpha)\,{\rm d}\alpha \,.
\end{equation}
This integral can be calculated using a simple trapezoid rule. Actually, for a function having the same odd derivatives with respect to the integration variable at the end points of the integration interval (which is the case for our $\nu_{\rm ext}(0,z+\varrho\cos\alpha)$), the error of this scheme falls exponentially with the number of discretization points (see e.g. \cite{Numerical-Recipes}, chapter 4).
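A minimal sketch of this quadrature, with the ring's axis values $\nu_{\rm ext}(0,z)=-{\cal M}/\sqrt{b^2+z^2}$ as input data (the number of nodes is an illustrative choice):
\begin{verbatim}
# Mean-value integral for nu_ext by the trapezoid rule, fed with the
# axis data nu_ext(0,z) = -Mring/sqrt(b^2+z^2) of the ring.
import numpy as np

def nu_axis(z, b=1.0, Mring=1.0):
    return -Mring / np.sqrt(b**2 + z**2)

def nu_in(varrho, z, n=64):
    alpha = np.linspace(0.0, np.pi, n + 1)
    h = np.pi / n
    f = nu_axis(z + varrho * np.cos(alpha))
    return h * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1]) / np.pi

print(nu_in(0.7, 0.3))   # agrees with the closed-form continuation above
\end{verbatim}
Already a few tens of nodes reproduce the closed form of the previous subsection essentially to machine precision, in line with the exponential convergence just mentioned.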
In order to find $\lambda_{\rm ext}$, we have solved, instead of equations (\ref{lambda,theta-}) and (\ref{lambda,theta+}) themselves, their integrability condition
\begin{equation}
\frac{\partial^2\lambda_{\rm ext}}{\partial\theta_-\partial\theta_+}=
\frac{M(\nu_{{\rm ext},\theta_+}-\nu_{{\rm ext},\theta_-})}{2\,\sqrt{r(2M-r)}}
-\nu_{{\rm ext},\theta_+}\nu_{{\rm ext},\theta_-} \,.
\end{equation}
Using a reversible discretization scheme which respects propagation of the boundary conditions along characteristics (like in numerical treatment of the wave equation), one obtains very precise results, mainly thanks to a regular behaviour everywhere inside the black hole (including the shells where $d_{1,2}=0$).
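To indicate the type of scheme meant here, the following toy sketch (an analogue only, not the actual system solved in this paper) marches a Goursat-type problem $u_{,xy}=-u$, with data $u=1$ on the two characteristics $x=0$ and $y=0$ and exact solution $J_0(2\sqrt{xy})$, on a characteristic grid, exactly as the integrability condition above is marched on the $(\theta_-,\theta_+)$ grid:
\begin{verbatim}
# Second-order characteristic marching for u_xy = -u; the global error
# decays as h^2 (roughly 1e-5 for the resolution chosen here).
import numpy as np
from scipy.special import j0

n, L = 200, 1.0
h = L / n
u = np.ones((n + 1, n + 1))        # data on the characteristics x=0, y=0
for i in range(n):
    for j in range(n):
        mid = 0.5 * (u[i + 1, j] + u[i, j + 1])   # cell-centre estimate
        u[i + 1, j + 1] = u[i + 1, j] + u[i, j + 1] - u[i, j] - h * h * mid

x = np.linspace(0.0, L, n + 1)
print(np.abs(u - j0(2.0 * np.sqrt(np.outer(x, x)))).max())
\end{verbatim}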
The language of $\theta_-$ and $\theta_+$ angles is also advantageous for the Kretschmann scalar:
in a vacuum, the Riemann tensor has 3 independent components which satisfy
\begin{align}
&{R^{t\theta_-}}_{t\theta_-}={R^{t\theta_+}}_{t\theta_+}
={R^{\phi\theta_-}}_{\phi\theta_-}={R^{\phi\theta_+}}_{\phi\theta_+}= \nonumber \\
&\qquad =-\frac{1}{2}\,{R^{t\phi}}_{t\phi}
=-\frac{1}{2}\,{R^{\theta_-\theta_+}}_{\theta_-\theta_+} \;, \nonumber \\
&{R^{t\theta_-}}_{t\theta_+}=-{R^{\phi\theta_-}}_{\phi\theta_+} \,, \quad
{R^{t\theta_+}}_{t\theta_-}=-{R^{\phi\theta_+}}_{\phi\theta_-} \,,
\end{align}
and in terms of which the Kretschmann invariant reads just
\begin{equation}
K=12\,({R^{\theta_-\theta_+}}_{\theta_-\theta_+})^2+
16\,{R^{t\theta_-}}_{t\theta_+}{R^{t\theta_+}}_{t\theta_-} \;.
\end{equation}
\section{Potential, field and curvature inside and outside black hole}
\label{invariants}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{nuBW-radius.eps}
\caption
{Meridional-plane contours of the BW-ring potential $\nu_{\rm BW}$ inside a black hole, plotted for a ring of mass ${\cal M}=M$ and of different Weyl radii $b$ (given in the plots). The plots are drawn in Schwarzschild-type coordinates, so they are spherical and symmetric with respect to the equatorial plane (where the ring is placed) indicated by the green dashed line, as well as with respect to the axis indicated by the dot-dashed blue line. Higher/lower values correspond to brown/green colour.}
\label{nuBW-plot,inside}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{lapse-mass.eps}
\caption
{Meridional-plane contours of lapse $N$ (or of potential $\nu$) inside a black hole surrounded by a BW ring with radius $b\!=\!M$ and of different masses ${\cal M}$ (given in the plots). The plots are drawn in Schwarzschild-type coordinates, so they are spherical and symmetric with respect to the equatorial plane (where the ring is placed) indicated by the green dashed line, as well as with respect to the axis indicated by the dot-dashed blue line. Note that on the horizon $N$ vanishes.}
\label{N-mass}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{lapse-radius.eps}
\caption
{Meridional-plane contours of lapse $N$ (or of potential $\nu$) inside a black hole surrounded by a BW ring of mass ${\cal M}\!=\!M$ and of different Weyl radii $b$ (given in the plots). Meaning of the plots is the same as in figure \ref{N-mass}. Inside the black hole (originally spherically symmetric), local minima (more green) and maxima (more brown) clearly develop due to the surrounding ring. In the axial region they are of spheroidal shape, while in the equatorial region they are toroidal.}
\label{N-radius}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{kappa2-mass.eps}
\caption
{Meridional-plane contours of gravitational acceleration (squared) $\kappa^2$ inside a black hole surrounded by a BW ring with radius $b\!=\!M$ and of different masses ${\cal M}$ (given in the plots). Meaning of the plots is the same as in previous figures showing lapse. Again brown/green indicates higher/lower value. Drawn in blue with red boundaries are the regions of {\em negative} values, where ``acceleration of a static observer" is time-like. On the horizon $\kappa$ assumes a uniform value.}
\label{kappa-mass}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{kappa2-radius.eps}
\caption
{Meridional-plane contours of gravitational acceleration (squared) $\kappa^2$ inside a black hole surrounded by a BW ring of mass ${\cal M}\!=\!0.2M$ and of different Weyl radii $b$ (given in the plots). Meaning of the plots is the same as in previous figures. The blue regions of time-like ``acceleration" again develop into quite a complicated arrangement (the quotation marks are a reminder that below the horizon this quantity does not have its usual sense).}
\label{kappa-radius}
\end{figure*}
As in the first paper on the Majumdar--Papapetrou black-hole binary, we illustrate the space-time geometry by the behaviour of the simplest invariants given by the metric and its first and second derivatives. Here, however, we deal with a {\em vacuum} solution, so the Ricci tensor is zero and it makes no sense to study its quadratic scalar. We will thus consider the lapse function $N=e^\nu$, the gravitational acceleration $\kappa$ given by $\kappa^2=g^{\mu\nu}N_{,\mu}N_{,\nu}$ and the Kretschmann scalar $K=R_{\mu\nu\kappa\lambda}R^{\mu\nu\kappa\lambda}$.
Since all the configurations are static, axially symmetric and reflectionally symmetric with respect to the ``equatorial plane" (the one in which the ring is placed), we show their properties on meridional plots with Schwarzschild coordinates $(r\sin\theta,r\cos\theta)$ (in which the horizon is a sphere on $r=2M$). In all the figures, a ``geographic" colouring is used, with brown/green indicating higher/lower positive values and light/dark blue indicating smaller/greater depths.
In figure \ref{nuBW-plot,inside}, the ring potential $\nu_{\rm BW}$ alone is shown first inside the black hole, for ${\cal M}=M$ and several different ring radii. The potential shows nothing special at the horizon and is also regular everywhere below it. However, the figure shows that inside the black hole it propagates\footnote
{This verb reminds one that the black-hole interior is a {\em dynamical} domain.}
in a non-trivial manner.
The lapse-function $N\!=\!\sqrt{(2M/r)-1}\;e^{\nu_{\rm BW}}$ contours inside the horizon are shown, for sequences of black-hole--ring space-times, in figures \ref{N-mass} (fixed ring radius $b=M$, increasing mass) and \ref{N-radius} (fixed ring mass ${\cal M}=M$, decreasing radius). Their shapes clearly follow the behaviour of the external, ring potential. The colouring is not so ``attractive" as in the following figures of acceleration and curvature, simply because the values of $N$ are not so extreme; they only fall to zero (dark green) on the horizon (very suddenly) and, interestingly, also at certain locations on the axis.
The gravitational-acceleration level contours are shown in figures \ref{kappa-mass} and \ref{kappa-radius}. The pattern is rather different from that of potential/lapse, involving quite a complicated arrangement of regions where $\kappa^2$ is negative (drawn in blue). This means that the gradient of lapse, which {\em outside} the horizon determines acceleration of static observers (those at rest with respect to infinity)\footnote
{Let us also recall that at the horizon $\kappa$ is known as {\it surface gravity} and that over stationary horizons it is constant.}
and is everywhere space-like there, becomes time-like in some interior zones if the ring is sufficiently ``strong".
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{kappa2-mass-out.eps}
\caption
{Meridional-plane contours of $\kappa^2$ outside a black hole surrounded by a BW ring (red dots) with radius $b\!=\!5M$ (Schwarzschild radius $\doteq 6.1M$) and of different masses ${\cal M}$ (given in the plots). The white circular region (of radius $r\!=\!2M$) is the black hole, which indicates scale of the plots. The most distinct feature is the quite sharp zero (unstable-equilibrium ring) between the ring and the hole, shifting towards the horizon with gradual increase of the ring mass. On the horizon $\kappa\!=\!{\rm const}$.}
\label{kappa-mass-out}
\end{figure*}
We have not taken much notice of the geometry {\em outside} the horizon, mainly focusing on deformation of the interior. At the level of potential ($\nu$, or the lapse function $N$), it would be rather superfluous to present figures of the exterior, because these simply correspond to the Newtonian potential of a finite rod, surrounded, symmetrically, by a thin ring and transformed from the Weyl to the Schwarzschild coordinates. The level of field (acceleration) may already be more interesting, since it does not follow from the Newtonian picture by a mere transformation from cylindrical to spherical coordinates. In figure \ref{kappa-mass-out}, we thus show how the acceleration ($\kappa^2$) field changes with the mass of the ring when the latter is placed at $b=5M$ (which corresponds to $r\doteq 6.1M$ in terms of the Schwarzschild radius). No surprise is seen, in particular, no intriguing structure along the axis; the main feature is the ring of unstable equilibrium (zero acceleration) between the ring and the horizon, gradually shifting from the former to the latter while the ring mass is being increased.
\subsection{The Kretschmann scalar}
\label{Kretschmann}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Kretschmann-mass.eps}
\caption
{Meridional-plane contours of the Kretschmann scalar inside a black hole surrounded by a BW ring with radius $b\!=\!M$ and of different masses ${\cal M}$ (given in the plots). Meaning of the plots is the same as in previous figures. The scalar is {\em negative} in the regions drawn in blue with red boundary; it is seen that these regions need not always touch the horizon.}
\label{Kretschmann-mass}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Kretschmann-radius.eps}
\caption
{Meridional-plane contours of the Kretschmann scalar inside a black hole surrounded by a BW ring with mass ${\cal M}\!=\!0.2M$ and of different Weyl radii $b$ (given in the plots). Meaning of the plots is the same as in previous figures. The negative-value regions (blue with red border) form a complicated structure, mainly around the central singularity.}
\label{Kretschmann-radius}
\end{figure*}
Finally we turn to the Kretschmann invariant. In the preceding paper \cite{SemerakB-16} (equation (33)), we used the Weyl-coordinate expression to compute it. This time, specifically in the dynamical region inside the black hole, the Schwarzschild-coordinate form is more suitable, for computation as well as for interpretation. It is quite similar,
\begin{align}
\!\!\!K
&= 8\left[({R^{tr}}_{tr})^2\!+\!({R^{t\theta}}_{t\theta})^2\!+\!({R^{t\phi}}_{t\phi})^2
\!-\frac{2\,({R^{tr}}_{t\theta})^2}{r(2M-r)}\right] \label{K-Schw} \\
&= \frac{8e^{4\nu-4\lambda}}{r^6} \;\times {} \nonumber \\
& \quad\times
\left[(\tilde{R}^{tr}{}_{tr})^2\!+\!(\tilde{R}^{t\theta}{}_{t\theta})^2
\!+\!(\tilde{R}^{t\phi}{}_{t\phi})^2
\!-\frac{2r\,(\tilde{R}^{tr}{}_{t\theta})^2}{2M-r}\right], \nonumber
\end{align}
where we have denoted ($j=r,\theta,\phi$, no summation)
\[\tilde{R}^{tj}{}_{tj}:=r^3 e^{2\lambda-2\nu}{{R}^{tj}}_{tj} \;,
\quad
\tilde{R}^{tr}{}_{t\theta}:=r^2 e^{2\lambda-2\nu}{{R}^{tr}}_{t\theta} \;.\]
Several simple observations:
\begin{itemize}
\item
For pure Schwarzschild, one has $e^{4\nu-4\lambda}=1$, $\tilde{R}^{tr}{}_{tr}=2M$, $\tilde{R}^{t\theta}{}_{t\theta}=M$, $\tilde{R}^{t\phi}{}_{t\phi}=M$ and $\tilde{R}^{tr}{}_{t\theta}=0$, which yields $K=48M^2/r^6$ correctly.
\item
$K$ is fully determined by the ``electric-type" tidal field, as expected in a static space-time. The first three components are related by vacuum Einstein equations, ${{R}^{tk}}_{tk}=R^t_t=0$ (summation over $k$).
\item
$K>0$ everywhere outside the black hole ($r>2M$). Inside, it can only become negative due to the $R^{tr}{}_{t\theta}$ component, i.e. the off-diagonal component of the ``electric" tidal field (which vanishes in pure Schwarzschild).
\item
As the external-source potentials are regular at $r=0$, the strong singularity of this central point is not altered by them.
\end{itemize}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Kretschmann-mass-out.eps}
\caption
{Meridional-plane contours of the Kretschmann scalar outside a black hole surrounded by a BW ring (red dots at divergent maximum of the invariant) with radius $b\!=\!5M$ ($r\doteq 6.1M$) and of different masses ${\cal M}$ (given in the plots). The white circular region (of radius $r\!=\!2M$) is the black hole, which indicates scale of the plots. Notice mainly the ring-shaped minima rising from the BW ring, approaching the axis and then splitting into one receding and one approaching the horizon.}
\label{Kretschmann-mass-out}
\end{figure*}
As opposed to the lapse and gravitational acceleration, the Kretschmann scalar requires knowing ``the second" metric function $\lambda$ which has to be found numerically by integrating Einstein's equations. Outside the black hole, one standardly integrates along some line through vacuum, starting from the axis (where $\lambda=0$). The results are illustrated in figure \ref{Kretschmann-mass-out} (ring at $b=5M$, or $r\doteq 6.1M$, sequence showing dependence on the ring mass). Besides the saddle ring, expected between the Bach-Weyl ring and the horizon (very slowly shifting towards the horizon with increasing ring mass), quite an interesting feature can be seen off the equatorial plane: with the ring mass increasing from zero, a ring-shaped minimum rises from the ring and goes ``up" (and also down, symmetrically, of course) while deepening and shrinking in radius; for a certain ring mass, it shrinks to the very axis, and then splits into two profound minima, of which one continues to recede along the axis, while the second approaches the horizon ``northern pole".
However, we have mainly focused on the black-hole interior again. In order to determine $\lambda$ by integration of the field equations, we have followed there the characteristics given by null geodesics starting tangentially to the horizon, as described at the end of section \ref{below-horizon}. The results are shown in figures \ref{Kretschmann-mass} (dependence on the ring mass) and \ref{Kretschmann-radius} (dependence on the ring's Weyl radius). Both sequences show that the curvature inside the horizon is influenced considerably. Typically, with increasing strength of the perturbation due to the ring, regions of negative Kretschmann scalar appear and develop in a non-trivial manner; we draw them in blue and indicate their borders by red lines. Let us stress that the masses and radii chosen are out of any astrophysically realistic range, mainly in order to see how the interior curvature behaves under extremely strong perturbations. (The latter may only apply to a system of a black hole surrounded by a very compact, neutron torus, which might occur -- very temporarily -- during the collapse of a compact binary.)
\begin{figure*}
\centering
\includegraphics[width=0.75\textwidth]{Gauss-curvature.eps}
\caption
{Curves showing zeros of the horizon's Gauss curvature within the $(b,\theta)$ plane, in dependence on the mass ${\cal M}$ of the Bach-Weyl ring. As this mass is increased from zero, the zero curve expands from just a tiny loop at $b\rightarrow 0$, $\theta\rightarrow\pi/2$ toward the bottom and the right. Specifically, the curves shown correspond to ${\cal M}/M=0.001$, 0.01, 0.04, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.13, 1.30, 1.55, 2.0, 2.9, 5.0, 10, 25, 100, as indicated in the plot.
The figure is to be used as follows: choose the ring's Weyl radius $b$ and mass ${\cal M}$; then, the Gauss curvature of the horizon is negative at latitudes $\theta$ where the given $b={\rm const}$ line lies to the left of the (black) curve corresponding to the chosen ${\cal M}$; where it lies to the right of the latter, the Gauss curvature is positive. It is seen that for any given $b$, there always exists a certain minimal value of ${\cal M}$ from which there appears a region of negative Gauss curvature on the horizon. If $b\geq M$, such a region starts spreading from the axis ($\theta=0$), whereas if $b<M$, it opens out from some {\em non-axial} latitude; this latitude shifts toward the equatorial plane with $b$ decreasing from $M$ to zero. The locations where, for a given $b$, the Gauss curvature vanishes first (when increasing the mass) and then gets negative, are marked by the solid green line. It is also seen -- one verifies the exact value from equation (\ref{Gauss,axis}) -- that for ${\cal M}$ below $M/4$ (dotted black), the Gauss curvature can never get negative at the axis (though it does become negative somewhere closer to the equatorial plane if $b$ is sufficiently small, namely less than $0.5377M$). Equation (\ref{Gauss,axis}) also implies that the important point $b=M$, $\theta=0$ is reached by the curve obtained for ${\cal M}=M/\sqrt{2}$.}
\label{Gauss=0}
\end{figure*}
In order to better understand the zeros of the Kretschmann scalar, we recall the relation \cite{FrolovS-07}
\begin{equation}
K\stackrel{\rm H}{=} 3\,{^{(2)}\!}R^2
\end{equation}
between the (four-dimensional) Kretschmann invariant and the Gauss curvature ${^{(2)}\!}R/2$ of the horizon (we mean of the horizon's $t={\rm const}$ section; ${^{(2)}\!}R$ denotes the corresponding 2D Ricci scalar). The Gauss curvature reads, for a generic static and axisymmetric exterior source (``ext"; see e.g. the Erratum of \cite{SemerakZZ-99}),
\begin{align}
\frac{{^{(2)}\!}R}{2}
&\stackrel{\rm H}{=} -\,\frac{1}{\sqrt{g_{\theta\theta}g_{\phi\phi}}}
\left[\frac{(\sqrt{g_{\phi\phi}})_{,\theta}}
{\sqrt{g_{\theta\theta}}}\right]_{,\theta} \nonumber \\
&\stackrel{\rm H}{=} -\,\frac{1}{R_{\rm H}^{2}e^{\lambda_{\rm ext}}\sin\theta}
\left[\frac{(R_{\rm H}\sin\theta)_{,\theta}}
{R_{\rm H}e^{\lambda_{\rm ext}}}\right]_{,\theta} \nonumber \\
&\stackrel{\rm H}{=} \frac{1+(\nu_{{\rm ext},\theta}+\lambda_{{\rm ext},\theta})\cot\theta
+\nu_{{\rm ext},\theta\theta}-\nu_{{\rm ext},\theta}\lambda_{{\rm ext},\theta}}
{R_{\rm H}^{2}e^{2\lambda_{\rm ext}}} \nonumber \\
&\stackrel{\rm H}{=} \frac{1+3\nu_{{\rm ext},\theta}\cot\theta
+\nu_{{\rm ext},\theta\theta}-2(\nu_{{\rm ext},\theta})^2}
{4M^{2}\exp\left(2\nu_{\rm ext}(\theta)\!-\!4\nu_{\rm ext}(0)\right)} \;,
\end{align}
where $r=2M$ everywhere and
\begin{equation}
R_{\rm H}:=2Me^{-\nu_{\rm ext}(r=2M,\theta)}
\end{equation}
is the horizon's circumferential radius
($2\pi R_{\rm H}\sin\theta$ is its proper azimuthal circumference at given $\theta$ and
$2\int_{0}^{\pi}\!R_{\rm H}(\theta)\,e^{\lambda_{\rm ext}(r=2M,\theta)}{\rm d}\theta$
is its proper poloidal circumference).
For the Bach-Weyl ring, the horizon's Gauss curvature is always positive in the equatorial plane, namely
\begin{equation}
{^{(2)}\!}R\,(\theta\!=\!\pi/2) \stackrel{\rm H}{=}
\frac{1+\frac{{\cal M}M^2}{b^3}}
{2M^2\exp\left(\frac{4{\cal M}}{\sqrt{b^2+M^2}}-\frac{2{\cal M}}{b}\right)} \;,
\end{equation}
whereas on the axis it may assume both signs,
\begin{equation} \label{Gauss,axis}
{^{(2)}\!}R\,(\theta\!=\!0) \stackrel{\rm H}{=}
\frac{1-\frac{4{\cal M}M^2}{(b^2+M^2)^{3/2}}}
{2M^2\exp\left(\frac{2{\cal M}}{\sqrt{b^2+M^2}}\right)} \;.
\end{equation}
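Setting the numerator of (\ref{Gauss,axis}) to zero gives the critical ring mass at which the axial Gauss curvature changes sign, ${\cal M}_{\rm crit}=(b^2+M^2)^{3/2}/(4M^2)$; a two-line check of the special values quoted in the caption of figure \ref{Gauss=0}:
\begin{verbatim}
# Critical BW-ring mass for zero axial Gauss curvature of the horizon,
# from the axis formula above: 4*Mcrit*M^2 = (b^2+M^2)^(3/2).
M = 1.0
Mcrit = lambda b: (b**2 + M**2)**1.5 / (4.0 * M**2)
print(Mcrit(0.0))   # 0.25       -> minimal value M/4 (ring at b -> 0)
print(Mcrit(1.0))   # 0.7071...  =  M/sqrt(2) for b = M
\end{verbatim}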
The horizon is known to get more and more oblate when the exterior source lying in the equatorial plane grows in mass. Intuition and experience would suggest that the horizon's Gaussian curvature mainly tends to zero and then to negative values in the axial region (see e.g. Erratum of \cite{SemerakZZ-99} for the Schwarzschild black hole affected by the concentric BW ring or thin annular disc, and \cite{Semerak-02} for a stationary generalization), but figure \ref{Gauss=0} shows that it is only so for $b>M$. When increasing the mass of a ring lying at $b<M$ (very close to the horizon), the region of negative Gauss curvature opens from some {\em non-axial} location; see the green line in figure \ref{Gauss=0} which indicates the latitude where this happens: it actually shifts toward the equatorial plane if the ring is placed closer and closer to the horizon ($b\rightarrow 0$). With figure \ref{Gauss=0} in mind, one understands better the configuration of negative-Kretschmann regions in figures \ref{Kretschmann-mass} and \ref{Kretschmann-radius}, because they touch the horizon exactly where the latter's Gauss curvature vanishes.
Let us also notice ``why" (or at least when/where) the Kretschmann scalar turns negative. Curvature components are probably most straightforwardly interpreted from the geodesic-deviation equation (see e.g. \cite{NiZ-78} for a detailed interpretation of the geodesic-deviation terms in a proper reference frame of a physical observer).
For those present in (\ref{K-Schw}), it is useful to regard the deviation's Schwarzschild components
\begin{align}
\frac{{\rm D}^2\delta t}{{\rm d}\tau^2}
=& -\left[{R^t}_{rtr}(u^r)^2\!+
{R^t}_{\theta t\theta}(u^\theta)^2\!+
{R^t}_{\phi t\phi}(u^\phi)^2\right] \delta t \nonumber \\
& +\left[{R^t}_{rtr}u^r \delta r+
{R^t}_{\theta t\theta}u^\theta\delta\theta+
{R^t}_{\phi t\phi}u^\phi\delta\phi\right] u^t \nonumber \\
& +{R^t}_{rt\theta}(u^r\delta\theta+u^\theta\delta r)\,u^t \nonumber \\
& -2{R^t}_{rt\theta}\,u^r u^\theta\,\delta t \,, \\
\frac{{\rm D}^2\delta\theta}{{\rm d}\tau^2}
=& -\left[{R^\theta}_{t\theta t}(u^t)^2\!+
{R^\theta}_{r\theta r}(u^r)^2\!+
{R^\theta}_{\phi\theta\phi}(u^\phi)^2\right] \delta\theta \nonumber \\
& +\left[{R^\theta}_{t\theta t}u^t \delta t+
{R^\theta}_{r\theta r}u^r\delta r+
{R^\theta}_{\phi\theta\phi}u^\phi\delta\phi\right] u^\theta \nonumber \\
& +{R^\theta}_{trt}(u^r\delta t-u^t\delta r)\,u^t \nonumber \\
& +{R^\theta}_{\phi r\phi}(u^r\delta\phi-u^\phi\delta r)\,u^\phi \,.
\end{align}
For easier intuition, consider a couple of particles separated just in the ``radial" direction $t$ (we are below the horizon!), so with $\delta x^i=0\,$ at a given point:
\begin{align}
\frac{{\rm D}^2\delta t}{{\rm d}\tau^2}
=& -\left[{R^t}_{rtr}(u^r)^2\!+
{R^t}_{\theta t\theta}(u^\theta)^2\!+
{R^t}_{\phi t\phi}(u^\phi)^2\right] \delta t \nonumber \\
& -2{R^t}_{rt\theta}\,u^r u^\theta\,\delta t \,, \\
\frac{{\rm D}^2\delta\theta}{{\rm d}\tau^2}
=& \left({R^\theta}_{trt}u^r+{R^\theta}_{t\theta t}u^\theta\right) u^t\delta t \,.
\end{align}
Similarly, for particles separated only by $\delta\theta$,
\begin{align}
\frac{{\rm D}^2\delta t}{{\rm d}\tau^2}
=& \left({R^t}_{rt\theta}u^r+{R^t}_{\theta t\theta}u^\theta\right)u^t\delta\theta \,, \\
\frac{{\rm D}^2\delta\theta}{{\rm d}\tau^2}
=& -\left[{R^\theta}_{t\theta t}(u^t)^2\!+
{R^\theta}_{r\theta r}(u^r)^2\!+
{R^\theta}_{\phi\theta\phi}(u^\phi)^2\right] \delta\theta \,.
\end{align}
Hence, the diagonal electric-type components (contributing positively to the Kretschmann scalar) are those which have the particles accelerate relative to each other in directions in which these are already separated, thus causing their longitudinal {\it expansion/contraction} (these terms are always non-zero, because, in the brackets, at least $u^r$ must be non-zero). In contrast, the off-diagonal electric-type components (specifically ${R^t}_{rt\theta}\sim {R^\theta}_{trt}$, contributing negatively to the scalar) are seen to pull -- for example -- in $\theta$ the particles separated in the $t$ direction and vice versa, thus causing transversal {\it shear}.
Note, however, the diagonal term ${R^\theta}_{t\theta t}u^\theta u^t\delta t$ in $\frac{{\rm D}^2\delta\theta}{{\rm d}\tau^2}$ and its counterpart ${R^t}_{\theta t\theta}u^\theta u^t\delta\theta$ in $\frac{{\rm D}^2\delta t}{{\rm d}\tau^2}$, and, on the other hand, the non-diagonal term $2{R^t}_{rt\theta}\,u^r u^\theta\,\delta t$ in $\frac{{\rm D}^2\delta t}{{\rm d}\tau^2}$: these do not seem to fit in the above division. But these terms require, besides the separation, also some transverse velocity (namely $u^\theta$); without this velocity, they vanish.
\section{Concluding remarks}
\label{concluding}
Continuing the study of black holes deformed by some additional source, we have found that the presence of a thin ring (described by the Bach--Weyl solution) affects the black-hole field much more than the presence of ``the other" black hole within the Majumdar--Papapetrou binary solution (considered in paper I). Outside the horizon, the potential (lapse) and field (acceleration) behave in a rather Newtonian manner, while curvature (the Kretschmann scalar) displays a rather rugged landscape with loops or points of deep minima developing in the sources' vicinity and changing with parameters.
In the black hole interior the situation is yet more complex.
The gravitational acceleration, given by the gradient of the lapse/potential, shows different shapes and in extreme situations (very strong ring effect) may become time-like (the corresponding scalar $\kappa^2$ may turn negative).
The curvature is influenced by the ring even more profoundly.\footnote
{It is worth noting, however, that in spite of the considerable effect seen on the black hole, it has been shown by \cite{Guerlebeck-15}, on the basis of the behaviour of multipole moments, that at infinity it still looks like Schwarzschild (it has ``no hair" there induced by tidal deformation).}
If the ring is placed sufficiently close to the horizon (and/or is sufficiently massive), there even appear two or more toroidal regions of negative Kretschmann scalar $K$. Some of them touch the horizon at circles where the 2D-horizon's Gauss curvature changes sign from positive to negative values. If the Riemann tensor is split into electric and magnetic parts (see e.g. \cite{CostaN-13}), the negative values of $K$ are naturally interpreted as regions where magnetic curvature dominates \cite{CherubiniBCR-02}. However, magnetic effects are usually tied to rotation, whereas here no rotation is present in space-time (though we are dealing with a non-extreme black-hole interior now, so {\em not} with a {\em static} region, of course).
It would be interesting to also study the rich curvature structure inside the ring-perturbed black hole by the scalar-gradient method pursued in \cite{AbdelqaderL-12} or the vortex-tendex concepts suggested by \cite{Nichols-etal-11}.
An obvious remark should be added concerning visualization. The Schwarzschild-type coordinates we have been using are favourable since they represent the horizon as spherical irrespective of the external influence. However, before interpreting the picture obtained in (any) coordinates, one should bear in mind that most of the statements made are coordinate dependent and that the true geometrical relations may be significantly different. In our case, this applies not only to the shapes of those various equi-surfaces, but also, e.g., to the ``location" (radius) of the ring. Actually, the element of proper radial distance
\[\sqrt{g_{rr}}\,{\rm d}r=\frac{e^{\lambda_{\rm ext}-\nu_{\rm ext}}}{\sqrt{1-\frac{2M}{r}}}\;{\rm d}r\]
as well as the circumferential radius
\[\sqrt{g_{\phi\phi}}=r\,\sin\theta\,e^{-\nu_{\rm ext}}\]
depend on the external source; specifically, for a given $r$ they both rapidly grow with the source mass. (Consequently, when ``keeping the ring's radius $b$ while increasing its mass" in the figures, one effectively makes the ring larger and larger, thus {\em weakening} its effect on the black hole.)
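This growth can be illustrated numerically; in the following sketch a simple Newtonian-type ring potential stands in for the actual Bach--Weyl $\nu_{\rm ext}$ (a crude placeholder, used only to show the trend):
\begin{verbatim}
import numpy as np

# Placeholder potential of a ring of mass M_ring and radius b
# (a stand-in for the Bach--Weyl nu_ext, illustrating the trend only).
def nu_ext(r, theta, M_ring, b=20.0):
    rho, z = r*np.sin(theta), r*np.cos(theta)
    return -M_ring/np.sqrt((rho - b)**2 + z**2)

r, theta = 10.0, np.pi/2
for M_ring in (0.0, 0.5, 1.0):
    circumf = r*np.sin(theta)*np.exp(-nu_ext(r, theta, M_ring))
    print(M_ring, circumf)   # circumferential radius grows with M_ring
\end{verbatim}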
This coordinate dependence could be overcome by representing the surfaces in terms of an isometric embedding into ${\mathbb E}^3$, which in our (axially symmetric) case means drawing the azimuthal circumferential radius along the ``$x$"-axis and the proper distance in the meridional direction, but, unfortunately, the shapes provided by strong-field geometry are often very weird and not even reasonably embeddable (they have negative curvature). The Bach--Weyl ring, after all, is at a finite proper distance when approached from outside, but infinitely far when approached from below, its proper circumference being infinite from either side.
Let us mention some options for future work. First, we saw in paper I on the Majumdar--Papapetrou binary that an extreme black hole is not in every respect a strong source, and this paper II confirmed that a much stronger effect is created by a singular ring. An interesting curvature structure might also be offered, within the same class of static and axially symmetric space-times, by a black-hole binary supported by an Appell ring. Namely, this ring generates a field which is ``repulsive" in a certain region, which might be enough to hold the holes apart (without any struts). Another possibility is a black-hole binary whose components are held from infinity by singular struts. Such a system (a zero-acceleration limit of the C-metric) is of course artificial and, as opposed to the black hole surrounded by a ring, can hardly approximate any astrophysical setting, but (i) its black holes are far (actually, as far as possible) from the extreme state, so they can be expected to exert more strain on each other (than the extreme ones), and (ii) the region between the central singularities of the holes may be rather ``unspoilt" by the singularities stretched along the ``exterior" parts of the symmetry axis. One might also consider a similar system made of a black hole and a massive particle (or particles). Alternatively, one might subject a black hole to a strong (electro-)magnetic field, e.g. within the Ernst class of exact solutions, but such a field is likely to produce much weaker space-time deformation than the above compact sources.
It would certainly be interesting to extend the analysis to {\em stationary} (non-static, rotating) situations. Although practically tractable and physically sound exact superpositions are not yet available within this class, one could describe them by multipole expansions and study the effect of the individual terms then. In the static (originally Schwarzschild) case, the effect of multipoles has recently been considered by \cite{AbdolrahimiMT-15} in order to learn how they deform the shadow of the horizon; the stationary (originally Kerr) case has been treated by \cite{Shoom-15,PaniGMF-15,AbdolrahimiKNT-15}.\footnote
{Even more general is the dynamical situation, recently studied by \cite{OSullivanH-14,OSullivanH-15}, for example.}
(See \cite{Johannsen-16} for the astrophysical importance of such studies, especially connected with the observational challenge provided by the compact object in our Galactic center.)
The tidal deformation of black holes has also been treated perturbatively, following many routes, see for instance \cite{PoissonV-10,Poisson-15} and references therein.
When speaking of static versus stationary settings, we should recall once more that the interior of the above-considered black hole is {\em dynamical},\footnote
{This is in contrast to the Majumdar-Papapetrou--type black holes studied in the first paper, which are extreme and so in their case the Killing vector field $\partial x^\mu/\partial t$ is time-like everywhere except on the very horizon.}
in order to stress again that all the results obtained ``below horizon" in fact describe the $t\!=\!{\rm const}$ sections of the interior. Since the conformal diagram of the hole \& ring space-time is like that of the Schwarzschild black hole alone, just with the equatorial version having a singularity along $r\!=\!r_{\rm ring}$, it is clear how these (time-like) sections look in such a diagram, and that the dynamics ``happens" in the direction of decreasing $r$ (from the horizon towards the singularity) on them. Due to the time symmetry (which however is space-like below the horizon), these sections all have the same geometry.
It may also be a future plan to compare the results with those obtained for a different slicing of the black-hole interior, in particular for a space-like one (e.g. that defined by constant Kruskal-like time coordinate), which would reveal the dynamics of the interior in a different manner.
Let us conclude by noting that recently we have been studying the black-hole--disc/ring system for another but related reason as well: due to perturbation by the additional source, even within such highly symmetric space-times as static and axisymmetric (also reflection symmetric) ones, the geodesic dynamics in the black-hole field loses complete integrability and may incline to chaos (see \cite{SukovaS-13} and preceding papers of this series). The character of geodesic dynamics is very probably related to the curvature properties of the host space-time (and its submanifolds to which the motion is restricted), though any ``generic" attempt to ascribe such global features of motion to the local space-time geometry deserves considerable caution (see e.g. \cite{VieiraL-96b}). On the other hand, complete geodesic integrability is known to be connected with the existence of Killing--Yano tensors, which in turn appears to be restricted to only certain space-time curvature types (Petrov type D) (see e.g. \cite{Batista-15} and references therein). This suggests where a more specific connection between curvature and chaos could be found.
\begin{acknowledgments}
We are grateful for support from the grants GAUK-369015 and SVV-260211 of the Charles University (M.B.), and GACR-14-37086G of the Czech Science Foundation (O.S.).
O.S. also thanks M. Crosta for hospitality at Osservatorio Astrofisico di Torino and T. Ledvinka for advice on {\sc MAPLE}.
\end{acknowledgments}
|
2,869,038,155,015 | arxiv | \section{Introduction}
\subsection{Background}
The objective functions of many signal-processing problems can be formulated as sums of two proper lower-semicontinuous convex functions: one that is smooth, $f: \mathbb{R}^n \to ]-\infty,+\infty]$, and another one that need not be smooth, $g: \mathbb{R}^n \to ]-\infty,+\infty]$. The resulting problem is
\begin{equation} \label{eq:split}
\underset{\mathbf{x} \in \mathbb{R}^n}{\text{minimize}} \quad f (\mathbf{x}) + g (\mathbf{x}).
\end{equation}
Such problems are typically large-scale and can be solved by using splitting methods, which convert~\eqref{eq:split} into a sequence of separable subproblems. The (relaxed) \emph{forward--backward} method~\cite{Figueiredo2003, Daubechies2004} is an example of such methods. Its iterations can be broken into a gradient (forward) step on $f$ and a proximal (backward) step on $g$, performed consecutively---see Algorithm~\ref{algo:rfba}, where $\text{prox}_{\tau g}$ denotes the \emph{proximal operator} of function $g$, i.e., $\text{prox}_{\tau g}(\mathbf{x}) \eqdef \text{arg min}_{\mathbf{u} \in \mathbb{R}^n} \left\{ g(\mathbf{u}) + \frac{1}{2 \tau} \| \mathbf{x} - \mathbf{u} \|^2 \right\}$~\cite{Moreau1962}.
\begin{algorithm} \label{algo:rfba}
Choose $\mathbf{x}^0 \in \mathbb{R}^n,\ \tau > 0$\;
$k \leftarrow 1$\;
\While{stopping criterion is not satisfied}{
Choose $\lambda^k > 0$\;
$\mathbf{x}^{k + 1} \leftarrow \mathbf{x}^k + \lambda^k \left( \text{prox}_{\tau g} \left(\mathbf{x}^k - \tau \nabla f \left(\mathbf{x}^k \right)\right) - \mathbf{x}^k \right)$\; \label{eq:iter_rfba}
$k \leftarrow k + 1$\;
}
\caption{Relaxed forward--backward method.}
\end{algorithm}
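As a concrete (hypothetical) instance, the following Python sketch runs Algorithm~\ref{algo:rfba} for $f = \| \mathbf{y} - \mathbf{H} \cdot \|^2$ and $g = \mu \| \cdot \|_1$, in which case $\text{prox}_{\tau g}$ reduces to soft-thresholding; the data and step sizes are illustrative choices only:
\begin{verbatim}
import numpy as np

# Sketch of Algorithm 1 for f = ||y - Hx||^2 and g = mu*||.||_1
# (illustrative data; prox of tau*g is soft-thresholding at tau*mu).
rng = np.random.default_rng(0)
H = rng.standard_normal((40, 100)); y = rng.standard_normal(40)
mu, lam = 0.1, 1.0
tau = 0.9/(2.0*np.linalg.norm(H, 2)**2)  # below 1/L, L = Lip. of grad f

def prox_l1(x, t):
    return np.sign(x)*np.maximum(np.abs(x) - t, 0.0)

x = np.zeros(100)
for k in range(500):
    grad = 2.0*H.T @ (H @ x - y)                      # forward step
    x = x + lam*(prox_l1(x - tau*grad, tau*mu) - x)   # backward step
\end{verbatim}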
When analyzing the properties of many of these and other algorithms, it can be advantageous to use the theory of monotone operators~\cite{Byrne2004}. Let $2^{\mathbb{R}^n}$ denote the {power} set of $\mathbb{R}^n$. A set-valued operator $A: \mathbb{R}^n \to 2^{\mathbb{R}^n}$ is said to be \emph{monotone} if $\langle \mathbf{u}-\mathbf{v}, \mathbf{x}-\mathbf{y} \rangle \geq 0$ for all $(\mathbf{x},\mathbf{u}) \in \text{gra } A$ and $(\mathbf{y},\mathbf{v}) \in \text{gra } A$, where $\text{gra } A$ denotes the {graph} of A, and it is said to be \emph{maximally monotone} if there exists no other monotone operator whose graph properly contains $\text{gra } A$. Monotone operators are connected to optimization problems as follows. Take, for example, \eqref{eq:split}. According to Fermat's rule, its solutions should satisfy the inclusion $0 \in \nabla f (\mathbf{x}) + \partial g (\mathbf{x})$, where the set-valued operator $\partial g : \mathbb{R}^n \to 2^{\mathbb{R}^n} : \mathbf{x} \to \partial g (\mathbf{x})$ denotes the \emph{subdifferential} of $g$ (in the sense of Moreau and Rockafellar~\cite[Chapter 23]{Rockafellar1970}). The operators $\nabla f$ and $\partial g$ are examples of maximally-monotone operators~\cite[Theorem 20.40]{Bauschke2011}. Problem~\eqref{eq:split} can be seen as a particular case of the problem of finding a zero of the sum of two monotone operators $A$ and $C$, i.e.,
\begin{equation} \label{eq:inclusion}
\text{find } \mathbf{x} \in \mathbb{R}^n \quad \text{such that } 0 \in A \left(\mathbf{x}\right) + C \left(\mathbf{x}\right),
\end{equation}
if one makes $A=\partial g$ and $C=\nabla f$. Problem~\eqref{eq:inclusion} may be solved using a generalized version of Algorithm~\ref{algo:rfba}, in which Line~\ref{eq:iter_rfba} is replaced with
\begin{equation} \label{algo:rfba_op}
\mathbf{x}^{k + 1} \leftarrow \mathbf{x}^k + \lambda^k \left( J_{\tau A} \left( \mathbf{x}^k - \tau C \left(\mathbf{x}^k\right) \right) - \mathbf{x}^k \right),
\end{equation}
where $J_{\tau A} \eqdef (\text{Id} + \tau A)^{-1}$ is the \emph{resolvent} of operator $A$ and $\text{Id}$ denotes the \emph{identity} operator. Note that $J_{\tau \partial g} = \text{prox}_{\tau g}$ \cite[Example 23.3]{Bauschke2011}.
Problem~\eqref{eq:inclusion} can alternatively be written as the problem of finding a fixed point of the operator $R \eqdef J_{\tau A} \circ (\text{Id} - \tau C)$:
\begin{equation} \label{eq:mon_incl_T}
\text{find } \varx \in \mathbb{R}^n \quad \text{such that } \neop \left( \varx \right) = \varx.
\end{equation}
In general, the solutions of a convex optimization problem correspond to the fixed points of a certain operator, and an iterative optimization algorithm corresponds to a fixed-point method.
We can rewrite~\eqref{algo:rfba_op} as
\begin{equation} \label{algo:km}
\mathbf{x}^{k + 1} \leftarrow T_{\lambda^k} \left( \mathbf{x}^k \right) \eqdef \mathbf{x}^k + \lambda^k (R \left( \mathbf{x}^k \right) - \mathbf{x}^k).
\end{equation}
We say that an operator $R: \mathbb{R}^n \to \mathbb{R}^n$ is nonexpansive if $\| \mathbf{u} - \mathbf{v} \| \leq \| \mathbf{x} - \mathbf{y} \|$ for all $(\mathbf{x},\mathbf{u}) \in \text{gra } R$ and $(\mathbf{y},\mathbf{v}) \in \text{gra } R$. Let $R$ be a generic nonexpansive operator and let $\lambda \in \; ]0,1[$. Then the operator $T \eqdef \left(1 - \lambda \right) \eye + \lambda R$ is said to be $\lambda$-\emph{averaged}. It obeys the following contractive property~\cite[Proposition 4.25]{Bauschke2011}:
\begin{align}
&\norm{T \left(\varx\right) - T \left(\vary\right)}{}^2 \nonumber \\
& \, \leq \norm{\varx - \vary}{}^2 - \frac{1 - \lambda}{\lambda} \norm{\left( \text{Id} - T \right) \left(\varx\right) - \left( \text{Id} - T \right) \left(\vary\right)}{}^2 \label{eq:contract_avop}
\end{align}
for all $\varx, \, \vary \in \mathbb{R}^n$. In particular, when $\lambda = 1/2$, $T$ is said to be \emph{firmly nonexpansive}. The resolvents of maximally-monotone operators are firmly nonexpansive~\cite[Corollary 23.8]{Bauschke2011}. Iteration~\eqref{algo:km} is known as the \emph{Krasnosel'ski\u{\i}--Mann} scheme and is the basis of not only the forward--backward method but also other optimization algorithms, such as the Douglas--Rachford one~\cite{Byrne2004, Bauschke2011}. It can be shown that, under certain conditions, the Krasnosel'ski\u{\i}--Mann scheme converges to a point in $\text{Fix } R$, where $\text{Fix } R$ denotes the set of fixed points of $R$.
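A toy illustration of the role of averaging: take $R$ to be a rotation of the plane, which is nonexpansive with $\text{Fix } R = \{\mathbf{0}\}$; the plain iteration $\mathbf{x}^{k+1} = R(\mathbf{x}^k)$ merely circles, whereas the Krasnosel'ski\u{\i}--Mann scheme~\eqref{algo:km} with $\lambda^k = 1/2$ converges (angle and starting point below are arbitrary):
\begin{verbatim}
import numpy as np

th = 0.5                       # rotation angle (arbitrary)
R = np.array([[np.cos(th), -np.sin(th)],
              [np.sin(th),  np.cos(th)]])
x, lam = np.array([1.0, 0.0]), 0.5
for k in range(200):
    x = x + lam*(R @ x - x)    # Krasnosel'skii--Mann step
print(np.linalg.norm(x))       # ~0: converged to Fix R
\end{verbatim}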
The convergence rate of the forward--backward method (Algorithm~\ref{algo:rfba}) can be shown to be sublinear, or, under certain assumptions, to be linear. This rate can often be improved by incorporating \emph{second-order} information about $f$ if this function is twice-differentiable. The local convergence rate of second-order methods is superlinear or even quadratic. As an example, consider the second-order version of Algorithm~\ref{algo:rfba}, which is given by replacing Line~\ref{eq:iter_rfba} with the iteration $\mathbf{x}^{k + 1} \leftarrow \mathbf{x}^k + \lambda^k \left( \text{prox}_g^{\mathbf{B}^k} \left( \mathbf{x}^k - \left[ \mathbf{B}^k \right]^{-1} \nabla f \left( \mathbf{x}^k \right) \right) - \mathbf{x}^k \right)$~\cite{Schmidt2011, Becker2012, Lee2014}, where $\mathbf{B}^k$ is a \ac{PD} matrix (the Hessian of $f$ or an approximation of it) and $\text{prox}_g^{\mathbf{B}^k}$ denotes the proximal operator of $g$ relative to the norm $\| \cdot \|^2_{\mathbf{B}^k}$, i.e., $\text{prox}_{g}^{\mathbf{B}^k} (\mathbf{x}) \eqdef \text{arg min}_{\mathbf{u} \in \mathbb{R}^n} \left\{ g(\mathbf{u}) + \frac{1}{2} \| \mathbf{x} - \mathbf{u} \|^2_{\mathbf{B}^k} \right\}$. More generally, and from an operator-centric perspective, by using second-order methods such as these, one is actually solving a left-preconditioned version of~\eqref{eq:inclusion}, in the sense that instead of directly tackling that problem we are considering problems that share the same set of solutions but may be more convenient to solve:
\begin{equation} \label{eq:mon_incl_precond}
\text{find } \varx \in \mathbb{R}^n \quad \text{such that } 0 \in \fbvm \maxmonresol \left( \varx \right) + \fbvm \cocoercop \left( \varx \right),
\end{equation}
where $\fbvm$ is a \ac{PD} operator. In what follows, we denote positive definiteness by $\fbvm \succ 0$ and positive semidefiniteness by $\fbvm \succeq 0$.
\subsection{Contributions}
The basis of this work is the study of the following alternative scheme to~\eqref{algo:km}:
\begin{equation} \label{eq:fixed_point_iter_opwavop}
\varx^\iite = \avop{\ave^\ite} \left( \varx^\ite \right) \eqdef \varx^\ite + \ave^\ite \left( \neop \left( \varx^\ite \right) - \varx^\ite \right),
\end{equation}
where, for every $\ite$, $\ave^\ite$ is a linear operator such that $\eye \succ \ave^\ite \succ 0$. For convenience, we call the operators $\avop{\ave^\ite}$ \emph{operator-weighted averaged operators}. It is clear that if, for all $k$, we make $\ave^\ite = \lambda^\ite \eye$, we recover~\eqref{algo:km}.
Iteration~\eqref{eq:fixed_point_iter_opwavop} can be interpreted in different ways. For example, if $\ave^\ite$ is fixed, i.e., if, for all $\ite$, $\ave^\ite = \ave$, where $\ave \succ 0$, that iteration can also be seen as a left-preconditioning scheme to solve~\eqref{eq:mon_incl_T}:
\begin{equation} \label{eq:mon_incl_precond_T}
\text{find } \varx \in \mathbb{R}^n \quad \text{such that } \ave \neop \left( \varx \right) = \ave \varx.
\end{equation}
\subsection{Notation and outline}
A detailed account of the notions listed in this section can be found in the work of Bauschke and Combettes~\cite{Bauschke2011}. We denote the \emph{scalar product} of a Hilbert space by $\langle \cdot , \cdot \rangle$ and the associated \emph{norm} by $\| \cdot \|$. The \emph{range} of an operator $A$ is denoted by $\text{ran } A$, and the \emph{adjoint} of $A$ by $A^*$. We say that an operator $A : \mathbb{R}^n \to \mathbb{R}^n$ is \emph{Lipschitz continuous} with constant $L > 0$ if $\| \mathbf{u} - \mathbf{v} \| \leq L \| \mathbf{x} - \mathbf{y} \|$, for all $(\mathbf{x},\mathbf{u}) \in \text{gra } A$ and $(\mathbf{y},\mathbf{v}) \in \text{gra } A$. Additionally, let $\Gamma_0 (\mathbb{R}^n)$ denote the class of all proper lower-semicontinuous convex functions from $\mathbb{R}^n$ to $]{-\infty},+\infty]$. Given two functions $f \in \Gamma_0 (\mathbb{R}^n)$ and $g \in \Gamma_0 (\mathbb{R}^n)$, their \emph{infimal convolution} is denoted by $\infconv{f}{g}$. The Legendre--Fenchel \emph{conjugate} of a function $f$ is denoted by $f^*$. The \emph{indicator function} of a set $C \subseteq \mathbb{R}^{n}$ is defined as $\delta_C(\mathbf{x}) \eqdef 0$ if $\mathbf{x} \in C$, $\delta_C(\mathbf{x}) \eqdef + \infty$ otherwise. We use the notation $\{\mathbf{x}^k\}$ as a shorthand for representing the sequence $\{\mathbf{x}^k\}_{k=1}^{+\infty}$. The space of \emph{absolutely-summable sequences} in $\mathbb{R}$ is denoted by $\spaceseq^1 (\mathbb{N})$; the set of summable sequences in $[0, + \infty [$ is denoted by $\spaceseq^1_+ (\mathbb{N})$. Bold lowercase letters denote vectors and bold uppercase letters denote matrices. $[\mathbf{a}]_i$ denotes the $i$-th element of a vector $\mathbf{a}$, $[\mathbf{A}]_{:j}$ denotes the $j$-th column of a matrix $\mathbf{A}$, and $[\mathbf{A}]_{ij}$ denotes the element in the $i$-th row and $j$-th column of a matrix $\mathbf{A}$. $\mathbf 0$ denotes a \emph{zero} vector or matrix of appropriate size. The maximum and signum operators are denoted by $\text{max}(\cdot)$ and $\text{sgn}(\cdot)$, respectively.
The structure of this work is as follows. In Section~\ref{sec:ssn}, we briefly discuss a class of algorithms known as semismooth Newton methods. In Section~\ref{sec:extension}, we study the scheme given by~\eqref{eq:fixed_point_iter_opwavop}, and show how it can be used to solve a primal--dual problem first studied by Combettes and Pesquet~\cite{Combettes2011b}. In Section~\ref{sec:apps}, we present a simple application of the proposed method to solve an inverse problem. Section~\ref{sec:conclusions} concludes. Due to space constraints, we omit the proofs of the results discussed in Section~\ref{sec:extension}; these proofs can be consulted elsewhere~\cite[Chapter 5]{Simoes2017}.
\section{Semismooth Newton methods} \label{sec:ssn}
\emph{Semismooth Newton} methods were originally developed with the goal of using Newton-like methods to minimize certain nonsmooth functions at a superlinear convergence rate. To illustrate why these methods may be useful when solving problems of the form of~\eqref{eq:split}, consider, as an example, that $f = \| \mathbf{y} - \mathbf{H} \cdot \|^2$, and $g = \mu \| \cdot \|_1$, where $\mathbf{y} \in \mathbb{R}^m$, $\mathbf{H} \in \mathbb{R}^{m \times n}$, and $\mu > 0$. For problems such as these, it was shown by Hinterm\"{u}ller~\cite{Hintermuller2003} that some semismooth Newton methods are equivalent to some active-set methods. As we discuss in Section~\ref{sec:apps}, the fact that these methods can be written as active-set ones allows for significant time savings when solving certain problems, namely the ones involving sparsity-inducing regularizers, as is the case of the $\ell_1$ norm.
Let $G: \mathbb{R}^n \to \mathbb{R}^n$ be an operator such that $G: \mathbf{x} \to \mathbf{x} - \text{prox}_{\mu \| \cdot \|_1} \left(\mathbf{x} - 2 \mu \mathbf{H}^* (\mathbf{H} \mathbf{x} - \mathbf{y}) \right)$. The solution of the problem under consideration should satisfy the nonlinear equation $G ({\mathbf{x}}) = \mathbf{0}$, which is nonsmooth, since $\text{prox}_{\mu \| \cdot \|_1}$ is not everywhere differentiable. There are, however, generalizations of the concept of differentiability that are applicable to an operator such as $G$. One of them is the B(ouligand)-differential~\cite[Definition 4.6.2]{Facchinei2003}, which is defined as follows. Suppose that a generic operator $G: D \subset \mathbb{R}^n \to \mathbb{R}^m$ is locally Lipschitz, where $D$ is an open subset. Then by Rademacher's theorem, $G$ is differentiable almost everywhere in $D$. Let $C$ denote the subset of $\mathbb{R}^n$ consisting of the points where $G$ is differentiable (in the sense of Fr\'{e}chet~\cite[Definition 2.45]{Bauschke2011}). The B-differential of $G$ at $\mathbf{x}$ is $\partial_B \, G (\mathbf{x}) \eqdef \left\{ \lim_{\mathbf{x}^j \to \mathbf{x}} \nabla G \left(\mathbf{x}^j \right) \right\}$, where $\{\mathbf{x}^j\}$ is a sequence such that $\mathbf{x}^j \in C$ for all $j$ and $\nabla G(\mathbf{x}^j)$ denotes the Jacobian of $G$ at $\mathbf{x}^j$.
The B-differential of an operator at a given point may contain more than one element: for example, take $\text{prox}_{\mu \| \cdot \|_1} (\mathbf{x})$, which can be evaluated element-wise by computing $\text{max} \left\{ \big| [\mathbf{x}]_i \big| - \mu, 0 \right\} \cdot \text{sgn} \left([\mathbf{x}]_i \right)$ for $i \in \{1, \cdots, n\}$. A possible $\mathbf{H} \in \partial_B \, \text{prox}_{\mu \| \cdot \|_1} (\mathbf{x})$ is a binary diagonal matrix defined as~\cite[Proposition 3.3]{Griesse2008}
\begin{equation} \label{eq:partial_B_G}
[\mathbf{H}]_{ii} = \begin{cases}
1 & \text{if } \big| [\mathbf{x}]_i \big| > \mu,\\
0 & \text{otherwise.}
\end{cases}
\end{equation}
This generalization of the concept of differentiability can also be used to formulate the so-called semismooth Newton method, which is characterized by the iteration $\mathbf{x}^{k+1} \leftarrow \mathbf{x}^{k} - [\mathbf{H}^k]^{-1} \, G(\mathbf{x}^k)$, where $\mathbf{H}^k \in \partial_B \, G (\mathbf{x}^k)$. It can be shown that this method locally converges superlinearly for operators known as semismooth~\cite{Qi1993b}. Let $\mathbf{x} \in D$ and $\mathbf{d} \in \mathbb{R}^n$; semismooth operators are operators that are directionally differentiable at a neighborhood of $\mathbf{x}$ and that, for any $\mathbf{H} \in \partial_B \, G (\mathbf{x}+\mathbf{d})$, satisfy the condition $\mathbf{H} \mathbf{d} - G'(\mathbf{x}; \mathbf{d})=o(\|\mathbf{d}\|)$ for $\mathbf{d} \to \mathbf{0}$, where $G'(\mathbf{x}; \mathbf{d})$ denotes the directional derivative of $G$ at $\mathbf{x}$ along $\mathbf{d}$. Examples of semismooth functions are the Euclidean norm and piecewise-differentiable functions~\cite[Chapter 2]{Ulbrich2011}, $\text{prox}_{\mu \| \cdot \|_1} (\mathbf{x})$ being an example of the latter. Note that the semismooth Newton method is a particular case of~\eqref{eq:fixed_point_iter_opwavop}, although we impose that $\eye \succ \ave^\ite \succ 0$ in the latter equation, which is not necessarily true for this method.
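The following sketch makes the iteration concrete for the example above, using the element of $\partial_B \, \text{prox}_{\mu \| \cdot \|_1}$ from~\eqref{eq:partial_B_G}. We write it with a generic step $\tau$ (an assumption; the scaling in the text corresponds to one particular choice), the data are illustrative, and no globalization safeguard is included, so convergence is only local:
\begin{verbatim}
import numpy as np

# Semismooth Newton for G(x) = x - prox(x - tau*grad f(x)) with
# f = ||y - Hx||^2, g = mu*||.||_1; generic step tau (an assumption),
# illustrative data, no globalization (local method).
rng = np.random.default_rng(1)
n = 50
H = np.tril(np.ones((n, n)))/n
y = H @ (np.sign(rng.standard_normal(n))*(rng.random(n) < 0.1))
mu, tau = 1.0e-3, 0.5
soft = lambda v, t: np.sign(v)*np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(n)
for k in range(15):
    p = x - 2.0*tau*H.T @ (H @ x - y)
    G = x - soft(p, tau*mu)
    free = np.abs(p) > tau*mu          # cf. the diagonal matrix above
    J = np.eye(n)                      # rows e_i' on the active set
    J[free] = 2.0*tau*(H.T @ H)[free]  # B-differential rows, free set
    x = x - np.linalg.solve(J, G)
\end{verbatim}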
\section{An extension of averaged-operator-based algorithms} \label{sec:extension}
In this section, we define operator-weighted averaged operators, and show that they have a contractive property. We also study the asymptotic behavior of fixed-point iterations of these operators. Such iterations can be seen as an extension of the Krasnosel'ski\u{\i}--Mann scheme [cf.~\eqref{algo:km}]. We base our analysis on the fact that these iterations produce a sequence that is variable-metric Fej\'{e}r monotone~\cite{Combettes2013,Combettes2014b}. We then present an algorithm that uses operator-weighted averaged operators, and that solves a primal--dual problem that encapsulates many problem formulations~\cite{Combettes2011b, Combettes2014b}.
\subsection{An extension of the Krasnosel'ski\u{\i}--Mann scheme} \label{sec:eKM}
\begin{definition}[Operator-weighted averaged operators]
Let $\nesubset$ be a nonempty subset of $\mathbb{R}^n$, let $\eps \in \; ]0, 1[$, and let $\ave$ be an operator in $\mathbb{R}^n$ such that
\begin{equation} \label{eq:ave_ass}
\Meig \eye \succeq \ave \succeq \meig \eye, \quad \text{where } \Meig, \, \meig \in [\eps, 1-\eps].
\end{equation}
We say that an operator $\avop{\ave}: \nesubset \to \mathbb{R}^n$ is an operator-weighted averaged operator if there exists a nonexpansive operator $\neop: \nesubset \to \mathbb{R}^n$ such that
\begin{equation} \label{eq:aveop}
\avop{\ave} \define (\eye - \ave) + \ave \neop.
\end{equation}
\end{definition}
We have proved the following results:
\begin{proposition} \label{th:contractive}
Let $\nesubset$ be a nonempty subset of $\mathbb{R}^n$, let $\eps \in \; ]0, 1[$, let $\ave$ be an operator in $\mathbb{R}^n$ satisfying~\eqref{eq:ave_ass}, let $\neop: \nesubset \to \mathbb{R}^n$ be a nonexpansive operator, and let $\avop{\ave}: \nesubset \to \mathbb{R}^n$ be an operator as defined in~\eqref{eq:aveop}. Then the operator $\avop{\ave}$ is $\Meig$-averaged in the metric induced by $\inv{\ave}$. In other words, the operator $\avop{\ave}$ verifies
\begin{align*}
&\norm{\avop{\ave} \left( \varx \right) - \avop{\ave} \left( \vary \right)}{\inv{\ave}}^2 \\ \nonumber
&\leq \norm{\varx - \vary}{\inv{\ave}}^2 - \frac{1 - \Meig}{\Meig} \norm{\left( \emph{\eye} - \avop{\ave} \right) \left( \varx \right) - \left( \emph{\eye} - \avop{\ave} \right) \left( \vary \right)}{\inv{\ave}}^2
\end{align*}
for all $\varx, \, \vary \in \nesubset$.
\end{proposition}
\begin{theorem} \label{th:eKM}
Let $\nesubset$ be a nonempty closed convex subset of $\mathbb{R}^n$, let $\eps \in \; ]0, 1[$, let $\seq{\seqave^\ite} \in \spaceseq^1_+(\mathbb{N})$, let $\seq{\ave^\ite}$ be a sequence of \ac{PD} operators in $\mathbb{R}^{n \times n}$ such that, for all $\ite \in \mathbb{N}$,
\begin{equation} \label{eq:seq_ave_ass}
\begin{cases}
\Meig^\ite \emph{\eye} \succeq \ave^\ite \succeq \meig^\ite \emph{\eye},\\
\Meig^\ite, \, \meig^\ite \in [\eps, 1-\eps],\\
\left( 1 + \seqave^\ite \right) \ave^\iite \succeq \ave^\ite,
\end{cases}
\end{equation}
and let $\neop: \nesubset \to \nesubset$ be a nonexpansive operator such that $\emph{\fix} \neop \neq \emptyset$. Additionally, let $\varx^0 \in \nesubset$ and let, for all $\ite$, $\seq{\varx^\ite}$ be a sequence generated by~\eqref{eq:fixed_point_iter_opwavop}. Then $\seq{\varx^\ite}$ converges to a point in $\emph{\fix} \neop$.
\end{theorem}
\subsection{Primal--dual optimization algorithms} \label{sec:pdprob}
Combettes and Pesquet studied a primal--dual problem that generalizes many problems~\cite[Problem 4.1]{Combettes2011b}. By being able to devise an algorithm to solve this problem, we are effectively tackling a large number of problems simultaneously (problem~\eqref{eq:split} is one of these). Let $m$, $n$, and $\dimr$ be strictly-positive integers, let $\cvxone \in \lsc(\mathbb{R}^n)$, let $\coco \in \; ]0, + \infty[$, let $\smooth: \mathbb{R}^n \to ]{-\infty},+\infty]$ be convex and differentiable with a $\inv{\coco}$-Lipschitzian gradient, and let $\biasvar \in \mathbb{R}^n$. For every $\itr \in \{ 1, \dots, \dimr \}$, let $\biasvarlnop_\itr \in \mathbb{R}^{m_\itr}$, let $\cvxtwo_\itr \in \lsc(\mathbb{R}^{m_\itr})$, let $\strmaxmonopparam_\itr \in \; ]0, +\infty[$, let $\strong_\itr \in \lsc(\mathbb{R}^{m_\itr})$ be $\strmaxmonopparam_\itr$-strongly convex,\footnote{A function $\strong$ is said to be $\strmaxmonopparam$-\emph{strongly convex} if $\strong - \frac{\strmaxmonopparam}{2} \langle \mathbf{x}, \mathbf{x} \rangle$ is convex, for some $\strmaxmonopparam > 0$.} let $\lnop_\itr \in \mathbb{R}^{m_\itr \times n}$ such that $\lnop_\itr \neq 0$, and let $\pdomega_\itr$ be real numbers in $]0, 1]$ such that $\sum_{\itr = 1}^{\dimr} \pdomega_\itr = 1$. The problem is as follows:
\begin{problem} \label{th:problem}
Solve the primal minimization problem,
\begin{equation*} \label{eq:primalproblem2}
\underset{\pr \in \mathbb{R}^n}{\text{{minimize}}} \, \cvxone (\pr) + \sum_{\itr=1}^{\dimr} \pdomega_\itr \left(\infconv{\cvxtwo_\itr}{\strong_\itr} \right) \left(\lnop_\itr \pr - \biasvarlnop_\itr \right) + \smooth (\pr) - \innerpro{\pr}{\biasvar}{},
\end{equation*}
together with its corresponding dual minimization problem,
\begin{align*}
\underset{\du_1 \in \mathbb{R}^{m_1}, \cdots, \du_\itr \in \mathbb{R}^{m_\itr}}{\text{{minimize}}} &\, \left( \infconv{\conj{\cvxone}}{\conj{\cvxtwo}} \right) \left( \biasvar - \sum_{\itr=1}^{\dimr} \pdomega_\itr \conj{\lnop_\itr} \du_\itr \right) \\
&\, + \sum_{\itr=1}^{\dimr} \pdomega_\itr \left( \conj{\cvxtwo_\itr} (\du_\itr) + \conj{\strong_\itr} (\du_\itr) + \innerpro{\du_\itr}{\biasvarlnop_\itr}{} \right). \label{eq:dualproblem2}
\end{align*}
The sets of solutions to these primal and dual problems are denoted by $P$ and $D$, respectively.
\end{problem}
Consider Algorithm~\ref{algo:stackevmfbapp} to solve Problem~\ref{th:problem}. In what follows, for all $\itr$, $\seq{\vm^\ite}$, $\seq{\ave^\ite}$, $\seq{\vm^\ite_\itr}$, $\seq{\ave^\ite_\itr}$ are sequences of linear operators, and $\seq{\errorpr^\ite}$, $\seq{\errordu^\ite_\itr}$, $\seq{\errorresolpr^\ite}$, $\seq{\errorresoldu^\ite_\itr}$ are absolutely-summable sequences that can be used to model errors. Algorithm~\ref{algo:stackevmfbapp} is an extension of~\cite[Example 6.4]{Combettes2014b}.
\begin{algorithm} \label{algo:stackevmfbapp}
Choose $\pr^0 \in \mathbb{R}^n$ and $\du_1^0 \in \mathbb{R}^{m_1}, \cdots, \du_\itr^0 \in \mathbb{R}^{m_\itr}$\;
$\ite \leftarrow 1$\;
\While{stopping criterion is not satisfied}{
\For{$\itr = 1, \dots, \dimr$}{
Choose $\vm^\ite_\itr, \, \ave^\ite_\itr \succ 0 \text{ s.t. } \ave^\ite_\itr \prec \eye$\;
$\pdu^\ite_\itr =\prox^{\inv{(\vm_\itr^\ite)}}_{\conj{\cvxtwo}_\itr} \big( \du^\ite_\itr + \vm_\itr^\ite \big( \lnop_\itr \pr^\ite$
\newline\makebox[3cm]{}$ - \grad{\conj{\strong}_\itr} \left(\du^\ite\right) - \errorresoldu^\ite_\itr - \biasvarlnop_\itr \big) \big) + \errordu^\ite_\itr$\;
$\duo^\ite_\itr = 2 \pdu^\ite_\itr - \du^\ite_\itr$\;
$\du^\iite_\itr = \du^\ite_\itr + \ave_\itr^\ite \left( \pdu^\ite_\itr - \du^\ite_\itr \right)$\;
}
Choose $\vm^\ite, \, \ave^\ite \succ 0 \text{ s.t. } \ave^\ite \prec \eye$\;
$\ppr^\ite =$
$\prox_{\cvxone}^{\inv{(\vm^\ite)}} \big( \pr^\ite - \vm^\ite \big( \sum_{\itr = 1}^{\dimr} \pdomega_\itr \conj{\lnop}_\itr \duo^\ite_\itr$
\newline\makebox[3cm]{}$+ \grad{\smooth} \left(\pr^\ite\right) + \errorresolpr^\ite - \biasvar \big) \big) + \errorpr^\ite$\;
$\pr^\iite = \pr^\ite + \ave^\ite \left(\ppr^\ite - \pr^\ite \right)$\; \label{eq:ssn_in_algo}
$\ite \leftarrow \iite$\;
}
\caption{An application of~\eqref{eq:fixed_point_iter_opwavop} to solve Problem~\ref{th:problem}.}
\end{algorithm}
The following corollary establishes some convergence properties of Algorithm~\ref{algo:stackevmfbapp}.
\begin{corollary} \label{th:stackevmfbapp}
Suppose that
\begin{equation*} \label{eq:assumption2}
\biasvar \in \emph{\ran} \left( \subgrad{\cvxone} + \sum_{\itr=1}^{\dimr} \pdomega_\itr \conj{\lnop_\itr} \left( \infconv{\subgrad{\cvxtwo}_\itr}{\subgrad{\strong}_\itr} \right) \left(\lnop_\itr \cdot - \biasvarlnop_\itr \right) + \grad{\smooth} \right)
\end{equation*}
and set $\mincocomon \eqdef \min \{\coco, \strmaxmonopparam_1, \dots, \strmaxmonopparam_\dimr \}$. Let $\seq{\vm^\ite}$ be a sequence of \ac{PD} operators in $\mathbb{R}^{n \times n}$ and, for every $\itr \in \{ 1, \dots, \dimr \}$, let $\seq{\vm^\ite_\itr}$ be a sequence of \ac{PD} operators in $\mathbb{R}^{m_\itr \times m_\itr}$ such that, for all $\ite \in \mathbb{N}$,
\begin{equation} \label{eq:stackevmfbass1}
\begin{cases}
\Meigvm \emph{\eye} \succeq \vm^\ite \succeq \meigvm \emph{\eye},\\
\Meigvm \emph{\eye} \succeq \vm^\ite_\itr \succeq \meigvm \emph{\eye},\\
\Meigvm, \, \meigvm \in \ ]0, + \infty[,
\end{cases}
\end{equation}
let $\fbeps \in \;]0, \min \{ 1, \mincocomon \}[$, let $\seq{\ave^\ite}$ be a sequence of \ac{PD} operators in $\mathbb{R}^{n \times n}$, and let $\seq{\ave^\ite_\itr}$ be a sequence of \ac{PD} operators in $\mathbb{R}^{m_\itr \times m_\itr}$ such that, for all $\ite$,
\begin{equation} \label{eq:stackevmfbassave}
\begin{cases}
\ave^\ite \vm^\ite = \vm^\ite \ave^\ite,\\
\ave^\ite_\itr \vm^\ite_\itr = \vm^\ite_\itr \ave^\ite_\itr,\\
\Meig \emph{\eye} \succeq \ave^\ite \succeq \meig \emph{\eye},\\
\Meig \emph{\eye} \succeq \ave^\ite_\itr \succeq \meig \emph{\eye},\\
\Meig, \, \meig \in [\fbeps, 1],
\end{cases}
\text{and }
\begin{cases}
\ave^\iite \vm^\iite \succeq \ave^\ite \vm^\ite, \\
\ave^\iite_\itr \vm^\iite_\itr \succeq \ave^\ite_\itr \vm^\ite_\itr.
\end{cases}
\end{equation}
Let, for all $\itr$, $\seq{\errorpr^\ite}$, $\seq{\errordu^\ite}$, $\seq{\errorresolpr^\ite_\itr}$, $\seq{\errorresoldu^\ite_\itr} \in \spaceseq^1(\mathbb{N})$. For every $\ite$, set $\pddelta^\ite \define \isquareroot{\sum_{\itr = 1}^{\dimr} \pdomega_\itr \norm{\squareroot{\vm^\ite_\itr} \lnop_\itr \squareroot{\vm^\ite}}{}^2} - 1$ and suppose that $\pdxi^\ite \define \frac{\pddelta^\ite}{(1+\pddelta^\ite) \Meigvm} \geq \frac{1}{2 \mincocomon - \fbeps}$.
Let $\seq{\pr^\ite}$ be a sequence generated by Algorithm~\ref{algo:stackevmfbapp}. Then $\pr^\ite$ converges to a point in $P$ and $\left(\du^\ite_1, \dots, \du^\ite_\dimr \right)$ converges to a point in $D$.
\end{corollary}
\section{Experiment} \label{sec:apps}
In this section, we give a practical example of a simple problem that can be solved via Algorithm~\ref{algo:stackevmfbapp}. Consider the constrained problem
\begin{equation} \label{eq:lasso_constraint}
\underset{\mathbf{x} \in [c,d]^n}{\text{minimize}} \quad \| \mathbf{b} - \mathbf{H} \mathbf{x} \|^2 + \mu \| \mathbf{x} \|_1,
\end{equation}
where $\mathbf{b} \in \mathbb{R}^n$, $c \in \mathbb{R}$, $d \in \mathbb{R}$, $\mu>0$, $\mathbf{H} = {{}^{1}\!/_{n}} \widehat{\mathbf{H}}$, and $\widehat{\mathbf{H}} \in \mathbb{R}^{n \times n}$ is a lower-triangular matrix of ones. Griesse and Lorenz studied a non-constrained, and therefore simpler, version of this problem in the context of inverse integration~\cite[Section 4.1]{Griesse2008}. Problem~\eqref{eq:lasso_constraint} can be solved via Algorithm~\ref{algo:stackevmfbapp} if we let $\gamma > 0$, $\tau >0$ and make $m = n$, $\dimr = 1$, $\lnop_1 = \eye$, $\biasvarlnop_1 = \myvec{0}$, $\biasvar = \myvec{0}$, and, for all $\ite$, $\vm^\ite_1 = \gamma \eye$, $\vm^\ite = \tau \eye$, $\errorresoldu^\ite_1 = \myvec{0}$, $\errordu^\ite_1 = \myvec{0}$, $\ave_{1}^\ite = \eye$, $\errorresolpr^\ite = \myvec{0}$, $\errorpr^\ite = \myvec{0}$, $\smooth = \| \mathbf{b} - \mathbf{H} \cdot \|^2$, $\cvxone=\mu \| \cdot \|_1$, $\cvxtwo= \ind{[c,d]^n}{\cdot}$, $\strong_1: \mathbf{u} \to 0$ if $\mathbf{u} = 0$, $\strong_1: \mathbf{u} \to + \infty$ otherwise.
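For orientation, one iteration of Algorithm~\ref{algo:stackevmfbapp} under these choices can be sketched as follows; as an assumption for this sketch we take $\ave^\ite = \lambda \eye$ (scalar relaxation) instead of the operator weights discussed next, set all error terms to zero, and compute the dual proximal step via the Moreau decomposition ($\gamma$, $\tau$, $\lambda$ are illustrative step sizes):
\begin{verbatim}
import numpy as np

# One iteration of Algorithm 2 for the constrained problem above, with
# Lambda^k = lam*I (scalar relaxation), zero errors, V_1^k = gam*I and
# V^k = tau*I.  The dual prox of the conjugate box indicator uses the
# Moreau decomposition: prox_{gam*g*}(z) = z - gam*proj_box(z/gam).
soft = lambda v, t: np.sign(v)*np.maximum(np.abs(v) - t, 0.0)

def algo2_step(x, u, H, b, mu, c, d, gam, tau, lam):
    z = u + gam*x
    p = z - gam*np.clip(z/gam, c, d)   # dual proximal step
    ubar = 2.0*p - u                   # over-relaxed dual variable
    xhat = soft(x - tau*(ubar + 2.0*H.T @ (H @ x - b)), tau*mu)
    return x + lam*(xhat - x), p       # Lambda_1^k = I, so u^{k+1} = p
\end{verbatim}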
If we take $\ave^\ite$ to be a sequence of scalars, we recover a version of~\cite[Example 6.4]{Combettes2014b}. However, inspired by the fast convergence properties of the methods discussed in Section~\ref{sec:ssn} and following a similar reasoning to~\cite[Proposition 3.7]{Griesse2008}, we consider the B-differential for the operator $\text{prox}_{\mu \| \cdot \|_1}$ given in~\eqref{eq:partial_B_G} and take $\ave^\ite$ to be the inverse of
\begin{equation*} \label{eq:op_ssn}
\inv{\left(\mathbf{P}^\ite\right)} \begin{bmatrix}
\tau [\mathbf{H}]^*_{:{I}^k} [\mathbf{H}]^{}_{:{I}^k} & \tau [\mathbf{H}]^*_{:{I}^k} [\mathbf{H}]^{}_{:{A}^k} \\
\mathbf{0} & \eye
\end{bmatrix} \mathbf{P}^\ite,
\end{equation*}
where
\begin{equation*}
\begin{aligned}
{A}^k &\eqdef \{ i \in \mathbb{N} : \big| \left[\pr^\ite - 2 \tau \left( \mathbf{H}^* \left(\mathbf{H} \mathbf{x}^k - \mathbf{b}\right) + \duo^\ite_1 \right)\right]_i \big| \leq \tau \mu \},\\
{I}^k &\eqdef \{ i \in \mathbb{N} : \big| \left[\pr^\ite - 2 \tau \left( \mathbf{H}^* \left(\mathbf{H} \mathbf{x}^k - \mathbf{b}\right) + \duo^\ite_1 \right)\right]_i \big| > \tau \mu \},
\end{aligned}
\end{equation*}
and $\seq{\mathbf{P}^\ite}$ is a sequence of appropriate permutation matrices such that, given a vector $\mathbf{x}$, the first elements of the vector $\mathbf{P}^\ite \mathbf{x}$ correspond to the indices in ${I}^k$ and the last elements to the indices in ${A}^k$, for all $k$. By again following a similar reasoning to the one of~\cite[Section 3.3]{Griesse2008}, it can be shown that Line~\ref{eq:ssn_in_algo} of Algorithm~\ref{algo:stackevmfbapp} can be rewritten in such a way that this algorithm is easily seen to be equivalent to an active-set method. In fact, that line is given by
\begin{equation*}
\pr^\iite \leftarrow \inv{\left(\mathbf{P}^\ite\right)} \begin{bmatrix}
\left([\mathbf{H}]^*_{:{I}^\ite} [\mathbf{H}]_{:{I}^\ite}^{} \right)^{-1} \left[ \mathbf{H}^* \mathbf{b} - \duo^\ite_1 + \tau \mathbf{e}^\ite_{\pm} \right]_{{I}^\ite}^{} \\
\mathbf{0}
\end{bmatrix},
\end{equation*}
where $\mathbf{e}^\ite_{\pm} \eqdef \text{sgn} \left[ \pr^k - 2 \tau \left( \mathbf{H}^* \left(\mathbf{H} \mathbf{x}^k - \mathbf{b}\right) + \duo^\ite_1 \right) \right]$, for every $\ite$. The dimension of the problem to solve at each iteration is given by the cardinality of the set $I^\ite$. Naturally, the sparser the solution is estimated to be, the smaller the dimension of this problem is. In contrast, methods such as the \ac{ADMM}~\cite{Afonso2010} require the solution of a problem involving the full matrix $\mathbf{H}^*\mathbf{H}$. This is the reason why semismooth Newton methods are able to achieve faster convergence rates in practice than other methods.
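The reduced solve can be coded as below (a sketch; its constants follow the scaling conventions of the earlier semismooth Newton sketch, which absorb constant factors differently from the displayed formula, so they should be read as illustrative rather than as a verbatim transcription):
\begin{verbatim}
import numpy as np

# Reduced active-set form of Line 10 (sketch; constants follow the
# earlier semismooth Newton sketch, not the displayed formula).
def active_set_step(H, b, ubar, x, tau, mu):
    p = x - tau*(ubar + 2.0*H.T @ (H @ x - b))
    F = np.abs(p) > tau*mu           # free (inactive) indices I^k
    xn = np.zeros_like(x)
    HF = H[:, F]                     # only |I^k| columns enter the solve
    rhs = (H.T @ b - ubar/2.0 - (mu/2.0)*np.sign(p))[F]
    xn[F] = np.linalg.solve(HF.T @ HF, rhs)
    return xn
\end{verbatim}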
We simulate an example similar to the one studied by Griesse and Lorenz~\cite[Section 4.1]{Griesse2008} but consider the noise to be Gaussian with a \ac{SNR} of 30 dB. We have set $\mu = 3 \times 10^{-3}$, $c=-80$, and $d=52$. We compared Algorithm~\ref{algo:stackevmfbapp} (denoted in what follows as \emph{Proposed}) with \ac{ADMM} and with the \ac{CM} to solve~\eqref{eq:lasso_constraint}. We manually tuned the different parameters of the three methods in order to achieve the fastest convergence results in practice. We arbitrarily chose the result of \ac{ADMM} after $10^7$ iterations as representative of the solution given by the three methods. Fig.~\ref{fig:rmse_vs_time_inv_int} illustrates the behavior of the three methods by showing the \ac{RMSE}
between the estimates of each method and the representative solution, as a function of time. The three methods were initialized with the zero vector. The experiments were performed using MATLAB on an Intel Core i7 CPU running at 3.20~GHz, with 32~GB of RAM.
\begin{figure}[!t]
\begin{center}
\includegraphics[scale=.45]{rmse_vs_time_inv_int}
\caption{RMSE, as a function of time, between the estimates of each iteration and the representative solution, for the three methods.}
\label{fig:rmse_vs_time_inv_int}
\end{center}
\end{figure}
In this example, we did not enforce assumptions~\eqref{eq:stackevmfbassave} but verified in practice that they were satisfied. However, in more complex examples, it may be necessary to devise a strategy that generates a sequence $\seq{\ave^\ite}$ satisfying these assumptions. This is akin to the necessity of devising globalization strategies in other Newton-like methods~\cite[Chapter 8]{Facchinei2003}.
\subsection{Appraisal}
It is clear that, for this example, the proposed method has a much faster convergence than either \ac{CM} or \ac{ADMM}. This improvement in convergence is similar to the one observed in the methods discussed in Section~\ref{sec:ssn}. In general, the sparser the solution is, the faster the method is as well. In order to benefit from this property, we must be able to solve the lower-dimensional linear system faster than the full system. This may not always be possible: for example, in problems that involve computations with the \ac{FFT} of a signal, we usually have only modest improvements in speed if we wish to compute only selected elements of the \ac{FFT}.\footnote{See~\url{http://www.fftw.org/pruned.html} for details.} However, for large-scale problems and for highly-sparse signals, methods known as sparse \acp{FFT}~\cite{Gilbert2014} may be useful. We verified in other experiments not detailed here that the proposed method has a comparable convergence speed to ADMM in problems whose solutions are not sparse or where we cannot take advantage of their sparsity.
\section{Conclusions} \label{sec:conclusions}
In this work, we defined operator-weighted averaged operators, and showed that they can be used to construct a number of algorithms with good convergence properties. These algorithms have very broad applications, and seem to be particularly suitable to address problems with sparsity-inducing regularizers, as suggested by a simple experiment. Possible future directions to be explored are the possibility of relaxing the assumptions on $\ave^\ite$, and the study of which problems are most suitable to be tackled by these methods.
\bibliographystyle{IEEEtran}
|
2,869,038,155,016 | arxiv | \section{Introduction}
The integral quadratic constraint (IQC) approach of \cite{Megretski:1997} provides a flexible framework for robustness analysis of uncertain systems, and includes as special cases many previously-proposed methods. The basic setup of \cite{Megretski:1997} is an interconnection of a nominal system -- which is a stable, finite-dimensional linear time-invariant system -- and uncertainties that are known to satisfy integral quadratic constraints. The ``uncertainties'' may be known but ``troublesome" components (e.g., time delay or other infinite-dimensional dynamics, saturation or other nonlinearities) or unknown but bounded dynamics. Closed-loop stability and performance can be verified by solving a linear matrix inequality (semidefinite program).
Advances in computational analysis of nonlinear systems, especially polynomial systems via sum-of-squares programming \cite{Parrilo:2003}, motivate extending the IQC approach to scenarios in which the ``nominal'' system is nonlinear, but still amenable to computation, e.g. described by a relatively low-degree polynomial vector field. However, many of the most powerful IQCs used in \cite{Megretski:1997} are ``soft'' IQCs, meaning that their time-domain representations do not necessarily hold for all finite times (so-called ``hard'' IQCs), as would be required for standard analysis methods of nonlinear systems \cite{Schaft:2017}.
Recently, the work \cite{Seiler:2015} proved that, under rather mild conditions, most soft IQCs do in fact have hard representations, in particular if the associated frequency-domain multiplier admits a $ J $-factorization. Related results were obtained in \cite{veenman2013stability, Scherer:2018}. Based on this, several approaches have been developed to relax the assumption on nominal models. In \cite{Pfifer:2015,Wang:2016}, the IQC-based analysis framework has been applied to linear parameter-varying (LPV) systems. In \cite{Carrasco:2018}, a time-domain IQC theorem using graph separation was proposed for the feedback interconnection of two nonlinear systems. The stability condition was stated purely on input-output relations without involving linear matrix inequalities (LMIs).
The stability and performance notion used in the IQC framework is based on $ L_2 $ gain with respect to one preferred equilibrium (e.g., the origin). However, for many nonlinear systems, there are reasons to prefer a stronger stability analysis tool which is independent of the reference \cite{Wang:2017}. These cases often require the input-output stability to consider both \emph{boundedness} (i.e., bounded inputs produce bounded outputs) and \emph{continuity} (the outputs are not critically sensitive to small changes in inputs). Although both continuity and boundedness of input-output stability were introduced by Zames \cite{Zames:1966}, research on boundedness of solutions and stability of special solutions (e.g., a known equilibrium) has dominated the existing literature. In \cite{Desoer:1975}, Desoer and Vidyasagar show that bounded and continuous solutions with respect to inputs can be implied by incremental $ L_2 $ stability, while only boundedness can be ensured by the standard $ L_2 $-gain stability. The incremental IQC was briefly discussed in \cite{Megretski:1997} and later on applied to the harmonic analysis of uncertain systems in \cite{Rantzer:1997, Jonsson:2001}. However, it was proved in \cite{Kulkarni:2002} that stability multipliers such as Zames-Falb, Popov, and RL/RC multipliers are not directly applicable for incremental stability analysis as they cannot preserve incremental positivity. Recently, a weaker notion -- the \emph{equilibrium-independent} IQC -- was introduced to describe nonlinear systems \cite{Summers:2013}. The frequency-domain IQC theorem is then applied for robustness analysis of networked passive systems with delays.
Contraction \cite{Lohmiller:1998} is a strong form of stability meaning that, roughly speaking, all solutions converge to each other exponentially. The underlying idea is to investigate the local stability of the linearized system (differential dynamics) along any admissible trajectory, from which global incremental stability can be established via integration of the local analysis along a certain path. The benefits of contraction analysis are twofold. First, no specific knowledge about the particular trajectory or reference signal is required during the analysis stage, which leads to a \emph{reference-independent} approach. Second, the analysis problem has a convex formulation using a Riemannian metric \cite{Aylward:2008}, which allows for a numerically efficient optimization method -- sum-of-squares (SOS) programming \cite{Parrilo:2003}. Recently, a general differential Lyapunov framework based on a Finsler metric was presented in \cite{Forni:2014}. The extension of differential Lyapunov theory to input/output dynamics -- differential dissipativity \cite{Forni:2013,Schaft:2013} -- was applied to system identification \cite{Tobenkin:2010}, robust analysis of a limit cycle \cite{Manchester:2014} and distributed control of chemical systems \cite{Wang:2017b}. A controller synthesis and realization framework based on the control contraction metric (CCM) was developed in \cite{Manchester:2017}. It was then extended to performance design in \cite{Manchester:2018}. In \cite{Wang:2006}, contraction analysis based on certain metrics was applied to group cooperation subject to time-delayed communications. However, there are as yet few precise results on contraction of uncertain systems containing a broad range of ``troublesome'' components.
In this paper, we present an approach to verify robust contraction and performance ($L_2$ gain) of an uncertain nonlinear system by lifting the time-domain IQC theorem into the differential setting. The uncertain system consists of a feedback interconnection of a nominal nonlinear model and a perturbation. Here we assume that the nominal model belongs to a class of ``less troublesome'' nonlinear dynamics whose differential $ L_2 $ gain can be obtained via pointwise LMIs. A novel IQC, namely the \emph{differential IQC}, is then introduced to replace the ``troublesome'' uncertainty. By using the results from IQC-based analysis for LPV systems \cite{Pfifer:2015, Pfifer:2016}, we develop a pointwise LMI condition which yields a bounded differential $ L_2 $ gain for the uncertain system. Finally, we prove that global incremental $ L_2 $-gain performance can be guaranteed if the local analysis result is satisfied for all one-parameter solution families. We also show that a \emph{global} $ L_2 $-gain bound (a weaker notion compared with the incremental one) with respect to a certain reference trajectory can be inferred if the differential dissipation test is only validated for a certain class of solution families.
The paper is structured as follows. Section~\ref{sec:background} presents the background on contraction analysis and IQC. Section~\ref{sec:DIQC} introduces the concept of $ \delta $-IQC. The main results on robust contraction analysis based on differential dissipativity and $ \delta $-IQCs are given in Section~\ref{sec:main}. Section~\ref{sec:example} provides a simple computational example.
\section{Background}\label{sec:background}
\subsection{Notation}
Most notation is from \cite{Zhou:1996}.
For a matrix $ M\in\C^{m\times n} $, $ M' $ denotes the transpose and $ M^* $ denotes the conjugate transpose, respectively. The para-Hermitian conjugate of $ G\in\RH_\infty $, denoted as $ G^\sim $, is defined by $ G^\sim(s):=G(-s)' $. $ \mathcal{C}^k $ denotes the set of vector signals on $ \R $ which have a $ k $th-order derivative. $ \mathcal{L}_2 $ is the space of square-integrable vector signals on $ \R_{\geq 0} $, i.e., $ \|f\|:=(\int_{0}^{\infty}|f(t)|^2\,dt)^{1/2}<\infty $, where $ |x| $ denotes the Euclidean norm of a vector $ x $. The causal truncation $ (\cdot)_T $ is defined by $ (f)_T(t):=f(t) $ for $ t\leq T $ and 0 otherwise. $ \mathcal{L}_{2e} $ is the space of vector signals on $ \R_{\geq 0} $ whose causal truncation belongs to $ \mathcal{L}_2 $. Let $ ARE(A,B,Q,R,S) $ denote the following Algebraic Riccati Equation (ARE):
\begin{equation}
A'X+XA-(XB+S)R^{-1}(XB+S)'+Q=0.
\end{equation}
The stabilizing solution $ X\in\Sb $, if it exists, is such that $ A-BR^{-1}(XB+S)' $ is Hurwitz.
A Riemannian metric on $ \R^n $ is a symmetric positive definite matrix function $ M(x) $, smooth in $ x $, which defines an inner product $ \langle\delta_1,\delta_2 \rangle_x:=\delta_1'M(x)\delta_2 $ for any two tangent vectors $ \delta_1,\delta_2 $. A metric is called \emph{uniformly bounded} if there exist positive constants $ a_2\geq a_1 $ such that $ a_1I\leq M(x)\leq a_2I,\,\forall x\in\R^n $.
$ \Gamma(x_0,x_1) $ denotes the set of piecewise smooth paths $ c:[0,1]\rightarrow\R^n $ with $ c(0)=x_0 $ and $ c(1)=x_1 $. The length of a curve $ c(\cdot) $ is defined by
\begin{equation}
\ell(c):=\int_{0}^{1}\sqrt{\langle c_s,c_s \rangle_{c(s)}}ds
\end{equation}
where $ c_s=\partial c/\partial s $. The \emph{geodesic} $ \gamma(\cdot) $ denotes a path with the minimal length, i.e., $ \ell(\gamma)=\inf_{c\in\Gamma(x_0,x_1)}\ell(c) $.
For more details see \cite{Do-Carmo:1992}.
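For illustration, the length functional can be evaluated numerically; in the sketch below the straight path between $x_0$ and $x_1$ is measured under a toy state-dependent metric (our choice, not one derived later in the paper):
\begin{verbatim}
import numpy as np

# Sketch: Riemannian length of the straight path c(s) = (1-s)x0 + s*x1
# under an illustrative metric M(x); geodesics minimize this functional.
M = lambda x: np.diag([1.0 + x[0]**2, 1.0])

def length(x0, x1, N=2001):
    s = np.linspace(0.0, 1.0, N)
    cs = x1 - x0                              # constant path tangent
    vals = [np.sqrt(cs @ M((1 - si)*x0 + si*x1) @ cs) for si in s]
    return float(np.mean(vals))               # uniform-grid quadrature

print(length(np.array([0.0, 0.0]), np.array([1.0, 1.0])))
\end{verbatim}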
\subsection{Differential $ L_2 $ Gain via Contraction Analysis}
Consider a nonlinear system of the form
\begin{equation}\label{eq:system}
\dot{x}=f(x,d), \quad e=h(x,d)
\end{equation}
where $ x(t)\in\R^{n_x} $, $ d(t)\in\R^{n_d} $, $ e(t)\in\R^{n_e} $ are the state, disturbance and performance output, respectively. For simplicity, $ f,h $ are assumed to be smooth and time-invariant.
Instead of investigating stability properties of one solution $ (x,d,e)(\cdot) $, we are interested in a one-parameter solution family defined as follows \cite{Forni:2013}. In what follows we denote $ \rho:=(x,d,e) $.
\begin{dfn}
A set of solutions $ \Omega_{\rho}=\{\overline{\rho}(\cdot,s)\}_{s\in\R} $
where $ \overline{\rho}(\cdot,0)=\rho(\cdot) $ is said to be a \emph{one-parameter solution family} to \eqref{eq:system} if $ \overline{\rho}(\cdot,\cdot)\in\mathcal{C}^2 $ for almost all $ t\in\R_{\geq 0},\,s\in\R $, and $ \overline{\rho}(\cdot,s) $ satisfies \eqref{eq:system} for all $ s\in\R $.
\end{dfn}
Compared to the time $ t $, the variable $ s $ can be seen as a spatial parameter. The partial derivative $ \frac{\partial \overline{x}}{\partial t} $ at each $ t $ characterizes the time evolution of a curve $ \overline{x}(t,\cdot) $ subject to the nonlinear model \eqref{eq:system}. The partial derivative $ \frac{\partial \overline{x}}{\partial s} $ at each $ s $ characterizes the behavior of the tangent vector that moves along the solution $ \overline{x}(\cdot,s) $, as shown in Fig.~\ref{fig:solution-familiy}. This behavior can be modeled by the following \emph{differential dynamics} (\cite{Lohmiller:1998}):
\begin{equation}\label{eq:diff-dynamics}
\begin{split}
\dot{\delta}_x&=A(x,d)\delta_x+B(x,d)\delta_d:=\frac{\partial f}{\partial x}\delta_x+\frac{\partial f}{\partial d}\delta_d \\ \delta_e&=C(x,d)\delta_x+D(x,d)\delta_d:=\frac{\partial h}{\partial x}\delta_x+\frac{\partial h}{\partial d}\delta_d
\end{split}
\end{equation}
where $ (\delta_x,\delta_d,\delta_e)=\left(\frac{\partial \overline{x}}{\partial s},\frac{\partial \overline{d}}{\partial s},\frac{\partial \overline{e}}{\partial s}\right) $.
\begin{figure}[!bt]
\centering
\includegraphics[width=0.55\linewidth]{fig-solution-family}
\caption{Geometrical interpretation of the differential dynamics based on a one-parameter solution family.}\label{fig:solution-familiy}
\end{figure}
The performance condition for nonlinear systems usually involves a bound on the $ L_2 $ gain from $ d $ to $ e $. There are several different formulations of $ L_2 $ gain. Since the standard $ L_2 $ gain does not consider continuity of input-output stability, this work investigates the following stronger notions.
\begin{dfn}
System \eqref{eq:system} is said to have an \emph{incremental} $ L_2 $-gain bound of $ \alpha>0 $ if, for all pairs of solutions with initial conditions $ x_0(0),x_1(0) $ and inputs $ d_0,d_1\in\Lc_{2e} $, and for all $ T>0 $, solutions exist and
\begin{equation}\label{eq:global-L2}
\|(e_1-e_0)_T\|^2\leq \alpha^2\|(d_1-d_0)_T\|^2+b(x_0(0),x_1(0))
\end{equation}
for some function $ b(x_0,x_1)\geq 0 $ with $ b(x,x)=0 $.
\end{dfn}
\begin{dfn}
System \eqref{eq:system} is said to have a \emph{global} $ L_2 $-gain bound of $ \alpha>0 $ if there exists a unique reference solution $ \rho^*(\cdot) $ such that, for any initial condition $ x(0) $ and any input with $ d-d^*\in\Lc_{2e} $, and for all $ T>0 $, solutions exist and
\begin{equation}
\|(e-e^*)_T\|^2\leq \alpha^2\|(d-d^*)_T\|^2+b(x(0),x^*(0))
\end{equation}
for some function $ b(x_0,x_1)\geq 0 $ with $ b(x,x)=0 $.
\end{dfn}
Note that the global $ L_2 $ gain is stronger than the equilibrium-independent counterpart \cite{Summers:2013}, but weaker than the incremental one, since it only requires convergence between one particular pair of trajectories (i.e., the system solution and the reference trajectory) rather than between arbitrary pairs, as shown in Fig.~\ref{fig:stability}.
\begin{figure}[!bt]
\centering
\begin{tabular}{cc}
\includegraphics[width=0.4\linewidth]{fig-incremental} &
\includegraphics[width=0.5\linewidth]{fig-universal} \\
(a) incremental & (b) global
\end{tabular}
\caption{Different notions of reference-independent stability.}\label{fig:stability}
\end{figure}
We also work on the \emph{differential} $ L_2 $ gain, that is, system \eqref{eq:system} is said to have a differential $ L_2 $-gain bound of $ \alpha $ if for all $ T>0 $
\begin{equation}
\|(\delta_e)_T\|^2\leq \alpha^2\|(\delta_d)_T\|^2+\beta(x(0),\delta_x(0))
\end{equation}
where $ \beta(x,\delta_x)\geq 0 $ with $ \beta(x,0)=0 $ for all $ x $. Applying standard results \cite[Th. 3.1.11]{Schaft:2017} to the joint system \eqref{eq:system}, \eqref{eq:diff-dynamics} gives that a sufficient, and in some cases necessary, condition for the differential $ L_2 $-gain bound is the existence of a differential storage function $ V(x,\delta_x)\geq 0 $ with $ V(x,0)=0 $ such that
\begin{equation}\label{eq:diff-dissipation}
\dot{V}(x,\delta_x)\leq \alpha^2|\delta_d|^2-|\delta_e|^2.
\end{equation}
Here we are interested in the differential storage function induced by a Riemannian metric, i.e., $ V(x,\delta_x)=\delta_x'\Mc(x)\delta_x $. Other, more general metrics are possible (see \cite{Forni:2013,Chaffey:2018}); however, we focus on the Riemannian case since the analysis problem then admits a convex formulation.
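To illustrate the resulting test, the sketch below searches for a constant metric $\Mc$ by imposing the quadratic-form version of \eqref{eq:diff-dissipation} as an LMI on a grid of states. The Jacobian data are placeholders, a constant metric makes $\dot{\Mc}=0$, and an SDP-capable solver (e.g., SCS) is assumed to be installed:
\begin{verbatim}
import numpy as np
import cvxpy as cp

n, m, alpha = 2, 1, 2.0            # states, disturbances, gain bound

def jacobians(x):
    # Placeholder Jacobians of (f, h); replace with the actual model.
    A = np.array([[-1.0, 2.0 * x[1]], [0.0, -1.0]])
    B = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 0.0]])
    D = np.array([[0.0]])
    return A, B, C, D

M = cp.Variable((n, n), symmetric=True)
cons = [M >> 1e-6 * np.eye(n)]
for x1 in np.linspace(-1.0, 1.0, 11):
    for x2 in np.linspace(-1.0, 1.0, 11):
        A, B, C, D = jacobians((x1, x2))
        # V = dx' M dx gives Vdot <= a^2|dd|^2 - |de|^2 iff this holds.
        lmi = cp.bmat([[A.T @ M + M @ A + C.T @ C, M @ B + C.T @ D],
                       [B.T @ M + D.T @ C,
                        D.T @ D - alpha**2 * np.eye(m)]])
        cons.append(lmi << 0)
cp.Problem(cp.Minimize(0), cons).solve()
print(M.value)
\end{verbatim}
Feasibility on a finite grid is of course only a heuristic; for polynomial data the pointwise condition can be certified with SOS programming, as discussed later.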
\subsection{Integral Quadratic Constraints}
In the IQC framework, the uncertainty (either a known but ``difficult'' component or unknown dynamics) is described by an operator $ \varDelta $, which refers to an input-output system
\begin{equation}
w=\varDelta(v).
\end{equation}
An operator $ \varDelta $ is said to be \emph{causal} if $ \varDelta_T(v):=(w)_T=\varDelta((v)_T) $ for all $ T\geq 0 $. It is said to be \emph{bounded} if there exists $ C_1 $ such that $ \|\varDelta_T(v)\|\leq C_1\|(v)_T\| $ for all $ T>0 $ and $v\in \mathcal{L}_{2e} $. The underlying idea of IQC analysis is to replace $ \varDelta $ with a constraint, or set of constraints, that it is known to satisfy and which is convenient for analysis via LMIs. In particular, the constraints are frequency-weighted integral inequalities of the form below.
\begin{dfn}\label{dfn:iqc-frequency}
Let $ \Pi=\Pi^\sim\in\RL_{\infty}^{(n_v+n_w)\times(n_v+n_w)} $ be given. Two signals $ v\in \Lc_2^{n_v} $ and $ w\in \Lc_2^{n_w} $ satisfy the \emph{frequency domain IQC} defined by the multiplier $ \Pi $ if
\begin{equation}\label{eq:iqc-frequency}
\int_{-\infty}^{\infty}
\begin{bmatrix}
\widehat{V}(j\omega) \\ \widehat{W}(j\omega)
\end{bmatrix}^*\Pi(j\omega)
\begin{bmatrix}
\widehat{V}(j\omega) \\ \widehat{W}(j\omega)
\end{bmatrix}d\omega \geq 0
\end{equation}
where $ \widehat{V} $ and $ \widehat{W} $ are Fourier transforms of $ v $ and $ w $. A bounded, causal operator $ \varDelta:\Lc_{2e}^{n_v}\rightarrow \Lc_{2e}^{n_w} $ satisfies the frequency domain IQC defined by $ \Pi $ if \eqref{eq:iqc-frequency} holds for all $ v\in \Lc_2^{n_v} $ and $ w=\varDelta(v) $.
\end{dfn}
IQCs can also be expressed in the time domain via Parseval's theorem, and can be interpreted as comparing filtered versions of the input and output of $\varDelta$ as in Fig. \ref{fig:iqc}.
\begin{dfn}\label{dfn:iqc-time}
Let $ \Psi\in\RH_\infty^{n_z\times(n_v+n_w)} $ and $ M=M' $. Two signals $ v\in \Lc_{2}^{n_v},\,w\in \Lc_{2}^{n_w} $ satisfy the \emph{time domain IQC} defined by the multiplier $ \Psi $ and matrix $ M $ if the following inequality holds
\begin{equation}\label{eq:iqc-time}
\int_{0}^{\infty}z'(t)Mz(t)dt\geq 0
\end{equation}
where $ z $ is the output of $ \Psi $ with the zero initial condition and inputs $ (v,w) $. A bounded, causal operator $ \varDelta $ satisfies the time domain IQC defined by $ (\Psi,M) $ if \eqref{eq:iqc-time} holds for all $ v\in \Lc_{2}^{n_v} $ and $ w=\varDelta(v) $.
\end{dfn}
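A time domain IQC of this form can be checked numerically by simulating the filter $\Psi$ on sampled signals and integrating the quadratic form. The sketch below uses the trivial factorization $\Psi=I$ and $M=\diag(1,-1)$, which encodes the unit gain bound $\|w\|\leq\|v\|$; the signals are purely illustrative:
\begin{verbatim}
import numpy as np
from scipy.signal import StateSpace, lsim

t = np.linspace(0.0, 10.0, 2001)
v = np.sin(t)                      # illustrative input to Delta
w = 0.5 * np.sin(t - 0.3)          # illustrative output of Delta

# Psi = I realized as a feedthrough-only state-space filter.
Psi = StateSpace(np.zeros((1, 1)), np.zeros((1, 2)),
                 np.zeros((2, 1)), np.eye(2))
M = np.diag([1.0, -1.0])           # encodes ||w|| <= ||v||

_, z, _ = lsim(Psi, np.column_stack([v, w]), t)
iqc = np.trapz(np.einsum('ti,ij,tj->t', z, M, z), t)
print(iqc >= 0.0)                  # soft IQC value on this horizon
\end{verbatim}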
The time domain IQC in \eqref{eq:iqc-time} is referred to as a \emph{soft IQC} in \cite{Megretski:1997}, and it is important to note that it assumes all signals are in $\Lc_2$. If the time domain constraint $ \int_{0}^{T}z'(t)Mz(t)dt\geq 0 $ holds for all $ T\geq 0 $, it is called a \emph{hard IQC}. The hard/soft property is not strictly inherent to the multiplier $ \Pi $ but instead depends on the non-unique factorization $ \Pi=\Psi^{\sim}M\Psi $. In particular, a key additional assumption on the multiplier $ \Pi $ is that it admits a $J$-spectral factorization, in which case a hard factorization can be constructed \cite{Seiler:2015}.
\begin{figure}[!bt]
\centering
\includegraphics[width=0.55\linewidth]{fig-iqc}
\caption{Graphical interpretation of the IQC.}\label{fig:iqc}
\end{figure}
\section{Differential IQC}\label{sec:DIQC}
The conventional IQC cannot be directly applied to contraction analysis since it is defined with respect to one preferred equilibrium. In this section, we will introduce the concept of differential IQC, which is used to replace the ``troublesome'' perturbation.
An operator $ \varDelta $ is said to be \emph{locally affine bounded} if there exist $ C_0,\,C_1\geq 0 $ such that
\begin{equation}
\|\varDelta_T(v_1)-\varDelta_T(v_2)\|\leq C_0+C_1\|(v_1-v_2)_T\|
\end{equation}
for all $ T>0 $ and $v_i\in \Lc_{2e} $. It is called \emph{locally bounded} if this holds with $ C_0=0 $.
For a locally bounded operator $ \varDelta $, its associated differential operator $ \partial\varDelta $ can be defined by
\begin{equation}
\delta_w=\partial\varDelta(v;\delta_v):=\limsup_{s\rightarrow 0^+}\frac{\varDelta(v+s\delta_v)-\varDelta(v)}{s}
\end{equation}
where $ v,\delta_v\in \Lc_{2e} $.
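Numerically, this directional derivative can be approximated by a one-sided difference quotient. A minimal sketch, using a hypothetical memoryless operator acting on sampled signals, is:
\begin{verbatim}
import numpy as np

def Delta(v):
    # Hypothetical memoryless, locally bounded operator.
    return np.tanh(v)

def dDelta(v, dv, s=1e-6):
    # One-sided difference quotient mirroring the limsup definition.
    return (Delta(v + s * dv) - Delta(v)) / s

t = np.linspace(0.0, 1.0, 200)
v, dv = np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)
dw = dDelta(v, dv)                 # ~ (1 - tanh(v)**2) * dv
\end{verbatim}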
The differential IQC is an integral quadratic constraint specified on $ (\delta_v,\delta_w) $. The following definition is given in frequency domain (the time-domain definition is omitted due to space restrictions, but it is similar to Definition~\ref{dfn:iqc-time}).
\begin{dfn}
An operator $ \varDelta: \Lc_{2e}^{n_v}\rightarrow \Lc_{2e}^{n_w} $ is said to satisfy the frequency-domain \emph{differential IQC} ($ \delta $-IQC) defined by the multiplier $ \Pi=\Pi^\sim\in\RL_{\infty}^{(n_v+n_w)\times(n_v+n_w)} $ if $ \varDelta $ is causal and locally bounded, and the following inequality holds
\begin{equation}\label{eq:iqc-diff}
\int_{-\infty}^{\infty}
\begin{bmatrix}
\hat{\delta}_v(j\omega) \\ \hat{\delta}_w(j\omega)
\end{bmatrix}^*\Pi(j\omega)
\begin{bmatrix}
\hat{\delta}_v(j\omega) \\ \hat{\delta}_w(j\omega)
\end{bmatrix}d\omega \geq 0
\end{equation}
where $ \hat{\delta}_v $ and $ \hat{\delta}_w $ are Fourier transforms of $ \delta_v \in \Lc_2^{n_v} $ and $ \delta_w=\partial\varDelta(v;\delta_v) \in \Lc_2^{n_w} $.
\end{dfn}
A similar definition can be given for incremental IQCs.
\begin{dfn}
A causal operator $ \varDelta $ satisfies the frequency-domain \emph{incremental IQC} defined by $ \Pi $ if for any pair of input-output trajectories $ (v_0,w_0)(\cdot) $ and $ (v_1,w_1)(\cdot) $ with $ d_v=v_1-v_0\in \Lc_2^{n_v} $ and $ d_w=\varDelta(v_1)-\varDelta(v_0)\in \Lc_2^{n_w} $, the following inequality holds
\begin{equation}\label{eq:iqc-incremental}
\int_{-\infty}^{\infty}
\begin{bmatrix}
\hat{d}_v(j\omega) \\ \hat{d}_w(j\omega)
\end{bmatrix}^*\Pi(j\omega)
\begin{bmatrix}
\hat{d}_v(j\omega) \\ \hat{d}_w(j\omega)
\end{bmatrix}d\omega \geq 0
\end{equation}
where $ \hat{d}_v $ and $ \hat{d}_w $ are Fourier transforms of $ d_v $ and $ d_w $. If the above condition only holds for a particular reference input (i.e., $ v_0=v^* $), the operator $ \varDelta $ satisfies a \emph{global IQC} defined by $ \Pi $.
\end{dfn}
Note that a global IQC is just a standard IQC \cite{Megretski:1997} with respect to a trajectory which is not necessarily at the origin.
For locally bounded operators, $ \delta $-IQCs and incremental IQCs are equivalent under a mild assumption on the multiplier, which is also important for hard factorization.
\begin{asmp}\label{asmp:1}
The multiplier $ \Pi $ satisfies $ \Pi_{vv}(j\omega)\geq 0 $ and $ \Pi_{ww}(j\omega)\leq 0 $ for all $ \omega\in\R\cup\{\infty\} $, where $ \begin{bmatrix}
\Pi_{vv} & \Pi_{vw} \\ \Pi_{vw}^\sim & \Pi_{ww}
\end{bmatrix} $ with $ \Pi_{vv}\in\RL_{\infty}^{n_v\times n_v} $ is a partition of the multiplier $ \Pi $.
\end{asmp}
\begin{prop}\label{prop:1}
Assume that Assumption~\ref{asmp:1} holds for the multiplier $ \Pi $. The locally bounded operator $ \varDelta $ satisfies the $ \delta $-IQC induced by $ \Pi $ for all one-parameter solution families $ \Omega_{\varrho} $ with $ \varrho:=(v,w) $ if and only if it satisfies the incremental IQC induced by $ \Pi $.
\end{prop}
\begin{proof}[Proof of Proposition~\ref{prop:1}]
``{\bf if}": For any $ v\in \Lc_{2e}^{n_v} $ and $ \delta_v\in \Lc_2^{n_v} $, we take $ v_0=v,w_0=w $ and $ v_1=v+s\delta_v $, $ w_1=\varDelta(v_1) $. For a sufficiently small $ s $, we have $ d_w=s\delta_w=s\partial \varDelta(v;\delta_v) $. Since $ \varDelta $ is locally bounded, there exists a set-valued map $ D_\varDelta $ such that $ \partial\varDelta(v;\delta_v)= D_\varDelta(v)\delta_v $. Condition \eqref{eq:iqc-diff} follows by substituting $ d_v=s\delta_v $ and $ d_w=sD_\varDelta(v)\delta_v $ into \eqref{eq:iqc-incremental}.
``{\bf only if}": For any pair of input-output behaviors $ (v_0,w_0)$ and $(v_1,w_1) $ satisfying $ d_v, d_w\in \Lc_2 $, by taking the parameterization $ v(s)=(1-s)v_0+sv_1 $ and $ w(s)=\varDelta(v(s)) $, we have $ \delta_v(s)=d_v\in \Lc_2 $ and $ \delta_w(s)=\partial\varDelta(v(s),\delta_v(s))\in \Lc_2 $ as $ \partial \varDelta $ is bounded. Integration of \eqref{eq:iqc-diff} over $ s\in[0,1] $ yields (the dependence on $ j\omega $ is omitted for simplicity):
\begin{equation}\label{eq:delta-Delta}
\begin{split}
0\leq& \int_{0}^{1}\int_{-\infty}^{\infty}
\begin{bmatrix}
\hat{\delta}_v(s) \\ \hat{\delta}_w(s)
\end{bmatrix}^*\Pi
\begin{bmatrix}
\hat{\delta}_v(s) \\ \hat{\delta}_w(s)
\end{bmatrix}d\omega ds \\
=&\int_{-\infty}^{\infty}\int_{0}^{1}
\begin{bmatrix}
\hat{d}_v \\ \hat{\delta}_w(s)
\end{bmatrix}^*
\begin{bmatrix}
\Pi_{vv} & \Pi_{vw} \\ \Pi_{vw}^\sim & \Pi_{ww}
\end{bmatrix}
\begin{bmatrix}
\hat{d}_v \\ \hat{\delta}_w(s)
\end{bmatrix}ds d\omega \\
=&\int_{-\infty}^{\infty}
\begin{bmatrix}
\hat{d}_v \\ \hat{d}_w
\end{bmatrix}^*
\begin{bmatrix}
\Pi_{vv} & \Pi_{vw} \\ \Pi_{vw}^\sim & 0
\end{bmatrix}
\begin{bmatrix}
\hat{d}_v \\ \hat{d}_w
\end{bmatrix}ds d\omega +\\
&\; \int_{-\infty}^{\infty}\int_{0}^{1}\hat{\delta}_w^*(s)\Pi_{ww}\hat{\delta}_w(s)dsd\omega.
\end{split}
\end{equation}
Since $ \Pi_{ww}(j\omega)=\Pi_{ww}^\sim(j\omega)\leq 0 $, it admits a factorization $ \Pi_{ww}=-\Lambda_w^\sim\Lambda_w $. From the Cauchy-Schwarz inequality, we have
\begin{equation}\label{eq:delta-Delta-2}
\begin{split}
&\int_{0}^{1}\hat{\delta}_w^*(s)\Pi_{ww}\hat{\delta}_w(s)ds=-\int_{0}^{1}|\Lambda_w\hat{\delta}_w(s)|^2ds \\
&\quad \leq -\left|\int_{0}^{1}\Lambda_w\hat{\delta}_w(s)ds\right|^2
=\hat{d}_w^*\Pi_{ww}\hat{d}_w.
\end{split}
\end{equation}
The incremental IQC condition \eqref{eq:iqc-incremental} follows from \eqref{eq:delta-Delta}-\eqref{eq:delta-Delta-2}.
\end{proof}
From the above proposition, we have the following corollaries.
\begin{cor}\label{coro:2}
If the operator $ \varDelta $ is linear, then the $ \delta $-IQC induced by $ \Pi $ is equivalent to the IQC induced by $ \Pi $.
\end{cor}
\begin{cor}
If the operator $ \varDelta $ satisfies the $ \delta $-IQC induced by $ \Pi $ for all $ \Omega_{\varrho^*} $ with a fixed $ \varrho^* $, then it satisfies the global IQC induced by $ \Pi $.
\end{cor}
For general locally affine bounded operators, the corresponding differential operators may not be well-defined since even a small input may produce a large output (e.g., the relay operator $ w=\sgn(v) $); therefore, one cannot find any $ \delta $-IQC. However, it may be possible to construct artificial feedback loops encapsulating those operators \cite{Rantzer:1997}, which enables $\delta$-IQC analysis. For example, consider the following uncertain system
\begin{equation}\label{eq:encapsulation}
\dot{y}=-ay-\varDelta_f(y)+v
\end{equation}
where $ \varDelta_f $ is a viscous friction operator defined by $ \varDelta_f(y)=\sgn(y)(b|y|+c) $ with $ a,b, c>0 $.
The operator $ w=\varDelta_f(y) $ is not locally bounded. However, a locally bounded operator can be constructed via the feedback encapsulation shown in Fig.~\ref{fig:encapsulation}, for which we can find a $\delta$-IQC.
\begin{prop}
The operator $\varDelta:v\mapsto y$ defined in \eqref{eq:encapsulation} satisfies a differential $L_2$ gain bound of $ \frac{1}{a+b}$.
\end{prop}
\begin{proof}
We sketch a proof based on the regularisation approach of \cite{fiore2016contraction}. The operator $ \varDelta:v\mapsto y $ is locally bounded. The following smooth system
$
\dot{y}=-(a+b)y-c\tanh(y/\epsilon)+v
$ with $ \epsilon>0 $ approaches \eqref{eq:encapsulation} when $ \epsilon\rightarrow 0 $. The differential dynamics of this approximation are
\begin{equation}
\dot{\delta}_y=-(a+b)\delta_y-c/\epsilon(1-\tanh^2(y/\epsilon))\delta_y+\delta_v.
\end{equation}
Since $ 1-\tanh^2(y/\epsilon)\geq 0 $, the above differential dynamics has an $ L_2 $-gain bound of $ \frac{1}{a+b} $ for all $ \epsilon>0 $. The claim follows by taking $ \epsilon\rightarrow 0 $ and invoking the results of \cite{fiore2016contraction}.
\end{proof}
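The bound can also be probed in simulation by integrating the smoothed system jointly with its differential dynamics along one trajectory and comparing $\|\delta_y\|$ with $\|\delta_v\|$. The sketch below uses illustrative parameter values and a moderate regularization $\epsilon$:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

a, b, c, eps = 1.0, 2.0, 0.5, 1e-2   # illustrative parameters
dv = lambda t: np.sin(t)             # differential input delta_v

def rhs(t, s):
    y, dy = s
    v = np.cos(3.0 * t)              # nominal input along one trajectory
    ydot = -(a + b) * y - c * np.tanh(y / eps) + v
    dydot = (-(a + b) * dy
             - (c / eps) * (1.0 - np.tanh(y / eps)**2) * dy + dv(t))
    return [ydot, dydot]

T = 50.0
sol = solve_ivp(rhs, (0.0, T), [0.2, 0.0], dense_output=True,
                max_step=0.01)
t = np.linspace(0.0, T, 20001)
y, dy = sol.sol(t)
gain = np.sqrt(np.trapz(dy**2, t) / np.trapz(dv(t)**2, t))
print(gain, 1.0 / (a + b))           # empirical ratio vs. the bound
\end{verbatim}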
\begin{figure}[!bt]
\centering
\includegraphics[width=0.7\linewidth]{fig-encapsulation}
\caption{Bounded feedback encapsulation of a viscous friction.}\label{fig:encapsulation}
\end{figure}
\section{Robust Contraction Analysis}\label{sec:main}
The main results of this paper establish robustness guarantees for a nominal nonlinear system in feedback with uncertainties satisfying differential IQCs.
\subsection{Problem Statement}
We consider a robust contraction analysis problem for uncertain nonlinear systems. Here the perturbed system is described by the feedback interconnection of a nominal nonlinear system $ G $ and an uncertainty $ \varDelta $ as shown in Fig.~\ref{fig:feedback}. This feedback interconnection with $ \varDelta $ wrapped around the top of $ G $ is denoted $ \Fc_u(G,\varDelta) $. The nominal system $ G $ is represented by
\begin{equation}\label{eq:nominal-sys}
\dot{x}=f(x,w,d),\; v=g(x,w,d),\; e=h(x,w,d)
\end{equation}
where $ x(t)\in\R^{n_x} $ is the state, $ w(t)\in\R^{n_w} $ and $ d(t)\in\R^{n_d} $ are inputs, and $ v(t)\in\R^{n_v} $ and $ e(t)\in\R^{n_e} $ are outputs. For simplicity, $ f,g,h $ are assumed to be smooth and time-invariant. Moreover, the following assumptions are made regarding $ G $ and $ \varDelta $.
\begin{figure}[!bt]
\centering
\includegraphics[width=0.45\linewidth]{fig-feedback}
\caption{Feedback interconnection}\label{fig:feedback}
\end{figure}
\begin{asmp}\label{asmp:2}
The nominal nonlinear system $ G $ has a differential $ L_2 $-gain bound from $ \col(w,d) $ to $ \col(v,e) $.
\end{asmp}
\begin{asmp}\label{asmp:3}
The uncertainty $ \varDelta $ satisfies a collection of frequency domain $ \delta $-IQCs defined by $ \{\Pi_k\}_{1\leq k\leq N} $ with $ \Pi_k\in \RL_{\infty}^{(n_v+n_w)\times(n_v+n_w)} $ satisfying Assumption~\ref{asmp:1}, denoted by $ \partial \varDelta\in \partial\mathbf{\Delta}(\Pi_1,\ldots,\Pi_N) $.
\end{asmp}
\begin{asmp}\label{asmp:4}
The overall uncertainty satisfies a differential $L_2$-gain bound, and has been normalized to satisfy $ \|\partial \varDelta\|\leq 1 $, so the first $ \delta $-IQC is defined by the multiplier $ \Pi_1=\diag(I_{n_v},-I_{n_w}) $.
\end{asmp}
All these assumptions are used to simplify the algorithm. From Assumption~\ref{asmp:2}, the nominal system has ``less troublesome'' nonlinear dynamics, since the differential $ L_2 $ gain implies bounded and continuous outputs with respect to inputs. The $ \delta $-IQCs in Assumption~\ref{asmp:3} are used to bound the differential input/output behavior of the perturbation $ \varDelta $. Assumptions~\ref{asmp:1} and \ref{asmp:4} are used to ensure that a ``combined'' multiplier $ \Pi_{\lambda}=\sum_{k=1}^{N}\lambda_k\Pi_k $ is a hard IQC and has a $ J $-spectral factorization for all $ \lambda\in\Lambda:=\{\lambda\in\R^{N}\mid \lambda_1>0,\,\lambda_k\geq 0,\, 2\leq k\leq N\} $. Similar assumptions are made in \cite{Pfifer:2016} for LPV robustness analysis.
\subsection{Robust Performance Condition}
In this section, a pointwise LMI is developed to verify the differential dissipation condition \eqref{eq:diff-dissipation} for the uncertain system $ \Fc_u(G,\varDelta) $. First, the differential dynamics of the nominal system $ G $ can be represented by
\begin{equation}\label{eq:delta-G}
\begin{bmatrix}
\dot{\delta}_x \\ \delta_v \\ \delta_e
\end{bmatrix}=
\begin{bmatrix}
A_x(\rho) & B_{xw}(\rho) & B_{xd}(\rho) \\
C_v(\rho) & D_{vw}(\rho) & D_{vd}(\rho) \\
C_e(\rho) & D_{ew}(\rho) & D_{ed}(\rho)
\end{bmatrix}
\begin{bmatrix}
\delta_x \\ \delta_w \\ \delta_d
\end{bmatrix}
\end{equation}
where $ A_x=\frac{\partial f}{\partial x} $, $ B_{xw}=\frac{\partial f}{\partial w} $, $ B_{xd}=\frac{\partial f}{\partial d} $, $ C_v=\frac{\partial g}{\partial x} $, $ D_{vw}=\frac{\partial g}{\partial w} $, $ D_{vd}=\frac{\partial g}{\partial d} $, $ C_e=\frac{\partial h}{\partial x} $, $ D_{ew}=\frac{\partial h}{\partial w} $ and $ D_{ed}=\frac{\partial h}{\partial d} $. Let $ (\Psi_k,M_k) $ be a factorization for $ \Pi_k $ and $ \Psi $ be the aggregated system of $ \{\Psi_k\}_{1\leq k\leq N} $, which yields a (minimal) state-space realization with differential dynamics as follows:
\begin{equation}\label{eq:delta-psi}
\begin{bmatrix}
\dot{\delta}_\psi \\ \delta_{z_k}
\end{bmatrix}=
\begin{bmatrix}
A_\psi & B_{\psi v} & B_{\psi w} \\
C_{z_k} & D_{z_k v} & D_{z_k w}
\end{bmatrix}
\begin{bmatrix}
\delta_\psi \\ \delta_v \\ \delta_w
\end{bmatrix},\; 1\leq k\leq N
\end{equation}
where $ \delta_\psi\in\R^{n_\psi} $ is the filter state, $ \delta_{z_k} $ is the output of the filter $ \Psi_k $ driven by the signals $ (\delta_v,\delta_w) $. From \eqref{eq:delta-G} and \eqref{eq:delta-psi}, we can obtain the extended system of $ \delta G $ and $ \Psi $ as follows
\begin{equation}\label{eq:extend-system-1}
\begin{bmatrix}
\dot{\delta}_\chi \\ \delta_{z_k} \\ \delta_e
\end{bmatrix}=
\begin{bmatrix}
\A(\rho) & \B_{w}(\rho) & \B_{d}(\rho) \\
\C_{z_k}(\rho) & \D_{z_kw}(\rho) & \D_{z_kd}(\rho) \\
\C_e(\rho) & \D_{ew}(\rho) & \D_{ed}(\rho)
\end{bmatrix}
\begin{bmatrix}
\delta_\chi \\ \delta_w \\ \delta_d
\end{bmatrix},\; 1\leq k\leq N
\end{equation}
where $ \chi=\col(x,\psi)\in\R^{n_x+n_\psi} $. The extended system can be expressed in terms of the state matrices in \eqref{eq:delta-G} and \eqref{eq:delta-psi}.
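Concretely, with the filter driven by $(\delta_v,\delta_w)$ and $\delta_v$ eliminated via \eqref{eq:delta-G}, the extended matrices follow from straightforward bookkeeping. A sketch of this assembly (names follow the text; numerical values would come from the model at hand) is:
\begin{verbatim}
import numpy as np

def extend(Ax, Bxw, Bxd, Cv, Dvw, Dvd, Ce, Dew, Ded,
           Apsi, Bpsiv, Bpsiw, Cz, Dzv, Dzw):
    # Eliminates delta_v = Cv dx + Dvw dw + Dvd dd from the filter.
    nx, npsi = Ax.shape[0], Apsi.shape[0]
    A = np.block([[Ax, np.zeros((nx, npsi))],
                  [Bpsiv @ Cv, Apsi]])
    Bw = np.vstack([Bxw, Bpsiv @ Dvw + Bpsiw])
    Bd = np.vstack([Bxd, Bpsiv @ Dvd])
    Czk = np.hstack([Dzv @ Cv, Cz])
    Dzkw = Dzv @ Dvw + Dzw
    Dzkd = Dzv @ Dvd
    Ce_ext = np.hstack([Ce, np.zeros((Ce.shape[0], npsi))])
    return A, Bw, Bd, Czk, Dzkw, Dzkd, Ce_ext, Dew, Ded
\end{verbatim}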
The following result establishes a pointwise LMI condition for differential dissipativity \eqref{eq:diff-dissipation} of the closed-loop system, and can be seen as an application of the method of \cite{Pfifer:2016} to the differential dynamics \eqref{eq:extend-system-1}.
\begin{prop}\label{prop:differential-L2-gain}
Let the nominal system $ G $ and the uncertainty $ \varDelta $ satisfy Assumptions~\ref{asmp:2}-\ref{asmp:4}. The feedback interconnection $ \Fc_u(G,\varDelta) $ has a differential $ L_2 $-gain bound of $ \alpha $ if there exists a smooth state-dependent matrix function $ P(x)=P'(x) $ and a multiplier coefficient vector $ \lambda\in\Lambda $ such that the following pointwise LMI (omitting the argument $ \rho $ for brevity)
\begin{equation}\label{eq:LMI-soft}
\begin{split}
&
\begin{bmatrix}
P\A+\A'P+\dot{P} & P\B_w & P\B_d \\
\B_w'P & 0 & 0 \\
\B_d'P & 0 & -\alpha^2I
\end{bmatrix}+
\begin{bmatrix}
\C_e' \\ \D_{ew}' \\ \D_{ed}'
\end{bmatrix}
\begin{bmatrix}
\C_e' \\ \D_{ew}' \\ \D_{ed}'
\end{bmatrix}' \\
&\quad +\sum_{k=1}^{N}\lambda_k
\begin{bmatrix}
\C_{z_k}' \\ \D_{z_kw}' \\ \D_{z_kd}'
\end{bmatrix}M_k
\begin{bmatrix}
\C_{z_k}' \\ \D_{z_kw}' \\ \D_{z_kd}'
\end{bmatrix}'<0
\end{split}
\end{equation}
holds for all $ \rho $.
\end{prop}
Note that if the nominal system is polynomial, the above pointwise LMI condition can be efficiently solved using SOS programming \cite{Parrilo:2003}.
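When gridding is preferred over SOS, the joint search for $P$ and $\lambda$ can be posed as a single semidefinite program over sampled values of $\rho$. The sketch below assumes a constant $P$ (so $\dot{P}=0$) and a user-supplied function returning the extended-system matrices; feasibility on a grid is a heuristic relaxation rather than a certificate:
\begin{verbatim}
import numpy as np
import cvxpy as cp

def solve_grid(data, grid, n, nw, nd, N, alpha):
    # data(rho) -> (A, Bw, Bd, Ce, Dew, Ded, iqcs); iqcs is a list of
    # tuples (Czk, Dzkw, Dzkd, Mk).  Names follow the text.
    P = cp.Variable((n, n), symmetric=True)
    lam = cp.Variable(N)
    cons = [lam >= 0, lam[0] >= 1e-6]
    for rho in grid:
        A, Bw, Bd, Ce, Dew, Ded, iqcs = data(rho)
        lhs = cp.bmat([[A.T @ P + P @ A, P @ Bw, P @ Bd],
                       [Bw.T @ P, np.zeros((nw, nw)),
                        np.zeros((nw, nd))],
                       [Bd.T @ P, np.zeros((nd, nw)),
                        -alpha**2 * np.eye(nd)]])
        H = np.hstack([Ce, Dew, Ded])      # performance-output rows
        lhs = lhs + H.T @ H
        for k, (Czk, Dzkw, Dzkd, Mk) in enumerate(iqcs):
            Hk = np.hstack([Czk, Dzkw, Dzkd])
            lhs = lhs + lam[k] * (Hk.T @ Mk @ Hk)
        cons.append(lhs << -1e-9 * np.eye(n + nw + nd))
    cp.Problem(cp.Minimize(0), cons).solve()
    return P.value, lam.value
\end{verbatim}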
The main part of the proof parallels Theorem 2 in \cite{Pfifer:2016}, since the evaluation of the differential dynamics \eqref{eq:delta-G} along any particular trajectory gives an LPV system. With this in mind, we simply sketch the proof here.
The key step is to prove that \eqref{eq:LMI-soft} is equivalent to a new LMI formulation which involves a single, hard $ \delta $-IQC and a new matrix function $ \tilde{P}(\chi)\geq 0 $ for all $ \chi $. First, the combined multiplier $ \Pi_\lambda $ has a factorization $ (\Psi_\lambda,M_\lambda) $ where $ \Psi_\lambda=\begin{bmatrix}(sI-A_\psi)^{-1}B_\psi \\ I\end{bmatrix} $ with $ B_\psi:=\begin{bmatrix} B_{\psi v} & B_{\psi w}\end{bmatrix} $ and
\begin{equation}
M_\lambda=\begin{bmatrix}
Q_\lambda & S_\lambda \\ S_\lambda' & R_\lambda
\end{bmatrix}:=\sum_{k=1}^{N}\lambda_k
\begin{bmatrix}
\C_{z_k}' \\ \D_{z_kw}' \\ \D_{z_kd}'
\end{bmatrix}M_k
\begin{bmatrix}
\C_{z_k}' \\ \D_{z_kw}' \\ \D_{z_kd}'
\end{bmatrix}'
\end{equation}
with $ Q_\lambda=Q_\lambda' $ and $ R_\lambda=R_\lambda' $.
From Assumptions~\ref{asmp:3}-\ref{asmp:4} and the definition of $ \Lambda $, we have $ \Pi_{\lambda,vv}(j\omega)>0 $ and $ \Pi_{\lambda,ww}(j\omega)<0 $ for all $ \omega\in\R\cup\{\infty\} $. The multiplier $ \Pi_{\lambda} $ admits a $ J $-spectral factorization $ (\widetilde{\Psi}_\lambda,\widetilde{M}_\lambda) $ with $ \widetilde{M}_\lambda=\diag(I_{n_v},-I_{n_w}) $ \cite[Lemma 4]{Seiler:2015}. The differential dynamics for the state-space realization of $ \widetilde{\Psi}_\lambda $ are
\begin{equation}\label{eq:delta-psi-lambda}
\begin{bmatrix}
\dot{\delta}_\psi \\ \delta_{z_\lambda}
\end{bmatrix}=
\begin{bmatrix}
A_\psi & B_{\psi v} & B_{\psi w} \\
{C}_{z_\lambda} & {D}_{z_\lambda v} & {D}_{z_\lambda w}
\end{bmatrix}
\begin{bmatrix}
\delta_\psi \\ \delta_v \\ \delta_w
\end{bmatrix}
\end{equation}
where $ C_{z_\lambda}=\widetilde{M}_\lambda (D_{z_\lambda}^{-1})'(B_\psi'X+S_\lambda) $ with $ X $ as the stabilizing solution to the $ ARE(A_\psi,B_\psi,Q_\lambda,S_\lambda,R_\lambda) $ and $ D_{z_\lambda}:=\begin{bmatrix}{D}_{z_\lambda v} & {D}_{z_\lambda w}\end{bmatrix} $ satisfying $ R_\lambda=D_{z_\lambda}'\widetilde{M}_\lambda D_{z_\lambda} $. Then the extended system of $ \delta G $ and $ \widetilde{\Psi}_\lambda $ can be represented by
\begin{equation}\label{eq:diff-dynamics-closedloop}
\begin{bmatrix}
\dot{\delta}_\chi \\ \delta_{z_\lambda} \\ \delta_e
\end{bmatrix}=
\begin{bmatrix}
\widetilde{\A} & \widetilde{\B}_{w} & \widetilde{\B}_{d} \\
\widetilde{\C}_{z_\lambda} & \widetilde{\D}_{z_\lambda w} & \widetilde{\D}_{z_\lambda d} \\
\C_e & \D_{ew} & \D_{ed}
\end{bmatrix}
\begin{bmatrix}
\delta_\chi \\ \delta_w \\ \delta_d
\end{bmatrix}.
\end{equation}
The formulation equivalent to LMI \eqref{eq:LMI-soft} can be written as:
\begin{equation}\label{eq:LMI-hard}
\begin{split}
&
\begin{bmatrix}
\widetilde{P}\widetilde{\A}+\widetilde{\A}'\widetilde{P}+\dot{\widetilde{P}} & \widetilde{P}\widetilde{\B}_w & \widetilde{P}\widetilde{\B}_d \\
\widetilde{\B}_w'\widetilde{P} & 0 & 0 \\
\widetilde{\B}_d'\widetilde{P} & 0 & -\alpha^2I
\end{bmatrix}+
\begin{bmatrix}
\C_e' \\ \D_{ew}' \\ \D_{ed}'
\end{bmatrix}
\begin{bmatrix}
\C_e' \\ \D_{ew}' \\ \D_{ed}'
\end{bmatrix}' \\
&\quad +
\begin{bmatrix}
\widetilde{\C}_{z_\lambda}' \\ \widetilde{\D}_{z_\lambda w}' \\ \widetilde{\D}_{z_\lambda d}'
\end{bmatrix}\widetilde{M}_\lambda
\begin{bmatrix}
\widetilde{\C}_{z_\lambda}' \\ \widetilde{\D}_{z_\lambda w}' \\ \widetilde{\D}_{z_\lambda d}'
\end{bmatrix}'<0
\end{split}
\end{equation}
where $ \widetilde{P}=P-\begin{bmatrix}0 & 0 \\ 0 & X\end{bmatrix}\geq 0 $ with $ X $ as the stabilizing solution to the $ ARE(A_\psi,B_\psi,Q_\lambda,S_\lambda,R_\lambda) $.
Multiplying \eqref{eq:LMI-hard} on the left and right by $ \col(\delta_\chi,\delta_w,\delta_d)' $ and $ \col(\delta_\chi,\delta_w,\delta_d) $, respectively, and integrating over $ t\in[0,T] $ gives
\begin{equation}\label{eq:diff-gain}
\widetilde{V}_T-\widetilde{V}_0+\int_{0}^{T}\delta_{z_\lambda}'\widetilde{M}_\lambda\delta_{z_\lambda}dt\leq \int_{0}^{T}(\alpha^2|\delta_d|^2-|\delta_e|^2)dt
\end{equation}
where $ \widetilde{V}_t $ denotes $\delta_\chi'(t)\Mc(\chi(t))\delta_\chi(t) $ with $ \Mc=\widetilde{P}+\epsilon I $. Here $ \epsilon $ is a small positive constant. Note that $ (\widetilde{\Psi}_\lambda, \widetilde{M}_\lambda) $ is a hard factorization, which implies $ \int_{0}^{T}\delta_{z_\lambda}'\widetilde{M}_\lambda\delta_{z_\lambda}dt\geq 0 $ for all $ T>0 $. Thus, \eqref{eq:diff-gain} is a differential dissipation condition which yields a differential $ L_2 $-gain bound of $ \alpha $ for $ \mathcal{F}_u(G,\varDelta) $.
There are two major benefits of applying contraction analysis to nonlinear systems: differential (local) stability implies incremental or global stability, and no knowledge of the reference trajectory is required. The following theorem shows that robust contraction analysis based on $ \delta $-IQCs preserves these two features.
\begin{thm}\label{thm:main}
Suppose that the conditions of Proposition~\ref{prop:differential-L2-gain} are satisfied. If the differential dissipation inequality \eqref{eq:diff-gain} holds for any solution $ \rho(\cdot) $ and any solution family $ \Omega_\rho $, then $ \Fc_u(G,\varDelta) $ has an incremental $ L_2 $-gain bound of $ \alpha $. If \eqref{eq:diff-gain} only holds for all $ \Omega_{\rho^*} $ with a certain $ \rho^*(\cdot) $, then $ \Fc_u(G,\varDelta) $ has a global $ L_2 $-gain bound of $ \alpha $.
\end{thm}
\begin{proof}
We prove the first claim that if \eqref{eq:diff-dissipation} is satisfied for all solution families, an incremental $ L_2 $-gain bound is guaranteed. For any pair of solutions $ \rho_0(\cdot) $ and $ \rho_1(\cdot) $, we consider a one-parameter solution family $ \Omega_{\rho_0} $ satisfying
\begin{equation}
\begin{split}
\overline{x}(0,s)&=c(s,x_0(0),x_1(0)) \\
\overline{d}(t,s)&=d_0(t)(1-s)+d_1(t)s \\
\overline{e}(t,s)&=h(\overline{x},\overline{d})(t,s)
\end{split}
\end{equation}
where $ c(\cdot,x_0(0),x_1(0)) $ is a smooth path joining $ x_0(0) $ and $ x_1(0) $. Thus, $ \Omega_{\rho_0} $ also satisfies $ \overline{\rho}(\cdot,1)=\rho_1(\cdot) $.
Substituting $ (\delta_x,\delta_d,\delta_e)=(\overline{x}_s,\overline{d}_s,\overline{e}_s) $ into \eqref{eq:diff-gain} and integrating over $ s\in[0,1] $ yields
\begin{equation}
\int_{0}^{T}\int_{0}^{1}|\overline{e}_s|^2dsdt\leq \alpha^2\int_{0}^{T}\int_{0}^{1}|\overline{d}_s|^2dsdt+\int_{0}^{1}c_s'\Mc c_sds.
\end{equation}
This gives the incremental $ L_2 $-gain condition \eqref{eq:global-L2} with $ b(x_0,x_1)=\ell^2(c(\cdot,x_0(0),x_1(0))) $ since $ \overline{d}_s=d_1-d_0 $ and $ |e_1-e_0|^2=|\int_{0}^{1}\overline{e}_sds|^2\leq \int_{0}^{1}|\overline{e}_s|^2ds $ (Cauchy-Schwarz inequality). The second claim follows directly by applying the same steps to the particular family $ \Omega_{\rho^*} $.
\end{proof}
\section{Illustrative Example}\label{sec:example}
A simplified model of surge-stall dynamics of a jet engine has the form of
\begin{equation}
\begin{bmatrix}
\dot{\psi} \\ \dot{\phi}
\end{bmatrix}=f(x)+Bu+Ed:=
\begin{bmatrix}
\phi+u \\ -\psi-\frac{3}{2}\phi^2-\frac{1}{2}\phi^3+d
\end{bmatrix}
\end{equation}
where $ x=(\phi,\psi) $ is the state, $ u $ is the control input, and $ d $ is the external disturbance. Here $ \phi $ is a measure of mass flow through the compressor, and $ \psi $ is a measure of pressure rise in the compressor. By implementing the CCM-based control synthesis approach (\cite{Manchester:2018}), we found a constant Riemannian metric and a differential controller
\begin{equation}
\delta_u=K(x)\delta_x
\end{equation}
which achieves a bounded global $ L_2 $ gain of $ \alpha=0.93 $ from $ d $ to $ e:=\phi+0.1u $ within the region $ |\phi|\leq 1 $. The control realization based on integration along geodesics is
\begin{equation}
u(t)=\kappa(x,x^*,u^*)(t):=u^*(t)+\int_{0}^{1}K(\gamma(t,s))\gamma_sds
\end{equation}
where $ (x^*,u^*)(\cdot) $ is a reference trajectory and $ \gamma(t,\cdot) $ is the geodesic joining $ x^*(t) $ to $ x(t) $. For a constant metric, there exists a unique geodesic $ \gamma(t,s)=x^*(t)(1-s)+x(t)s $.
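In this constant-metric case, evaluating the controller amounts to a quadrature of $K$ along the straight line from $x^*(t)$ to $x(t)$. A minimal sketch, with $K(\cdot)$ standing in for the gain function obtained from the CCM synthesis, is:
\begin{verbatim}
import numpy as np

def kappa(x, x_star, u_star, K, n_s=50):
    # Straight-line geodesic gamma(s) = (1-s) x* + s x, gamma_s = x - x*.
    s = np.linspace(0.0, 1.0, n_s)
    gamma_s = x - x_star
    du = np.array([K((1.0 - si) * x_star + si * x) @ gamma_s
                   for si in s])
    return u_star + np.trapz(du, s, axis=0)
\end{verbatim}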
The $ \delta $-IQC based contraction analysis problem we consider here is the performance degradation caused by uncertain input delays:
\begin{equation}
u(t)=\kappa(x,x^*,u^*)(t-\theta)
\end{equation}
where $ \theta\in[0,\Theta] $ with $ \Theta $ as a known bound. The perturbed nonlinear system can be represented as a feedback interconnection of a nominal model $ G $ and a perturbation $ \varDelta $:
\begin{equation}
\begin{split}
G:\; &
\begin{cases}
\dot{x}&=f(x)+B\kappa(x,x^*,u^*)+Bw+Ed \\
e&=Cx+D\kappa(x,x^*,u^*)+Dw \\
v&=\kappa(x,x^*,u^*)
\end{cases} \\
\varDelta:\; & \quad w=v(t-\theta)-v(t).
\end{split}
\end{equation}
Since the uncertainty $ \varDelta $ is a linear infinite-dimensional system, the differential operator $ \partial\varDelta $ exists and shares the same multipliers for $ \delta $-IQCs and conventional IQCs (Corollary~\ref{coro:2}). We can obtain a simple (though not exhaustive) set of $ \delta $-IQCs from \cite{Megretski:1997}:
\begin{equation}\label{eq:iqc}
\begin{split}
\left|\hat{\delta}_v(j\omega)\right|^2-\left|\hat{\delta}_v(j\omega)+\hat{\delta}_w(j\omega)\right|^2&\geq 0 \\
\eta(\Theta\omega)\left|\hat{\delta}_v(j\omega)\right|^2-\left|\hat{\delta}_w(j\omega)\right|^2&\geq 0
\end{split}
\end{equation}
where
\begin{equation}
\eta(\omega)=\frac{\omega^2+0.08\omega^4}{1+0.13\omega^2+0.02\omega^4}.
\end{equation}
The parameter-dependent LMI problem \eqref{eq:LMI-soft} was solved using SOS programming (YALMIP \cite{Lofberg:2004} with the SDP solver Mosek). The results are shown in Table~\ref{tab:1}. System performance deterioration is observed as the input delay increases.
\begin{table}[!bt]
\caption{Global $ L_2 $-gain bound $\alpha$ vs delay bound $ \Theta $ }\label{tab:1}
\centering
\begin{tabular}{|c|c|c|c|c|c|}
\hline \hline
$ \Theta $ & 0 & 0.04 & 0.08 & 0.12 & 0.16 \\
\hline
$ \alpha $ & 0.93 & 1.44 & 4.87 & 13.59 & 546.3 \\
\hline
\end{tabular}
\end{table}
\section{Conclusion}
This paper extended the time-domain IQC theorem to the contraction analysis framework. This is quite a natural development, since contraction is based on the study of differential dynamics, which can be interpreted as a special type of LPV system. However, by integrating along paths in state-space we obtain rigorous global results for a nonlinear uncertain system. Our approach leads to a computationally tractable condition to assess the robust contraction performance of a nominal nonlinear system interconnected with uncertainties described by differential IQCs. Future work will consider the synthesis of robust controllers for uncertain nonlinear systems using the approach of \cite{Manchester:2018}.
\bibliographystyle{IEEEtran}
\label{sec:intro}
Phase curve analysis has established itself as an important tool in characterizing exoplanets. It is commonly used to study close-in massive planets, opening up a window to independently characterize planetary parameters such as eccentricity, geometric albedo, longitudinal temperature distribution, cloud coverage, and planetary mass (for a detailed review, see \citet{shporer2017} or \citet{parmentier2017}). While infrared windows are often ideal for observing the thermal phase curves of planets, as planetary emission often peaks in such bandpasses \citep[e.g.][]{harrington2006, knutson2007, adams2018}, the limited number of space-based infrared facilities has been the primary bottleneck for such studies. This is where space-based photometric missions such as Convection, Rotation, and planetary Transits \citep[\textit{CoRoT};][]{baglin2002}, and \textit{Kepler} \citep{borucki2010} have come to play a crucial role through optical phase curves. While leading in the discovery of exoplanets, these missions have also significantly increased the number of phase curve detections. For instance, with \textit{Kepler}, photometric time series were obtained for thousands of targets with unprecedented precision. For reference, a combined differential photometric precision of $29.0$ ppm was reported for Kp = 11.5 to 12 mag stars in the long cadence mode \citep{gilliland2011}. This has already led to the discovery of robust phase curves for more than 20 transiting \textit{Kepler} planets \citep{borucki2009, welsh2010, batalha2011, barclay2012, esteves2015, angerhausen2015}. Additionally, the precise photometry has been used to estimate RV-independent masses \citep{faigler2013, esteves2013}, for validating the planetary nature of transiting objects \citep{quintana2013, ojeda2013}, as well as for the discovery of non-transiting systems \citep{faigler2012, millholland2017}.
Despite operating only on two reaction wheels, the revamped \textit{K2} mission is able to achieve photometric precision on par with the original \textit{Kepler} mission. This has been possible not only due to ingenious mission redesign \citep{howell2014} but also because of a host of tools that have been developed in response to the unique data challenges caused primarily by, but not limited to, the telescope drift \citep{vanderburg2014, luger2016}. Despite \textit{K2}'s relatively short baseline of $\sim$80 days (compared to the primary mission's four year baseline), its high precision observation of numerous bright targets (V$\sim$10 mag) translates into good opportunities for detecting phase curves.
Yet, there are unique challenges in studying phase curves with a `short' observation baseline. Not only does the shorter length of observation make it particularly difficult to disentangle the phase curve from quasi-periodic signals such as those arising from spot modulation, but non-periodic effects such as thermal settling or edge effects will disproportionately distort the final obtained signal. In the case of \textit{K2}, the aggressive post-processing of the data required to correct the systematics can also affect the astrophysical signal under consideration. Moreover, there are gaps in our understanding of the physics behind phase curves. Intriguing questions surrounding the existence and the cause of the third harmonics observed in systems such as HAT-P-7b and Kepler-13Ab still elude a clear explanation \citep{shporer2014, esteves2015, cowan2017}. A handful of optical phase curves have been observed with significant asymmetries, which have been attributed to scattering due to inhomogeneous clouds \citep{demory2013, shporer2015}. Meanwhile, \citet{armstrong2016} measured temporal variations of the optical phase curve of HAT-P-7b, which suggests climatic variability in planets, thereby undermining the classical picture of a consistent signal present throughout the time series. Similarly, given the small signal amplitude, correlated noise as well as dilution can dramatically affect the conclusions we derive from phase curves \citep[see discussions surrounding Kepler-91b in ][]{esteves2013,lillobox2014}. This all points to the complex world of planetary atmospheres hidden under the simple phase curve models, robust characterization of which would require more precise data.
Despite these challenges, there are already three reported planetary phase curves in \textit{K2} data: Qatar~2b \citep{dai2017b}, K2-141b \citep{malavolta2018}, and WASP-104b \citep{mocnik2018}. Hot Jupiters like Qatar-2b and WASP-104b have light curves exhibiting ellipsoidal variation and Doppler boosting consistent with their radial velocity (RV) based masses. On the other hand, K2-141b, an ultra short period super-Earth, has a measurable phase curve dominated by reflective and thermal components. As more than 350 planets have been discovered by \textit{K2}, we have undertaken a project to systematically search for phase curves among the selected \textit{K2} light curves where phase curve detection is feasible.
In $\S$2 of this work, we introduce the process used to screen targets for phase curves among the \textit{K2} planets. In $\S$3, we present the pipeline used to process the data, including a discussion of the different filtering processes, and in $\S$4 we perform a signal injection and retrieval test to probe the strengths and weaknesses of our pipelines. In $\S$5, we discuss the model framework used for the phase curve and the fitting procedures, whereas the results are presented in $\S$6. This is followed by the results on secondary eclipses in $\S$7. In $\S$8 and $\S$9, we present the scientific insights gained through our work, which is followed by the conclusion. We present the light curves used in our data analysis in the Appendix.
\section{Potential Candidates}
For this study, we have only considered the confirmed exoplanets. Our search included 382 exoplanets which were observed by \textit{K2}, all of which were catalogued in the NASA Exoplanet Archive\footnote{\href{https://exoplanetarchive.ipac.caltech.edu}{https://exoplanetarchive.ipac.caltech.edu}} as of December 18, 2018 with the \textit{K2} flag. While the \textit{K2} mission itself has come to an end, the data from the mission is still being processed and discovery numbers are expected to increase as candidates continue to be validated with follow-up observations. Phase curve studies, such as the analysis presented in this paper, can also be applied to other photometric exoplanet surveys such as the Transiting Exoplanet Survey Satellite \citep[\textit{TESS};][]{ricker2015}, which is expected to find as many as $\sim$10$^4$ planets primarily in short period orbits \citep{huang2018}.
As the possibility of detecting phase curves primarily boils down to the precision of the light curve, we select suitable candidates by estimating the magnitude of the combined signal against the obtainable precision of the light curve. We use the parameters recorded in the database to estimate the equivalent amplitude of the phase curve signal. When some of the values were missing, estimates were made based on other available parameters of the planet. Using the combined amplitude of all four different effects, i.e., the reflective, thermal, ellipsoidal and Doppler components, we calculate the expected signal-to-noise ratio using an estimator for the precision of the light curve as below:
\begin{equation}
\begin{aligned}
\label{eqn:CalcAmp}
\mathrm{SNR} &= \frac{{((A_{Re\!f}+A_{Th})^2+A_{Ell}^2+A_{Dop}^2})^{\frac{1}{2}}\cdot N^{\frac{1}{2}}}{\sqrt{2}\sigma} \\
&= \frac{ A_{Eqv} \cdot N^{\frac{1}{2}}}{\sqrt{2} \sigma}
\end{aligned}
\end{equation}
where $N$ is the number of observed photometric points, set to 3500 (roughly one point per 30-minute bin over an 80-day observation period), and $\sigma$ is the precision expected to be determined by the brightness of the target in the \textit{Kepler} bandpass using pre-flight estimates. The equivalent amplitude is considered to be the amplitude of the sinusoidal signal which has power equivalent to the combined elements of the phase curve:
\begin{equation}
\begin{aligned}
\label{eqn:EqvAmp}
A_{Eqv} &= ((A_{Re\!f}+A_{Th})^2 + A_{Ell}^2 + A_{Dop}^2)^{\frac{1}{2}}
\end{aligned}
\end{equation}
To estimate the reflective component ($A_{Re\!f}$), we assume a geometric albedo ($A_g$) of 0.4 and evaluate it as follows:
\begin{equation}
A_{Re\!f} = A_g \left(\frac{R_p/R_*}{a/R_*}\right)^2,
\end{equation}
where $R_p/R_*$ is the scaled radius of the planet, and $a/R_*$ is the scaled semi-major axis. Similarly, the thermal variation ($A_{Th}$) is calculated as:
\begin{equation}
A_{Th} = \left(\frac{R_p}{R_*}\right)^2 \frac{\int B(T_{Day}) R(\lambda) d \lambda}{\int B(T_{*}) R(\lambda)d \lambda},
\end{equation}
where $B(T)$ is Planck's black-body radiation law corresponding to temperature $T$, $ R(\lambda$) is the response function of \textit{Kepler/K2}, and $T_{Day}$ is the day-side temperature of the planet, which is estimated as follows:
\begin{equation}
T_{Day} = T_{e\!f\!f}\sqrt{\frac{1}{a/R_*}}\left[f(1-A_B)\right]^{\frac{1}{4}} ,
\label{eqn:lopez2007}
\end{equation}
where $T_{e\!f\!f}$ is the effective temperature of the host star, $A_B$ is the Bond albedo set at 0.6 following a Lambertian sphere relation ($A_B$ = $\frac{3}{2} A_g$), and $f$ is a proxy variable for re-circulation set at 2/3 corresponding to the case where only the day-side is re-radiating \citep{lopez2007}.
\begin{figure*}[ht]
\includegraphics[width=0.97\textwidth]{Histogram.pdf}
\caption{\label{fig:best_candidates} Expected signal amplitude ($A_{eqv}$) versus the \textit{Kepler} magnitude for the \textit{K2} detected planets. The red region contains the planets for which the expected detection SNR is less than 3$\sigma$. Out of 382 planets, 52 could have a detectable phase signal above the 3$\sigma$ level and are listed in \autoref{table:best_candidates}. Among these, 9 planets have detected phase curves, all of which are drawn in colors other than grey to improve the readability of the graph.}
\end{figure*}
\begin{deluxetable*}{lcccccccccccc}
\tablecaption{\label{table:best_candidates} All \textit{K2} targets analyzed for the phase curves with various relevant parameters.}
\tablehead{
\colhead{\textbf{Identifier}}&
\colhead{\textbf{EPIC ID}}&
\colhead{\textbf{Kp} (Mag)} &
\colhead{\textbf{Campaign}} &
\colhead{ \textbf{Period} (Days)}&
\colhead{\textbf{$Rp/R_{*}$}}&
\colhead{ \textbf{$A_{Re\!f}$}}&
\colhead{\textbf{$A_{Th}$}}&
\colhead{\textbf{$A_{Ell}$}} &
\colhead{\textbf{$A_{Dop}$}} &
\colhead{\textbf{$A_{Eqv}$}} &
\colhead{\textbf{SNR}}
}
\startdata
\textbf{K2-31b} & 204129699 & 10.6 & 2 & 1.258 & 0.135 & 199.2 & 1.4 & 11.9 & 4.7 & 201 & 228.9\\
\rowcolor{Gray}
WASP-85 Ab & 201862715 & 10.3 & 1 & 2.656 & 0.163 & 144.5 & 0.80 & 2.2 & 2.1 & 145 & 200.2\\
K2-237b & 229426032 & 11.5 & 11& 2.181 & 0.118 & 184.1 & 5.8 & 8.8 & 2.4 & 190 & 132.9\\
\rowcolor{Gray}
HAT-P-56b & 202126852 & 10.9 & 0 & 2.791 & 0.105 & 109.5 & 3.0 & 7.0 & 2.8 &113 &109.3 \\
\textbf{WASP-104b$^{a}$} & 248662696 & 11.6 & 14 & 1.755 & 0.121 & 138.4 & 0.9 & 5.8 & 2.7 &140 & 91.9\\
\rowcolor{Gray}
\textbf{QATAR-2b$^{b}$} & 21275629 & 13.0 & 6 & 1.337 & 0.162 & 247.6 & 0.4 & 18.5 & 8.3 & 249 &65.1\\
K2-183b & 211562654 & 12.8 & 5 & 0.469 & 0.027 & 152.8 & 50.6 & 1.9 & 0.0 & 203 & 63.2\\
\rowcolor{Gray}
WASP-75b & 206154641 & 11.8 & 3& 2.484 & 0.103 &103.8 & 1.7 & 4.2 & 1.7 & 106 & 61.8 \\
WASP-118b & 220303276 & 10.9 & 8 &4.046 & 0.082 & 59.9 & 1.2 & 1.5 & 0.6 & 61 & 58.1\\
\rowcolor{Gray}
K2-260 b & 246911830 & 12.5 & 13&2.627 & 0.097 & 135.3 & 5.3 & 7.9 & 1.9 & 141 & 52.9\\
K2-34b & 212110888 & 11.4 & 5, 16 &2.996 & 0.088 & 65.5 & 0.9 & 6.3 & 2.5 & 67 &47.3\\
\rowcolor{Gray}
\textbf{K2-141b$^{c}$} & 246393474 & 10.6 & 12 &0.280 &0.020 & 31.6 & 2.4 & 2.8 & 0.1 & 34 &38.6 \\
\textbf{HATS-9b} & 217671466 & 13.1 & 7 &1.915 & 0.083 & 145.5 & 3.7 & 13.6 & 1.8 &150 & 37.0\\
\rowcolor{Gray}
WASP-28b & 246375295 & 11.9 & 12 & 3.407 & 0.116 & 69.7 & 0.4 & 1.5 & 1.4 &70 & 36.9\\
K2-266b & 248435473 & 11.4 & 14 & 0.659 & 0.043 & 46.2 & 0.4 & 1.1 & 0.2 &47 & 34.2\\
\rowcolor{Gray}
WASP-55b & 212300977 & 11.7 & 6 & 4.466 & 0.125 & 52.4 & 0.1 & 0.5 & 0.8 & 53 &32.2\\
\textbf{K2-107b} & 216468514 & 12.8 & 7 &3.314 & 0.083 & 83.3 & 1.8 & 4.1 & 1.1 & 85 &26.6\\
\rowcolor{Gray}
WASP-47b & 206103150 & 11.8 & 3 & 4.161 & 0.102 & 44.1 & 0.1 & 1.6 & 1.8 & 44 & 25.9\\
\textbf{HATS-12b} & 218131080 & 12.7 & 7 & 3.143 & 0.063 & 73.2 & 4.2 & 17.8 & 2.8 & 79 &25.6\\
\rowcolor{Gray}
WASP-107b & 228724232 & 11.2 & 10 &5.721 & 0.145 & 25.5 & 0.0 & 0.0 & 0.3 & 26 & 20.4\\
K2-29b & 211089792 & 12.9 & 4 & 3.259 & 0.142 & 72.9 & 0.7 & 0.9 & 1.4 & 73 &20.4\\
\rowcolor{Gray}
HD 3167b & 220383386 & 9.0 & 8 & 0.957 & 0.017 & 7.3 & 1.4 & 0.4 & 0.0 & 7.5 &19.3\\
K2-39b & 206247743 & 10.6 & 3 & 4.605 & 0.019 & 12.7 & 0.4 & 5.0 & 0.2 & 14 &16.4\\
\rowcolor{Gray}
HATS-11b & 216414930 & 13.7 & 7 & 3.619 & 0.107 & 97.1 & 1.2 & 3.1 & 1.3 & 98 &15.5\\
K2-267b & 246851721 & 11.3 & 13 & 6.180 & 0.0681 & 19.3 & 0.1 & 0.1 & 0.1 & 19 &15.3 \\
\rowcolor{Gray}
WASP-157b & 212697709 & 12.2 & 6 & 3.952 & 0.094 & 33.8 & 0.1 & 0.5 & 0.8 & 34 &15.2\\
K2-232b & 247098361 & 9.79 & 13 & 11.168 & 0.020 & 8.5 & 0.0 & 0.1 & 0.4 & 8.5 &14.8\\
\rowcolor{Gray}
K2-22b & 201637175 & 14.9 & 1 & 0.381 & 0.075 & 205.8 & 1.8 & 87.7 & 9.0 & 226 & 14.5\\
\textbf{K2-131b} & 228732031 & 11.9 & 10 & 0.369 & 0.020 & 23.5 & 2.1 & 1.8 & 0.1 & 26 &13.5\\
\rowcolor{Gray}
WASP-151b & 246441449 & 12.7 & 12 & 4.533 & 0.101 & 38.2 & 0.1 & 0.3 & 0.4 & 38 &12.6 \\
K2-238 b & 246067459 & 13.6 & 12 & 3.205 & 0.080 & 65.1 & 0.6 & 3.8 & 1.3 & 66 &11.8 \\
\rowcolor{Gray}
K2-30b & 210957318 & 13.2 & 4 & 4.100 & 0.127 & 42.1 & 0.0 & 0.5 & 1.0 & 42 &9.9 \\
K2-100b & 211990866 & 10.4 & 5 & 1.674 & 0.027 & 7.5 & 0.1 & 0.1 & 0.0 & 7.7 &9.7 \\
\rowcolor{Gray}
\textbf{K2-106b} & 220674823 & 12.0 & 8 & 0.571 & 0.017 & 16.3 & 1.7 & 2.0 & 0.1 & 18 &9.4 \\
K2-60b & 206038483 & 12.6 & 3 & 3.003 & 0.063 & 26.2 & 0.1 & 1.2 & 0.8 & 26 & 9.1 \\
\rowcolor{Gray}
K2-141c & 246393474 & 10.6 & 12 & 7.749 & 0.094 & 7.6 & 0.0 & 0.0 & 0.0 & 7.6 & 8.6 \\
K2-140b & 228735255 & 12.5 & 10 & 6.569 & 0.114 & 22.2 & 0.0 & 0.4 & 1.4 & 22 &8.3 \\
\rowcolor{Gray}
K2-229b & 228801451 & 11.0 & 10 & 0.584 & 0.014 & 8.2 & 0.5 & 0.4 & 0.0 & 8.7 &8.2 \\
K2-261b & 201498078 & 10.5 & 14 & 11.633 & 0.053 & 6.3 & 0.0 & 0.1 & 0.2 & 6.3 & 7.9 \\
\rowcolor{Gray}
K2-253 b & 228809550 & 12.6 & 10 & 4.002 & 0.105 & 22.8 & 0.0 & 0.0 & 0.1 &23 & 7.7 \\
GJ 9827b & 246389858 & 10.3 & 12 & 1.209 & 0.025 & 5.4 & 0.0 & 0.1 & 0.1 & 5.4 & 7.4 \\
\rowcolor{Gray}
HD 89345 & 248777106 & 9.2 & 14 &11.814 & 0.038 & 3.1 & 0.0 & 0.1 & 0.1 & 3.1 & 7.0 \\
K2-121b & 211818569 & 12.9 & 5 & 5.186 & 0.109 & 21.9 & 0.0 & 0.0 & 0.1 & 22 &6.0 \\
\rowcolor{Gray}
K2-137b & 228813918 & 14.5 & 10 & 0.180 & 0.018 & 17.0 & 0.2 & 64.7 & 5.1 & 67 &5.8 \\
K2-157b & 201130233 & 12.6 & 10 & 0.365 & 0.011 & 13.9 & 2.9 & 1.8 & 0.0 & 17 &5.8 \\
\rowcolor{Gray}
K2-273b & 211919004 & 13.1 & 5,16,18 &11.716 & 0.0484 & 22.3 & 0.1 & 0.2 & 0.0 & 22 & 5.4 \\
K2-99b & 212803289 & 11.0 & 6,17 & 18.249 & 0.042 & 5.8 & 0.0 & 0.5 & 0.7 & 5.9 &5.3 \\
\rowcolor{Gray}
WASP-47e & 206103150 & 11.8 & 3 & 0.790 & 0.014 & 8.1 & 0.6 & 1.0 & 0.1 & 8.7 &5.1 \\
HATS-36b & 215969174 & 14.3 & 7 & 4.175 & 0.110 & 47.2 & 0.1 & 3.1 & 4.2 & 48 & 4.9 \\
\rowcolor{Gray}
K2-113b & 220504338 & 13.5 & 8 & 5.818 & 0.091 & 25.6 & 0.0 & 1.1 & 1.8 & 26 &4.7 \\
K2-132b & 228754001 & 11.7 & 10 &9.175 & 0.033 & 5.9 & 0.0 & 1.1 & 0.6 & 6.1 & 3.8 \\
\rowcolor{Gray}
K2-45b & 201345483 & 15.3 & 1 & 1.729 & 0.138 & 69.1 & 0.0 & 0.1 & 0.2 & 69 &3.3
\enddata
\tablenotetext{}{References for optical phase curves in (a) \citet{mocnik2018}, (b) \citet{dai2017b}, and (c) \citet{malavolta2018}}
\tablenotetext{$\dag$}{All the detected phase curve planets are highlighted in bold.}
\end{deluxetable*}
For calculating the amplitude of the ellipsoidal variation $A_{Ell}$, we consider the formulation presented in \citet{morris1985}:
\begin{align}
A_{Ell} &= \alpha_{Ell} \frac{M_p}{M_*} \left(\frac{1}{a/R_*}\right)^3 \sin ^2 i,\\
\alpha_{Ell} &= \frac{0.15 (15+u)(1+g)}{(3-u)},
\end{align}
where $\alpha_{Ell}$ is a constant characterizing tidal distortion, $M_p$ is the mass of the planet, $M_*$ is the mass of the star, $a/R_*$ is the scaled semi-major axis, $i$ is the inclination of the orbit, $u$ is the linear limb-darkening parameter, and $g$ is the gravity-darkening parameter. We determine the value of $u$ and $g$ by linearly interpolating among effective temperature, metallicity and $log~g$ and assuming turbulence of 2 $\rm{km~s}^{-1}$ from the table provided by \citet{claret2011}.
Similarly, the Doppler beaming effect ($ A_{Dop} $) is modeled as follows:
\begin{align}
&K = \left(\frac{2 \pi G}{P}\right)^{1/3} \frac{M_p \sin i}{M_*^{2/3}\sqrt{1-e^2}},\quad (M_p<<M_*)\\
&A_{Dop} = \alpha_D \frac{K}{c},
\end{align}
where $G$ is the gravitational constant, $P$ is the period, $M_p$ is the mass of the planet, $M_*$ is the mass of the star. $\alpha_D$ is Doppler boosting factor, which is based on the proposition of \citet{loeb2003}. For this work we use the empirical relation reported by \citet{millholland2017} between the Doppler boosting coefficient ($\alpha_{D}$) and effective stellar temperature ($T_{e\!f\!f}$):
\begin{equation}
\begin{aligned}
\alpha_{D} = 7.2-(6\times 10^{-4})T_{e\!f\!f}.
\end{aligned}
\end{equation}
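The remaining terms and the SNR estimator can be evaluated in the same way. The sketch below completes the amplitude budget for a hypothetical Jupiter-mass planet; the reflective/thermal amplitudes and the per-point noise $\sigma$ are placeholders carried over from the previous sketch:
\begin{verbatim}
import numpy as np

G, c = 6.674e-11, 2.998e8                  # SI constants
Msun, Mjup = 1.989e30, 1.898e27

Mp, Ms = 1.0*Mjup, 1.0*Msun                # hypothetical system
P, aRs, inc = 2.5*86400.0, 6.0, np.radians(88.0)
u, g, Teff, e = 0.6, 0.4, 5800.0, 0.0

alpha_ell = 0.15*(15 + u)*(1 + g)/(3 - u)
A_ell = alpha_ell*(Mp/Ms)*(1.0/aRs)**3*np.sin(inc)**2*1e6  # ppm

K = ((2*np.pi*G/P)**(1/3)*Mp*np.sin(inc)
     / (Ms**(2/3)*np.sqrt(1 - e**2)))                      # m/s
alpha_D = 7.2 - 6e-4*Teff
A_dop = alpha_D*K/c*1e6                                    # ppm

A_ref, A_th = 140.0, 3.0                   # from the previous sketch
A_eqv = np.sqrt((A_ref + A_th)**2 + A_ell**2 + A_dop**2)
N, sigma = 3500, 200.0                     # points; noise in ppm
SNR = A_eqv*np.sqrt(N)/(np.sqrt(2)*sigma)
print(A_ell, A_dop, A_eqv, SNR)
\end{verbatim}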
\autoref{table:best_candidates} lists 52 \textit{K2} exoplanets arranged in order of expected phase curve signal-to-noise ratio. It also lists the expected individual contributions of all four phase curve components in parts per million (ppm), along with other fundamental parameters and information. \autoref{fig:best_candidates} displays predicted phase curve amplitudes ($A_{Eqv}$) as a function of stellar magnitude. Note that while we expect the ellipsoidal variation and Doppler amplitude to be well constrained around our predicted values, the reflective and thermal components can differ by more than an order of magnitude, depending on the choice of albedo. The choice of a high geometric albedo is motivated primarily by casting a wide enough net not to miss any potential candidates. This in turn leads to a high non-detection rate. Additional factors that lead to decreased precision include crowding, imperfect detrending, stellar activity, and various other systematics. Besides, we find that the pre-flight estimates systematically overpredict the precision for the brighter targets, as can be seen in \autoref{fig:ExpectedPrecision}. We trace this back to the use of a constant aperture size across all targets.
\section{Data Preparation}
Phase curve signals are often weak compared to other astrophysical signals present in the photometric time series. Filtering to remove these signals is therefore a necessary step. By the time the final light curve is obtained, the data has gone through multiple processing steps, each of which handles different aspects of the systematics. The official \textit{Kepler} processing tool handles many of the detector and electronic effects \citep{jenkins2010}. However, the pointing-induced errors have historically been left to the exoplanet community to address.
As a response, different pipelines were developed by research groups with diverse research foci. For instance, the Kepler Asteroseismic Science Operations Center \citep[KASOC;][]{handberg2014} pipeline was purposed for asteroseismic studies, whereas those interested in stellar rotation periods developed independent pipelines \citep{angus2016}. Similarly, specialized pipelines have been developed to find transits. For our work we consider two different pipelines, \texttt{K2SFF} \citep{vanderburg2014, vanderburg2016} and \texttt{EVEREST} \citep{luger2016, luger2018}, primarily due to their ability to produce light curves with high precision. We have used the scatter of the phase-folded light curve as the selection criterion for any subsequent analysis. This was motivated by the reasoning that systematics are the single most challenging hurdle standing in the way of phase curve detection.
\begin{figure}[ht!]
\includegraphics[width=0.49\textwidth]{PrecisionObtained.pdf}
\caption{\label{fig:ExpectedPrecision} The observed precision for our targets compared to the expected noise level used for SNR estimation. The expected noise level underestimates the scatter for bright targets while generally overestimating the noise for the dimmer targets. }
\end{figure}
\subsection{Detrending}
Due to the drift of the field along the roll axis, which was periodically compensated for by thruster firings, pre-detrended light curves from \textit{K2} exhibit a characteristic sawtooth pattern. Pipelines such as \texttt{EVEREST} and \texttt{K2SFF} have been designed to remove these and other systematics unique to the \textit{K2} data. We particularly focus on these two pipelines due to their comprehensiveness and the quality of their final products. \texttt{K2SFF} light curves are available for all of our targets, whereas \texttt{EVEREST} light curves are available for all targets up through Campaign 13 at the time of this publication. All the data are publicly available from the MAST archive.\footnote{\href{https://archive.stsci.edu/k2/data\_search/search.php}{https://archive.stsci.edu/k2/data\_search/search.php}}
\texttt{K2SFF} is a parametric approach to detrending, decorrelating the motion of the centroid against the variation of the magnitude. On the other hand, \texttt{EVEREST} is a Gaussian-process-based detrending method which models the intra-pixel variation to produce the final light curve. Both methods produce light curves that are comparable in photometric precision (see \autoref{fig:ExpectedPrecision}). Unfortunately, a comprehensive quantitative comparison of the performance of these pipelines is beyond the scope of this paper. Light curves from both pipelines have been used successfully in different analyses. \citet{dai2017b} used the light curve generated by the \texttt{EVEREST} pipeline due to its higher precision, whereas \citet{malavolta2018} and \citet{mocnik2018} used some variation of \texttt{K2SFF} in producing the final light curves, which were then used for subsequent phase curve analyses.
The detrending removes most of the power injected at higher frequencies, such as that introduced by thruster firing events ($\sim$6 hours), while at longer timescales (periods $>$15 days) other long-term systematics can still dominate \citep{vancleve2016, aranzana2018}. Since the phase curve signals lie in a region ($\sim$1 to 5 days) which is usually well separated in the frequency domain from both of these effects, they are minimally distorted, and the treatment by these pipelines is sufficient in most cases. However, there are cases where these traditional \textit{K2} pipelines tend to fail, such as in crowded fields or for bright targets. This in turn has motivated the development of more specialized pipelines to handle crowding, typically in a cluster environment \citep{libralato2016}, or bright targets that can saturate the pixels \citep{white2017}. Unfortunately, the light curves from these specialized pipelines are not as comprehensively available for most \textit{K2} targets; therefore, we limit our analysis to the light curves available from \texttt{EVEREST} and \texttt{K2SFF}.
In \autoref{fig:ExpectedPrecision}, we compare the observed precision of the light curves against the expected precision. We found that the pre-flight prediction provided for \textit{Kepler} targets\footnote{\href{https://keplergo.arc.nasa.gov/CalibrationSN.shtml}{https://keplergo.arc.nasa.gov/CalibrationSN.shtml}} overestimates the precision for the bright targets, whereas it underestimates the precision for the fainter targets. This error can be traced to the constant read-noise term assigned to all targets, calculated by assuming an aperture size of 20 pixels. For the bright targets, non-linear effects and background contamination pose worse problems than this calculation allows for. While a separate algorithm for detrending bright stars has been explored \citep{white2017}, there might be room for even more optimization. Overall, the assumed precision curve provides a good estimate of the precision expected for our targets.
\begin{figure*}[ht]
\includegraphics[width=0.97\textwidth]{Anomaly.pdf}
\caption{\label{fig:anomaly} Different systematics in the detrended data. (a) Transit points and outliers detected in a portion of the QATAR-2 \texttt{EVEREST} detrended light curve. (b) Potential systematic effects akin to thermal settling observed during the first few days of the WASP-47 \texttt{EVEREST} detrended data. (c) An abrupt change observed in the \texttt{EVEREST} detrended data for HATS-12b, possibly due to a change in pixel responsivity. Note that a similar offset is spotted in the \texttt{K2SFF} detrended data.}
\end{figure*}
\subsection{Outliers Handling}
The final light curves obtained by the pipelines need further processing due to the presence of outliers. We initially remove all data lying outside eight times the inter-quartile (Q3-Q1) range from the median. Following this, we mask out all the transit points found using the transit parameters reported in the NASA exoplanet database. We then locate the outliers through an iterative spline fitting process, excluding data that lie more than 3$\sigma$ away in each iteration. This process is repeated once the light curve is folded, during which outliers occurring during transit events are removed. On average, this led to the removal of around 2.5\% of the original data across our targets.
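A minimal sketch of this iterative rejection (with simplified knot placement and the 3$\sigma$ threshold quoted above) is:
\begin{verbatim}
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def clip_outliers(t, flux, n_iter=5, nsig=3.0, knot_spacing=0.75):
    # Iteratively fit a cubic spline and reject >3-sigma residuals.
    mask = np.ones_like(flux, dtype=bool)
    for _ in range(n_iter):
        knots = np.arange(t[mask][1], t[mask][-2], knot_spacing)
        spl = LSQUnivariateSpline(t[mask], flux[mask], knots, k=3)
        resid = flux - spl(t)
        sigma = np.std(resid[mask])
        mask = np.abs(resid) < nsig * sigma
    return mask                            # True = retained point
\end{verbatim}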
Some of our targets show effects akin to thermal settling at the beginning of the data (see \autoref{fig:anomaly}). However, since these effects are not uniformly present in all our targets, we did not exclude these data points in our analysis. If multi-campaign observations were available for a target, we combined all of the available light curves to produce the final light curve. \textit{K2} light curves often have a gap, typically of a few days, in the middle of the observation for data downlinking. These discontinuities are expected and well handled by the detrending pipelines as well as the filtering. For HATS-12, however, there is an abrupt offset in the middle of the observation (see \autoref{fig:anomaly}) which was observed in both of the detrended light curves. We correct for this apparent offset by modeling the continuum using linear regression at the break point, and applying the relevant offset to generate the final stellar continuum before filtering.
\section{Signal Injection Retrieval Test}
In order to extract the phase curve, the stellar continuum has to be modeled out. A host of filtering techniques have been used in the past. Sometimes the stellar continuum exhibits little to no variation, therefore requiring minimal pre-processing, as in the case of TRES-2b \citep{barclay2012}. However, for most \textit{K2} targets, filtering provides an opportunity not only to remove the stellar continuum, but also any uncorrected systematics. Thus, we explored a suite of filtering techniques available to us, among which we focused particularly on three: spline, phasma and Butterworth. Before performing all three filtering processes, we masked the regions where the primary transit and secondary eclipse occur. This can lead to increased scatter around the region surrounding occultation. Below we discuss the three filtering techniques, whose performances were evaluated using $\chi_\nu^2$ as the primary metric:
\begin{enumerate}
\item \textbf{Spline Filtering}: Spline flattening is the most commonly used filtering technique for removing the stellar flux \citep{esteves2013, shporer2015,angerhausen2015, armstrong2016}. The use of different polynomial degrees or knotting intervals is common, depending on the planetary period as well as the ability of the spline to model the stellar continuum. Splines essentially act as a low-pass filter, and have been successfully used on a range of targets in the past. For this work, we adopt a third-degree polynomial spline knotted once every period of the planet.
\item \textbf{Phasma}: Phasma as a filtering method in the context of phase curves was proposed in \citet{jansen2018}. Phasma is in essence a median filter with the window length set to the period of the planet. A similar implementation with a mean filter has also been used. While easy to implement, phasma did not perform on par with the other filtering techniques, particularly among hot Jupiters (see \autoref{fig:threeflatteningmethod} and \autoref{fig:retrievaltest}).
\item \textbf{Frequency filtering}: Another method that has been used is harmonics-based filtering \citep{quintana2013}. In this work, we use a sixth-order Butterworth filter as implemented in \texttt{scipy}\footnote{\href{scipy.org}{scipy.org}} with a bandpass between half the planetary frequency and 3.5 times the planetary frequency (a minimal sketch of this filter is given just after this list). The limits of the bandpass were partially motivated by the need to preserve the third harmonic \citep{esteves2015}, and partially decided through trial and error. Before filtering, we uniformly and linearly interpolate the light curve after masking the transit and occultation points. Overall, the performance of the Butterworth filter was superior among the three filtering techniques studied in detail, particularly among the hot Jupiters (see \autoref{fig:threeflatteningmethod} and \autoref{fig:retrievaltest}). However, structural features such as ringing artifacts were more prominent in the residuals.
\end{enumerate}
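As referenced in the list above, the Butterworth option reduces to a few lines with \texttt{scipy}; the sketch below applies a sixth-order bandpass between 0.5 and 3.5 times the planetary orbital frequency, assuming the light curve has already been interpolated onto a uniform grid. The second-order-sections form and the zero-phase \texttt{sosfiltfilt} call are our choices here for numerical stability; variable names are illustrative.
\begin{verbatim}
import numpy as np
from scipy.signal import butter, sosfiltfilt

def butterworth_bandpass(flux, cadence_days, period_days, order=6):
    nyquist = 0.5 / cadence_days          # Nyquist frequency [1/day]
    f_planet = 1.0 / period_days          # orbital frequency [1/day]
    low = 0.5 * f_planet / nyquist        # normalized band edges
    high = 3.5 * f_planet / nyquist
    sos = butter(order, [low, high], btype="bandpass", output="sos")
    return sosfiltfilt(sos, flux)         # zero-phase filtering
\end{verbatim}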
\begin{deluxetable}{lc}
\tablecaption{\label{table:InjectionTesttargets} Targets used for Signal Injection Test}
\tablehead{
\colhead{\textbf{Identifier}}&
\colhead{\textbf{EPIC IDs}}
}
\startdata
\textbf{K2-31b} & 203089855, 203526723, 203758400, \\
& 204254456, 204529573\\
\textbf{HATS-9b} & 214576141, 214963629, 215327780, \\
& 215496957, 216068131\\
\textbf{HATS-12b} & 215293111, 215310931, 215517702, \\
& 215594041, 215677034\\
\textbf{K2-107b} & 214402646, 215075353, 215542349, \\
& 215771782, 215834357\\
\textbf{K2-131b} & 201094825, 201094970, 201121210, \\
& 201141186, 201164625\\
\textbf{K2-106b} & 220197918, 220205426, 220209263, \\
& 220228282, 220249101
\enddata
\end{deluxetable}
For each system with detected phase curves, we download \texttt{K2SFF} and \texttt{EVEREST} detrended light curves of five target stars from the same campaign with a similar magnitude and post-detrending precision range. While choosing the light curves for the signal injection test, targets exhibiting strong short-term modulation (i.e., periods of less than 10 days), exhibiting intrinsic variability, or having obvious uncorrected systematics were deliberately avoided. The list of targets used for the signal injection-retrieval test is reported in \autoref{table:InjectionTesttargets}. For the hot Jupiters among our targets, we injected phase curve signals into the corresponding light curves, considering geometric albedos between 0.01 and 0.66 with a step-size of 0.01 and masses between 0.25 and 7.0 $M_{\rm Jup}$ with a step-size of 0.25 $M_{\rm Jup}$, while using the reported transit parameters for the planets. We ran the resulting light curves through our pipeline, and compared the final retrieved signal to the injected phase curve signal. Quick fits using Levenberg-Marquardt minimization showed that, in most cases, albedo and mass parameters consistent with the injected signals can be retrieved. During these tests, we found that the Butterworth filter outperforms both spline and phasma filtering among hot Jupiters (see \autoref{fig:threeflatteningmethod} and \autoref{fig:retrievaltest}). We also performed additional tests with phase-offset signals, and the Butterworth filter continued to outperform the other two filters. For the ultra-short period super-Earths, we performed tests using the parameters of K2-141, K2-131 and K2-106, for which we simulated phase curves with a reflective component with geometric albedo ranging between 0.01 and 0.8. For this latter set of planets, the performances of the different filtering techniques were comparable (see \autoref{fig:retrievaltest}).
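The injected signal in these tests is the standard three-component phase curve model; a minimal sketch is given below, with the usual sign conventions (the reflective/thermal term at its minimum at mid-transit, and the ellipsoidal term varying at twice the orbital frequency). The amplitudes would be computed from the grid of albedos and masses described above; this is illustrative rather than the exact implementation used.
\begin{verbatim}
import numpy as np

def phase_curve_model(phase, a_refth, a_ell, a_dop):
    # phase in [0, 1), with phase = 0 at mid-transit
    # a_refth, a_ell, a_dop: semi-amplitudes in ppm
    refth = -a_refth * np.cos(2.0 * np.pi * phase)  # peaks at eclipse
    ell = -a_ell * np.cos(4.0 * np.pi * phase)      # two maxima per orbit
    dop = a_dop * np.sin(2.0 * np.pi * phase)       # beaming term
    return (refth + ell + dop) * 1e-6               # ppm -> relative flux
\end{verbatim}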
\begin{figure}[ht]
\includegraphics[width=0.48\textwidth]{ThreeFlatteningMethods.pdf}
\caption{\label{fig:threeflatteningmethod} A random case of an injected signal retrieved through (a) spline filtering, (b) phasma filtering, and (c) Butterworth bandpass filtering. The error bars were scaled down according to the bin size from the calculated mean scatter.}
\end{figure}
\begin{figure}[ht]
\includegraphics[width=0.48\textwidth]{InjectionRetrievalHistogram.pdf}
\caption{\label{fig:retrievaltest} Top: Histogram of $\chi^2_\nu$ for the retrieved signals of the ultra-short period planets (USP). The performance of all three flattening methods is similar. Bottom: Histogram of $\chi^2_\nu$ for the retrieved signals of our sample of hot Jupiter targets. The mean values of $\chi^2_\nu$, shown with dotted lines, were 1.00 for the Butterworth filter, 2.09 for the spline filter, and 3.06 for the phasma filter. Among the filtering techniques, we found that the Butterworth filter statistically performed the best.}
\end{figure}
Based on these tests, we have used Butterworth-filtered light curves for all our hot Jupiter targets. The sixth-order Butterworth filter allows us to isolate power from a specific frequency band, but it can sometimes lead to ringing effects in the residuals, visible for some of our targets. Such artifacts were also observed in our injection-retrieval test; however, the presence of these artifacts did not appear to affect the accuracy of the retrieved parameters. We explored the possibility of using narrower bandpass Butterworth filters; however, such implementations tended to decrease the accuracy of the retrieved parameters in our signal injection test. For the short period rocky planets (periods less than a day) such as K2-141b, K2-131b and K2-106b, spline-flattened light curves were used, as the spline's performance is comparable to the Butterworth filter and it has already been shown to work robustly on a number of occasions.
Note that our injection-retrieval test has been heavily influenced by the parameters of the systems where we discovered the phase curves. We performed our test using detrended light curves of systems that did not show strong stellar modulations or unusual artifacts. Thus, there are inherent limitations to the tests we performed, and the performance of the filtering techniques is likely to change in the presence of complex red noise. We have also limited ourselves to three filtering techniques in this paper, but there is a host of techniques available which we did not fully explore. For instance, some authors have pointed out the possibility of using Gaussian Processes \citep{serrano2018} for disentangling planetary phase curves amid strong stellar modulations induced by stellar rotation. However, the computational cost and the complexity of the model, along with the possibility of overfitting the phase curves, dissuaded us from diving too deep into this technique \citep{millholland2017}. In the future, we plan to explore a wider suite of filtering and data analysis techniques, which might allow us to improve on the process we introduce here.
\section{Models}
For all the targets where we detect phase curves, we simultaneously fit for the primary transit, secondary eclipse and phase curve. We set the period as noted in the NASA Exoplanet Archive to produce phase-folded light curves. The simultaneous fit of the transit and the phase curve is primarily motivated by the need to understand cases with strong degeneracies among parameters, as is the case for K2-31b. We initiate our MCMC model using the parameters reported in the discovery paper. For limb darkening, we use quadratic forms with uniform priors with a range of 0.1 around the nearest values estimated by \citet{claret2011}. We similarly introduce priors in our MCMC for the scaled semi-major axis ($a/R_*$) to only accept values that yield a stellar density ($\rho_*$) within 5$\sigma$ of the spectroscopically derived stellar density \citep{winn2010}:
\begin{equation}
\rho_* + k^3 \rho_p = \frac{3 \pi }{GP^2} \left( \frac{a}{R_*} \right)^3 ,
\label{eqn:stellardensity}
\end{equation}
where $G$ is the gravitational constant, and $\rho_p$ is the planetary density. We ignore the contribution from $k^3 \rho_p$ in our calculation. Additionally, we introduce a $T_0$ offset parameter, which allows for an offset in the time of conjunction. Note that the final reported value of $T_0$ is calculated by appropriately combining this offset with the value reported in the original discovery paper.
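In practice, this prior amounts to a simple accept/reject check on each proposed $a/R_*$; a minimal sketch using \texttt{astropy} units follows, where \texttt{rho\_spec} and \texttt{rho\_spec\_err} stand for the spectroscopically derived density and its uncertainty (names are illustrative).
\begin{verbatim}
import numpy as np
from astropy import units as u
from astropy.constants import G

def stellar_density(a_over_rstar, period_days):
    # rho_* = (3 pi / (G P^2)) (a/R_*)^3, neglecting k^3 rho_p
    P = period_days * u.day
    rho = (3.0 * np.pi / (G * P**2)) * a_over_rstar**3
    return rho.to(u.g / u.cm**3)

def density_prior_ok(a_over_rstar, period_days,
                     rho_spec, rho_spec_err, nsigma=5.0):
    rho = stellar_density(a_over_rstar, period_days).value
    return abs(rho - rho_spec) < nsigma * rho_spec_err
\end{verbatim}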
We use the \citet{mandel2002} formalism as implemented in \texttt{batman} \citep{kriedberg2015} in order to fit for the primary transit as well as the secondary eclipses. We supersample our light curve by a factor of 15, and set the exposure time to 29.4 minutes. For the phase curves, we used the Bayesian Information Criterion (BIC) to choose the best model among three different phase curve models: i) a thermal component with no significant nightside contribution (Model I); ii) a thermal component with a significant nightside contribution (Model II); iii) a thermal component with a phase offset (Model III). In Model I, the secondary eclipse depth is constrained as a function of the amplitudes of the reflective and thermal components, which can lead to model-constrained detections of secondary eclipses. In Model II, the depth of the secondary eclipse is a free parameter without any priors. In Model III, we use the amplitude of the thermal component and the phase offset as two additional free parameters compared to Model I. For all three models, we do not fit for phase variation during the transit, where we expect the transit to dominate the signal. We fit for a single mass parameter, which we refer to as the photometric mass ($M_{Phot}$), to model both the ellipsoidal variation as well as the Doppler effect. For all of our models, we have used non-eccentric orbits, motivated by the original discovery papers.
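A minimal sketch of the transit part of the model using \texttt{batman}, with the supersampling settings quoted above, is shown below; all parameter values are placeholders.
\begin{verbatim}
import numpy as np
import batman

params = batman.TransitParams()
params.t0 = 0.0            # time of inferior conjunction
params.per = 1.337         # orbital period [days] (placeholder)
params.rp = 0.165          # planet-to-star radius ratio
params.a = 6.68            # scaled semi-major axis a/R_*
params.inc = 89.7          # inclination [deg]
params.ecc = 0.0           # circular orbit assumed throughout
params.w = 90.0            # argument of periastron [deg]
params.u = [0.55, -0.005]  # quadratic limb-darkening coefficients
params.limb_dark = "quadratic"

t = np.linspace(-0.5, 0.5, 2000) * params.per
# Supersample by 15 using the K2 long-cadence exposure (29.4 min)
m = batman.TransitModel(params, t, supersample_factor=15,
                        exp_time=29.4 / 60.0 / 24.0)
transit_flux = m.light_curve(params)
\end{verbatim}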
For exploring the parameter space, we used the affine-invariant MCMC implemented in \texttt{emcee} \citep{mackey2013}, running for 25,000 steps with 50 walkers. We use the Gelman-Rubin statistic to ensure all of our MCMC chains converge. After the initial run, we compared the different models using the BIC, and re-ran the best model for 50,000 steps by initializing our parameters around the best values obtained in the previous runs. We build the posterior distribution after discarding the first 25,000 steps as burn-in, and use the remainder to estimate the uncertainties of the fit parameters. From the posterior distribution, we report the median and the 1$\sigma$ confidence interval corresponding to the 15.8 and 84.2 percentiles, respectively. For some parameters, such as the planetary mass or equilibrium temperature, we propagate the errors from the stellar parameters together with the errors we estimate from the posterior distribution in our final reported parameters.
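The exploration itself follows the standard \texttt{emcee} pattern; a minimal sketch with the walker and step counts quoted above is given below, where \texttt{log\_prior}, \texttt{full\_model}, and \texttt{theta\_init} are assumed helpers implementing the priors and the combined transit/eclipse/phase-curve model of this section.
\begin{verbatim}
import numpy as np
import emcee

def log_probability(theta, t, flux, flux_err):
    lp = log_prior(theta)            # assumed helper (priors above)
    if not np.isfinite(lp):
        return -np.inf
    model = full_model(theta, t)     # transit + eclipse + phase curve
    return lp - 0.5 * np.sum(((flux - model) / flux_err) ** 2)

ndim, nwalkers, nsteps = 12, 50, 25000   # ndim is model dependent
p0 = theta_init + 1e-4 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_probability,
                                args=(t, flux, flux_err))
sampler.run_mcmc(p0, nsteps, progress=True)
samples = sampler.get_chain(discard=nsteps // 2, flat=True)
\end{verbatim}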
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.98\textwidth]{CombinedFigure0.pdf}
\caption{\label{fig:ComboFigure0} \textbf{Top:} Best fit of the phase curve model for Qatar-2b using \texttt{EVEREST} data, exhibiting strong ellipsoidal and Doppler variations. \textbf{Middle:} Best fit of the phase curve model of K2-141b, showing strong thermal and reflective modulation. \textbf{Bottom:} Phase curve of WASP-104b observed in the \texttt{K2SFF} detrended light curve, exhibiting prominent ellipsoidal and Doppler effects. The fit parameters for all three targets are presented in \autoref{table:phasefit_param0}.}
\end{figure*}
For all our models we consider circular orbits and expect the secondary eclipse to occur exactly at phase 0.5. We ignore the R\o{}mer delay and the effects of eccentricity. These choices were motivated by the precision of the data as well as the absence of reported significant eccentricities in the discovery papers. As for the temperature of the planet, we report the dayside temperature by numerically solving the following equation:
\begin{equation}
\Delta = A_g \left(\frac{R_p/R_*}{a/R_*}\right)^2 + \left(\frac{R_p}{R_*}\right)^2 \frac{\int B(T_{Day}) R(\lambda) d \lambda}{\int B(T_{e\!f\!f}) R(\lambda)d \lambda},
\label{eqn:delta_ag}
\end{equation}
where $\Delta$ is the secondary eclipse depth, $R_p/R_*$ is the scaled radius, $a/R_*$ is the scaled semi-major axis, and $B$ is the Planck function, which is convolved with the \textit{Kepler} response function $R(\lambda)$. We solve \autoref{eqn:delta_ag} for the geometric albedo ($A_g$), and assume the dayside contribution is a function of the geometric albedo using the aforementioned \citet{lopez2007} formalism introduced in \autoref{eqn:lopez2007}. In order to report the dayside temperature, we set the re-radiation factor to 1/2 and assume A$_B$ = 3/2A$_g$ unless otherwise stated. Note that this simple relation is not valid for the planets in our solar system, and is likely to overestimate the value of the Bond albedo \citep{dyudina2016}. We also require the secondary eclipse depth to be strictly greater than or equal to the amplitude of the reflective and thermal components at phase 0.5. Similarly, the equilibrium temperature is calculated by setting the re-radiation factor to 1/4.
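Numerically, solving \autoref{eqn:delta_ag} reduces to a one-dimensional root find; the sketch below solves for $T_{Day}$ given a depth and albedo, assuming the \textit{Kepler} response curve is available as tabulated arrays and approximating the band integrals with a trapezoidal sum. The bracketing temperature range is illustrative, and a valid root requires that the reflected-light term alone not exceed the depth.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

h, c, k_B = 6.626e-34, 2.998e8, 1.381e-23   # SI units

def planck(wav_m, T):
    return (2 * h * c**2 / wav_m**5) / np.expm1(h * c / (wav_m * k_B * T))

def band_flux(T, wav_m, response):
    return np.trapz(planck(wav_m, T) * response, wav_m)

def eclipse_depth(A_g, T_day, k, a_over_rstar, T_eff, wav_m, response):
    reflected = A_g * (k / a_over_rstar) ** 2
    thermal = k**2 * (band_flux(T_day, wav_m, response)
                      / band_flux(T_eff, wav_m, response))
    return reflected + thermal

def solve_t_day(depth, A_g, k, a_over_rstar, T_eff, wav_m, response):
    f = lambda T: eclipse_depth(A_g, T, k, a_over_rstar,
                                T_eff, wav_m, response) - depth
    return brentq(f, 100.0, 6000.0)  # bracketing range is illustrative
\end{verbatim}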
\section{Results}
\subsection{Pre-Selected Targets}
Prior to our study, three \textit{K2} targets had reported phase curves: Qatar-2b \citep{dai2017b}, K2-141b \citep{malavolta2018}, and WASP-104b \citep{mocnik2018}. We re-ran these objects through our pipeline, and fit the phase curve models to the phase-folded data. For Qatar-2b, we obtained an ellipsoidal variation amplitude of 22.6$^{+2.6}_{-2.5}$ ppm against 15.4$\pm$4.8 ppm, and a Doppler modulation amplitude of 10.8$^{+1.3}_{-1.2}$ ppm against 14.6 $\pm$ 5.1 ppm reported in \citet{dai2017b}. This led to an estimated photometric mass of 3.27$^{+0.40}_{-0.41}$ $M_{\textrm{\scriptsize Jup}}$, consistent within 2$\sigma$ with the radial velocity mass of 2.487$\pm$0.089 $M_{\textrm{\scriptsize Jup}}$~ \citep{bryan2012}. The reflective and thermal components in the phase curve constrain the geometric albedo to below 0.03 at 3$\sigma$ confidence under the assumption $A_B = 3/2A_g$. For K2-141b, we use the same detrended light curve used in \citet{malavolta2018}, and the amplitude of the phase curve we find, 11.8$\pm$1.5 ppm, is consistent with their reported occultation depth of 23 $\pm$ 4 ppm. Under the assumption $A_g = A_B$, and numerically solving \autoref{eqn:delta_ag}, we estimate a geometric albedo of 0.205$^{+0.059}_{- 0.077}$. Similarly, we detect the phase curve signal in WASP-104b using the \texttt{K2SFF} detrended light curve, as had been reported in \citet{mocnik2018}. We find the reflective and thermal component at 5.1$\pm$1.0 ppm against their reported 4.8$\pm$2.1 ppm, the ellipsoidal variation amplitude at 3.21$\pm$0.83 ppm against their reported 6.9$\pm$2.2 ppm, and the Doppler effect amplitude at 2.13$\pm$0.55 ppm against their reported 4.2$\pm$1.9 ppm. Also, the photometric mass we obtain, 0.99$\pm$0.26 $M_{\textrm{\scriptsize Jup}}$, is consistent within 1.5$\sigma$ with the radial velocity mass of 1.272$\pm$0.047 $M_{\textrm{\scriptsize Jup}}$~ \citep{smith2014}.
Thus, our results for all three planets are largely consistent with the previous phase curve analyses. The fits using the best parameters for these targets are presented in \autoref{fig:ComboFigure0}, and the corresponding fit parameters are presented in \autoref{table:phasefit_param0}. The light curves used in the analysis, as well as the filtered light curves, are presented in the Appendix.
\begin{deluxetable*}{lcccc}
\tablecaption{\label{table:phasefit_param0} Stellar and Planetary Parameters for Qatar-2b, K2-141b, and WASP-104b}
\tablehead{
\colhead{Parameter} &
\colhead{Unit}&
\colhead{QATAR-2b}&
\colhead{K2-141b}&
\colhead{WASP-104b}
}
\startdata
\multicolumn{2}{l}{\textbf{Stellar Parameters}}\\
$M_*$ & $M_\odot$ & 0.74$\pm$0.04$^{a}$ & 0.708$\pm$0.028$^{b}$ & 1.02$\pm$0.09$^{c}$\\
$R_*$ & $R_\odot$ & 0.713$\pm$0.018$^{a}$& 0.681$\pm$0.018$^{b}$& 0.93$\pm$0.23$^{c}$\\
$T_{e\!f\!f}$ & K & 4645$\pm$50$^{a}$ & 4599$\pm$79$^{b}$ & 5450$\pm$130$^{c}$\\
$[$Fe/H$]$ & dex & -0.02$\pm$0.08$^{d}$& -0.06$^{+0.08}_{-0.10}$$^{b}$ & -0.09$\pm$0.09$^{c}$ \\
log $g_*$ & cgs & 4.601$\pm$0.018$^{a}$& 4.62$^{+0.02}_{-0.03}$$^{b}$ & 4.5$\pm$0.2$^{c}$ \\
$u$ & -& 0.7165$^{e}$& 0.7194$^{e}$ & 0.6459$^{e}$\\
$g$ & -& 0.5434$^{e}$& 0.5452$^{e}$ & 0.4292$^{e}$\\
\hline
\textbf{Pipeline}& - & \texttt{EVEREST} & \texttt{K2SFF}$^{b}$ & \texttt{K2SFF}\\
\hline
\multicolumn{2}{l}{\textbf{Orbital Parameters}}\\
Period & Days & 1.337116553$\pm$0.000000044$^{f}$& 0.2803244$\pm$0.0000015$^{b}$ & 1.75540636$\pm$0.00000014$^{g}$\\
$T_0$ - 2450000 & BJD & 5617.5816109$\pm$0.0000087 & 7744.071542$^{+0.000071}_{-0.000068}$ & 7935.0702204$\pm$0.0000078\\
$R_p/R_*$ & - & 0.16526$^{+0.00010}_{-0.00009}$ & 0.02084$^{+0.0005}_{-0.0002}$& 0.12041$\pm$0.00026 \\
$a/R_*$ & - & 6.6769$^{+0.006}_{-0.013}$& 2.30$^{+0.05}_{-0.15}$ & 6.732$^{+0.041}_{-0.044}$ \\
$b$ & - & 0.036$^{+0.038}_{-0.026}$ & 0.23$^{+0.22}_{-0.16}$ & 0.7115$^{+0.0063}_{-0.0059}$\\
Inclination& Deg & 89.69$^{+ 0.22}_{- 0.33}$ & 84.2 $^{+ 4.1}_{-6.3}$ & 83.933$^{+0.087}_{-0.093}$\\
$e$ &-& 0 (assumed)& 0 (assumed) & 0 (assumed) \\
$\omega$ & Deg & 90 (assumed)& 90 (assumed) & 90 (assumed)\\
$u_1$ & - & 0.5488$^{+0.0016}_{-0.0008}$ & 0.639$^{+0.057}_{-0.051}$ & 0.422$^{+0.017}_{-0.015}$\\
$u_2$ & - & -0.0051$^{+0.0043}_{-0.0020}$ & 0.074$^{+0.073}_{-0.061}$ & 0.135$^{+0.023}_{-0.015}$\\
\hline
\multicolumn{2}{l}{\textbf{Phase Curve Parameters}}\\
$A_{Re\!f+T\!h}$ & ppm & 6.9$^{+3.1}_{-3.0}$ & 11.8$\pm$1.5 & 5.1$\pm$1.0\\
$A_{Ell}$ & ppm & 22.6 $^{+2.6}_{- 2.5}$ & -- & 3.21$\pm$0.83\\
$A_{Dop}$ & ppm & 10.8$^{+1.3}_{-1.2}$ & -- & 2.13$\pm$0.55\\
$T_{Day}$ & K & 1711 $\pm$ 24 & 2406$^{+144}_{-76}$ & 1698 $\pm$24\\
$T_{eq}$ & K& 1434 $\pm$ 20 & 1984$^{+108}_{-59}$ & 1422 $\pm$ 20\\
$A_g$ & - & $<$0.03 (3$\sigma$) & 0.205$^{+0.059}_{-0.077}$ & 0.0211 $\pm$ 0.0068\\
$M_{RV}$ & $M_{\mathrm{Jup}}$ & 2.487$\pm$0.086$^{a}$ & 0.016$\pm$0.0013$^{b}$ & 1.272$\pm$0.047$^{c}$\\
$M_{Phot}$ & $M_{\mathrm{Jup}}$ & 3.27$^{+0.40}_{-0.41}$ & -- & 0.99$\pm$0.26
\enddata
\tablenotetext{a}{Adopted from \citet{bryan2012}}
\tablenotetext{b}{Adopted from \citet{malavolta2018}}
\tablenotetext{c}{Adopted from \citet{smith2014}}
\tablenotetext{d}{Adopted from \citet{maxted2015}}
\tablenotetext{e}{Linear limb-darkening ($u$) and gravity-darkening ($g$) coefficients interpolated from \citet{claret2011}}
\tablenotetext{f}{Adopted from \citet{dai2017b}}
\tablenotetext{g}{Adopted from \citet{mocnik2018}}
\end{deluxetable*}
\subsection{New Discoveries}
From the remaining 49 targets, we discovered phase curves for 6 planets. Once discovered, we fitted the three standard models discussed under $\S$5. The unfiltered light curves, the outliers, as well as the filtered light curves from the process are presented in the Appendix. Below we present and discuss each of our targets in detail:
\begin{deluxetable*}{lccccc}
\tablecaption{\label{table:phasefit_param} Stellar and Planetary Parameters for K2-31b, HATS-9b, HATS-12b, and K2-107b}
\tablehead{
\colhead{Parameter} &
\colhead{Unit}&
\colhead{K2-31b}&
\colhead{HATS-9b}&
\colhead{HATS-12b}&
\colhead{K2-107b}
}
\startdata
\multicolumn{2}{l}{\textbf{Stellar Parameters}}\\
$M_*$ & $M_\odot$ & 0.91$\pm$0.06$^{a}$ & 1.030$\pm$0.039$^{b}$ & 1.489$\pm$0.071$^{c}$ & 1.30$\pm$0.14$^{d}$\\
$R_*$ & $R_\odot$ & 0.78$\pm$0.07$^{a}$& 1.503$^{+0.101}_{-0.043}$$^{b}$ & 2.21$\pm$0.21$^{c}$ & 1.78$\pm$0.16$^{d}$\\
$T_{e\!f\!f}$ & K & 5280$\pm$70$^{a}$& 5366$\pm$70$^{b}$ & 6408$\pm$75$^{c}$ & 6030$\pm$120$^{d}$\\
$[$Fe/H$]$ & dex & 0.08$\pm$0.05$^{a}$& 0.340$\pm$0.050$^{b}$ & -0.100$\pm$0.040$^{c}$ & 0.10$\pm$0.10$^{d}$\\
log $g_*$ & cgs &4.60$\pm$0.07$^{a}$& 4.095$\pm$0.038$^{b}$ & 3.923$\pm$0.065$^{c}$ & 4.07$\pm$0.10$^{d}$\\
$u$ & -& 0.6554$^{e}$& 0.5467$^{e}$ & 0.5317$^{e}$ & 0.5723$^{e}$\\
$g$ & -& 0.4611$^{e}$& 0.3200$^{e}$ & 0.2751$^{e}$ & 0.3280$^{e}$\\
\hline
\textbf{Pipeline}&-& \texttt{EVEREST} & \texttt{K2SFF} & \texttt{K2SFF} & \texttt{EVEREST}\\
\hline
\multicolumn{2}{l}{\textbf{Orbital Parameters}}\\
Period & Days & 1.257850$\pm$0.000002$^{a}$& 1.9153073$\pm$0.000005$^{b}$ & 3.142833 $\pm$0.000011$^{c}$ & 3.31392$\pm$0.00002$^{d}$\\
$T_0$ - 2450000 & BJD & 2358.709367$\pm$0.000010& 6124.258934$^{+0.000032}_{-0.000033}$ & 6798.955644$\pm$0.000075 & 6928.059202$\pm$0.000055\\
$R_p/R_*$ & - & 0.168$^{+0.042}_{-0.023}$& 0.08414$^{+0.00014}_{-0.00012}$& 0.06048$^{+0.00056}_{-0.00042}$ & 0.08335$^{+0.00023}_{-0.00026}$\\
$a/R_*$ & - & 5.66$^{+0.10}_{-0.09}$& 4.556$^{+0.011}_{-0.026}$ & 5.47$^{+0.19}_{-0.24}$ & 5.890$^{+0.082}_{-0.075}$\\
$b$ & - & 1.022$^{+0.053}_{-0.031}$ & 0.065$^{+0.065}_{-0.045}$ & 0.29$^{+0.12}_{-0.16}$ & 0.7925$^{+0.0074}_{-0.0082}$\\
Inclination& Deg & 79.61$^{+0.51}_{-0.74}$ & 89.29 $^{+0.55}_{-0.82}$ & 87.0$^{+1.8 }_{-1.5}$ & 82.27$^{+ 0.18}_{- 0.17}$\\
$e$ &-& 0 (assumed)& 0 (assumed) & 0 (assumed) & 0 (assumed)\\
$\omega$ & Deg & 90 (assumed)& 90 (assumed) & 90 (assumed) & 90 (assumed)\\
$u_1$ & - & 0.560$^{+0.029}_{-0.051}$ & 0.5300$^{+0.0072}_{-0.0049}$ & 0.3227$^{+0.0091}_{-0.0053}$ & 0.437$^{+0.027}_{-0.019}$\\
$u_2$ & - & 0.261$^{+0.039}_{-0.062}$ & -0.012$^{+0.017}_{-0.016}$ & 0.236$^{+0.013}_{-0.006}$ & 0.100$^{+0.030}_{-0.021}$\\
\hline
\multicolumn{2}{l}{\textbf{Phase Curve Parameters}}\\
$A_{Re\!f+T\!h}$ & ppm & 12.27$^{+0.85}_{-0.83}$ &11.6$^{+2.3}_{-2.4}$ & 7.5$\pm1.9$ & 12.8$^{+1.9}_{-2.0}$ \\
$A_{Ell}$ & ppm & 17.07$^{+ 0.77}_{-0.75}$ & 14.7$\pm$ 2.3& 10.2$\pm$1.8 & 11.6 $^{+ 1.8}_{-1.9}$\\
$A_{Dop}$ & ppm & 5.55$^{+ 0.33}_{- 0.30}$ & 2.08 $\pm$ 0.33 & 2.53$^{+ 0.54 }_{-0.50 }$ & 3.03$^{+ 0.51}_{-0.50}$\\
$T_{Day}$ & K& 1860$\pm$35 & 2100$\pm$29& 2240$^{+82}_{-68}$ & 2005$^{+ 26}_{-27}$\\
$T_{eq}$ & K& 1554$\pm$29 & 1751$\pm$24 & 1849$^{+61}_{-52}$ & 1669$^{+20}_{-21}$\\
$A_g$ & - & $<$0.047 (3$\sigma$) & 0.027$^{+ 0.015}_{-0.017}$ & 0.07$\pm 0.04$ & 0.102$\pm$0.023\\
$M_{RV}$ & $M_{\mathrm{Jup}}$ & 1.774$\pm$0.079$^{a}$& 0.837$\pm$0.029$^{b}$ & 2.38$\pm$ 0.11$^{c}$ & 0.84$\pm$0.08$^{d}$\\
$M_{Phot}$ & $M_{\mathrm{Jup}}$ & 2.09$\pm0.18$ & 0.98$\pm0.16$ & 2.13$^{+0.46}_{-0.43}$& 1.57$^{+0.26}_{-0.27}$\\
\enddata
\tablenotetext{a}{Adopted from \citet{grziwa2016}}
\tablenotetext{b}{Adopted from \citet{brahm2015}}
\tablenotetext{c}{Adopted from \citet{rabus2016}}
\tablenotetext{d}{Adopted from \citet{eigmuller2017}}
\tablenotetext{e}{Linear limb-darkening ($u$) and gravity-darkening ($g$) coefficients interpolated from \citet{claret2011}}
\end{deluxetable*}
\subsubsection{K2-31b}
Given its short period, high mass and large scaled radius, K2-31b was the best candidate among the \textit{K2}-discovered planets to have a detectable phase curve. Our analysis of the data indeed shows the presence of a robust phase curve with all of the components present. K2-31b is a grazing hot Jupiter with a mass of $\sim$1.8 M$_{\mathrm{Jup}}$ discovered by \citet{grziwa2016}. Due to the grazing nature of the transit, which introduces a degeneracy between the scaled radius and the inclination, the radius of the planet has not been very precisely determined. Our MCMC yielded a scaled planetary radius of 0.168$^{+0.042}_{-0.023}$, which, assuming a host star of radius 0.78 $R_\odot$, translates into a physical radius of 1.28$^{+0.32}_{-0.17}~R_\mathrm{Jup}$.
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.98\textwidth]{K2_31_CornerPlot.pdf}
\caption{\label{fig:k2_31_cornerplot} Corner plot showing the posterior distribution and covariance among the different fit parameters of K2-31b. A strong degeneracy is present among the scaled radius ($R_p/R_*$), geometric albedo ($A_g$), impact parameter ($b$), and dayside temperature ($T_{Day}$), while the mass of the planet exhibits minimal correlation with most of the parameters. The errors from the stellar parameters have not been propagated.}
\end{figure*}
For our analysis, we use the \texttt{EVEREST} detrended light curve and obtain a photometric mass ($M_{phot}$) of 2.09$\pm0.18$ $M_{\mathrm{Jup}}$, consistent within 2$\sigma$ with the RV-based mass ($M_{RV}$) of 1.774$\pm$0.079 $M_{\mathrm{Jup}}$ reported in the discovery paper. \citet{grziwa2016} point out that, because grazing transits are often interpreted as eclipsing binaries, planets like K2-31b are usually neglected for RV follow-up. We show here that a target's planetary nature can be revealed by estimating its mass through the optical phase curve. In the discovery paper, \citet{grziwa2016} constrain the upper limit on the geometric albedo of K2-31b to 0.40, due to the absence of any visible secondary eclipse. By fitting for the whole phase curve, we are able to constrain the geometric albedo to below 0.047 at 3$\sigma$ confidence, a dark planet even by the standard of hot Jupiters \citep{esteves2015, angerhausen2015}. Yet the grazing nature of the transit leads to strong degeneracies among the parameters (see \autoref{fig:k2_31_cornerplot}), and the underlying assumptions for the thermal emission and the Bond albedo leave room for unaccounted systematic errors in this estimate. The degeneracy among the different parameters, however, is not as strong for the observed photometric mass. A ringing feature appears to be present in the residuals, although it is not a very prominent one.
\begin{figure*}[ht!]
\centering
\includegraphics[width=1.02\textwidth]{CombinedFigure1.pdf}
\caption{\label{fig:ComboFigure1} Upper-left: Transit fit using the best fit parameters for K2-31b and the residuals. Upper-right: Phase curve signal using the best obtained parameters for K2-31b with its different components. The data were binned into a total of 75 bins; thus each bin corresponds to 0.37 hours. Lower-left: Transit fit using the best fit parameters for HATS-9b and the residuals. Lower-right: Phase curve signal using the best obtained parameters for HATS-9b with its different components and the residuals. The data were binned into a total of 75 bins; thus each bin corresponds to 0.55 hours.}
\end{figure*}
\subsubsection{HATS-9}
HATS-9b is a hot Jupiter that was discovered in the Campaign 7 \textit{K2} field \citep{brahm2015}. \citet{bayliss2018} updated the system parameters using the \textit{K2} light curve, but did not report the phase curve. We use the \texttt{K2SFF} detrended light curve for this particular analysis given its precision. Examining the light curve for a potential phase curve, we detect the ellipsoidal variation of HATS-9b, which yielded a photometric mass of 0.98$\pm0.16$ M$_{\mathrm{Jup}}$, consistent within 1$\sigma$ with the reported RV mass of 0.837 $\pm0.029$ M$_{\mathrm{Jup}}$. Fitting for the reflective and thermal components of the phase curve yielded a geometric albedo of 0.027$^{+ 0.015}_{-0.017}$. The fit using the best parameters is shown in \autoref{fig:ComboFigure1}, and the corresponding parameters are reported in \autoref{table:phasefit_param}. Ringing effects appear to be more prominent in the residuals of HATS-9b.
\begin{figure*}[ht!]
\centering
\includegraphics[width=1.02\textwidth]{CombinedFigure2.pdf}
\caption{\label{fig:ComboFigure2} Upper-left: Transit fit using the best fit parameters for HATS-12b and its residuals. Upper-right: Phase curve signal using the best obtained parameters for HATS-12b with its different components. The data were binned into a total of 75 bins; thus each bin corresponds to 0.91 hours. Lower-left: Transit fit using the best fit parameters obtained for K2-107b and its residuals. Lower-right: Phase curve signal using the best obtained parameters for K2-107b with its different components and the residuals. The data were binned into a total of 75 bins; thus each bin corresponds to 0.99 hours.}
\end{figure*}
\subsubsection{HATS-12b}
HATS-12b \citep{rabus2016} was another hot Jupiter discovered in the
\textit{K2} Campaign 7 field. An abrupt jump in the data was observed in both detrended light curves around BJD - 2457333.1 (see \autoref{fig:anomaly}), possibly due to a change in the pixel responsivity \citep{jenkins2010}. We use the \texttt{K2SFF} detrended light curves for the analysis, and corrected for the jump by using a linear regression at the break point. HATS-12b was marked as one of the promising targets by our SNR metric, and we indeed see a distinct phase curve emerge in the phase-folded light curve. The phase curve exhibits a particularly prominent ellipsoidal variation, fitting for which leads to a mass of 2.13$^{+0.46}_{-0.43}$ $M_{\mathrm{Jup}}$, consistent within 1$\sigma$ with the reported RV mass of 2.38$\pm 0.11$ $M_{\mathrm{Jup}}$ \citep{rabus2016}. The good agreement comes as a slight surprise, given that HATS-12 has a stellar mass of 1.489$\pm$ 0.071 $M_\odot$, which lies above 1.4 $M_\odot$, a threshold beyond which the tidal-equilibrium approximation assumed for calculating the ellipsoidal variation amplitude may not strictly hold \citep{pfahl2008}. We find a geometric albedo of 0.07$\pm 0.04$, typical for hot Jupiters. The fit using the best set of parameters is shown in \autoref{fig:ComboFigure2}, and the corresponding parameters are reported in \autoref{table:phasefit_param}. Note that the secondary eclipse primarily comes as a constraint from the use of Model I, which is favored among the three models by the BIC.
\subsubsection{K2-107b}
K2-107b was reported in \citet{eigmuller2017}, where a few nearby companions positioned in the \textit{K2} aperture were detected using high-resolution imaging. However, the combined dilution factor correction due to these companions is only 0.005$\pm$0.001 of the \textit{K2} aperture flux, and was therefore ignored in this analysis. We use the \texttt{EVEREST} detrended light curve for the fitting, which yields a photometric mass of 1.57$^{+0.26}_{-0.27}$ M$_{\mathrm{Jup}}$, within 3$\sigma$ of the reported RV mass of 0.84$\pm$0.08 M$_{\mathrm{Jup}}$. The estimated geometric albedo is 0.102$\pm0.023$. The fit of K2-107b is shown in \autoref{fig:ComboFigure2}, and the parameters are reported in \autoref{table:phasefit_param}. Some structural features are present in the residuals, potentially because of ringing effects from the filtering process. As in HATS-12b, the secondary eclipse comes as a constraint from the use of Model I. The larger scatter of the data points during the secondary eclipse potentially results from the masking of these points during the filtering process.
\subsubsection{K2-131b}
K2-131b is an ultra-short period planet with a period of 0.3693 days, reported in \citet{dai2017a}. For this work, we use the light curve from the \texttt{EVEREST} pipeline, and only from the second half of Campaign 10, as the first half of the data shows strong systematic effects. In the phase-folded data, we detect the secondary eclipse at 25.4$\pm$8.2 ppm. We only fit for the reflective and thermal components, as the ellipsoidal and Doppler beaming signals from such small planets are not detectable. Unlike for the hot Jupiters in our list, we use an altered relation between the geometric albedo and the Bond albedo, $A_B = A_g$. The modified relation allows for geometric albedo values greater than 0.66, and such a deviation from the traditional Lambertian relation ($A_B$ = $\frac{3}{2} A_g$) is expected, as the latter tends to overestimate the Bond albedo \citep{dyudina2016}. The fits showing the best obtained parameters are shown in \autoref{fig:ComboFigure3}, and the parameters are reported in \autoref{table:secondaryeclp_param}. Note that a geometric albedo of 0.27$^{+0.31}_{-0.27}$ was estimated for K2-131b.
\begin{figure*}[ht!]
\centering
\includegraphics[width=1.02\textwidth]{CombinedFigure3.pdf}
\caption{\label{fig:ComboFigure3} Upper-right: Transit fit using the best parameter for K2-131b. Upper-left: Phase curve modulation showing reflective and thermal component in K2-131b with a secondary eclipse depth at 25.4$\pm$8.2 ppm. A total of 50 bins are used which corresponds to bin size of 0.14 hours. Lower-right: Transit fit using the best parameter for K2-106b. Lower-left: Phase curve modulation showing reflection and thermal component modulation in the folded light curve of K2-106b with a secondary eclipse depth at 23.5$^{+4.9}_{-5.1}$ ppm. A total of 50 bins are used which corresponds to bin size of 0.22 hours.}
\end{figure*}
\subsubsection{K2-106b}
K2-106 is a multi-planetary system with an ultra-short period planet with a period of 0.57133 days. It was first identified as a candidate in \citet{adams2017}, and subsequent RV campaigns verified the planetary nature of the signal, with masses reported in \citet{guenther2017} and \citet{sinukoff2017}. For our analysis, we adopted the values from the latter. We detect the secondary eclipse at 23.5$^{+4.9}_{-5.1}$ ppm. The reflective and thermal components constitute the dominant part of the phase curve (see \autoref{fig:ComboFigure3}), and like K2-131b, we do not fit for either the ellipsoidal variation or Doppler beaming due to their negligible expected contributions. In order to estimate the temperature, we similarly use the modified relation, i.e., $A_B = A_g$, which yielded a geometric albedo of 0.62$^{+0.22}_{-0.34}$. The fit itself is shown in \autoref{fig:ComboFigure3}, and the corresponding fit parameters are reported in \autoref{table:secondaryeclp_param}.
\section{Secondary Eclipse}
\label{sec:SecEclip}
\begin{figure}[ht!]
\centering
\includegraphics[width=0.49\textwidth]{HATS_11b_K2SFF.pdf}
\caption{\label{fig:hats_11b} Transit fit (left) is shown for HATS-11b using best fit parameters which are reported in \autoref{table:secondaryeclp_param}. The secondary eclipse (right) is detected at 62$\pm$12 ppm.}
\end{figure}
To our knowledge, K2-260b is the only planet up until now with a robust secondary eclipse detection among the planets discovered by \textit{K2} \citep{johnson2018}. Since secondary eclipses characterize the geometric albedo as well as the temperature contrast through the observation of the eclipse depth alone, they are less prone to periodic or quasi-periodic noise. We therefore uniformly looked for secondary eclipse signals for all planets reported in \autoref{table:best_candidates}.
For our secondary eclipse detection pipeline, we masked the data around the transit as well as around phase 0.5, where we expect the secondary eclipse to occur. To the rest, we fitted third-degree polynomials over windows of 0.75 days and phase-folded the data to build up the signal. The rest of the pipeline includes the same iterative outlier detection technique as was implemented for the phase curves. In this fashion, we detected a secondary eclipse in the \texttt{K2SFF} light curve of HATS-11b at 62$\pm$12 ppm (see \autoref{fig:hats_11b}). We use the depth to solve for the dayside temperature as well as the albedo of HATS-11b. The fit parameters are reported in \autoref{table:secondaryeclp_param}.
The detection of a secondary eclipse without a phase curve, as is the case for HATS-11b, raises an interesting question: why is there a secondary eclipse without a phase curve? HATS-11b has an easy-to-model stellar continuum that makes extracting a phase curve rather easy, although the signal may have been distorted during one of the many data processing steps. It could also be that the source of the phase curve is predominantly thermal, and that efficient heat transport between the day- and night-side significantly weakens the phase curve signal. Another explanation could be that the planetary atmosphere, at the depth the phase curve probes, is rotating at a pseudo-synchronous rate \citep{adams2018}, washing out the signal as we phase-fold the light curve. Note that using the \texttt{EVEREST} light curve for HATS-11b, which has a precision level comparable to the \texttt{K2SFF} light curve, the secondary eclipse depth was detected at the level of 36$\pm$11 ppm, still a 3$\sigma$ detection, while a robust phase curve remains absent.
\section{Non-Detection}
\label{sec:nondetection}
For the most part, our formulated metric is expected to perform as well as, if not better than, previously used metrics such as a/R$_*<$10 in \citet{esteves2013}, and R$_p>$4R$_\oplus$, P$<$10 days and V$_{\mathrm{mag}}<$15 in \citet{angerhausen2015}. However, our precision approximation relation deviates from the actually observed values for fainter stars, which in the future could be improved by using the empirically obtained noise floor. Similarly, our SNR calculation could underestimate the signal for short-period planets due to the potentially non-negligible contribution from tidal heating of the planets.
\begin{figure}[ht]
\includegraphics[width=0.49\textwidth]{ComparativePeriodogram.pdf}
\caption{\label{fig:comparativeperiodogram} Lomb-Scargle periodograms of the time series of K2-31b and HAT-P-56b after the removal of outliers and transit points. Note the presence of significant power in HAT-P-56 near and around the planetary period, while such a signal is absent in the case of K2-31b. The 6-hour thruster-firing frequency exhibits minimal power in both targets.}
\end{figure}
The presence of strong stellar activity can make the process of disentangling the phase curve difficult. The degree of difficulty depends particularly on the separation between the frequency (and harmonics) of the stellar modulation signals and the frequencies of the phase curve signal. For instance, for targets for which the phase curve signal was successfully extracted, such as K2-31b, there is little overlap between the power from stellar rotation and the phase curve (see \autoref{fig:comparativeperiodogram}). On the other hand, for targets such as WASP-85Ab, K2-29b and K2-100b, prominent stellar activity dominates the frequency spectrum, thereby making phase curve extraction difficult. For other targets such as WASP-118b and HAT-P-56b, the presence of stellar pulsations, potentially induced by the planet, similarly complicates the disentangling process \citep{huang2015, mocnik2017}. A clear sign of stellar pulsation has been observed in a similar spectral type but highly eccentric system, HAT-P-2b \citep{dewit2017}.
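The comparison in \autoref{fig:comparativeperiodogram} can be reproduced with a few lines; the sketch below uses \texttt{astropy}'s Lomb-Scargle implementation (our analysis used \texttt{gatspy}, which offers an analogous interface) to measure the power near the planetary frequency. The frequency grid and matching tolerance are illustrative.
\begin{verbatim}
import numpy as np
from astropy.timeseries import LombScargle

def planet_band_power(time, flux, period_days):
    frequency, power = LombScargle(time, flux).autopower(
        minimum_frequency=0.05, maximum_frequency=10.0)  # cycles/day
    f_planet = 1.0 / period_days
    near = np.abs(frequency - f_planet) < 0.05 * f_planet
    return power[near].max(), frequency, power
\end{verbatim}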
As for cases such as K2-137b, which has a large estimated ellipsoidal variation, no phase curve was detected because the mass we used was the upper limit reported in the discovery paper, leading to an over-estimation of the expected signal \citep{smith2018}. Similarly, for targets such as K2-22b, a disintegrating planetary system \citep{ojeda2015}, the assumptions of our model do not hold. As for some targets such as the WASP-47 system, we detect a phase curve for the innermost planet at a significance level of 2$\sigma$, which we have not reported in this paper.
\section{Discussion}
\label{sec:Discussion}
\subsection{Model Performance}
The photometric masses, while less precise than their RV counterparts, are consistent with them at the 3$\sigma$ level in all six cases (the four new hot Jupiters plus Qatar-2b and WASP-104b). Hence, the model we use appears to be well calibrated. In fact, the model has been tested over a wide range of masses, from planetary to stellar, although there are some cases where discrepancies between the model and the data do exist \citep{faigler2015b, eigmuller2018}. Such inconsistencies in the photometric mass in the case of hot Jupiters can arise due to a thermal component with a phase shift resulting from super-rotation \citep{faigler2015a}. When we independently fit for the ellipsoidal variation and Doppler boosting, the mass ranges derived from the Doppler effect were not as consistent with the RV mass as those obtained from the ellipsoidal variation. For instance, the median of the ellipsoidal-mass distribution was offset from the Doppler mass in all cases, i.e., for Qatar-2b, WASP-104b, K2-31b, HATS-9b, and K2-107b. Such discrepancies could arise because the Doppler effect is oftentimes much smaller than its ellipsoidal counterpart, and is at the same time affected by the contribution of a phase-shifted thermal component or asymmetries in the reflective component. We refrained from using more complex models given that the precision of the data did not support such choices.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.49\textwidth]{AllanVariance.pdf}
\caption{\label{fig:allan_variance} Observed RMS vs. bin size in the residuals obtained by fitting phase curve models to K2-260b and K2-31b. The binned RMS of K2-31b follows the expected power law of index $-0.5$, whereas that of K2-260b strongly deviates from it. The red lines are the idealized cases drawn for both data sets.}
\end{figure}
\subsection{Signal Fidelity}
\label{subsec:fidelity}
\textit{K2} has been used in studies related to asteroseismology \citep{lund2016}, white dwarf pulsations \citep{hermes2017}, AGN variability \citep{aranzana2018}, and stellar rotation \citep{angus2016, esselstein2018}. These studies have shown that the \textit{K2} pipeline can retain astrophysical variability signals if the period under consideration is less than 15 days \citep{vancleve2016}. As all of our targets have precisely known periods which fall well under 15 days, we can confidently extract the relevant signal. The major obstacle in phase curve extraction is instead the other dominant forms of accompanying stellar variation.
We performed additional tests of the fidelity of the signal. For instance, K2-260b is close to spin-orbit resonance. Despite reporting a prominent secondary eclipse, \citet{johnson2018} did not report a phase curve for the planet. We ran our pipeline and fitted the phase-folded light curve, which, however, yielded a mass inconsistent with the reported RV value by more than 3$\sigma$. The residuals from the fit show correlated noise that stands out in RMS vs. bin size plots compared with the residuals from targets such as K2-31b, for which the phase curve is reported (see \autoref{fig:allan_variance}). All of our phase curve targets have residuals that closely follow the expected power law.
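The binned-RMS diagnostic is straightforward to reproduce; in the absence of correlated noise, the RMS of the residuals should scale as $N^{-1/2}$ with the number of points per bin. A minimal sketch, assuming evenly spaced residuals:
\begin{verbatim}
import numpy as np

def rms_vs_binsize(residuals, max_bin=200):
    bins, rms = [], []
    for n in range(1, max_bin):
        m = (len(residuals) // n) * n
        binned = residuals[:m].reshape(-1, n).mean(axis=1)
        bins.append(n)
        rms.append(np.std(binned))
    bins, rms = np.array(bins), np.array(rms)
    expected = rms[0] / np.sqrt(bins)   # white-noise expectation
    return bins, rms, expected
\end{verbatim}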
\subsection{Ultra-Short Period Planets}
With our discoveries, K2-131b and K2-106b now join the group of other ultra-short period planets such as Kepler-78b \citep{ojeda2013}, Kepler-10b \citep{esteves2013}, 55 Cnc e \citep{demory2016} and K2-141b \citep{malavolta2018} with a detected secondary eclipse. Note that all of these planets are rocky super-Earths with high densities and high geometric albedos, possibly due to the presence of refractory surfaces. We used the modified relation A$_g$=A$_B$ to estimate the geometric albedo for all of the super-Earth targets, to allow geometric albedo values greater than 0.66. Additionally, we suspect that there might be a non-negligible additional source of heating, such as tidal heating, present in these systems, which scales strongly with the distance from the host star ($a$) and the eccentricity ($e$) as follows:
\begin{equation}
H = \frac{63}{4} \frac{(GM_*)^{3/2} M_* R_p^5}{Q_p} a^{-15/2} e^2,
\end{equation}
where $H$ is the tidal heating rate, $M_*$ is the stellar mass, $R_p$ is the planetary radius, and $Q_p$ is the tidal dissipation parameter \citep{jackson2008}. The fact that most of these planets reside in multi-planetary systems suggests that mechanisms similar to those acting on Io, a Galilean moon of Jupiter, may also be acting on these planets \citep{peale1979, demory2016}. While tidal heating would increase the overall equilibrium temperature of these planets, thereby increasing the nightside contribution, the precision of the \textit{K2} data does not allow us to explore such effects. Yet, asymmetries between the day- and night-side can still occur due to mechanisms such as volcanism \citep{gelman2011}, which would also contribute to the phase curve signals.
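For reference, this heating rate is easily evaluated directly; the sketch below uses SI units via \texttt{astropy}, with purely illustrative values for an ultra-short period super-Earth (the tidal dissipation parameter $Q_p$ in particular is highly uncertain).
\begin{verbatim}
import numpy as np
from astropy import units as u
from astropy.constants import G, M_sun, R_earth, au

def tidal_heating(m_star, r_planet, a, e, q_p):
    # H = (63/4) (G M_*)^{3/2} M_* R_p^5 / Q_p * a^(-15/2) e^2
    h = ((63.0 / 4.0) * (G * m_star) ** 1.5 * m_star
         * r_planet**5 / q_p * a ** (-7.5) * e**2)
    return h.to(u.W)

# Illustrative numbers only:
print(tidal_heating(0.9 * M_sun, 1.5 * R_earth, 0.01 * au, 0.01, 100.0))
\end{verbatim}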
\subsection{Spectroscopic Follow-Up}
Phase curves can be used to infer the existence of atmospheres on close-in hot planets through the detection of offsets of the phase curve peaks \citep{shporer2015, demory2016, angelo2017}. Similarly, the geometric albedo can be linked to atmospheric processes such as clouds, which are known to play important roles in transmission spectra \citep{kreidberg2014, sing2016}. Currently, there are only a few targets with reported geometric albedos which have been followed up with spectroscopic observations. However, this will drastically change as \textit{TESS} discovers a large sample of optimal targets. This could enable screening out the best planetary candidates for follow-up atmospheric studies using the geometric albedo as a guideline.
\begin{deluxetable*}{lcccc}
\tablecaption{\label{table:secondaryeclp_param} Stellar and Planetary Parameters for K2-131b, K2-106b, and HATS-11b}
\tablehead{
\colhead{Parameter} &
\colhead{Unit} &
\colhead{K2-131b} &
\colhead{K2-106b} &
\colhead{HATS-11b}
}
\startdata
\multicolumn{2}{l}{\textbf{Stellar Parameters}}\\
$M_*$ & $M_\odot$ & 0.84$\pm$0.03$^{a}$ & 0.92$\pm$0.03$^{b}$ & 1.000$\pm$0.060$^{c}$ \\
$R_*$ & $R_\odot$ & 0.81$\pm$0.03$^{a}$ & 0.95$\pm$0.05$^{b}$ & 1.444$\pm$0.057$^{c}$ \\
$T_{e\!f\!f}$ &K& 5200$\pm$100$^{a}$ & 5496$\pm$46$^{b}$ & 6060$\pm$150$^{c}$\\
$[$Fe/H$]$ & dex & -0.02$\pm$0.08$^{a}$ & 0.06$\pm$0.03$^{b}$& -0.390$\pm$0.060$^{c}$ \\
log $g_*$ & cgs & 4.62$\pm$0.10$^{a}$ & 4.42$\pm$0.05$^{b}$ & 4.118$\pm$0.026$^{c}$ \\
$u$ & -& 0.6604$^{d}$ & 0.6294$^{d}$ & 0.5467$^{d}$\\
$g$ & -& 0.4737$^{d}$& 0.4181$^{d}$ & 0.3199$^{d}$\\
\hline
\textbf{Pipeline}&-& \texttt{EVEREST} & \texttt{K2SFF} & \texttt{K2SFF}\\
\hline
\multicolumn{2}{l}{\textbf{Orbital Parameters}}\\
Period & Days & 0.3693038$\pm$0.0000091$^{a}$ & 0.571336$\pm$0.000020$^{b}$ & 3.6191613$\pm$ 0.0000099$^{c}$\\
$T_0$ - 2450000 & BJD & 7582.93620$^{+0.00031}_{-0.00033}$ & 6226.43381$^{+0.00040}_{-0.00039}$ & 6574.96536$^{+0.00017}_{-0.00016}$\\
$R_p/R_*$ & - & 0.01968$^{+0.0016}_{-0.0006}$ & 0.01584$^{+0.00086}_{-0.00036}$& 0.10707 $\pm$0.00013 \\
$a/R_*$ &- & 2.61$^{+0.23}_{-0.58}$ & 2.73$^{+0.16}_{-0.47}$ & 7.006$^{+0.010}_{-0.015}$ \\
$e$ & - & 0 (fixed) & 0 (fixed) & 0 (fixed) \\
$\omega$ & Deg & 90 (fixed) & 90 (fixed) & 90 (fixed)\\
$b$ & - & 0.43$^{+0.32}_{-0.31}$ & 0.37$^{+0.31}_{-0.25}$ & 0.036$^{+0.039}_{-0.025}$\\
Inclination & Deg & 80.5$^{+7.0}_{-12.4}$ & 82.3$^{+5.4}_{-9.7}$ & 89.70$^{+0.21}_{-0.32}$\\
$u_1$ & - & 0.511$^{+0.077}_{-0.054}$ & 0.451$^{+0.072}_{-0.063}$ & 0.383$^{+0.007}_{-0.008}$\\
$u_2$ & - & 0.141$^{+0.077}_{-0.056}$ & 0.228$^{+0.068}_{-0.068}$ & 0.2000$^{+0.0087}_{-0.0039}$ \\
\hline
\multicolumn{2}{l}{\textbf{Secondary Eclipse Fit Parameters}}\\
$A_{Re\!f+T\!h}$ & ppm & 12.7$\pm$4.2 & 11.7$\pm$2.5 & -\\
$\Delta$ & ppm & 25.4$\pm$8.2 & 23.5$^{+4.9}_{-5.1}$ & 62$\pm$12\\
$A_g$ & - & 0.27$^{+0.31}_{-0.27}$ & 0.62$^{+0.22}_{-0.34}$ & 0.249$^{+0.057}_{-0.058}$\\
$T_{Day}$ & K & 2300$^{+740}_{-425}$ & 2200$^{+630}_{-470}$ & 1704$^{+54}_{-60}$\\
$T_{Eq}$ & K & 2010$^{+450}_{-270}$ & 1800$^{+470}_{-375}$ & 1428$^{+44}_{-49}$
\enddata
\tablenotetext{a}{Adopted from \citet{dai2017a}}
\tablenotetext{b}{Adopted from \citet{sinukoff2017}}
\tablenotetext{c}{Adopted from \citet{rabus2016}}
\tablenotetext{d}{Interpolated from \citet{claret2011}}
\end{deluxetable*}
\subsection{Future Prospects}
With the launch of \textit{TESS} and upcoming missions like the CHaracterising ExOPlanet Satellite \citep[\textit{CHEOPS};][]{broeg2013}, the James Webb Space Telescope \citep[\textit{JWST};][]{beichman2014}, PLAnetary Transits and Oscillations of stars \citep[\textit{PLATO};][]{rauer2014}, and the Atmospheric Remote-sensing Exoplanet Large-survey \citep[\textit{ARIEL};][]{tinetti2016}, there will be plentiful opportunities for phase curve studies. In fact, the phase curve of WASP-18b has already been reported in Sector 2 \textit{TESS} data \citep{shporer2018}, and more will certainly be detected over the course of the mission. These studies will provide an unprecedented opportunity to learn about exoplanet atmospheres, while allowing us to refine our models with more precise data, and potentially disentangle the often degenerate reflective and thermal components \citep{placek2016}.
\section{Conclusion}
We have significantly increased the number of phase curves discovered with \textit{K2}: four hot Jupiter phase curves that yield photometric masses within 3$\sigma$ of the reported RV-based masses, and two additional short period super-Earths with 3$\sigma$-level secondary eclipse detections along with corresponding phase curves. The availability of precise light curves, as well as the use of a more aggressive filtering procedure tested with signal injection, facilitated our venture. The consistency of the obtained masses, although for a small but non-negligible number of planets, raises the possibility of developing a tool for preliminary planetary signal validation for \textit{TESS} candidates. As we stand on the cusp of discovering many more planets, and will re-observe many of the known hot Jupiters, an opportunity will be presented to refine our phase curve models, build a larger sample of planets with detected phase curves, and open up novel lines of inquiry. Such possibilities and others should strongly motivate the pursuit of phase curves, as they will provide preliminary atmospheric characterization and mass estimation for many systems without investing any additional resources.
\vspace{0.5cm}
\noindent {\bf Acknowledgments:} This work includes data taken by \textit{K2}, and the final detrended light curves from the \texttt{K2SFF} as well as the \texttt{EVEREST} pipelines. The authors would like to thank Dr. Andrew Vanderburg for discussions on the \texttt{K2SFF} pipeline products and for kindly providing the detrended light curve of K2-141b. The authors would also like to thank Dr. Rodrigo Luger for discussions on the \texttt{EVEREST} pipeline. The authors would also like to thank the anonymous referee for insightful comments. P. Niraula acknowledges the support of the Grayce B. Kerr Fellowship Fund at MIT. S. Redfield and P. Niraula acknowledge support from the National Science Foundation through the Astronomy and Astrophysics Research Grant AST-1313268. D. Serindag acknowledges support from the European Research Council under the European Union's Horizon 2020 research and innovation program under grant agreement No. 694513. D. Serindag also acknowledges the Undergraduate Research Fellowship awarded by the NASA CT Space Grant Consortium in support of this research.
\software {\tt batman} \citep{kriedberg2015}, {\tt emcee} \citep{mackey2013}, {\tt gatspy} \citep{vanderplas2015}, {\tt lmfit} \citep{newville2016}, {\tt matplotlib} \citep{matplotlib}.
Pre-trained language representations such as GPT \citep{radford2018improving}, BERT \citep{devlin2019bert} and XLNet \citep{yang2019xlnet}, have shown substantial performance improvements using self-supervised training on large-scale corpora \citep{dai2015semi, peters2018deep, radford2018improving, liu2019roberta}. More interestingly, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering \citep{rajpurkar2016squad, rajpurkar2018know}, and language inference \citep{bowman2015large, williams2017broad}, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful \citep{devlin2019bert}.
However, along with the significant performance enhancement comes a significant increase in the parameter volume and complexity of these pre-trained language representations.
As a result, it becomes difficult to deploy these large-scale language representations on real-life,
computation-constrained devices such as mobile phones and edge devices. Throughout this paper, we attempt to answer the following questions.
\textbf{Question 1}: Is it possible to compress
large-scale language representations such as BERT via weight pruning?
\textbf{Question 2}: How would the weight-pruned, pre-trained model affect the performance of the downstream multi-task transfer learning objectives?
The problem of weight pruning has been studied under many types of deep neural networks (DNNs) \citep{goodfellow2016deep}, such as AlexNet \citep{krizhevsky2012imagenet}, VGG \citep{simonyan2014very}, ResNet \citep{he2016deep}, and MobileNet \citep{howard2017mobilenets}. It is shown that
weight pruning can result in a notable reduction in the model size.
A suite of weight pruning techniques have been developed, such as non-structured weight pruning \citep{han2015learning}, structured weight pruning \citep{wen2016learning}, filter pruning \citep{li2016pruning}, channel pruning \citep{he2017channel}, ADMM-NN \citep{ren2019admm} and PCONV \citep{ma2019pconv} to name a few.
Different from pruning CNN-type models, pruning
BERT must consider not only the metrics on the pre-training task, but also the downstream multi-task transfer learning objectives.
Thus, the desired weight pruning needs to preserve
the capacity of transfer learning from a sparse pre-trained model to the downstream fine-tuning tasks.
In this work, we investigate irregular weight pruning techniques on the BERT model, including the iterative pruning method \citep{han2015learning} and the one-shot pruning method \citep{liu2018rethinking}. However, these methods fail to converge to a sparse pre-trained model without incurring a significant accuracy drop, or in many cases do not converge at all (see supporting results in the Appendix).
Note that the aforementioned weight pruning techniques are built on different sparsity-promoting regularization schemes \citep{han2015learning, wen2016learning}, e.g., lasso regression ($\ell_{1}$ regularization) and ridge regression ($\ell_{2}$ regularization). {We find that the failure of previous methods on weight pruning of BERT is possibly due to the inaccurate sparse pattern learnt from the simple $\ell_1$- or $\ell_2$-based sparsity-promoting regularizer. In fact, the difficulty of applying regularization to generate weight sparsity coincides with the observation of \citet{loshchilov2018decoupled} on the incompatibility of conventional weight decay ($\ell_{2}$ regularization) for training super-deep DNNs such as BERT.}
It is pointed out that the main reason is that the direct optimization of a regularization penalty term causes divergence from the original loss function and has a negative effect on the effectiveness of the gradient-based update. To mitigate this limitation, \citet{loshchilov2018decoupled} modified the regularization in Adam by \emph{decoupling the weight decay regularization from the gradient-based update}, and achieved state-of-the-art results on large-scale language pre-training and downstream multi-task transfer learning objectives \citep{devlin2019bert}.
\vspace{-5mm}
\begin{figure}[h!]
\centering
\includegraphics[width=11.8cm]{images/overview.pdf}
\vspace{-5mm}
\caption{Overview of pruning BERT using the Reweighted Proximal Pruning algorithm and then fine-tuning on a wide range of downstream transfer learning tasks. Through RPP, we find the universal sparsity pattern $\mathcal { S }_{\hat{\mathbf{w}}}$. The BERT model pruned with RPP can then be fine-tuned on the downstream transfer learning tasks.}
\label{fig:overview}
\end{figure}
In this work, we aim at a more accurate universal sparse pattern search \textcolor{rebuttal_color}{(see Figure\,\ref{fig:overview} for an overview of our approach)}, motivated by our experiments and the conclusions of \citet{loshchilov2018decoupled}.
{We propose Reweighted Proximal Pruning (RPP), which integrates reweighted $\ell_1$ minimization \citep{candes2008enhancing} with the proximal algorithm \citep{parikh2014proximal}.
RPP consists of two parts: reweighted $\ell_{1}$ minimization and the proximal operator. Reweighted $\ell_{1}$ minimization serves as a better means of generating sparsity in DNN models, matching the nature of weight pruning more closely than $\ell_{1}$ regularization does.
Thanks to the closed-form solution of the proximal operation on a weighted $\ell_1$ norm, in RPP the sparsity pattern search can be decoupled from computing the gradient of the training loss. In this way, the aforementioned pitfall of prior weight pruning techniques on BERT can be avoided. We show that RPP achieves effective weight pruning on BERT for the first time, to the best of our knowledge. Experimental results demonstrate that the proximally pruned BERT model keeps high accuracy on a wide range of downstream tasks, including SQuAD \citep{rajpurkar2016squad, rajpurkar2018know} and GLUE \citep{wang2018glue}.
}
We summarize our contributions as follows.
\begin{itemize}
\item We develop the pruning algorithm Reweighted Proximal Pruning (RPP), which achieves the first effective weight pruning result on a large pre-trained language representation model, BERT. RPP achieves $59.3\%$ weight sparsity without performance loss on both pre-training and fine-tuning tasks.
\item We spotlight the relationship between the pruning ratio of the pre-trained DNN model and the performance on the downstream multi-task transfer learning objectives. We show that many downstream tasks, with the exception of SQuAD, allow a pruning ratio of at least $80\%$, compared with $59.3\%$ under the more challenging SQuAD task.
\item \textcolor{rebuttal_color}{We observe that as the pruning ratio of the pre-trained language model increases, the performance on the downstream transfer learning tasks decreases. The extent of this decline varies across the downstream transfer learning tasks. However, the proposed RPP approach is able to achieve a consistently high pruning ratio compared to iterative pruning based methods.
}
\item We show that, unlike weight pruning in image classification tasks, \textcolor{rebuttal_color}{RPP helps to find structured sparsity patterns in the transformer blocks used in BERT.
Moreover, we peer into the effect of network pruning on the language representation embedded in BERT.
}
\end{itemize}
\section{Related Work}
\paragraph{BERT and prior work on model compression}
BERT \citep{devlin2019bert} is a self-supervised approach for pre-training a deep transformer encoder \citep{vaswani2017attention}, before fine-tuning it for particular downstream tasks. Pre-training of BERT optimizes two training objectives $-$ masked language modeling (MLM) and next sentence prediction (NSP) $-$ which require a large collection of unlabeled text. We use BooksCorpus (800M words) \citep{zhu2015aligning} and the English instance of Wikipedia (2,500M words) as the pre-training corpus, the same as \citet{devlin2019bert}. For detailed information about the BERT model, readers can refer to the original paper \citep{devlin2019bert}.
\citet{michel2019sixteen} mask some heads in multi-head attention modules in BERT, and then evaluate the performance on the machine translation task. Similarly,
\citet{hao2019multi} eliminate certain heads in the multi-head attention module. First, this limited previous work considers neither the pre-training metrics nor the other downstream multi-task transfer learning objectives. It only considered the specific machine translation task (out of over 10 transfer tasks), which is just one specific fine-tuning setting and is of limited relevance for the universal pre-trained language representation (BERT).
Second, the multi-head attention module uses a weight sharing mechanism \citep{vaswani2017attention}, so masking some heads does not reduce the weight volume. Finally, multi-head attention allows the model to jointly attend to information from different representation subspaces at different positions, whereas a single attention head inhibits this effect \citep{vaswani2017attention}. As a result, masking some heads in multi-head attention harms the weight sharing mechanism without reducing the weight volume. In summary, the limited previous work in this area does not provide an effective weight pruning method for BERT. \citet{shen2019q} report quantization results for the BERT model, which are orthogonal to our work and can be combined with it for further compression/acceleration.
\paragraph{Reweighted $\ell_{1}$ and proximal algorithm }
\citet{candes2008enhancing} present
the reweighted $\ell_{1}$ algorithm and demonstrate its remarkable performance and broad applicability in the areas of statistical estimation, error correction and image processing. Proximal algorithms can be viewed as an analogous tool for non-smooth, constrained, large-scale, or distributed versions of these problems \citep{parikh2014proximal}. To the best of our knowledge, ours is \textcolor{rebuttal_color}{the first work that applies reweighted $\ell_{1}$ minimization to network compression, particularly for BERT pruning.
}
\section{Reweighted Proximal Pruning for large-scale language representation during pre-training}
Pruning for pre-trained language representations should not only consider the performance of pre-training objectives, but also make allowance for the downstream fine-tuning transfer learning tasks.
Let $f_{i}$ denote the loss function of the network for downstream task $\mathcal { T } _ { i } \sim p ( \mathcal { T } )$, where $p(\mathcal{T})$ denotes the distribution of tasks. Let $\mathbf{w}$ denote the parameters of the pre-trained model (pre-training in BERT), and $\mathbf{z}_{i}$ denote the $i$-th task-specific model parameters (fine-tuning in BERT). The downstream tasks have separate fine-tuned models, even though they are initialized with the same pre-trained parameters \citep{devlin2019bert}. Starting from the pre-trained parameters $\mathbf{w}$, the parameters $\mathbf{z}_{i}(\mathbf{w})$ are obtained through
fine-tuning
\begin{equation} \label{eq:downstream task}
\fontsize{9}{8.5}\selectfont
\underset { \mathbf { z } _ { i } } { \operatorname { minimize } } \; f _ { i } \big ( \mathbf { z } _ { i } ( \mathbf { w } ) \big )
\end{equation}
\subsection{Pruning formulation in transfer learning}
Following the conventional weight pruning formulation, we first consider the problem of weight pruning during pre-training:
\begin{equation} \label{eq:pruning}
\fontsize{9}{8.5}\selectfont
\begin{split}
&\underset { \mathbf { w } \in \mathbb { R } ^ { d } } { \operatorname { minimize } } \ f _ { 0 } ( \mathbf { w } ) +\gamma \| \mathbf { w } \| _ { p }
\end{split}
\end{equation}
where $f_{0}$ is the loss function used for pruning (here, the pre-training loss), $p \in \{0,1\}$ denotes the type of regularization norm, and $\gamma$ is a regularization coefficient. We note that the sparsity-promoting regularizer in the objective could also be replaced with a hard $\ell_p$ constraint, $\| \mathbf { w } \| _ { p } \leq \tau$ for some $\tau$.
Let $\hat{\mathbf{w}}$ denote the solution to problem (\ref{eq:pruning}), and the corresponding sparse pattern $\mathcal { S }_{\hat{\mathbf{w}}}$ is given by
\begin{equation}
\fontsize{9}{8.5}\selectfont
\mathcal { S }_{\hat{\mathbf{w}}} = \left\{ i \in [ d ] \;\middle|\; \hat { w } _ { i } = 0 \right\}
\end{equation}
For a specific transfer task $i$, we allow an additional retraining/fine-tuning step to train/fine-tune weights starting from the pre-training results $\hat{\mathbf{w}}$ and subject to the determined, fixed sparse pattern $\mathcal { S }_{\hat{\mathbf{w}}}$, denoted as $\mathbf { z } _ { i } ( \hat{\mathbf { w }} ; \mathcal { S }_{\hat{\mathbf{w}}} )$. That is, we solve the modified problem \eqref{eq:downstream task}
\begin{equation}
\fontsize{9}{8.5}\selectfont
\underset { \mathbf { z }_i } { \operatorname { minimize } }\ f _ { i } \big ( \mathbf { z } _ { i } ( \hat{\mathbf { w }} ; \mathcal { S }_{\hat{\mathbf{w}}} ) \big )
\end{equation}
Here, different from (\ref{eq:downstream task}), the task-specific fine-tuning weight variable $\mathbf { z } _ { i } ( \hat{\mathbf { w }} ; \mathcal { S }_{\hat{\mathbf{w}}} )$ is now constrained by $\mathcal { S }_{\hat{\mathbf{w}}}$.
Our goal is to seek a sparse (weight pruned) model during pre-training, with weight collection $\hat{\mathbf{w}}$ and sparsity $\mathcal{S}_{\hat{\mathbf{w}}}$, which can perform as well as the original pre-trained model over multiple new tasks (indexed by $i$). These fine-tuned models $\mathbf { z } _ { i } ( \hat{\mathbf { w }} ; \mathcal { S }_{\hat{\mathbf{w}}} )$ (for different $i$) share the \emph{identical universal sparsity} $\mathcal{S}_{\hat{\mathbf{w}}}$.
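To make the role of the fixed universal sparsity concrete, the following minimal NumPy sketch (not the authors' implementation; \texttt{masked\_finetune\_step} is a hypothetical helper) extracts $\mathcal{S}_{\hat{\mathbf{w}}}$ from pruned weights and enforces it during a fine-tuning update:
\begin{verbatim}
import numpy as np

def sparsity_pattern(w_hat, tol=0.0):
    # S_{w_hat}: boolean mask of the coordinates pruned to zero
    return np.abs(w_hat) <= tol

def masked_finetune_step(z, grad, lr, pruned_mask):
    # one gradient step on z_i that respects the fixed sparse pattern
    z = z - lr * grad
    z[pruned_mask] = 0.0   # pruned weights stay exactly zero
    return z

w_hat = np.array([0.0, 0.7, 0.0, -1.2])   # toy pruned pre-trained weights
mask = sparsity_pattern(w_hat)
z = masked_finetune_step(w_hat.copy(),
                         grad=np.array([0.1, -0.3, 0.2, 0.05]),
                         lr=0.1, pruned_mask=mask)
print(z)   # [ 0.     0.73   0.    -1.205]
\end{verbatim}
All downstream tasks reuse the same mask, which is what makes the sparsity ``universal''.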
\subsection{Reweighted Proximal Pruning}
In order to enhance the performance of pruning pre-trained language representation over multi-task downstream transfer learning objectives, we propose Reweighted Proximal Pruning (RPP). RPP consists of two parts: the reweighted $\ell_{1}$ minimization and the proximal operator. Reweighted $\ell_{1}$ minimization serves as a better method of generating sparsity in DNN models matching the natural objective of weight pruning, compared with $\ell_{1}$ regularization.
The proximal algorithm then separates the computation of the gradient from the proximal operation over a weighted $\ell_1$ norm, without \textcolor{rebuttal_color}{directly optimizing the entire sparsity-penalized loss,} \textcolor{rebuttal_color}{which would require gradient backpropagation through the penalized loss.}
This is necessary in the weight pruning of super-deep language representation models.
\subsubsection{Reweighted $\ell_{1}$ minimization}
In previous pruning methods \citep{han2015learning, wen2016learning}, $\ell_{1}$ regularization is used to generate sparsity. However, consider two weights $w_{i}, w_{j}\ (|w_{i}| < |w_{j}|)$ in the DNN model that are penalized through $\ell_{1}$ regularization. The larger weight $w_{j}$ is penalized more heavily than the smaller weight $w_{i}$ under $\ell_{1}$ regularization, which violates the original intention of weight pruning, ``removing the unimportant connections'' (parameters close to zero) \citep{han2015learning}. To address this imbalance, we introduce reweighted $\ell_{1}$ minimization \citep{candes2008enhancing} to the DNN pruning domain. Our reweighted $\ell_{1}$ minimization operates in a systematic and iterative manner (detailed process shown in Algorithm \ref{alg: rwl}), and its first iteration coincides with $\ell_{1}$ regularization. This design helps us to observe the performance difference between $\ell_{1}$ and reweighted $\ell_{1}$ minimization. Meanwhile, it ensures that reweighted $\ell_{1}$ minimization advances beyond $\ell_{1}$ regularization, as the latter is just the single, first step of the former.
Consider the regularized weight pruning problem (reweighted $\ell_1$ minimization):
\begin{equation}\label{eq:rwl}
\fontsize{9}{8.5}\selectfont
\underset{\mathbf { w }}{\operatorname{minimize}} \quad f_{0}(\mathbf { w }) + \gamma \sum_{i}\alpha_{i}|w_{i}|
\end{equation}
where each $\alpha_{i} > 0$ is a positive factor used to balance the penalty; it should not be confused with the weight $w_i$ of the DNN model.
The factors $\alpha_i$ are updated during the iterative reweighted $\ell_1$ minimization procedure (Step 2 in Algorithm \ref{alg: rwl}) in a systematic way \citep{candes2008enhancing}.
If we set $T=1$ for reweighted $\ell_{1}$, then it reduces to $\ell_{1}$ sparse training.
\begin{algorithm}[h!]
\caption{RPP procedure for reweighted $\ell_{1}$ minimization}
\begin{algorithmic}[1] \label{alg:proximal_pruning}
\State Input: Initial pre-trained model $\mathbf{w}^{0}$, initial regularization coefficient $\gamma$ for reweighted $\ell_{1}$ minimization, initial positive factors $\boldsymbol{\alpha}^{0}=\mathbf{1}$
\For{$t = 1,2,\ldots, T$}
\State $\mathbf w = \mathbf w^{(t-1)}$, $\mathbf { \alpha } = \mathbf \alpha^{(t-1)}$
\State \textbf{Step 1}: Solve problem (\ref{eq:rwl}) to obtain a solution $\mathbf{w}^t$ via iterative proximal algorithm (\ref{eq:proximal_iterative})
\State \textbf{Step 2}: Update the reweighting factors $\alpha_{i}^t=\frac{1}{|w_{i}^{t}|^{(t)} + \epsilon}$ (here $w_{i}^{t}$ denotes the weight $w_{i}$ at iteration $t$, and the outer $(t)$ is an exponent), where $\epsilon$ is a small constant, e.g., $\epsilon=0.001$
\EndFor
\end{algorithmic}
\label{alg: rwl}
\end{algorithm}
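For illustration, Step 2 with the exponent set to $1$ (the standard update of \citet{candes2008enhancing}) can be rendered in a few lines of Python: small-magnitude weights receive large factors $\alpha_i$ and are pushed further toward zero in the next iteration, while large weights are penalized only lightly. A minimal sketch:
\begin{verbatim}
import numpy as np

def update_reweight_factors(w, eps=1e-3):
    # alpha_i = 1 / (|w_i| + eps): inversely proportional to magnitude
    return 1.0 / (np.abs(w) + eps)

w = np.array([0.001, 0.5, -2.0])
print(update_reweight_factors(w))   # approx. [500.0, 1.996, 0.4998]
\end{verbatim}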
\subsubsection{Proximal method}
In previous pruning methods \citep{han2015learning, wen2016learning}, the $\ell_{1}$-regularized loss is directly \textcolor{rebuttal_color}{optimized through the back-propagation based gradient update}
of DNN models, and a hard threshold is applied to execute the pruning action (all weights below the hard threshold become zero).
In our approach,
we derive an effective solution to problem (\ref{eq:rwl}) for given $\{ \alpha_i \}$, namely, Step\,1 of Algorithm\,\ref{alg: rwl}, in which the back-propagation based gradient update is only applied to $f_{0}(\mathbf { w })$ but not to $\gamma \sum_{i}\alpha_{i}|w_{i}|$.
We adopt the proximal algorithm \citep{parikh2014proximal} to satisfy this requirement through decoupling methodology. In this way, the sparsity pattern search can be decoupled from \textcolor{rebuttal_color}{back-propagation based gradient update} of the training loss. The proximal algorithm is shown in \citep{parikh2014proximal} to be highly effective (compared with the original solution) on a wide set of non-convex optimization problems. Additionally, our presented reweighted $\ell_{1}$ minimization (\ref{eq:rwl}) has analytical solution through the proximal operator.
To solve problem (\ref{eq:rwl}) for a given $\alpha$, the proximal algorithm operates in an iterative manner:
\begin{equation}
\fontsize{9}{8.5}\selectfont
\label{eq:proximal_iterative}
\mathbf { w } _ { k } = \operatorname { prox } _ { \lambda _ { k } , { r w } - \ell _ { 1 } } \left( \mathbf { w } _ { k - 1 } - \lambda _ { k } \nabla _ { \mathbf { w } } f_{0} \left( \mathbf { w } _ { k - 1 } \right) \right)
\end{equation}
where the subscript $k$ denotes the time step of the training process inside \textcolor{rebuttal_color}{each iteration of} RPP, $\lambda_{k}\ (\lambda_{k} > 0)$ is the learning rate, and we set the initial $\mathbf { w }$ to be $\mathbf { w }^{(t-1)} $ from the last iteration of reweighted $\ell_{1}$. The proximal operator $\operatorname { prox } _ { \lambda _ { k } , { r w } - \ell _ { 1 } } (\mathbf{a})$ is the solution to the problem
\begin{equation} \label{eq:proximal_mini}
\fontsize{9}{8.5}\selectfont
\underset { \mathbf { w } } { \operatorname { minimize } } \; \gamma \sum _ { i } \alpha _ { i } \left| w _ { i } \right| + \frac { 1 } { 2 \lambda _ { k } } \| \mathbf { w } - \mathbf { a } \| _ { 2 } ^ { 2 }
\end{equation}
where $\mathbf { a } = \mathbf { w } _ { k - 1 } - \lambda _ { k } \nabla _ { \mathbf { w } } f_{0} \left( \mathbf { w } _ { k - 1 } \right)$. The above problem has the following analytical solution \citep{liu2014sparsity}
\begin{equation}
\fontsize{9}{8.5}\selectfont
\label{eq:analytical_s}
w _ { i , k } = \left\{ \begin{array} { l l } { \left( 1 - \frac { \gamma \lambda _ { k } \alpha _ { i } } { \left| a _ { i } \right| } \right) a _ { i } } & { \left| a _ { i } \right| > \lambda _ { k } \gamma \alpha _ { i } } \\ { 0 } & { \left| a _ { i } \right| \leq \lambda _ { k } \gamma \alpha _ { i } }. \end{array} \right.
\end{equation}
We remark that the updating rule (\ref{eq:proximal_iterative}) can be interpreted as the proximal step (\ref{eq:analytical_s}) applied to the gradient descent step $\mathbf { w } _ { k - 1 } - \lambda _ { k } \nabla _ { \mathbf { w } } f_{0} \left( \mathbf { w } _ { k - 1 } \right)$. Such a descent step can also be obtained through optimizers such as AdamW. We use AdamW \citep{loshchilov2018decoupled} as our optimizer, the same as \citet{devlin2019bert}. The concrete process of AdamW with the proximal operator is shown in Algorithm \ref{alg:adamw} of Appendix C.
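As a minimal sketch (with plain gradient descent standing in for AdamW, and hypothetical function names), one RPP update is a gradient step on $f_0$ alone followed by the closed-form soft-thresholding step (\ref{eq:analytical_s}); no gradient of the penalty is ever back-propagated:
\begin{verbatim}
import numpy as np

def prox_weighted_l1(a, thresh):
    # soft-thresholding with per-coordinate thresholds
    # thresh_i = lam * gamma * alpha_i; equivalent to the case
    # distinction in the analytical solution above
    return np.sign(a) * np.maximum(np.abs(a) - thresh, 0.0)

def rpp_step(w, grad_f0, lam, gamma, alpha):
    a = w - lam * grad_f0             # gradient step on the loss only
    return prox_weighted_l1(a, lam * gamma * alpha)

w     = np.array([0.30, -0.02, 1.50])
g     = np.array([0.10,  0.10, -0.20])
alpha = np.array([1.0, 50.0, 0.5])    # reweighting factors
print(rpp_step(w, g, lam=0.1, gamma=0.1, alpha=alpha))
# [ 0.28  -0.     1.515]: the heavily reweighted small weight is zeroed
\end{verbatim}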
\textit{Why choose AdamW rather than Adam?}
\citet{loshchilov2018decoupled} propose AdamW to improve the generalization ability of Adam \citep{kingma2014adam}. They show that {conventional weight decay}
is inherently ineffective in Adam and has a negative effect on the gradient-based update, which explains the difficulty of applying adaptive gradient algorithms to super-deep DNN training for NLU applications (like BERT). They mitigate this limitation and improve the regularization of Adam by decoupling weight decay regularization from the gradient-based update.
AdamW is widely adopted in pre-training large language representations, e.g., BERT \citep{devlin2019bert}, GPT \citep{radford2018improving} and XLNet \citep{yang2019xlnet}.
Our proposed RPP also benefits from the decoupling design ideology. The difference is that RPP is for the generation of sparsity, instead of avoiding over-fitting, like decoupled weight decay in AdamW.
\paragraph{Our new and working baseline: New Iterative Pruning (NIP).}
To get the \emph{identical universal sparsity} $\mathcal{S}_{\mathbf{w}}$, we tried a series of pruning techniques, including the iterative pruning method \citep{han2015learning} and one-shot pruning method \citep{liu2018rethinking}. But these methods do not converge to a viable solution.
A possible reason for the non-convergence of the iterative pruning method is that
\textcolor{rebuttal_color}{directly optimizing the $\ell_p$ ($p \in \{ 1,2\}$) sparsity-promoting regularization makes the gradient computation involved and thus harms the loss convergence} \textcolor{rebuttal_color}{(we provide the loss curve and analysis in Appendix \ref{app: non_convergence})}.
To circumvent the convergence issue of conventional iterative pruning methods, we propose a new iterative pruning (NIP) method.
Different from iterative pruning \citep{han2015learning},
NIP reflects the naturally progressive pruning performance \textcolor{rebuttal_color}{without any externally introduced penalty}. We expect that other pruning methods should not perform worse than NIP; otherwise, the effect of \textcolor{rebuttal_color}{optimizing} the newly introduced sparsity-promoting regularization is negative.
We will show that NIP is able to successfully prune BERT to certain pruning ratios. We refer readers to Appendix\,\ref{app: NIP} for the full detail about NIP, our proposed baseline algorithm.
\section{Experiments}
In this section, we describe the experiments on pruning pre-trained BERT and demonstrate the performance on 10 downstream transfer learning tasks.
\subsection{Experiment Setup}
We use the official BERT model from Google as the starting point. Following the notation of \citet{devlin2019bert}, we denote the number of layers (i.e., transformer blocks) as $L$, the hidden size as $H$, and the number of self-attention heads as $A$. We prune two kinds of BERT model: $\mathrm { BERT } _ { \mathrm { BASE } }$ ($L=12, H=768, A=12, \text{total parameters}=110\mathrm{M}$) and $\mathrm { BERT } _ { \mathrm { LARGE } }$ ($L=24, H=1024, A=16, \text{total parameters}=340\mathrm{M}$). \textcolor{rebuttal_color}{As the parameters of the transformer blocks take up more than 97\% of the weights of the entire BERT, the weights of these transformer blocks are our pruning target.}
\textbf{Data:} In pre-training, we use the same pre-training corpora as \citet{devlin2019bert}: BookCorpus ($800\mathrm{M}$ words) \citep{zhu2015aligning} and English Wikipedia ($2,500\mathrm{M}$ words). Based on the same corpora, we use the same preprocessing script\footnote{https://github.com/google-research/bert} to create the pre-training data. In fine-tuning, we report our results on the Stanford Question Answering Dataset (SQuAD) and the General Language Understanding Evaluation (GLUE) benchmark \citep{wang2018glue}. We use two versions of SQuAD: V1.1 and V2.0 \citep{rajpurkar2016squad, rajpurkar2018know}. GLUE is a collection of datasets/tasks for evaluating natural language understanding systems\footnote{The datasets/tasks are: CoLA \citep{warstadt2018neural}, Stanford Sentiment Treebank (SST) \citep{socher2013recursive}, Microsoft Research Paragraph Corpus (MRPC) \citep{dolan2005automatically}, Semantic Textual Similarity Benchmark (STS) \citep{agirre2007semeval}, Quora Question Pairs (QQP), Multi-Genre NLI (MNLI) \citep{williams2017broad}, Question NLI (QNLI) \citep{rajpurkar2016squad}, Recognizing Textual Entailment (RTE) and Winograd NLI (WNLI) \citep{levesque2012winograd}.}.
\textbf{Input/Output representations:} We follow the input/output representation setting from \citet{devlin2019bert} for both pre-training and fine-tuning. We use the WordPiece \citep{wu2016google} embeddings with a $30,000$ token vocabulary. The first token of every sentence is always a special classification token ([CLS]). The sentences are differentiated with a special token ([SEP]).
\textbf{Evaluation:} In pre-training, BERT considers two objectives: masked language modeling (MLM) and next sentence prediction (NSP). For MLM, a random sample of the tokens in the input sequence is selected and replaced with the special token $([\text{MASK}])$. The MLM objective is a cross-entropy loss on predicting the masked tokens. NSP is a binary classification loss for predicting whether two segments follow each other in the original text. In pre-training, we use MLM and NSP both as training objectives to pre-train and retrain the BERT model and as metrics to evaluate it. In fine-tuning, F1 scores are reported for SQuAD, QQP and MRPC. Matthew's Corr and Pearson-Spearman Corr are reported for CoLA and STS-B, respectively. Accuracy scores are reported for the other tasks.
All experiments are executed on one Google Cloud TPU V3-512 cluster, three Google Cloud TPU V2-512 clusters and 110 Google Cloud TPU V3-8/V2-8 instances.
\textbf{Baseline:}
As there is no public effective BERT pruning method, we use the proposed NIP pruning method on BERT as the baseline. The details of NIP are given in Appendix\,\ref{app: NIP}. The progressive pruning ratio is $\Delta p=10\%$ (prune $10\%$ more weights in each iteration). Starting from the official $\mathrm { BERT } _ { \mathrm { BASE } }$, we use 9 iterations. In each iteration $t$ of NIP, we obtain a sparse $\mathrm { BERT } _ { \mathrm { BASE } }$ with a specific sparsity, denoted $( \mathbf { w }^{t} ; \mathcal { S }_{\mathbf{w}^{t}} )$. Then we retrain the sparse $\mathrm { BERT } _ { \mathrm { BASE } }$ $\mathbf{w}^{t}$ over the sparsity pattern $\mathcal{S}_{\mathbf{w}^{t}}$. In the retraining process, the initial learning rate is $2\!\cdot\!10^{-5}$, the batch size is $1024$ and the retraining lasts for $10,000$ steps (around 16 epochs). For the other hyperparameters, we follow the original BERT paper \citep{devlin2019bert}. In each iteration, the retrained sparse $\mathrm { BERT } _ { \mathrm { BASE } }$ is the starting point for the fine-tuning tasks and for the next iteration.
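For orientation, an iterative magnitude-pruning loop in the spirit of NIP might look as follows (a rough sketch only, not the exact procedure of Appendix \ref{app: NIP}; \texttt{retrain} is a hypothetical stand-in for masked retraining on the pre-training objectives):
\begin{verbatim}
import numpy as np

def prune_by_magnitude(w, ratio):
    # zero out the `ratio` fraction of smallest-magnitude weights
    k = int(ratio * w.size)
    if k == 0:
        return w, np.ones_like(w, dtype=bool)
    thresh = np.sort(np.abs(w).ravel())[k - 1]
    mask = np.abs(w) > thresh
    return w * mask, mask

def nip_like(w, retrain, n_iters=9, step=0.10):
    for t in range(1, n_iters + 1):
        w, mask = prune_by_magnitude(w, ratio=t * step)  # 10%, 20%, ...
        w = retrain(w, mask)  # retrain with the sparsity pattern fixed
    return w
\end{verbatim}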
\begin{figure}[b]
\centering
\vspace{-0.5cm}
\setlength{\abovecaptionskip}{0.1cm}
\setlength{\belowcaptionskip}{-0.8cm}
\includegraphics[width=12cm]{images/nip.pdf}
\caption{Performance of $\mathrm { BERT } _ { \mathrm { BASE } }$ pruned using NIP and RPP, respectively (MLM and NSP accuracy on the pre-training data and the F1 score after fine-tuning on SQuAD 1.1 are reported).}
\label{fig:bert_nip}
\end{figure}
\subsection{Reweighted Proximal Pruning (RPP)}
We apply the proposed Reweighted Proximal Pruning (RPP) method to both $\mathrm { BERT } _ { \mathrm { BASE } }$ and $\mathrm { BERT } _ { \mathrm { LARGE } }$, and demonstrate the performance improvement. The detailed process of RPP is given in Appendix\,\ref{app: alg_RPP}.
For $\mathrm { BERT } _ { \mathrm { BASE } }$, we use exactly the same hyperparameters as in our NIP experiments. The initial learning rate is $\lambda\!=\!2\cdot\!10^{-5}$ and the batch size is 1024. We iterate RPP six times ($T\!\!=\!\!6$), and each iteration lasts for $100,000$ steps (around 16 epochs). The total number of epochs in RPP is smaller than in NIP when achieving 90\% sparsity ($96 < 144$). There is no retraining process in RPP. We set $\gamma\!\in\!\{10^{-2}, 10^{-3}, 10^{-4}, 10^{-5}\}$ and $\epsilon\!\!=\!\!10^{-9}$ in Algorithm \ref{alg: rwl}. Recall that RPP reduces to $\ell_{1}$ sparse training when $T\!=\!1$.
In Figure\,\ref{fig:bert_nip}, we present the accuracy versus the pruning ratio for pre-training objectives MLM and NSP, and fine-tuning task SQuAD 1.1. Here we compare RPP with NIP.
As RPP continues to iterate, its performance becomes notably higher than that of NIP on both the pre-training task and the fine-tuning task. The gap widens further as RPP iterates more times. In Figure \ref{fig:bert_nip}, we find that the NSP accuracy is very robust to pruning. Even when $90\%$ of the weights are pruned, the NSP accuracy stays above $95\%$ for the RPP algorithm and around $90\%$ for the NIP algorithm. For the MLM accuracy and the SQuAD F1 score, the performance drops quickly as the prune ratio increases. RPP slows down this decline to a great extent. On the SQuAD 1.1 dataset/task, RPP keeps the F1 score of $\mathrm { BERT } _ { \mathrm { BASE } }$ at 88.5 ($0$ degradation compared with the original BERT) at a $41.2\%$ prune ratio, while the F1 score of $\mathrm { BERT } _ { \mathrm { BASE } }$ pruned with NIP drops to 84.6 ($3.9$ degradation) at a $40\%$ prune ratio. At an $80\%$ prune ratio, RPP keeps the F1 score of $\mathrm { BERT } _ { \mathrm { BASE } }$ at 84.7 ($3.8$ degradation), while the F1 score of $\mathrm { BERT } _ { \mathrm { BASE } }$ pruned with NIP drops to 68.8 ($19.7$ degradation compared with the original BERT).
In addition to the fine-tuning task of SQuAD 1.1, the other transfer learning tasks show the same trend (RPP consistently outperforms NIP) and the detailed results are reported in Appendix \ref{app: transfer_learning}.
For $\mathrm { BERT } _ { \mathrm { LARGE } }$, we use exactly the same hyperparameters as in our NIP experiments except for the batch size. The initial learning rate is $2\!\cdot10^{-5}$ and the batch size is 512. We iterate RPP four times ($T\!=\!4$), and each iteration lasts for $100,\!000$ steps (around 8 epochs). There is no retraining process either. We set $\gamma\!\in\!\{10^{-2}, 10^{-3}, 10^{-4}, 10^{-5}\}$ and $\epsilon\!=\!10^{-9}$ in Algorithm \ref{alg: rwl}. The experimental results on pruning $\mathrm { BERT } _ { \mathrm { LARGE } }$ and then fine-tuning are shown in Table \ref{tab:bert_large}.
\begin{table}[h!]{
\caption{$\mathrm { BERT } _ { \mathrm { LARGE } }$ pruning results on a set of transfer learning tasks. The degradation is contrasted with the original BERT (without pruning) for transfer learning.}
\centering
{
\resizebox{\linewidth}{!}{
\begin{tabular}{*{10}{c}}
\toprule
Method & Prune Ratio($\%$) & SQuAD 1.1 & QQP & MNLI & MRPC & CoLA \\
\midrule
NIP & 50.0 & 85.3 (-5.6) & 85.1 (-6.1) & 77.0 (-9.1) & 83.5 (-5.5) & 76.3 (-5.2) \\
& 80.0 & 75.1 (-15.8) & 81.1 (-10.1) & 73.81 (-12.29) & 68.4 (-20.5) & 69.13 (-12.37) \\
\midrule
\textbf{RPP} & 59.3 & 90.23 (-0.67) & 91.2 (-0.0) & 86.1 (-0.0) & 88.1 (-1.2) & 82.8 (+1.3) \\
& 88.4 & 81.69 (-9.21) & 89.2 (-2.0) & 81.4 (-4.7) & 81.9 (-7.1) & 79.3 (-2.2) \\
\bottomrule
\end{tabular}
}
\resizebox{\linewidth}{!}{
\begin{tabular}{*{10}{c}}
\toprule
Method & Prune Ratio($\%$) & SQuAD 2.0 & QNLI & MNLIM & SST-2 & RTE \\
\midrule
NIP & 50.0 & 75.3 (-6.6) & 90.2 (-1.1) & 82.5 (-3.4) & 91.3 (-1.9) & 68.6 (-1.5)\\
& 80.0 & 70.1 (-11.8) & 80.5 (-10.8) & 78.4 (-7.5) & 88.7 (-4.5) & 62.8 (-7.3) \\
\midrule
\textbf{RPP} & 59.3 & 81.3 (-0.6) & 92.3 (+1.0) & 85.7 (-0.2) & 92.4 (-0.8) & 70.1 (-0.0) \\
& 88.4 & 80.7 (-1.2) & 88.0 (-3.3) & 81.8 (-4.1) & 90.5 (-2.7) & 67.5 (-2.6) \\
\bottomrule
\end{tabular}
}
}
\label{tab:bert_large}
}
\end{table}
\subsection{Visualizing Attention Pattern in BERT}
We visualize the sparse pattern of the kernel weights in the sparse BERT model pruned with RPP, and present several examples in Figure \ref{fig:visualization_sp}. Since we directly visualize the \emph{identical universal sparsity} $\mathcal{S}_{\mathbf{w}}$ without any auxiliary function such as an activation map, the attention pattern is universal and data independent.
\begin{figure}[h!]
\fontsize{8}{8}\selectfont
\centering
\subfigure{
\centering
\begin{minipage}[]{13cm}
\includegraphics[width=4cm]{images/weights/layer2_query.pdf}
\centering
\includegraphics[width=4cm]{images/weights/layer3_query.pdf}
\centering
\includegraphics[width=4cm]{images/weights/layer11_query.pdf}
\end{minipage}%
}%
\quad
\centering
\subfigure{
\centering
\begin{minipage}[]{13cm}
\centering
\includegraphics[width=4cm]{images/weights/layer2_key.pdf}
\centering
\includegraphics[width=4cm]{images/weights/layer3_key.pdf}
\centering
\includegraphics[width=4cm]{images/weights/layer11_key.pdf}
\end{minipage}%
}%
\caption{Visualization of sparse pattern $\mathcal{S}$ in pruned $\mathrm { BERT } _ { \mathrm { BASE } }$ model $\mathbf{w}$.
\textcolor{rebuttal_color}{We sample 6 matrices (3 query matrices in the top row and 3 key matrices in the bottom row) from layer 2, layer 3 and layer 11 of the sparsest pruned $\mathrm { BERT } _ { \mathrm { BASE } }$.
}
}
\label{fig:visualization_sp}
\end{figure}
BERT's model architecture is a multi-layer, bidirectional transformer encoder based on the original implementation of \citet{vaswani2017attention}. Following \citet{vaswani2017attention}, the transformer architecture is based on ``scaled dot-product attention.'' The input consists of queries, keys and values, denoted as matrices $Q$, $K$ and $V$, respectively. The output of the attention model is computed as
\begin{equation}
\fontsize{8}{8}\selectfont
\text { Attention } ( Q , K , V ) = \operatorname { softmax } \left( \frac { Q K ^ { T } } { \sqrt { d _ { k } } } \right) V
\end{equation}
where $d_{k}$ is the key dimension. We visualize the sparse matrices $Q$ and $K$ of layer 2, layer 3 and layer 11, respectively, in Figure \ref{fig:visualization_sp}. From Figure \ref{fig:visualization_sp}, we have the following observations and analyses.
\textcolor{rebuttal_color}{\textbf{Structured pattern:} Figure \ref{fig:visualization_sp} demonstrates the structured pattern of the non-zero weights in a pruned transformer block. More specifically, we find that the pruned $Q$ and $K$ matrices within each transformer block exhibit interesting group-wise structures (column-wise non-sparsity for the query matrix and row-wise non-sparsity for the key matrix). Interestingly, we obtained these structured sparse patterns from our proposed RPP, an irregular pruning method (namely, no group-wise sparsity is penalized). This is different from irregular pruning of image classifiers, and thus shows what is special about pruning language models. We also believe that the use of the reweighted $\ell_1$ approach matters for finding these fine-grained sparse patterns. Note that a structured sparsity pattern is friendlier to hardware implementation and acceleration than a non-structured one.
}
\textcolor{rebuttal_color}{\textbf{Semantic interpretation:} The structured pattern found by RPP (visualized in Figure \ref{fig:visualization_sp}) has the following semantic interpretation. What might the large-scale language representation learn? The answer becomes clear after the language representation is pruned by RPP. From the perspective of attention mechanism, the query matrix $Q$ (column-wise non-sparsity) mainly models the attention information inside each sequence, while the key matrix $K$ (row-wise non-sparsity) mainly models the attention information between different sequences in the context.}
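To make explicit where the visualized matrices enter the computation, here is a minimal NumPy rendering of the scaled dot-product attention above; \texttt{W\_Q}, \texttt{W\_K} and \texttt{W\_V} denote the kernel (projection) matrices, and it is the sparse patterns of the first two that appear in Figure \ref{fig:visualization_sp}:
\begin{verbatim}
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(X, W_Q, W_K, W_V):
    # X: (sequence length, hidden size); W_*: projection kernels
    Q, K, V = X @ W_Q, X @ W_K, X @ W_V
    d_k = K.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d_k)) @ V
\end{verbatim}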
\subsection{$t$-SNE Visualization}
\begin{figure}[b]
\centering
\setlength{\abovecaptionskip}{0.2cm}
\subfigure{
\centering
\begin{minipage}[b]{13cm}
\centering
\includegraphics[width=13cm]{images/tsne/t-SNE_light.pdf}
\end{minipage}%
}%
\caption{$t$-SNE visualization of \textcolor{rebuttal_color}{word embeddings in the original BERT model and the BERT model pruned using RPP}. \textcolor{rebuttal_color}{From left to right: $t$-SNE of the original BERT embedding, together with an enlarged region around the word ``intelligent"; $t$-SNE of the embedding in the pruned BERT, together with an enlarged region. These visualizations are obtained by running $t$-SNE for 1000 steps with perplexity $=100$.}}
\label{fig:visualization_tsne}
\end{figure}
$t$-Distributed Stochastic Neighbor Embedding ($t$-SNE) is a technique for dimensionality reduction that is particularly well suited for the visualization of high-dimensional datasets \citep{maaten2008visualizing}. Pre-trained word embeddings
are an integral part of modern NLP systems \citep{devlin2019bert}, and one contribution of BERT is its pre-trained contextual embedding. Hence, we visualize the word embeddings of the original BERT model and of the BERT model pruned with RPP in Figure \ref{fig:visualization_tsne} using $t$-SNE. \textcolor{rebuttal_color}{Since BERT differs from the commonly studied image classifiers in network pruning, we would like to examine whether pruning BERT leads to a significant change in the low-dimensional manifold of the language representation. From Figure \ref{fig:visualization_tsne}, we obtain the following observations and insights.}
\textcolor{rebuttal_color}{\textbf{Low-dimensional manifold: } Figure \ref{fig:visualization_tsne} illustrates that, for both the original BERT and the BERT pruned with RPP, the low-dimensional manifolds of the language representations are similar, yielding similar projections.
Taking the specific word ``intelligent" in Figure \ref{fig:visualization_tsne} as an example, the distributions of specific words and their nearest words on the low-dimensional manifold (calculated using cosine/Euclidean distance) remain highly similar. This implies that the BERT pruned with RPP retains most of the language representation information of the original BERT. }
\textcolor{rebuttal_color}{\textbf{Linguistic interpretation of proper nouns:} There is one salient ribbon on the upper left of the macroscopic $t$-SNE visualization of word embeddings, in both the original BERT and the BERT pruned with RPP. Each point in the ribbon represents a year number in annals. There is also one salient short line on the lower left of the visualization, again in both models. Most points in this line represent age numbers. Other proper nouns reveal similar characteristics. Our proposed RPP preserves the embedding information of these proper nouns from the perspective of linguistic interpretation.
}
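A sketch of how such a visualization can be reproduced (hypothetical variable names; \texttt{emb\_orig} and \texttt{emb\_rpp} stand for the extracted WordPiece embedding tables of the two models):
\begin{verbatim}
import numpy as np
from sklearn.manifold import TSNE

emb_orig = np.random.randn(30000, 768)  # placeholder for the real tables
emb_rpp  = np.random.randn(30000, 768)  # (subsampling speeds this up)

# 1000 optimization steps with perplexity 100, as in the caption;
# newer scikit-learn versions spell the argument `max_iter`
opts = dict(n_components=2, perplexity=100, n_iter=1000, init="pca")
xy_orig = TSNE(**opts).fit_transform(emb_orig)
xy_rpp  = TSNE(**opts).fit_transform(emb_rpp)
\end{verbatim}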
\section{Conclusions and Future Work}
This paper presents the pruning algorithm \emph{RPP}, which achieves the first effective weight pruning result on a large pre-trained language representation model, BERT. RPP achieves $59.3\%$ weight sparsity without performance loss on both pre-training and fine-tuning tasks. We spotlight the relationship between the pruning ratio of the pre-trained DNN model and the performance on the downstream multi-task transfer learning objectives. We show that many downstream tasks, with the exception of SQuAD, allow a pruning ratio of at least $80\%$, compared with $59.3\%$ under the SQuAD task. Our proposed Reweighted Proximal Pruning provides a new perspective for analyzing what a large language representation (BERT) learns.
\newpage
\section{Introduction}\label{intro}
A parallelism in real projective 3-space is an equivalence relation on lines (always
assumed to be continuous in a certain sense) such that every class is a spread
(i.e., a partition of the point set into disjoint lines). The classical example is
Clifford parallelism, but there are many more examples with varying amounts of symmetry, as
was shown by Betten and Riesinger; see \cite{Thaswalker}, \cite{hyper}
and other articles by these authors.
From every parallelism one can construct
an oriented parallelism (i.e., a similar equivalence relation on oriented lines), by a
trivial process of `unfolding'. In \cite{so3}, the author found that
even among the most symmetric oriented parallelisms other than Clifford parallelism there are
`non-foldable' examples that do not arise in this trivial way. Our aim is to show that
a vast number of non-foldable examples can be found among regular oriented parallelisms
(where the spreads are all isomorphic to the complex spread), even with 2-torus symmetry.
In the present paper, we lay the foundations for this project by establishing
powerful construction principles. In the case of non-oriented parallelisms, such
principles were found by Betten and Riesinger and later simplified by the author.
One works with the Klein model, which describes $\Pd$ within $\Pf$. The line space of $\Pd$
corresponds to the Klein quadric $K$, and regular spreads become intersections of
$K$ with certain 3-spaces.
First, there is a construction that works for all regular parallelisms;
by dualizing with respect to the Klein quadric, the parallelism is turned into a set
$\CH$ of lines not meeting the quadric such that every tangent hyperplane of the
quadric contains exactly one of them \cite{hyper}, \cite{gldirect}.
Such sets are called hyperflock determining line sets, abbreviated $\it hfd$ line sets.
In the special case where $\CH$ spans (a plane or) a 3-space, one can dualize once more
within the 3-space and one obtains a so-called generalized line star or $\it gl$
star \cite{Thaswalker}, \cite{gldirect}.
A $\it gl$ star is a set of lines in the 3-space
such that every exterior point with respect to $K$ is
on exactly one of them. This correspondence has been used to
construct large sets of examples with torus symmetry \cite{Stevin}, \cite{torus}.
We shall prove similar results for parallelisms of oriented lines
(briefly called oriented parallelisms). There are several obstacles that have to be overcome.
This requires a careful analysis of orientation in various geometric contexts,
see Section \ref{orient}. A spread is a line set homeomorphic to the 2-sphere,
and the section culminates in the proof that orienting a spread as a manifold
amounts to the same as orienting all lines in the spread (Theorem \ref{spreadorient}).
Regular spreads appear in the Klein quadric as intersections with certain 3-spaces.
In Section \ref{Klein}, we show that this correspondence lifts to a continuous map from
oriented 3-spaces to oriented spreads, where the topology on the set of oriented spreads is
defined by a Hausdorff metric.
In Sections \ref{HFD} and \ref{GL} we define the oriented analogs of $\it hfd$ sets
and of $\it gl$ stars and obtain our main results. For example, given a 3-space $R$ that meets
$K$ in an elliptic quadric $Q$, an oriented $\it gl$ star or $\it gl^+$ star with
respect to $Q$ is a set
of oriented secants of $Q$ such that every point $p$ of $R$, not in the interior of $Q$,
is incident with
exactly two of them and such that the set of these two oriented lines depends continuously
on the point $p$. It is the latter condition which makes this approach work.
Example \ref{patholog} will show that compactness would not do as a surrogate.
The main results of these two sections
are Theorems \ref{hfd} and \ref{gl}. They may be summarized as follows.
\bthm\label{SUMMARY}
a) There is a one-to-one correspondence between compact oriented regular parallelisms of
$\Pd$ and $\it hfd^+$ line sets in $\Pf$.
b) Let $R$ be a 3-space of $\Pf$ intersecting the Klein quadric $K$ in an elliptic
quadric $Q$.
There is a one-to-one correspondence between $\it hfd^+$ line sets in $R$
and $\it gl^+$ stars with respect to $Q$.
\ethm
In the final Section \ref{ex} we give criteria
that help to recognize oriented generalized line stars and to construct them,
and we display non-foldable examples with and without rotational symmetry.
A systematic study of regular oriented parallelisms with 2-torus action will be left
to a future occasion.\\
\section{Orientation}\label{orient}
We consider various concepts of orientation arising in real projective geometry, and their
relationships. Starting from fairly standard concepts, we proceed to develop
specialized and not so obvious notions and results concerning orientation in
projective planes, spreads and parallelisms.
A useful reference to standard notions and constructions related to orientation is \cite{GP}.
All notions, notation, and conventions introduced in this section shall be used
tacitly in the sequel.
\subsection{Oriented vector spaces and projective spaces}\label{orspac}
As usual, an orientation on the vector space $\BR^k$ is given by an ordered basis
$B = (v_1, \dots ,v_k)$, and two orientations given by $B_1$ and $B_2$ are considered
equal when the linear map sending $B_1$ to $B_2$ has positive determinant.
A vector space
$V$ with a fixed orientation will usually be denoted $(V,B)$ or simply $V^+$.
Any basis defining the same orientation as $B$ will be called a \it positive basis \rm of $(V,B)$.
An oriented differential $k$-manifold is a differential manifold with compatible
orientations on all its tangent spaces. An orientation in this sense defines also an
orientation of the underlying topological manifold, i.e., preferred generators for all local
homology groups in dimension $k$.
An oriented vector space may be considered as an oriented differential manifold,
since it coincides with each of its tangent spaces.
\\
The projective space $P_k\BR$ is defined as the quotient space of $\BR^{k+1}\setminus\{0\}$
obtained by identifying a nonzero vector $v$ with every scalar multiple $rv$, $0\ne r \in \BR$.
By an orientation of $P=P_k\BR$ we shall mean an orientation of $\BR^{k+1}$ and denote
the oriented projective space by $P^+$.
\\
We stress that this does not mean that we have oriented the differential or
topological $k$-manifold $P$.
However, if we restrict the quotient map $\rho: \BR^{k+1}\setminus\{0\} \to P_k\BR$ to
the unit sphere $\BS_k \subseteq \BR^{k+1}$, we obtain a two-sheeted covering map. The
nontrivial deck transformation of this covering is the antipodal map $-\rm id$. A given
orientation of the differential manifold $\BR^{k+1}$ induces an orientation on the unit ball
(as a manifold with boundary), and this in turn yields an orientation on the boundary
$\BS_k$ as follows, cp. \cite{GP}. A basis $B$ of the tangent space $T_s\BS_k$ at a point $s$ is positive if $B$, preceded by an outward pointing vector
$v \in T_s\BR^{k+1} = \BR^{k+1}$, becomes a positive basis of $\BR^{k+1}$.
Now we try to transfer this orientation to $P_k\BR$ via the map $\rho$ by insisting that
the tangent map of $\rho$ should preserve orientation on tangent spaces. This works
without conflict if and only
if the deck transformation preserves the orientation of $\BS_k$, which is the case if
and only if $k+1$ is even, i.e., if $k$ is odd. So an odd-dimensional oriented
projective space as defined here is in fact an oriented manifold, but an
even-dimensional one is not. \\
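This parity argument is easy to check numerically: the antipodal map is the restriction of $-\rm id$ on $\BR^{k+1}$, which carries outward normals to outward normals, so it preserves the boundary orientation of $\BS_k$ exactly when $\det(-{\rm id}) = (-1)^{k+1}$ is positive. A purely illustrative Python sketch:
\begin{verbatim}
import numpy as np

for k in range(1, 6):
    deg = np.linalg.det(-np.eye(k + 1))   # sign equals (-1)^(k+1)
    kind = "orientable" if deg > 0 else "non-orientable"
    print("P_%d(R) is %s as a manifold" % (k, kind))
# k = 1, 3, 5: orientable; k = 2, 4: non-orientable
\end{verbatim}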
\bf Remark. \rm The orientation on the manifold $P_k\BR$ defined above
does not depend on the choice of a quadratic form defining the unit sphere $\BS_k$.
In other words, if
$\phi: \BR^{k+1} \to \BR^{k+1}$ is a linear automorphism with positive determinant, then
replacing $\BS_k$ with $\BS_k' = \phi\BS_k$ we get the same result. Indeed, $\phi$
is an orientation preserving diffeomorphism between the two spheres, and the quotient
maps $\rho$ and $\rho'$
to the resulting oriented manifolds $P_k\BR$ and $P_k'\BR$ also preserve orientations.
The induced map $\phi^\flat: P_k\BR \to P_k'\BR$ satisfies
$\rho' \circ \phi = \phi^\flat \circ \rho$, and therefore preserves orientations, as well.\\
We note that an orientation of a compact connected
differential $k$-manifold corresponds to an orientation of the underlying topological
manifold, which can be represented by a preferred generator of its top homology group,
which is infinite cyclic. This amounts to the same as choosing a preferred generator for each
local homology group in dimension $k$ in a coherent way.
\subsection{Grassmannians and orientation}
In Incidence Geometry, a projective space is commonly considered as the subspace
lattice of some vector space with the lattice operations \it join \rm $X\vee Y$ (the span
of the union of two subspaces $X$ and $Y$) and \it intersection \rm
$X\wedge Y$. The $k$-dimensional
real projective space, considered as the subspace lattice of $\BR^{k+1}$, will be
denoted $\Ps{k}$. Its point set $P_k\BR$ is the set of 1-dimensional vector subspaces.
The projective space $P(X)$ associated with an $(l+1)$-dimensional subspace
$X \le \BR^{k+1}$ may be considered as a submanifold of $P_k\BR$, homeomorphic to $P_l\BR$.
Such a subset is considered as an $l$-dimensional projective subspace of $P_k\BR$.
In this way, $\Ps{k}$ becomes a lattice of subsets of $P_k\BR$. \\
The set of all $l$-dimensional subspaces of $P_k\BR$, or equivalently, the set of
$(l+1)$-dimensional subspaces of $\BR^{k+1}$, is known as the Grassmann manifold
${\rm G}_{k+1,l+1}$. Indeed, the transitive action of the general linear group
$\Delta = \GL{k+1}$
turns this set into a compact, connected differential manifold, namely the
homogeneous space of the group $\Delta$ modulo the stabilizer $\Delta_X$
of any fixed $(l+1)$-dimensional
subspace $X$. Specifically, this means that the subspace $\delta X$ corresponds to the coset
$\delta \Delta_X$ for every $\delta \in \Delta$.
The differential manifold ${\rm G}_{k+1,1}$ equals the projective space $P_k\BR$
as defined earlier. When we think of the elements of a Grassmann manifold as projective
subspaces, we prefer to write $P_{k,l}$ rather than ${\rm G}_{k+1,l+1}$. Thus,
we have $P_k\BR = P_{k,0}$.
For the continuity properties of the lattice operations in $\Ps{k}$, a convenient
reference is \cite{Kuehne} or \cite{handb}.\\
Now let us consider oriented subspaces. As before, the set of oriented
$(l+1)$-dimensional vector subspaces of $\BR^{k+1}$ or, equivalently, the set of oriented
$l$-dimensional projective subspaces of $P_k\BR$, becomes a compact differential
manifold when considered as a coset space of $\Delta = \GL{k+1}$. This time, the subgroup
to be factored out is the stabilizer $\Delta_{X^+}$ of any oriented $(l+1)$-dimensional
vector subspace $X^+$. The elements of this stabilizer fix $X^+$ as a vector space
and induce a linear map of positive determinant on $X$. We denote this manifold by
$$G^+_{k+1,l+1} = P^+_{k,l}.$$
We repeat that it is not possible to consider even-dimensional projective subspaces as
oriented manifolds. Note also that, in general, we do not have lattice operations for
oriented subspaces.
Clearly, the stabilizer $\Delta_{X^+}$ is an index 2 subgroup of $\Delta_X$, and therefore,
the manifold $P^+_{k,l}$ is a 2-sheeted covering space of $P_{k,l}$.
The covering maps will be denoted $\psi$. We could have used this
fact as a definition of $P^+_{k,l}$, but this would not allow us to obtain a direct grip on
the orientation carried by a subspace.
\subsection{Polarities and orientation}
A \it polarity \rm is an antiautomorphism $\xi$ of order 2 of the
lattice $\Ps{k}$. It is defined
by a non-degenerate symmetric bilinear form $f$ on $\BR^{k+1}$ and sends a subspace $X$ to
$$\xi(X) = \{v \in \BR^{k+1} \vert \, f(v,X)=0\}.$$
We have $\dim \xi(X) + \dim X = k+1$ for the vector space dimensions. In terms of projective
dimensions, this means that $\xi$ sends $P_{k,l}$ to $P_{k,k-l-1}$.
If $f$ restricts to
a non-degenerate form on $X$, then $\xi(X)$ is indeed a vector space complement of $X$.
In this case, we can define $\xi$ for oriented subspaces. We choose a fixed orientation for
$\BR^{k+1}$, and we define $\xi(X^+)$ by insisting
that the sum decomposition
$$\BR^{k+1} = X \oplus \xi(X)$$
is a sum of oriented vector spaces, i.e., that a positive ordered basis
of $X$, followed by a positive ordered basis of $\xi(X)$, defines the given
orientation of $\BR^{k+1}$. Thus we have a partial lift of the polarity to the 2-sheeted
covers, that is, a continuous partial map $\xi^+: P^+_{k,l} \to P^+_{k,k-l-1}$ that
commutes with the covering maps $\psi$ in the sense that
$\psi \circ \xi^+ = \xi \circ \psi$.
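In coordinates, this partial lift is easy to compute. The following minimal sketch (a hypothetical helper, assuming $f$ is non-degenerate on $X$) takes the Gram matrix $F$ of $f$ and a positively ordered basis of $X$, computes a basis of $\xi(X)$, and flips one vector if necessary so that the concatenated basis is positive in the chosen orientation of $\BR^{k+1}$:
\begin{verbatim}
import numpy as np
from scipy.linalg import null_space

def oriented_polar(B_X, F):
    # columns of B_X: positive basis of X; F: Gram matrix of the form f
    B_perp = null_space(B_X.T @ F)        # basis of xi(X)
    if np.linalg.det(np.hstack([B_X, B_perp])) < 0:
        B_perp[:, 0] *= -1.0              # switch to the other orientation
    return B_perp
\end{verbatim}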
\subsection{Oriented projective planes}
Let $(P,\CL)$ be a compact, connected topological projective plane. This means first of
all that $(P,\CL)$ is a projective plane, i.e., that the elements of $\CL$ are subsets of
the set $P$, called lines, and that two distinct points $p,q$ are joined by a unique line
$p \vee q \in \CL$ and two distinct lines $K,L$ meet in a unique point $K \wedge L \in P$.
Moreover, we require that $P$ and $\CL$ are compact, connected topological spaces and that
the operations $\vee$ and $\wedge$ are continuous. The classical examples are the planes
$\mathrm{PG}(2,\BF)$, where $\BF$ stands for the field of real or complex numbers, the
skew field of quaternions, or the division algebra of octonions. The examples that we have
in mind here are translation planes defined by spreads in $\Pd$, see Section \ref{spr}.\\
For simplicity, we shall assume that lines are topological manifolds, which implies that
they are in fact spheres of dimension $l \in \{1,2,4,8\}$, and that $P$ and $\CL$ are
$2l$-dimensional manifolds. If we only assume that $P$
has finite covering dimension $\dim P < \infty$, then lines are homology $l$-manifolds
with the same possibilities for $l$ as before, and their homology groups are those of
an $l$-sphere. Proofs of these facts can be found in Chapter 5 of \cite{CPP}.
Background information without proofs is also given in \cite{handb}.\\
Let $K,L$ be two lines and let $x$ be a point not contained in any of these lines.
Then the central projection map
$$\omega (K,x,L): y \to (y \vee x) \wedge L$$
is a homeomorphism from $K$ to $L$, called a \it perspectivity\rm . Compositions of
perspectivities starting from a line $L$ and ending up on the same line are
called \it projectivities\rm . They form a group $\Omega(L)$. These groups have been
studied extensively \cite{wind}.\\
Every line $L \approx \BS_l$ admits two possible orientations, and we denote the set of
all oriented lines by $\CL^+$. There is a two to one surjective map
$$\psi: \CL^+ \to \CL$$
that forgets orientations. Orientation forgetting maps will occur frequently, and they will
always be denoted $\psi$. Our goal is to define a topology on $\CL^+$ such that $\psi$ becomes
a two-sheeted covering map. The first such construction was given by Salzmann \cite{adv}, p. 10,
using the space of all embeddings $\BS_l \to P$ whose images are lines,
equipped with the compact open topology. Here we use a different approach.
\\
Let $L^+$ be an oriented line, and choose a point $x \notin L = \psi(L^+)$.
Define a set of oriented lines
$$\CL^+(L^+,x)$$
to be the set of all lines $M$ not containing $x$, endowed with the orientation
transferred from $L^+$ via $\omega(L,x,M)$. The restriction of $\psi$ to this set is a
bijection onto an open subset of $\CL$ (the set of lines not containing $x$), and we
define a topology on $\CL^+(L^+,x)$ by insisting that this bijection be a homeomorphism.
Now consider two such sets $\CU_1 = \CL^+(L_1^+,x_1)$ and $\CU_2=\CL^+(L_2^+,x_2)$, and
let $\CX^+$ be the set
of oriented lines containing neither $x_1$ nor $x_2$. An oriented line
$M^+ \in \CU_1$ belongs to the intersection $\CU_1\cap \CU_2$ if and only if it
lies in $\CX^+$ and
the map $\omega(L_1,x_1,M,x_2,L_2)$ (to be read as a composition of
perspectivities in the obvious manner) is orientation preserving as a map $L_1^+ \to L_2^+$.
If $M_t$ is a path in $\CX = \psi\CX^+$, then the corresponding maps $\omega_t$
are homotopic, hence they have the same effect on the top homology groups and
are either all orientation preserving or all orientation reversing.
Thus the intersection $\CU_1\cap \CU_2$ is mapped by the forgetful
map $\psi$ onto a (possibly empty) union of some path connected components of $\CX$.
These components are
open sets, so we see that $\CU_1 \cap \CU_2$ is open in $\CU_1$ and in $\CU_2$ and inherits the
same topology from both sets.
Now we endow $\CL^+$ with the topology generated by all the topologies
on the various sets $\CL^+(L^+,x)$, and it is still true that $\psi$ restricts to a
homeomorphism on each of these sets with respect to this topology. Thus we have obtained
\bprop
If the set $\CL^+$ of oriented lines of a compact projective plane is equipped with the
topology defined above, then the forgetful map $\psi: \CL^+ \to \CL$ becomes a
two-sheeted covering map. \ok
\eprop
From this, we obtain the next proposition almost as a corollary. By a \it section \rm of
the map $\psi$ we mean a map $\sigma$ in the opposite direction such that the composition
$\psi \circ \sigma$ is the identity map of $\CL$.
\bprop
\item{a)} Let $(P,\CL)$ be a projective plane with lines of dimension $l$.
The space $\CL^+$ of oriented lines is connected if $l = 1$ (in fact,
it is then a 2-sphere) and is disconnected otherwise.
\item{b)} The forgetful map $\psi: \CL^+ \to \CL$ admits a section if and only if $l \ge 2$.
\eprop
\bpf
If $l \ge 2$, then the point space $P$ is simply connected, because the complement of a
point $P\setminus \{x\}$ deformation retracts onto every line not containing $x$,
cp. \cite{CPP}, 51.26.
Exchanging the roles of points and lines, we see that $\CL$ is simply connected as well.
Therefore, a two-sheeted covering of $\CL$ must be the topological sum of two copies of $\CL$, each
of which is mapped homeomorphically onto $\CL$ by the covering map.\\
For $l=1$, the space $\CL^+$ is connected. Indeed, the pencil $\CL^+_x$ of oriented
lines passing through a given point $x$ is connected (in fact, homeomorphic to $\BS_1$)
since we have a continuous surjection from the boundary of a disc containing $x$ in
its interior
onto $\CL_x^+$, sending a point $p$ to the line $p \vee x$ oriented locally from $p$ to $x$.
Furthermore, any two oriented lines belong to the pencil of oriented lines
passing through their
point of intersection.
Now the space $\CL$ is homeomorphic to $P_2\BR \approx \BS_2/\pm{\rm id}$ for $l=1$,
see \cite{CPP}, 42.10.
Thus, the 2-sphere is the only connected two-sheeted covering space of $\CL$.
A section to $\psi$ would be a homeomorphism by domain invariance, a contradiction.
\epf
For our purposes, the preceding result is not enough. We need an explicit construction
of sections in the case $l \ge 2$. This will be made possible with the aid of
projectivity groups $\Omega(L)$. \\
It is rather easy to see that projectivities do not preserve
orientation of lines if $l=1$. For example, in the real affine plane (which canonically extends
to the projective plane) consider the lines $X$ (the $x$-axis) and $Y$ (the $y$-axis) and
the points $p = (1,-1)$ and $q =(-1,-1)$. The projectivity $\omega(X,p,Y,q,X)$ (to be read as
a composition of perspectivities in the obvious manner) reverses the orientation of the
$x$-axis. However, for $l \ge 2$, we shall prove that all lines can be oriented in such a
way that all perspectivities and, hence, all projectivities preserve these orientations.
By the definition of the topology on $\CL^+$, this then provides a section to the
forgetful map $\psi$.\\
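Before doing so, let us make the orientation reversal in the above example explicit by a direct computation. The perspectivity with center $p$ sends the point $(t,0)$ of $X$ to the point $(0,\frac{t}{1-t})$ of $Y$, and the perspectivity with center $q$ sends this point back to $(-t,0)$:
$$\omega(X,p,Y,q,X): \quad (t,0)\ \mapsto\ \Bigl(0,\ \frac{t}{1-t}\Bigr)\ \mapsto\ (-t,0).$$
The projectivity $t \mapsto -t$ is induced on the projective line by the matrix ${\rm diag}(-1,1)$ of negative determinant, hence it indeed reverses orientation.\\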
As a preparation, we endow every group $\Omega(L)$ of projectivities with the compact open
topology, which turns it into a topological transformation group of the space $L$.
Next we note that a path $\sigma: [0,1] \to \Omega(L)$ corresponds to an isotopy on $L$,
and so the maps $\sigma(0)$ and $\sigma(1)$ are either both orientation preserving or
both orientation reversing, because they have the same effect on the top homology group.
Together with the above example, this shows that $\Omega(L)$ is not pathwise connected in
the case of the real projective plane. However, we have the following
well-known
\ble
For $l \ge 2$, all groups $\Omega(L)$ of projectivities are pathwise connected.
\ele
\bpf
We follow \cite{wind}, Theorem 3.3. Consider a projectivity
$$\omega = \omega(L_1,x_1,L_2,x_2,...,L_{n-1},x_{n-1},L_n)$$
with $L_n = L_1=L$. We shall construct a path in $\Omega(L)$ joining
$\omega$ to the identity. We choose a point $x$ not on any of the lines $L_i$ and join
every $x_i$ to $x$
by a path $x_i(t)$ in the connected set $(x \vee x_i) \setminus (L_i \cup L_{i+1})$
(or by a constant path if $x = x_i$). Replacing every $x_i$ by $x_i(t)$ in the definition
of $\omega$ we obtain
a path $\omega(t)$ in $\Omega(L)$. Since all projection centers of $\omega(1)$ are
identical, $\omega(1)$ is the identity, and $\omega(0) = \omega$.
\epf
Now we obtain a geometric construction of sections of the forgetful map $\psi$.
\bthm\label{planeorient}
In a compact projective plane of dimension $2l \ge 4$ it is possible in exactly two ways to
orient all lines in such a way that all perspectivities are orientation preserving.
\ethm
\bpf
Start with one line $L$ and orient it in one of the two possible ways. The condition
on perspectivities then forces our way of orienting all other lines. If conflicts
arise as we
transfer orientations from line to line, then it means that two projectivities from $L$ to some
line $K$ take the orientation of $L$ to distinct orientations of $K$. Then the quotient
of these projectivities
is an orientation reversing projectivity of $K$ to itself. This is impossible, because
$\Omega(K)$ is path connected, which implies that every projectivity of $K$ to itself
is isotopic to the identity and hence induces the identity on the top homology group.
\epf
Note that the orientations constructed in this proof depend continuously on the lines, by
the very definition of the topology on $\CL^+$. Thus we have in fact obtained a
section to the forgetful map.
\subsection{Oriented spreads}\label{spr}
We are now ready to study the orientation properties of spreads of $\Pd$, which are
crucial to us
since parallelisms are built from spreads. In fact, the results of this section,
together with the continuity result Theorem \ref{orsprbyinters} below, constitute
the most subtle steps
towards our final goals.
When we first introduced oriented parallelisms
in \cite{so3}, we were content with a very simple approach. A spread is a certain set $\CS$ of
lines of a projective 3-space, which is homeomorphic to the 2-sphere. Hence, the two-sheeted
covering $\CS^+$ is necessarily disconnected, and we defined an orientation of $\CS$ to be a
choice of one of the two connected components of this cover. In the present situation, we need
closer control of the orientations of the lines $L\in \CS$ involved here, so we need to refine our definition. \\
First we recall the definition of a spread; compare Section 64 of \cite{CPP}.
Consider the line space $P_{3,1}$ of $\Pd$. A compact set
$\CS \subseteq P_{3,1}$ is a (topological) \it spread \rm if each point $x$ of $\Pd$ lies on a unique
element $S_x \in \CS$. By compactness, the map $x \to S_x$ is continuous, and $\CS$ is homeomorphic to
the 2-sphere (this will become apparent later). \\
One reason for studying spreads is that a spread $\CS$ defines an \it affine translation
plane \rm
$\CA_\CS$. We may consider $\CS$ as a subset of the Grassmann manifold $G_{4,2}$,
i.e., as a set of 2-dimensional vector subspaces of $\BR^4$. The plane $\CA_\CS$ has
point set $\BR^4$, and its lines are all translates
$L+v$, where $L\in \CS$ and $v \in \BR^4$. Thus, $\CS$ is the pencil of lines
containing the origin $0\in \BR^4$ (which incidentally explains why $\CS \approx \BS_2$). \\
The projective plane associated with $\CA_\CS$ is the \it projective translation plane \rm
$\CT_\CS$ defined by $\CS$. There is a particularly nice description of this plane, which is somewhat hidden in the proof of Theorem 64.4 of \cite{CPP}. (That book chapter contains a
thorough introduction to translation planes.) The construction of $\CT_\CS$ starts from $\Ps{4}$.
Choose a hyperplane $H$ of that projective space. Then $H$ is isomorphic to $\Pd$,
and we may consider $\CS$ as a set of lines of $H$. We simply
write $P_4$ for the point set $P_{4,0}=P_4\BR$ of $\Ps{4}$.
Now the points of $\CT_\CS$ are
\begin{itemize}
\item the points of $P_4$ not belonging to $H$ and
\item the elements of $\CS$.
\end{itemize}
This partitions the point set $P_4$ into disjoint sets
(most of them singletons), and the topology of the point set $T$ of $\CT_\CS$ is
the resulting quotient topology.
The lines of $\CT_\CS$ are
\begin{itemize}
\item the elements of $P_{4,2}$ (2-spaces in $\Ps{4}$) that meet $H$ in a line
$S \in \CS$ and
\item the set $\CS$ itself.
\end{itemize}
Incidence of points and lines is given by inclusion, and
the topology of the line set is again a suitable quotient topology. \\
Now it is important to note that the topology of $\CS$ considered as a set of
points of $\CT_\CS$
is the same as the topology of $\CS$ inherited from $P_{3,1} = G_{4,2}$,
the line space of $H$. By definition, $\CS$
considered as
the point set of the line $\CS$ of $\CT_\CS$ is the quotient of the point set
$H_0$ of $H$ with respect to the map $H_0 \to \CS$ that sends a point to the
unique spread line containing it. As we noted earlier, this map is continuous
when $\CS$ is given the topology coming from the line
space of $H$. It is also closed (by compactness) and surjective, hence it is a
quotient map and our claim follows. \\
Remembering Theorem \ref{planeorient}, we now conclude that orienting a spread $\CS$
(the special line of $\CT_\CS$) as a manifold amounts to the same thing as orienting \it all \rm
\, lines of $\CT_\CS$. The affine plane $\CA_\CS$ is obtained from $\CT_\CS$ by deleting
the special line $\CS$ and all its points, and so by orienting all lines of $\CT_\CS$
we have in particular oriented all elements of $\CS$, considered as 2-spaces in
$\BR^4$ (because they are lines of $\CA_\CS$). \\
In terms of the affine plane, this transfer of orientations is easy to visualize:
$\CS$ is the pencil of lines passing through the origin, and in order to orient $S\in \CS$,
apply to $S$ a translation $s \to s+v$ with $v \notin S$. Then consider the bijection
$$s+v \to (s+v)\vee 0$$
of $S+v$ onto $\CS \setminus \{S\}$ and transfer the orientation via these two maps.
This is, indeed, much simpler, but in order to prove consistency by applying
Theorem \ref{planeorient}, we prefer the projective version. \\
Summarizing these constructions, we obtain the following theorem.
\bthm\label{spreadorient}
Let $\CS \subseteq P_{3,1}$ be a compact spread of $\Pd$. The construction given above
defines a bijective correspondence between orientations
of the manifold $\CS \approx \BS_2$ on the one hand and coherent orientations of all lines of
the projective translation plane $\CT_\CS$ associated with $\CS$ on the other hand, i.e.,
sections to the orientation-forgetting map $\psi$ of the latter plane.
In particular, orienting $\CS$ as a manifold amounts to the same as orienting, in a coherent way, all the vector 2-spaces $S \in \CS$ as manifolds (or as vector spaces). \ok
\ethm
\subsection{Oriented parallelisms}
A \it parallelism \rm $\Pi$ on $\Pd$ is a set of pairwise disjoint spreads
covering the line set
$P_{3,1}$. In other words, a parallelism partitions the line set and therefore
is often thought of as an equivalence relation on the line set. Likewise,
a \it parallelism of oriented lines \rm or briefly, an \it
oriented parallelism \rm $\Pi^+$ is defined as a set of oriented spreads partitioning the
set $P^+_{3,1}$ of oriented lines. If $\Pi$ is a parallelism, then an oriented
parallelism $\Pi^+$ can be obtained from it by taking all oriented spreads $\CS^+$ such that
$\psi \CS \in \Pi$, where as always $\psi$ denotes the map that forgets orientations.
This process will be called \it unfolding, \rm and
oriented parallelisms obtained in this way will be said to be \it foldable\rm.
Our investigation
is motivated by the existence of non-foldable oriented parallelisms with nice properties,
the discovery of
which is described in \cite{so3}, and by the desire to find more non-foldable
examples worth looking at.
In a topology to be introduced shortly, an ordinary parallelism is a
non-orientable 2-manifold (a projective plane) and an oriented parallelism is a
2-sphere.
To avoid possible confusion, we stress here that
the possible orientations of this sphere are irrelevant to us.
So the term `oriented parallelism' is merely a shorthand for `parallelism of oriented lines'.
This contrasts with the situation for spreads, compare the preceding subsection.
For the convenience of the reader, we recall
some basic facts obtained in \cite{so3}, adding some details that require extra
attention in the present situation. \\
We need a condition to ensure topological well-behavedness of a parallelism $\Pi$ or $\Pi^+$.
A good choice for a topology on $\Pi$ or $\Pi^+$
is the topology defined by the \it Hausdorff metric \rm \cite{Tuzh},
which we introduce next.
Let $(X,d)$ be a compact metric space. The \it hyperspace \rm $h(X)$ is the set of
all compact subsets of $X$, endowed with the metric
$$d_h(A,B) = \max\{\max_{a\in A}d(a,B), \max_{b \in B}d(b,A)\},$$
where, as usual, $d(a,B) = \min_{b \in B} d(a,b)$. By the \it Hausdorff topology, \rm
we mean the topology on $h(X)$ induced by the Hausdorff metric.
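For finite subsets of a Euclidean space, the Hausdorff distance can be computed
directly from the definition; the following Python fragment (an illustration only)
does so.
\begin{verbatim}
import numpy as np

def hausdorff(A, B):
    # A, B: arrays of shape (k, n) and (m, n), finite subsets of R^n
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    d_aB = D.min(axis=1)   # d(a, B) = min over b of d(a, b)
    d_bA = D.min(axis=0)   # d(b, A) = min over a of d(a, b)
    return max(d_aB.max(), d_bA.max())

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.1]])
print(hausdorff(A, B))     # 1.00498..., dominated by the point (1, 0)
\end{verbatim}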
Two metric topologies agree if they
induce the same notion of convergence. In the case of the Hausdorff metric, this notion
is captured by the following Lemma, which follows easily from the definition,
in view of compactness.
\ble\label{hausdconv}
Let $(X,d)$ be a compact metric space. A sequence $A_n \in h(X)$ converges to $A \in h(X)$
if and only if the following two conditions are satisfied.
\item{i)} If a sequence of points $a_n \in A_n$ converges to a point $a$ in $X$, then
$a \in A$.
\item{ii)} Every point $a \in A$ is the limit of some sequence of points $a_n \in A_n$.
\ok
\ele
In what follows, we want to treat ordinary and oriented parallelisms simultaneously. We write
$\Pi^*$ and $L^* \in P_{3,1}^*$ to indicate that we are thinking of both possibilities. By
$$\Pi^*(L^*)$$
we denote the unique spread in $\Pi^*$ that contains $L^*$. Similarly,
$$\Pi^*(x,L^*)$$
denotes the unique line that belongs to the same spread as $L^*$ and contains a given point $x$.
In this way, we define two maps
$P_{3,1}^* \to \Pi^*$ and $P_{3,0}\times P_{3,1}^* \to P_{3,1}^*$; each of them
contains the same information as $\Pi^*$ itself, which should justify the abuse
of notation. We have the following
\bprop\label{toppar}
Let $\Pi^*$ be an ordinary or oriented parallelism on $\Pd$. The following conditions
are equivalent.
\item{1)} $\Pi^*$ is compact with respect to the Hausdorff topology on $h(P_{3,1}^*)$.
\item{2)} The map $\Pi^*: P_{3,1}^* \to h(P_{3,1}^*)$ defined above is continuous with respect to the Hausdorff topology on the hyperspace.
\item{3)} The map $\Pi^*: P_{3,0}\times P_{3,1}^* \to P_{3,1}^*$ defined above is continuous.
\eprop
\bpf
Assume (1). In order to show (3), we prove sequential continuity.
If $(x_n,L_n^*) \to (x,L^*)$, we have to show that $\Pi^*(x_n,L_n^*) \to \Pi^*(x,L^*)$. By compactness of $\Pi^*$, we may assume that
$\Pi^*(L_n^*)$ converges to some spread $\CS^* \in \Pi^*$.
Then by Lemma \ref{hausdconv}, we have
$L^* \in \CS^*$, and so $\CS^* = \Pi^*(L^*)$. Moreover, we may assume that $\Pi^*(x_n, L_n^*)$
converges to some line $K^* \in P^*_{3,1}$. Then $x \in K^*$ since incidence is closed,
and $K^* \in \Pi^*(L^*)$ by Lemma \ref{hausdconv}, because $\Pi^*(x_n,L_n^*) \in \Pi^*(L_n^*)$
and $\Pi^*(L_n^*) \to \Pi^*(L^*)$. Thus, $K^* = \Pi^*(x,L^*)$. This proves (3).
Now assume (3) and suppose that $L_n^* \to L^*$. In order to prove (2),
we have to show that $\Pi^*(L_n^*)\to
\Pi^*(L^*)$. Thus we have to verify conditions (i) and (ii) of Lemma \ref{hausdconv}.
So let $K^*_n\in \Pi^*(L_n^*)$ and assume that $K_n^* \to K^* \in P^*_{3,1}$.
We have to show that $K^* \in \Pi^*(L^*)$. Choose points $x_n \in K_n^*$. We may assume that
$x_n \to x \in P_{3,0}$. Then by (3), we have
$$K_n^* = \Pi^*(x_n,L_n^*) \to \Pi^*(x,L^*) \in \Pi^*(L^*).$$
This proves (i). For condition (ii), let $K^* \in \Pi^*(L^*)$. We are looking for lines
$K_n^* \in \Pi^*(L_n^*)$ such that $K_n^* \to K^*$. Choose any point $x \in K^*$. Then
$K_n^* := \Pi^*(x,L_n^*)$ belongs to $\Pi^*(L_n^*)$, and these lines converge to
$\Pi^*(x,L^*) = K^*$ by (3).
Finally, (2) implies (1) because the map considered in (2) restricts to a bijective map
from the star of all lines or oriented lines containing any chosen point $x$ to $\Pi^*$. The star is compact, and (1)
follows.
\epf
We shall say that $\Pi^*$ is a \it topological (oriented) parallelism \rm if it satisfies these
equivalent conditions.
Line stars are homeomorphic to $P_2\BR$ in the ordinary case and to $\BS_2$ in the oriented case. Hence, the last step of the above proof shows:
\ble
A topological parallelism $\Pi$ is
homeomorphic to $P_2\BR$, and a topological oriented parallelism $\Pi^+$ is homeomorphic to $\BS_2$.
\ok
\ele
\section{Klein correspondence for oriented regular spreads}\label{Klein}
Within $\Ps{5}$, the Klein correspondence sets up a model of $\Pd$ that is well suited for studying the line space $P_{3,1}$. We summarize without proofs properties of the Klein correspondence that can be found
in the literature, in particular in \cite{Knarr}, \cite{Stevin}, \cite{gldirect}.
The Klein model arises from the index 3 bilinear form
$f(x,y) = x_1y_1 + x_2y_2 +x_3y_3 - x_4y_4-x_5y_5-x_6y_6$ on $\BR^6$. There are several
kinds of special subspaces of $\BR^6$ with respect to this form:
\begin{itemize}
\item Totally isotropic one-dimensional subspaces. Viewed as points of $\Ps{5}$, they constitute
the \it Klein quadric \rm $K$, which represents the line set of $\Pd$.
\item Two sorts of totally isotropic 3-dimensional subspaces (i.e., $f$ induces the zero
form on them). They represent the points and hyperplanes of $\Pd$. Incidence with lines is represented as reverse or direct inclusion, respectively.
\item Four-dimensional subspaces of signature $(1,3)$ or $(3,1)$, i.e., $f$ induces a non-degene\-rate form of index one on them. Viewed as projective subspaces, they are
3-dimensional and intersect $K$ in an elliptic quadric. They will be most important to us,
and we call them (1,3)-spaces or (3,1)-spaces, respectively. As a vector space, a (1,3)-space
contains a 3-dimensional negative definite subspace, and a (3,1)-space contains a 3-dimensional
positive definite subspace.
\end{itemize}
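In coordinates, the type of a subspace is easy to determine numerically: one forms the
Gram matrix of $f$ restricted to the subspace and counts positive and negative
eigenvalues. The following fragment (an illustration only) does this for two of the
cases above.
\begin{verbatim}
import numpy as np

J = np.diag([1.0, 1.0, 1.0, -1.0, -1.0, -1.0])  # Gram matrix of f

def subspace_type(basis):
    # signature (p, n) of f restricted to the row span of `basis`
    B = np.asarray(basis, dtype=float)
    G = B @ J @ B.T
    ev = np.linalg.eigvalsh(G)
    return int((ev > 1e-9).sum()), int((ev < -1e-9).sum())

x = np.array([1.0, 0, 0, 1.0, 0, 0])
print(x @ J @ x)                               # 0.0: span(x) is a point of K
print(subspace_type(np.eye(6)[[0, 1, 2, 3]]))  # (3, 1): a (3,1)-space
\end{verbatim}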
We use the Klein correspondence mainly to describe regular spreads. A \it regular
spread \rm of $\Pd$ is a spread isomorphic to the \it complex spread\rm , which defines the complex affine plane. In other words, the latter spread consists of the one-dimensional complex
subspaces of $\BC^2 = \BR^4$. We have the following Lemma, proved, e.g.,
in \cite{Stevin}, Proposition 13.
\ble\label{regspr}
In the Klein model of $\Pd$, the regular spreads are precisely the sets $\CS=K\cap P$,
where $P$ is a $(1,3)$-space or a $(3,1)$-space.
\ele
Here, and frequently in what follows, we identify projective subspaces
with their point sets, so that the lattice $\Pf$ is viewed as a lattice of subsets of $P_5\BR$.
Our aim now is to obtain an oriented version of the above lemma, together with a
continuity assertion. This will be achieved by the construction given below.\\
The space of oriented lines of $\Pd$ is a two-sheeted covering of the line space,
which we identify with the Klein quadric $K$. Therefore,
we shall write $K^+$ for the space of oriented lines
and call it the \it oriented Klein quadric. \rm
The covering map $K^+ \to K$ will be denoted $\psi$, as usual. It is well-known that
$K^+ \approx \BS_2 \times \BS_2$. An easy proof can be given using the oriented left and right
Clifford parallelisms, see \cite{so3}, Proposition 2.4. \\
\bf Construction. \it Step 1. Let $P^+ \in P^+_{5,3}$ be an oriented (3,1)-space,
considered as projective space. The projective dimension of $P^+$ is odd, so,
as explained in Section \ref{orspac}, we are given an orientation of
$P^+$ as a differential manifold.
\it Step 2. Let $P = \psi(P^+)$. The elliptic quadric $\CS = P \cap K$,
which represents a regular spread,
separates $P$ into two components, and is the boundary of both.
The closure of the `interior' component is a
compact ball and inherits an orientation from $P^+$. This orientation induces an
orientation on the boundary $\CS$ as follows: Let $B=(v_2,v_3)$ be an ordered
basis of the tangent space $T_s\CS$ at $s \in \CS$ and let $v \in T_sP$
be an outward pointing tangent vector.
Then the orientation of $\CS$ at $s$ is defined by $B$ if $(v,v_2,v_3)$
is a positive basis of $T_sP$.
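To illustrate the convention in a familiar local model (an illustration only, not part
of the construction): identify a neighbourhood in $P$ with $\BR^3$, oriented by the
standard basis, and let $\CS$ be the unit sphere. At $s = (0,0,1)$ the outward vector
is $v = (0,0,1)$, and $B = ((1,0,0),(0,1,0))$ defines the orientation of $\CS$ at $s$,
since
$$\det \left(\begin{array}{ccc} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{array}\right) = 1 > 0,$$
the columns being $v, v_2, v_3$.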
\it Step 3. Finally, we apply Theorem \ref{spreadorient}, and from the orientation of
the manifold $\CS$ we obtain an orientation of all lines of the spread, such that we end up
with one particular connected component $\CS^+$ of $\psi^{-1}(\CS) \subseteq K^+$.
We shall denote this set by
$$\CS^+ = \CS^+(P^+) \subseteq K^+.$$
\rm Our aim now is to show that the map $P^+ \to \CS^+(P^+)$ from $P_{5,3}^+$ to the
hyperspace $h(K^+)$
is continuous. This is the main step in the proof of Theorem \ref{hfd} below, which
describes a way of constructing all oriented regular parallelisms.
We need some preparation concerning group actions. For more details on the group actions
discussed here, see \cite{Stevin}. \\
We consider the group
$$\Sigma = \mathop{\rm {SL}}(4,\BR).$$
By definition, this group acts on $\BR^4$; it also acts (ineffectively) on $\Pd$.
Via the Klein correspondence, the latter action is translated to an $f$-orthogonal
action on $\BR^6$ and on
$\Ps{5}$, where, as earlier, $f$ denotes the form defining the Klein quadric.
In fact, $\Sigma$ induces an index 2 subgroup of the projective orthogonal group.
In particular,
the action of $\Sigma$ leaves invariant all the sets of subspaces of special type enumerated
at the beginning of this section. The actions on these sets of subspaces are transitive.
The action also lifts to the sets $P^+_{5,l}$ of oriented
subspaces. \\
A transitive action of a Lie group $\Gamma$ on a manifold $M$ always
admits \it local sections\rm.
That is, given a point $x \in M$, there exist a neighborhood $U$ of $x$ and a continuous
map $u \to \gamma_u$ from $U$ into $\Gamma$ such that $\gamma_x$ is the
identity and $u= \gamma_u(x)$ for all $u\in U$.
This can be shown by proving that the map $\Gamma \to M$ sending $\gamma$ to $\gamma(x)$
is a submersion. See \cite{naga} for a more general result.
\ble\label{section}
The actions of $\Sigma$ on the sets of oriented and non-oriented special
subspaces of $\Ps{5}$ listed at the beginning of this section admit local
sections. \ok
\ele
\ble \label{hausdconv2}
Let $\Gamma$ be a topological group acting on a metric space $X$ and let $Y \subseteq X$
be compact. If $\gamma_\nu$ is a sequence in $\Gamma$ converging to the unit element, then
$\gamma_\nu(Y) \to Y$ in the Hausdorff metric.
\ele
\bpf The continuity of the map $\Gamma\times X \to X$
sending $(\gamma,x)$ to $\gamma(x)$ implies that
$\gamma_\nu \to {\rm id}$ uniformly on the compact set $Y$. The claim follows easily.
\epf
\ble\label{grassm}
If $\Ps{k}$ is viewed as a lattice of subsets of $P_k\BR$,
then the topology of the Grassmannians
$P_{k,l}$ is induced by the Hausdorff metric.
\ele
\bpf
We use the continuity properties of the topological projective space $\Ps{k}$ given,
e.g., in \cite{Kuehne} or \cite{handb}.
Suppose that $P_\nu \to P$ in $P_{k,l}$. Choose a subspace $Q$ of dimension $k-l-1$
that is in general position to all these spaces. Then the projection map $\pi_\nu: P \to
P_\nu$ sending $p \in P$ to $(p\vee Q)\wedge P_\nu$ is a homeomorphism, and
$\pi_\nu$ converges to the identity of $P$. Hence for every point $p\in P$, the sequence
$\pi_\nu p \in P_\nu$ converges to $p$, and condition (ii) of Lemma \ref{hausdconv}
is satisfied. Condition (i) follows from the fact that the incidence relation is closed.
This shows that $P_\nu \to P$ with respect to the Hausdorff metric. Alternatively, this can be
deduced from the preceding two lemmas.
Conversely, assume that $P_\nu \to P$ in the Hausdorff sense. Let $X\subseteq P$
be a set of $l+2$ points spanning $P$ (a \it frame \rm for $P$). Then for every $\nu$
there is a set $X_\nu \subseteq P_\nu$ of cardinality $l+2$ such that $X_\nu \to X$ in
the Hausdorff sense. It follows that for $\nu$ large enough, $X_\nu$ is a frame for $P_\nu$,
and the continuity of forming spans implies that $P_\nu \to P$ in $P_{k,l}$.
\epf
\bthm\label{orsprbyinters}
The above construction defines a continuous map $P^+ \to \CS^+(P^+)$ from the set of
oriented $(3,1)$-spaces in $P^+_{5,3}$ to the hyperspace $h(K^+)$ of the
oriented Klein quadric.
\ethm
\bpf
1. Let $P_\nu^+ \to P^+$ be a convergent sequence of $(3,1)$-spaces in $P^+_{5,3}$.
According to Lemma \ref{section},
there is a sequence $\sigma_\nu \in \Sigma$, converging to the identity, such that
$P^+ = \sigma_\nu P^+_\nu$ for all $\nu$. Using Lemma \ref{hausdconv2}, we infer that
$\psi P^+_\nu = P_\nu$
converges to $\psi P^+ = P$ in the Hausdorff metric; compare also
Lemma \ref{grassm}. Since $K$ is invariant
under $\Sigma$, the same also holds for
the intersections with $K$, that is, $\CS_\nu = P_\nu \cap K \to \CS = P\cap K$
in the hyperspace $h(K)$.
2. The group $\Sigma$ of isomorphisms respects all steps of the construction that turns
an oriented projective subspace of odd dimension into an oriented manifold. Therefore, the oriented manifolds $P_\nu^+$ converge to the oriented manifold $P^+$.
By this we mean that there is a
sequence of orientation preserving
embeddings of $P_3^+\BR$ onto
$P_\nu^+\subseteq P_5$ that converges uniformly to an orientation preserving embedding onto
$P^+$. Indeed, the restrictions of the maps $\sigma_\nu^{-1}$ form such a sequence.
3. The group $\Sigma$ respects all structural features that were used in the construction of an
orientation of the spreads $\CS_\nu$ and $\CS$ from the orientations of $P_\nu$ and $P$,
given in Step 2 of the construction following Lemma \ref{regspr}. As before, this
implies that $\CS_\nu^+ \to \CS^+$ as oriented manifolds.
4. The group $\Sigma$ acts on both $\BR^4$ and $\Pd$.
The group elements $\sigma_\nu$ send the spread $\CS_\nu$, considered as a
subset of $G_{4,2}$, to the spread $\CS$. Consequently, $\sigma_\nu$ is an isomorphism between
the affine translation planes defined by these spreads, and extends to an
isomorphism of the associated projective planes. Thus $\sigma_\nu$ preserves all
steps in the construction of orientations on the elements of $\CS_\nu$ and
of $\CS$. Consequently, $\sigma_\nu$
sends $\CS^+(P_\nu^+)$ to $\CS^+(P^+)$. Moreover, $\sigma_\nu$
converges to the identity on $K^+$,
hence as before we may conclude that $\CS^+(P_\nu^+) \to \CS^+(P^+)$ in the
Hausdorff metric, as desired.
\epf
\section{Oriented $\it hfd$ line sets and first main result}\label{HFD}
We begin with a topological Lemma. It is probably known, but the author has not found it in the literature. If $X$ is a topological space, we let $h_2(X)$ denote the set of subsets $A \subseteq X$
with cardinality $\# A = 2$. We topologize this as the quotient
$$h_2(X) = \left ( (X\times X) \setminus \Delta_X \right ) /\langle s \rangle,$$
where $\Delta_X$ denotes the diagonal $\{(x,x) \vert \, x \in X\}$ and $\langle s \rangle$
is the group generated by the switching map
$s: (x,y) \to (y,x)$. If $X$ is metric, then this topology is also induced by the Hausdorff metric, which is why we choose the symbol $h_2$.
\ble\label{2-1}
Let $q: \tilde Y \to Y$ be a two-sheeted covering map of connected Hausdorff spaces, and let $X$ be a compact space. Let $\tilde g:X \to \tilde Y$ be continuous and $g = q\circ \tilde g$.
Suppose that all inverse images $g^{-1}(y)$, $y\in Y$, have cardinality 2 and that the resulting map
$g^{-1}:Y \to h_2(X)$ is continuous.
Then the map $\tilde g$ is bijective and, in fact, a homeomorphism.
\ele
\bpf
Let $U\subseteq Y$ be an open set which is evenly covered, that is, $q^{-1}(U)$
is a union of two open subsets $U_1, U_2$ which are both mapped homeomorphically
onto $U$ by $q$.
If $\tilde g$ maps the two $g$-inverse images of $u\in U$ into the same sheet $U_i$,
then the images are in fact equal, and
the same happens for nearby points $u'$, by continuity of $g^{-1}$.
On the other hand, if those images are distinct, then the same holds in a neighborhood of $u$.
This shows that the cardinality of $\tilde g (g^{-1}(y))$, $y \in Y$, is locally constant.
By connectedness of $Y$, this cardinality is always 1 or always 2. In the latter case,
$\tilde g$ is bijective and hence a homeomorphism by compactness. In the former case,
looking at the sheets again one sees that
the set $\tilde g(X)$ and its complement are both open and nonempty,
a contradiction to connectedness.
\epf
\begin{Example}
\rm The following shows that the assumption about continuity of the inverse
is indispensable in the Lemma above. We take $\tilde Y = X = \BS_2$ and $Y= P_2\BR$, the real projective plane. There is the two-sheeted covering $q: \tilde Y \to Y$
sending $y$ to $\pm y$. Consider $g = q\circ \tilde g$,
where $\tilde g$ is either the identity map or the folding map
$(x,y,z) \to (x,y,\vert z \vert)$. In the second case, $g^{-1}$ is discontinuous at the
equator ($z=0$), and $\tilde g$ is neither injective nor surjective.
\end{Example}
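The discontinuity of $g^{-1}$ at the equator can be made explicit numerically; the
following fragment (an illustration only) computes the fibers of $g$ for the folding
map and shows that the Hausdorff distance between the fiber over a class near the
equator and the fiber over an equatorial class stays close to 2.
\begin{verbatim}
import numpy as np

def g_preimage(y):
    # fiber of g = q o fold over the class {y, -y}, fold(x,y,z) = (x,y,|z|)
    a, b, c = y
    if abs(c) > 1e-12:
        if c < 0:
            a, b, c = -a, -b, -c          # normalize so that c > 0
        return [np.array([a, b, c]), np.array([a, b, -c])]
    return [np.array([a, b, 0.0]), np.array([-a, -b, 0.0])]

def hdist(A, B):
    D = np.array([[np.linalg.norm(u - w) for w in B] for u in A])
    return max(D.min(axis=1).max(), D.min(axis=0).max())

y0 = np.array([1.0, 0.0, 0.0])            # a class on the equator
for c in [0.1, 0.01, 0.001]:
    yc = np.array([1.0, 0.0, c]) / np.linalg.norm([1.0, 0.0, c])
    print(c, hdist(g_preimage(yc), g_preimage(y0)))   # stays close to 2
\end{verbatim}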
We now return to the Klein model of $\Pd$, and consider the polarity $\pi_5$ defined by the
bilinear form $f$ of signature $(3,3)$. First we note that the polar $\pi_5(x)\in P_{5,4}$
of a point $x \in K$ (which represents a line of $\Pd$) is the \it tangent hyperplane \rm of
$K$ at that point. This is not to be confused with the tangent vector space $T_xK$ of the differential manifold $K$,
which is of no concern to us at the moment.
A line $L \in P_{5,1}$ is called an \it exterior line \rm with respect to
$K$ if $L\cap K = \emptyset$. We note that this is the case if and only if $L$ is a
subspace of type $(2,0)$ or $(0,2)$, i.e., the form $f$ is positive or negative
definite on $L$ considered as a two-dimensional vector space.
Then $\pi_5(L) \in P_{5,3}$ is a $(1,3)$-space or a $(3,1)$-space, respectively, and defines a regular spread $\pi_5(L)\cap K$. \\
Betten and Riesinger \cite{hyper}
defined a \it hyperflock determining line set \rm or shortly, an \it hfd set
\rm to be a set $\CH \subseteq P_{5,1}$ of exterior lines such that every tangent hyperplane
$\pi_5(x)$, $x \in K$, contains exactly one line from $\CH$. Since $\pi_5$
is an antiautomorphism
of the lattice $\Ps{5}$ (i.e., it reverses inclusions), this implies that every
$x\in K$ is contained in exactly one element of $\pi_5(\CH)$. In other words, this set
defines a regular parallelism $\Pi (\CH)$ by taking intersections with $K$.
Moreover, every regular
parallelism arises in this way, and the parallelism $\Pi (\CH)$ is topological
if and only if $\CH$ is compact; compare \cite{gldirect}. Now we imitate this
in the oriented case, but we need to change the pattern. For example,
inclusion is defined only for non-oriented subspaces, so we cannot directly mimic
the above definition of an \it hfd \rm line set.
\begin{Definition}
\rm a) An \it oriented hfd line set \rm or briefly, an $\it hfd^+$
set is a
set $\CH^+\subseteq P^+_{5,1}$ of oriented exterior lines
such that
\begin{itemize}
\item
for every $x\in K$, there are exactly two lines $H_i^+\in \CH^+$, $i=1,2$, such that
the tangent hyperplane $\pi_5(x)$ contains $\psi H_1^+$ and $\psi H_2^+$, and
\item
the set $\{H_1^+,H_2^+\}$ of these oriented lines depends
continuously on the point $x$.
\end{itemize}
b) If $\CH^+$ is an $\it hfd^+$ set, we denote by
$$\Pi^+(\CH^+)=\CS^+(\pi_5^+\CH^+)$$
the set of all oriented spreads $\CS^+(\pi_5^+ H^+)\subseteq h(K^+)$, $H^+\in \CH^+$, as in Theorem \ref{orsprbyinters}.
\end{Definition}
In contrast to the non-oriented case, the definition of $\it hfd^+$ sets is quite useless
without the condition on continuity of the inverse. This is because it works properly only in
connection with Lemma \ref{2-1}; compare also Example \ref{patholog}.
As a compensation, compactness can be deduced
from this condition.
\bprop\label{hfdcomp}
Let $\CH^+$ be an $\it hfd^+$ set. Then
a) $\CH^+$ is compact.
b) For each $H^+ \in \CH^+$, there is a point $x \in K$ such that $\psi H^+ \subseteq \pi_5(x)$.
\eprop
\bpf
For assertion (b), note that exterior lines are of type $(2,0)$ or $(0,2)$.
For every $x\in K$, the tangent hyperplane $\pi_5(x)$ contains subspaces of both those types,
and the group $\Sigma$ is transitive on the subspaces of either type, whence (b) follows.
Now assertion (a) follows, because $K$ is compact. Indeed, from the continuity property of
$\it hfd^+$ sets, we infer that the set of non-ordered pairs $\{H_1^+,H_2^+\}$ of
oriented lines contained in some tangent hyperplane $\pi_5(x)$ is compact. By (b) it
follows easily that $\CH^+$ is compact, as well.
\epf
Here is our first main result. It describes \it all \rm topological oriented
regular parallelisms, whereas the final results of the next section only deal
with the case that $\dim \mathop{\rm span} \CH^+=3$.
\bthm\label{hfd}
If $\CH^+$ is an $\it hfd^+$ set, then the set
$\Pi^+(\CH^+) = \CS^+(\pi_5^+\CH^+)$ of oriented
spreads is a topological oriented regular parallelism, and every topological oriented
regular parallelism arises in this way.
\ethm
\bcor
Every $\it hfd^+$ set $\CH^+$ is homeomorphic to the 2-sphere $\BS_2$, and it
consists entirely either of lines of type $(2,0)$ or of lines of type $(0,2)$.
\ecor
\it Proof of Corollary. \rm The polarity $\pi_5^+$ is continuous, and Theorem \ref{orsprbyinters}
asserts that the map $P^+ \to \CS^+(P^+)$ is continuous with respect to the Hausdorff metric.
Since $\CH^+$ is compact by Proposition \ref{hfdcomp}, it follows that the oriented parallelism
$\Pi^+(\CH^+)$ is homeomorphic to $\CH^+$,
and we know that oriented parallelisms are homeomorphic to the 2-sphere. In particular,
$\CH^+$ is connected, and the second assertion follows. \ok
\\
\it Proof of Theorem \ref{hfd}. \rm Using Theorem \ref{orsprbyinters} and
Proposition \ref{hfdcomp}, we obtain that $\Pi^+ = \Pi^+(\CH^+)$ is a set
of oriented spreads, and that this set is compact with respect to the Hausdorff metric.
We want to use
Lemma \ref{2-1} in order to show that every oriented line belongs to exactly
one of these oriented spreads. At first sight, there seems to be no mapping
available to which the Lemma
might be applied.
However, instead of the condition just stated, it suffices to consider the star
$\mathfrak{L}_p^+$ of all oriented lines passing through some point $p$ of
$\Pd$ and to prove that every oriented line in this star
belongs to exactly one spread in $\Pi^+$. We view the star as a subset of $K^+$.
Now we have the two-sheeted covering map
$\psi: \mathfrak{L}^+_p \to \mathfrak{L}_p$, and we have a map $\tilde g: \CH^+ \to
\mathfrak{L}^+_p$ that sends an oriented line $H^+ \in \CH^+$ to the unique oriented
line of the oriented spread
$\CS^+(\pi_5^+H^+)$ containing $p$.
This map is continuous
by an argument similar to the proof of Proposition \ref{toppar}.
By the properties of the $\it hfd^+$ set and those of the polarity,
every line $L \in \mathfrak{L}_p$ belongs to the spreads
$\psi \CS^+(\pi_5^+H_i^+)$ for exactly two lines $H_1^+, H_2^+ \in \CH^+$
and, moreover, the set $\{ H_1^+, H_2^+\}$ depends continuously on $L$.
This means that the composite map $g = \psi \circ \tilde g$
has inverse images $g^{-1}(L)$ of cardinality 2, which depend continuously on $L$.
Now the Lemma tells us that $\tilde g$ is bijective,
which completes the proof that $\Pi^+$ is a compact oriented parallelism.
It remains to prove the converse, i.e., that every compact oriented parallelism comes from
some $\it hfd^+$ set. This is obtained without difficulty by retracing all the steps.
One special point is to show that the set of
oriented lines from the $\it hfd^+$ set that are contained in a tangent hyperplane
$\pi_5(x)$, $x \in K$, really depends on $x$ continuously. The reason for
this is the fact that the 2-valued inverse
map of $\psi: K^+ \to K$ is continuous because $\psi$ is a covering map. Now the
continuity property of the given oriented parallelism implies that the
set of two oriented spreads containing the elements of $\psi^{-1}(x)$ depends continuously on
$x\in K$, and this translates to the corresponding property of $\CH^+$.
\ok\\
Every non-oriented $\it hfd$ set yields an $\it hfd^+$ set by taking its inverse
image with respect to $\psi$, the forgetful map. $\it hfd^+$ sets obtained in this way will
be called \it foldable. \rm Clearly we have the following.
\bprop
The oriented parallelism $\Pi^+(\CH^+)$ associated with an $\it hfd^+$ set $\CH^+$ is
foldable if and only if $\CH^+$ is foldable. \ok
\eprop
\section{Oriented generalized line stars and second main result}\label{GL}
\begin{Definition} \rm Let $\Pi^* = \Pi^*(\CH^*)$ be an oriented or non-oriented
compact regular parallelism.
a) The space
$$R = R(\Pi^*) := \mathop {\rm span} \CH^*$$
will be called the \it ruler \rm of the parallelism $\Pi^*$.
b) The number
$$\dim \Pi^* := \dim R = \dim \mathop {\rm span} \CH^*$$
will be called the \it dimension \rm of $\Pi^*$. This terminology was introduced, in the
non-oriented case, by Betten and Riesinger \cite{hyper}.
\end{Definition}
If $\dim \Pi^+ = 2$, then $\CH^+ \approx \BS_2$ must consist of all oriented lines in $R$.
According to \cite{hyper}, Lemma 2.7, $\psi (\Pi^+)$ is the ordinary Clifford parallelism.
Hence $\Pi^+$ is its oriented unfolding, the oriented Clifford parallelism.
In this section, we shall deal with the
case $\dim \Pi^* = 3$. In this case, passing to a dual object one obtains a
very convenient description for $\CH^*$. In the non-oriented case, this was
shown by Betten and Riesinger \cite{Thaswalker}; see \cite{gldirect} for a simple proof.
We shall see that this direct
proof carries over to the oriented case almost \it verbatim. \rm The only problem is to capture
the right kind of dual object by a suitable definition. \\
Let an $\it hfd^+$ set $\CH^+$ with 3-dimensional span be given.
We know that either all
lines of $\CH^+$ are
(oriented) $(2,0)$-spaces, or all these lines are $(0,2)$-spaces.
The two cases are interchanged by
the map $(x_1,x_2,x_3,y_1,y_2,y_3) \to (y_1,y_2,y_3,x_1,x_2,x_3)$, so without
loss of generality we may assume that the elements of $\CH^+$ are of type $(0,2)$.
The following result is essentially contained in the proof of \cite{Stevin}, Theorem 23.
We give an abbreviated proof for the sake of completeness.
\bprop
Let $\Pi^* = \Pi^*(\CH^*)$ be a 3-dimensional compact regular parallelism, and assume
that the elements of $\CH^*$ are of type $(0,2)$ (i.e., negative definite). Then the
ruler $R(\Pi^*) \in P_{5,3}$ is of type $(1,3)$. In particular, it meets the Klein quadric
$K$ in an elliptic quadric $Q = K\cap R$.
\eprop
\bpf
If $\CH^*$ were a line star $\mathfrak{L}_y^*$ of $R$, then the point $y$ would belong to all tangent
hyperplanes $\pi_5(x)$, $x \in K$, a contradiction. Therefore, $R$ is spanned by two lines
$H_1$, $H_2$ from $\psi \CH^*$. Thus $\pi_5R = \pi_5H_1 \wedge \pi_5H_2$ meets $K$ in
the (empty) intersection of two spreads from $\Pi^*$ and is an exterior line. This implies
our claim.
\epf
Now we want to start with a potential ruler $R$ of type $(1,3)$ in $\Pf$
and to find an $\it hfd^*$ set $\CH^*$ in $R$. We use the polarity $\pi_3$ of $R$ induced
by $\pi_5$, in other words, the polarity defined by the restriction to $R$ of the form $f$.
We say that a point of $R$ is \it non-interior \rm with respect to $Q = R\cap K$
if it either belongs to $Q$
or is contained in a line that misses $Q$. Let $\mathop{Ni}Q$ be the set of
non-interior points. By a \it 2-secant \rm of $Q$ we mean a line meeting $Q$ in two distinct points.
In the non-oriented case, it is known from \cite{Thaswalker} and \cite{gldirect} that
the compact $\it hfd$ sets in $R$ are precisely the sets $\pi_3G$, where $G$ is a
compact set of 2-secants of $Q$ such that every point of $\mathop{Ni}Q$ belongs to
exactly one line from $G$. A set $G$ of this kind is called a \it generalized line star\rm,
abbreviated \it gl \rm star.
The simplest example is an ordinary line star $G=\mathfrak{L}_x$; then $\CH = \pi_3 \mathfrak{L}_x$ is the
line set of the plane $\pi_3x$, and we are in the Clifford case. We ask how to define
the correct oriented analogue of a \it gl \rm star. The answer is the following.
\begin{Definition}
\rm Let $Q$ be an elliptic quadric in a 3-dimensional real projective space.
A set $G^+$
of oriented 2-secants of $Q$ is called an \it oriented gl star \rm or just a
$\it gl^+$ star if every point $x$ of
$\mathop{Ni}Q$ is incident with exactly two oriented lines from $G^+$, and if the set of
these two oriented lines depends continuously on the point $x$.
\end{Definition}
As with $\it hfd^+$ sets, there is no version of the notion of $\it gl^+$ stars
without the continuity condition, but compactness can be deduced from it.
\ble
Every $\it gl^+$ star $G^+$ is compact, but compactness cannot replace the continuity
condition in the above definition.
\ele
\bpf
Compactness of $G^+$ follows directly from compactness of $Q$ via the continuity property.
That compactness does not conversely imply the continuity condition will be demonstrated
by Example \ref{patholog}.
\epf
\bthm\label{gl}
Let $R$ be a 3-space of type $(1,3)$ in $\Pf$. The $\it hfd^+$ sets in
$R$ are precisely the sets $\CH^+ = \pi_3^+G^+$, where $G^+$ is a $\it gl^+$ star
with respect to $Q = R\cap K$. The space $R$ is generated by $\CH^+$ if and only if
$G^+$ is not an ordinary star of oriented lines.
\ethm
Combining this with Theorem \ref{hfd} we obtain our main result for the 3-dimensional case:
\bcor\label{main}
The compact oriented regular parallelisms of $\Pd$ of dimension $d \le 3$ are
precisely the parallelisms
$$\Pi^+(G^+) := \Pi^+(\pi_3^+G^+),$$
where $G^+$ is a $\it gl^+$ star in a 3-space $R$ of $\Pf$ that meets the Klein
quadric $K$ in an elliptic quadric of $R$. The parallelism is Clifford if and only if
$G^+$ is an ordinary star of oriented lines. \ok
\ecor
If the ruler $R$ is of type $(1,3)$, as we have assumed previously, then
the 3-spaces $\pi_5^+\pi_3^+L^+$, $L^+ \in G^+$, which define the oriented spreads of
$\Pi^+(G^+)$, are of type $(3,1)$. \\
The proof of Theorem \ref{gl} uses the following Lemma.
\ble\label{conic}
\rm (\cite{gldirect}, 2.4) \it Let $R \in P_{5,3}$ be a subspace of type $(1,3)$ and
consider the elliptic
quadric $Q = R \cap K$. For every $x\in K\setminus Q$, the tangent hyperplane $\pi_5(x)$
intersects $Q$ in a non-degenerate conic, and every non-degenerate conic in $Q$ arises
in this way.
\ele
\bpf
The quadric $Q$ represents a spread in $\Pd$, hence the line represented by
$x \notin Q$ intersects infinitely many lines in this spread. This means that $x$ is
$f$-orthogonal to infinitely many elements $q \in Q$. These elements then belong to
the plane $\pi_5(x)\wedge R$.
The converse follows using transitivity properties of the group $\Sigma$.
\epf
\it Proof of Theorem \ref{gl}. \rm The proof is practically the same as the proof of
\cite{gldirect}, Theorem 2.3. Only the words `exactly one' have to be replaced by `exactly two'
where appropriate. For the sake of completeness, we give the proof that a $\it gl^+$ star
$G^+$ yields an $\it hfd^+$ set; the converse direction is similar.
For $L^+ \in G^+$ and $x \in K$, we have that
$\psi H^+ = \psi \pi_3^+(L^+)\in \psi \CH^+$ is contained in the
tangent hyperplane $\pi_5(x)$ if and only if it is contained in $\pi_5(x) \cap R$, and
this happens if and only if $\psi L^+$ contains the point $r(x) := \pi_3(\pi_5(x) \cap R)$.
If $x\notin Q$, then $r(x) \in \mathop{Ni}Q$ by Lemma \ref{conic}. If $x \in Q$, then again,
$r(x) = x \in \mathop{Ni}Q$. There are exactly two lines
$L_i^+ \in G^+$, $i= 1,2$, containing $r(x)$. The set $\{L_1^+, L_2^+\}$ depends continuously
on $r(x)$, which in turn is a continuous function of $x$. \ok\\
Every compact non-oriented $\it gl$ star $G$ has the continuity property, that is,
the line from $G$ passing through a non-interior point $p$ continuously depends on $p$,
see \cite{gldirect}, Theorem 3.2.
This implies that taking the inverse image of $G$ with respect to $\psi$,
we obtain a $\it gl^+$ star $G^+$. Such examples are said to be \it foldable. \rm Clearly,
we have the following.
\bprop
Let $G^+$ be a $\it gl^+$ star. Then $G^+$ and the associated $\it hfd^+$ set
$\pi_3^+G^+$ as well as the associated oriented parallelism $\Pi^+(G^+)$ are either
all foldable or all non-foldable.
\eprop
\section{Examples}\label{ex}
With a non-oriented \it gl \rm star $G$, there is associated the involutory
homeomorphism $\sigma : Q \to Q$ which sends a point $x \in Q$ to the second
point of intersection of the line $G_x \in G$ containing $x$. This involution
carries all information about $G$. In the oriented case, the intersection points
of $L^+ \in G^+$ with $Q$ can be distinguished: at one point, $L^+$ enters the closed
ball $B$ bounded by $ Q$ (i.e., a positive tangent vector points inward), and at the
other point, the line leaves $B$. Let us call these points $e(L^+)$
and $l(L^+)$, respectively.
\ble
If $G^+$ is a $\it gl^+$ star with respect to $Q$, then at every point of $Q$ exactly one
oriented line $L^+ \in G^+$ enters the closed ball $B$ bounded by $Q$, and exactly one
leaves $B$.
\ele
\bpf
As every oriented line $L^+ \in G^+$ has an entry point and a leave point,
both the set of entry points and the set of leave points are nonempty. Lines entering
at $x_n \to x$ cannot converge to a line leaving
at $x$, hence both sets are closed. So the connected quadric $Q$
is a disjoint union of three closed sets: the set $EL$ of points where one line enters and
one line leaves, the set $EE$ where two lines enter, and the set $LL$ where two lines leave.
That these sets are closed follows from the continuity property in the
definition of $\it gl^+$ stars. Only one of the three sets can be nonempty, and
this set can only be $EL$.
\epf
\begin{Definition} \rm
Given a $\it gl^+$ star $G^+$, define a map $\rho: Q \to Q$ by sending $x\in Q$ to the
leave point of the unique line $G_x^+$ that enters the ball $B$ at $x$.
We call this map the \it characteristic map \rm of $G^+$.
\end{Definition}
The following lemma is now obvious.
\ble\label{charmap}
If $G^+$ is a $\it gl^+$ star with characteristic map $\rho$, then $\rho$ is a
fixed point free homeomorphism,
and $G^+$ is the set of all lines
$G_x^+ = x \vee \rho(x)$, oriented in such a way that the interval
$G_x \cap B$ is traversed from $x$ to $\rho(x)$. \ok
\ele
This opens up a huge set of candidates for $\it gl^+$ stars. Every fixed
point free homeomorphism of the
2-sphere may be tested for its potential of defining a $\it gl^+$ star.
The two oriented lines passing through $x \in Q$ are
then $x \vee \rho(x)$ and $\rho^{-1}(x) \vee x$, so the continuity condition for the
set of oriented lines of the $\it gl^+$ star passing through a given point is
satisfied on $Q$ at least.
In general, the test will fail nevertheless,
but one may suspect that there are far more successes than with ordinary $gl$ stars,
where the characteristic map is an involution. We pursue this a bit further.
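To make this concrete, the following sketch (an illustration only; whether the
incidence condition holds for this particular $\rho$ is not claimed here) takes
$\rho = -R$ for a rotation $R$ about the $z$-axis by an angle $\theta \ne \pi$, which
is a fixed point free homeomorphism of $\BS_2$, and lists the two candidate oriented
lines through a point of $Q$.
\begin{verbatim}
import numpy as np

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

R = rot_z(0.7)
rho     = lambda x: -(R @ x)      # fixed point free for theta != pi
rho_inv = lambda x: -(R.T @ x)    # inverse, since R is orthogonal

def candidate_lines(x):
    # the two oriented lines of G+(rho) through x, each given by the
    # (entry point, leave point) of its interval inside the unit ball
    return (x, rho(x)), (rho_inv(x), x)

x = np.array([1.0, 0.0, 0.0])
print(candidate_lines(x))
print(np.allclose(rho(rho_inv(x)), x))    # True: consistency check
\end{verbatim}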
\begin{Definition} \rm
An (oriented) $\it gl^+$ star is said to be \it foldable \rm if by forgetting orientations
it yields an ordinary $\it gl$ star. As before, we call an oriented
parallelism $\Pi$ \it foldable \rm if by forgetting orientations we get an ordinary
parallelism.
\end{Definition}
Again we have two obvious facts:
\bprop\label{fold}
a) A $\it gl^+$ star is foldable if and only if its characteristic map is an involution.
b)
The oriented regular parallelism defined by a $\it gl^+$ star is foldable if and only if the
$\it gl^+$ star is foldable.
\ok
\eprop
\begin{Example}\label{nonfold}
\rm Here is a class of non-foldable $\it gl^+$ stars. Compare also Example \ref{patholog}.
In \cite{torus},
a large set of rotationally symmetric $\it gl$ stars were constructed. They are defined by
their characteristic involutions $\sigma: \BS_2 \to \BS_2$. On the equator ($z=0$) the involutions
agree with the antipodal map, that is, $\sigma (x,y,0) = - (x,y,0)$. Now we take two
different involutions $\sigma_1$ and $\sigma_2$ of this kind and define a homeomorphism
$\rho \BS_2 \to \BS_2$ by sending $(x,y,z)$ to $\sigma_1(x,y,z)$ if $z \ge 0$, and to
$\sigma_2(x,y,z)$ if $z \le 0$. Then it is easily checked that this map $\rho$ is
the characteristic map of a non-foldable parallelism.
\end{Example}
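The involutions used in this example are the explicit ones from \cite{torus}, which we
do not reproduce here. Purely as an illustration of the gluing, the following sketch
builds a map $\rho$ of the same shape from a hypothetical family of rotationally
symmetric fixed point free involutions that agree with the antipodal map on the
equator; these stand-ins are \it not \rm the involutions of \cite{torus}.
\begin{verbatim}
import numpy as np

def involution(p):
    # sigma rotates by pi about the z-axis and moves the height z to h(z),
    # where h is a decreasing involution of [-1, 1] with h(0) = 0;
    # p = 1 gives h(z) = -z, i.e., the antipodal map
    def h(z):
        return -z**(1.0 / p) if z >= 0 else (-z)**p
    def sigma(v):
        x, y, z = v
        hz = h(z)
        lam = np.sqrt((1 - hz**2) / (1 - z**2)) if abs(z) < 1 else 0.0
        return np.array([-lam * x, -lam * y, hz])
    return sigma

sigma1, sigma2 = involution(1.0), involution(2.0)

def rho(v):
    # glue along the equator, where both involutions are antipodal
    return sigma1(v) if v[2] >= 0 else sigma2(v)
\end{verbatim}
One checks that each $\sigma$ above is a fixed point free involution of the unit
sphere, and that the glued map $\rho$ is a fixed point free homeomorphism which fails
to be an involution as soon as the two exponents differ.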
\bthm
The examples described above define compact oriented regular parallelisms.
These parallelisms are
non-foldable, and they admit a 2-dimensional torus group of automorphisms.
\ethm
The \it proof \rm \, is almost automatic.
For the automorphism group,
compare \cite{Stevin}, Theorem 31 or
\cite{torus}, Proposition 4.1. We note that a 2-torus is as much symmetry as an
oriented regular parallelism can have without being Clifford, see \cite{torus}, Theorem 2.1.
\\
If a $\it gl^+$ star is to be constructed from its characteristic map, then the
defining incidence condition can be relaxed and the continuity property can be replaced by
an orientation rule. The analogous result for ordinary $\it gl$ stars is Proposition 5.1 of
\cite{torus}.
\bthm
Let $Q$ be an elliptic quadric in a real projective 3-space $R$ and let $\rho: Q\to Q$
be a fixed point free homeomorphism. Let $G^+=G^+(\rho)$ be the set of all oriented lines
$G_x^+ = x \vee \rho(x)$, oriented in such a way that the interval
$\psi(G_x^+) \cap B$ is traversed from $x$ to $\rho(x)$.
If each point of $\mathop{Ni}Q$
is incident with at most two of these oriented lines, then $G^+$ is a $\it gl^+$ star.
In particular,
the continuity property of $\it gl^+$ stars comes for free in this situation.
\ethm
\bpf
1. We may assume that $Q$ is the unit sphere in the affine space $\BR^3$, of which $R$
is the projective closure. By construction, $G^+$ is compact.
First we look at `affine' points $a \in A :=\BR^3 \cap \mathop{Ni} Q$.
The orientation of a line $L^+ \in G^+$ defines an order relation on the affine line
$\psi L^+ \cap \BR^3$. By the construction of $G^+$, the entry and leave points of $L^+$
satisfy $e(L^+) < l(L^+)$. We call $L^+$ a positive line with respect to $a\in L^+$ if
$l(L^+) \le a$. The only other possibility is $a \le e(L^+)$, in which case
we call $L^+$ a negative line with respect to $a$.
2. Let $L_n^+\in G^+$ be a sequence of oriented
lines that are positive with respect to points $a_n$. If both sequences converge, then
$\lim L_n^+$ is positive with respect to $\lim a_n$. Let $D$ be the set of points incident
both with a positive line and with a negative line from $G^+$ (we write $D$ here because
$B$ already denotes the closed ball). If $a_n\in D$, $a_n\to a$
and if the lines $L_n^+ \in G^+$ are positive with respect to $a_n$, then these
lines accumulate at some line $L^+ \in G^+$ that is positive with respect to $a$, by
compactness of $G^+$, and
similarly for negative lines. Hence, we have
$a \in D$. It follows that in fact
$L_n^+$ converges to $L^+$,
and the desired continuity condition is satisfied on $D$.
We proceed to show that $D = A$.
3. It suffices to show that every point of $A$ is on a negative line. For $x \in Q$,
we have the oriented line $G_x^+ \in G^+$. We
define a ray $W_x$ as the connected component of $\psi (G_x^+)\cap A$ that contains $x$.
For $r \ge 1$, let $g_r(x)$ be the intersection point of $W_x$ with the sphere $rQ$
of radius $r$. Then $G_x^+$ is a negative line with respect to $g_r(x)$. Let $h :A\to Q$
be the map sending $y$ to $y\Vert y\Vert^{-1}$. Then $k_r := h\circ g_r$, $r \ge 1$,
is a family of
pairwise homotopic maps $Q\to Q$, with $k_1 = \rm id$. So all of these maps have mapping degree one
and, hence, are surjective. This proves our claim, and the continuity condition is proved
on the set $A$.
4. It remains to prove continuity for points on the plane $I$ at infinity. Every point
$y \in I$ has a neighborhood $U$ homeomorphic to $\BR^2 \times [-1,1]$ such that
$U\cap I$ corresponds to $\BR^2 \times \{0\}$ and that $U \cap rQ$ for some large number $r$
corresponds to $\BR^2 \times \{-1,1\}$. By compactness of $Q$, and for large enough $r$,
there is a neighborhood
$V\subseteq U$ of $y$ such that each oriented line meeting
both $V$ and $Q$
intersects both the top layer $\BR^2 \times \{1\}$ and the bottom
layer $\BR^2 \times \{-1\}$ of $U$.
These oriented lines may therefore be divided into the set of upward lines,
traversing $U$ from bottom to top, and downward lines. By the previous steps, each point
of $V \setminus I$ is incident with both an upward line
and a downward line from $G^+$.
As before, we may conclude that the same is true for all points of $V$, and the
continuity condition is guaranteed for these points.
\epf
We conclude with an example demonstrating the necessity of the
condition on continuity of the inverse in the definitions of $\it gl^+$ stars
and, hence, also in that of $\it hfd^+$ sets. It shows that compactness is not a possible
surrogate condition.
\begin{Example} \label{patholog}
\rm We start by defining an ordinary $\it gl$ star $G_1$ with respect to $Q = \BS_2$ in $\Pd$.
It contains all lines of the plane $z=0$ that pass through the origin $o = (0,0,0) \in \BR^3$,
but no other lines containing $o$. This $\it gl$
star does not have rotational symmetry. This is necessarily so, because rotationally
symmetric $\it gl$ stars inevitably contain the rotation axis. Incidentally, this
is the simplest known example with this property. For more such examples, see Section 7.2 of
\cite{aut}.
Consider the points
$$p_t = (\sqrt{1-t^2}, 0, t) \in \BS_2 \quad \mbox{and} \quad q_t = (f(t),0,0),$$
where $t \in [0,\frac{1}{2}]$ and where $f: [0,\frac{1}{2}]\to [\frac{1}{2} ,0]$ is a
strictly decreasing bijection. Let $L_t$ be the line $p_t \vee q_t$. Then the slope of $L_t$
strictly decreases from 0 to $-\infty$, and $L_{\frac{1}{2}}$ is parallel to the
$z$-axis $Z$. Now rotate the line $L_t$ about the axis $A_t$ parallel to $Z$ and
passing through $q_t$. These rotated lines fill a cone $C_t$, and we define $G_1$
to be the set of all
lines obtained by rotating $L_t$ for all $t\in [0,\frac{1}{2}]$.
The non-interior part $C_t\cap \mathop{Ni}Q$ lies completely
inside $C_s$ for all $s < t$.
By continuity, every point of $\mathop{Ni}Q$ is incident with exactly one line from
$G_1$, and we have defined a $\it gl$ star.
Now consider the ordinary star of lines $G_2 = \frak L_o$, and observe that $G_1\cap G_2$
consists precisely of the horizontal lines passing through $o$.
This gives us two possibilities to try forming a $\it gl^+$ star.
1. We take the horizontal lines with both orientations, the lines of $G_1$ with
upward orientation, and the lines of $G_2$ with downward orientation.
Like Example \ref{nonfold}, this gives
a nice non-foldable $\it gl^+$ star satisfying the continuity condition,
but without rotational symmetry.
2. We take the elements of $G_1 \cup G_2$, all of them with upward orientation
(and two orientations for the horizontal ones). This is a compact set
$A\approx \BS_2$ of oriented lines such that
every point $p \in \mathop{Ni} Q$ lies on precisely two
distinct oriented lines in $A$ (here we use the
information about the intersection $G_1\cap G_2$). But the
set of those two lines does not depend on $p$ continuously when $p$ is in the plane $z = 0$.
So $A$ is \it not \rm a $\it gl^+$ star. By Corollary \ref{main}, it does not
define an oriented parallelism.
\end{Example}
The possibility of such examples quite puzzled the author until he understood
the relevance of the continuity condition. What the example tells us is that,
in contrast with non-oriented
$\it gl$ stars, compactness does not suffice to ensure the continuity property of
a $\it gl^+$ star. Yet for oriented parallelisms themselves, compactness is
enough, by Proposition \ref{toppar}.
\bibliographystyle{plain}
\section{Introduction}
A ``First Impression" is the mental image one person forms upon encountering another \cite{wikifirstimpress}. The mental image can be based on many characteristics such as facial expressions, actions, physical appearance, manner of interaction, body language, etc. According to research in Psychology \cite{willis2006first}, first impressions are formed even with very limited exposure (as little as 100ms) to unfamiliar faces. Forming a first impression is usually modeled as Personality-traits recognition. Determining Personality-traits automatically would be helpful in human resourcing and recruitment processes, and an automatic analysis of Personality-traits can help people to train themselves.
The problem can be represented as in Table \ref{inputoutput}. A short video of a person's interview is given as input, and the output is expected to consist of five fractional values in the range [0, 1] representing Extraversion, Agreeableness, Conscientiousness, Neuroticism and Openness. These five are collectively known as the ``Big-Five personality-traits".
There has not been much work in the literature on First-impressions recognition, though researchers have explored Emotion recognition\cite{cowie2001emotion,cohen2000emotion,cohen2003facial,kim2013deep,Kessous,Kim}, a related area in terms of the type of problem and the features (hand-crafted as well as deep) used.
There are many ways in which people express their emotions, among which facial expressions are the most useful\cite{cowie2001emotion,cohen2000emotion,cohen2003facial,kim2013deep}. Cohen et al. \cite{cohen2000emotion} used HMM based models to categorize the emotions in a video into six types: (1) happy, (2) angry, (3) surprise, (4) disgust, (5) fear, (6) sad. Their extended work \cite{cohen2003facial} on multilevel HMMs performed automatic segmentation and recognition from a continuous signal. Xiaowei Zhao et al. \cite{Kim} proposed iterative Multi-Output Random Forests for face analysis in images, combining three tasks, namely facial landmark detection, head pose estimation and facial expression recognition. Deep features have also been used for facial analysis. Javier G. Razuri et al. \cite{David} extracted features from regions around the eyes and mouth for recognizing human emotions. Their idea was that information related to emotions can be captured by tracking the expressions around the eye and mouth regions. The extracted features are then input into a feed-forward neural network trained by back-propagation for classification of emotions.
Although facial expressions form an important cue, they alone are not sufficient to recognize emotions effectively. Loic et al. \cite{Kessous} used facial expressions, gestures and acoustic analysis of speech based features. In their work, they used a Bayesian classifier to recognize one of eight types of emotions (Anger, Despair, Interest, Pleasure, Sadness, Irritation, Joy and Pride). They presented uni-modal (trained separately on each of the three types of features), bi-modal (combining two modes) and multi-modal (combining all three modes) classifiers. Among all combinations, they observed that multi-modal classification yielded the best performance.
We propose two end-to-end trained deep learning models that use audio features and face images for recognizing first impressions. In the first model, we propose a Volumetric (3D) convolution based deep neural network for determining personality-traits. 3D convolution was also used by Ji et al. \cite{ji3dconv}, although for the task of action recognition in videos from unconstrained settings. In the second model, we formulate an LSTM (Long Short-Term Memory) based deep neural network for learning temporal patterns in the audio and visual features. Both models concatenate the features extracted from the audio and visual data in a later stage. This is in the spirit of the observation made in some studies \cite{Kessous} that multi-modal classification yields superior performance.
Our contribution in this paper is two-fold. First, we show that mining temporal patterns in audio and visual features is an important cue for recognizing first impressions effectively. Second, we show that such patterns can be mined from a few frames selected in a stochastic manner, rather than from the complete video, while still predicting first impressions with good accuracy. The proposed methods were ranked second in the ChaLearn LAP APA2016 challenge (first round)\cite{chalearn1stround1stimpressions}.
This paper is organized as follows. In Section \ref{sec:methodology}, we describe the two models in detail and the steps followed to prepare the input data and features for the models. Section \ref{sec:stochastic_training} describes the novel stochastic method of training and testing the networks. In Section \ref{sec:experiments_results}, we discuss the Apparent Personality Analysis 2016: First Impressions Dataset, the evaluation protocol, the implementation details and the experimental results obtained in two phases of the competition. Section \ref{sec:conclusions} concludes the paper providing future direction for the work.
\begin{longtable}{ c| c }
\multicolumn{1}{c}{Input} &
\multicolumn{1}{c}{Target} \\\hline
\includegraphics[scale=0.15]{./pics/KHQJhOzdrYo_003.png} & \includegraphics[scale=0.15]{./pics/KHQJhOzdrYo_003-target.png} \\ \hline\hline
\includegraphics[scale=0.15]{./pics/xgRqkTXmZko_000.png} & \includegraphics[scale=0.15]{./pics/xgRqkTXmZko_000-target.png} \\ \hline\hline
\caption{Example of Input and Target. Input is the raw video containing a person's interview \& output will be the predicted personality-traits values. \label{inputoutput}}
\end{longtable}
\section{Methodology}
\label{sec:methodology}
We propose two bi-modal deep neural network architectures that have two branches, one for encoding audio features and the other for visual features. Inputs to both the audio and visual branches of the model are generated after pre-processing the raw video data. Features extracted from both the branches are fused in a later stage of the model, while the complete network is trained end-to-end. In this section, we describe the pre-processing that was performed on the data and the architecture of models in detail.
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth,height=0.3\textheight]{./pics/Preprocessing_pipe.png}
\caption{Data pre-processing pipeline, where the face aligned images are extracted from image frames and spectral audio features are extracted from audio data.\label{preprocess}}
\end{figure}
\subsection{Audio data pre-processing}
Given a video, we extract its audio component and split it into N non-overlapping partitions, as shown in Figure \ref{preprocess}. From each partition, we extract the mean and standard deviation of certain properties of the audio signal (Table \ref{audiofeats}); a code sketch is given after the table. We use an open-source python based audio processing library called pyAudioAnalysis \cite{giannakopoulos2015pyaudioanalysis,pyaudioanalysis} for this purpose. The hand-crafted feature vector has 68 dimensions, comprising the mean and standard deviation of the following attributes:
\begin{longtable}{ l | p{9cm} }
\multicolumn{1}{l}{\textbf{Attribute Name}} &
\multicolumn{1}{l}{\textbf{Description}} \\
\hline
Zero Crossing Rate & The rate of sign-changes of the signal during the duration of a particular frame\\\hline
Energy & The sum of squares of the signal values, normalized by the respective frame length. \\\hline
Entropy of Energy & The entropy of sub-frames' normalized energies. It can be interpreted as a measure of abrupt changes.\\\hline
Spectral Centroid & The centre of gravity of the spectrum.\\\hline
Spectral Spread & The second central moment of the spectrum.\\\hline
Spectral Entropy & Entropy of the normalized spectral energies for a set of sub-frames.\\\hline
Spectral Flux & The squared difference between the normalized magnitudes of the spectra of the two successive frames.\\\hline
Spectral Rolloff & The frequency below which 90\% of the magnitude distribution of the spectrum is concentrated.\\\hline
MFCCs & Mel Frequency Cepstral Coefficients form a cepstral representation where the frequency bands are not linear but distributed according to the mel-scale.\\\hline
Chroma Vector & A 12-element representation of the spectral energy where the bins represent the 12 equal-tempered pitch classes of western-type music (semitone spacing).\\\hline
Chroma Deviation & The standard deviation of the 12 chroma coefficients.\\\hline
\hline
\caption{Audio features extracted using pyAudioAnalysis \cite{pyaudiofeatures} \label{audiofeats}}
\end{longtable}
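The following Python sketch illustrates the audio feature extraction step. It is not the code used in our experiments: the partitioning helper, the window/step sizes (50 ms / 25 ms), and the file layout are assumptions, and the pyAudioAnalysis names shown (\texttt{readAudioFile}, \texttt{stFeatureExtraction}) follow older releases of the library; newer releases renamed these functions and return a (features, names) tuple.
\begin{verbatim}
import numpy as np
from pyAudioAnalysis import audioBasicIO, audioFeatureExtraction

def partition_audio_features(wav_path, n_partitions=6):
    fs, signal = audioBasicIO.readAudioFile(wav_path)
    chunks = np.array_split(signal, n_partitions)
    feats = []
    for chunk in chunks:
        # 34 short-term features per frame; assumed 50 ms window,
        # 25 ms step
        st = audioFeatureExtraction.stFeatureExtraction(
            chunk, fs, 0.050 * fs, 0.025 * fs)
        # mean and std over frames -> 68 values per partition
        feats.append(np.concatenate([st.mean(axis=1),
                                     st.std(axis=1)]))
    return np.stack(feats)  # shape: (n_partitions, 68)
\end{verbatim}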
\subsection{Visual data pre-processing}
The visual processing branch of the model takes as input a set of N aligned and segmented face images. We segment the face images to prevent the background from affecting the predictions, which should depend only on the features of the face (gaze direction, movements of the eyes, lips, etc.). We use facial landmark detection and tracking to segment the faces. The landmark points are then aligned to fixed locations, which gives us segmented face images that are also aligned.
We use an open-source C++ library, OpenFace \cite{baltru2016openface,openface}, for all the visual pre-processing tasks.
\subsection{Model Architecture}
We propose two models in our work, shown in figures \ref{multimodalconv} and \ref{multimodallstm}, respectively. We divide each video into N non-overlapping partitions. From each of the N partitions, both audio and visual features are extracted (figure \ref{preprocess}) and used as inputs to the models. Here, only the inter-partition variations are learned as temporal patterns, while the intra-partition variations are ignored. We do so to handle the redundancy between consecutive frames, especially in high-fps videos. As shown in figures \ref{convpipe} and \ref{lstmpipe}, the audio and visual features from each partition are passed through consecutive layers of the neural network. In our first model, the temporal patterns across the N sequential partitions are learned using a 3D convolution module, while in the second model we use an LSTM to learn the temporal patterns across the partitions. The kernel sizes and stride information are given in figure \ref{modelarch}. By empirical analysis, we fixed N at 6.
\begin{figure}[!th]
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width = 0.9\textwidth, height=0.6\textheight]{./pics/model_Convolution_cropped.png}
\caption{Bi-modal Volumetric Convolutional Neural Network architecture\label{multimodalconv}}
\end{subfigure}
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width = 0.9\textwidth, height=0.6\textheight]{./pics/model_LSTM_cropped.png}
\caption{Bi-modal LSTM Neural Network architecture\label{multimodallstm}}
\end{subfigure}
\caption{Model Architecture Diagram\label{modelarch}}
\end{figure}
\subsubsection{Volumetric (3D) convolution model:} Our first model is inspired by the work of Ji et al. \cite{ji3dconv}. The architecture is shown in figure \ref{multimodalconv} and the pipeline is demonstrated in figure \ref{convpipe}. The visual data processing branch learns the change in facial expressions from face-aligned images using 3D convolution. First, the 6 face-aligned, temporally ordered images of size $3\times 112\times 112$ are passed through a 3D convolution layer, followed by a ReLU and a 3D max-pooling layer. Both the 3D convolution and the max-pooling operate on a volume comprising the X, Y, and t dimensions. The resulting feature maps are in turn passed through a second set of similar layers of 3D convolution, ReLU, and 3D max-pooling, but with different kernel sizes (refer to figure \ref{multimodalconv} for the parameters). This is followed by another 3D convolution layer, which results in a single feature map of size $1\times21\times21$ that is flattened to a 441-dimensional feature vector. Simultaneously, the audio data processing branch receives a $6 \times 68$ dimensional feature vector, which is reduced to a 100-dimensional vector using a fully connected layer. The feature vectors from the audio and visual branches are concatenated to yield a 541-dimensional feature vector (100 from audio + 441 from visual data), which is then fed to a fully connected (FC) layer of 200 nodes and a ReLU layer, followed by another FC layer of 5 nodes with a sigmoid activation function. These 5 nodes represent the predicted values of the Big-Five personality traits.
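The following PyTorch-style sketch summarizes this architecture. It is an illustrative re-implementation rather than our Torch code; in particular, the kernel sizes are assumptions (the actual values appear in figure \ref{multimodalconv}), chosen so that a $3\times6\times112\times112$ input yields the $1\times21\times21$ map (441 features) described above.
\begin{verbatim}
import torch
import torch.nn as nn

class BiModal3DConvNet(nn.Module):
    def __init__(self, audio_dim=6 * 68):
        super().__init__()
        self.visual = nn.Sequential(
            nn.Conv3d(3, 16, (3, 5, 5)),   # -> (16, 4, 108, 108)
            nn.ReLU(),
            nn.MaxPool3d((1, 2, 2)),       # -> (16, 4, 54, 54)
            nn.Conv3d(16, 32, (3, 5, 5)),  # -> (32, 2, 50, 50)
            nn.ReLU(),
            nn.MaxPool3d((2, 2, 2)),       # -> (32, 1, 25, 25)
            nn.Conv3d(32, 1, (1, 5, 5)),   # -> (1, 1, 21, 21)
            nn.Flatten(),                  # -> 441
        )
        self.audio = nn.Linear(audio_dim, 100)
        self.head = nn.Sequential(
            nn.Linear(441 + 100, 200), nn.ReLU(),
            nn.Linear(200, 5), nn.Sigmoid(),  # Big-Five traits
        )

    def forward(self, frames, audio_feats):
        # frames: (B, 3, 6, 112, 112); audio_feats: (B, 6 * 68)
        v = self.visual(frames)
        a = self.audio(audio_feats)
        return self.head(torch.cat([v, a], dim=1))
\end{verbatim}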
\subsubsection{LSTM based model:} We designed our second model to learn the task from the temporal relationships within the input. The architecture and pipeline of the model are shown in figures \ref{multimodallstm} and \ref{lstmpipe}, respectively. We use LSTM units to capture the temporal patterns of the input data and predict the personality traits. Each aligned face image is passed through a series of spatial convolution, ReLU, and spatial max-pooling layers of varying kernel sizes (refer to figure \ref{multimodallstm} for the parameters). The generated feature maps are flattened to a 1024-dimensional feature vector, which is connected to a fully connected layer of 128 nodes. Simultaneously, the audio data (6 feature vectors of 68 dimensions) is passed through a 32-node fully connected layer, reducing each vector to 32 dimensions. The output feature vectors from the audio and visual data processing branches are then concatenated to yield 6 feature vectors of 160 dimensions (32 from audio + 128 from visual data for each of the 6 partitions), still in temporal order. These temporally ordered feature vectors are passed through an LSTM with an output dimension of 128: the LSTM takes a $6\times160$ dimensional input and outputs a sequence of six $128$-dimensional feature vectors. The output at each time step is passed through a 5-dimensional fully connected layer with a sigmoid activation function, so we obtain 6 predictions of the 5 personality traits. For each personality trait, we average the values predicted at the 6 LSTM output steps and thus obtain a single prediction value for each of the Big-Five personality traits.
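As before, the following PyTorch-style sketch of this model is illustrative only; the per-frame CNN kernels are assumptions (the actual values appear in figure \ref{multimodallstm}), chosen so that each $3\times112\times112$ face image flattens to the 1024 features stated above.
\begin{verbatim}
import torch
import torch.nn as nn

class BiModalLSTMNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.frame_cnn = nn.Sequential(
            nn.Conv2d(3, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 6), nn.ReLU(), nn.MaxPool2d(5),
            nn.Flatten(),              # 64 x 4 x 4 = 1024
            nn.Linear(1024, 128),
        )
        self.audio_fc = nn.Linear(68, 32)
        self.lstm = nn.LSTM(input_size=160, hidden_size=128,
                            batch_first=True)
        self.out = nn.Sequential(nn.Linear(128, 5), nn.Sigmoid())

    def forward(self, frames, audio_feats):
        # frames: (B, 6, 3, 112, 112); audio_feats: (B, 6, 68)
        B, T = frames.shape[:2]
        v = self.frame_cnn(frames.reshape(B * T, 3, 112, 112))
        v = v.view(B, T, 128)
        a = self.audio_fc(audio_feats)                # (B, 6, 32)
        seq, _ = self.lstm(torch.cat([v, a], dim=2))  # (B, 6, 128)
        preds = self.out(seq)                         # (B, 6, 5)
        return preds.mean(dim=1)  # average over the 6 time steps
\end{verbatim}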
\begin{figure}
\centering
\includegraphics[height=0.25\textheight]{./pics/Convolution_pipe.png}
\caption{Pipeline of 3D-Convolution model\label{convpipe}}
\end{figure}
\section{Stochastic Training and Testing}
\label{sec:stochastic_training}
Psychology research \cite{willis2006first} has observed that first impressions of unfamiliar faces can be formed even with exposure times as short as 100 ms. Those results suggest that judgments made after a 100-ms exposure correlate highly with judgments made in the absence of time constraints, i.e., that small exposure times are sufficient for participants to form an impression. Along similar lines, we hypothesize that deep models can learn effective representations for recognizing first impressions from a few randomly selected frames.
\subsection{Stochastic Training}
Both proposed models are trained using stochastic gradient descent (SGD). The parameters used for SGD are: learning rate = 0.05, weight decay = $5\times 10^{-4}$, momentum = 0.9, batch size = 128, learning rate decay $= 1\times 10^{-4}$.
As mentioned earlier (figure \ref{preprocess}), each raw video file is split into 6 non-overlapping partitions, and the audio as well as visual features are extracted from each partition individually. We propose to train the models on a combined feature set consisting of a single face-aligned image from each partition together with the pre-processed audio features of each partition. Since we use only 1 frame from each whole partition of the video data, multiple combinations of frames are possible for training. If there are N partitions and F frames per partition, and we take a single frame from each partition, then $F^N$ combinations of frames are possible per video. We set N to 6, and typically F is $\sim 75$ (considering 30 fps and videos of 15 seconds). Training the model with all $75^6$ combinations of frames is an overkill; empirically, we found that training on only a few hundred combinations (typically $\sim 500$) is enough for the model to generalize over the whole dataset.
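A minimal sketch of this sampling scheme is given below; the data layout (a list of per-partition frame lists) is hypothetical.
\begin{verbatim}
import random

def sample_frames(frames_per_partition):
    # frames_per_partition: list of N lists of frames, in temporal
    # order.  Visiting the partitions in order preserves the
    # temporal ordering of the selected frames.
    return [random.choice(p) for p in frames_per_partition]
\end{verbatim}
Calling this once per video at every epoch yields the fresh input combinations described above.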
\begin{figure}
\centering
\includegraphics[height=0.3\textheight]{./pics/LSTM_pipe.png}
\caption{Pipeline of LSTM model\label{lstmpipe}}
\end{figure}
Following the above explanation, the 6 input frames for model training (a single frame from each partition) are selected randomly while keeping the temporal ordering intact. At every epoch, the random selection yields a new input combination for each video.
This stochastic way of training produces new samples at every epoch and effectively ``regularizes'' the learning, thus improving the generalization of the model.
\subsection{Testing}
Testing the model faces the same issue of an exponential number of frame combinations per video. Empirically, we choose to use only a random subset (10 combinations) of the total possible combinations and take the average of the 10 evaluations as the personality-traits recognition result. The validation and test results suggest that the model and evaluation method perform well compared to the other submissions; the LSTM model stood at second place in the final evaluation phase of the competition.
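In code, this test-time procedure amounts to the following sketch, reusing \texttt{sample\_frames} from the previous section; \texttt{to\_tensor} is a hypothetical helper that batches the sampled frames for the model.
\begin{verbatim}
import torch

def predict_video(model, frames_per_partition, audio_feats,
                  n_draws=10):
    preds = [model(to_tensor(sample_frames(frames_per_partition)),
                   audio_feats)
             for _ in range(n_draws)]
    # average the 10 stochastic evaluations, one score per trait
    return torch.stack(preds).mean(dim=0)
\end{verbatim}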
\section{Experiments and Results}
\label{sec:experiments_results}
In this section, we first briefly describe the dataset and the evaluation protocol used in our experiments. We then provide the implementation details of our method and discuss the results.
\subsection{Dataset: Apparent Personality Analysis (APA) - First impressions}
In our validation experiments, we use the ChaLearn LAP 2016 APA dataset provided by the challenge organizers \cite{chalearn1stround1stimpressions}. The dataset has 6000 videos for training with ground-truth personality traits, 2000 videos for validation without ground truth (performance is revealed upon submission of predictions), and 2000 videos for testing (ground truth is withheld until the competition is finished). Each video is 15 seconds long and generally has 30 frames/second. The ground truth consists of fractional scores in the range 0 to 1 for each of the Big-Five personality traits: Extraversion, Agreeableness, Conscientiousness, Neuroticism, and Openness.
\begin{figure}
\centering
\includegraphics[scale=0.4]{./pics/cnn_lstm.png}
\caption{Number of Epochs vs. Mean Squared Error (MSE) for individual models during training phase \label{fig:msecompare}}
\end{figure}
\subsection{Evaluation Protocol}
The evaluation is done in terms of Mean Average Accuracy.\\
The individual personality traits Average Accuracy is calculated as,
\begin{equation}
\text{Average Accuracy}_j = \frac{1}{N} \sum_{i=1}^N(1 - |Target_{ij}-y_{ij}|)
\end{equation}
where $j = 1,\dots,5$, $N$ is the total number of videos, $Target_{ij}$ is the ground-truth value for the $i^{th}$ video and $j^{th}$ personality trait, and $y_{ij}$ is the predicted value for the $i^{th}$ video and $j^{th}$ personality trait.
The Mean Average Accuracy between the predictions and the ground-truth personality-trait values is:
\begin{equation}
\text{Mean Average Accuracy} = \frac{1}{m} \sum_{j=1}^m(\text{Average accuracy}_j)
\end{equation}
where m = 5 (the number of Personality Traits).
Note that the maximum value of the Mean Average Accuracy, as well as of the Average Accuracy, is 1, which represents the best result, and the minimum is 0, representing the worst match.
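For reference, the metric can be computed as follows; this is a direct NumPy transcription of the two formulas above.
\begin{verbatim}
import numpy as np

def mean_average_accuracy(targets, preds):
    # targets, preds: arrays of shape (N_videos, 5), values in [0, 1]
    avg_acc = 1.0 - np.abs(targets - preds).mean(axis=0)  # per trait
    return avg_acc.mean()  # mean over the 5 traits
\end{verbatim}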
\subsection{Implementation details}
Both deep learning models are implemented using the Torch \cite{collobert2011torch7} scientific computing framework. Training the 3D convolution based model takes 30 seconds per epoch, and the LSTM based model takes 3 minutes per epoch, on a GeForce GTX Titan Black graphics card. Each individual model is trained for up to one whole day. We used only the ChaLearn LAP 2016 APA dataset \cite{chalearn1stround1stimpressions} for training. The comparison of the mean squared error (MSE) of both models during training is shown in figure \ref{fig:msecompare}. The source code for both the training and the final proposed prediction method is available in a GitHub repository\footnote{See \href{https://github.com/InnovArul/first-impressions}{https://github.com/InnovArul/first-impressions} for more information.}.
\subsection{Development phase}
In the development phase of the APA2016 competition \cite{chalearn1stround1stimpressions}, only the training-set ground truths were released, and the methods were evaluated online by submitting predictions on the validation videos to a server. The best performance of our models during the development phase is shown in table \ref{validationrankings}.
\subsection{Test phase}
In the test phase of the APA2016 competition \cite{chalearn1stround1stimpressions}, the test videos were released. The test ground truths were kept secret, and the teams were invited to submit their results on the test videos. The organizers announced the final ranking after the test phase. The results are summarized in table \ref{resultstable}; the proposed LSTM model secured second place on the leader-board and is shown in bold font.
\subsection{Results and Discussion}
The performance of the CNN (3D convolution) based model and the LSTM model can be seen from the development-phase evaluation shown in table \ref{validationrankings}:
\newpage
\begin{longtable}{ l | p{3cm} | l }
\multicolumn{1}{l}{} &
\multicolumn{1}{l}{LSTM model} &
\multicolumn{1}{l}{3D conv. based model} \\ \hline
Mean Average Accuracy & \textbf{0.913355} & 0.912473 \\\hline
Extraversion & 0.914548 & 0.915650 \\\hline
Agreeableness & 0.915749 & 0.916123 \\\hline
Conscientiousness & 0.913594 & 0.908370 \\\hline
Neuroticism & 0.909814 & 0.909931 \\\hline
Openness & 0.913069 & 0.912292 \\\hline
\caption{Evaluation during the development phase of the ChaLearn LAP 2016 APA: First Impressions challenge \label{validationrankings}}
\end{longtable}
The test-phase leader-board standings are shown in table \ref{resultstable}.
\begin{longtable}{ p{2cm} | p{5cm} | p{2cm} }
\multicolumn{1}{l}{\textbf{Rank}} &
\multicolumn{1}{l}{\textbf{Team}} &
\multicolumn{1}{l}{\textbf{Accuracy}}\\\hline
1 & NJU-LAMDA & 0.912968 \\\hline
\textbf{2} & \textbf{evolgen (*LSTM model)} & \textbf{0.912063} \\\hline
3 & DCC & 0.910933 \\\hline
4 & ucas & 0.909824 \\ \hline
5 & BU-NKU & 0.909387 \\ \hline
6 & pandora & 0.906275 \\ \hline
7 & Pilab & 0.893602 \\ \hline
8 & Kaizoku & 0.882571 \\ \hline
\caption{Leader-board of the test phase of the ChaLearn LAP 2016 APA: First Impressions challenge. Our entry is shown in \textbf{bold} \label{resultstable}}
\end{longtable}%
As we can see from table \ref{validationrankings}, during the development phase the LSTM based model performs better than the 3D convolution based model. This may be due to the fact that the LSTM is able to learn temporal relationships better than the 3D convolution based approach. Moreover, the audio features were not used to model temporal relationships in the 3D convolution based model (only the face-aligned images are processed temporally), whereas the LSTM model used both audio and visual features to learn the temporal correspondences, which could also have made it perform better. For these reasons, we chose the LSTM model for the test phase: our method secured second place in the ChaLearn LAP 2016 APA challenge \cite{chalearn1stround1stimpressions}.
\section{Conclusions and Future Works}%
\label{sec:conclusions}
In this work, we proposed two deep neural network based models that use audio and visual features for the task of first impressions recognition. These networks mine the temporal patterns that exist in a sequence of frames. We also showed that such sequences can be small and selected in a stochastic manner that respects the temporal order. The proposed methods yield excellent performance on the ChaLearn LAP APA2016 challenge \cite{chalearn1stround1stimpressions}. As deep neural networks are known for their representation and feature-extraction ability, they could further be used to learn optimal representations without having to pre-process the data. Appearance and pose features could also be explored to see whether they improve on the performance obtained with the proposed audio and visual features.
\bibliographystyle{splncs}
The \emph{Schwarzian derivative} of a locally univalent analytic function $\varphi$ in a simply connected domain $\Omega$ of the complex plane $\mathbb{C}$ equals
\begin{equation}\label{eq-classicalSchwarzian}
S(\varphi)=\left(\frac{\varphi''}{\varphi'}\right)' -\frac 12\left(\frac{\varphi''}{\varphi'}\right)^2\,.
\end{equation}
The quotient $\varphi''/\varphi'$, denoted by $P(\varphi)$, is known as the \emph{pre-Schwarzian derivative} of the function $\varphi$.
\par\smallskip
A complex-valued function $f$ is said to be \emph{harmonic} in a simply connected domain $\Omega\subset \mathbb{C}$ if both $\operatorname{Re} f$ and $\operatorname{Im} f$ are real harmonic in $\Omega$. Every such $f$ can be written in the form $f=\overline g+h$, where both $g$ and $h$ are analytic in $\Omega$ (see \cite[p. 7]{Dur-Harm}). \par
It is known that the (second complex) \emph{dilatation} $\omega=g'/h'$ of a harmonic function $f=\overline{g}+h$ stores important information about $f$. For example, a non-constant harmonic mapping is \emph{orientation-preserving} if and only if $|\omega|\leq 1$ \cite[p. 8]{Dur-Harm}. Lewy \cite{Lewy} proved that a necessary and sufficient condition for $f$ to be locally univalent is that its \emph{Jacobian} $J_f=|f_z|^2-|f_{\overline z}|^2=|h'|^2-|g'|^2$ is different from $0$. Hence, a locally univalent harmonic mapping $f$ is orientation-preserving if its Jacobian is positive and \emph{orientation-reversing} if $J_f<0$. Note that the locally univalent harmonic mapping $f=\overline{g}+h$ is orientation-preserving if and only if $h$ is locally univalent and the dilatation $\omega=g'/h'$ is an analytic function bounded (in modulus) by $1$.
\par\smallskip
The \emph{harmonic Schwarzian derivative} $S_H$ of a locally univalent harmonic function $f$ with Jacobian $J_f$ was defined in \cite{HM-Schwarzian} by
\begin{equation}\label{eq-Schwarzian0}
S_H(f)=\frac{\partial}{\partial z}\left(P_H(f)\right)-\frac 12 \left(P_H(f)\right)^2\,,
\end{equation}
where $P_H(f)$ is the \emph{harmonic pre-Schwarzian derivative} of $f$, which equals
\[
P_H(f)=\frac{\partial}{\partial z} \log J_f\,.
\]
It is easy to check that $S_H(f)=S_H(\overline f)$ for any locally univalent harmonic mapping $f$ in a simply connected domain $\Omega$. Therefore, without loss of generality we can assume that $f$ is orientation-preserving. The harmonic Schwarzian derivative of the orientation-preserving harmonic mapping $f=\overline g+h$ with dilatation $\omega=g^\prime/h^\prime$ can be written as
\begin{equation}\label{eq-Schwarzian}
S_H(f)=S(h)+\frac{\overline \omega}{1-|\omega|^2}\left(\frac{h''}{h'}\,\omega'-\omega''\right)
-\frac 32\left(\frac{\omega'\,\overline
\omega}{1-|\omega|^2}\right)^2\,,
\end{equation}
where $S(h)$ is the classical Schwarzian derivative of the function $h$ defined by \eqref{eq-classicalSchwarzian}.
\par\smallskip
It is clear that if $f$ is analytic (so that its dilatation is identically zero) then $S_H(f)$ coincides with the classical Schwarzian derivative of $f$. In other words, the operator defined by \eqref{eq-Schwarzian0} -or by \eqref{eq-Schwarzian}- can be considered as a generalization of the classical Schwarzian derivative~\eqref{eq-classicalSchwarzian}. We refer the reader to \cite{HM-Schwarzian} for the motivation of the definition -related to the classical argument of approximation by M\"{o}bius transformations that seems to go back to E. Cartan \cite{Cartan} (see also \cite[p. 113]{Gardiner})- as well as for different properties that the harmonic Schwarzian derivative satisfies. Some of them -the most important ones for our purposes- will be considered in Section~\ref{ssec-properties} below.
\par\smallskip
The harmonic Schwarzian operators $P_H$ and $S_H$ defined above have proved to be useful to generalize classical results in the setting of analytic functions to the more general setting of harmonic mappings. For instance:
\par
- The classical criteria of univalence and quasi-conformal extension for analytic functions in terms of the pre-Schwarzian derivative due to Becker \cite{Becker} (see also \cite{Becker-Pom-1}) and Ahlfors \cite{Ahlfors} are generalized to those cases when the functions considered are merely harmonic in \cite{HM-qc} and \cite{HM-Schwarzian}.
\par
- In the article \cite{HM-Nehari}, the celebrated criterion of univalence in terms of the Schwarzian derivative obtained by Nehari \cite{Nehari} as well as the corresponding criterion for quasi-conformal extension due to Ahlfors and Weill \cite{AW} are generalized to the harmonic setting.
\par
- Two criteria for the bounded valence of harmonic mappings in terms of the harmonic pre-Schwarzian and Schwarzian derivatives, respectively, are obtained in \cite{Huusko-M} as a generalization of some of the results in \cite{Becker-Pom-2} and \cite{GP}.
\par
- The relationship between John disks and the pre-Schwarzian derivative analyzed in \cite{Hag-Hag} (see also \cite{Ch-O-P}) is extended to the harmonic setting in \cite{Chen-Ponn}.
\par\smallskip
These harmonic operators have recently attracted the attention of several authors. We refer the reader to the recent papers \cite{Graf, Liu-Ponn, Liu-Ponn-2}, for instance. In fact, also recently, it has been proved that the harmonic Schwarzian derivative defined above would help to solve some related problems regarding harmonic mappings in the unit disk. More concretely, a consequence of the main theorem in \cite{CHM} is that if it would be possible to show that the norm of the harmonic Schwarzian derivative of any univalent harmonic mapping in the unit disk $\mathbb D$ is bounded by $19/2$, then the order of the well-known family of suitable normalized orientation-preserving univalent harmonic functions in $\mathbb D$ would be equal to 3, as conjectured (see \cite{C-SS}, \cite{SS}, or \cite[Sec. 5.4]{Dur-Harm}).
\par\smallskip
However, there are still fundamental questions related to the harmonic Schwarzian derivative that remain unresolved. For instance, as mentioned in Nehari's paper \cite{Nehari}, a locally univalent analytic function $\varphi$ with Schwarzian derivative $S(\varphi)$ necessarily equals the quotient $u_1/u_2$, where $u_1$ and $u_2$ are linearly independent solutions of the linear differential equation
\[
u''+\frac{S(\varphi)}{2}u=0\,.
\]
\par
This readily implies that if two given functions $\varphi_1$ and $\varphi_2$ (again locally univalent and analytic) have equal Schwarzian derivative, then $\varphi_1=T\circ \varphi_2$, where $T$ is a non-constant \emph{M\"obius transformation} (also called \emph{linear fractional transformation}) of the form
\begin{equation}\label{eq-Mobius}
T(z)=\frac{az+b}{cz+d}\,,\quad z \in \mathbb C\quad \text{and}\quad ad-bc\neq 0\,.
\end{equation}
A straightforward calculation shows that if $\varphi_1=T\circ \varphi_2$ then $S(\varphi_1)=S(\varphi_2)$. Hence we obtain that $\varphi_1$ and $\varphi_2$ have equal Schwarzian derivative if and only if $\varphi_1=T\circ \varphi_2$ for some non-constant M\"obius transformation $T$.
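For completeness, we recall the computation behind this last claim. For $T$ as in \eqref{eq-Mobius} one has
\[
T'(z)=\frac{ad-bc}{(cz+d)^2}\,,\qquad \frac{T''(z)}{T'(z)}=\frac{-2c}{cz+d}\,,
\]
so that
\[
S(T)=\left(\frac{-2c}{cz+d}\right)'-\frac 12\left(\frac{-2c}{cz+d}\right)^2=\frac{2c^2}{(cz+d)^2}-\frac{2c^2}{(cz+d)^2}=0\,,
\]
and the chain rule $S(T\circ \varphi)=\left(S(T)\circ\varphi\right)\cdot(\varphi')^2+S(\varphi)$ then yields $S(T\circ\varphi_2)=S(\varphi_2)$.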
\par\smallskip
The main purpose of this paper is to characterize the relationship between two locally univalent harmonic mappings with equal harmonic Schwarzian derivative.
\par\smallskip
It sometimes happens that, in order to solve a certain problem, one needs to distinguish between those cases when the functions $f=\overline{g}+h$ considered have constant dilatation $\omega=g'/h'$ and those cases when the dilatations are not constant (see, for instance, \cite{CM}). This is precisely what occurs in our case, so we need to consider these situations separately. Notice that the dilatation of a locally univalent harmonic mapping is not constant if and only if the functions $g'$ and $h'$ in the decomposition $f=\overline g+h$ are linearly independent.
\par\smallskip
The paper is organized as follows. In Section~\ref{sec-background}, we review some of the properties of the operator $S_H$ and use them to normalize the functions considered in a suitable way. In Section~\ref{sec-constantdilatation}, we completely identify the relationship between two harmonic mappings with equal harmonic Schwarzian derivatives in those cases when the dilatation of one of the functions involved is constant. In Section~\ref{sec-main}, we treat the more difficult problem of determining the relation between two harmonic maps $f_1$ and $f_2$ with $S_H(f_1)=S_H(f_2)$ when the dilatations of both $f_1$ and $f_2$ are non-constant. Finally, in Section~\ref{sec-solution}, we state the complete solution to the problem considered.
\section{Background}\label{sec-background}
We would like to start this section with some comments related to the Schwarzian derivative operator $\mathcal S$ introduced by Chuaqui, Duren, and Osgood in \cite{Ch-D-O} for the family of harmonic functions $f=\overline g+h$ such that $\lambda_f=|h'|+|g'|\neq 0$ and whose dilatation $\omega=q^2$ for some meromorphic function $q$. The precise definition of $\mathcal S(f)$ for such harmonic mappings $f$ is
\[
\mathcal S(f)=2\left(\frac{\partial^2}{\partial z^2} \log\lambda_f -\left(\frac{\partial}{\partial z} \log\lambda_f\right)^2\right)\,.
\]
It is well-known that those harmonic mappings that satisfy the specified conditions (which are not necessarily locally univalent) can be lifted to a minimal surface \cite[Ch. 10.2]{Dur-Harm}. By exploiting this connection with differential
geometry, Chuaqui, Duren, and Osgood have obtained many interesting results on different properties for the \emph{lifts} of the given harmonic mappings in terms of $\mathcal S$: criteria for univalence (see \cite{Ch-D-O-univalence}), quasi-conformal extension \cite{Ch-D-O-qcextension}, or distortion theorems (see \cite{Ch-D-O-distortion} and also \cite{Ch-D-O-M-M-M-O}). In fact, in \cite{Ch-D-O}, using again techniques from differential geometry, the authors are able to characterize the relationship between two harmonic functions $f_1$ and $f_2$ that satisfy the specified conditions and such that $\mathcal S(f_1)=\mathcal S(f_2)$.
\par\smallskip
It is not clear to us how to apply similar differential-geometric tools to the problem considered in this article, since the dilatation of a locally univalent harmonic function $f=\overline{g}+h$ does not necessarily equal the square of a meromorphic function. Therefore, a different approach has been developed in order to determine the relationship between two locally univalent harmonic mappings with equal harmonic Schwarzian derivative $S_H$ as in \eqref{eq-Schwarzian}. In fact, it is worth stating that our approach cannot be used to provide an alternative proof of the characterization, obtained in \cite{Ch-D-O}, of the relationship between two harmonic functions $f_1$ and $f_2$ that satisfy the conditions specified above and such that $\mathcal S(f_1)=\mathcal S(f_2)$. The reason is that one of the main tools we will use is the invariance of $S_H$ under pre-composition with locally univalent \emph{affine} harmonic functions, explained in Section~\ref{ssec-properties} below, which is a property that is not shared by the harmonic Schwarzian operator $\mathcal S$.
\par\smallskip
\subsection{Some properties of the harmonic Schwarzian derivative}\label{ssec-properties}
It was shown in \cite{HM-Schwarzian} that for any given orientation-preserving harmonic mapping $f=\overline{g}+h$ with dilatation $\omega$ in a simply connected domain $\Omega$ and all $z_0\in\Omega$
\begin{equation}\label{eq-Schwarzian2}
S_H(f)(z_0)=S(h-\overline{\omega(z_0)}g)(z_0)\,.
\end{equation}
\par
A straightforward calculation shows that whenever $f$ is a locally univalent harmonic mapping and $\phi$ is an analytic function such that the composition $f\circ\phi$ is well-defined, the harmonic Schwarzian derivative of the harmonic mapping $f\circ\phi$ can be computed using the \emph{chain rule}:
\[
S_H(f\circ\phi)=(S_H(f)\circ\phi)\cdot(\phi')^2+S(\phi)\,.
\]
\par
Another important property verified by the harmonic Schwarzian derivative $S_H$ is the invariance with respect to pre-compositions with affine harmonic mappings. More specifically, consider a \emph{locally univalent affine harmonic mapping}
\[
A(z)=a\overline z+b z +c\,,
\]
where $a$, $b$, and $c$ are complex numbers with $|a|\neq |b|$. Then for any locally univalent harmonic mapping $f$ the composition $A\circ f$ is harmonic and locally univalent as well and $S_H(A\circ f)=S_H(f)$. In particular (setting $a=1$ and $b=c=0$), we have $S_H(\overline f)=S_H(f)$. That is, as was mentioned before, we can assume without loss of generality that the functions considered are orientation preserving.
\par
Let $f=\overline g+h$ be an orientation preserving mapping and let $\mu$ have modulus one. Define the \emph{anti-analytic rotation}
\begin{equation}\label{eq-antianalyticrotation}
R_\mu(f)=\mu\overline g+h\,.
\end{equation}
Notice that if $\omega$ denotes the dilatation of $f$, then the dilatation $\omega_\mu$ of $R_\mu(f)$ is $\omega_\mu=\overline\mu \omega$. Since $|\mu|=1$, the Jacobians of $f$ and $R_\mu(f)$ coincide, so it is obvious from \eqref{eq-Schwarzian0} that $S_H(R_\mu(f))=S_H(f)$.
\par\smallskip
The following result was proved in \cite{HM-Schwarzian}.
\begin{theorem}[\cite{HM-Schwarzian}, Thms. 1 and 2]\label{thm-constantdilatation} Let $f$ be a locally univalent (orientation-preserving, without loss of generality) harmonic function in a simply connected domain $\Omega$. Then, $S_H(f)$ is harmonic if and only if the dilatation of $f$ is constant. That is, if and only if $f=\alpha\overline h+h+\gamma$ for some constants $\alpha$ and $\gamma$ with $|\alpha|\neq 1$ $(|\alpha|<1)$ and some locally univalent analytic function $h$ in $\Omega$. In this case, $S_H(f)=S(h)$ so that $S_H(f)$ is in fact analytic.
\end{theorem}
\subsection{A useful normalization}\label{ssec-normalization} The properties of the Schwarzian derivative operator $S_H$ stated in the previous section allow us to make certain normalizations that will be useful to determine the relationship between two harmonic functions $f_1=\overline{g_1}+h_1$ and $f_2=\overline{g_2}+h_2$ on a simply connected domain $\Omega$ that satisfy $S_H(f_1)=S_H(f_2)$.
\par
Recall that we can assume that both harmonic mappings $f_1$ and $f_2$ preserve the orientation. Assume that $S_H(f_1)=S_H(f_2)$ is not harmonic so that, according to Theorem~\ref{thm-constantdilatation} above, the dilatations $\omega_1=g_1'/h'_1$ and $\omega_2=g'_2/h'_2$ (which turn out to be analytic functions in $\Omega$) are not identically constant.
\par
Consider any conformal mapping $\phi$ from the unit disk $\mathbb{D}$ onto $\Omega$ with $\phi(0)=z_0\in\Omega$ and use the chain rule for the harmonic Schwarzian derivative, as well as the invariance under pre-composition with affine harmonic mappings, to see that the functions $f_1\circ\phi-f_1(z_0)$ and $f_2\circ \phi-f_2(z_0)$, which are sense-preserving harmonic mappings in the unit disk fixing the origin, have equal harmonic Schwarzian derivatives; this restates the problem in the unit disk. An easy calculation shows that the dilatations of these new harmonic functions are, respectively, $\omega_1\circ\phi$ and $\omega_2\circ\phi$. In order not to burden the notation, we again use $f_1=\overline{g_1}+h_1$ and $f_2=\overline{g_2}+h_2$ to denote the new mappings, and $\omega_1$ and $\omega_2$ to denote the corresponding dilatations, which are non-constant analytic functions in $\mathbb{D}$.
\par
Since multiplication by a non-zero constant does not change the Schwarzian derivative, we can also suppose that $h'_1(0)=h'_2(0)=1$.
\par
Bearing in mind that neither of the dilatations $\omega_1$ and $\omega_2$ is constant, there exists a point $w\in\mathbb{D}$ such that both $\omega'_1(w)$ and $\omega'_2(w)$ are different from zero. Using again the chain rule and the transformation
\begin{equation}\label{eq-aff1}
f=\overline g+h \mapsto\frac{f\circ\varphi_w-f(w)}{h'(w)(1-|w|^2)}\,,
\end{equation}
where $\varphi_w$ is the automorphism of the unit disk given by
\begin{equation}\label{eq-automorphism}
\varphi_w(z)=\frac{w+z}{1+\overline w z}\,,
\end{equation}
we see that without loss of generality $w$ can be taken to be the origin.
\par
Finally, as explained in \cite[Sec. 5.1]{Dur-Harm}, we can apply \emph{invertible} affine transformations (an operation that does not change the Schwarzian derivative) to both $f_1$ and $f_2$ to get new functions with dilatations that fix the origin and satisfy $\omega'_1(0)\neq 0$ and $\omega'_2(0)\neq 0$.
\par
Summing up, these arguments show that in order to characterize the relationship between two orientation-preserving harmonic mappings $f_1$ and $f_2$ in a simply connected domain $\Omega$ having equal harmonic Schwarzian derivatives which is not a harmonic mapping, we can assume without loss of generality that $\Omega=\mathbb{D}$ and that the two harmonic mappings $f_1=\overline{g_1}+h_1$ and $f_2=\overline{g_2}+h_2$ in the unit disk with dilatations $\omega_1$ and $\omega_2$, respectively, satisfy
\begin{equation}\label{eq-normalizationh}
h_1(0)=g_1(0)=1-h'_1(0)=h_2(0)=g_2(0)=1-h'_2(0)=0\,,
\end{equation}
\begin{equation}\label{eq-normalizationw}
\omega_1(0)=\omega_2(0)=0\,,
\end{equation}
$\omega'_1(0)\neq 0$, and $\omega'_2(0)\neq 0$. Indeed, by choosing appropriate $\mu_1$ and $\mu_2$ of modulus one, we can consider the anti-analytic rotations $R_{\mu_1}$ and $R_{\mu_2}$ defined as in \eqref{eq-antianalyticrotation} and apply them to $f_1$ and $f_2$, respectively, so that the derivatives of the dilatations at the origin are not only different from zero but positive real numbers. In other words, we can suppose
\begin{equation}\label{eq-normalizationw'}
\omega'_1(0)>0\quad\text{and}\quad \omega'_2(0)> 0\,.
\end{equation}
\par
A careful reader may have noticed that, in order to obtain the normalizations mentioned, we assumed that the harmonic Schwarzian derivatives of both harmonic mappings $f_1$ and $f_2$ are not harmonic. According to Theorem~\ref{thm-constantdilatation}, this is equivalent to the fact that the (second complex) dilatations of the functions involved are not constant. Obviously, the harmonic mapping $f=\overline g + h$ has constant dilatation if and only if $g'$ and $h'$ are linearly dependent.
\par
As pointed out in the introduction, in order to solve the problem we consider, we will need to distinguish between those cases when the functions involved have constant dilatation (so that the harmonic Schwarzian derivatives are harmonic) and those cases when the dilatations are not constant. The easier case when $S_H(f_1)=S_H(f_2)$ is a harmonic mapping will be treated in the next section.
\section{The linearly dependent case}\label{sec-constantdilatation}
We now solve the problem of characterizing the relationship between two locally univalent harmonic mappings $f_1$ and $f_2$ for which $S_H(f_1)=S_H(f_2)$ is a harmonic function. Recall that we can assume, without loss of generality, that both functions are orientation-preserving, after taking complex conjugates if needed. Recall also that those harmonic mappings whose harmonic Schwarzian derivative is a harmonic function are completely described in Theorem~\ref{thm-constantdilatation}.
\par
\begin{theorem}\label{thm-constdilat1}
Let $f_1$ and $f_2$ be two orientation-preserving harmonic mappings in a simply connected domain $\Omega\subset\mathbb{C}$. Suppose that $S_H(f_1)=S_H(f_2)$ is a harmonic function, so that $f_1=\alpha_1\overline{h_1}+h_1+\gamma_1$, where $h_1$ is an analytic function in $\Omega$, $\alpha_1\in\mathbb{D}$, and $\gamma_1\in\mathbb{C}$. Then there exist a M\"{o}bius transformation $T$ as in \eqref{eq-Mobius} and two complex numbers $\alpha_2\in\mathbb{D}$ and $\gamma_2\in\mathbb{C}$ such that
\[
f_2=\alpha_2\ \overline{T\circ h_1}+T\circ h_1+\gamma_2\,.
\]
\end{theorem}
\begin{pf}
The identity $S_H(f_1)=S_H(f_2)$, the fact that these functions are harmonic, and Theorem~\ref{thm-constantdilatation} give $f_2=\alpha_2\overline{h_2}+h_2+\gamma_2$ for certain $\alpha_2\in\mathbb{D}$ and $\gamma_2\in\mathbb{C}$. Moreover, $S_H(f_1)=S(h_1)$ and $S_H(f_2)=S(h_2)$. Therefore, there exists $T$ as in \eqref{eq-Mobius} such that $h_2=T\circ h_1$. This proves the theorem.
\end{pf}
\par\smallskip
The remaining part of this paper will be devoted to solve the more difficult problem of determining the relation between two harmonic mappings $f_1$ and $f_2$ for which $S_H(f_1)=S_H(f_2)$ is \emph{not} a harmonic function.
\section{The case of linear independence}\label{sec-main}
In order to solve the problem of determining the relationship between two locally univalent harmonic mappings $f_1=\overline{g_1}+h_1$ and $f_2=\overline{g_2}+h_2$ in a simply connected domain $\Omega\subset\mathbb{C}$ with non-constant dilatations $\omega_1=g'_1/h'_1$ and $\omega_2=g'_2/h'_2$, respectively (so that $S_H(f_1)$ and $S_H(f_2)$ are not harmonic), and such that $S_H(f_1)=S_H(f_2)$ we can argue as in Section~\ref{ssec-normalization} to obtain that after applying a series of affine transformations, anti-analytic rotations, and a composition with a Riemann map from the unit disk onto $\Omega$, we can assume without loss of generality that $\Omega=\mathbb{D}$ and the normalizations \eqref{eq-normalizationh}, \eqref{eq-normalizationw}, and \eqref{eq-normalizationw'} are satisfied by both $f_1$ and $f_2$.
\subsection{Towards the solution of the problem}\label{ssec-equations}
We now prove one of the key results in this paper.
\begin{prop}\label{prop-schw h}
Let $f_1=\overline{g_1}+h_1$ and $f_2=\overline{g_2}+h_2$ be orientation-preserving harmonic mappings in the unit disk with non-constant dilatations $\omega_1$ and $\omega_2$, respectively. Assume that both $f_1$ and $f_2$ are normalized as in \eqref{eq-normalizationh}, \eqref{eq-normalizationw}, and \eqref{eq-normalizationw'}. Suppose that $S_H(f_1)=S_H(f_2)$. Then the following assertions hold.
\begin{itemize}
\item[i)] $S(h_1)=S(h_2)$.
\item[ii)] For all $z\in\mathbb{D}$,
\begin{equation}\label{eq-prop-aux1}
\omega'_1(0) \left(\frac{h''_1(z)}{h'_1(z)}\, \omega'_1(z)-\omega''_1(z)\right)=\omega'_2(0) \left(\frac{h''_2(z)}{h'_2(z)}\, \omega'_2(z)-\omega''_2(z)\right)\,.
\end{equation}
\item[iii)] The identity
\begin{align}\label{eq-prop-aux2}
\nonumber &\overline{\left(\frac{h''_1(0)}{h'_1(0)}\, \omega'_1(0) -\omega''_1(0)\right)} \omega_1(z)-\frac 32 (\omega'_1(0))^2\, (\omega_1(z))^2\\
& \quad \quad \quad =\overline{\left(\frac{h''_2(0)}{h'_2(0)}\, \omega'_2(0)-\omega''_2(0)\right)} \omega_2(z)-\frac 32 (\omega'_2(0))^2\, (\omega_2(z))^2
\end{align}
holds for all $z$ in the unit disk.
\end{itemize}
\end{prop}
\begin{pf}
Since the functions $h_1$ and $h_2$ are locally univalent mappings in the unit disk, the (classical) Schwarzian derivatives $S(h_1)$ and $S(h_2)$ of these functions are analytic in $\mathbb{D}$. Hence, to show that $S(h_1)=S(h_2)$ it suffices to see that the Taylor coefficients of $S(h_1)$ and $S(h_2)$ are equal.
\par
The condition that $S_H(f_1)=S_H(f_2)$ is, by \eqref{eq-Schwarzian}, equivalent to
\begin{align}\label{eq-prop-equalSh}
\nonumber S(h_1)& +\frac{\overline{\omega_1}}{1-|\omega_1|^2}\left(\frac{h''_1}{h'_1}\, \omega'_1-\omega''_1\right)-\frac 32 \left(\frac{\omega'_1\overline{\omega_1}}{1-|\omega_1|^2}\right)^2\\
&=S(h_2) +\frac{\overline{\omega_2}}{1-|\omega_2|^2}\left(\frac{h''_2}{h'_2}\, \omega'_2-\omega''_2\right)-\frac 32 \left(\frac{\omega'_2\overline{\omega_2}}{1-|\omega_2|^2}\right)^2\,,
\end{align}
which implies that $S(h_1)(0)=S(h_2)(0)$ since the dilatations $\omega_1$ and $\omega_2$ of $f_1$ and $f_2$, respectively, fix the origin.
\par
A straightforward calculation shows
\begin{align*}
\frac{\partial S_H(f_1)}{\partial z}&=(S(h_1))'+\overline{\omega_1}\frac{\partial}{\partial z}\left[\frac{1}{1-|\omega_1|^2}\left(\frac{h''_1}{h'_1}\omega'_1-\omega''_1\right)\right]\\
&-\frac32\overline{\omega_1}^{\,2}\frac{\partial }{\partial z}\left[\left(\frac{\omega'_1}{1-|\omega_1|^2}\right)^2\right]\,.
\end{align*}
Thus we obtain
\[
\frac{\partial S_H(f_1)}{\partial z}(0)=(S(h_1))'(0)\,.
\]
\par
Similarly, we also have that for any integer $n\geq 2$,
\begin{align*}
\frac{\partial^n S_H(f_1)}{\partial z^n}&=(S(h_1))^{(n)}+\overline\omega_1\frac{\partial^n}{\partial z^n}\left[\frac{1}{1-|\omega_1|^2}\left(\frac{h''_1}{h'_1}\omega'_1-\omega''_1\right)\right]\\
&-\frac32\overline{\omega_1}^{\,2}\frac{\partial^n }{\partial z^n}\left[\left(\frac{\omega'_1}{1-|\omega_1|^2}\right)^2\right]\,.
\end{align*}
Therefore,
\[
\frac{\partial^n S_H(f_1)}{\partial z^n}(0)=(S(h_1))^{(n)}(0)
\]
for all $n\geq 1$.
\par
By repeating the same procedure (considering now the function $f_2$ instead of $f_1$), we obtain
\[
\frac{\partial^n S_H(f_2)}{\partial z^n}(0)=(S(h_2))^{(n)}(0)\quad \text{for all}\quad n\geq 1\,.
\]
\par
Since we are assuming that $S_H(f_1)=S_H(f_2)$, we get the identities
\[
(S(h_1))^{(n)}(0)=(S(h_2))^{(n)}(0)
\]
for all positive integer $n$. This proves i).
\par\smallskip
To prove \eqref{eq-prop-aux1}, we are going to show that the Taylor coefficients of the (analytic) functions
\[
\omega'_1(0)\left(\frac{h''_1}{h'_1}\omega'_1-\omega''_1\right)\quad\text{and}\quad \omega'_2(0)\left(\frac{h''_2}{h'_2}\omega'_2-\omega''_2\right)
\]
coincide.
\par
Using that $S(h_1)=S(h_2)$ and \eqref{eq-prop-equalSh}, we have
\begin{align}\label{eq-prop-equalSh2}
\nonumber \frac{\overline \omega_1}{1-|\omega_1|^2}& \left(\frac{h''_1}{h'_1}\,\omega'_1-\omega''_1\right)
-\frac 32\left(\frac{\omega'_1\,\overline
\omega_1}{1-|\omega_1|^2}\right)^2\\
& =\frac{\overline \omega_2}{1-|\omega_2|^2} \left(\frac{h''_2}{h'_2}\,\omega'_2-\omega''_2\right)
-\frac 32\left(\frac{\omega'_2\,\overline\omega_2}{1-|\omega_2|^2}\right)^2\,.
\end{align}
Taking derivatives with respect to $\overline z$ in both sides of the previous equation, we get
\begin{align}
& \nonumber \frac{\overline{\omega'_1}}{(1-|\omega_1|^2)^2} \left(\frac{h''_1}{h'_1}\,\omega'_1-\omega''_1\right)
-\overline{\omega_1} \left(\, \frac{3\omega'_1|\omega'_1|^2}{(1-|\omega_1|^2)^3}\right)\\
& \label{eq-Lemma2 1} =\frac{\overline{\omega'_2}}{(1-|\omega_2|^2)^2} \left(\frac{h''_2}{h'_2}\,\omega'_2-\omega''_2\right)
-\overline{\omega_2} \left(\, \frac{3\omega'_2|\omega'_2|^2}{(1-|\omega_2|^2)^3}\right)\,.
\end{align}
This implies, evaluating at $z=0$ and using also \eqref{eq-normalizationw} and \eqref{eq-normalizationw'},
\[
\omega'_1(0)\left(\frac{h''_1}{h'_1}\omega'_1-\omega''_1\right)(0)= \omega'_2(0)\left(\frac{h''_2}{h'_2}\omega'_2-\omega''_2\right)(0)\,.
\]
Now note that
\begin{equation*}
\frac{\partial}{\partial z} \left(\frac{\overline{\omega'}}{(1-|\omega|^2)^2}\right)= \overline \omega \left(\frac{2|\omega'|^2}{(1-|\omega|^2)^3}\right)\,.
\end{equation*}
Hence, taking the derivatives with respect to $z$ of the functions in \eqref{eq-Lemma2 1} gives
\begin{align}
\nonumber \frac{\overline{\omega'_1}}{(1-|\omega_1|^2)^2} & \left(\frac{h''_1}{h'_1}\,\omega'_1-\omega''_1\right)'
-\overline{\omega_1} \varphi_1\\ &=
\label{eq-lemma 2 3}\frac{\overline{\omega'_2}}{(1-|\omega_2|^2)^2} \left(\frac{h''_2}{h'_2}\,\omega'_2-\omega''_2\right)'
-\overline{\omega_2} \varphi_2
\end{align}
for appropriate smooth functions $\varphi_1$ and $\varphi_2$. This gives (since $\omega_1(0)=\omega_2(0)=0$)
\[
\omega'_1(0)\left(\frac{h''_1}{h'_1}\omega'_1-\omega''_1\right)'(0)= \omega'_2(0)\left(\frac{h''_2}{h'_2}\omega'_2-\omega''_2\right)'(0)\,.
\]
By taking successive derivatives with respect to $z$ in \eqref{eq-lemma 2 3} and evaluating at the origin, we get the desired result.
\par
To check \eqref{eq-prop-aux2}, let us introduce the notation
\[
\Phi_1=\frac{h''_1}{h'_1}\omega'_1-\omega''_1\quad\text{and}\quad \Phi_2=\frac{h''_2}{h'_2}\omega'_2-\omega''_2\,,
\]
so that, after taking complex conjugates, \eqref{eq-prop-equalSh2} becomes
\[
\frac{\overline{\Phi_1}}{1-|\omega_1|^2}\omega_1-\frac 32 \left(\frac{\overline{\omega'_1}}{1-|\omega_1|^2}\right)^2(\omega_1)^2
=\frac{\overline{\Phi_2}}{1-|\omega_2|^2}\omega_2-\frac 32 \left(\frac{\overline{\omega'_2}}{1-|\omega_2|^2}\right)^2(\omega_2)^2\,.
\]
Taking derivatives with respect to $z$ of the functions in the previous identity and using that
\[
\frac{\partial}{\partial z} \frac{1}{1-|\omega|^2}=\overline{\omega}\frac{\omega'}{(1-|\omega|^2)^2}\,,
\]
we obtain
\begin{align*}
\frac{\overline{\Phi_1}}{1-|\omega_1|^2}\omega'_1
&-\frac 32 \left(\frac{\overline{\omega'_1}}{1-|\omega_1|^2}\right)^2\left((\omega_1)^2\right)'+\overline{\omega_1} \psi_1 \\ &=\frac{\overline{\Phi_2}}{1-|\omega_2|^2}\omega'_2
-\frac 32 \left(\frac{\overline{\omega'_2}}{1-|\omega_2|^2}\right)^2\left((\omega_2)^2\right)'+\overline{\omega_2} \psi_2
\end{align*}
for appropriate smooth functions $\psi_1$ and $\psi_2$. Therefore, in view of the previous identity, we can argue as above (calculating successive derivatives with respect to $z$ and evaluating at the origin) to show that the analytic functions in \eqref{eq-prop-aux2} have equal Taylor coefficients, so that they are equal. We omit the details.
\end{pf}
Some remarks are now in order. Notice that the condition $S(h_1)=S(h_2)$, in addition to the normalizations \eqref{eq-normalizationh}, shows that there exists a constant $a_0$, say, such that
\begin{equation}\label{eq-afterprop h}
h_1(z)=\frac{h_2(z)}{1+a_0h_2(z)}\,,\quad z\in\mathbb{D}\,.
\end{equation}
A straightforward calculation shows that $a_0=(h''_2(0)-h''_1(0))/2$.
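Indeed, differentiating \eqref{eq-afterprop h} twice gives
\[
h'_1=\frac{h'_2}{(1+a_0h_2)^2}\quad\text{and}\quad h''_1=\frac{h''_2}{(1+a_0h_2)^2}-\frac{2a_0(h'_2)^2}{(1+a_0h_2)^3}\,,
\]
and evaluating at the origin, where $h_2(0)=0$ and $h'_2(0)=1$, yields $h''_1(0)=h''_2(0)-2a_0$.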
\par\smallskip
We have not been able to use the information contained in the equations \eqref{eq-prop-aux1}, \eqref{eq-prop-aux2}, and \eqref{eq-afterprop h} to solve our problem directly. However, under one single extra assumption on the values of the derivatives of the dilatations $\omega_1$ and $\omega_2$ at the origin, it is possible to identify the relationship between the harmonic mappings $f_1$ and $f_2$ (normalized as in the previous proposition) such that $S_H(f_1)=S_H(f_2)$.
\begin{prop}\label{prop-equalfucn}
Assume that the harmonic mappings $f_1=\overline{g_1}+h_1$ and $f_2=\overline{g_2}+h_2$ satisfy the hypotheses in Proposition~\ref{prop-schw h}. If, in addition, $\omega'_1(0)=\omega'_2(0)$, then $f_1=f_2$.
\end{prop}
\begin{pf}
The hypotheses show that, by \eqref{eq-prop-aux1},
\begin{equation*}
\frac{h''_1}{h'_1}\, \omega'_1-\omega''_1=\frac{h''_2}{h'_2}\, \omega'_2-\omega''_2
\end{equation*}
and hence
\begin{equation*}
\frac{h''_1(0)}{h'_1(0)}\, \omega'_1(0)-\omega''_1(0)=\frac{h''_2(0)}{h'_2(0)}\, \omega'_2(0)-\omega''_2(0)\,.
\end{equation*}
Therefore, from \eqref{eq-prop-aux2} we get
\[
\overline{\left(\frac{h''_1(0)}{h'_1(0)}\, \omega'_1(0)-\omega''_1(0)\right)} \left(\omega_1-\omega_2\right)=\frac 32 (\omega'_1(0))^2 \left((\omega_1)^2-(\omega_2)^2\right)
\]
and we conclude that either $\omega_1=\omega_2$ or
\[
\omega_1+\omega_2=\frac{2}{3(\omega'_1(0))^2}\overline{\left(\frac{h''_1(0)}{h'_1(0)}\, \omega'_1(0)-\omega''_1(0)\right)} \,.
\]
The latter case gives
\[
\omega'_1(0)=-\omega'_2(0)\,,
\]
which is not possible since we are assuming that the normalization \eqref{eq-normalizationw'} holds, so that $\omega'_1(0)$ and $\omega'_2(0)$ are both positive real numbers. Hence, $\omega_1\equiv\omega_2$ and, by \eqref{eq-prop-aux1}, we conclude $h''_1(0)=h''_2(0)$. This shows, by \eqref{eq-afterprop h}, that $h_1=h_2$. Finally, we have also
\[
g'_1=\omega_1 h'_1=\omega_2 h'_2=g'_2\,,
\]
and $g_1(0)=g_2(0)$. This gives $g_1=g_2$, and we get the desired identity $f_1=f_2$.
\end{pf}
\par\smallskip
From now on, all our efforts will be devoted to showing that if $f_1$ and $f_2$ are as in Proposition~\ref{prop-schw h}, then the identity $\omega'_1(0)=\omega'_2(0)$ holds.
\subsection{An equivalent formulation of the problem}\label{ssec-equivalentformulation}
Given an orien\-ta\-tion-preserving harmonic mapping $f=\overline{g}+h$ in the unit disk with dilatation $\omega$ and normalized as in \eqref{eq-normalizationh}, \eqref{eq-normalizationw}, and \eqref{eq-normalizationw'}, and $w\in\mathbb{D}$, we can consider the transformation defined by \eqref{eq-aff1} to get a new function
\[
\widehat{F_w}=\overline{\widehat{G_w}}+\widehat{H_w}=\overline{\left(\frac{g\circ\varphi_w-g(w)}{\overline{h'(w)}(1-|w|^2)}\right)}+\frac{h\circ\varphi_w-h(w)}{h'(w)(1-|w|^2)}\,,
\]
where $\varphi_w$ is the automorphism of the unit disk given by \eqref{eq-automorphism}.
\par
The anti-analytic rotation $R_\mu(\widehat{F_w})=F_w$ (defined as in \eqref{eq-antianalyticrotation}) with $\mu=h'(w)/\overline{h'(w)}$ gives
\[
F_w=H_w+\overline{G_w}=\frac{h\circ\varphi_w-h(w)}{h'(w)(1-|w|^2)}+\overline{\left(\frac{g\circ\varphi_w-g(w)}{h'(w)(1-|w|^2)}\right)}\,.
\]
This new function $F_w$ satisfies the normalization \eqref{eq-normalizationh}. However, the dilatation $\alpha_w$ of $F_w$ equals $\alpha_w=\omega\circ\varphi_w$, which does not necessarily fix the origin, so that \eqref{eq-normalizationw} might not be satisfied. In this case we can consider the affine mapping $A(z)=(z-\overline{\alpha_w(0)\, z})/(1-|\alpha_w(0)|^2)$ and the composition $A\circ F_w$ to get the function $\widehat{f_w}=\overline{\widehat{g_w}}+\widehat{h_w}$ with dilatation $\widehat{\omega_w}$, where
\[
\widehat{h_w}=\frac{1}{1-|\omega(w)|^2} \left(\frac{h\circ\varphi_w-h(w)}{h'(w)(1-|w|^2)}-\overline{\omega(w)} \left(\frac{g\circ\varphi_w-g(w)}{h'(w)(1-|w|^2)}\right) \right)
\]
and $\widehat{\omega_w}=\varphi_{-\omega(w)}\circ\omega\circ\varphi_w$.
\par
Note that $\widehat{\omega_w}(0)=0$, so that $\widehat{f_w}$ satisfies both \eqref{eq-normalizationh} and \eqref{eq-normalizationw}. The further transformation $f_w=R_{\mu_w}(\widehat{f_w})$, where $\mu_w=\widehat{\omega_w}'(0)/|\widehat{\omega_w}'(0)|$, produces a function of the form
\[
f_w=\overline{g_w}+h_w\,,
\]
with $h_w=\widehat{h_w}$, normalized as in \eqref{eq-normalizationh}, \eqref{eq-normalizationw}, and \eqref{eq-normalizationw'}, and (by the chain rule and the invariance of the operator $S_H$ under affine transformations and anti-analytic rotations) such that
\begin{equation}\label{eq-identitySh}
S_H(f_w)(z)=S_H(f)(\varphi_w(z))(\varphi'_w(z))^2\,,\quad z\in\mathbb{D}\,.
\end{equation}
Note that for all such $z$,
\begin{align}\label{eq-schwarzianh}
\nonumber S(h_w)(z)&=S\left(h\circ\varphi_w-\overline{\omega(w)}(g\circ\varphi_w)\right)(z)
\\&=S(h-\overline{\omega(w)}g)(\varphi_w(z))\cdot (\varphi'_w(z))^2\,.
\end{align}
\par\medskip
We can now state our next theorem.
\begin{theorem}\label{thm-mainaux}
Let $f_1=\overline{g_1}+h_1$ and $f_2=\overline{g_2}+h_2$ be orientation-preserving harmonic mappings in the unit disk with non-constant dilatations $\omega_1=g'_1/h'_1$ and $\omega_2=g'_2/h'_2$, respectively. Assume that both $f_1$ and $f_2$ are normalized as in \eqref{eq-normalizationh}, \eqref{eq-normalizationw}, and \eqref{eq-normalizationw'}. Then $S_H(f_1)=S_H(f_2)$ if and only if for all $w$ in the unit disk
\begin{equation}\label{eq-thm-schwarzianh1}
S(h_1-\overline{\omega_1(w)}g_1)=S(h_2-\overline{\omega_2(w)}g_2)\,.
\end{equation}
\end{theorem}
\begin{pf}
Suppose first that \eqref{eq-thm-schwarzianh1} holds. Then, in particular, we have that for all $w\in\mathbb{D}$
\[
S(h_1-\overline{\omega_1(w)}g_1)(w)=S(h_2-\overline{\omega_2(w)}g_2)(w)\,,
\]
which readily gives, by \eqref{eq-Schwarzian2}, $S_H(f_1)=S_H(f_2)$.
\par\smallskip
To prove the necessity of the assertion in the theorem, we argue as before to produce, for each $w$ in the unit disk, the functions $(f_1)_w=\overline{(g_1)_w}+(h_1)_w$ and $(f_2)_w=\overline{(g_2)_w}+(h_2)_w$, which satisfy \eqref{eq-normalizationh}, \eqref{eq-normalizationw}, and \eqref{eq-normalizationw'}. Moreover, due to the identity \eqref{eq-identitySh} applied, respectively, to $(f_1)_w$ and $(f_2)_w$, and since $S_H(f_1)=S_H(f_2)$, we obtain, for all $z\in\mathbb{D}$,
\begin{align*}
S_H((f_1)_w)(z)& =S_H(f_1)(\varphi_w(z)) (\varphi'_w(z))^2\\
& =S_H(f_2)(\varphi_w(z)) (\varphi'_w(z))^2=S_H((f_2)_w)(z)
\end{align*}
and hence, by Proposition~\ref{prop-schw h} i), we have $S((h_1)_w)=S((h_2)_w)$. In other words, according to \eqref{eq-schwarzianh}, we have
\begin{equation*}
S(h_1-\overline{\omega_1(w)}g_1)(\varphi_w(z))\cdot (\varphi'_w(z))^2=S(h_2-\overline{\omega_2(w)}g_2)(\varphi_w(z))\cdot (\varphi'_w(z))^2\,,
\end{equation*}
which is equivalent to \eqref{eq-thm-schwarzianh1} since $\varphi_w$ is an automorphism of $\mathbb{D}$.
\end{pf}
\par\smallskip
The following corollary will provide us with another important tool to get the solution to the problem considered in this section.
\begin{cor}
Let $f_1$ and $f_2$ satisfy the hypotheses in the previous theorem. If we assume that $S_H(f_1) = S_H(f_2)$, then the identity
\begin{equation}\label{eq-thm-schwarzianh2}
h_1(z)-\overline{\omega_1(z)}g_1(z)=\frac{h_2(z)-\overline{\omega_2(z)}g_2(z)}{1+\left(a_0+\delta(z)\right)\left(h_2(z)-\overline{\omega_2(z)}g_2(z)\right)}\,,
\end{equation}
where
\begin{equation}\label{eq-delta}
a_0=\frac{h''_2(0)-h''_1(0)}{2}\quad\text{and}\quad \delta(z)=\frac{\omega'_1(0)}{2}\overline{\omega_1(z)}-\frac{\omega'_2(0)}{2}\overline{\omega_2(z)}\,,
\end{equation}
holds for all $z$ in $\mathbb{D}$.
\end{cor}
\begin{pf}
According to the previous theorem, for each $w\in\mathbb{D}$, the analytic functions $\gamma_1=h_1-\overline{\omega_1(w)}g_1$ and $\gamma_2=h_2-\overline{\omega_2(w)}g_2$ have equal (analytic) Schwarzian derivative. Therefore, as mentioned in the introduction, there exists a M\"{o}bius transformation $T_w$ as in \eqref{eq-Mobius} (whose coefficients can depend on $w$) such that $\gamma_1=T_w\circ\gamma_2$. But since $\gamma_1(0)=\gamma_2(0)=0$ and $\gamma'_1(0)=\gamma'_2(0)=1$, we have that, necessarily,
\[
T_w(z)=\frac{z}{1+a_w z}\,.
\]
That is,
\[
h_1(z)-\overline{\omega_1(w)}g_1(z)=\frac{h_2(z)-\overline{\omega_2(w)}g_2(z)}{1+a_w\left(h_2(z)-\overline{\omega_2(w)}g_2(z)\right)}\,.
\]
Taking two successive derivatives with respect to $z$ in both sides of the previous equation, evaluating at $z=0$, and bearing in mind that \eqref{eq-normalizationw} and \eqref{eq-normalizationw'} are satisfied by both $\omega_1$ and $\omega_2$, we get
\[
a_w= \frac{h''_2(0)-h''_1(0)}{2}+\frac{\omega'_1(0)}{2}\overline{\omega_1(w)}-\frac{\omega'_2(0)}{2}\overline{\omega_2(w)}\,.
\]
Setting $w=z$, we obtain the desired result.
\end{pf}
\subsection{Important lemmas}\label{ssec-prel} In order to simplify the exposition, let us agree with the following notation already used in the proof of Proposition~\ref{prop-schw h}. We use $\Phi_1$ and $\Phi_2$ to denote the analytic functions in the unit disk defined by
\begin{equation}\label{eq-notation}
\Phi_1= \frac{h''_1}{h'_1}\, \omega'_1-\omega''_1\quad\text{and}\quad \Phi_2= \frac{h''_2}{h'_2}\, \omega'_2-\omega''_2\,,
\end{equation}
so that \eqref{eq-prop-aux1} and \eqref{eq-prop-aux2} become
\begin{equation}\label{eq-afterprop-aux1}
\omega'_1(0) \Phi_1=\omega'_2(0) \Phi_2
\end{equation}
and
\begin{equation}\label{eq-afterprop-aux2}
\overline{\Phi_1(0)} \omega_1-\frac 32 (\omega'_1(0))^2\, (\omega_1)^2=\overline{\Phi_2(0)} \omega_2-\frac 32 (\omega'_2(0))^2\, (\omega_2)^2\,,
\end{equation}
respectively.
\par\smallskip
\begin{lem}\label{lem-Phi}
Let $f_1=\overline{g_1}+h_1$ and $f_2=\overline{g_2}+h_2$ be orientation-preserving harmonic mappings in the unit disk with non-constant dilatations $\omega_1$ and $\omega_2$, respectively. Assume that both $f_1$ and $f_2$ are normalized as in \eqref{eq-normalizationh}, \eqref{eq-normalizationw}, and \eqref{eq-normalizationw'}. Suppose that $S_H(f_1)=S_H(f_2)$. If either $\Phi_1(0)=0$ or $\Phi_2(0)=0$, where $\Phi_1$ and $\Phi_2$ are as in \eqref{eq-notation}, then $\omega'_1(0)=\omega'_2(0)$.
\end{lem}
\begin{pf}
If $\Phi_1(0)=0$, by \eqref{eq-afterprop-aux1} and the normalization \eqref{eq-normalizationw'}, $\Phi_2(0)=0$ as well, so that \eqref{eq-afterprop-aux2} becomes
\[
(\omega'_1(0))^2\, (\omega_1)^2=(\omega'_2(0))^2\, (\omega_2)^2\,.
\]
By taking two successive derivatives on both sides of the previous equation and evaluating at the origin, we get the identity $\omega'_1(0)=\omega'_2(0)$. It is obvious that the same result holds if we suppose $\Phi_2(0)=0$.
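In detail: since $\omega_1(0)=\omega_2(0)=0$, the second derivative of the previous identity at the origin reads
\[
2(\omega'_1(0))^4=2(\omega'_2(0))^4\,,
\]
and, as \eqref{eq-normalizationw'} gives $\omega'_1(0)>0$ and $\omega'_2(0)>0$ (a fact also used in the proof of Theorem~\ref{thm-main} below), this forces $\omega'_1(0)=\omega'_2(0)$.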
\end{pf}
\par\smallskip
The next result will also be used later.
\begin{lem}\label{lem-dilatations}
Let $\omega_1$ and $\omega_2$ be the dilatations of two harmonic functions $f_1$ and $f_2$ as in the previous lemma. Assume that one of the following relations holds in some open set $\Delta$ contained in the unit disk, for certain constants $k, l,$ and $m$ in $\mathbb{C}$.
\begin{itemize}
\item[i)] $\omega_2=k\omega_1\,.$
\item[ii)] $\omega_1\omega_2=k(\omega_2)^2+l\omega_2+m\omega_1\,.$
\end{itemize}
Then $\omega'_1(0)=\omega'_2(0)$.
\par\smallskip
Moreover, such functions $\omega_1$ and $\omega_2$ cannot satisfy a relation of the form
\begin{equation}\label{eq-lemdilatfinal}
n(\omega_2)^2=k\omega_2+l\omega_1+m\,,
\end{equation}
unless $n=0$.
\end{lem}
\begin{pf}
Notice that, by the uniqueness principle for analytic functions, if any of the relations considered in this lemma holds in $\Delta$, it indeed holds in the whole unit disk.
\par
Assume first that i) holds. Then, necessarily, $k\neq 0$ since otherwise \eqref{eq-normalizationw'} would not be satisfied by $\omega_2$. In fact, by taking derivatives on both sides of the equation and evaluating at the origin, we obtain $k=\omega'_2(0)/\omega'_1(0)$. Hence, the substitution $\omega_2=k\omega_1$ in \eqref{eq-afterprop-aux2} and \eqref{eq-afterprop-aux1} gives
\begin{equation*}
(\omega'_1(0))^2 (\omega_1)^2 =\frac{(\omega'_2(0))^4}{(\omega'_1(0))^2} (\omega_1)^2\,,
\end{equation*}
which implies (taking two successive derivatives and evaluating at the origin) $\omega'_1(0)=\omega'_2(0)$.
\par
Let us suppose now that for some constants $k, l,$ and $m$,
\begin{equation}\label{eq-ii.}
\omega_1\omega_2=k(\omega_2)^2+l\omega_2+m\omega_1\,.
\end{equation}
This gives, in particular (after taking derivatives with respect to $z$ and using the normalizations \eqref{eq-normalizationw}), that $0=l\omega'_2(0)+m\omega'_1(0)$. If $l=m=0$, we get $\omega_1=k\omega_2$, which implies $\omega'_1(0)=\omega'_2(0)$ by i). If exactly one of $l$ and $m$ is zero, the identity $0=l\omega'_2(0)+m\omega'_1(0)$ forces either $\omega'_1(0)=0$ or $\omega'_2(0)=0$, which is in contradiction with \eqref{eq-normalizationw'}. So we can assume that both $l$ and $m$ are different from zero and, in fact, we obtain
\begin{equation}\label{eq-L}
l=-m\omega'_1(0)/\omega'_2(0)\,.
\end{equation}
Also, we can re-write \eqref{eq-ii.} as
\[
\omega_1=\frac{k(\omega_2)^2+l\omega_2}{\omega_2-m}
\]
and substitute this expression in \eqref{eq-afterprop-aux2} to get (after multiplying by $(\omega_2-m)^2$, re-arranging the resulting terms, using the fact that $l=-m\omega'_1(0)/\omega'_2(0)$, and \eqref{eq-afterprop-aux1})
\begin{align*}
\frac 32&\left(k^2(\omega'_1(0))^2-(\omega'_2(0))^2\right)(\omega_2)^2\\
&+\left(\overline{\Phi_2(0)}+3m(\omega'_2(0))^2-k\overline{\Phi_1(0)}+3kl(\omega'_1(0))^2\right)\omega_2\\
& +\left(\frac 32 l^2(\omega'_1(0))^2+(mk-l)\overline{\Phi_1(0)}-2m\overline{\Phi_2(0)}-\frac 32 m^2(\omega'_2(0))^2\right)\equiv 0\,.
\end{align*}
Hence, since $\omega_2$ is non-constant (so that $\omega_2(\mathbb{D})$ is an open set), the coefficients in the previous equation must be identically zero. That is,
\begin{equation}\label{eq-K}
k^2=\left(\frac{\omega'_2(0)}{\omega'_1(0)}\right)^2\,,
\end{equation}
\begin{align}\label{eq-aux1}
\nonumber k\overline{\Phi_1(0)}-\overline{\Phi_2(0)}&=3m(\omega'_2(0))^2+3kl(\omega'_1(0))^2\\
&= 3m(\omega'_2(0))^2-3km\frac{(\omega'_1(0))^3}{\omega'_2(0)}
\end{align}
(by \eqref{eq-L}), and, using again \eqref{eq-L} and also \eqref{eq-afterprop-aux1},
\begin{equation}\label{eq-aux22}
k\overline{\Phi_1(0)}-\overline{\Phi_2(0)}=\frac 32 m \frac{(\omega'_2(0))^4-(\omega'_1(0))^4}{(\omega'_2(0))^2}\,.
\end{equation}
A combination of \eqref{eq-aux1} and \eqref{eq-aux22} gives the identity
\[
4k^2(\omega'_1(0))^6 (\omega'_2(0))^2=(\omega'_1(0))^8+(\omega'_2(0))^8+2(\omega'_1(0))^4(\omega'_2(0))^4\,,
\]
so that, in view of \eqref{eq-K}, we finally get $\omega'_1(0)=\omega'_2(0)$.
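For the reader's convenience, here is this elimination step in more detail: equating the right-hand sides of \eqref{eq-aux1} and \eqref{eq-aux22}, cancelling the factor $m\neq 0$, and multiplying by $(\omega'_2(0))^2$ gives
\[
2k(\omega'_1(0))^3\omega'_2(0)=(\omega'_1(0))^4+(\omega'_2(0))^4\,,
\]
and squaring this identity produces the displayed equation; substituting $k^2=(\omega'_2(0)/\omega'_1(0))^2$ from \eqref{eq-K} then yields $\left((\omega'_1(0))^4-(\omega'_2(0))^4\right)^2=0$.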
\par\smallskip
Finally, to prove the last assertion of the lemma, suppose that \eqref{eq-lemdilatfinal} is satisfied for some $n\neq 0$. We can then assume, without loss of generality, that $n=1$ to get a relation of the form
\[
(\omega_2)^2=k\omega_2+l\omega_1+m\,,\quad k, l, m\in\mathbb{C}\,.
\]
Then, on the one hand, $m=0$ since $\omega_1(0)=\omega_2(0)=0$. Also, $l\neq 0$ since $\omega_2$ is not identically constant. On the other hand, $k\neq 0$ since $\omega'_1(0)\neq 0$. Hence, we can re-write the previous equation as
\[
\omega_1=\frac{1}{l}(\omega_2)^2-\frac{k}{l}\ \omega_2\,,\quad k, l\in\mathbb{C}\setminus\{0\}\,.
\]
But then, after using this identity in \eqref{eq-afterprop-aux2} and rearranging the resulting terms, we get
\[
a_4(\omega_2)^4+a_3(\omega_2)^3+a_2(\omega_2)^2+a_1\omega_2\equiv 0
\]
(for certain coefficients $a_1, a_2, a_3$, and $a_4$). Therefore, since $\omega_2(\mathbb{D})$ is an open set, we conclude that all the coefficients in the previous equation must be equal to zero. In particular,
\[
a_3=-\frac{3(\omega'_1(0))^2k}{l^2}=0\,,
\]
which implies, since $\omega'_1(0)\neq 0$, that $k=0$. This is a contradiction.
\end{pf}
\par\smallskip
The last result in this section is related to the following analytic functions, which will be important in the proof of our main theorem in Section~\ref{ssec-equaldilat}. They are defined in terms of the functions in the canonical decomposition of $f_1=\overline{g_1}+h_1$ and $f_2=\overline{g_2}+h_2$, the values of the derivatives of their dilatations $\omega_1$ and $\omega_2$, respectively, at the origin, and the constant $a_0$ in \eqref{eq-delta}. Concretely, define
\begin{equation}\label{deq-defn1}
\varphi_1=g_2-a_0h_1g_2-\frac{\omega'_2(0)}{2}h_1h_2\,,\quad \varphi_2=\frac{\omega'_2(0)}{2}h_1g_2\,,
\end{equation}
\begin{equation}\label{deq-defn2}
\varphi_3=\frac{\omega'_1(0)}{2}g_1g_2\,,\quad \varphi_4=-\frac{\omega'_2(0)}{2}g_1g_2\,,
\end{equation}
\begin{equation}\label{deq-defn3}
\varphi_5=a_0g_1g_2-\frac{\omega'_1(0)}{2}h_1g_2+\frac{\omega'_2(0)}{2}g_1h_2\,,\quad \varphi_6=-\frac{\omega'_1(0)}{2}g_1h_2\,,
\end{equation}
\begin{equation}\label{deq-defn4}
\text{and}\quad \varphi_7=\frac{\omega'_1(0)}{2}h_1h_2-a_0g_1h_2-g_1\,.
\end{equation}
\par
Notice that, since we are assuming that the functions $f_1$ and $f_2$ are orientation-preserving (so that both $h_1$ and $h_2$ are analytic and locally univalent) and that the dilatations $\omega_1=g'_1/h'_1$ and $\omega_2=g'_2/h'_2$ are not constant and satisfy \eqref{eq-normalizationw'}, we have that none of $\varphi_2$, $\varphi_3$, $\varphi_4$, and $\varphi_6$ can be identically zero. At this point, and due to the remarks we made before, we can also prove that $\varphi_1$, $\varphi_5$, and $\varphi_7$ cannot be identically zero either.
\begin{lem}\label{lem-varphi}
Under the hypotheses considered in Lemma~\ref{lem-Phi} and the additional condition $\omega'_1(0)\neq\omega'_2(0)$, we have that none of $\varphi_1$, $\varphi_5$, and $\varphi_7$ is identically zero.
\end{lem}
\begin{pf}
Assume that $\varphi_1\equiv 0$. Then we have, in view of \eqref{eq-afterprop h},
\[
g_2=\frac{\omega'_2(0)}{2}\frac{h_1}{1-a_0h_1}h_2=\frac{\omega'_2(0)}{2}(h_2)^2\,.
\]
Hence, taking successive derivatives and using \eqref{eq-normalizationh}, we get that $g'''_2(0)=3\omega'_2(0)h''_2(0)$. On the other hand, $g'_2=\omega_2 h'_2$, so that taking derivatives again, we obtain the equation
\begin{equation}\label{eq-g'''}
g'''_2(0)=\omega''_2(0)+2\omega'_2(0)h''_2(0)\,.
\end{equation}
That is,
\[
3\omega'_2(0)h''_2(0)=\omega''_2(0)+2\omega'_2(0)h''_2(0)\,,
\]
or equivalently, $\Phi_2(0)=0$, where $\Phi_2$ is the function defined in \eqref{eq-notation}. But then, by Lemma~\ref{lem-Phi}, we obtain that $\omega'_1(0)=\omega'_2(0)$, which is in contradiction with our hypotheses.
\par
The proof that $\varphi_7$ cannot be identically zero either is completely analogous. We omit the details.
\par\smallskip
Let us suppose now that $\varphi_5\equiv 0$ and recall that, as in \eqref{eq-delta}, $a_0=(h''_2(0)-h''_1(0))/2$. Then
\[
\lim_{z\to 0} a_0 \frac{g_1g_2}{z^4}=\lim_{z\to 0} \frac{\frac{\omega'_1(0)}{2}h_1g_2-\frac{\omega'_2(0)}{2}h_2g_1}{z^4}\,.
\]
A straightforward calculation, which uses that $g'_i=\omega_ih'_i$, $i=1,2$, and that \eqref{eq-normalizationh} and \eqref{eq-normalizationw} are satisfied for all the functions involved, shows that
\[
\lim_{z\to 0} a_0 \frac{g_1g_2}{z^4}=a_0\frac{\omega'_1(0)\omega'_2(0)}{4}\,.
\]
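Indeed, from $g'_i=\omega_i h'_i$ and the normalizations we get $g_i(0)=g'_i(0)=0$ and $g''_i(0)=\omega'_i(0)$, so that
\[
g_i(z)=\frac{\omega'_i(0)}{2}\,z^2+O(z^3)\,,\qquad g_1(z)g_2(z)=\frac{\omega'_1(0)\,\omega'_2(0)}{4}\,z^4+O(z^5)\,.
\]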
With some more effort and using \eqref{eq-g'''} and the analogous identity $g'''_1(0)=\omega''_1(0)+2\omega'_1(0)h''_1(0)$, we obtain
\[
\lim_{z\to 0} \frac{\frac{\omega'_1(0)}{2}h_1g_2-\frac{\omega'_2(0)}{2}h_2g_1}{z^4}=\frac{\omega'_1(0)\omega''_2(0)-\omega''_1(0)\omega'_2(0)+a_0\omega'_1(0)\omega'_2(0)}{12}\,.
\]
Hence, we conclude
\[
\omega'_1(0)\left(\omega'_2(0)h''_2(0)-\omega''_2(0)\right)=\omega'_2(0)\left(\omega'_1(0)h''_1(0)-\omega''_1(0)\right)
\]
or equivalently, using also \eqref{eq-afterprop-aux1},
\[
\omega'_1(0) \Phi_2(0)=\omega'_2(0) \Phi_1(0)= \omega'_2(0) \frac{\omega'_2(0)}{\omega'_1(0)}\Phi_2(0)\,.
\]
This gives (using Lemma~\ref{lem-Phi} if needed) that $\omega'_1(0)=\omega'_2(0)$. This contradiction ends the proof of this lemma.
\end{pf}
\subsection{Main theorem} \label{ssec-equaldilat}
Now we have all the tools to prove the main result in this section. We are aware of the fact that the proof of this theorem is very technical: there are numerous different equations that must be combined appropriately to get the desired conclusion. In the hope of making our arguments understandable and this paper self-contained, we have decided to include enough detail in the different steps of our approach to the proof.
\begin{theorem}\label{thm-main}
Let $f_1=\overline{g_1}+h_1$ and $f_2=\overline{g_2}+h_2$ be orientation-preserving harmonic mappings in the unit disk with non-constant dilatations $\omega_1$ and $\omega_2$, respectively. Assume that both $f_1$ and $f_2$ are normalized as in \eqref{eq-normalizationh}, \eqref{eq-normalizationw}, and \eqref{eq-normalizationw'}. Suppose that $S_H(f_1)=S_H(f_2)$. Then $\omega'_1(0)=\omega'_2(0)$.
\end{theorem}
\begin{pf}
Let us assume, in order to get a contradiction, that $\omega'_1(0)\neq\omega'_2(0)$.
\smallskip
\par
After multiplying both sides of \eqref{eq-thm-schwarzianh2} by $(1+(a_0+\delta)(h_2-\overline{\omega_2}g_2))$, bearing in mind the definition of the function $\delta$ in \eqref{eq-delta}, using that, by \eqref{eq-afterprop h}, $h_2-h_1-a_0h_1h_2=0$, and rearranging the resulting terms, we obtain the relation
\begin{equation}\label{eq-1}
0=\varphi_1 \overline{B_1}+\varphi_2 \overline{B_2}+\varphi_3 \overline{B_3}+\varphi_4 \overline{B_4}+\varphi_5 \overline{B_5}+\varphi_6 \overline{B_6}+\varphi_7 \overline{B_7}\,,
\end{equation}
where for $i=1,\ldots, 7$, the functions $\varphi_i$ are defined by \eqref{deq-defn1}, \eqref{deq-defn2}, \eqref{deq-defn3}, and \eqref{deq-defn4}, and the analytic functions $B_i$ in the unit disk are
\begin{align}\label{deq-defn5}
\nonumber B_1&=\omega_2\,,\quad B_2=(\omega_2)^2\,, \quad B_3=(\omega_1)^2\omega_2\,,\quad B_4=\omega_1(\omega_2)^2\,,\\
& \quad B_5=\omega_1\omega_2\,,\quad B_6=(\omega_1)^2\,,\quad\text{and}\quad B_7=\omega_1\,.
\end{align}
\par\smallskip
Now, notice that the fact that $\omega'_1(0)>0$ (which implies that $\omega_1$ is univalent in a disk centered at the origin) shows that for all $i=1,\ldots, 6$, the quotients $B_i/B_7$ are analytic functions on a neighborhood of the origin. Hence, if we divide by $\overline{B_7}$ in \eqref{eq-1} and take derivatives with respect to $\overline z$ on both sides of the resulting equation, we obtain
\begin{equation}\label{eq-2}
0=\varphi_1 \overline{C_1}+\varphi_2 \overline{C_2}+\varphi_3 \overline{C_3}+\varphi_4 \overline{C_4}+\varphi_5 \overline{C_5}+\varphi_6 \overline{C_6}\,,
\end{equation}
where $C_i=(B_i/B_7)'$, $i=1,\ldots, 6$. Notice that $C_6=\omega'_1$, which is different from zero near $z=0$. Hence, we can again divide by the complex conjugate of this function $C_6$ in the previous equation and take derivatives with respect to $\overline z$ to get
\begin{equation}\label{eq-3}
0=\varphi_1 \overline{D_1}+\varphi_2 \overline{D_2}+\varphi_3 \overline{D_3}+\varphi_4 \overline{D_4}+\varphi_5 \overline{D_5}\,,
\end{equation}
where, now, $D_i=(C_i/C_6)'$, $i=1,\ldots, 5$ and, in particular, $D_5=(\omega'_2/\omega'_1)'$.
\par
Let us suppose that $D_5$ is identically zero in a neighborhood of the origin, $\Delta$ say. Integrating the equation $D_5=0$ we get $\omega'_2=k\omega'_1$ for some constant $k\in\mathbb{C}$ and, integrating once more and using that $\omega_1(0)=\omega_2(0)=0$, that $\omega_2=k\omega_1$. This is not possible unless $\omega'_1(0)=\omega'_2(0)$, by Lemma~\ref{lem-dilatations}. Hence, there must be a disk $\Delta_1=\Delta_1(z_0, r) \subset \Delta$ centered at $z_0\in\Delta$ and with radius $r>0$ such that the analytic function $D_5$ satisfies $D_5(z)\neq 0$ for all $z\in\Delta_1$. Hence, for all points in this disk, it makes sense to divide by $\overline{D_5}$ in \eqref{eq-3} and take derivatives with respect to $\overline z$ to obtain
\begin{equation}\label{eq-4}
0=\varphi_1 \overline{E_1}+\varphi_2 \overline{E_2}+\varphi_3 \overline{E_3}+\varphi_4 \overline{E_4}\,,
\end{equation}
where $E_i=(D_i/D_5)'$, $i=1,\ldots, 4$. We indeed have
\[
E_4=2\left(\frac{\left[\frac{\omega_2\omega'_2}{\omega'_1}\right]'}{\left[\frac{\omega'_2}{\omega'_1}\right]'}\right)'\,.
\]
Now we can argue as before to prove that $E_4$ cannot be identically zero in $\Delta_1$, since if this were the case, we would obtain (after integrating the equation $E_4=0$ three successive times) a relation of the form
\[
(\omega_2)^2=k\omega_2+l\omega_1+m\,,\quad k, l, m\in\mathbb{C}\,,
\]
which is impossible by Lemma~\ref{lem-dilatations}. This proves that $E_4$ is not identically zero in $\Delta_1$, and hence there is another disk $\Delta_2\subset\Delta_1$ where $E_4$ has no zeros. We can then divide out by $\overline{E_4}$ in \eqref{eq-4}, take derivatives with respect to $\overline z$, and get the following equation that holds in this smaller disk $\Delta_2$:
\begin{equation}\label{eq-5}
0=\varphi_1 \overline{F_1}+\varphi_2 \overline{F_2}+\varphi_3 \overline{F_3}\,,
\end{equation}
where $F_i=(E_i/E_4)'$, for $i=1,2,$ and $3$.
\par
We still need to repeat the arguments used before to show that $F_3$ is not identically zero. To this end, it might be useful to point out that
\[
E_3=\left(\frac{\left[\frac{(\omega_1\omega_2)'}{\omega'_1}\right]'}{\left[\frac{\omega'_2}{\omega'_1}\right]'}\right)'\,.
\]
\par
Let us then assume that $F_3\equiv 0$. This gives, after four successive integrations, the expression
\[
\omega_1\omega_2=k(\omega_2)^2+l\omega_2+m\omega_1\,,
\]
since $\omega_1(0)=\omega_2(0)=0$, and once more we have (after an application of Lemma~\ref{lem-dilatations}) that this would imply the contradiction $\omega'_1(0)=\omega'_2(0)$. This proves that $F_3$ is not identically zero in $\Delta_2$.
\par\smallskip
Therefore, going back to \eqref{eq-5}, we have (by Lemma~\ref{lem-varphi} and the hypotheses that the functions $f_1$ and $f_2$ considered are locally univalent and have non-constant dilatations) that none of the functions $\varphi_1$, $\varphi_2$, and $\varphi_3$ are identically zero. We have also proved that $F_3$ cannot be identically zero either. Moreover, it is now easy to obtain that $F_2$ satisfies the same property if we argue as follows.
\par
Suppose that $F_2\equiv 0$. Then, we get from \eqref{eq-5} that $\varphi_1\overline{F_1}+\varphi_3\overline{F_3}\equiv 0$. Hence, since neither $\varphi_3$ nor $F_3$ is identically zero, $F_1$ is not identically zero and we get the identity
\[
\frac{\varphi_1}{\varphi_3}=-\overline{\left(\frac{F_3}{F_1}\right)}\,,
\]
so that, in particular, since the functions $\varphi_1/\varphi_3$ and $F_3/F_1$ involved in the previous equation are analytic in some disk $\Delta_2\subset\mathbb{D}$ (and an analytic function whose complex conjugate is also analytic must be constant), we conclude that $\varphi_1=k\varphi_3$ for some constant $k\in\mathbb{C}$ different from zero. This identity is valid in $\Delta_2$. But again a direct application of the identity principle for analytic mappings gives that it must hold in the whole unit disk. A straightforward calculation shows
\[
\lim_{z\to 0}\frac{\varphi_3(z)}{z^3}=0\,,
\]
while a repeated use of L'H\^{o}pital's rule in addition to the identity $g'''_2(0)=\omega''_2(0)+2\omega'_2(0)h''_2(0)$ (obtained from $g'_2=\omega_2 h'_2$), and the fact that $a_0=(h''_2(0)-h''_1(0))/2$, gives
\[
\lim_{z\to 0}\frac{\varphi_1(z)}{z^3}=-\frac{\Phi_2(0)}{6}\left(=k\lim_{z\to 0}\frac{\varphi_3(z)}{z^3}\right)\,.
\]
Hence, we obtain $\Phi_2(0)=0$ which, as we know from Lemma~\ref{lem-Phi}, implies the identity $\omega'_1(0)=\omega'_2(0)$, a contradiction. Therefore, we have that $\varphi_1$ is not a constant multiple of $\varphi_3$, so that $F_2$ is not identically zero and we can divide \eqref{eq-5} by $\overline{F_2}$ and take derivatives with respect to $\overline z$ to get
\[
\varphi_1\overline{\left(\frac{F_1}{F_2}\right)'}+\varphi_3\overline{\left(\frac{F_3}{F_2}\right)'}\equiv 0\,.
\]
This implies (using that $\varphi_1$ is not a constant multiple of $\varphi_3$) that both the functions
\[
\overline{\left(\frac{F_1}{F_2}\right)'}\quad\text{and} \quad \overline{\left(\frac{F_3}{F_2}\right)'}
\]
are identically zero, which gives the relations
\[
F_1=\widetilde{k}F_2 \quad\text{and} \quad F_3=a F_2\,,
\]
where $a\neq 0$ (since $F_3$ is not identically zero). Moreover, $\widetilde{k}$ must be different from zero as well since, otherwise, we would have $F_1\equiv 0$ and we could argue as before to get from \eqref{eq-5} that $\varphi_2$ and $\varphi_3$ are linearly dependent, which is clearly absurd since $\omega_1$ and $\omega_2$ are not constant. That is, we can write
\begin{equation}\label{eq-EFES}
F_1=kF_3 \quad\text{and} \quad F_2=a F_3\,,\quad k, a\in\mathbb{C}\setminus\{0\}\,.
\end{equation}
Moreover, bearing in mind the definition of the functions $F_1$, $F_2$, and $F_3$, we easily prove the following lemma.
\begin{lem}\label{lem-recursive}
The following identities hold for certain constants $a\neq 0$, $k\neq 0$, $b$, $c$, $d$, $e$, $l$, $m$, $n$, and $p$.
\begin{itemize}
\item[i)] $E_1=kE_3+lE_4$ and $E_2=aE_3+bE_4$.
\item[ii)] $D_1=kD_3+lD_4+mD_5$ and $D_2=aD_3+bD_4+cD_5$.
\item[iii)] $C_1=kC_3+lC_4+mC_5+nC_6$ and $C_2=aC_3+bC_4+cC_5+d C_6$.
\item[iv)] $B_1=kB_3+lB_4+mB_5+nB_6+pB_7$ and $B_2=aB_3+bB_4+cB_5+dB_6+eB_7$.
\end{itemize}
Moreover, from \emph{iv)}, we obtain
\begin{equation}\label{eq-I2}
\omega_2=k(\omega_1)^2\omega_2+l\omega_1(\omega_2)^2+m\omega_1\omega_2+n(\omega_1)^2+p\omega_1
\end{equation}
and
\begin{equation}\label{eq-I3}
(\omega_2)^2=a(\omega_1)^2\omega_2+b\omega_1(\omega_2)^2+c\omega_1\omega_2+d(\omega_1)^2\,,
\end{equation}
respectively, where
\begin{equation}\label{eq-P}
p=\omega'_2(0)/\omega'_1(0)\quad \text{and} \quad d=p^2-cp\,.
\end{equation}
\end{lem}
\begin{pf}
Bearing in mind that $F_1=(E_1/E_4)'$ and $F_3=(E_3/E_4)'$, we easily get from the identity $F_1=kF_3$, $k\neq 0$, obtained above, the relation
\[
\left(\frac{E_1}{E_4}\right)=k\left(\frac{E_3}{E_4}\right)+l\,.
\]
This gives the first identity in i).
\par
Now, from this first identity in i), and since $E_i=(D_i/D_5)'$, $i=1,\ldots, 4$, we obtain the first identity in ii) after integrating the corresponding relation obtained in i). The same approach, recalling that we are assuming the normalizations \eqref{eq-normalizationw}, that $D_i=(C_i/C_6)'$, $i=1,\ldots, 5$, and that $C_i=(B_i/B_7)'$, $i=1,\ldots, 6$, can be used to prove that the first assertions in iii) and iv) hold.
\par\smallskip
Finally, to get \eqref{eq-I2}, it suffices to use the definition of the functions $B_i$, $i=1,\ldots, 7$ in \eqref{deq-defn5} and that \eqref{eq-normalizationw} is satisfied for both $\omega_1$ and $\omega_2$.
\par
The proofs of the second assertions in the different items in this lemma (that make use of the identity $F_2=a F_3$ obtained above), as well as the proof that \eqref{eq-I3} holds, are completely analogous to the ones we have presented. We leave the details to the reader.
\end{pf}
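For instance (a short verification of \eqref{eq-P}): differentiating \eqref{eq-I2} once at the origin gives $\omega'_2(0)=p\,\omega'_1(0)$, since all the other terms on the right-hand side vanish to order at least two, while differentiating \eqref{eq-I3} twice at the origin gives
\[
(\omega'_2(0))^2=c\,\omega'_1(0)\,\omega'_2(0)+d\,(\omega'_1(0))^2\,,
\]
that is, $p^2=cp+d$.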
\par\smallskip
We now need to analyze further the equations \eqref{eq-I2} and \eqref{eq-I3} in the previous lemma.
\par\smallskip
Multiply \eqref{eq-I2} by $\overline{\Phi_2(0)}$, \eqref{eq-I3} by $3(\omega'_2(0))^2/2$, subtract the equations obtained, and use \eqref{eq-afterprop-aux1} and \eqref{eq-afterprop-aux2} to get the identity
\begin{align*}\label{eq-I4}
\left(\overline{\Phi_2(0)} k\right. & \left.-\frac 32 (\omega'_2(0))^2 a\right)\omega_1\omega_2+\left(\overline{\Phi_2(0)}l-\frac 32 (\omega'_2(0))^2b\right)(\omega_2)^2\\
& +\left(\overline{\Phi_2(0)} m-\frac 32 (\omega'_2(0))^2c\right)\omega_2\\
& +\left(\overline{\Phi_2(0)} n-\frac 32 (\omega'_2(0))^2 d+\frac 32 (\omega'_1(0))^2 \right)\omega_1\equiv 0\,.
\end{align*}
A direct application of Lemma~\ref{lem-dilatations} and the fact that $\Phi_2(0)\neq 0$ by Lemma~\ref{lem-Phi} give
\begin{equation}\label{eq-identitycoefficients}
k=a\alpha\,,\quad l=b\alpha\,,\quad m=c\alpha\,,\quad \text{and} \quad n=d\alpha+\beta\,,
\end{equation}
where
\[
\alpha=\frac 32\left(\frac{(\omega'_2(0))^2}{\overline{\Phi_2(0)}}\right)\quad \text{and} \quad \beta=-\frac 32\left(\frac{(\omega'_1(0))^2}{\overline{\Phi_2(0)}}\right)\,.
\]
\par
Our next step is to use \eqref{eq-1}, \eqref{eq-2}, \eqref{eq-3}, \eqref{eq-4}, and \eqref{eq-EFES} to identify completely the constants in Lemma~\ref{lem-recursive}.
\par
To this end, first notice that by \eqref{eq-EFES} and \eqref{eq-5}, we have
\begin{equation}\label{eq-II1}
\varphi_1 \overline{a\alpha}+\varphi_2 \overline{a}+\varphi_3\equiv 0\,.
\end{equation}
\par
The substitution of the equations in assertion i) in Lemma~\ref{lem-recursive} into the identity \eqref{eq-4} gives (using also \eqref{eq-identitycoefficients} and \eqref{eq-II1})
\begin{equation}\label{eq-II2}
\varphi_1 \overline{b\alpha}+\varphi_2 \overline{b} +\varphi_4\equiv 0\,.
\end{equation}
Hence, if we multiply \eqref{eq-II1} by $\omega'_2(0)$, \eqref{eq-II2} by $\omega'_1(0)$ and sum up both equations, we get
\[
\left(\omega'_2(0) \overline{a} + \omega'_1(0) \overline{b}\right) \left(\overline{\alpha}\varphi_1+\varphi_2\right)\equiv 0\,,
\]
which implies (since $\overline{\alpha}\varphi_1+\varphi_2$ cannot be identically zero because otherwise, by \eqref{eq-II1}, we would have that $\varphi_3\equiv 0$ which is a contradiction, as was pointed out right before Lemma~\ref{lem-varphi})
\[
b=-\frac{\omega'_2(0)}{\omega'_1(0)}a=-pa\,.
\]
\par
Let us now substitute the assertions in item ii) of Lemma~\ref{lem-recursive} into \eqref{eq-3}, and use \eqref{eq-identitycoefficients}, \eqref{eq-II1}, and \eqref{eq-II2} to get
\[
\varphi_1 \overline{c\alpha}+\varphi_2 \overline{c} +\varphi_5\equiv 0\,.
\]
This gives, since $\varphi_5\not\equiv 0$ by Lemma~\ref{lem-varphi}, that $c\neq 0$.
\par
The substitution of all the information obtained so far about the coefficients $a$, $b$, $c$, $d$, $k$, $l$, $m$, $n$, $\alpha$, and $\beta$ in equation \eqref{eq-I2} shows
\begin{equation}\label{eq-I2new}
ap\alpha\omega_1(\omega_2)^2 =\left(a\alpha(\omega_1)^2+c\alpha\omega_1-1\right)\omega_2 +(d\alpha+\beta)(\omega_1)^2+p\omega_1\,,
\end{equation}
while a re-arrangement of \eqref{eq-I3} gives
\begin{equation}\label{eq-I3new}
\left(1+ap\,\omega_1\right)(\omega_2)^2=\left(a(\omega_1)^2+c\omega_1\right)\omega_2+(p^2-cp)(\omega_1)^2\,.
\end{equation}
Now, multiply \eqref{eq-I2new} by $\left(1+ap\omega_1\right)$, multiply \eqref{eq-I3new} by $ ap\alpha\omega_1$, subtract the resulting equations and simplify to get
\[
\omega_2=\frac{p\omega_1+\left(a p^2+(p^2-cp)\alpha+\beta \right) (\omega_1)^2+ap\beta(\omega_1)^3}{1+\left(ap-c\alpha\right)\omega_1-a\alpha(\omega_1)^2}\,.
\]
We now substitute this expression for $\omega_2$ in \eqref{eq-afterprop-aux2}, multiply by $(1+\left(ap-c\alpha\right)\omega_1-a\alpha(\omega_1)^2)^2$ and re-arrange the terms to get an identity of the form
\[
a_1\omega_1+a_2(\omega_1)^2+a_3(\omega_1)^3+a_4(\omega_1)^4+a_5(\omega_1)^5+a_6(\omega_1)^6=0\,,
\]
where the coefficients $a_1, \ldots, a_6$ (which must necessarily be equal to zero, since $\omega_1(\mathbb{D})$ is an open set) depend on the non-zero constants $a$, $c$, $p$, $\alpha$, and $\beta$.
\par
The coefficient $a_1=p\overline{\Phi_2(0)}-\overline{\Phi_1(0)}$ is directly equal to zero by \eqref{eq-P} and \eqref{eq-afterprop-aux1}. It is straightforward to check that the equation $a_2=0$ is automatically satisfied as well. A tedious but routine calculation shows that the expression $a_3=0$ is equivalent to the identity
\[
ap\,(p^2-1)=(c-2p)(\alpha p^2+\beta)\,,
\]
which gives, since $\alpha p^2+\beta=\beta(1-p^4)$,
\begin{equation}\label{eq-a}
ap=(2p-c)(1+p^2)\beta\,.
\end{equation}
Finally, from $a_5=0$ we obtain
\[
ap=2c\alpha +2 p(1+p^2)\beta\,.
\]
Hence, using also \eqref{eq-a} we have
\[
2\alpha+\beta(1+p^2)=0\,.
\]
It remains to replace the values of $p$, $\alpha$, and $\beta$ in terms of $\omega'_1(0)$ and $\omega'_2(0)$ in the previous equation to get the desired contradiction $\omega'_1(0)=\omega'_2(0)$.
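Explicitly, substituting the values of $\alpha$ and $\beta$ and clearing the common factor $3/(2\overline{\Phi_2(0)})$, the last equation becomes
\[
2(\omega'_2(0))^2=(\omega'_1(0))^2(1+p^2)\,,
\]
that is, $2p^2=1+p^2$, so $p^2=1$; since $p=\omega'_2(0)/\omega'_1(0)>0$, this forces $p=1$, i.e., $\omega'_1(0)=\omega'_2(0)$.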
\end{pf}
\section{The solution}\label{sec-solution}
\par\medskip
We now state the solution to the problem considered in this paper.
\begin{theorem}
Let $f$ be a locally univalent harmonic mapping in a simply connected domain $\Omega\subset \mathbb{C}$.
\par
\begin{itemize}
\item[i)] If $f$ has non-constant dilatation, the only transformations that preserve local univalence and the harmonic Schwarzian derivative are pre-compositions with locally univalent affine harmonic mappings and anti-analytic rotations.
\item[ii)] If the dilatation of $f$ is constant, so that $f=\alpha\overline{h}+h+\gamma$, where $h$ is an analytic function in $\Omega$, $|\alpha|\neq 1$, and $\gamma\in\mathbb{C}$, then, in addition to the transformations described in the previous item, any harmonic function $F$ of the form
\[
F=\beta\, \overline{T\circ h}+T\circ h+\lambda\,,
\]
where $|\beta|\neq 1$, $\lambda\in\mathbb{C}$, and $T$ is a non-constant M\"{o}bius transformation as in \eqref{eq-Mobius}, satisfies $S_H(f)=S_H(F)$.
\end{itemize}
\end{theorem}
\begin{pf}
The transformations considered preserve the Schwarzian derivative, as pointed out in Sections~\ref{ssec-properties} and \ref{ssec-equaldilat}.
\par
Conversely, if $f$ has non-constant dilatation and $F$ is another locally univalent harmonic function in $\Omega$ with $S_H(f)=S_H(F)$ we can compose both functions with a Riemann map from $\Omega$ onto the unit disk and apply a series of pre-compositions with (invertible) locally univalent affine harmonic mappings and anti-analytic rotations to get two new harmonic functions $\widetilde f$ and $\widetilde F$, say, now locally univalent and orientation-preserving in the unit disk and normalized as in \eqref{eq-normalizationh}, \eqref{eq-normalizationw}, and \eqref{eq-normalizationw'}. Then, by Theorem~\ref{thm-main}, the value of the derivative of the dilatations of these two functions at the origin must be equal. Hence, by Proposition~\ref{prop-equalfucn}, $\widetilde f=\widetilde F$. Undoing the invertible transformations used to produce $\widetilde f$ and $\widetilde F$, from $f$ and $F$, respectively, and using that the uniqueness principle holds for orientation-preserving harmonic mappings (see \cite[p. 8]{Dur-Harm}), we get the desired result.
\par
A direct application of Theorem~\ref{thm-constdilat1} ends the proof.
\end{pf}
In noncommutative projective algebraic geometry, one studies graded algebras with good properties, most of the time of a homological nature. The most famous examples of such algebras are given by the Sklyanin algebras, that is, quadratic algebras of global dimension $n$ associated to a degree $n$ elliptic curve $E$ embedded in $\mathbb{P}^{n-1}$ and a point $\tau \in E$. The Heisenberg group $H_n$ of order $n^3$ acts on these Sklyanin algebras, see for example \cite{odesskii1989sklyanin}, such that their degree 1 part is isomorphic to the Schr\"odinger representation $V$ of $H_n$ associated to some primitive $n$th root of unity $\omega$ and the relations are isomorphic to $V \wedge V$ as $H_n$-representation. These algebras also have the same Hilbert series as the polynomial ring in $n$ variables and are very well behaved.
\par Similarly, another famous class of noncommutative algebras with Hilbert series $\frac{1}{(1-t)^n}$ is given by quantum polynomial rings (see for example \cite{2015point}) defined by
$$
\mathbb{C} \langle x_1,\ldots,x_n \rangle /(x_i x_j -q_{ij} x_j x_i) , 1\leq i < j \leq n, q_{ij}\in \mathbb{C}^*.
$$
In this case, the $n$-dimensional torus group $\mathbb{T}_n = (\mathbb{C}^*)^n$ acts on these algebras with $V = \oplus_{i=1}^n \chi_{e_i}$ and again we have that the relations are isomorphic to $ V \wedge V$ as $\mathbb{T}_n$-representation.
\par These examples motivate the following questions:
\begin{itemize}
\item starting from a reductive group $G$ and $A$ a positively graded, connected algebra on which $G$ acts as gradation preserving automorphisms, can we construct new graded algebras $B$ such that $A \cong B$ as graded $G$-module? The most interesting case will be when $A = \mathbb{C}[V]$ with $V$ a finite dimensional representation of $G$.
\item Can we use the properties of $G$ or $V$ to say something about the constructed algebras? Are some isomorphic, how do point modules behave, $\ldots$
\end{itemize}
This paper develops some general theory regarding such algebras. One of the purposes of this paper is the study of subvarieties of grassmannians and maps between such varieties which are constructed this way, the key observation being that if $W \subset V $ is a subrepresentation of $V$ with $W \cong \oplus_{i=1}^k S_i^{\oplus e_i}$ and $V \cong \oplus_{i=1}^l S_i^{\oplus a_i}$ with obviously $k \leq l$, $e_i \leq a_i$ and $\dim \Hom_G(S_i,S_j) = \delta^i_j$, $\delta^i_j$ being the Kronecker delta, then
\begin{align*}
\Emb_G(W,V)&=\{\xymatrix{W \ar[r]^-f & V}| f \text{ is } G-\text{linear and injective}\}/\Aut_G(W)
\\ &\cong \prod_{i=1}^k \wis{M}^s_{a_i \times e_i}(\mathbb{C})/\wis{GL}_{e_i}(\mathbb{C}) \cong\prod_{i=1}^k \Grass(e_i,a_i)
\end{align*}
where $\wis{M}^s_{a\times b}(\mathbb{C})$ are the $a \times b$-matrices of maximal rank. These objects are studied in Section 2. An example of maps between grassmannians that can occur will be studied in Section 3 for quantum polynomial rings and 3-dimensional Sklyanin algebras.
\par In Section 4 we give some well known constructions of quadratic algebras with Hilbert series $\frac{1}{(1-t)^n}$ that can occur as $G$-deformations of the polynomial ring $\mathbb{C}[V]$ (the terminology will be explained in Section 2), like twisting with an automorphism or making Ore extensions. In the last section this theory is applied to $G = S_{n+1}$, the symmetric group of order $(n+1)!$, and its natural permutation representation coming from the action on $\{0,\ldots,n\}$. It will turn out that the `good' algebras are either skew polynomial rings, which are twists of the commutative polynomial ring, or differential polynomial rings.
\begin{remark}
Part of Section 2 was already proven in \cite{de2014character}, but the author feels that for completeness sake these results should be included in this paper.
\end{remark}
\subsection{Conventions and notations}
All the algebras we will study will be positively graded, connected and finitely generated in degree 1 $\mathbb{C}$-algebras, that is,
$$
A = \mathbb{C} \oplus A_1 \oplus A_2 \oplus \ldots, A_i A_j = A_{i+j}.
$$
The last condition is equivalent to $A$ being generated over $\mathbb{C}$ by $V=A_1$ with $\dim V < \infty$. As such, we can write $A$ as
$$
A = T(V)/R \text{ with } T(V) = \mathbb{C} \oplus V \oplus (V \otimes V) \oplus \ldots, R \text{ homogeneous}
$$
By $\mathbb{C}[V]$ we denote the algebra $T(V)/(V \wedge V)$, the polynomial ring in $n$ variables with $\dim V = n$.
\par $G$ will always be a reductive group acting on $V$. In addition, we will assume that $G$ acts faithfully on $V$. If $g,h \in G$, then $[g,h] = ghg^{-1}h^{-1}$ is the commutator of $g$ and $h$.
\par If $x,y \in A$ with $A$ an algebra, then $[x,y] = xy-yx$ and $\{x,y\} = xy+yx$.
\par For $f_1,\ldots,f_k \in \mathbb{C}[V]$ for $V$ a vector space, $\mathbf{V}(f_1,\ldots,f_k)$ will be the Zariski closed subset of $\mathbb{P}^{n-1}$ (if the elements are homogeneous) or $\mathbb{A}^n$ determined by $f_1,\ldots,f_k$, it will be clear from the context whether we are working in projective space or affine space.
\par $\mathbb{T}_k$ will be the $k$-dimensional torus $(\mathbb{C}^*)^k$. Let $\mathbb{Z}^k = \oplus_{i=1}^k \mathbb{Z} e_i$ and $\mathbf{a} \in \mathbb{Z}^k$, then $\chi_{\mathbf{a}}$ will be the character of $\mathbb{T}_k$ defined by
$$
\chi_{\mathbf{a}}(t_1,\ldots,t_k) = t_1^{a_1}t_2^{a_2}\ldots t_k^{a_k}.
$$
\par For $\mathbb{F}$ a field, $\wis{GL}_n(\mathbb{F})$, $\wis{PGL}_n(\mathbb{F})$, $\wis{SL}_n(\mathbb{F})$ or $\wis{PSL}_n(\mathbb{F})$ will denote the general linear group, respectively the projective general linear group, special linear group or the projective special linear group. $\mathbb{T}_n$ will be the maximal torus in $\wis{GL}_n(\mathbb{F})$ embedded as diagonal matrices and $\mathbb{F}^*$ will correspond to the scalar matrices. If $V$ is a finite dimensional $\mathbb{F}$-vector space, then we will also write $\wis{GL}(V)$, $\wis{PGL}(V)$, $\wis{SL}(V)$ or $\wis{PSL}(V)$.
\par $H_n$ will be the Heisenberg group of order $n^3$ for $n \geq 2$. This group is defined as
$$
H_n = \langle e_1,e_2 \mid e_1^n = e_2^n = [e_1,e_2]^n=1, [e_1,e_2] \text{ central}\rangle.
$$
$H_n$ fits in a short exact sequence
$$
\xymatrix{1 \ar[r]& \mathbb{Z}_n \ar[r] & H_n \ar[r] & \mathbb{Z}_n \times \mathbb{Z}_n \ar[r] & 1}.
$$
As such, the 1-dimensional representations of $H_n$ are labelled by $\chi_{a,b}$, with $\chi_{a,b}(e_1) = \omega^a$ and $\chi_{a,b}(e_2) = \omega^b$ for $\omega$ a fixed primitive $n$th root of unity. If $\omega$ is a primitive $n$th root of unity, then $H_n$ has a simple $n$-dimensional representation $V = \oplus_{i=0}^{n-1} \mathbb{C} x_i$ with the action defined (indices taken modulo $n$) by
$$
e_1 \cdot x_i = x_{i-1}, \qquad e_2 \cdot x_i = \omega^i x_i.
$$
A quick calculation shows that $[e_1,e_2]$ acts by multiplication with $\omega$. In particular, if $n = p$ prime, then all simple representations are described by these $p^2$ 1-dimensional characters and $(p-1)$ $p$-dimensional representations. For a complete description of simple representations of $H_n$ for $n\geq 2$ arbitrary, see \cite{grassberger2001note}.
\section{$G$-deformations}
\subsection{General theory}
\begin{mydef}
Let $G$ be a reductive group. We call a graded connected algebra $A$, finitely generated by degree 1 elements, a $G$-algebra if $G$ acts on it by gradation preserving automorphisms.
\end{mydef}
This implies that there exists a representation $V$ of $G$ such that $T(V)/R \cong A$ with $R$ a graded ideal of $T(V)$, which is itself a $G$-subrepresentation of $T(V)$. From now on, $A$ is a $G$-algebra.
\par In this setting, as $G$ is reductive, each graded component $A_k$ has a decomposition in simple $G$-representations.
\begin{mydef}
Let $A$ be a $G$-algebra. Then $B$ is a $G$-deformation of $A$ up to degree $k$ if $B$ is a $G$-algebra and we have
$$
\forall 0 \leq i \leq k: A_i \cong B_i \text{ as $G$-representations.}
$$
If $\forall k \in \mathbb{N}: A_k \cong B_k$, then we call $B$ a $G$-deformation of $A$.
\end{mydef}
We will always assume that, if $A_1 \cong V$, then $B_1 \cong V$ as $G$-representations. Equivalently, the relations we will deform are always of degree larger than or equal to 2.
\par If $A$ is a $G$-algebra, then $A$ determines for each $k \geq 2$ a short exact sequence of $G$-morphisms
$$
\xymatrix{0 \ar[r]& \ker(\phi_k) \ar[r] & T(V)_k \ar[r]^-{\phi_k} & A_k \ar[r] & 0}
$$
Let $A_k \cong \oplus_{i_k=1}^{n_k} S_{i_k}^{ e_{i_k}}$ and $T(V)_k = \oplus_{i_k=1}^{n_k} S_{i_k}^{ a_{i_k}}$ as $G$-representations with $e_{i_k} \leq a_{i_k}$, with some $e_{i_k}$ possibly equal to 0.
Then $\ker(\phi_k) \cong \oplus_{i_k=1}^{n_k} S_{i_k}^{ f_{i_k}}$ with $f_{i_k} = a_{i_k}-e_{i_k}$. Therefore, if $B$ is a $G$-deformation of $A$ up to degree $k$, then $B$ determines a $G$-subrepresentation of $T(V)_j$ for each $2\leq j \leq k$, isomorphic to $\ker(\phi_j)$. Such subrepresentations are determined by
$$
\Emb_G(\ker(\phi_j),V^{\otimes j}) = \left\{ \xymatrix{\ker(\phi_j) \ar[r]^-f & V^{\otimes j}}| f \text{ injective, $G$-linear} \right\}/\Aut_G(\ker(\phi_j)).
$$
Due to Schur's lemma, using the decompositions from above, this set is the same as
$$
\prod_{i_j = 1}^{n_j} \wis{M}^s_{a_{i_j}\times f_{i_j}}(\mathbb{C})/\wis{GL}_{f_{i_j}}(\mathbb{C}) \cong \prod_{i_j = 1}^{n_j} \Grass(f_{i_j},a_{i_j}).
$$
\par We see that a $G$-deformation $B$ up to degree $k$ of an algebra $A$ determines a unique point in
$$
\prod_{j=2}^k \prod_{i_j = 1}^{n_j} \Grass(f_{i_j},a_{i_j}).
$$
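For instance (a small illustrative special case): if $G$ is the trivial group and $\dim V = 2$, then a quadratic algebra $T(V)/(R)$ with $\dim R = 1$ determines in degree 2 a point of $\Grass(1,4)\cong\mathbb{P}^3$, the projectivization of $V\otimes V$.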
\begin{theorem}
The $G$-deformations up to degree $k$ of $A$ are parametrized by a projective variety.
\end{theorem}
\begin{proof}
The proof is by induction. For any degree $k$, put
$$
T(V)_k \cong \oplus_{i_k=1}^{n_k} S_{i_k}^{ a_{i_k}}, A_k\cong \oplus_{i_k=1}^{n_k} S_{i_k}^{ e_{i_k}}
$$
and we allow $e_{i_k}=0$. Let $f_{i_k} = a_{i_k}-e_{i_k}$ be the multiplicities of the simple representations in $\ker(\phi_k)$ with as above $\phi_k$ being the natural projection map
$$
\xymatrix{ T(V)_k \ar[r]^-{\phi_k}& A_k}.
$$
Let $\mathbf{V}_k$ be the set parametrizing deformations up to degree $k$ of $A$.
\par First consider $k=2$. Then from the above discussion it follows that the deformations up to degree 2 are parametrized by $\prod_{i_2 = 1}^{n_2} \Grass(f_{i_2},a_{i_2})$, which is clearly a projective variety.
\par Assume now that $\mathbf{V}_{k-1}$ is a projective subvariety of
$$
\prod_{j=2}^{k-1} \prod_{i_j=1}^{n_j} \Grass(f_{i_j},a_{i_j}).
$$
Then a point $(P_2,\ldots,P_{k-1},P_k) \in \prod_{j=2}^{k} \prod_{i_j=1}^{n_j} \Grass(f_{i_j},a_{i_j})$ with $P_i \subset T(V)_i$ is an element of $\mathbf{V}_{k}$ if and only if $(P_2,\ldots,P_{k-1}) \in \mathbf{V}_{k-1}$ and for each $2\leq i \leq k-1$ and each $0\leq l \leq k-i$, $V^{\otimes l} \otimes P_i \otimes V^{\otimes k-i-l} \subset P_k$. This is clearly a closed condition, so $\mathbf{V}_{k}$ is indeed closed.
\end{proof}
As in the theorem, let $\mathbf{V}_k$ be the variety parametrizing $G$-deformations of $A$ up to degree $k$.
\par We have natural morphisms coming from projection maps between grassmannians
$$
\xymatrix{\ldots \ar[r]^-{\psi_{k+1}} &\mathbf{V}_k \ar[r]^-{\psi_k}& \mathbf{V}_{k-1}\ar[r]^-{\psi_{k-1}}& \ldots}
$$
such that $\mathbf{V} = \varprojlim \mathbf{V}_k$ parametrizes $G$-deformations of $A$.
\begin{mydef}
We say that a connected variety $Z$ parametrizes $G$-deformations up to degree $k$ of a $G$-algebra $A$ if $Z$ can be embedded in $\mathbf{V}_k$ for some $k \geq 2$ by some morphism $\xymatrix{Z \ar[r]^-\alpha & \mathbf{V}_k}$ such that
the point corresponding to the algebra $A$ is in $\im(\alpha)$. If for each point $z \in Z$, the algebra $A^z=T(V)/(\alpha(z))$ has the property that
$$
\forall k \in \mathbb{N}: (A^z)_k \cong A_k \text{ as $G$-representations},
$$
then we say that $Z$ parametrizes $G$-deformations of $A$.
\end{mydef}
\begin{proposition}
Let $C$ be a smooth projective curve parametrizing $G$-deformations up to degree $k$ such that there exists an open subset $U \subset C$ parametrizing $G$-deformations of $A$, let $\xymatrix{C\ar@{^{(}->}[r]^-{\alpha_k} & \mathbf{V}_k}$ be the corresponding embedding. Then $C$ naturally parametrizes $G$-deformations of $A$, by which we mean that there exists a natural embedding $\xymatrix{C \ar@{^{(}->}[r]& \mathbf{V}}$ extending $\alpha_k$.
\end{proposition}
\begin{proof}
We have the following commutative diagram
$$
\xymatrix{\ldots \ar[r]^-{\psi_{k+2}} & \mathbf{V}_{k+1}\ar[r]^-{\psi_{k+1}} & \mathbf{V}_k \ar[r]^-{\psi_{k}}& \ldots\\
& & C \ar@{^{(}->}[u]^-{\alpha_k} \ar@{-->}[lu]^-{\alpha_{k+1}} &}
$$
with $\alpha_k$ an embedding and $\alpha_{k+1}$ a rational morphism. Due to \cite[Proposition 2.1]{ArithmeticElliptic2009}, $\alpha_{k+1}$ can be extended to a morphism of $C$ into $\mathbf{V}_{k+1}$ which we also denote as $\alpha_{k+1}$. As $\psi_{k+1}\circ \alpha_{k+1}$ coincides with $\alpha_k$ on an open (and hence dense) subset of $C$, they coincide on $C$. As $\alpha_{k}$ is an injection, $\alpha_{k+1}$ is also an injection.
\par By induction, we get commuting triangles $\forall k \geq 2$ with $\alpha_k$ an embedding for all $k$ large enough, so we indeed have an embedding of $C$ into $\mathbf{V}$.
\end{proof}
\begin{example}
Let $A = \mathbb{C}\langle x,y,z\rangle/(\{x,y\},\{y,z\},\{z,x\})$, put $V = \mathbb{C} x \oplus \mathbb{C} y \oplus \mathbb{C} z$ and consider the group $G$ generated by $S_3\subset \wis{GL}_3(\mathbb{C})$ as permutation matrices and the order 3 element
$$\begin{bmatrix}
1&0&0 \\0 & \omega & 0 \\ 0 & 0 & \omega^2
\end{bmatrix}.
$$
Then $G \cong H_3 \rtimes \mathbb{Z}_2$, with the action of $\mathbb{Z}_2= \langle t \rangle$ defined by $t \cdot e_1 = e_1^{-1}$, $t \cdot e_2 = e_2^{-1}$. Then $V \otimes V$ decomposes as a $G$-representation as $W^{\oplus 2} \oplus P$ with $V \wedge V \cong P$, so the $G$-deformations of $A$ up to degree 2 are parametrized by $\mathbb{P}^1$. These algebras parametrized by $\mathbf{V}_2 \cong \mathbb{P}^1$ have relations
$$
\begin{cases}
A(yz+zy)+Bx^2,\\
A(zx+xz)+By^2,\\
A(xy+yx)+Bz^2.
\end{cases}
$$
As in \cite{de2015graded}, generically these are Artin-Schelter regular graded Clifford algebras. There are 4 points where the corresponding algebra doesn't have the correct Hilbert series:
$$
S=\{[0:1],[1:1],[1:\omega],[1:\omega^2]\} \text{ with } \omega^3=1, \omega \neq 1.
$$
For example, in $[0:1]$, we have that $[\{x,y\},z]$ and the cyclic permutations of this relation are not implied by the relations $x^2=y^2=z^2=0$, although they are implied by the relations corresponding to all other points of $\mathbb{P}^1$. So a natural extension of the rational map
$$
\xymatrix{\mathbb{P}^1 \ar@{^{(}->}[r]^{\alpha_3}& \mathbf{V}_3}
$$
to the point $[0:1]$ is by adding the cyclic permutations of $[\{x,y\},z]$ as extra relations.
\end{example}
A trivial consequence of a connected variety $Z$ parametrizing $G$-deformations is that for each $i \in \mathbb{N}$ the function
\begin{gather*}
\xymatrix{Z \ar[r]^-{\beta_i}& \mathbb{N}},
\xymatrix{z \ar@{|->}[r] &\dim_\mathbb{C} A^z_i}
\end{gather*}
is constant. A natural question to ask is the following: if $Z$ parametrizes $G$-deformations up to degree $k$ for some $k \in \mathbb{N}, k \geq 2$ and $\forall i \in \mathbb{N}: \beta_i(z) = \dim_\mathbb{C} A_i$, does $Z$ parametrize $G$-deformations of $A$?
\begin{lemma}
Let $A$ be a $G$-algebra with $\xymatrix{ T(V) \ar@{->>}[r]^-{p} & A}$ the natural projection map. Let $A_k = \oplus_{i_k=1}^{n_k} S_{i_k}^{e_{i_k}}$ be the decomposition of $A_k$ into simple $G$-representations and similarly $T(V)_k = \oplus_{i_k=1}^{n_k} S_{i_k}^{a_{i_k}}$ with naturally $a_{i_k} \geq e_{i_k}$. Then there exists a subspace $W \subset T(V)_k$ such that $W \cong A_k$ as $G$-representations and $p|_W$ is an isomorphism of $G$-representations.
\end{lemma}
\begin{proof}
It follows from Schur's lemma that the map $\xymatrix{ T(V)_k \ar@{->>}[r]^-{p_k} & A_k}$ is a surjective element of $\oplus_{i_k=1}^{n_k}\Hom_G(S_{i_k}^{a_{i_k}},S_{i_k}^{ e_{i_k}})\cong\oplus_{i_k=1}^{n_k} \Hom(\mathbb{C}^{a_{i_k}},\mathbb{C}^{e_{i_k}})$. This reduces the claim to a statement about linear maps, which follows from standard linear algebra.
\end{proof}
\begin{theorem}
Let $Z$ be an irreducible variety parametrizing $G$-deformations up to degree $k$ of $A = T(V)/(R)$. For a point $z \in Z$, let $A^z = T(V)/(R^z)$ be the corresponding associative algebra, with $R^z = \alpha(z) \in \mathbf{V}_k$. If $\forall z \in Z: H_{A^z}(t) = H_{A}(t)$, then we have
$$
\forall i \in \mathbb{N}: (A^z)_i \cong A_i \text{ as $G$-representations}.
$$
\end{theorem}
\begin{proof}
For $z \in Z$, let $\xymatrix{ T(V) \ar@{->>}[r]^-{p^z} & A^z}$ denote the natural homomorphism. Assume, in order to get a contradiction, that $(A^x)_l \not \cong (A^y)_l$ for some points $x, y \in Z$ and some $l > k$; we can choose $l$ minimal with this property. According to the previous lemma, we can find a subspace $W$ of $T(V)_l$ such that $p^x(W) = (A^x)_l$ and $W \cong (A^x)_l$ as $G$-representations.
\par Using the fact that $\forall z \in Z: H_{A^z}(t) = H_{A}(t)$, there exists a Zariski open subset $U$ with $x \in U$ of $Z$ such that
$$
\forall z \in U: p^z(W) = (A^z)_l.
$$
As $p^z$ is a $G$-morphism, we have $W \cong (A^z)_l$.
\par Similarly there exists an open subset $U'$ with $y \in U'$ and a subspace $W'$ of $T(V)_l$ such that $W' \cong (A^y)_l$ and $((p^z)_l)|_{W'}$ a $G$-isomorphism $\forall z \in U'$. As $Z$ was irreducible, there exists a point $a \in U \cap U'$. But then it follows that
$$
(A^x)_l \cong W \cong (A^a)_l \cong W' \cong (A^y)_l
$$
which is a contradiction.
\end{proof}
A trivial consequence of this theorem follows.
\begin{corollary}
Let $Z$ be a connected variety that parametrizes graded algebras $A^z, z \in Z$ with constant Hilbert series. Then for every 2 points $x,y \in Z$ and every $k \in \mathbb{N}$, we have isomorphisms $(A^x)_k \cong (A^y)_k$ as $G$-representations.
\label{cor:constchar}
\end{corollary}
\subsection{Symmetries on $\mathbf{V}_k$}
Assume that $A = T(V)/R$ is a $G$-algebra and let $H = N_{\wis{GL}(V)}(G) = \{h \in \wis{GL}(V)\mid \forall g \in G: hgh^{-1} \in G \}$. Let $H'$ be the maximal subgroup of $H$ such that $A$ is also an $H'$-algebra, so in particular $G \subset H'$. We have a morphism $\xymatrix{H' \ar[r] & \Aut(G)}$ defined by $h\mapsto \varphi_h,\varphi_h(g) = hgh^{-1}$. This implies that we can twist every representation $\xymatrix{G \ar[r]^f& \wis{GL}(W)}$ with $\varphi_h$
$$
\xymatrix{G \ar[r]^-{\varphi_h} & G \ar[r]^-f & \wis{GL}(W)},
$$
which defines a new representation $f\circ \varphi_h$ of $G$, possibly non-isomorphic to $W$ (except if $V = W$ of course). Denote this new $G$-representation by $W^h$. It is obvious that if $W$ is simple, then $W^h$ is also simple. Moreover, if $A$ is an $H'$-algebra, it follows that if $S^{e}$ with $S$ simple is a $G$-subrepresentation of $R_k$ for some $k \geq 2$ with $e$ the multiplicity of $S$ in $R_k$, then $(S^h)^e$ for each $h \in H'$ is also a subrepresentation of $R_k$ and $e$ is the multiplicity of $S^h$ in $R_k$.
\begin{proposition}
Let $A=T(V)/R$ be a $G$-algebra and $H,H'$ as before. Then $H'$ acts on $\mathbf{V}_k$, the variety parametrizing $G$-deformations up to degree $k$ such that points in the same orbit correspond to isomorphic algebras.
\label{prop:symVk}
\end{proposition}
\begin{proof}
Let $h \in H'$ and let $B$ be a $G$-deformation of $A$. Define a new action of $G$ on $B$ by $g \cdot v = hgh^{-1}v$; then $B$ is a $G$-algebra under this action, with the degree 1 part isomorphic to $V$. Taking as generators of $B$ the elements $y_i=h x_i$, for some basis $x_1,\ldots,x_m$ of $V$, we can decompose the relations of $B$ as $G$-modules with respect to the generators $y_i$ in the same way as we did for the generators $x_1,\ldots,x_m$. These two decompositions coincide, since $H'$ permutes the simple representations of $G$ and $A$ is itself an $H'$-algebra.
\end{proof}
One would expect that the centralizer of $G$ in $\wis{GL}(V)$ acts trivially on $\mathbf{V}_k$, but this will not be the case in general. However, if $V$ is a simple representation, then this is obviously true.
\par If $A=\mathbb{C}[V]$ or $A = \wedge V$, then $H=H'$ and we have an action of $H$ on $\mathbf{V}_k$ for each $k\geq 2$.
\begin{example}
Let $G = \mathbb{T}_2$ and $V = \chi_{e_1} \oplus \chi_{e_2}$. Then the $\mathbb{T}_2$-deformations of $\mathbb{C}[V]$ are parametrized by $\mathbb{P}^1$ with $[a:b] \in \mathbb{P}^1$ corresponding to the algebra
$$
T(V)/(ax_1x_2-bx_2x_1).
$$
The normalizer of $\mathbb{T}_2 \subset \wis{GL}(V)$ is the semidirect product $\mathbb{T}_2 \rtimes \mathbb{Z}_2$. $\mathbb{T}_2$ acts trivially on this moduli space, so the action boils down to the action of $\mathbb{Z}_2$ on $\mathbb{P}^1$ defined by $s \cdot [a:b] = [b:a]$, which indeed give isomorphic algebras.
\label{ex:1dquantum}
\end{example}
\section{Some constructions}
\subsection{Twisting}
The main $G$-deformations up to degree $k$ we want to study are those of the polynomial ring $\mathbb{C}[V] = T(V)/(V \wedge V)$. Although the decomposition of $V \wedge V$ and $V \otimes V$ can be completely arbitrary, we can bound the dimension from below using twisting.
\begin{mydef}
Let $B$ be a graded algebra and $\beta \in \Aut(B)$ an automorphism that preserves the gradation. Then the twist $B^\beta$ is defined as the graded associative algebra with underlying set the elements of $B$ and multiplication rule
$$
\forall a \in B_k, b \in B_l: a *_\beta b = a\beta^k(b).
$$
\end{mydef}
\begin{proposition}
Let $\mathbf{V}_k$ be the variety parametrizing $G$-deformations up to degree $k$ of $\mathbb{C}[V]$ and let $V \cong \oplus_{i=1}^n S_i^{\oplus e_i}$ be the decomposition of $V$ in simple $G$-representations.
We then have $$\dim \mathbf{V}_k \geq -1+\sum_{i=1}^n e_i^2.$$
In particular, $V_k$ contains a subvariety isomorphic to
$$
\wis{PGL}_{e_1,\ldots,e_n}(\mathbb{C}) = \left(\prod_{i=1}^n \wis{GL}_{e_i}(\mathbb{C})\right)/\mathbb{C}^*
$$
\end{proposition}
\begin{proof}
Let $\alpha \in \Aut_G(V) \cong \prod_{i=1}^n\wis{GL}_{e_i}(\mathbb{C})$. Then the map
$$
\xymatrix{f_\alpha=Id \otimes \alpha^{-1}:V \otimes V \ar[r]& V\otimes V}
$$
is also a $G$-isomorphism, and $f_\alpha(V \wedge V)$ is the vector space of defining relations of $\mathbb{C}[V]^\alpha$. Therefore $f_\alpha(V \wedge V)\cong V \wedge V$. As $\mathbb{C}[V]$ has the property
$$
\forall \alpha,\beta \in \Aut(\mathbb{C}[V]) : f_\alpha(V \wedge V) = f_\beta(V \wedge V) \Rightarrow \exists \lambda \in \mathbb{C}^*: \alpha = \lambda \beta,
$$
we have that each $\alpha \in \left(\prod_{i=1}^n\wis{GL}_{e_i}(\mathbb{C})\right)/\mathbb{C}^*$ defines a $G$-deformation of $\mathbb{C}[V]$.
\end{proof}
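To make the twisting construction concrete (a small sketch, consistent with the example below): take $V=\mathbb{C} x\oplus\mathbb{C} y$ and $\alpha\in\Aut_G(V)$ with $\alpha(x)=x$ and $\alpha(y)=qy$ for some $q\in\mathbb{C}^*$. In the twist $\mathbb{C}[V]^\alpha$ one computes
\[
x*_\alpha y=x\,\alpha(y)=q\,xy\,,\qquad y*_\alpha x=y\,\alpha(x)=yx=xy\,,
\]
so $x*_\alpha y=q\,(y*_\alpha x)$ and $\mathbb{C}[V]^\alpha\cong\mathbb{C}\langle x,y\rangle/(xy-q\,yx)$, a quantum plane.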
It can be the case that $G$-deformations up to degree 2 of $\mathbb{C}[V]$ are (generically) twists of $\mathbb{C}[V]$, as the next example will show.
\begin{example}
Let $G=\mathbb{T}_2=\mathbb{C}^* \times \mathbb{C}^*$ be the 2-dimensional torus and let $V = \chi_{e_1} \oplus \chi_{e_2}$. We then have the decomposition
\begin{gather*}
V \otimes V \cong \chi_{2e_1} \oplus \chi_{2e_2} \oplus \chi_{e_1 + e_2}^2,\\
V \wedge V \cong \chi_{e_1 + e_2}.
\end{gather*}
So the $G$-deformations up to degree 2 of $\mathbb{C}[V]$ are parametrized by $\mathbb{P}^1$. The gradation-preserving automorphisms of $\mathbb{C}[V]$ that commute with the action of $\mathbb{T}_2$ form $\mathbb{T}_2$ itself, so the twists give a 1-dimensional family of $G$-deformations of $\mathbb{C}[V]$.
\par In fact, every point of $\mathbb{P}^1$ defines a $G$-deformation of $\mathbb{C}[V]$: for each point~$p=[a:b]\in \mathbb{P}^1$, the algebra $A^p = \mathbb{C}\langle x,y \rangle/(axy-byx)$ satisfies $H_{A^p}(t) = \frac{1}{(1-t)^2}$, and using Corollary \ref{cor:constchar}, we conclude that
$$
\forall p \in \mathbb{P}^1 \forall k \in \mathbb{N}: (A^p)_k \cong \mathbb{C}[V]_k \cong \oplus_{i=0}^k \chi_{(k-i,i)}.
$$
\end{example}
If we are just interested whether we can make twists of $A$ that are also $G$-algebras but not necessarily $G$-deformations, then we can improve this theorem.
\begin{theorem}
Let $A=T(V)/(R)$ be a $G$-algebra and let $\phi \in \Aut_{\mathbb{C}^*}(A)$ such that
$$
\forall g \in G: [\phi,g] \in Z(\wis{GL}(V)).
$$
Then the twist $A^\phi$ is also a $G$-algebra with $A^\phi \cong V$.
\end{theorem}
\begin{proof}
Let $R^\phi$ be the relations of $A^\phi$ and let $\phi_k$ be the vector space automorphism on $V^{\otimes k}$ defined by
$$
\phi_k(x_{i_1}\ldots x_{i_k})= x_{i_1} \phi(x_{i_2})\ldots\phi^{k-1}(x_{i_k})
$$
with $x_j \in V$ and extending this linearly. From the construction of $A^\phi$, it follows that
$$
f \in R_k \Leftrightarrow \phi_k^{-1}(f) \in R^\phi_k\,.
$$
We need to show that, for any $g \in G$, $g\cdot \phi_k^{-1}(f)\in R^\phi_k$. Let $[\phi,g]=\lambda \in \mathbb{C}^*$; then $g\phi^{-1}=\lambda\phi^{-1}g$ and hence $g\phi^{-j}=\lambda^{j}\phi^{-j}g$ for all $j\geq 0$. Applying this in each tensor factor, we then have
$$
g \cdot \phi_k^{-1}(f) = \lambda^m \phi_k^{-1}(g\cdot f), \qquad m=0+1+\cdots+(k-1)=\binom{k}{2},
$$
and as $g\cdot f \in R_k$, we have $g \cdot \phi_k^{-1}(f) \in R^\phi_k$.
\end{proof}
However, as the next example will show, it generally is not true that $A^\phi \cong A$ as graded $G$-module.
\begin{example}
Let $D_4 = H_2$ be the Heisenberg group of order 8 (which is the same as the dihedral group of order 8) and take the Schr\"odinger representation $V = \mathbb{C} x \oplus \mathbb{C} y$ of $D_4$, defined by the matrices
$$
e_1 \mapsto \begin{bmatrix}
0 & 1 \\ 1 & 0
\end{bmatrix}, e_2 \mapsto \begin{bmatrix}
1 & 0 \\ 0 & -1
\end{bmatrix}.
$$
Then the induced map $\xymatrix{H_2 \ar[r]& \wis{PGL}_2(\mathbb{C})}$ has as kernel the group generated by $[e_1,e_2]$ and $H_2/[e_1,e_2] \cong \mathbb{Z}_2 \times \mathbb{Z}_2$. So for example $e_2$ satisfies the condition of the previous theorem. Therefore, let $A = \mathbb{C}[V]$, then $A^{e_2} \cong \mathbb{C}\langle x,y \rangle/(xy+yx)$ ($A^{e_2}$ is the twist of $A$ by the automorphism $e_2$, not the subalgebra of $A$ fixed by $e_2$).
\par But $\mathbb{C}[x,y] \not\cong \mathbb{C}\{x,y\}:=\mathbb{C}\langle x,y\rangle/(xy+yx)$ as graded $H_2$-representations: the defining relation of $\mathbb{C}[x,y]$ spans a copy of $\chi_{1,1}$, while that of $\mathbb{C}\{x,y\}$ spans a copy of $\chi_{0,1}$. So $A \not\cong A^{e_2}$.
\end{example}
\subsection{Ore extensions}
Another way to make $G$-deformations of an algebra $A$ is by using Ore extensions. Recall that the Ore extension $A[t;\sigma,\delta]$, with $\sigma \in \Aut(A)$ and $\delta \in \Der^{\sigma}(A)$ a $\sigma$-derivation (so that $\delta(ab)=\sigma(a)\delta(b)+\delta(a)b$), is the algebra generated by $A$ and $t$ subject to the relations $ta=\sigma(a)t+\delta(a)$ for all $a \in A$.
\begin{proposition}
Let $A = T(V)/(R)$ be a quadratic $G$-algebra, $\sigma \in \Aut_G(A)$ and $\delta \in \Der^\sigma(A)$ such that
\begin{align}
\forall g \in G, \forall x \in A_1: \delta(g(x)) = \chi^*(g) g(\delta(x)).
\label{req:ore}
\end{align}
Then $A[t;\sigma,\delta]$ is a $G$-algebra such that $\deg(t) = 1$ and $\mathbb{C} t \cong \chi$. Conversely, if $A[t;\sigma,\delta]$ is a $G$-algebra such that $\mathbb{C} t\cong \chi$, then $\sigma \in \Aut_G(A)$ and $\delta \in \Der^\sigma(A)$ fulfils the above property.
\end{proposition}
\begin{proof}
The relations of an Ore extension of a quadratic algebra $A$ are of the form
$$
\forall x \in A_1: tx - \sigma(x)t - \delta(x)=0.
$$
In order for $G$ to act on such an extension such that $\mathbb{C} t \cong \chi$, we need to calculate $g(tx - \sigma(x)t - \delta(x))$ for each $x \in A_1$. We have
\begin{align*}
g(tx - \sigma(x)t - \delta(x)) &= g(t)g(x) - g(\sigma(x))g(t) - g \delta(x) \\
&= \chi(g)t g(x) - \sigma(g(x)) \chi(g)t - \chi(g) \delta(g(x))\\
&=\chi(g)(tg(x) - \sigma(g(x))t- \delta(g(x))) = 0.
\end{align*}
For the other arrow, let $A[t;\sigma,\delta]$ be a $G$-algebra such that $\deg(t) = 1$ and $\mathbb{C} t \cong \chi$. Observe that the extra relations to get from $A$ to $A[t;\sigma,\delta]$ lie in the finite dimensional vector space $\mathbb{C} t \otimes V \oplus V \otimes \mathbb{C} t \oplus A_2$, which is also a direct sum as $G$-representations. These relations then form a $G$-subrepresentation if and only if the 2 maps
\begin{gather*}
\xymatrix{\mathbb{C} t \otimes V \ar[r]^-f& V \otimes \mathbb{C} t}\\
\xymatrix{\mathbb{C} t \otimes V \ar[r]^-g& A_2}
\end{gather*}
with $f$ mapping $t \otimes v$ to $\sigma(v) \otimes t$ and $g$ mapping $t \otimes v$ to $\delta(v) \in A_2$ are $G$-morphisms. So $\sigma \in \Aut_G(A)$ and $\delta$ fulfils the requirements of the proposition.
\end{proof}
Unfortunately, sometimes the only Ore extensions of $\mathbb{C}[V]$ we can make are those with $\delta = 0$.
\begin{proposition}
If $\sigma \in \prod_{i=1}^n\wis{GL}_{e_i}(\mathbb{C})$ has no eigenvalue equal to 1 and if there is no $G$-submodule of $V$ isomorphic to $\chi$, then there exists no $0 \neq \delta \in \Der^\sigma(\mathbb{C}[V])$ fulfilling requirement \ref{req:ore}.
\label{prop:Ore}
\end{proposition}
\begin{proof}
Let $V = \oplus_{i=0}^{n-1}\mathbb{C} x_i$ as vector space. From the definition of a $\sigma$-derivation, we need
$$
\forall 0 \leq i,j \leq n-1: \sigma(x_i)\delta(x_j) + x_j \delta(x_i) = \delta(x_i x_j) = \delta(x_j x_i) = \sigma(x_j)\delta(x_i)+x_i \delta(x_j).
$$
From which it follows that
$$
(\sigma(x_i)-x_i)\delta(x_j) = (\sigma(x_j)-x_j)\delta(x_i).
$$
As $\sigma$ has no eigenvalue equal to 1, the elements $v_i = \sigma(x_i)-x_i=(\sigma -1)(x_i)$, $i=0,\ldots, n-1$, also form a basis of $V$ and we have
$$
v_i \delta (x_j)= v_j \delta(x_i).
$$
Using the fact that $\mathbb{C}[V]$ is a UFD and that $v_0,\ldots,v_{n-1}$ form a basis, it follows that $\delta(x_i) = v v_i$ for some $v \in V$, so $\delta(x_i)=v(\sigma - 1)(x_i)$. By requirement \ref{req:ore}, we need that
$$
v (\sigma-1)(g(x)) =\delta(g(x)) = \chi^*(g)g(\delta(x))=\chi^*(g) g(v)g((\sigma-1)(x)),
$$
from which it follows that
$$
g(v) = \chi(g) v.
$$
So if $\chi$ is not a subrepresentation of $V$, then necessarily $\delta = 0$.
\end{proof}
\section{$S_{n+1}$-deformations of $\mathbb{C}[V]$}
Let $V = S \oplus T$ be the $n+1$-dimensional permutation representation of $S_{n+1}$ with $S$ $n$-dimensional and $T$ the trivial representation. We have the following proposition.
\begin{proposition}
Let $n \geq 2$. The $S_{n+1}$-deformations up to degree 2 of $\mathbb{C}[V]$ are parametrized by $\mathbb{P}^2$.
\end{proposition}
\begin{proof}
We have $ V\otimes V = S \otimes S \oplus S^{\oplus 2} \oplus T$ and $V \wedge V = S \wedge S \oplus S$. $S \otimes S = S\wedge S \oplus T \oplus S \oplus W$ with $W$ simple, $W \neq S, S \wedge S$ according to \cite[Exercise 4.19]{fulton1991representation}. Then the $S_{n+1}$-deformations of $\mathbb{C}[V]$ are parametrized by $\Emb_{S_{n+1}}(S \wedge S \oplus S, S \wedge S \oplus S^{\oplus 3}) \cong \mathbb{P}^2$. Let $V = \oplus_{i=0}^n \mathbb{C} x_i$ such that
$$
\forall \sigma \in S_{n+1}: \sigma(x_i)= x_{\sigma(i)},
$$
put $y_i = x_0 - x_i, 1\leq i \leq n$ and let $v = \sum_{i=0}^n x_i$. Then for $P=[A:B:C] \in \mathbb{P}^2$, the relations of an $S_{n+1}$-deformation $A_P$ of $\mathbb{C}[V]$ correspond to
\[
\begin{cases}
A y_i v + B v y_i + C y_i\left( (n-1)y_i-2\sum_{j=1,j\neq i}^n y_j\right)=0, 1 \leq i \leq n,\\
[y_i,y_j] = 0, 1 \leq i < j \leq n.
\end{cases}
\]
The proposition follows.
\end{proof}
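For instance, for $n=2$ the relations of $A_P$ read
$$
A y_1 v + B v y_1 + C y_1(y_1-2y_2)=0,\qquad
A y_2 v + B v y_2 + C y_2(y_2-2y_1)=0,\qquad
[y_1,y_2]=0.
$$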
From the relations it follows that, generically, these algebras are Ore extensions of the (commutative) ring generated by $y_1,\ldots,y_n$. Unfortunately, if $\frac{B}{A}\neq -1$ and $C \neq 0$, this ring cannot be the commutative polynomial ring $\mathbb{C}[y_1,\ldots,y_n]$: the corresponding automorphism of $\mathbb{C}[y_1,\ldots,y_n]$ would have no eigenvalue equal to 1, so Proposition \ref{prop:Ore} would force $\delta = 0$, contradicting $C \neq 0$.
\par Therefore, the $S_{n+1}$-deformations of $\mathbb{C}[V]$ that are Ore extensions of $\mathbb{C}[y_1,\ldots,y_n]$ are parametrized by two lines: one corresponding to the differential polynomial rings $\mathbb{C}[y_1,\ldots,y_n][v;\delta_a]$ with $\delta_a(y_i)=a y_i\left( (n-1)y_i-2\sum_{j=1,j\neq i}^n y_j\right)$, and one corresponding to the skew polynomial rings $\mathbb{C}[y_1,\ldots,y_n][v;\sigma_a]$ with $\sigma_a(y_i) = a y_i$.
\begin{theorem}
The Artin-Schelter regular algebras of global dimension $n+1$ which are $S_{n+1}$-deformations of the polynomial ring $\mathbb{C}[V]$ and are domains are parametrized by $\mathbf{V}(xy)\subset \mathbb{A}^2$, the union of two lines, with one line corresponding to quantum algebras and the other line corresponding to differential polynomial rings.
\end{theorem}
\begin{proof}
Both lines define Ore extensions of the polynomial ring $\mathbb{C}[S]$, so we can apply the results of \cite{phan2012yoneda}. The points at infinity are excluded, as there the algebras have zero divisors in degree 1 and are therefore not domains.
\end{proof}
The two strata we will study will correspond to the algebras with relations
\[
\begin{cases}
y_i v - a v y_i = 0, 1 \leq i \leq n,\\
[y_i,y_j] = 0, 1 \leq i < j \leq n,
\end{cases}
\]
and
\[
\begin{cases}
vy_i-y_iv= c y_i\left( (n-1)y_i-2\sum_{j=1,j\neq i}^n y_j\right), 1 \leq i \leq n,\\
[y_i,y_j] = 0, 1 \leq i < j \leq n.
\end{cases}
\]
\subsection{Classification of the simple objects in $\wis{Proj}(\mathcal{A})$}
Let $\mathcal{A}$ be an AS-regular $S_{n+1}$-deformation of $\mathbb{C}[V]$. In order to calculate the simple objects of $\wis{Proj}(\mathcal{A})$, we can use that in both cases the sequence $y_1,y_2,\ldots,y_n$ is a normalizing, regular sequence.
\subsubsection{The Skew polynomial case}
Here we can use the results of \cite{2015point}.
\begin{theorem}
$\mathcal{A}$ is a twist of the polynomial ring $\mathbb{C}[V]$. Consequently, $\wis{Proj}(\mathcal{A}) \cong \mathbb{P}^{n}$.
\end{theorem}
\begin{proof}
Take $\beta(y_i) = y_i$ and $\beta(v) = av$ with $a \in \mathbb{C}^*$. Then the twist $\mathbb{C}[V]^\beta$ has relations
$$
y_i *_\beta y_j = y_i y_j = y_j *_\beta y_i, \quad 1 \leq i < j \leq n, \qquad
y_i *_\beta v = a\, y_i v = a\, v *_\beta y_i, \quad 1 \leq i \leq n,
$$
which corresponds to one of the lines in $\mathbb{P}^2$ we defined.
\end{proof}
\subsubsection{The differential polynomial ring case}
By rescaling $v$, we may assume that we are working with the derivation defined by
$$
\delta(y_i) = y_i\left((n-1)y_i-2\sum_{j=1,j\neq i}^n y_j\right)=y_i f_i, i = 1 \ldots n.
$$
\begin{theorem}
The simple objects of $\wis{Proj}(\mathcal{A})$ are parametrized by $2^{n}-1$ lines through one point in $\mathbb{P}^{n} = \mathbb{P}((\mathcal{A}_1)^*)$; each point on such a line corresponds to a point module of $\mathcal{A}$. The union of these lines is the Zariski closed subset
$$
\mathbf{V}(y_i y_j (y_i-y_j)\mid 1\leq i < j \leq n).
$$
If $n$ is even, the only point module with (non-trivial) finite dimensional simple quotients is the intersection point of these lines. If $n$ is odd, then there are $\binom{n}{\frac{n+1}{2}}$ lines parametrizing $\mathbb{C}^*$-orbits of 1-dimensional representations, and these are the only lines with non-trivial simple quotients. There are no other simple objects in $\wis{Proj}(\mathcal{A})$.
\end{theorem}
\begin{proof}
Let $P \in \wis{Proj}(\mathcal{A})$ be a simple object, that is, $P$ is a graded module with Hilbert series $\frac{p}{1-t}$ for some integer $p \geq 1$ such that every proper graded quotient of $P$ is finite dimensional. As each $y_i$ is normalizing, either $y_i \in \Ann(P)$ or $P$ corresponds to a simple $\mathcal{A}[y_i^{-1}]_0$-representation, cf. \cite{NastaFVO}. Assume that each $y_i\in \Ann(P)$; then $P$ is an $\mathcal{A}/(y_1,\ldots, y_n) \cong \mathbb{C}[v]$-module and therefore $P \cong \mathbb{C}[v]$ as $\mathcal{A}$-module.
\par Assume now that $y_1 \notin \Ann(P)$; then $P$ corresponds to a simple representation of $\mathcal{A}[y_1^{-1}]_0$. Let $v_j = y_j y_1^{-1}$, $2\leq j \leq n$, and $w=vy_1^{-1}$. Then the relations of $\mathcal{A}[y_1^{-1}]_0$ become $v_j v_k - v_k v_j=0$ and, for $2\leq k \leq n$,
\begin{align*}
w v_k &= v y_1^{-1} y_k y_1^{-1}\\
&= (y_1^{-1} v - \left((n-1)-2\sum_{j=2}^n y_jy_1^{-1}\right)) y_k y_1^{-1}\\
&=y_1^{-1}vy_ky_1^{-1}-(n-1)v_k+2\sum_{j=2}^n v_j v_k\\
&=y_1^{-1}(y_k v +y_k\left((n-1)y_k-2\sum_{j=1,j\neq k}^n y_j \right))y_1^{-1}-(n-1)v_k+2\sum_{j=2}^n v_j v_k\\
&=v_kw+(n-1)(v_k^2-v_k)-2\sum_{j=1,j\neq k}^n v_j v_k + 2\sum_{j=2}^n v_j v_k\\
&=v_kw + (n+1)v_k(v_k-1).
\end{align*}
Consequently, the 1-dimensional representations of $\mathcal{A}[y_1^{-1}]_0$ correspond to the Zariski closed subset $\mathbf{V}(a_k(a_k-1)|2\leq k \leq n)\subset \mathbb{A}^n$, which is the union of $2^{n-1}$ lines.
\par We can now do the same for the algebra $\mathcal{A}[y_2^{-1}]_0$, which will give an additional $2^{n-1}$ lines. However, $2^{n-1}-2^{n-2}$ of these lines were already found using $\mathcal{A}[y_1^{-1}]_0$ (those not annihilated by $y_1$), so we get $2^{n-2}$ new lines. By induction, we get in total
$$
\sum_{j=0}^{n-1} 2^{j} = 2^{n}-1
$$
lines. In $\mathbb{P}^n$, each of these lines goes through the point $[0:0:\ldots:0:1]$, which will be the intersection point.
\par In order to prove that there are no fat point modules (simple objects with Hilbert series $\frac{p}{1-t}$, $p >1$), we will prove that $\mathcal{A}[y_1^{-1}]_0$ has no $p$-dimensional simple representations for $p>1$. This suffices, since $\mathcal{A}[y_1^{-1}]_0 \cong \mathcal{A}[y_i^{-1}]_0$ for any $1 \leq i \leq n$. We can rescale $v$ such that we are looking at the differential polynomial ring $\mathbb{C}[v_2,\ldots,v_n][w;\delta]$ with $\delta(v_i) = v_i(v_i-1)$.
\par First, we prove a lemma.
\begin{lemma}
Let $\rho$ be a finite dimensional simple representation of the algebra $\mathbb{C}\langle x,u \rangle/(ux-xu-x(x-1))$. Then $\rho$ is 1-dimensional and $\rho(x) = 0$ or $\rho(x)=1$.
\end{lemma}
\begin{proof}
Suppose first that neither $0$ nor $1$ is an eigenvalue of $\rho(x)$, so that $\rho(x)$ and $\rho(x)-1$ are invertible. We then have for $Y = 1-\rho(x)^{-1}$ and $U = \rho(u)$
$$
YU - UY = \rho(u)\rho(x)^{-1}-\rho(x)^{-1}\rho(u) = -(1-\rho(x)^{-1})=-Y,
$$
so the pair $(U,Y)$ forms a representation of the non-abelian 2-dimensional Lie algebra. Moreover, $Y = (\rho(x)-1)\rho(x)^{-1}$ is invertible. Let $a$ be an eigenvector of $U$ with eigenvalue $\alpha$. We then have
$$
UYa = (YU+Y)a = (\alpha +1)Ya.
$$
Iterating, each nonzero $Y^ka$ is an eigenvector of $U$ with eigenvalue $\alpha+k$. But $\rho$ is finite dimensional, so $U$ has only finitely many eigenvalues and $Y^ka=0$ for some $k$; hence $Y$ has a non-trivial kernel, which is a contradiction. Consequently, $0$ or $1$ is an eigenvalue of $\rho(x)$.
\par Let $\rho_0 = \{a \in \rho | \rho(x)a=0 \}$ and $\rho_1 = \{a \in \rho | \rho(x)a=a \}$ and take elements $a \in \rho_0$, $b \in \rho_1$. We then have
\begin{align*}
\rho(x)\rho(u)a = \rho(u)\rho(x)a - \rho(x)(\rho(x) - 1)a = 0 \\
\rho(x)\rho(u)b = \rho(u)\rho(x)b - \rho(x)(\rho(x) - 1)b = \rho(u)b
\end{align*}
So $\rho_0$ and $\rho_1$ are stable under $\rho(u)$ and are therefore subrepresentations. By simplicity, $\rho = \rho_0$ and $\rho_1=0$ or vice versa; in both cases $\rho(x(x-1))=0$ and $\rho$ is actually a simple representation of the commutative algebra $\mathbb{C}[x,u]/(x(x-1))$, hence 1-dimensional. The lemma follows.
\end{proof}
Now let $\rho$ be a finite dimensional simple representation of $\mathcal{A}[y_1^{-1}]_0$. As the $v_j$, $2 \leq j \leq n$, commute with each other, they have a common eigenvector $a$, say $v_j a = \alpha_j a$. For each $j$ the subalgebra generated by $v_j$ and $w$ is a quotient of $\mathbb{C}\langle x,u \rangle/(ux-xu-x(x-1))$, so by the lemma (applied to the simple factors of a composition series of $\rho$ over this subalgebra) every eigenvalue of $\rho(v_j)$ is $0$ or $1$; in particular each $\alpha_j \in \{0,1\}$. We then have
$$
v_j (w a) = (w v_j - v_j(v_j-1))a = \alpha_j\, wa - \alpha_j(\alpha_j-1)a = \alpha_j\, wa.
$$
Hence the joint eigenspace of the $v_j$ containing $a$ is stable under $w$, so by simplicity it is all of $\rho$ and each $v_j$ acts as the scalar $\alpha_j$. In particular the images of $w$ and the $v_j$ commute, which means that $\rho$ is only simple if $\rho$ is 1-dimensional.
\par Now we need to check which of these point modules have simple quotients. This amounts to checking which of these point modules have a finite orbit under the shift functor $\xymatrix{\wis{Proj}(\mathcal{A}) \ar[r]^-{[1]}&\wis{Proj}(\mathcal{A})}$. Equivalently, as $\mathcal{A}$ is Artin-Schelter regular, we need to check which orbits of the associated automorphism $\phi$ on the point variety are finite, cf. \cite{smith19924}. It is clear that $\phi$ sends the intersection point to itself, as it is the only singular point of the point variety. Let $P$ be a point module different from the intersection point; then $P$ corresponds to a set $T \subset \{1,\ldots,n\}$, $T \neq \{1,\ldots,n\}$, such that $y_i \in \Ann(P) \Leftrightarrow i \in T$. In addition, we have $y_i,y_j \notin \Ann(P) \Rightarrow y_i-y_j \in \Ann(P)$. Let $[a_1:\ldots:a_n:r]$ be the corresponding point of $\mathbb{P}^n$, so $a_i = 0 \Leftrightarrow i \in T$, $a_i = a_j$ whenever $i,j \not\in T$, and $r$ is arbitrary. We need to find $[b_1:\ldots:b_n:s]$ such that the following equations hold
$$
\begin{cases}
rb_i-a_is = a_i\left((n-1)b_i-2\sum_{j=1,j\neq i}^n b_j \right), 1\leq i \leq n,\\
a_ib_j-a_jb_i=0, 1\leq i < j \leq n.
\end{cases}
$$
From the second set of equations it follows that $[b_1:\ldots:b_n]=[a_1:\ldots:a_n]$, so we may take $b_i = a_i$ for all $1\leq i \leq n$, and we may pick $a_i= b_i = 1$ if $i \not\in T$. The first set of equations is then trivially fulfilled if $i \in T$. If $i \not\in T$, then we have
$$
s = r-\left((n-1)-2\sum_{j=1,j\neq i}^n a_j \right)=r+n-2|T|-1.
$$
If $|T| \neq \frac{n-1}{2}$, this shows that $\phi$ is a non-trivial translation on $\mathbb{P}^1_{[a_i:r]}$, which fixes one point (the intersection point of all the lines). However, if $|T| = \frac{n-1}{2}$ (so $n$ has to be odd), then $\phi$ fixes each point on the corresponding line, so each of these $\binom{n}{\frac{n-1}{2}}=\binom{n}{\frac{n+1}{2}}$ lines parametrizes $\mathbb{C}^*$-orbits of 1-dimensional representations of $\mathcal{A}$.
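As a consistency check: for $n=2$ and $T=\emptyset$ the formula gives $s = r+1$, a non-trivial translation, consistent with the infinite-order automorphism $[x_0:x_0:t_0]\mapsto[x_0:x_0:t_0+x_0]$ found in the first example below; for $n=3$ the fixed lines are those with $|T|=1$, giving the $\binom{3}{2}=3$ families of 1-dimensional representations visible in the second example.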
\end{proof}
For $n=2,3$, we can make things more explicit.
\begin{example}
As $S_3 \cong D_3 = \langle e_1,e_2 | e_1^3=e_2^2=1, e_2e_1e_2=e_1^2\rangle$, the differential polynomial ring $\mathcal{A}$ we are studying is isomorphic to $\mathbb{C}\langle x,y,t\rangle/(R)$ with relations
$$
\begin{cases}
xy-yx=0,\\
xt-tx=y^2,\\
yt-ty=x^2.
\end{cases}
$$
In this case, the point modules can be computed by the method of multilinearization as the zeroset of the polynomial
$$
\det\left(\begin{bmatrix}
-y_0 & x_0 & 0 \\ -t_0 & -y_0 & x_0 \\ -x_0& -t_0 & y_0
\end{bmatrix}\right)=y_0^3-x_0^3.
$$
This is indeed the union of three lines through the point $[0:0:1]$. If we take for example the line $x_0 = y_0$, then the automorphism defined by this algebra on $\mathbf{V}(x_0-y_0)$ is given by $[x_0:x_0:t_0]\mapsto [x_0:x_0:t_0+x_0]$, which is clearly an automorphism of infinite order and fixes only one point.
\par The fact that all 1-dimensional simple representations come from the intersection point follows immediately as we have $\mathcal{A}/([x,t],[y,t])\cong \mathbb{C}[a,b,c]/(a^2,b^2)$.
\end{example}
\begin{example}
For $n=3$, we have the isomorphism $S_4 \cong (\mathbb{Z}_2 \times \mathbb{Z}_2) \rtimes S_3$. Let $e_1,e_2$ be the natural generators of $\mathbb{Z}_2 \times \mathbb{Z}_2$. If $V=S \oplus T$ is the permutation representation of $S_4$, then we can find a basis $\{v_{i,j}|0\leq i,j \leq 1\}$ of $V$ such that
$$
e_1 \cdot v_{i,j} = (-1)^i v_{i,j}, e_2 \cdot v_{i,j} = (-1)^j v_{i,j}.
$$
As $S_4$-module, the decomposition is given by $\mathbb{C} v_{0,0} \oplus S_4 \cdot v_{1,0}$. Using this basis, the differential polynomial ring has relations (under a suitable isomorphism)
$$
\begin{cases}
[v_{1,0},v_{0,1}]=[v_{1,1},v_{1,0}]=[v_{0,1},v_{1,1}]=0,\\
v_{1,0} v_{0,0} - v_{0,0} v_{1,0}= v_{0,1}v_{1,1},\\
v_{0,1} v_{0,0} - v_{0,0} v_{0,1}= v_{1,0}v_{1,1},\\
v_{1,1} v_{0,0} - v_{0,0} v_{1,1}= v_{1,0}v_{0,1}.
\end{cases}
$$
Again using multilinearization, we find that the point modules are parametrized by 7 lines, given by the Zariski closed subset of $\mathbb{P}^3$ (in the homogeneous coordinates $V_{i,j}$ dual to the basis vectors $v_{i,j}$) defined by
$$
\mathbf{V}\left(V_{1,1}(V_{1,0}^2-V_{0,1}^2),V_{1,0}(V_{0,1}^2-V_{1,1}^2),V_{0,1}(V_{1,0}^2-V_{1,1}^2)\right).
$$
The point modules with 1-dimensional simple quotients can again easily be found by taking the quotient $\mathcal{A}/([\mathcal{A},\mathcal{A}])\cong \mathbb{C}[a,b,c,d]/(bc,bd,cd)$. This ring is clearly the coordinate ring of 3 affine planes, intersecting pairwise in a line.
\end{example}
\section{Introduction}
It has been well-known since the work of
T\'oth~\cite{toth-heis} and
Aizenman and Nachtergaele~\cite{an} in the early
1990's that many quantum spin-systems can be analyzed using
probabilistic representations.
T\'oth's representation of the (spin-$\tfrac12$) Heisenberg
ferromagnet in terms of random transpositions
is particularly appealing in its simplicity.
However, though simple to define, it
has proved challenging to obtain rigorous results using this
representation.
While substantial progress has
been made on several other models using probabilistic
representations~\cite{B-irb,B-van,BG,bjo-uel,CI,CNS,lees,uel-jmp},
proving a phase-transition in
the ferromagnetic Heisenberg model on the lattice $\mathbb{Z}^d$
remains an open challenge.
For mean-field variants there has been more progress, and
related models have recently received quite a lot of
attention in the probability
literature~\cite{alon-kozma,angel,berestycki,berestycki-kozma,
bjo-cycles,hammond-sharp,kmu,schramm}.
The free energy of the spin-$\tfrac12$
Heisenberg ferromagnet on the complete graph was
determined already in 1990:
by T\'oth~\cite{toth-bec} using a random-walk
representation, and simultaneously but independently by
Penrose~\cite{penrose}
by explicitly diagonalizing the Hamiltonian.
Here we extend the latter
results to a class of spin $S\in\tfrac12\mathbb{N}$
models, with Hamiltonian equal to a sum of transposition-operators
(see below for a precise definition). Probabilistically, the model
naturally generalizes T\'oth's permutation-representation:
a weight factor $2^{\#\mathrm{cycles}}$ is replaced by
$(2S+1)^{\#\mathrm{cycles}}$. Our approach is different both from
that of T\'oth and that of Penrose.
The key step is to
obtain an expression for the partition function in terms of
the irreducible representations of the symmetric group.
Perhaps our most surprising
result is a connection to the classical Potts-model:
we show that the critical temperature
of our model, as a function of $S$, coincides with that of the
$q=2S+1$ state Potts model.
We now define the model and state
our primary results.
\subsection{Model and main results}
We let $S^1,S^2,S^3$ denote the usual spin-operators, satisfying
the relations
\[
[S^1,S^2]=iS^3,\quad
[S^2,S^3]=iS^1,\quad
[S^3,S^1]=iS^2,
\]
where $i=\sqrt{-1}$.
For each $S\in\tfrac12\mathbb{N}$ we work with the standard spin-$S$
representation, where the $S^j$ are Hermitian matrices acting on
$\mathcal{H}=\mathbb{C}^{2S+1}$. We fix an orthonormal basis for $\mathcal{H}$
consisting of eigenvectors for $S^3$, denoting the basis
vector with eigenvalue $a\in\{-S,-S+1,\dotsc,S\}$ by
$\ket a$.
Let $G=K_n=(V,E)$ be the complete graph on $n$ vertices, i.e.\ the
graph with vertex set $V=\{1,\dotsc,n\}$ and edge set $E=\binom{V}{2}$
consisting of one edge (bond) per pair $x\neq y$ of vertices. For
each $x\in V$ we take a copy $\mathcal{H}_x$ of $\mathcal{H}$, and
we form the tensor product $\mathcal{H}_V=\otimes_{x\in V}\mathcal{H}_x$.
An orthonormal basis for $\mathcal{H}_V$ is given by the vectors
$\ket{\mathbf{a}}=\otimes_{x\in V}\ket{a_x}$ for
$\mathbf{a}=(a_x)_{x\in V}\in\{-S,\dotsc,S\}^V$.
If $A$ is an operator acting on $\mathcal{H}$ we define $A_x$ acting on
$\mathcal{H}_V$ by $A_x=A\otimes\mathrm{Id}_{V\setminus \{x\}}$.
The transposition operator $T_{xy}$
on $\mathcal{H}_V$ is defined as follows.
For each pair $x\neq y$ of vertices, $T_{xy}$
is given by its action on the basis elements $\ket{\mathbf{a}}$:
\begin{equation}\label{T-def}
T_{xy}\otimes_{z\in V}\ket{a_z}=
\otimes_{z\in V} \ket{a_{\tau(z)}},
\end{equation}
where $\tau=(x,y)$ is the transposition of $x$ and $y$:
\[
\tau(z)=\left\{
\begin{array}{ll}
y, & \mbox{if } z=x,\\
x, & \mbox{if } z=y,\\
z, & \mbox{otherwise.}
\end{array}
\right.
\]
Thus $T_{xy}$ interchanges the $x$ and $y$ entries of
$\ket{\mathbf{a}}$.
Our model has the Hamiltonian
\begin{equation}
H=H_n=-\sum_{xy\in E} (T_{xy}-1)
\end{equation}
acting on $\mathcal{H}_V$. We take the inverse-temperature of the form $\b/n$
for constant $\b>0$, thus the partition function is
\begin{equation}
Z_n(\b)=\mathrm{tr} (e^{-(\sfrac\b n) H_n}).
\end{equation}
We note that $T_{xy}$ may be expressed as a polynomial in the
operators $\mathbf{S}_x\cdot\mathbf{S}_y=\sum_{j=1}^3S^j_xS^j_y$.
For example, when $S=\tfrac12$ we have
that $T_{xy}=2(\mathbf{S}_x\cdot\mathbf{S}_y)+\tfrac12$, and when $S=1$ that
$T_{xy}=(\mathbf{S}_x\cdot\mathbf{S}_y)^2+(\mathbf{S}_x\cdot\mathbf{S}_y)-1$. (See
Proposition~\ref{T-prop}
in the appendix for the general case.)
Thus for $S=\tfrac12$ we recover the Heisenberg ferromagnet at
inverse-temperature $2\b/n$.
Our first main result is an explicit formula
for the free energy.
For each $S\in\tfrac12\mathbb{N}$, let $\theta=2S+1$ and let
\[
\D=\D_\theta=\{x=(x_1,\dotsc,x_\theta)\in[0,1]^\theta:
x_1\geq\dotsb\geq x_\theta,\textstyle\sum_{j=1}^\theta x_j=1\}.
\]
Define the function $\phi_\b:\D\to\RR$ by
\begin{equation}\label{phi-def}
\phi_\b(x)=\frac\b2\Big(\sum_{j=1}^\theta x_j^2-1\Big)
-\sum_{j=1}^\theta x_j \log x_j.
\end{equation}
\begin{theorem}\label{free-en-thm}
We have that
\begin{equation}
\lim_{n\to\infty} \frac1n \log Z_n(\b)=
\max_{x\in\D} \phi_\b(x).
\end{equation}
\end{theorem}
As mentioned previously, our analysis relies on a probabilistic
representation.
We describe this now.
Let $\PP(\cdot)$ be a probability measure governing a
collection $\om=(\om_{xy}:xy\in E)$ of independent rate 1 Poisson
processes on $[0,\b/n]$, indexed by the edges of $G$. Thus each
$\om_{xy}$ is a random (almost-surely finite) subset of $[0,\b/n]$;
the number of elements of $\om_{xy}$ in an interval $[s,t]$
has the Poisson distribution Po($t-s$), and these numbers are
independent for disjoint intervals.
We think of $[0,\b/n]$ as a time-interval.
See Figure~\ref{loops-fig} for a pictorial representation.
As explained in e.g.~\cite[eq.~(2.11)]{an}
we have from the Lie--Trotter product formula that
\begin{equation}\label{trotter}
e^{-(\sfrac\b n) H_n}=\EE\Big[
\sideset{}{^{\,\star}}\prod_{(xy,t)\in\om} T_{xy}\Big],
\end{equation}
where $\Pi^\star$ is the time-ordered product over all elements of
$\om$.
In light of~\eqref{T-def} and~\eqref{trotter} we may think of each
point $(xy,t)\in\om$ as representing a transposition of $x,y\in\{1,\dotsc,n\}$
at time $t$. We let $\sigma=\sigma(\om)=\prod^\star_{(xy,t)\in\om}(x,y)$
denote the (time-ordered) composition
of these transpositions from time 0 to time $\b/n$.
Thus $\sigma\in \cS_n$, the symmetric group on $n$
letters.
Recall that each $\sigma\in \cS_n$ may be written as a product of disjoint
cycles (orbits). Let $\ell=\ell(\om)$ denote the number of such
cycles of $\sigma(\om)$, including singletons. Taking the trace
in~\eqref{trotter} we find that we get a contribution of 1 from each
basis vector $\ket{\mathbf{a}}$ for which the function
$\mathbf{a}:V\to\{-S,\dotsc,S\}$ is constant on each cycle of
$\sigma(\om)$. (Figure~\ref{loops-fig} is helpful in verifying this statement.)
From the other $\ket{\mathbf{a}}$ we get contribution 0.
Writing
$\theta=2S+1$, as before,
for the number of possibilities per cycle, we
conclude that
\begin{equation}\label{pf-exp}
Z_n(\b)=\mathrm{tr}(e^{-(\sfrac\b n) H_n})=\EE[\theta^{\ell(\om)}].
\end{equation}
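As a quick check of \eqref{pf-exp}: for $\b=0$ we have $\om=\emptyset$ almost surely, so that $\sigma(\om)=\mathrm{id}$ and $\ell(\om)=n$, giving $Z_n(0)=\theta^n=\dim\mathcal{H}_V=\mathrm{tr}(\mathrm{Id})$, as it should.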
\begin{figure}
\centering
\includegraphics[scale=.8]{loops.eps}
\caption{
A sample $\om$, with
the vertex set $V=\{1,\dotsc,10\}$ on the
horizontal axis and time going upwards.
Elements $(xy,t)\in\om$ are represented
as crosses, and are to be thought of as
transpositions. (In this picture, for clarity only, most crosses
occur between consecutive vertices.)
Here $\sigma(\om)=(1,3)(2,6,7,4)(9,10)(5)(8)$
and $\ell(\om)=5$.
}
\label{loops-fig}
\end{figure}
In order to identify a phase-transition we will work also with a
`weighted' version of~\eqref{pf-exp}. Let $\mathcal{C}=\mathcal{C}(\om)$ denote the
set of cycles of the permutation $\sigma(\om)$, and for $h\in\RR$ write
\begin{equation}\label{pf-exp-2}
Z_n(\b,h)=
e^{-(\sfrac h\theta) n}
\EE\Big[\prod_{\g\in\mathcal{C}} (e^{h|\g|}+\theta-1)\Big],
\end{equation}
where $|\g|$ denotes the size of the cycle $\g$. Note that
$Z_n(\b,0)=Z_n(\b)$.
We will later (see Theorem~\ref{free-energy-thm})
obtain an explicit expression for
the limit
$z(\b,h)=\lim_{n\to\infty} \frac1n\log Z_n(\b,h)$.
(Theorem~\ref{free-en-thm} is the special case $h=0$ of that result.)
Our second main result concerns the right derivative
$z^+(\b)=\lim_{h\downarrow 0}\frac{z(\b,h)-z(\b,0)}{h}$
of $z(\b,h)$ at $h=0$.
\begin{theorem}\label{mag-thm}
Define
\begin{equation}\label{b-crit}
\beta_\crit(\theta)=\left\{
\begin{array}{ll}
2, & \mbox{if } \theta=2,\\
2\big(\tfrac{\theta-1}{\theta-2}\big)\log(\theta-1), &
\mbox{if } \theta\geq3.
\end{array}\right.
\end{equation}
Then for all $\theta\in\{2,3,\dotsc\}$ we have that
\begin{equation}
z^+(\b)\left\{\begin{array}{ll}
=0, & \mbox{if } \b<\b_\crit,\mbox{ or } \theta=2\mbox{ and }
\b=\b_\crit,\\
>0, & \mbox{if } \b>\b_\crit,\mbox{ or } \theta\geq3\mbox{ and }
\b=\b_\crit.
\end{array}\right.
\end{equation}
\end{theorem}
Thus, the critical inverse-temperature is given by~\eqref{b-crit}, and
the phase-transition is continuous for $\theta=2$ (i.e.\ $S=\tfrac12$)
and discontinuous for $\theta\geq3$ (i.e.\ $S\geq1$).
We reiterate that the case $\theta=2$ was fully understood
previously~\cite{penrose,toth-bec}.
\subsection{Discussion}
Theorem~\ref{mag-thm} has
consequences for the following
\emph{weighted interchange process}. Recall the measure $\PP$
governing the random permutation $\sigma(\om)$, obtained as the
composition of a process of transpositions up to time
$\b/n$. For each $\theta>0$ one can define
another probability measure $\PP_\theta$ by requiring
$\frac{d\PP_\theta}{d\PP}\propto\theta^{\ell(\om)}$. The measure
$\PP_\theta$ allows for a probabilistic interpretation
of correlation functions. For example, when $S=\tfrac12$:
\[
\el S^3_xS^3_y\rangle=\tfrac12\PP_2(x\leftrightarrow y),
\]
where $\{x\leftrightarrow y\}$ is the event that $x$ and $y$ belong to the same
cycle. Similar relations hold for other $\theta$.
Magnetic ordering is thus accompanied by the occurrence of
large cycles in a $\PP_\theta$-distributed random permutation.
For each $k\geq0$
let $X_n(k)=\tfrac1n\sum_{|\g|\geq k}|\g|$ denote the
fraction of vertices in cycles of size at least $k$ in the random
permutation $\sigma(\om)$.
From Theorem~\ref{mag-thm} we will deduce the
following:
\begin{proposition}\label{X-prop}
If $\theta\in\{2,3,\dotsc\}$ and $z^+(\b)=0$ then for any sequence
$k=k_n\to\infty$ and any fixed $\varepsilon>0$, there is a $c>0$ such that
\[
\PP_\theta(X_n(k)\geq\varepsilon)\leq e^{-cn}.
\]
\end{proposition}
T\'oth's formula~\cite[eq.~(5.2)]{toth-heis}
suggests that a converse to Proposition~\ref{X-prop}
should also hold, i.e.\ that there are cycles of size of the order $n$
when $z^+(\b)>0$. We have not been able to prove this.
Note, however, that cycles of
order $n$ do occur whenever $\b>\theta\geq1$.
For $\theta=1$ this was proved by Schramm~\cite{schramm},
and for $\theta>1$ it was proved in~\cite{bjo-cycles}
using Schramm's result.
Theorem~\ref{mag-thm} also points to a connection to the classical Potts
model. In that model, one considers random assignments
$\eta=(\eta_x:x\in V)$ of the values $1,2,\dotsc,q$ to the vertices
$x\in V$, for some fixed $q\in\{2,3,\dotsc\}$.
Each such assignment receives probability proportional to
\[
\exp\big(\tfrac\b n\textstyle\sum_{xy\in E}\d_{\eta_x,\eta_y}\big).
\]
It was proved by Bollob\'as, Grimmett and Janson
in~\cite{bgj} (in the more general context of the
random-cluster-representation) that a phase-transition
occurs in this model
at the point $\b=\b_\crit(q)$ with $\beta_\crit(\cdot)$ as given
in~\eqref{b-crit}.
This equality of critical points may indicate a deeper connection
between the two models, which we hope to explore in future work.
\subsection{Outline}
Over the following three sections we will prove somewhat
more detailed versions of Theorems~\ref{free-en-thm}
and~\ref{mag-thm} and Proposition~\ref{X-prop}.
In Section~\ref{char-sec} we first
obtain a formula for $Z_n(\b,h)$ for finite $n$,
stated in Lemma~\ref{pf-lem}.
This formula is amenable to asymptotic analysis,
which we perform in Section~\ref{lim-sec}. The main result
of that Section is Theorem~\ref{free-energy-thm},
where we compute
$\lim_{n\to\infty}\tfrac1n\log Z_n(\b,h)$.
In Section~\ref{pt-sec} we use the latter result
to describe the phase transition and identify
the critical point.
Some additional proofs are given in the Appendix.
\section{Character decomposition of the partition function}
\label{char-sec}
In this section we obtain an expression for the
partition function $Z_n(\b,h)$ in terms of the irreducible
characters of the symmetric group.
From now on we will usually only refer to the spin
$S\in\tfrac12\mathbb{N}$ via the parameter
$\theta=2S+1\in\{2,3,\dotsc\}$. Recall that
$\sigma=\sigma(\om)\in\cS_n$ is the random permutation introduced
below~\eqref{trotter}, that $\mathcal{C}=\mathcal{C}(\om)$ is the set of
cycles in a disjoint-cycle decomposition of $\sigma$,
and that $\ell=\ell(\om)=|\mathcal{C}(\om)|$ is the number
of cycles.
By a \emph{composition} $\k$ of $n$
we mean a vector $\k=(\k_1,\dotsc,\k_\theta)$
with non-negative integer entries, such that
$\sum_{j=1}^\theta\k_j=n$. Note that we restrict the
number of entries to be exactly $\theta$, and
that we allow some $\k_j$
to be $=0$. A composition $\lambda$
is called a \emph{partition} if in addition
$\lambda_j\geq\lambda_{j+1}$ for all $j$,
in which case we write $\lambda\vdash n$.
Any composition may be rearranged to
form a partition. Given a partition $\lambda$, let $K(\lambda)$ denote the set
of compositions that can be obtained by re-ordering the entries of
$\lambda$. Clearly $1\leq |K(\lambda)|\leq\theta!$.
We write $\binom{n}{\lambda}$ for the multinomial coefficient
\[
\binom{n}{\lambda}=\frac{n!}{\lambda_1!\lambda_2!\dotsb\lambda_\theta!}.
\]
\subsection{Colouring-lemma}
Let $p_1,\dotsc,p_\theta$ be probabilities, i.e.\
non-negative numbers summing to 1.
Write $f(\sigma)=\PP(\sigma(\om)=\sigma)$ for the
distribution function of $\sigma(\om)$. Note that $f(\cdot)$ is a
class-function, i.e.\ $f(\sigma)=f(\pi)$ whenever $\sigma$ and $\pi$ have the
same cycle-type. (This uses that we are working on the complete
graph.)
For $\lambda\vdash n$
we write $\mathcal{T}_\lambda$ for the \emph{Young subgroup} of
$\cS_n$, i.e.\ the subgroup consisting of those permutations which fix
each of the sets
\[
\{1,\dotsc,\lambda_1\},\quad
\{\lambda_1+1,\dotsc,\lambda_1+\lambda_2\},\quad
\mbox{etc}.
\]
\begin{lemma}[Colouring-lemma]\label{col-lem}
We have that
\[
\EE\Big[\prod_{\g\in\mathcal{C}}
\Big(\sum_{i=1}^\theta p_i^{|\g|}\Big)\Big]
=\sum_{\lambda\vdash n} \binom{n}{\lambda}
\Big(\sum_{\k\in K(\lambda)} \prod_{i=1}^\theta p_i^{\k_{i}} \Big)
\sum_{\sigma\in \mathcal{T}_\lambda} f(\sigma).
\]
\end{lemma}
\begin{proof}
Colour each vertex of $V=\{1,\dotsc,n\}$ independently with one of the colours $1,\dotsc,\theta$, choosing colour $i$ with probability $p_i$.
Write $\cM$ for the event that all cycles of $\sigma$ are
monochromatic. The conditional probability
of $\cM$ given $\sigma$ is
\[
\prod_{\g\in\mathcal{C}}
\Big(\sum_{i=1}^\theta p_i^{|\g|}\Big),
\]
so the left-hand-side of the claim is just $\PP(\cM)$.
On the other hand, by assigning the colours first
we see that
\begin{equation}\label{cs-eq}
\PP(\cM)=\sum_{C_1,\dotsc,C_\theta}
\Big(\prod_{i=1}^\theta p_i^{|C_i|}\Big)
\PP\big(\sigma\in \mathcal{T}(C_1,\dotsc,C_\theta)\big)
\end{equation}
where the sum is over all (ordered) set partitions
$C_1,\dotsc,C_\theta$ of $\{1,\dotsc,n\}$, and
$\mathcal{T}(C_1,\dotsc,C_\theta)$ is the subgroup of $\cS_n$ consisting of
permutations which fix each of the sets $C_i$.
Let $\lambda\vdash n$ be the partition of $n$
obtained by ordering the $|C_i|$ by size.
Then there is some $\pi\in \cS_n$ such that
\begin{equation}
\pi^{-1} \mathcal{T}(C_1,\dotsc,C_\theta) \pi= \mathcal{T}_\lambda.
\end{equation}
Indeed, conjugation corresponds to relabelling
the vertices, so we simply choose the appropriate relabelling of the
sets $C_i$. It follows that
\[\begin{split}
\PP\big(\sigma\in \mathcal{T}(C_1,\dotsc,C_\theta)\big)
&=\sum_{\sigma\in \mathcal{T}(C_1,\dotsc,C_\theta)} f(\sigma)
=\sum_{\sigma\in \mathcal{T}_\lambda} f(\pi\sigma \pi^{-1})\\
&=\sum_{\sigma\in \mathcal{T}_\lambda} f(\sigma),
\end{split}\]
since $f(\cdot)$ is a class-function.
Putting this into~\eqref{cs-eq} and summing over all possible
$\lambda\vdash n$ we get that
\begin{equation}
\PP(\cM)=
\sum_{\lambda\vdash n}
\Big(\sum_{\sigma\in \mathcal{T}_\lambda} f(\sigma)\Big)
\Big(\sum_{C_1,\dotsc,C_\theta}
\prod_{i=1}^\theta p_i^{|C_i|}\Big)
\end{equation}
where now the sum over the $C_i$ is restricted to those
with the property that $(|C_1|,\dotsc,|C_\theta|)\in K(\lambda)$.
This sum may be performed by first summing over all
$\k\in K(\lambda)$,
and then over all choices of the sets $C_i$ with $\k_i=|C_i|$.
For each fixed $\k$, there are $\binom{n}{\lambda}$ choices of
the sets. It follows that
\[
\sum_{C_1,\dotsc,C_\theta}
\prod_{i=1}^\theta p_i^{|C_i|}
=\binom{n}{\lambda} \sum_{\k\in K(\lambda)} \prod_{i=1}^\theta p_i^{\k_i},
\]
which proves the claim.
\end{proof}
Introduce the notation
\begin{equation}\label{G-def}
G_n(\lambda)=\binom{n}{\lambda}\PP(\sigma\in \mathcal{T}_\lambda)=
\binom{n}{\lambda} \sum_{\sigma\in\mathcal{T}_\lambda}f(\sigma),\quad
\mbox{for } \lambda\vdash n.
\end{equation}
Taking all the $p_i=\tfrac1\theta$ in Lemma~\ref{col-lem}
and using~\eqref{pf-exp}
we get (cancelling a factor $\theta^{-n}$) that
$Z_n(\b)=\EE[\theta^{\ell(\om)}]=\sum_{\lambda\vdash n} |K(\lambda)| G_n(\lambda)$.
More generally, we may take
\begin{equation}
p_1=p e^h, \qquad p_2=\dotsb=p_\theta=p,
\qquad\mbox{for }h\in\RR,
\end{equation}
with appropriate normalization $p=(e^h +\theta -1)^{-1}$.
Lemma~\ref{col-lem} and~\eqref{pf-exp-2} give that
\begin{equation}
e^{(\sfrac h \theta) n} Z_n(\b,h)=
\EE\Big[\prod_{\g\in\mathcal{C}} (e^{h|\g|}+\theta -1)\Big]
=\sum_{\lambda\vdash n}
\Big(\sum_{\k\in K(\lambda)} e^{h \k_1}\Big)G_n(\lambda).
\end{equation}
The factors $\sum_{\k}e^{h\k_1}$ are bounded by simple
expressions. Indeed, if $h\geq0$ then,
since $e^{h\lambda_1}$ is a summand in the sum over $K(\lambda)$,
we have that
\begin{equation}
\sum_{\k\in K(\lambda)} e^{h \k_1}=
e^{h \lambda_1} \sum_{\k\in K(\lambda)} e^{h (\k_1-\lambda_1)}
\left\{
\begin{array}{l}
\geq e^{h \lambda_1}\\
\leq \theta! e^{h \lambda_1},
\end{array}\right.
\end{equation}
since $\lambda_1$ is the largest of the $\k_i$.
Similarly, if $h\leq0$ then
\begin{equation}
\sum_{\k\in K(\lambda)} e^{h \k_1}
\left\{
\begin{array}{l}
\geq e^{h \lambda_\theta}\\
\leq \theta! e^{h \lambda_\theta}.
\end{array}\right.
\end{equation}
We will use the notation $f(n)\asymp g(n)$ to denote that there is a
constant $C>0$ such that $\tfrac1C g(n)\leq f(n)\leq C g(n)$ for all
$n$. We may summarize the above as follows:
\begin{lemma}\label{pf-lem}
With $G_n(\lambda)=\binom{n}{\lambda}\PP(\sigma\in\mathcal{T}_\lambda)$ as in~\eqref{G-def},
we have that
\[\begin{split}
Z_n(\b)&\asymp \sum_{\lambda\vdash n} G_n(\lambda),\quad\mbox{and}\\
e^{(\sfrac h\theta) n}Z_n(\b,h)&\asymp
\left\{\begin{array}{ll}
\sum_{\lambda\vdash n}e^{h\lambda_1} G_n(\lambda), &
\mbox{if } h> 0,\\
\sum_{\lambda\vdash n}e^{h\lambda_\theta} G_n(\lambda), &
\mbox{if } h< 0.
\end{array}
\right.
\end{split}\]
\end{lemma}
\subsection{Some representation theory}
From Lemma~\ref{pf-lem} it is clear that the probabilities
$\PP(\sigma\in \mathcal{T}_\lambda)$ are important. We now express them using the
irreducible representations of $\cS_n$.
For background on the representation theory of $\cS_n$
we refer to e.g. Fulton--Harris~\cite{fulton-harris}.
The irreducible representations of $\cS_n$
are indexed by partitions $\mu\vdash n$.
(In this description we temporarily omit
our convention that partitions
have at most $\theta$ non-zero parts.)
It is convenient to
represent $\mu\vdash n$ by its Young-diagram,
as in Figure~\ref{ytab-fig}.
We write
$U_\mu$ for the irreducible representation corresponding to
$\mu\vdash n$, and
$\chi_\mu$ for its character. Let $V_\lambda$ denote the
coset representation of the subgroup $\mathcal{T}_\lambda$,
that is $V_\lambda$ is a vector space spanned
by the cosets $\pi \mathcal{T}_\lambda$ and $\cS_n$ acts by
left multiplication. By Young's
rule~\cite[Corollary~4.39]{fulton-harris},
the representation $V_\lambda$ decomposes as a
direct sum of irreducible representations with known multiplicities:
\begin{equation}\label{V-decomp}
V_\lambda=\bigoplus_{\mu\vdash n}K_{\mu\lambda} U_\mu.
\end{equation}
Here the multiplicities $K_{\mu\lambda}$ are
the \emph{Kostka numbers}:
$K_{\mu\lambda}$ equals the number of ways to fill the Young
diagram for $\mu$ with $\lambda_1$ 1's, $\lambda_2$ 2's
etc.\ so that the rows are weakly increasing
and the columns are strictly increasing.
See Figure~\ref{ytab-fig} again.
\begin{figure}
\centering
$\yng(4,4,2)$ \qquad\qquad
$\young(111123,222,3)$
\caption{Left: Young diagram of the partition
$\lambda=(4,4,2)\vdash 10$. Right: diagram for
$\mu=(6,3,1)\trianglerighteq\lambda$ filled with $\lambda_1=4$ 1's,
$\lambda_2=4$ 2's and $\lambda_3=2$ 3's so that rows are weakly
increasing and columns strictly increasing.}
\label{ytab-fig}
\end{figure}
We say that $\mu$ \emph{dominates} $\lambda$,
written $\mu\trianglerighteq\lambda$, if for each $i$ we have
that $\mu_1+\dotsb+\mu_i\geq \lambda_1+\dotsb+\lambda_i$.
Note that $K_{\mu\lambda}=0$ unless $\mu\trianglerighteq\lambda$.
In particular, if $\lambda$ has at most $\theta$ non-zero parts, then
$K_{\mu\lambda}=0$ unless $\mu$ also has at most $\theta$ non-zero parts.
Writing $\psi_\lambda$ for the character of $V_\lambda$ it follows
from~\eqref{V-decomp} that
\begin{equation}\label{char-decomp-V}
\psi_\lambda=\sum_{\mu\vdash n} K_{\mu\lambda} \chi_\mu.
\end{equation}
Let $\el\cdot,\cdot\rangle$ denote the inner product
of functions on $\cS_n$ given by
\[
\el f,g\rangle=\frac{1}{n!}\sum_{\sigma\in \cS_n}f(\sigma)\overline{g(\sigma)}.
\]
Lemma 1 in Alon--Kozma~\cite{alon-kozma} tells us that for
a class function $f$ we have
\begin{equation}
\sum_{\sigma\in \mathcal{T}_\lambda} f(\sigma)=|\mathcal{T}_\lambda|\el f,\psi_\lambda\rangle
\end{equation}
so using~\eqref{char-decomp-V} we see that
\begin{equation}\label{ak}
\sum_{\sigma\in \mathcal{T}_\lambda} f(\sigma)=|\mathcal{T}_\lambda|
\sum_{\mu\vdash n} K_{\mu\lambda} \el f,\chi_\mu\rangle.
\end{equation}
Now let $f(\sigma)=\PP(\sigma(\om)=\sigma)$ as before.
As already noted, this is a class-function.
The calculations in Lemma~1
of Berestycki--Kozma~\cite{berestycki-kozma} show that
\begin{equation}\label{bk}
\el f,\chi_\mu\rangle=
\tfrac{1}{n!} \mathrm{tr}(\hat f(\mu))
=\tfrac{1}{n!} d_\mu
\exp\Big\{\frac\b n \binom{n}{2}[r(\mu)-1]\Big\}.
\end{equation}
Here $\hat f(\mu)$ denotes the
Fourier transform of $f$ at the irreducible
representation $U_\mu$, the number
$d_\mu$ is the dimension of $U_\mu$,
and finally $r(\mu)=\chi_\mu((1,2))/d_\mu$
is the character ratio at a transposition.
We note for future reference that
\begin{equation}\label{r-eq}
\frac{\b}{n}\binom{n}{2}[r(\mu)-1]
=\frac\b2\Big[
\sum_{j=1}^\theta \frac{\mu_j(\mu_j-2j+1)}{n}
-(n-1)\Big],
\end{equation}
see e.g.\ equation (7) in~\cite{berestycki-kozma}.
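As a sanity check, for the trivial representation $\mu=(n,0,\dotsc,0)$ the sum in~\eqref{r-eq} equals $n(n-1)/n=n-1$, so the right-hand-side vanishes, consistent with $r(\mu)=1$.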
Putting together~\eqref{ak} and~\eqref{bk} gives
\begin{equation}
\PP(\sigma\in \mathcal{T}_\lambda)=\sum_{\sigma\in \mathcal{T}_\lambda} f(\sigma)=
\frac{|\mathcal{T}_\lambda|}{n!}\sum_{\mu\vdash n}
d_\mu K_{\mu\lambda} \exp\Big\{\frac\b n\binom{n}{2}[r(\mu)-1]\Big\}.
\end{equation}
Noting that
$\frac{|\mathcal{T}_\lambda|}{n!}=\binom{n}{\lambda}^{-1}$,
we obtain:
\begin{lemma}\label{G-lem}
\[
G_n(\lambda)=
\sum_{\mu\vdash n}
d_\mu K_{\mu\lambda} \exp\Big\{\frac\b n\binom{n}{2}[r(\mu)-1]\Big\}.
\]
\end{lemma}
For $\theta=2$ the partitions $\mu$ can
be indexed by the length of the second row,
and it is well-known (and easy to see) that
$K_{\mu\lambda}=1$ when $\mu\trianglerighteq\lambda$. In that case
Lemma~\ref{G-lem}
is essentially~\cite[eq.~(49)]{penrose}.
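Concretely, for $\theta=2$ and $\mu=(n-k,k)$ the hook length formula gives $d_\mu=\binom nk-\binom n{k-1}$, and $\mu\trianglerighteq\lambda$ amounts to $k\leq\lambda_2$, so Lemma~\ref{G-lem} reads
\[
G_n(\lambda)=\sum_{k=0}^{\lambda_2}\Big[\binom nk-\binom n{k-1}\Big]
\exp\Big\{\frac\b n\binom n2[r((n-k,k))-1]\Big\}.
\]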
\section{Convergence-results}
\label{lim-sec}
In this section will use the expressions in Lemmas~\ref{pf-lem}
and~\ref{G-lem} to identify the limit of $\frac1n\log Z_n(\b,h)$.
\subsection{Lemmas}
We first present convergence-results in a slightly more
general form, which we will later apply to our specific problem.
Some of the arguments in this subsection
are strongly inspired by~\cite[Section~6]{penrose}
and~\cite[Section~3.4]{ruelle}.
Recall that
\[
\D=\{(x_1,\dotsc,x_\theta)\in [0,1]^\theta:
x_1\geq\dotsb\geq x_\theta, \textstyle\sum x_i=1\}.
\]
For $x,y\in\D$ we write $y\trianglerighteq x$ if
$y_1+\dotsb+y_i\geq x_1+\dotsb+x_i$
for all $i$.
For $x\in\D$ we write
\begin{equation}
\D(x)=\{y\in \D: y\trianglerighteq x\}.
\end{equation}
It is not hard to see that
$\D(\tfrac1\theta,\dotsc,\tfrac1\theta)=\D$.
Also note that each $\D(x)$, and hence also $\D$, is compact and
convex.
Write $\|\cdot\|$ for the $\infty$-norm on $\RR^\theta$,
$\|x-y\|=\max_{i=1,\dotsc,\theta} |x_i-y_i|$.
Write $d_\mathrm{H}(\cdot,\cdot)$ for the associated Hausdorff
distance between sets in $\RR^\theta$:
\[
d_\mathrm{H}(A,B)=\inf\{\varepsilon\geq 0:
A\subseteq B^\varepsilon\mbox{ and } B\subseteq A^\varepsilon\}
\]
where
$A^\varepsilon=\{x\in\RR^\theta: \|x-a\|<\varepsilon\mbox{ for some }a\in A\}$.
The proof of the following result is given in Appendix~\ref{dh-app}.
\begin{lemma}\label{D-lem}
Let $x,y\in\D$ with $\|x-y\|\leq \varepsilon<\theta^{-2}$. Then
\[
d_\mathrm{H}(\D(x),\D(y))< \theta\varepsilon^{1/2}.
\]
\end{lemma}
Now let $\phi:\D\to\RR$ be any continuous function
(we will later take $\phi=\phi_\b$).
Since $\D$ is compact, $\phi$ is uniformly continuous.
Let $\phi^{(\lambda)}_n(\mu)$
be a sequence of functions of partitions $\lambda,\mu\vdash n$
converging \emph{uniformly} to $\phi$ in the
following sense: there is a sequence
$\d_n\to 0$, not depending on $\lambda$ or $\mu$, such that
$|\phi^{(\lambda)}_n(\mu)-\phi(\mu/n)|\leq\d_n$ for all $n$.
\begin{lemma}\label{penrose-lem}
If $n\to\infty$ and $\lambda/n\to x\in\D$ then
\[
\frac1n\log\Big(\sum_{\mu\trianglerighteq\lambda}
\exp\big(n\,\phi^{(\lambda)}_n(\mu)\big)\Big)
\to \max_{y\in \D(x)} \phi(y).
\]
The maximum is attained since $\D(x)$ is compact and $\phi$
continuous.
\end{lemma}
\begin{proof}
We first prove an upper bound.
Since the number of partitions of $n$ into at most $\theta$ parts is
at most $n^\theta$ we have that
\[\begin{split}
\sum_{\mu\trianglerighteq\lambda}\exp\big(n\, \phi^{(\lambda)}_n(\mu)\big)&
\leq n^\theta \max_{\mu\trianglerighteq\lambda} \exp\big(n\, \phi^{(\lambda)}_n(\mu)\big)\\
&\leq n^\theta \exp\big(n\, \max_{\mu\trianglerighteq\lambda} \phi(\mu/n)+n\d_n\big)
\end{split}\]
so that
\[
\frac1n\log \Big(\sum_{\mu\trianglerighteq\lambda}\exp\big(n\, \phi^{(\lambda)}_n(\mu)\big)\Big)
\leq o(1)+ \max_{\mu\trianglerighteq\lambda} \phi(\mu/n).
\]
Let $x_n=\lambda/n$, then $\mu\trianglerighteq\lambda$ is equivalent to $\mu/n\in\D(x_n)$,
so we have that
\[
\max_{\mu\trianglerighteq\lambda} \phi(\mu/n)\leq
\max_{y\in\D(x_n)} \phi(y)=\phi(y_n^\star)
\]
for some $y_n^\star\in\D(x_n)$.
Now we use Lemma~\ref{D-lem}:
given any $\d>0$ we have, for $n$ large
enough, that there is some $x^\star_n\in\D(x)$ such that
$\|x_n^\star-y_n^\star\|<\d$. Since $\phi$ is uniformly continuous we
may, given $\varepsilon>0$, pick $\d$ so that
$\|x_n^\star-y_n^\star\|<\d$ implies
$|\phi(x_n^\star)-\phi(y_n^\star)|<\varepsilon$.
Then
\[
\phi(y_n^\star) \leq \phi(x_n^\star)+\varepsilon
\leq \max_{y\in\D(x)} \phi(y)+\varepsilon,
\]
since $x^\star_n\in\D(x)$. This shows that
\[
\frac1n\log \Big(\sum_{\mu\trianglerighteq\lambda}\exp\big(n\, \phi^{(\lambda)}_n(\mu)\big)\Big)
\leq o(1)+ \max_{y\in\D(x)} \phi(y)+\varepsilon,
\]
for any $\varepsilon>0$, so
\[
\limsup_{n\to\infty, \lambda/n\to x}
\frac1n\log \Big(\sum_{\mu\trianglerighteq\lambda}\exp\big(n\, \phi^{(\lambda)}_n(\mu)\big)\Big)
\leq \max_{y\in\D(x)} \phi(y).
\]
Now for the lower bound. Pick some $x^\star\in\D(x)$ where
$\phi$ attains its maximum over $\D(x)$. As before, write
$x_n=\lambda/n$. Using Lemma~\ref{D-lem} as before, given $\d>0$ we have
that $\D(x_n)$ intersects the ball $B_\d(x^\star)$ of radius $\d$
around $x^\star$ provided that $n$ is large enough. Write
$\overline B_\d(x^\star)$ for the closed ball. By the triangle inequality
we may further assume that $\D(x_n)\cap\overline B_\d(x^\star)$ contains
some point of the form $\mu/n$. Thus
\begin{equation}
\begin{split}
\sum_{\mu\trianglerighteq\lambda}\exp\big( n\, \phi^{(\lambda)}_n(\mu)\big)
&\geq \sum_{\mu/n\in \D(x_n)}
\exp\big( n\, \phi(\mu/n)-n\d_n\big)\\
&\geq \min_{\mu/n\in \D(x_n)\cap\overline B_\d(x^\star)}
\exp\big( n\, \phi(\mu/n)-n\d_n\big)\\
&\geq \min_{y\in \D\cap\overline B_\d(x^\star)}
\exp\big( n\, \phi(y)-n\d_n\big)\\
&= \exp\Big( n \min_{y\in \D\cap\overline B_\d(x^\star)} \phi(y)-n\d_n\Big).
\end{split}
\end{equation}
Hence
\[
\frac1n\log\Big(\sum_{\mu\trianglerighteq\lambda}\exp\big( n\, \phi^{(\lambda)}_n(\mu)\big)\Big)
\geq \min_{y\in \D\cap\overline B_\d(x^\star)} \phi(y)-\d_n.
\]
By the uniform continuity of $\phi$, given $\varepsilon>0$ we may pick $\d$
small enough such that
\[
\min_{y\in \D\cap\overline B_\d(x^\star)} \phi(y)\geq \phi(x^\star)-\varepsilon.
\]
This gives
\[
\liminf_{n\to\infty, \lambda/n\to x}
\frac1n\log\Big(\sum_{\mu\trianglerighteq\lambda}\exp\big( n\, \phi^{(\lambda)}_n(\mu)\big)\Big)
\geq \phi(x^\star)-\varepsilon,
\]
which proves the claim.
\end{proof}
The next result may be established
straightforwardly using Lemma~\ref{D-lem}:
\begin{lemma}\label{g-lem}
The function $g:\D\to\RR$ given by
$g(x)=\max_{y\trianglerighteq x} \phi(y)$
is continuous.
\end{lemma}
We next present a slight extension of Lemma~\ref{penrose-lem}.
We assume that $\phi$ and
$\phi^{(\lambda)}_n(\mu)$ are as before.
Write $y=(y_1,\dotsc,y_\theta)\in\RR^\theta$ and
$y\cdot x=\sum y_ix_i$ for the usual scalar product.
\begin{lemma}\label{penrose-lem-2}
For any $y\in\RR^\theta$ we have as $n\to\infty$ that
\[
\frac1n\log\Big(\sum_{\lambda\vdash n} e^{y\cdot \lambda}
\sum_{\mu\trianglerighteq\lambda}\exp\big( n\, \phi^{(\lambda)}_n(\mu)\big)\Big)
\to \max_{x\in\D}\big( y \cdot x + g(x)\big)
\]
where $g$ is the function in Lemma~\ref{g-lem}.
\end{lemma}
\begin{proof}
Write $m(y)=\max_{x\in\D}\big( y \cdot x + g(x)\big)$.
Bounding the number of partitions by $n^\theta$ as before, we see
that
\[
\begin{split}
\sum_{\lambda\vdash n} e^{y\cdot \lambda}
\sum_{\mu\trianglerighteq\lambda}\exp\big( n\, \phi^{(\lambda)}_n(\mu)\big)
&\leq n^{2\theta} \cdot \max_{\lambda\vdash n} \max_{\mu\trianglerighteq\lambda}
\exp\big( y\cdot\lambda+ n\, \phi(\mu/n)+n\d_n\big)\\
&\leq n^{2\theta} \cdot \max_{\lambda\vdash n} \max_{z\in\D(\lambda/n)}
\exp\big( y\cdot\lambda+ n\, \phi(z)+n\d_n\big)\\
&\leq n^{2\theta} \cdot \max_{x\in\D} \max_{z\in\D(x)}
\exp\big( ny\cdot x+ n\, \phi(z)+n\d_n\big)\\
&\leq n^{2\theta} \cdot
\exp\big( \max_{x\in\D} \big\{ny\cdot x+n g(x)\big\}+n\d_n\big)
\end{split}
\]
Thus
\[
\frac1n\log \Big(
\sum_{\lambda\vdash n} e^{y\cdot \lambda}
\sum_{\mu\trianglerighteq\lambda}\exp\big( n\, \phi^{(\lambda)}_n(\mu)\big)
\Big)\leq m(y)+o(1).
\]
For the lower bound, note that given $\d>0$ we
may find $\tilde x\in\D$ such that
\[
y\cdot \tilde x+g(\tilde x) \geq m(y)-\d.
\]
We may also find a sequence $\tilde\lambda\vdash n$ such that
$\tilde\lambda/n\to \tilde x$. Clearly
\[
\sum_{\lambda\vdash n} e^{y\cdot\lambda}
\sum_{\mu\trianglerighteq\lambda}\exp\big( n\, \phi^{(\lambda)}_n(\mu)\big)
\geq e^{y\cdot\tilde \lambda}
\sum_{\mu\trianglerighteq\tilde\lambda}
\exp\big( n\, \phi^{(\tilde\lambda)}_n(\mu)\big)
\]
and hence
\[
\frac1n\log \Big(
\sum_{\lambda\vdash n} e^{y\cdot\lambda}
\sum_{\mu\trianglerighteq\lambda}\exp\big( n\, \phi^{(\lambda)}_n(\mu)\big)
\Big)\geq
\frac{y\cdot \tilde\lambda}{n}+
\frac1n\log \Big(
\sum_{\mu\trianglerighteq\tilde\lambda}\exp\big( n\, \phi^{(\tilde\lambda)}_n(\mu)\big)
\Big).
\]
By Lemma~\ref{penrose-lem} the right-hand-side converges to
\[
y\cdot \tilde x+g(\tilde x) \geq m(y)-\d.
\]
This proves the claim.
\end{proof}
\subsection{The free energy}
From now on we let $\phi=\phi_\b:\D\to\RR$ be the function given
in~\eqref{phi-def}, i.e.\
$\phi_\b(x)=\frac\b2\big(\sum_{i=1}^\theta x_i^2-1\big)
-\sum_{i=1}^\theta x_i \log x_i$.
Note that $\phi_\b$ is continuous.
We write $g_\b(x)=\max_{y\trianglerighteq x}\phi_\b(y)$ and
we define
\begin{equation}\label{zbh-def}
z(\b,h)=\left\{
\begin{array}{ll}
\max_{x\in\D}\big(h(x_1-\tfrac1\theta)+g_\b(x)\big),
& \mbox{if } h\geq 0,\\
\max_{x\in\D}\big(h(x_\theta-\tfrac1\theta)+g_\b(x)\big),
& \mbox{if } h\leq 0.
\end{array}\right.
\end{equation}
Note that $x_1-\tfrac1\theta\geq0$ and
$x_\theta-\tfrac1\theta\leq0$.
The following theorem
contains Theorem~\ref{free-en-thm} as the case $h=0$.
\begin{theorem}\label{free-energy-thm}
We have that
\[
\tfrac1n\log G_n(\lambda)\to g_\b(x),
\mbox{ as }n\to\infty\mbox{ and } \lambda/n\to x,
\]
and for $h\in\RR$ that
\[
\tfrac1n\log Z_n(\b,h)\to z(\b,h),
\mbox{ as }n\to\infty.
\]
\end{theorem}
\begin{proof}
We will use Lemmas~\ref{penrose-lem} and~\ref{penrose-lem-2} with
\begin{equation}
\phi^{(\lambda)}_n(\mu)=\frac{\b}{n^2}\binom{n}{2}
[r(\mu)-1]+\tfrac1n \log d_\mu
+\tfrac1n \log K_{\mu\lambda}.
\end{equation}
Due to Lemmas~\ref{pf-lem} and~\ref{G-lem}
it suffices to establish the uniform convergence of
$\phi^{(\lambda)}_n(\mu)$ to $\phi=\phi_\b$. First note that
$K_{\mu\lambda}\leq (n+1)^{\theta^2}$. Indeed, for each row of $\mu$ we must
choose the number of 1's, the number of 2's etc. Thus there are
certainly at most $(\lambda_1+1)\dotsb(\lambda_\theta+1)$ choices for each row,
and thus
\[
K_{\mu\lambda}\leq [(\lambda_1+1)\dotsb(\lambda_\theta+1)]^\theta\leq (n+1)^{\theta^2},
\]
as claimed. Defining
\[
\phi_n(\mu)=\frac{\b}{n^2}\binom{n}{2}
[r(\mu)-1]+\tfrac1n \log d_\mu
\]
we thus have that
\[
|\phi^{(\lambda)}_n(\mu)-\phi(\mu/n)|\leq
|\phi_n(\mu)-\phi(\mu/n)|
+\tfrac{\theta^2}{n}\log (n+1).
\]
Now by~\eqref{r-eq} we have
\[
\frac{\b}{n^2}\binom{n}{2}[r(\mu)-1]
=\frac\b2\Big[
\sum_{j=1}^\theta \frac{\mu_j(\mu_j-2j+1)}{n^2}
-\frac{n-1}{n}\Big]
\]
and we have that
\[
\Big|
\sum_{j=1}^\theta \frac{\mu_j(\mu_j-2j+1)}{n^2}
-\sum_{j=1}^\theta\big(\frac{\mu_j}{n}\big)^2\Big|
\leq \sum_{j=1}^\theta \frac{\mu_j}{n}
\big(\frac{2j-1}{n}\big)\leq \frac{2\theta-1}{n}.
\]
Next, (4.11) on page 50 of~\cite{fulton-harris}
gives that
\[
\log d_\mu=\log\Big(\frac{n!}{m_1!\dotsb m_k!}
\prod_{1\leq i<j\leq k}(m_i-m_j)\Big)
\]
where $m_i=\mu_i+k-i$ and $k$ is the number of
\emph{nonzero} parts of
$\mu$. Thus
\[\begin{split}
&\Big|\frac1n\log d_\mu-\frac1n\log\binom n\mu\Big|\leq
\frac1n\log \prod_{1\leq i<j\leq k}(m_i-m_j)\\
&+\frac1n\log [(\mu_1+k-1)\dotsb(\mu_1+1)
(\mu_2+k-2)\dotsb (\mu_2+1)\dotsb (\mu_{k-1}+1)]\\
&\leq \frac1n\log (n+\theta-1)^{\theta^2}
+\frac1n\log (n+\theta-1)^\theta.
\end{split}\]
Thus it suffices to bound
\[
\Big|\frac1n\log\binom n\mu-
\Big(-\sum_{j=1}^\theta \frac{\mu_j}{n}\log\frac{\mu_j}{n}\Big)\Big|.
\]
But by Stirling's formula
\[
\binom{n}{\mu}\asymp
\Big(\frac{n}{\prod_{j=1}^\theta \mu_j}\Big)^{1/2}
\prod_{j=1}^\theta \Big(\frac{n}{\mu_j}\Big)^{\mu_j}
\]
so that
\[
\Big|\frac1n\log\binom n\mu-
\Big(-\sum_{j=1}^\theta \frac{\mu_j}{n}\log\frac{\mu_j}{n}\Big)\Big|
\leq \Big|\frac1n \log
\Big(\frac{n}{\prod_{j=1}^\theta \mu_j}\Big)^{1/2}\Big|
+\frac Cn.
\]
The right-hand-side is at most $\tfrac{C'}{n} \log n$.
This proves the result.
\end{proof}
\section{The phase-transition}
\label{pt-sec}
In this last section we prove Theorem~\ref{mag-thm} and
Proposition~\ref{X-prop}.
Recall $\phi_\b$ and $z(\b,h)$
defined in~\eqref{phi-def} and~\eqref{zbh-def},
respectively.
\subsection{Left and right derivatives of $z(\b,h)$ at $h=0$}
Let $x^\uparrow(\b)\in\D$ denote a maximizer of $\phi_\b$
which maximizes the
first coordinate. That is, among the maximizers $x$ of
$\phi_\b$ we pick one for which $x_1$ is maximal.
Similarly, let $x^\downarrow(\b)$ denote a maximizer
of $\phi_\b$ which
\emph{minimizes} the last coordinate $x_\theta$.
Note that $x^\uparrow$ and $x^\downarrow$ depend on
$\b$, though we do not always write this explicitly.
The left and right derivatives of $z(\b,h)$ at $h=0$
are given by
\[
z^+(\b)=\lim_{h\downarrow0}\frac{z(\b,h)-z(\b,0)}{h},\qquad
z^-(\b)=\lim_{h\uparrow0}\frac{z(\b,h)-z(\b,0)}{h}.
\]
We will show:
\begin{theorem}\label{der-thm}
\[
z^+(\b)=x_1^\uparrow(\b)-\tfrac1\theta,\qquad
z^-(\b)=x_\theta^\downarrow(\b)-\tfrac1\theta.
\]
\end{theorem}
\begin{proof}
We prove the claim about $z^+(\b)$;
the argument for $z^-(\b)$ is similar.
First note that $z(\b,0)=\phi_\b(x^\uparrow)$ and so
\[
\frac{z(\b,h)-z(\b,0)}{h}=\max_{x\in\D} f(x,h),
\]
where
\[
f(x,h)=x_1-\tfrac1\theta +\frac{g_\b(x)-\phi_\b(x^\uparrow)}{h}.
\]
We have that $f(x^\uparrow,h)=x_1^\uparrow-\tfrac1\theta$,
since $g_\b(x^\uparrow)=\phi_\b(x^\uparrow)$, and
thus
\[
\frac{z(\b,h)-z(\b,0)}{h}\geq x_1^\uparrow-\tfrac1\theta
\mbox{ for all }h>0.
\]
Also, $f$ is continuous as a function on $\D\times(0,\infty)$, thus for
each $h$ it attains its maximum at some point $x(h)\in\D$.
Since $g_\b(x)\leq\phi_\b(x^\uparrow)$ for all $x\in\D$ it follows that
\[
x_1^\uparrow-\tfrac1\theta\leq f(x(h),h)\leq x_1(h)-\tfrac1\theta,
\mbox{ for all }h>0.
\]
It thus suffices to show
that $x_1(h)\to x_1^\uparrow$ as $h\downarrow0$.
If not then there is some $\varepsilon>0$ and some sequence
$h_i\downarrow0$ such that $x(h_i)\in A_\varepsilon$ for all $i$, where
\[
A_\varepsilon=\{x\in\D:x_1\geq x_1^\uparrow+\varepsilon\}.
\]
Note that there is some $\d>0$ such that
$\phi_\b(x)\leq\phi_\b(x^\uparrow)-\d$ for all $x\in A_\varepsilon$,
since $\phi_\b$ is continuous and $A_\varepsilon$ compact.
Also note that if $x\in A_\varepsilon$ then $\D(x)\subseteq A_\varepsilon$, by
the definition of $\trianglerighteq$.
Thus $g_\b(x(h_i))\leq\phi_\b(x^\uparrow)-\d$ for
all $i$. But then
\[
f(x(h_i),h_i)= x_1(h_i)-\tfrac1\theta
+\frac{g_\b(x(h_i))-\phi_\b(x^\uparrow)}{h_i}
\leq 1-\tfrac1\theta-\frac{\d}{h_i}\to -\infty.
\]
This contradicts the fact that
$f(x(h),h)=\max_{x\in\D}f(x,h)\geq x^\uparrow_1-\tfrac1\theta$ for all $h>0$.
Hence it must be the case that $x_1(h)\to x_1^\uparrow$, as claimed.
\end{proof}
\subsection{The critical point}
In light of Theorem~\ref{der-thm}, the following result
implies Theorem~\ref{mag-thm}.
Recall that
$\b_\crit(\theta):=
2\big(\frac{\theta-1}{\theta-2}\big)\log(\theta-1)$
for $\theta\geq3$
and $\b_\crit(2)=2$.
\begin{theorem}\label{critval-thm}
\hspace{1cm}\\
If $\b<\b_\crit$, or $\theta=2$ and $\b=\b_\crit$,
then $x^\uparrow_1=x^\downarrow_\theta=\tfrac1\theta$.
\noindent
If $\b>\b_\crit$, or $\theta\geq3$ and $\b=\b_\crit$,
then $x^\uparrow_1>\tfrac1\theta$
and $x^\downarrow_\theta<\tfrac1\theta$.
\end{theorem}
\begin{proof}
Since $x_1\geq\tfrac1\theta$ and $x_\theta\leq\tfrac1\theta$
for all $x\in\D$ we must determine
when $\phi_\b$ has a maximizer different from
$(\tfrac1\theta,\dotsc,\tfrac1\theta)$.
We start by characterizing the possible
maxima of $\phi_\b$ using the Lagrange multiplier theorem. Since the
functions $\phi_\b(x)$
and $c(x)=\sum x_i-1$ are $C^1$ on $(0,\infty)^\theta$, and
$\nabla c(x)$ is nonzero for all $x$, if $x\in\D$ is any local
extremum of $\phi_\b$ then there is some $a\in\RR$ such that
\[
\nabla\phi_\b(x)=a\nabla c(x)=(a,\dotsc,a).
\]
Now
\begin{equation}\label{partial-phi}
\frac{\partial\phi_\b}{\partial x_i}=\b x_i-\log x_i-1
\end{equation}
so if $x\in\D$ is a local maximum then there is some $a\in\RR$ such
that
\begin{equation}\label{loc-fp}
\b x_i=(1-a)+\log x_i,\qquad\mbox{ for all } i=1,\dotsc,\theta.
\end{equation}
(We see from~\eqref{partial-phi} that the partial derivative diverges
to $+\infty$ if $x_i\downarrow0$,
thus $\phi_\b$ is not maximized on the boundary
and it suffices to consider local maxima.)
For each $\b>0$ and $a\in\RR$, there are 0, 1 or 2 values of $x_i$
which satisfy~\eqref{loc-fp}. If there is just 1 solution then all
the $x_i$ are equal, and hence equal to $\tfrac1\theta$.
If there are 2 solutions then, since $\phi_\b$ is symmetric in its
arguments, we can assume that
there is some $1\leq r\leq \theta-1$ such that
$x$ is of the form
\begin{equation}\label{x-r}
x=(t,\dotsc,t,\tfrac{1-rt}{\theta-r},\dotsc, \tfrac{1-rt}{\theta-r})
\qquad
\mbox{for } \tfrac1\theta< t < \tfrac1r,
\end{equation}
with the first $r$ coordinates equal and the last $\theta-r$
coordinates equal. Write $\phi^{(r)}_\b(t)$
for $\phi_\b$ evaluated at $x$ of
the form~\eqref{x-r}.
We now establish a condition on $\b$ for $\phi^{(r)}_\b(t)$ to exceed
$\phi^{(r)}_\b(\tfrac1\theta)$ for some $t>\tfrac1\theta$. A short
calculation shows that
\[
\phi^{(r)}_\b(t)-\phi^{(r)}_\b(\tfrac1\theta)=\frac{\b r}{2\theta(\theta-r)}
(\theta t -1)^2
-[rt\log t+(1-rt)\log\tfrac{1-rt}{\theta-r}+\log\theta].
\]
Thus $\phi^{(r)}_\b(t)-\phi_\b^{(r)}(\tfrac1\theta)\geq 0$ if and only if
\begin{equation}\label{R-eq}
\b\geq R(t)=\Big(\frac{2\theta(\theta-r)}{r}\Big)
\frac{rt\log t+(1-rt)\log\tfrac{1-rt}{\theta-r}+\log\theta}
{(\theta t-1)^2}.
\end{equation}
Hence $\phi_\b$ has a maximizer different
from $(\tfrac1\theta,\dotsc,\tfrac1\theta)$
if and only if $\b\geq R(t)$ for some $r$ and some $t>\tfrac1\theta$.
We show in Appendix~\ref{R-app} that $R$ is convex. Also,
note that $R'(\tfrac{\theta-r}{r\theta})=0$. Thus $R(t)$ has a unique
minimum in $[\tfrac1\theta,\tfrac1r)$,
either at the boundary point $t=\tfrac1\theta$ if $r>\theta/2$,
or at $t=\tfrac{\theta-r}{r\theta}$ if $r\leq\theta/2$.
In the case when $\theta=2$ the only possibility for $t>\tfrac1\theta$
is when $r=1$. Then
$\tfrac1\theta=\tfrac{\theta-r}{r\theta}=\tfrac12$ and
hence $\b_\crit(2)=\inf_{t>1/2}R(t)=2$.
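Indeed, for $\theta=2$ and $r=1$ we have $R(t)=4\,\frac{t\log t+(1-t)\log(1-t)+\log 2}{(2t-1)^2}$, and a Taylor expansion of the numerator around $t=\tfrac12$ gives $2(t-\tfrac12)^2+O((t-\tfrac12)^4)$, so that $R(t)\to2$ as $t\downarrow\tfrac12$.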
If $\theta\geq3$, note that
\begin{equation}\label{beta-f}
R(\tfrac{\theta-r}{r\theta})=\rho(\tfrac r\theta),
\mbox{ with }
\rho(t)=2\theta t \frac{1-t}{1-2t}
\log\Big(\frac{1-t}{t}\Big).
\end{equation}
The function $\rho$ is increasing on $[0,\tfrac12]$,
so $\rho(\tfrac r\theta)$ is minimal for $r=1$.
This gives the critical value $\b_\crit=\rho(\tfrac1\theta)$
claimed.
To check the statements about
$x^\uparrow$ and $x^\downarrow$
at $\b=\b_\crit$, we note that for this
value of $\b$ we have a maximizer of $\phi_\b$ at the point~\eqref{x-r}
with $r=1$ and $t=\tfrac{\theta-1}{\theta}$.
Thus, at $\b=\b_\crit$,
\[
x^\uparrow_1= \tfrac{\theta-1}{\theta}\quad\mbox{and}
\quad
x^\downarrow_\theta= \tfrac{1}{(\theta-1)\theta}.
\]
The claims follow.
\end{proof}
\subsection{The number of vertices in large cycles}
Let $k=k_n\to\infty$ be any sequence going to
$\infty$. Recall that $X_n(k)=\frac1n\sum_{|\g|\geq k} |\g|$
denotes the fraction of vertices in cycles of
size at least $k$ in the random permutation $\sigma(\om)$.
We now show that, under $\PP_\theta$ with
$\theta\in\{2,3,\dotsc\}$, asymptotically $X_n(k)$ is at most
\begin{equation}
\frac{\theta x_1^\uparrow-1}{\theta-1}.
\end{equation}
Note that this number equals $0$ if and only
if $x^\uparrow_1=\tfrac1\theta$,
i.e.\ $z^+(\b)=0$. Proposition~\ref{X-prop}
is a special case of the following result:
\begin{proposition}\label{expdecay-prop}
Let $\b>0$.
For any $\a<1-\tfrac1\theta$
and any $\varepsilon>0$ there is some $c>0$ such that
\begin{equation}\label{expdecay}
\PP_\theta\big(X_n(k)>\varepsilon+\tfrac1\a(x_1^\uparrow-\tfrac1\theta)\big)
\leq e^{-c n}
\end{equation}
for all large enough $n$.
\end{proposition}
\begin{proof}
We claim that, for any
$h>0$ and all sufficiently large $n$,
\begin{equation}\label{exp-bd}
\EE_\theta\Big[
\exp\Big(\a h\sum_{|\g|\geq k} |\g|\Big)
\Big]\leq
\frac{Z_n(\b,h)}{Z_n(\b,0)}.
\end{equation}
Indeed,
\begin{equation}\label{prod-split}
\begin{split}
Z_n(\b,h)&=\EE\Big[
\prod_{\g}\Big(\frac{e^{h |\g|}+\theta-1}{e^{(h/\theta)|\g|}}
\Big)\Big]\\
&=\EE\Big[\theta^\ell
\prod_{|\g|\geq k}
w(h|\g|)
\prod_{|\g|<k}
w(h|\g|)
\Big],
\end{split}
\end{equation}
where
\[
w(x)=\frac{e^{x}+\theta-1}{\theta e^{x/\theta}}
=\tfrac1\theta e^{x(1-\sfrac1\theta)}+
\tfrac{\theta-1}{\theta}e^{-\sfrac{x}{\theta}}
\]
is increasing in $x\geq0$, and satisfies:
\begin{equation}
w(x)\geq
\left\{\begin{array}{ll}
w(0)=1, & \mbox{for all }x\geq0, \\
e^{\a x}, & \mbox{for all large enough }x.
\end{array}\right.
\end{equation}
It follows from~\eqref{prod-split} that for large enough $n$,
\begin{equation}
Z_n(\b,h)\geq
\EE\Big[\theta^\ell
\exp\Big(\a h\sum_{|\g|\geq k} |\g|\Big)
\Big],
\end{equation}
which gives the claim.
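(The two lower bounds on $w$ used above are easy to confirm
numerically; the following small sketch, with the illustrative choices
$\theta=3$ and $\a=0.9(1-\tfrac1\theta)$, checks $w\geq1$ and locates
the point beyond which $w(x)\geq e^{\a x}$.)
\begin{verbatim}
# Since w is increasing, w(0)=1, and w(x)exp(-a x) -> infinity
# for a < 1 - 1/theta, we have w(x) >= exp(a x) for large x.
import numpy as np
theta, a = 3, 0.9*(1 - 1/3)
x = np.linspace(0.0, 50.0, 500001)
w = (np.exp(x) + theta - 1)/(theta*np.exp(x/theta))
print("min w =", w.min())                     # = 1, at x = 0
d = np.sign(w - np.exp(a*x))
x_star = x[1:][np.diff(d) > 0][-1]            # last upward crossing
print("w(x) >= exp(a x) for all x >=", x_star)
\end{verbatim}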
For any $\varepsilon>0$ we have that
\[
\PP_\theta\big(X_n(k)>\varepsilon+\tfrac1\a(x_1^\uparrow-\tfrac1\theta)\big)=
\PP_\theta\Big(\a h \sum_{|\g|\geq k}|\g|>
hn(\a\varepsilon +x_1^\uparrow-\tfrac1\theta)\Big).
\]
Using Markov's inequality and~\eqref{exp-bd}
it follows that
\[
\PP_\theta\big(X_n(k)>\varepsilon+\tfrac1\a(x_1^\uparrow-\tfrac1\theta)\big)
\leq
\exp(-hn(\a\varepsilon+x^\uparrow_1-\tfrac1\theta)) \frac{Z_n(\b,h)}{Z_n(\b,0)}.
\]
Thus
\begin{equation}
\begin{split}
\limsup_{n\to\infty}\frac1n &
\log \PP_\theta\big(X_n(k)>\varepsilon+\tfrac1\a(x_1^\uparrow-\tfrac1\theta)\big)
\\
&\leq -h(\a\varepsilon+x^\uparrow_1-\tfrac1\theta)+z(\b,h)-z(\b,0)\\
&= h\Big(\frac{z(\b,h)-z(\b,0)}{h}-\a\varepsilon-(x^\uparrow_1-\tfrac1\theta)\Big).
\end{split}
\end{equation}
By Theorem~\ref{der-thm} we have that
$\lim_{h\downarrow 0} \frac{z(\b,h)-z(\b,0)}{h} =x_1^\uparrow-\tfrac1\theta$,
hence there is some $h>0$ such that
\begin{equation}
\limsup_{n\to\infty}\frac1n
\log \PP_\theta\big(X_n(k)>\varepsilon+\tfrac1\a(x_1^\uparrow-\tfrac1\theta)\big)
\leq-\frac{h\a\varepsilon}{2}.
\end{equation}
This proves the result.
\end{proof}
\section{Introduction}
\medskip
In the last few years, the study of the Ly$\alpha$\ forest has undergone
several observational revolutions: the extension to low redshift
via HST, the probe of internal structure from spectra along
neighboring lines of sight, the extraordinary detail
provided by Keck HIRES data, and the clear detections
of metals associated with low column density HI absorbers.
The field has also undergone a theoretical revolution, driven by
hydrodynamic cosmological simulations.
These allow one to predict properties of the Ly$\alpha$\ forest from
{\it a priori} theoretical models motivated by independent considerations
of large scale structure, the cosmic microwave background, and
galaxy formation.
Since the pioneering numerical study of Cen et al.\ \cite{cen94},
there have been more than 30 papers using
cosmological simulations to investigate QSO absorption phenomena,
by several independent groups.
The resulting picture of the Ly$\alpha$\ forest
has features in common with some earlier models, especially the
fluctuating intergalactic medium (IGM) scenario of Bi \cite{bi93}.
One distinctive feature of this cosmological picture of the Ly$\alpha$\ forest
is the low density of the absorbing structures.
Typical marginally saturated lines ($N_{\rm HI} \sim 10^{14}\;{\rm cm}^{-2}$)
arise in gas whose density is a few times the cosmic mean or less.
Weak lines ($N_{\rm HI} \mathrel{\copy\simlessbox} 10^{13}\;{\rm cm}^{-2}$) often occur at local
maxima that lie below the global mean density.
This low density has a number of important consequences.
Most absorption arises in structures that are still expanding
with residual Hubble flow.
These absorbing systems are usually far from dynamical, hydrostatic,
or thermal equilibrium.
The low density implies a low recombination rate and thus a low
neutral fraction (typically $\sim 10^{-6} - 10^{-4}$),
so the neutral hydrogen revealed by the observed Ly$\alpha$\ opacity
is only the tip of a much larger iceberg.
The low density of the absorbing gas also means that absorbers must
be physically large in order to produce the observed column densities.
The large size implies that the Hubble flow across an absorber
can be substantial. Indeed, for a typical low column density line
in the simulations, the primary contribution to the line width
($b$-parameter) comes from Hubble flow. This situation contrasts
with that in traditional conceptions of the forest, where lines
are assumed to be broadened by thermal motions of the gas or by
``turbulent'' motions of cloudlets.
Gravitationally induced peculiar velocities do distort the lines in
cosmological simulations, but these {\it coherent} flows are not
at all like Gaussian turbulence, where there is a large {\it dispersion}
in the velocities at a given spatial position.
Furthermore, because most lines with $N_{\rm HI} \mathrel{\copy\simgreatbox} 10^{14}\;{\rm cm}^{-2}$
arise in moderately overdense regions that are expanding
slower than the Hubble rate, peculiar velocities on average have
the effect of {\it narrowing} Ly$\alpha$\ forest lines, not broadening them.
\begin{figure}[tb]
\centerline{\vbox{
\psfig{figure=hflow_fig1.ps,width=4.0truein}
}}
\caption[]{Solid lines show spectra along twelve randomly chosen lines of sight
through a simulation of the standard CDM model, at $z=3$.
Dotted and dashed lines show spectra along the same lines of sight
with no thermal broadening and no peculiar velocities, respectively.
}
\end{figure}
Figure 1 illustrates these points using spectra extracted along twelve
random lines of sight through a hydrodynamic
simulation of the ``standard'' CDM model
(SCDM, with $\Omega=1$, $h=0.5$, $\sigma_8=0.7$) at $z=3$
(see \cite{hkwm96} and \cite{cwkh97} for details).
Solid lines show the full Ly$\alpha$\ absorption spectra.
Dotted lines show the spectra with no thermal broadening --- they
are computed by artificially setting the gas temperature to zero
(without changing the neutral fraction). There are a handful of
sharp features in the non-thermally-broadened spectra that are
smoothed away in the full spectra. However, in most regions
the dotted and solid lines are barely distinguishable,
demonstrating that thermal broadening usually does not contribute
significantly to the width of the absorption features.
The dashed lines in Figure~1 show spectra with thermal broadening
but no peculiar motions --- they are extracted along the same lines
of sight after setting the peculiar velocities of all gas particles to zero.
Comparing the solid lines to the dashed lines shows that
peculiar velocities do shift the positions and distort the shapes
of individual absorption features, but they do not make them systematically
broader. Indeed, as expected from the physical argument above, the
lines in the full spectra tend to be somewhat narrower than the lines
in the spectra with no peculiar velocities.
Studies of QSO pairs \cite{bechtold94,dinshaw94,dinshaw95} provide
observational evidence for Hubble flow broadening of Ly$\alpha$\ forest lines
independently of cosmological simulations.
The inferred transverse coherence scale of the absorbers,
$l_t \sim 150 h^{-1}\;$kpc at $z \sim 2$, corresponds (for $\Omega=1$)
to a line-of-sight extent $H(z)l_t \approx 80\;{\rm km}\;{\rm s}^{-1}$,
considerably larger than the $\sim 25\;{\rm km}\;{\rm s}^{-1}$ $b$-parameter of
a typical forest line \cite{hu95}.
Nonspherical absorbers would be preferentially intercepted ``face on,''
but unless the absorbing structures are {\it highly} flattened
the observed transverse scale implies that Hubble flow across them
must make a major contribution to the line width.
The transverse coherence could in principle be a signature of clustering of
small clouds rather than a physical scale of large absorbers, but the
nearly perfect coincidence of lines towards small-separation
gravitational lens pairs \cite{smette92,smette95} argues against
the clustering interpretation, and the detailed match of absorption
features shown by Rauch in these proceedings seems to rule it out
definitively.
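(As a quick arithmetic check of the numbers just quoted: for
$\Omega=1$ one has $H(z)=H_0(1+z)^{3/2}$, and the trivial sketch below
reproduces $H(z)l_t \approx 80\;{\rm km}\;{\rm s}^{-1}$; the factor of
$h$ cancels between $H_0$ and $l_t$.)
\begin{verbatim}
# H(z)*l_t for Omega=1, z=2, l_t = 150/h kpc = 0.150/h Mpc.
H0 = 100.0                               # km/s/Mpc, in units of h
z, l_t = 2.0, 0.150                      # l_t in h^-1 Mpc
print("%.0f km/s" % (H0*(1.0 + z)**1.5*l_t))   # ~80 km/s
\end{verbatim}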
Many papers have remarked on the low density of the absorbing
gas in cosmological simulations (e.g.,
\cite{cen94,zhang95,hkwm96,miralda96,zhang97}). The issue of
Hubble flow broadening has received less attention, but
its implications are perhaps even more profound.
For a start, it means that the profile of a Ly$\alpha$\ forest line
shows a line-of-sight density profile through the absorbing
structure, albeit one that is non-linear and distorted by
peculiar motions. This is not the case in the thermal broadening
picture, where the absorber itself is compact and the wings of
the line arise from high velocity atoms. In the cosmological
picture, line wings show the absorbing structure itself fading
into the background, like mountains into foothills.
Another consequence of Hubble flow broadening is that the
Gunn-Peterson \cite{gunn65} formula,
\begin{equation}
\tau_{\rm GP} = \frac{\pi e^2}{m_e c}\; f_\alpha \lambda_\alpha
H^{-1}(z) n_{\rm HI},
\label{taugp}
\end{equation}
provides a good approximation to the relation between local
Ly$\alpha$\ optical depth and the local space density of neutral hydrogen.
In a thermally broadened, compact cloud model, by contrast,
the optical depth is lower than $\tau_{\rm GP}$ at the line center
(the redshift space location of the dense cloud) and higher than
$\tau_{\rm GP}$ in the line wings (where there is no gas at the
corresponding redshift space position).
Cosmological simulations imply that the Ly$\alpha$\ forest can
be viewed as a fluctuating Gunn-Peterson effect, produced by
an inhomogeneous, diffuse intergalactic medium.
What turns the fluctuating Gunn-Peterson idea from a novelty into
a powerful conceptual tool is the simplicity of the physics that
governs the ionization state of the low density gas.
This gas is in photoionization equilibrium, so the neutral hydrogen
density is $n_{\rm HI} \propto \rho^2 T^{-0.7}/\Gamma$, where $\Gamma$ is
the photoionization rate and the $T^{-0.7}$ factor accounts for the
temperature dependence of the hydrogen recombination coefficient near
$T \sim 10^4\;{\rm K}$.
The interplay between photoionization heating and adiabatic cooling
leads to a tight relation between temperature and density,
which can be well approximated by a power law, $T=T_0({\rho/{\overline\rho}})^\gamma$
\cite{cwkh97,hg97}. The values of $T_0$ and $\gamma$ depend on
the UV background spectrum and reionization history and can be
computed semi-analytically \cite{hg97};
typically $T_0 \sim 6000\;$K and $\gamma \sim 0.3-0.6$.
With this physical reasoning, equation~(\ref{taugp}) can
be converted to a formula we can describe as the
{\it fluctuating Gunn-Peterson approximation},
\begin{eqnarray}
\tau(\lambda_{\rm obs}) = &
0.172 \left(\frac{\rho}{\overline \rho}\right)^\beta
\left(1 + \frac{dV_{\rm los}}{H(z) dx}\right)^{-1}
\left(\frac{1+z}{4}\right)^6
\left(\frac{H(z)/H_0}{5.51}\right)^{-1} h^{-1} \;\times & \nonumber \\
& \left(\frac{\Omega_b h^2}{0.0125}\right)^2
\left(\frac{T_0}{10^4\;{\rm K}}\right)^{-0.7}
\left(\frac{\Gamma}{10^{-12}\;{\rm s}^{-1}}\right)^{-1}\;, & \label{fgpa}
\end{eqnarray}
where $\beta \equiv 2-0.7\gamma$,
${\rho/{\overline\rho}}$ is the overdensity at the position where the
redshift (cosmological plus peculiar velocity) is
$\lambda_{\rm obs}/\lambda_\alpha - 1$, and $dV_{\rm los}/dx$
is the derivative of the line-of-sight peculiar velocity at the same
position. The peculiar velocity term accounts for the mapping
from real space to redshift space.
In principle, ${\rho/{\overline\rho}}$ here refers to the {\it gas} overdensity,
but because the temperature is low, pressure gradients are small
compared to gravitational forces, and the gas traces the dark matter
quite well.
Equation~(\ref{fgpa}) is valid if all gas lies on the temperature-density
relation and thermal broadening and collisional ionization can be
ignored. The approximation breaks down when ${\rho/{\overline\rho}} \mathrel{\copy\simgreatbox} 10$,
but these regions occupy a small fraction of the spectrum.
They are responsible for high column density lines, and the general
description presented above begins to break down for
$N_{\rm HI} \mathrel{\copy\simgreatbox} 10^{15}-10^{16}\;{\rm cm}^{-2}$. This description might
also become less accurate at low redshifts;
we have not examined the Ly$\alpha$\ forest
at $z<2$ with our simulations. We should also note that
{\it some} low column density lines at high $z$
arise in shock heated gas and are thermally broadened.
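For concreteness, equation~(\ref{fgpa}) is straightforward to
evaluate. The Python sketch below is our own illustration: the
function name is invented, and the defaults simply set each bracketed
factor to its fiducial value, with $h=0.5$ as in the SCDM model above
and $\gamma=0.6$:
\begin{verbatim}
# Fluctuating Gunn-Peterson approximation, transcribed from the
# formula above; delta = rho/rhobar is the gas overdensity.
def tau_fgpa(delta, z=3.0, dvdx_over_H=0.0, gamma=0.6, h=0.5,
             Hz_over_H0=5.51, Ob_h2=0.0125, T0=1.0e4, Gamma12=1.0):
    """Lya optical depth; Gamma12 = Gamma / 1e-12 s^-1."""
    beta = 2.0 - 0.7*gamma
    return (0.172 * delta**beta / (1.0 + dvdx_over_H)
            * ((1.0 + z)/4.0)**6 * (5.51/Hz_over_H0) / h
            * (Ob_h2/0.0125)**2 * (T0/1.0e4)**(-0.7) / Gamma12)

for delta in [0.3, 1.0, 3.0, 10.0]:
    print(delta, tau_fgpa(delta))
\end{verbatim}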
A second useful approximation arises from ignoring peculiar velocities,
setting $dV_{\rm los}/dx=0$ in equation~(\ref{fgpa}), so that there is
a one-to-one relation between optical depth and overdensity.
Figure~6 of \cite{cwkh97} shows that the $\tau-\rho$ relation remains
tight in simulated spectra even when peculiar velocities, thermal
broadening, shock heating, and collisional ionization are all taken
into account. The distribution of Ly$\alpha$\ optical depths $P(\tau)$
is directly observable from high resolution QSO spectra \cite{rauch97},
and one can use this second approximation to write the mean density
of the ``warm'' IGM that produces the Ly$\alpha$\ forest in terms of an
integral over this distribution,
${\overline\rho}_{\rm WIGM} = \int_0^\infty \rho(\tau) P(\tau)d\tau$.
After some manipulation, one obtains the density parameter of this
warm IGM component \cite{wkh97,wmhk97},
\begin{eqnarray}
\Omega_{\rm WIGM}
= & 0.021 h^{-3/2}
\left(\frac{
\left[\int_0^\infty \tau^{1/\beta} P(\tau) d\tau\right]^{\beta/2}}
{0.70} \right) \left(\frac{4}{1+z}\right)^3 \;\times & \nonumber \\
& \left(\frac{H(z)/H_0}{5.51}\right)^{1/2}
\left(\frac{T_0}{10^4\;{\rm K}}\right)^{0.35}
\left(\frac{\Gamma}{10^{-12}\; {\rm s}^{-1}}\right)^{1/2}\;. & \label{igm}
\end{eqnarray}
For the fiducial value of the optical depth integral, we have used
a value inferred from the observations of \cite{rauch97} at $z=3$.
The implied $\Omega_{\rm WIGM}$ is a substantial fraction of the
baryon density parameter $\Omega_b$ allowed by big bang nucleosynthesis,
indicating that most of the baryons in the universe at $z \sim 3$
resided in the Ly$\alpha$\ forest, as the simulations predict.
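The corresponding evaluation of equation~(\ref{igm}) is equally
direct; in this sketch (again ours, with invented names) the
optical-depth moment defaults to the fiducial value $0.70$ quoted
from the $z=3$ data, and $h=0.5$ gives
$\Omega_{\rm WIGM}\approx 0.06$:
\begin{verbatim}
# Omega_WIGM from the tau-moment of the flux distribution.
def omega_wigm(tau_moment=0.70, h=0.5, z=3.0, Hz_over_H0=5.51,
               T0=1.0e4, Gamma12=1.0):
    """tau_moment = [ int tau^(1/beta) P(tau) dtau ]^(beta/2)."""
    return (0.021 * h**-1.5 * (tau_moment/0.70)
            * (4.0/(1.0 + z))**3 * (Hz_over_H0/5.51)**0.5
            * (T0/1.0e4)**0.35 * Gamma12**0.5)

print(omega_wigm())   # ~0.059 for h = 0.5
\end{verbatim}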
The relation between the underlying mass distribution and the number density
of Ly$\alpha$\ forest lines in a spectrum may be physically complex, and it
is sensitive to the details of the observational procedures and the
method used to deblend absorption features.
However, the fluctuating Gunn-Peterson approximation implies that
the relation between mass density and observed flux is direct and simple.
If one wants to use the Ly$\alpha$\ forest to test theories of structure
formation, it is best to abandon
lines altogether and treat the full observed spectrum as a continuous field.
Statistical properties of this ``flux field'' are directly related to
statistical properties of the underlying density and velocity fields,
which are basic predictions of cosmological models.
We are currently studying a variety of statistical measures similar to
those used in large scale structure analyses, applying them to simulations
in order to assess their sensitivity to different properties of the
initial fluctuations and to values of cosmological parameters.
We have also developed and tested a method to recover the shape and
amplitude of the primordial mass power spectrum $P(k)$ from Ly$\alpha$\ forest data
\cite{cwkh97b}, again motivated by the ``continuous field'' point of view.
The 3-d flux power spectrum has the same shape as the mass power spectrum
on large scales, and the normalization can be determined by evolving numerical
simulations with this initial $P(k)$ shape until they reproduce the
observed power spectrum of the QSO flux. Imposing the
observed mean Ly$\alpha$\ opacity as a constraint makes the derived $P(k)$
normalization insensitive to the choice of cosmological parameters,
ionizing background spectrum, or reionization history.
This approach thus neatly circumvents the uncertain physics of galaxy
formation and ``biasing,'' which complicates the interpretation of
power spectra measured from galaxy redshift surveys.
Application to existing samples of QSO spectra should soon yield
the power spectrum of mass fluctuations in the high redshift universe.
\begin{iapbib}{99}{
\bibitem{bechtold94}
Bechtold J., Crotts A.P.S., Duncan R.C., Fang Y., 1994, \apj 437, L83
\bibitem{bi93}
Bi H.G., 1993, \apj 405, 479
\bibitem{cen94}
Cen R., Miralda-Escud\'e J., Ostriker J.P., Rauch M., 1994, \apj 437, L9
\bibitem{cwkh97}
Croft R.A.C., Weinberg D.H., Katz N., Hernquist L., 1997, \apj in press
(astro-ph/9611053)
\bibitem{cwkh97b}
Croft R.A.C., Weinberg D.H., Katz N., Hernquist L., 1997,
\apj submitted (astro-ph/9708018)
\bibitem{dinshaw94}
Dinshaw N., Impey C.D., Foltz C. B., Weymann R.J.,
Chaffee F.H., 1994, \apj 437, L87
\bibitem{dinshaw95}
Dinshaw N., Foltz C.B., Impey C.D., Weymann R.J., Morris S.L., 1995,
Nature 373, 223
\bibitem{gunn65}
Gunn J.E., Peterson B.A., 1965, \apj 142, 1633
\bibitem{hkwm96}
Hernquist L., Katz N., Weinberg D.H.,
Miralda-Escud\'e J. 1996, \apj 457, L5
\bibitem{hu95}
Hu E.M., Kim T.S., Cowie L.L., Songaila A., Rauch M., 1995, \aj
110, 1526
\bibitem{hg97}
Hui L., Gnedin N., 1997, \mn submitted (astro-ph/9612232)
\bibitem{miralda96}
Miralda-Escud\'e J., Cen R., Ostriker J.P., Rauch M., 1996, \apj 471, 582
\bibitem{rauch97}
Rauch M., Miralda-Escud\'e J., Sargent W.L.W., Barlow T.A.,
Weinberg D.H., Hernquist L., Katz N., Cen R., Ostriker J.P.,
1997, \apj in press (astro-ph/9612245)
\bibitem{smette92}
Smette A., Surdej J., Shaver P.A., Foltz C.B., Chaffee F.H.,
Weymann R.J., Williams R.E., Magain P., 1992, \apj 389, 39
\bibitem{smette95}
Smette A., Robertson J.G., Shaver P.A.,
Wisotzki L., Koehler T., 1995, A\&AS 113, 199
\bibitem{wkh97}
Weinberg D.H., Katz N., Hernquist L., 1997,
in Origins, eds. J. M. Shull, C. E. Woodward, \& H. Thronson,
(ASP Conference Series: San Francisco), (astro-ph/9708213)
\bibitem{wmhk97}
Weinberg D.H., Miralda-Escud\'{e} J., Hernquist L., Katz N., 1997,
\apj 490, in press (astro-ph/9701012)
\bibitem{zhang95}
Zhang Y., Anninos P., Norman M.L., 1995, \apj 453, L57
\bibitem{zhang97}
Zhang Y., Meiksin A., Anninos P., Norman M.L., 1997, \apj in press
}
\end{iapbib}
\vfill
\end{document}
\section{Introduction}
\label{intro}
We are interested here in what one might call the practical
implementation
of the Dirac Quantization procedure \cite{Dirac}
for constrained systems. Recall
that the Dirac approach involves
introducing the constraints as operators on some space and then taking
only those states which are annihilated by the constraints to be
`physical.'
These physical states are then made into a (physical) Hilbert space.
Recall also that the
Dirac procedure and the closely related BRST approach
\cite{BRST} are the favored
methods for addressing quantum gauge systems.
A number of variants of the Dirac method have been discussed, including
geometric quantization \cite{Woodhouse}, reduced phase space methods
\cite{Karel,OP}, coherent state quantization
\cite{Klauder}, algebraic quantization \cite{AAbook}, and refined
algebraic quantization \cite{ALMMT,GM} (in which we include the
work of \cite{KL,AH,QORD}). It is refined algebraic quantization (RAQ)
in particular that we will study here. RAQ has been
shown to have a certain generality \cite{GM} and has the useful property
that the classical reality conditions of an observable algebra
are implemented as hermiticity relations of the operators
on the physical Hilbert space {\it without} first
constructing the
quantum observables explicitly \cite{ALMMT}.
However, refined algebraic quantization
becomes much more powerful when a technique
known as `group averaging' can be applied. Group averaging uses
the integral
\begin{equation}
\label{GAI}
\int_G \langle \phi_1|U(g)|\phi_2 \rangle \ dg
\end{equation}
over the gauge group $G$
to define the physical Hilbert space. Here $dg$ is what one might call
the
`symmetric' Haar measure on $G$ \cite{GM2}.
Once a space of states
($\Phi$) has been found for which this procedure converges, group
averaging gives an {\it algorithm} for the implementation of RAQ. When
group
averaging converges sufficiently strongly\footnote{At least for locally
compact
(i.e., finite dimensional or non-field theoretic) gauge groups.}, this
algorithm gives the {\it unique} implementation of RAQ \cite{GM2}.
In particular, group averaging provides the unique Hilbert space
representation (with a unique inner product) of the algebra of
observables which is compatible with RAQ. Convergent group
averaging also gives an algorithm for construction of a complete set of
observables \cite{GM2}. The convergence of group averaging is typical
in
mini-superspace settings, in which it has been used to construct
physically
meaningful observables \cite{QORD} as well as to study the
semi-classical limit
\cite{BDT,PI} and, in particular, the instanton approximation
\cite{PI}.
Although the influence of the choice of domain $\Phi$
is not fully understood,
we see that the case where group averaging converges is under fair
control.
However, it will often happen that group averaging fails to converge
on some interesting domain. As described in \cite{GM2},
the fact that convergent group averaging ensures a unique
representation (compatible with RAQ) of the algebra of observables
shows that group averaging {\it must} in
fact diverge in the presence of any superselection rules. However, as
was
described in \cite{ALMMT}, one can sometimes construct a `renormalized'
group
averaging operation, even when group averaging
does not properly converge. Ref. \cite{ALMMT} successfully used
this idea in the context of the loop approach \cite{AAbook,CR}
to quantum gravity
to construct a Hilbert space of states which are invariant under the
group
of diffeomorphisms of a spacelike surface $\Sigma$. Our goal here is
to construct further examples of successful `renormalized group
averaging'
as a potential aid to its future general study.
Below, we consider as gauge groups the
components $SO_c(n,1)$ of $SO(n,1)$ which are connected to the identity.
As discussed in \cite{GM2},
group averaging is guaranteed to converge on the regular representation,
in which $SO_c(n,1)$ acts on $L^2(SO_c(n,1))$. However, the most
familiar
representation of $SO_c(n,1)$ is given by its action on $n+1$
dimensional
Minkowski space $M^{n,1}$. We consider here the associated
representations of $SO_c(n,1)$ on $L^2(M^{n,1})$.
Section \ref{ga} studies the convergence of group averaging for various
sectors. We find that
group averaging converges for states corresponding to smooth
functions $f$ on $M^{n,1}$ when the closure of the support of $f$ lies
inside the light cone. For $n>1$, group averaging does not converge
for states whose support extends outside the light cone. However, we
show
that a certain `renormalization' of the group averaging scheme does lead
to a well-defined physical inner product. We then show
in section \ref{rig} that this
satisfies the detailed requirements of RAQ. In fact, we
find a two parameter family of such physical Hilbert spaces. One
parameter is
a trivial overall normalization, but the other stems from a
superselection rule
between physical states associated with the
interior of the light cone and those associated with the exterior.
Further implications of our results are discussed in section \ref{Disc}.
We will not review the details of group averaging and refined algebraic
quantization here. Instead, we refer the reader to \cite{ALMMT,GM,GM2},
whose notation we follow.
\section{Group Averaging}
\label{ga}
Consider the group $G=SO_c(n,1)$ acting on $L^2(M^{n,1}, d^{n+1} x)$. The
infinitesimal action of the group is defined by the generators of
the Lie Algebra,
\begin{equation}
J_{\mu\nu} = \eta_{\mu\alpha}x^{\alpha}\frac{\partial}{\partial x^\nu} -
\eta_{\nu\alpha}x^{\alpha}\frac{\partial}{\partial x^\mu} \ ,
\end{equation}
whose exponentiation gives the unitary action $U(g)$ of the group.
The generators $J_{\mu\nu}$ also define the constraints of the theory.
Thus,
physical states satisfy
\begin{equation}
J_{\mu\nu} |\psi\rangle_{\rm phys} = 0 \ .
\end{equation}
Since there are no such normalizable states, RAQ redefines this
condition to be
\begin{equation}
\langle \psi |_{\rm phys} J_{\mu \nu} =0.
\end{equation}
We are interested in the convergence of the associated group
averaging integral (\ref{GAI}) on some
domain $\Phi$. If it converges, or if it can be renormalized in a
useful
way, it will define a map (known as the `rigging map') from $\Phi$ into
the
space of physical states. Below, we study this issue by first finding a
useful parameterization of the Haar measure on $SO_c(n,1)$ in subsection
A. We then
perform explicit calculations of the group averaging integral in
subsection B. In subsection C we present the final form of the
resulting (candidate) rigging map. The proof that this is indeed a
rigging map
will be given in the section III.
\subsection{The Haar Measure}
\label{param}
Let us first find a parameterization of $SO_c(n,1)$ and compute its Haar
measure.
Any element $g$ in $G= SO_c(n,1)$ is a product of a boost and a
rotation.
In general
this is called the ``Cartan decomposition'' \cite{barut}. In our case,
choosing some $x^0$ time coordinate in Minkowski space, we write
$g = h_0k_0$ for $k_0$ in the $SO(n)$ subgroup $K$ of $G$
that preserves the $x^0$ axis, and $h_0$ a symmetric positive
definite
matrix (a pure boost). In general, such an $h_0$ can be written as
$h_0 = k_1 b(\lambda) k_1^{-1}$ for a rotation $k_1 \in K$ and
$b(\lambda)$
a boost (with boost parameter $\lambda$) in the $x^0,x^1$ plane. For
our purposes, it is convenient to write $k = k_1^{-1} k_0$ and $h=k_1
b(\lambda)$ so that we
have
\begin{equation}
\label{cd}
g = hk \ , k\in K \ , \ \ \ \mbox{and} \ \ \ h \in H^n_+ \ .
\end{equation}
Note that $H^{n}_+$ may be identified with the (right) coset space
$SO_c(n,1)/K$. It will be useful to represent this space as
the upper sheet of the Hyperboloid
\begin{equation}
\label{uh}
-(x^0)^2 + (x^1)^2 + \cdots + (x^n)^2 = -1 \
\end{equation}
by mapping $h$ to the image of the $x^0$ axis under $h$.
A generic element
of $H^{n}_+$ can be written
\begin{equation}
h = k_{n-1}(\theta_{n-1})k_{n-2}(\theta_{n-2})\cdots k_{1}(\theta_{1})
b(\lambda)\ ,
\label{h}
\end{equation}
where $k_{m}$ is a rotation in the plane $(x^m,x^{m+1})$ and
$b(\lambda)$
is a hyperbolic rotation in $(x^0,x^1)$. Here
$0<\lambda<\infty$, $0\le\theta_{i}<\pi$ for $i=1,\ldots,n-2$ and
$0\le\theta_{n-1}<2\pi$.
In terms of standard Minkowski coordinates and the
identification of $H_+^n$ with the upper sheet of the hyperboloid
(\ref{uh}),
the parameterization (\ref{h}) is
\begin{eqnarray*}
x^0 &=& \cosh\lambda \\
x^1 &=& \sinh\lambda \cos\theta_1\\
x^2 &=& \sinh\lambda \sin\theta_1 \cos\theta_2\\
\vdots &\vdots & \ \ \ \ \ \ \ \ \ \vdots \\
x^n &=& \sinh\lambda \sin\theta_1 \sin\theta_2\cdots
\sin\theta_{n-2}\sin\theta_{n-1} \ \ .
\end{eqnarray*}
Now, the standard measure $d^{n+1}x$ on the region within
the future light cone $x^0>0$,
$x\cdot x
<0$ in $(n+1)$-dimensional Minkowski space is invariant under $SO_c(n,1)$. Let $s$
denote
the timelike separation of a point $x$ inside this light cone from the
origin: $s^2 = - x \cdot x$. Writing the measure $d^{n+1}x$ as $s^n ds
\ dh$
leads to an $SO_c(n,1)$-invariant measure $dh$ for $H_+^n$ given by
\begin{equation}
dh=\sinh^{n-1}\lambda\sin^{n-2}\theta_1\cdots\sin\theta_{n-2}
d\lambda d\theta_1\cdots d\theta_{n-1} \ ,
\end{equation}
where $d\lambda$ and $d\theta_{i}$ are the usual Lebesgue measures on
the
appropriate intervals.
Consider then the measure $dg (hk) = dh(h) \ dk(k)$ on
$SO_c(n,1)$, where $dk(k)$ denotes the Haar measure on $K$. For any $g
\in G$, we may write $gh = h_1 k_1$
for $h_1 \in H_+^n$ and $k_1 \in K$. In particular, $h_1$ is such
that it takes the $x^0$ axis to the same line in $M^{n,1}$
as $gh$ does. Thus, $g$
acts as an $SO_c(n,1)$ transformation on $H^n_+$ and $dh(h_1) = dh(h)$.
Since $dk(k_1k) = dk(k)$, we have $dg(ghk) = dh(h_1) dk(k_1 k) = dg(g)$
and
\begin{equation}
dg = dh \ dk \
\end{equation}
is a Haar measure on $G$. For more details on the
procedure to
compute Haar measures for different parameterizations, see
\cite{vilenkin}.
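The decomposition (\ref{cd}) is easy to exhibit numerically: in the
matrix realization preserving $\eta={\rm diag}(-1,1,\dotsc,1)$ it is
just the polar decomposition $g=hk$, with $h=(gg^T)^{1/2}$ the
positive definite (boost) factor and $k$ orthogonal. The following
Python sketch is our own illustration, for the case $SO_c(2,1)$:
\begin{verbatim}
# Cartan decomposition g = h k via matrix polar decomposition.
import numpy as np
from scipy.linalg import polar

def boost(lam):   # hyperbolic rotation in the (x^0,x^1) plane
    c, s = np.cosh(lam), np.sinh(lam)
    return np.array([[c, s, 0.], [s, c, 0.], [0., 0., 1.]])

def rot(th):      # rotation in the (x^1,x^2) plane, fixing x^0
    c, s = np.cos(th), np.sin(th)
    return np.array([[1., 0., 0.], [0., c, -s], [0., s, c]])

eta = np.diag([-1., 1., 1.])
g = rot(0.3) @ boost(1.2) @ rot(-0.7)     # a generic group element
u, p = polar(g, side='left')              # returns (u, p) with g = p u
h, k = p, u                               # h: pure boost, k: rotation
assert np.allclose(g, h @ k)
assert np.allclose(h, h.T)                # h is symmetric
assert np.allclose(k @ k.T, np.eye(3)) and np.isclose(k[0, 0], 1.0)
assert np.allclose(g.T @ eta @ g, eta)    # g preserves eta
print("g = h k with h a pure boost and k a rotation about x^0")
\end{verbatim}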
\subsection{The averaging procedure}
We wish to study the integral
\begin{equation}
\int_{g\in G} \langle \phi_1|U(g)|\phi_2 \rangle dg \ ,
\end{equation}
where $\phi_1$ and $\phi_2$ lie in some domain $\Phi \subset
{\cal H}_{\rm aux}$.
It is natural to take $\Phi$ to be a
subspace of smooth functions of compact support.
Thus, we proceed by introducing the distributional states $|x\rangle$
for $x\in
M^{n,1}$ satisfying
$
\langle x_1| x_2 \rangle = \delta^{n+1}(x_1,x_2)$.
We will then study
\begin{equation}
I := \int_{g\in G} \langle x_1|U(g)|x_2 \rangle dg \ ,
\label{aver}
\end{equation}
treating this expression as a distribution in both $x_1$ and $x_2$.
The expression (\ref{aver}) can be written as
follows,
\begin{eqnarray}
\int dg \langle x_1|U(g)|x_2\rangle &=& \frac{1}{V_{SO(n)}}\int dk \int
dg
\langle x_1|U(kg)|x_2\rangle \nonumber \\
&=& \frac{1}{V_{SO(n)}} \int dk \ dh \ dk' \langle
x_1|U(khk')|x_2\rangle
\ ,
\label{b}
\end{eqnarray}
where $k,k'\in K$, $h\in H^n_+$ and $V_{SO(n)}=\int dk$ is the
volume of
$SO(n)$.
However, any element of $h$ can be written as in (\ref{h}). Thus, using
the $SO(n)$ translation invariance of $dk$,
equation (\ref{b}) takes
the form,
\begin{equation}
I=\frac{V_{S_{n-1}}}{V_{SO(n)}} \int dk \ dk' \ d\lambda \ \sinh^{n-1}
\lambda
\ \langle x_1|U(k)U\left(b(\lambda)\right)U(k')|x_2\rangle \ ,
\label{integral}
\end{equation}
where $V_{S_{n-1}}=\frac{2\pi^{n/2}}{\Gamma(n/2)}$ is the volume of the
$(n-1)$--sphere. Below, we write $U(b(\lambda))$ as $B(\lambda)$ to
make the distinction clear between this boost and the rotations $U(k)$.
To evaluate the integral in (\ref{integral})
it is useful to introduce two complete sets of states, and to rewrite
(\ref{integral}) as
\begin{equation}
\int_{k,k' \in K} dk dk' \ d\lambda d^{n+1} x \ d^{n+1} x'
\langle x_1|U(k)|x\rangle\langle x|B(\lambda)|x'\rangle\langle
x'|U(k')|x_2\rangle
\ .
\label{int}
\end{equation}
Averaging over the compact group $SO(n)$ is straightforward, and up to a
constant factor yields
\begin{equation}
\int_K dk \langle x| U(k)|x'\rangle =
\frac{1}{r^{n-2}}\delta(t,t')\delta(r^2,{r'}^2) ,
\label{inner}
\end{equation}
where $r^2 = \sum_{i>0} x^i x^i$. This may be seen from the fact that,
if we assign each coordinate ($t,x^i$) dimensions of length, the matrix
elements $\langle x|U(k)|x'\rangle$ have dimensions of
(length)${}^{-(n+1)}$
while the measure $dk$ is dimensionless. This necessitates the factor
of
$r^{-(n-2)}$ on the right hand side.
Substituting this into (\ref{integral}) we find that, up to a finite
constant
factor independent of the initial and final states,
\begin{equation}
I= \frac{1}{r_1^{n-2}r_2^{n-2}}\int dt \ d^n x \ d\lambda \ \sinh^{n-1}
\lambda \
\delta(r_1^2 ,
r^2)\delta(t_1,t)\delta(r_2^2,r_\lambda^2)\delta(t_2,t_{\lambda})
,
\label{a}
\end{equation}
where the subscript $\lambda$ indicates that the quantity is boosted in
the
$(x^0,x^1)$ plane with
parameter $\lambda$; that is,
\begin{equation}
\left(
\begin{array}{c}
t_\lambda \\ {(x^1)}_\lambda
\end{array} \right)
=\left(
\begin{array}{c}
t \cosh \lambda + x^1 \sinh \lambda \\ t \sinh \lambda + x^1 \cosh
\lambda
\end{array} \right) ,
\end{equation}
and $x^i_{\lambda} = x^i$ for $i>1$.
Note that for $n=1$ the integral $I$ can be easily done. We use three of
the
$\delta$--functions to integrate over $dt$, $dx$ and $d\lambda$,
obtaining a
result that is finite in the distributional sense:
\begin{equation}
I_{n=1}= \delta\left((x_1^2-t_1^2),(x_2^2-t_2^2)\right) \ .
\end{equation}
This expression
is manifestly Lorentz invariant. The convergence for $n=1$ is not
surprising as, in this case, there is only one constraint and it has
a well-behaved spectrum (satisfying, for example, property A of
\cite{BC}). For this kind of system,
the averaging procedure converges in the same way that $\int e^{ikx} dx$
converges to $\delta(k)$ as a distribution over $C_0^\infty$. Here, $k$
plays the role of the spectral parameter of the constraint.
{}From now on we will consider only the case $n>1$.
Using the $(t_1,t)$, $(t_2,t_{\lambda})$ and $(r_2^2,r_\lambda^2)$
delta-functions to do the $d^n x$ and $dt$ integrations in (\ref{a}), we
obtain, again up to an overall constant factor,
\begin{equation}
I= \frac{\delta(s_1^2,s_2^2)}{r_1^{n-2}r_2^{n-2}} \int \left[
r_1^2\sinh^2\lambda - \left(
t_2-t_1\cosh\lambda
\right)^2 \right]^{\frac{n-3}{2}} \sinh \lambda \ d\lambda \ ,
\end{equation}
where $s_a^2=\eta_{\mu\nu}x_a^\mu x_a^\nu$, $a=1,2$ and $\lambda$ is
integrated
over all positive values such that the term in
square brackets is positive. Changing variables to
$\xi=\cosh\lambda$ this can be
written,
\begin{equation}
I=\frac{\delta(s_1^2,s_2^2)}{r_1^{n-2}r_2^{n-2}} \int d\xi \left[s_1^2
\xi^2 +
2t_1t_2 \xi - (r_1^2+t_2^2)\right]^{\frac{n-3}{2}} \ .
\label{I}
\end{equation}
It is now convenient to treat independently the
cases where $s_1$ and $s_2$ are either both spacelike or both timelike.
We will
not treat the lightlike case, and it is clear from (\ref{aver}) that $I$
will
vanish if
$x_1$ is timelike while $x_2$ is spacelike.
\subsubsection{$s_1$, $s_2$ Spacelike}
In this case, the term in square brackets in (\ref{I}) will be
positive for $\xi$ greater than some $\xi_0$. The integral has an
infinite domain and will, in general, diverge.
Now define the dimensionless parameter,
\begin{equation}
\label{u-def}
u=\frac{s_1^2 \xi + t_1t_2}{r_1r_2} \ .
\end{equation}
The interval $\xi \in [\xi_0,\infty)$ maps to $u \in [1, \infty)$ while
the
point
$\xi=1$ maps to $u=u_0$, with
\begin{equation}
u_0=\frac{s_1^2 + t_1t_2}{r_1r_2} \ .
\end{equation}
In the appendix we show that $u_0 < 1$.
In terms of $u$, Eq. (\ref{I}) takes the form
\begin{equation}
I=\frac{\delta (s_1^2,s_2^2)}{s_1^{n-1}}
\ \int_1^\infty du (u^2-1)^{ \frac{n-3}{2} }.
\label{ii}
\end{equation}
Hence, we have succeeded in writing $I$ as a divergent
factor times a Lorentz invariant quantity:
\begin{equation}
I=\lim_{\Lambda\rightarrow\infty}\frac{\delta (s_1^2,s_2^2)}{s_1^{n-1}}
\Delta(\Lambda) \ ,
\end{equation}
where
$$
\Delta(\Lambda)=\int_{1}^{\Lambda} du (u^2-1)^{ \frac{n-3}{2} } \ .
$$
This diverges as $\Lambda^{n-2}$ for $n>2$ and as $\log(\Lambda)$ for
$n=2$.
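These rates are simple to confirm numerically: substituting
$u=\cosh v$ turns $\Delta(\Lambda)$ into
$\int_0^{\cosh^{-1}\Lambda}\sinh^{n-2}v \, dv$, and in the sketch
below (ours) the ratios $\Delta(\Lambda)/\log\Lambda$ for $n=2$ and
$\Delta(\Lambda)/\Lambda^{n-2}$ for $n>2$ approach constants:
\begin{verbatim}
# Divergence rates of Delta(Lambda) = int_1^Lam (u^2-1)^((n-3)/2) du.
import numpy as np
from scipy.integrate import quad

def Delta(Lam, n):
    # substitute u = cosh(v); this removes the n = 2 endpoint
    # singularity at u = 1
    val, _ = quad(lambda v: np.sinh(v)**(n - 2), 0.0, np.arccosh(Lam))
    return val

for Lam in [1e2, 1e3, 1e4]:
    print(Lam, Delta(Lam, 2)/np.log(Lam),
          [Delta(Lam, n)/Lam**(n - 2) for n in (3, 4, 5)])
\end{verbatim}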
\subsubsection{$s_1$, $s_2$ Timelike}
In this case, the term under square brackets in (\ref{I}) will be
negative for
all values of $\xi$ greater than some $\xi_0$, which will be
the upper limit of the domain of integration. The integral $I$ is
therefore
convergent. In the present case we define
\begin{equation}
u=-\frac{s_1^2 \xi + t_1t_2}{r_1r_2} \
\end{equation}
and (\ref{I}) takes the form
\begin{equation}
I=\frac{\delta (s_1^2,s_2^2)}{s_1^{n-1}}
\ \int^{1} du (1-u^2)^{ \frac{n-3}{2} },
\end{equation}
where the lower limit of integration will be the maximum of $-1$ and
$$
u_0=\frac{-(s_1^2 +t_1t_2)}{r_1r_2}.
$$
As shown in the Appendix, $u_0$ can be either greater than $1$ (when
$t_1$
and
$t_2$ have different sign) or less than $-1$
(when $t_1$ and $t_2$ have the same
sign). Thus, as expected, the integral $I$ vanishes when (say) $x_1$
lies in the
future lightcone and $x_2$ lies in the past.
If, on the other hand, $t_1$ and $t_2$ have the same
sign, then
\begin{equation}
\label{inside}
I=\Sigma\frac{\delta (s_1^2,s_2^2)}{s_1^{n-1}} \ ,
\end{equation}
where
\begin{equation}
\Sigma= \int_{-1}^{1} du (1-u^2)^{ \frac{n-3}{2} } .
\end{equation}
For $n>1$, this integral is convergent, and its value is
\begin{equation}
\Sigma={\frac{{\sqrt{\pi }}\,\Gamma({\frac{n-1}{2}})}
{\Gamma({\frac{n}{2}})}} \ .
\end{equation}
\subsection{A candidate for the rigging map}
At this point we have succeeded in regularizing the divergent integrals
that arise when averaging distributional states over $SO_c(n,1)$. Take
now
$\Phi \subset
{\cal H}_{\rm aux}$ to be the set of functions with compact support
not intersecting the light cone. It follows from our work above that the
averaging
procedure converges for states $\phi$ supported
inside the lightcone. Let us now consider the case of $x_1,x_2$
outside the light cone. Note that, given two such points $x_1$ and
$x_2$,
the expression (\ref{u-def}) for $u$ defines a
function
$u(h)$ for $h \in H_+^n$. To define the physical inner product of states
supported outside the lightcone, we will ``renormalize'' the averaging
integrals
by dividing by $\Delta(\Lambda)$. Let us define an object $Q$ by
the expression:
\begin{equation}
\langle x_1|Q|x_2\rangle = \lim_{\Lambda\rightarrow\infty}
\frac{\int_{g\in G_{\Lambda}(x_1,x_2)}\langle x_1|U(g)|x_2 \rangle \ dg
}{\Delta(\Lambda)} \ , \label{norm}
\end{equation}
where $G_{\Lambda}(x_1,x_2)$ is the compact subset of $G$ given by
$g = hk$, $k \in K$, $h \in H_+^n$ with $h$ such that $u(h) <
\Lambda$.
The results of the previous subsection show
that this expression converges to a distribution
in $x_1,x_2$ given by
\begin{equation}
\langle x_1|Q |x_2\rangle = \frac{1}{|x_1|^{n-1}} \delta(x_1^2,{x}_2^2)
\label{normx}
\end{equation}
for $x_1,x_2$ outside the light cone.
While this has the same form as the group averaging result
(\ref{inside})
inside the light cone, we should recall that it is in reality not the
same
object; the limit (\ref{norm}) would vanish for any $x_1,x_2$ inside the
light cone. Thus, we have a domain $\Phi_1$ of functions of compact
support inside the light cone and a domain $\Phi_2$ of functions of
compact
support outside the light cone with $\Phi = \Phi_1 \oplus \Phi_2$.
On $\Phi_1$, we have a rigging map $\eta_1$
defined by group averaging. For $\phi_2 \in \Phi_2$, we have a
candidate rigging map $\eta_2$ defined by
\begin{equation}
\eta_2 | \phi_2 \rangle = \langle \phi_2 | Q \ ,
\end{equation}
where we have established that this expression defines an element of
$\Phi^*$,
the algebraic dual of $\Phi$, as is appropriate for a rigging map
\cite{ALMMT}.
\section{Rigging Maps}
\label{rig}
In section \ref{ga}, we used a `renormalization' procedure to arrive
at a candidate rigging map $\eta_2,$ for the region
outside the light cone:
$[\eta_2
| x_1 \rangle ] (|x_2 \rangle) = |x_1|^{1-n}\delta(x_1^2 ,x_2^2)$ for
$x_1^2, x_2^2 > 0$. This certainly appears to be a reasonable choice
(it
gives the `natural' inner product on physical states), but
we should take care to check that it does indeed fulfill the
requirements
of refined algebraic quantization.
It is clearly real, symmetric, and positive. Thus,
the only remaining requirement \cite{ALMMT}
is that $\eta_2$ commute with the action
of the observables. For the obvious observables (the invariant distance
$s^2$
from the origin or observables associated with the vector field
${{\partial}
\over {\partial s}}$) this is again trivial.
However, the definition
of observable used in RAQ is rather subtle,
so that we cannot be sure that this list is exhaustive.
Thus, a proof is required to show that $\eta_2$ commutes with the
observables.
This is given by a computation in subsection A below.
We will then show in subsection B
that any map of the form $a_1 \eta_1 \oplus a_2 \eta_2$
(for $a_1, a_2 \in {\bf R}^+$) is a rigging map, where $\eta_1$
denotes group averaging on states supported inside the light cone.
By this notation we mean that, for $\phi_1, \tilde \phi_1 \in \Phi_1$
and
$\phi_2, \tilde \phi_2 \in \Phi_2$,
\begin{equation}
[(a_1 \eta_1 \oplus a_2 \eta_2) (\phi_1 + \phi_2)](\tilde \phi_1 +
\tilde \phi_2) = a_1 [\eta_1 \phi_1](\tilde \phi_1) + a_2 [\eta_2
\phi_2](
\tilde \phi_2).
\end{equation}
The statement that $a_1 \eta_1 \oplus a_2 \eta_2$ is a rigging map
again requires a proof that it commutes
with the observables. We proceed by deriving a general result:
Given a suitable decomposition
$\Phi = \Phi_1 \oplus \Phi_2$ and rigging maps $\eta_1$ and $\eta_2$
on $\Phi_1$ and $\Phi_2$ separately, the fact that
group averaging converges on $\Phi_1$ but not on $\Phi_2$ means that
$a_1
\eta_1 \oplus a_2 \eta_2$ is a rigging map. Along the way, we come to
an improved understanding of the interaction between RAQ and
superselection rules.
\subsection{$\eta_2$ is a rigging map on $\Phi_2$}
\label{exp}
To show that $\eta_2$
is a rigging map on $\Phi_2$, we must verify that $\eta_2$ commutes
with the action of observables on $\Phi_2$.
As we will see, the proof is trivial for the groups
$SO_c(1,1)$ and $SO_c(2,1)$, but a calculation is required for
$SO_c(n,1)$
when $n$
is larger than $2$. For $SO_c(1,1)$,
group averaging in fact converges so that the associated $\eta_2$ is
clearly a rigging map. For $SO_c(2,1)$,
taking the leading order divergence of (\ref{I}) gives a result
proportional
to our candidate rigging map (\ref{normx}).
Thus, the cut-off may be imposed in a
state-independent manner. It follows that the
candidate map may be written $\eta_2 = \lim_{\Lambda \rightarrow \infty}
\eta_{2,\Lambda}$, where \begin{equation}
\eta_{2,\Lambda} = {{\int_{K_\Lambda} dg \ U(g)} \over {N(K_\Lambda)}},
\end{equation}
for a sequence $K_\Lambda$ of compact subsets of $SO_c(2,1)$
given by elements of the form (\ref{cd}), (\ref{h}) with $\lambda <
\Lambda$
and an appropriate
function $N$.
As a result, any observable ${\cal O}$ commutes with $\eta_{2,\Lambda}$
for all $\Lambda$. Using the fact that each $\phi \in \Phi$ acts
continuously on $\Phi'$, we may pass to the limit.
It then follows that ${\cal O}$ commutes with $\eta_2$.
For $n \ge 3$, the limit by which $\eta_2$ is defined is more
complicated as
we must use the sets $G_\Lambda(x_1,x_2)$ which do in fact depend on the
points $x_1$ and $x_2$. Thus, the fact that ${\cal O}$ commutes with
$U(g)$ no longer guarantees that it commutes with a regularized rigging
map.
As a result, we need to explicitly verify that $\eta_2$ commutes with
the action of observables for the cases $n \ge 3$.
It will be convenient to label points outside the light cone with the
invariant
distance $s$ from the origin and a point $\theta$ on the unit
hyperboloid
$x^2 = + 1$. We introduce the distributional states
$|s, \theta \rangle = s^{n/2} |x(s,\theta) \rangle$ satisfying
$\langle s_1,\theta_1 | s_2, \theta_2 \rangle = \delta(s_1,s_2)
\delta(\theta_1,\theta_2)$ where $\int d\theta \ \delta
(\theta,\theta_0) = 1$
for the invariant measure $d\theta$ on the hyperboloid.
For any observable
${\cal O} : \Phi_2 \rightarrow \Phi_2$, both $\eta_2 \circ {\cal O}$
and ${\cal O} \circ \eta_2$ define maps from $\Phi_2$ to its algebraic
dual, $\Phi_2^*$. Thus,
given $\phi, \psi \in \Phi_2$, we have
$[{\cal O} \circ \eta_2 (\phi)](\psi) \in {\bf
C}$ (where {\bf C} denotes the complex numbers),
and similarly for $\eta_2 \circ {\cal O}$. Thus, the objects
$[{\cal O} \circ \eta_2 (|x_1 \rangle)](|x_2 \rangle)$ and
$[\eta_2 \circ {\cal O} (|x_1 \rangle)](|x_2 \rangle)$ both
define distributions on $M^{n,1} \times M^{n,1}$.
If these distributions coincide, then
$\eta_2$ commutes with
${\cal O}$.
In terms of our states $|s,\theta \rangle$, the map $\eta_2$ can be
written
\begin{equation}
\label{K}
\eta_2 |\phi \rangle = \langle \phi| Q =
\int ds \left( \int d\theta \langle \phi
|s, \theta \rangle \right) \left( \int d\theta \langle
s, \theta | \right).
\end{equation}
The distributions are therefore:
\begin{eqnarray}
[{\cal O}^\dagger \circ \eta_2 (|x_1 \rangle)](|x_2 \rangle) &=:&
\langle x_1| Q {\cal O} | x_2 \rangle
\cr
[\eta_2 \circ {\cal O}^\dagger (|x_1 \rangle)](|x_2 \rangle) &=:&
\langle x_1| {\cal O} Q | x_2 \rangle.
\end{eqnarray}
Let us denote by ${\cal A}_{2,2}$ the set of observables that map
$\Phi_2$
to $\Phi_2$.
Using the fact\footnote{It is not necessarily true that $A^{\dagger
\dagger}
= A$ for every $A \in {\cal A}_{2,2}$. However,
it must be true that the domain
of $A^{\dagger \dagger}$ includes $\Phi_2$, and that $A$ and $A^{\dagger
\dagger}$ agree when restricted to $\Phi_2$. As a result,
$A$ and $A^{\dagger \dagger}$
may be identified for our purposes.} that $\dagger$ is
an involution on ${\cal A}_{2,2}$, showing
that $\eta_2$ commutes with the observables is equivalent to showing
$ \langle x_1| Q {\cal O} | x_2 \rangle =
\langle x_1| {\cal O} Q | x_2 \rangle$ for all ${\cal O} \in {\cal
A}_{2,2}$.
We now begin a computation. Let us pick a reference point
$\theta_0$ on the unit hyperboloid $x^2 = +1$ and, for any other point
$\theta$
on the unit hyperboloid, an $SO_c(n,1)$ element $g(\theta,\theta_0)$
that moves $\theta_0$ to $\theta$. Also, note that since the
measure $d\theta$ is invariant under $SO_c(n,1)$, we have $ U(g) Q = Q =
QU(g)$ for any $g\in G$. We may therefore write
\begin{eqnarray}
\label{steps}
\langle s_1, \theta_1 | Q {\cal O} | s_2, \theta_2 \rangle
&=&
\langle s_1, \theta_1 | Q {\cal O} U(g(\theta_2,\theta_0))
| s_2, \theta_0 \rangle
\cr
&=& \langle s_1, \theta_1 | Q
U(g(\theta_2,\theta_0))
{\cal O} | s_2, \theta_0 \rangle \cr
&=& \int d \theta \langle s_1, \theta | {\cal O} |s_2, \theta_0 \rangle,
\end{eqnarray}
where, in the last line, we have absorbed $U$ into $Q$ and used the
explicit form (\ref{K}) of $Q$.
It will be useful now to set $\theta_0 = (0,1,0,\ldots,0)$, and split
the domain
of integration in (\ref{steps}) into two regions, $F$ and $B$, the
``front'' and
the ``back'' of the unit hyperboloid, defined by the sign of
$x^1$, {\it i.e.},
$\theta \in F$ if $x^1\ge 0$, $\theta \in B$ if $x^1 \le 0$. Now, given
any
state $| s,\theta \rangle$ in $F$ we can always write it as
$U(\theta,\theta_0)| s, \theta_0\rangle$, where $U(\theta,\theta_0)$ is
a
Lorentz transformation associated\footnote{The area element of a plane
picks out a 2-form, and therefore the generator of a one-parameter
subgroup
of the Lorentz group.} with the plane defined by the origin of
coordinates and the points $\theta$ and $\theta_0$.
Note that if $\theta\in F$, the
intersection of this plane with the unit hyperboloid will always define
a
geodesic passing through $\theta$ and $\theta_0$ (there may be a
disconnected
geodesic as well). The inverse Lorentz transformation
$[U(\theta,\theta_0)]^{-1}$ must take $\theta_0$ to a point located
symmetrically with respect to $\theta$ along this geodesic. We may
write
this as
\begin{equation}
U^{-1}(\theta,\theta_0) | s,\theta_0 \rangle = R_1 | s,\theta \rangle \
,
\label{uaction}
\end{equation}
where $R_1$ is the reflection through the $x^1$ axis. This reflection
acts
on any point $x$ by changing the sign of each coordinate except $x^1$.
Similarly, we define the other reflection operators $R_\mu$.
The integral in (\ref{steps}) now takes the form
\begin{equation}
\int d \theta \langle s_1, \theta | {\cal O} |s_2, \theta_0 \rangle
=\int_F d \theta \langle s_1, \theta | {\cal O} |s_2, \theta_0 \rangle +
\int_B d \theta \langle s_1, \theta | {\cal O} |s_2, \theta_0 \rangle \
.
\label{sum}
\end{equation}
Since the measure $d\theta$ is invariant under reflections and since
$R_1$
preserves the distinction between front and back,
for the integral over $F$ we have
\begin{eqnarray*}
\int_F d \theta \langle s_1, \theta | {\cal O} |s_2, \theta_0 \rangle
&=&
\int_F d \theta \langle s_1, \theta | R_1 {\cal O}
|s_2, \theta_0 \rangle
=
\int_F d \theta \langle s_1, \theta_0 |U(\theta,\theta_0) {\cal O} |s_2,
\theta_0 \rangle \\
&=& \int_F d \theta \langle s_1, \theta_0 | {\cal O}
U(\theta,\theta_0)|s_2,
\theta_0 \rangle =\int_F d \theta \langle s_1, \theta_0| {\cal O}
|s_2,
\theta \rangle \ ,
\end{eqnarray*}
where we have used (\ref{uaction}), the fact that $U(\theta,\theta_0)$
commutes with ${\cal O}$, and the definition of $U(\theta,\theta_0)$.
For the integral over $B$ we
first note the following identities:
\begin{eqnarray}
U(\theta,\theta_0) J_{12}(\pi)|\theta_0\rangle &=& I |\theta\rangle \\
J_{12}(\pi) U^{-1}(\theta,\theta_0) |\theta_0\rangle &=& R_2
|\theta\rangle \ ,
\end{eqnarray}
where $I$ is a reflection through the origin, changing the sign of all
coordinates and therefore exchanging front and back. The symbol
$J_{12} (\pi)$ denotes a rotation by $\pi$ in the $(x^1,x^2)$--plane.
In this case we
have,
\begin{eqnarray*}
\int_B d \theta \langle s_1, \theta | {\cal O} |s_2, \theta_0 \rangle
&=&
\int_F d \theta \langle s_1, \theta | I {\cal O}
|s_2, \theta_0 \rangle
=
\int_F d \theta \langle s_1, \theta_0
|J_{12}(\pi)U^{-1}(\theta,\theta_0)
{\cal O} |s_2,
\theta_0 \rangle \\
&=& \int_F d \theta \langle s_1, \theta_0 | {\cal O} R_2 |s_2,
\theta \rangle =\int_B d \theta \langle s_1, \theta_0| {\cal O} |s_2,
\theta \rangle \ .
\end{eqnarray*}
It follows that we have $\int d\theta \langle s_1, \theta | {\cal O} |
s_2,\theta_0 \rangle = \int d\theta \langle s_1,\theta_0 | {\cal O}|
s_2,\theta \rangle$ and $Q{\cal O}={\cal O}Q$.
Thus, we have shown that $\eta_2$ commutes with any observable
${\cal O} \in {\cal A}_{2,2}$.
\subsection{A Superselection Rule}
\label{mix}
Here, we wish to show that $a_1 \eta_1 \oplus a_2 \eta_2$ is a rigging
map
on $\Phi_1 \oplus \Phi_2$, where $\Phi_1$ is the space of smooth
functions supported on compact sets {\it inside} the light cone. Again,
the
main issue is to show that our putative rigging map commutes with the
relevant set of observables.
Let us refer to the Hilbert space associated with functions
supported inside the light
cone as ${\cal H}_1$, and that associated with functions outside the
light
cone as ${\cal H}_2$, so that we have ${\cal H}_{\rm aux} = {\cal H}_1
\oplus
{\cal H}_2$. Then, since the associated projectors $P_1$ and $P_2$
are observables, we may use them to split the algebra ${\cal A}$ of
observables into four linear spaces: ${\cal A} = \bigoplus_{i,j \in
\{1,2\}}
{\cal A}_{i,j}$, where $A \in {\cal A}_{i,j}$ maps $\Phi_i$ into
$\Phi_j$.
The observables in ${\cal A}_{1,1}$ need only commute with $\eta_1$.
But, $\eta_1$ is given by convergent group averaging, so this is
satisfied. Similarly, observables in ${\cal A}_{2,2}$ need only
commute
with $\eta_2$, and this was checked in subsection A. Thus, we need only
consider the observables in ${\cal A}_{1,2}$ and ${\cal A}_{2,1}$.
Lest the reader think that ${\cal A}_{1,2}$ and ${\cal A}_{2,1}$
are clearly empty and the result is trivial, we recall from \cite{GM2}
that since group averaging converges both inside and outside the light
cone for $SO_c(1,1)$, a nontrivial element of ${\cal A}_{1,2}$ in that
case is given by the expression
\begin{equation}
\int dg \ U(g) |\phi_2 \rangle \langle \phi_1 | U(g^{-1})
\end{equation}
for any $\phi_1 \in \Phi_1$ and any $\phi_2 \in \Phi_2$. For the case
of $SO_c(n,1)$ with $n > 1$, it is unclear whether ${\cal A}_{1,2}$ is
in
fact empty, but in any case our proof below is sufficient.
We begin with a Lemma.
{\bf Lemma.} {\it Suppose that we have
1) a unitary representation
of a group $G$ on a Hilbert space ${\cal H}_{\rm aux}$,
2) a decomposition ${\cal H}_{\rm aux} = {\cal H}_1 \oplus {\cal H}_2$
that reduces the group action; that is, for which both ${\cal H}_1$
and ${\cal H}_2$ are invariant under the group action, and
3) a dense subspace $\Phi$ of ${\cal H}_{\rm aux}$ whose intersection
$\Phi_1$ with ${\cal H}_1$ has the property that, for all $\phi_1,
\phi_1{}'
\in \Phi_1$, the matrix elements $\langle \phi_1 | U(g) | \phi_1{}'
\rangle$
define an $L^1$ function on the group $G$ with respect to some measure
$dg$
on $G$.
Let us denote the intersection $\Phi \cap {\cal H}_2$ by $\Phi_2$ and
define ${\cal A}_{i,j}$ as above. In this case, for any
state $\phi_2$ of the form ${\cal O}
\phi_1$ for ${\cal O} \in {\cal A}_{2,1}$
and $\phi_1 \in \Phi_1$, the matrix elements $\langle \phi_2 |U(g)|
\phi_2 \rangle$ are also $L^1$ with respect to $dg$. }
\noindent{\it Proof.\hskip 0.35em \relax}
To see this, we simply choose such ${\cal O}, \phi_1, \phi_2$.
We have
\begin{equation}
\langle \phi_2 | U(g) | \phi_2 \rangle = \langle \phi_1 | U(g)
{\cal O}^\dagger {\cal O} | \phi_1 \rangle.
\end{equation}
Since ${\cal O}^\dagger {\cal O}$ maps $\Phi_1$ to $\Phi_1$, these
matrix elements define an $L^1$ function on $G$. \hfill$\Box{}$
Note that the measure to which this Lemma refers need not be the one
associated with group averaging. In this way, the Lemma shows that
if the fall-off rate of $\langle \phi_1 |U(g) | \phi_1 \rangle$ can
be bounded in some uniform way on $\Phi_1$,
then this same bound also applies to ${\cal O} \phi_1$. Clearly, any
other
property of these matrix elements on $\Phi_1$ also carries over to
${\cal O} \phi_1$.
We are now in a position to prove our main result:
{\bf Theorem.} {\it Suppose that conditions (1-3) of the above
Lemma hold with respect to the measure for group averaging and let
$\eta_1$
denote the group averaging rigging map on $\Phi_1$. Suppose also that
4) Given states $\phi_2, \phi_2' \in \Phi_2$ such that
$f(g) := \langle \phi_2 |U(g) | \phi_2' \rangle$ is $L^1$ with respect
to
the group averaging measure,
the group average of this quantity is
zero.
5) There is a rigging map $\eta_2$ on $\Phi_2$ which annihilates all
states $\phi_2$ in $\Phi_2$ for which $\langle \phi_2|U(g)|\phi_2
\rangle$ is $L^1$ with respect to the group averaging measure.
\noindent
Then, for any $a_1,a_2 \in {\bf R}$, the map $a_1 \eta_1 \oplus a_2
\eta_2$
is a rigging map.}
Conditions (4) and (5) may seem a bit awkward. However, they
are much easier to verify in practice than the (cleaner) condition
that $\Phi_2$ contains no non-trivial $L^1$ states. In particular,
for our choices of $\Phi_1, \Phi_2 \subset L^2(M^{n,1})$,
the results of section \ref{ga} show
that our case of $G= SO_c(n,1)$ (for $n >1$)
satisfies the assumptions of this theorem. This follows
since group averaging clearly diverges for
any state $|\phi \rangle = \int ds \ d\theta \ \phi(s,\theta) |s, \theta
\rangle \in \Phi_2$, except perhaps when the integral
$\int d \theta \ \phi(s,\theta)$ vanishes
for all $s$. However, in this case $\eta_2 | \phi \rangle = 0$.
\noindent{\it Proof.\hskip 0.35em \relax}
It is clear that $\eta = a_1 \eta_1
\oplus a_2 \eta_2$ commutes with the action of ${\cal A}_{1,1}$ and
${\cal A}_{2,2}$. Thus, we need only consider the operators in ${\cal
A}_{1,2}$ and their adjoints in ${\cal A}_{2,1}$. So, let
${\cal O}: \Phi_1 \rightarrow \Phi_2$ and ${\cal O}^\dagger :
\Phi_2 \rightarrow \Phi_1$. Recalling that $\dagger$ defines
a bijection between ${\cal A}_{1,2}$ and ${\cal A}_{2,1}$ (see footnote
3),
the map $\eta$ will be a rigging map
iff
\begin{equation}
\label{sym}
[\eta_2 {\cal O} \phi_1] (\phi_2) = [\eta_1 \phi_1] ({\cal O}^\dagger
\phi_2).
\end{equation}
Now, our Lemma tells us that ${\cal O} \phi_1$ is an $L^1$ state in
$\Phi_2$.
Thus, by condition (5), $\eta_2$ annihilates this state and the
left-hand
side of (\ref{sym}) vanishes. The right-hand side is given by group
averaging:
\begin{equation}
\label{lastexp}
[\eta_1 \phi_1] ({\cal O}^\dagger \phi_2) = \int_G dg \langle \phi_1 |
U(g) {\cal O}^\dagger |\phi_2 \rangle.
\end{equation}
Note that the function
$\langle \phi_1 | U(g) {\cal O}^\dagger |\phi_2 \rangle$ is $L^1$ since
${\cal O}^\dagger|\phi_2 \rangle \in \Phi_1$.
Since ${\cal O} |\phi_1 \rangle \in \Phi_2$,
expression (\ref{lastexp}) vanishes by condition (4) and we are done.
\hfill$\Box{}$
Note that while we were unable to decide whether ${\cal A}_{2,1}$ was
empty (and thus whether ${\cal H}_1$ and ${\cal H}_2$ are superselected
in ${\cal H}_{aux}$), we have shown that any ${\cal O}$ in ${\cal
A}_{1,2}$
acts as the zero operator on the physical Hilbert space so that a
superselection rule must exist at the physical level. It is
clear that, whether or not a superselection rule exists in ${\cal
H}_{\rm
aux}$, the ambiguity in the choice of rigging map directly corresponds
to
superselection rules on the physical phase space\footnote{See, however,
\cite{tpo} for subtleties that may arise when further constraints are
imposed.}.
\section{Discussion}
\label{Disc}
In the above work, we considered a particular regularization of the
rigging
map given by choosing compact subsets of the gauge group.
While we were able to `renormalize' our group averaging map, the
limiting procedure (\ref{norm}) defining the
physical inner product
$[\eta_2(\phi_2)](\phi_2')$ depended on the states $\phi_2, \phi_2'$
in a rather complicated way.
This necessitated the separate proof in section IIIA that our limit
did in fact define a rigging map for the case $SO_c(n,1)$ with $n > 2$,
and
is not particularly encouraging for the development of a general
algorithm. One might expect similar results from other
renormalization procedures (such as the one suggested
in \cite{JK}) which are not manifestly symmetric
under the $G$ action\footnote{It might be of interest
to determine if the scheme of \cite{JK} requires `state-dependent
regularization' in the case where group averaging converges.}.
However, suppose for the moment that we had used a
state-independent
renormalization scheme to define a map $\alpha : \Phi \rightarrow
\Phi^*$ of the form: $\alpha = \lim_{\Lambda \rightarrow \infty}
\alpha_{\Lambda}$ where
\begin{equation}
\label{newmap}
[\alpha_{\Lambda} (\phi)](\phi') =
N(\Lambda) \int_{G_\Lambda} dg \ \langle \phi | U(g) | \phi' \rangle,
\end{equation}
with
$G_\Lambda \subset G$ containing those elements of the form
$\{k b(\lambda) k' \}$ for $k, k' \in K$, $b(\lambda)$ a boost
of magnitude $\lambda$ in the $0,1$ plane, and $\lambda < \Lambda$.
We may take $N(\Lambda)$
to be defined such that $[\alpha_\Lambda
(\phi_0)](\phi_0) = 1$ for some reference
state $\phi_0$. This leads to imposing a cutoff in terms of the
integration
variable $\xi$ of (\ref{I}) instead of in terms of $u$. As one can see
from (\ref{I}) and (\ref{normx}),
this new map can be related to the rigging map $\eta_2$ by
\begin{equation}
\label{comp}
[\alpha(|x_1\rangle)] (|x_2 \rangle) = \frac{s_1^{2(n-2)}}{r_1^{n-2} r_2^{n-2}}\,
[\eta_2(|x_1\rangle)] (|x_2 \rangle),
\end{equation}
for $x_1,x_2$ outside the light cone.
The map $\alpha$ is not a rigging map as it does not solve the
constraints.
This is evident from the lack of Lorentz invariance in (\ref{comp}).
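To make this concrete, consider a small toy computation (our own
illustration, not taken from the analysis above): two `states' whose
boost matrix elements diverge at the same exponential rate
$e^{(n-2)\lambda}$ but with different prefactors, mimicking the
large-boost behavior behind (\ref{comp}); all names and values here are
hypothetical.
\begin{verbatim}
import numpy as np

n = 4   # stands in for the n of SO_c(n,1); an assumed toy value

def f(lam, c):
    # Schematic large-boost matrix element, f ~ c exp((n-2) lam).
    return c*np.exp((n - 2.0)*lam)

def alpha_cutoff(c, c_ref, cutoff, n_pts=4001):
    lam, dlam = np.linspace(0.0, cutoff, n_pts, retstep=True)
    num = np.sum(f(lam, c))*dlam       # cut-off group average
    den = np.sum(f(lam, c_ref))*dlam   # fixes N via a reference state
    return num/den

for cutoff in (5.0, 10.0, 20.0):
    print(cutoff, alpha_cutoff(2.0, 1.0, cutoff))
# The ratio tends to 2: the limit is finite but retains the
# state-dependent prefactor, so the map is well defined yet
# need not be group invariant.
\end{verbatim}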
Note, however, that since states $\phi_2 \in \Phi_2$ are
associated with functions
supported on compact sets outside the light cone, the coefficient
$\frac{s_1^{2(n-2)}}{r_1^{n-2} r_2^{n-2}}$ is strictly positive
and is bounded on any such compact set. As a result, if we
restrict attention to the action of $\alpha$ and $\eta_2$ on positive
functions, the
maps $\alpha$ and $\eta_2$ have the same domain and the same kernel.
In general, a study of the maps (\ref{newmap}) for various
choices of $N(\Lambda)$ may lead to a detailed knowledge of
superselection sectors, as we now discuss.
Suppose that we have $\Phi = \Phi_1 \oplus \Phi_2$ and that
the two spaces are in some sense
characterized by different rates of divergence of the limit
(\ref{newmap}), say
with the integral diverging faster on $\Phi_2$ than $\Phi_1$.
One might expect that through suitable renormalization one can define
rigging maps $\eta_1$ and $\eta_2$ on $\Phi_1$ and $\Phi_2$, with
$\eta_2$
requiring a stronger renormalization than $\eta_1$. In analogy
with the Lemma of the last subsection, we expect the action of $\eta_1$
can be defined on the image of $\Phi_1$ under any observable. We also
expect a parallel with the subsequent theorem. Let us replace
assumptions $(4)$ and $(5)$ with:
$4'$) Given states $\phi_2, \phi_2{}' \in \Phi_2$
such that the limit defining
$[\eta_1(\phi_2)](\phi_2{}')$ converges, the limit of this quantity is
zero.
$5'$) There is a rigging map $\eta_2$ on $\Phi_2$ which annihilates all
states $\phi_2$ in $\Phi_2$ for which the limit defining
$[\eta_1(\phi_2)]
(\phi_2)$ converges.
\noindent
Since the map $\eta_2$ involves a stronger renormalization than
$\eta_1$,
we may expect property $(5')$ to hold. On the other hand,
one might arrange for property $(4')$ to hold by simply reassigning to
$\Phi_1$
any state $\phi_2$ for which the limit defining $[\eta_1(\phi_2)](\phi_2)$
converges to a nonzero value. Under these conditions, the argument
proceeds in exact parallel with section IIIB. We conclude that
$a_1 \eta_1 \oplus a_2 \eta_2$ is a rigging map, and that the images of
$\eta_1$ and $\eta_2$ are superselected in the physical Hilbert space.
In this way,
it may be generally true that spaces of functions for which the
group averaging integral diverges at different rates are superselected
in the physical Hilbert space.
However, certain subtleties remain to be explored. For example,
let us return for a moment to the case of $SO_c(n,1)$ acting on
$L^2(M^{n,1})$.
There are of course functions supported {\it inside} the light cone
for which group averaging does not converge. These are simply functions
whose support is not compact. Thus, one might conceivably attempt to
renormalize the group averaging map on a space of such functions
associated
with the interior of the light cone. In this case, it is {\it not}
clear
that a physical superselection rule results. This issue, and others, we
leave
for future investigation.
\acknowledgements
We would like to thank Laurent Freidel for enlightening
discussions. This work was supported in part by National Science
Foundation
grants PHY94-07194 and PHY97-22362, and by
funds from Syracuse University. We also thank Domenico Giulini for
comments
on an earlier draft of the paper.
\section{introduction}
Fine-grained rims (FGRs) are frequently found around chondrules and calcium-aluminum-rich inclusions (CAIs) in primitive chondrites.
FGRs are distinguishable from the interchondrule matrix in optical and scanning electron microscopy images as they have different texture and composition, and the typical thickness of FGRs is on the order of 10--100 \si{\micro}m \citep[e.g.,][]{1984PolRe..35..126M,2018E&PSL.481..201H}.
The physical mechanism that produced these rims is still under debate, and several scenarios have been suggested so far \citep[e.g.,][]{1992GeCoA..56.2873M,2006GeCoA..70.1271T,2012GeCoA..98....1T,2019GeCoA.264..118L}.
The majority of studies assumed that FGRs were formed via the accretion of dust particles onto the surfaces of chondrules/CAIs in the turbulent solar nebula \citep[e.g.,][]{1992GeCoA..56.2873M,1998Icar..134..180M,2004Icar..168..484C,2019Icar..321...99X,2021Icar..35414053X,2021Icar..36714538M,2022Icar..37414726K}.
This nebular scenario naturally reproduces the positive correlation between the rim thickness and the chondrule radius, which is reported for FGRs around chondrules in CM chondrites \citep[e.g.,][]{1992GeCoA..56.2873M,2018E&PSL.481..201H,2021GeCoA.295..135Z}.
However, \citet{2019GeCoA.264..118L} pointed out that the nebular scenario has difficulty explaining the low porosity of FGRs.
Assuming that collisions between chondrules and fine grains occurred in the turbulent solar nebula, the impact velocity would be approximately 1 m/s or lower, and porous dust rims with a porosity of approximately 60\% would be formed \citep{2013GeCoA.116...41B}.
In addition, dust grains would have turned into fluffy aggregates prior to accretion onto chondrules when the grain size is smaller than 1 \si{\micro}m \citep[e.g.,][]{2017ApJ...846..118A,2019ApJ...887..248M,2022Icar..37414726K}.
The typical grain size of FGRs in primitive chondrites is indeed submicron \citep[e.g.,][]{2000GeCoA..64.3263L,2008GeCoA..72..602C,2021GeCoA.295..135Z}, although grain size might be subsequently modified by aqueous/thermal alteration processes.
Hence the structure of FGRs formed in the turbulent solar nebula would be highly porous, which seems to be inconsistent with the observed compact FGRs with low porosities of 10--20\% \citep[e.g.,][]{2006GeCoA..70.1271T}.
Alternatively, several studies investigated a scenario that FGRs were formed after accretion of chondrite parent bodies \citep[e.g.,][]{1993Metic..28..669S,2006GeCoA..70.1271T,2010GeCoA..74.4438T,2012GeCoA..98....1T}.
In the framework of this parent-body scenario, the FGRs are formed via aqueous/thermal alteration of host chondrules and/or via impact-induced compaction/fragmentation of the matrix material around chondrules \citep[see][and references therein]{2010GeCoA..74.4438T}.
The parent-body scenario can naturally explain the non-porous nature of FGRs, and this is one of the reasons why the parent-body scenario is still favored for the origin of FGRs.
However, another difficulty exists when we consider the parent-body scenario.
Based on the fabric analysis by high-resolution electron backscatter diffraction, \citet{2011NatGe...4..244B} found that FGRs were exposed to a spherically symmetric stress field while the matrix exhibits a bulk uniaxial stress field.
This result indicates that FGRs were compressed prior to rimmed chondrules being incorporated into chondrite parent bodies.
Moreover, \citet{2013Icar..225..558B} revealed that impact-induced compaction cannot form non-porous FGRs, based on their impact experiments into mixtures of chondrule analogs and fine dust particles.
To solve these problems, \citet{2019GeCoA.264..118L} proposed a novel idea for the origin of FGRs: high-speed collisions between chondrules and fine dust grains called the {\it kinetic dust aggregation} process.
The kinetic dust aggregation is also known as the aerosol deposition method \citep[e.g.,][]{akedo2006aerosol,akedo2008room,akedo2008aerosol,2014APExp...7c5501J,hanft2015overview} in the field of ceramic coating technologies.
Experimental studies revealed that (sub)micron-sized ceramic particles can stick to a ceramic substrate in a vacuum, and the impact velocity for sticking is approximately 0.1--1 km/s \citep[see][and references therein]{hanft2015overview}.
Molecular dynamics simulations also confirmed that 10--100 nm-sized brittle nanoparticles can stick to the substrate when the impact velocity is on the order of 0.1--1 km/s \citep[e.g.,][]{2014JTST...23..541D}.
The resulting dust layers formed via kinetic dust aggregation have low porosity and are fine grained, as illustrated in Figure \ref{fig:Liffman}.
Therefore, we can reproduce the observed structure of FGRs if they are formed via the kinetic dust aggregation process, which should be related to chondrule-forming supersonic events.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{fig_Liffman}
\caption{
Illustration of the fracturing and compaction of dust particles during kinetic dust aggregation.
Note that the maximum/minimum velocities for adhesion shown in this figure (100 and 550 m/s) are for the case of (sub)micron-sized yttrium iron garnet (${\rm Y}_{3}{\rm Fe}_{5}{\rm O}_{12}$) particles, and these critical velocities should depend on the composition and grain size in reality.
Figure taken from \citet{2019GeCoA.264..118L} modified after \citet{2014APExp...7c5501J}.
}
\label{fig:Liffman}
\end{figure}
In this study, we examine the possibility of FGR formation via kinetic dust aggregation in chondrule-forming shock waves.
Shock waves caused by eccentric planetesimals in the gaseous solar nebula are one of the leading candidates for the chondrule-forming transient events \citep[e.g.,][]{1998Sci...279..681W,2004M&PS...39.1809C,2012ApJ...752...27M,2016ApJ...818..103M,2018ApJ...857...96M,2019ApJ...871..110N}.
When shock waves are created by undifferentiated icy planetesimals, fine dust grains would be released from the planetary surface due to evaporation of icy planetesimals \citep[e.g.,][]{2013ApJ...764..120T}.
The enrichment of fine dust grains in the chondrule-forming environment would be favored from a variety of perspectives \citep[e.g.,][]{2008Sci...320.1617A,2012GeCoA..78....1H,2015GeCoA.148..228T}.
Based on the oxygen isotope composition and oxidation state of chondrule olivine, \citet{2013GeCoA.101..302S} concluded that chondrules in CR chondrites formed under ${\rm H}_{2}{\rm O} / {\rm H}_{2}$ ratios between 10 and 1000 times the solar ratio \citep[see also][]{2015GeCoA.148..228T}.
As evaporating icy planetesimals can supply high ${\rm H}_{2}{\rm O}$ vapor pressure, our scenario is also consistent with the observed oxygen fugacity.
We consider the dynamics of chondrules behind the shock front and calculate the growth of FGRs via kinetic dust aggregation.
Although our numerical results are based on simple one-dimensional calculations, we found that non-porous FGRs with the thickness of 10--100 \si{\micro}m would be formed in shock waves around evaporating icy planetesimals.
\section{model}
\subsection{Outline}
The formation process of FGRs in shock waves is illustrated in Figure \ref{fig:schematic}.
We consider the accretion of FGRs onto bare chondrules.
When shock waves are caused by undifferentiated icy planetesimals, the dusty region would be formed behind the shock front due to evaporation of planetesimals.
We assume that fine dust grains released from planetesimals are dynamically coupled with the gas, while chondrules that entered the shock wave have a relative velocity with respect to the gas; consequently, fine dust grains collide with chondrules.
Then fine dust grains accrete onto chondrules if the impact velocity satisfies the condition for adhesion.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{schematic}
\caption{
Schematic of our fine-grained rim formation scenario.
Evaporation of undifferentiated icy planetesimals produces dusty regions behind the shock front.
As chondrules that entered the shock wave have a relative velocity with respect to fine grains, which are dynamically coupled with the gas, fine dust grains collide with chondrules and fine-grained rims will be formed in dusty regions.
}
\label{fig:schematic}
\end{figure}
We briefly explain the models and settings in the following sections.
In this study, we discuss the dynamics of chondrules in one-dimensional normal shocks.
The basic framework of our model is identical to that used in \citet{2019ApJ...877...84A}.
We calculate the evolution of the velocity and radius of rimmed chondrules, $v$ and $r$, simultaneously.
\subsection{Gas structure}
We do not calculate the dynamics of gas behind the shock front but assume a simple gas structure.
Then the dynamics of chondrules is simulated in the given gas flow.
We assume that the gas velocity with respect to the shock front, $v_{\rm g}$, and the gas density, $\rho_{\rm g}$, evolve as functions of the distance from the shock front, $x$:
\begin{equation}
v_{\rm g} =
\begin{cases}
\displaystyle v_{0} & {( x < 0 )}, \\
\displaystyle v_{0} + {\left( v_{\rm post} - v_{0} \right)} \exp{\left( {- x}/{L} \right)} & {( x \geq 0 )},
\end{cases}
\label{eq:vg}
\end{equation}
and
\begin{equation}
\rho_{\rm g} = \frac{v_{0}}{v_{\rm g}} \rho_{{\rm g}, 0},
\end{equation}
where $v_{0}$ is the pre-shock gas velocity with respect to the shock front, $v_{\rm post}$ is the post-shock gas velocity with respect to the shock front, $\rho_{{\rm g}, 0}$ is the pre-shock gas density, and $L$ is the spatial scale of the shock.
The spatial scale of the shock should be several times or much larger than the radius of planetesimals, $R_{\rm p}$ \citep[see][and references therein]{2019ApJ...877...84A}.
However, the value of $L$ should also depend on the physical properties of the solar nebula, e.g., the turbulence strength and the opacity.
Thus we regard $L$ as a parameter and consider three cases: $L = 3 \times 10^{4}\ {\rm km}$, $1 \times 10^{4}\ {\rm km}$, and $3 \times 10^{3}\ {\rm km}$.
The post-shock gas velocity, $v_{\rm post}$, is given by $v_{\rm post} = {\left[ {(\gamma - 1)}/{(\gamma + 1)} \right]} v_{0}$, where $\gamma$ is the ratio of specific heats.
We set $\rho_{{\rm g}, 0} = 5 \times 10^{-10}\ {\rm g}\ {\rm cm}^{-3}$, $v_{0} = 12\ {\rm km}\ {\rm s}^{-1}$, and $\gamma = 1.4$.
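For these parameter values, the post-shock gas velocity is $v_{\rm post} = {\left( 0.4/2.4 \right)} \times 12\ {\rm km}\ {\rm s}^{-1} = 2\ {\rm km}\ {\rm s}^{-1}$, corresponding to a compression ratio of $v_{0}/v_{\rm post} = 6$ far behind the shock front.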
Similarly, the temperature of the gas $T_{\rm g}$ is assumed as follows:
\begin{equation}
T_{\rm g} =
\begin{cases}
\displaystyle T_{0} & {( x < 0 )}, \\
\displaystyle T_{0} + {\left( T_{\rm post} - T_{0} \right)} \exp{\left( {- x}/{L} \right)} & {( x \geq 0 )}.
\end{cases}
\end{equation}
We assume that the pre-shock gas temperature is $T_{0} = 200\ {\rm K}$ and the post-shock gas temperature is $T_{\rm post} = 1600\ {\rm K}$.
The most probable molecular velocity $c_{\rm s}$ is given by $c_{\rm s} \equiv {(2 k_{\rm B} T_{\rm g} / m_{\rm g})}^{1/2} = 1.3\ {\left[ T_{\rm g} / {( 200\ {\rm K} )} \right]}^{1/2}\ {\rm km}\ {\rm s}^{-1}$, where $k_{\rm B} = 1.38 \times 10^{-16}\ {\rm erg}\ {\rm K}^{-1}$ is the Boltzmann constant and $m_{\rm g} = 3.34 \times 10^{-24}\ {\rm g}$ is the gas molecule mass, a value that corresponds to ${\rm H}_{2}$ gas.
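As a minimal numerical sketch (our own construction, in cgs units, using only the parameter values quoted above; the function name is ours), these assumed profiles can be encoded as:
\begin{verbatim}
import numpy as np

GAMMA = 1.4
V0    = 12.0e5                          # pre-shock gas speed [cm/s]
VPOST = (GAMMA - 1.0)/(GAMMA + 1.0)*V0  # post-shock speed (= 2 km/s)
RHOG0 = 5.0e-10                         # pre-shock density [g/cm^3]
T0, TPOST = 200.0, 1600.0               # gas temperatures [K]
KB, MG = 1.38e-16, 3.34e-24             # Boltzmann constant, H2 mass

def gas_state(x, L):
    """Gas velocity, density, temperature, and most probable
    molecular speed at distance x >= 0 behind the shock front."""
    decay = np.exp(-x/L)
    vg   = V0 + (VPOST - V0)*decay
    rhog = RHOG0*V0/vg
    Tg   = T0 + (TPOST - T0)*decay
    cs   = np.sqrt(2.0*KB*Tg/MG)
    return vg, rhog, Tg, cs
\end{verbatim}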
\subsection{Chondrule dynamics}
The velocity of chondrules with respect to the shock front, $v$, will change as follows \citep[e.g.,][]{1991Icar...93..259H}:
\begin{equation}
\frac{4 \pi}{3} r^{3} \rho \frac{{\rm d}v}{{\rm d}x} = - \frac{C_{\rm D}}{2} \pi r^{2} \rho_{\rm g} \frac{\left| v - v_{\rm g} \right|}{v} {\left( v - v_{\rm g} \right)},
\end{equation}
where $C_{\rm D}$ is the drag coefficient, $r$ is the chondrule radius, and $\rho = 3.3\ {\rm g}\ {\rm cm}^{-3}$ is the internal density of chondrules \citep{2004M&PS...39.1809C}.
Assuming that the temperature of chondrules is equal to gas temperature, the drag coefficient, $C_{\rm D}$, is given by
{\footnotesize
\begin{equation}
C_{\rm D} = \frac{2 \sqrt{\pi}}{3 s} + \frac{2 s^{2} + 1}{\sqrt{\pi} s^{3}} \exp{(- s^{2})} + \frac{4 s^{4} + 4 s^{2} - 1}{2 s^{4}} {\rm erf}{(s)},
\end{equation}
}where the Mach number, $s$, is given by $s \equiv {|v - v_{\rm g}|} / c_{\rm s}$.
Here we introduce the stopping length of chondrules, $l_{\rm stop}$.
For the case in which chondrules move in gas with supersonic velocities, $l_{\rm stop}$ is approximately given by
\begin{eqnarray}
l_{\rm stop} &\equiv& {\left( \frac{1}{v} {\left| \frac{{\rm d}v}{{\rm d}x} \right|} \right)}^{-1} \nonumber \\
&\simeq& \frac{4}{3} \frac{\rho}{\rho_{\rm g}} {\left( \frac{v - v_{\rm g}}{v} \right)}^{-2} r.
\label{eq:lstop}
\end{eqnarray}
If the spatial scale of shock is much larger than the stopping length ($L \gg l_{\rm stop}$), the velocity of a chondrule reaches $v \simeq v_{\rm post}$ behind the shock front, while $v$ barely changes when $L \ll l_{\rm stop}$ \citep[see][]{2019ApJ...877...84A}.
On the other hand, for the case in which chondrules move in gas with subsonic velocities, $l_{\rm stop}$ is approximately given by the following equation:
\begin{equation}
l_{\rm stop} \simeq 0.64 \frac{\rho}{\rho_{\rm g}} {\left( \frac{c_{\rm s} {\left| v - v_{\rm g} \right|}}{v^{2}} \right)}^{-1} r.
\label{eq:lstop2}
\end{equation}
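The drag law and the resulting deceleration can be transcribed directly; the sketch below reuses \texttt{gas\_state} from the previous snippet and guards the $s \to 0$ limit, where the drag formula is singular term by term.
\begin{verbatim}
from scipy.special import erf

RHO_CH = 3.3   # chondrule internal density [g/cm^3]

def drag_coefficient(s):
    """Drag coefficient C_D as a function of the Mach number s."""
    return (2.0*np.sqrt(np.pi)/(3.0*s)
            + (2.0*s**2 + 1.0)/(np.sqrt(np.pi)*s**3)*np.exp(-s**2)
            + (4.0*s**4 + 4.0*s**2 - 1.0)/(2.0*s**4)*erf(s))

def dv_dx(x, v, r, L):
    """dv/dx for a chondrule of radius r [cm] at position x [cm]."""
    vg, rhog, _, cs = gas_state(x, L)
    s = abs(v - vg)/cs
    if s < 1.0e-12:   # chondrule comoving with the gas: no drag
        return 0.0
    cd = drag_coefficient(s)
    return -3.0*cd*rhog*abs(v - vg)*(v - vg)/(8.0*RHO_CH*r*v)
\end{verbatim}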
\subsection{Accretion of fine-grained rims}
In this study, we calculate the accretion of fine-grained rims in shock waves.
The mass accretion rate per unit length, ${{\rm d}m}/{{\rm d}x}$, is given by
\begin{equation}
\frac{{\rm d}m}{{\rm d}x} = Q \rho_{\rm d} \pi r^{2} \frac{v_{\rm imp}}{v},
\end{equation}
where $Q$ is the coefficient for adhesion/erosion of fine grains, and $\rho_{\rm d}$ is the dust density.
Here we assume that fine grains are both dynamically and thermally coupled with gas, and the impact velocity of fine grains is given by
\begin{equation}
v_{\rm imp} = {\left| v - v_{\rm g} \right|}.
\end{equation}
The growth rate of the thickness of rims, ${{\rm d}r}/{{\rm d}x}$, is given by the following equation:
\begin{equation}
\frac{{\rm d}r}{{\rm d}x} = \frac{1}{4 \pi \rho r^{2}} \frac{{\rm d}m}{{\rm d}x},
\end{equation}
and we do not consider the porosity of FGRs for simplicity.\footnote{
The porosity of FGRs formed via the kinetic dust aggregation process would be 10\% or less \citep[e.g.,][]{hanft2015overview}, although it must depend on many parameters including the impact velocity and the material composition.
}
The thickness of the rim, $\Delta$, is given by
\begin{equation}
\Delta = r - r_{0},
\end{equation}
where $r_{0}$ is the radius of the bare chondrule.
The coefficient for adhesion/erosion depends on the impact velocity: $Q = Q {\left( v_{\rm imp} \right)}$.
In this study, we assume that $Q {\left( v_{\rm imp} \right)}$ is given by a step function as follows:
\begin{equation}
Q =
\begin{cases}
\displaystyle Q_{\rm ad} & {\left( v_{\rm min} \le v_{\rm imp} \le v_{\rm max} \right)}, \\
\displaystyle Q_{\rm er} & {\left( v_{\rm imp} > v_{\rm max}\ {\rm and}\ \Delta > 0 \right)}, \\
0 & {\left( {\rm otherwise} \right)},
\end{cases}
\end{equation}
where $Q_{\rm ad}$ and $Q_{\rm er}$ are the coefficients for adhesion and erosion, and $v_{\rm max}$ and $v_{\rm min}$ are the maximum and minimum velocities for adhesion, respectively.
We vary the values of $Q_{\rm ad}$, $Q_{\rm er}$, $v_{\rm max}$, and $v_{\rm min}$ as parameters (see Table \ref{table:coeff}).
\begin{table*}
\caption{
Fundamental parameters for describing the accretion of FGRs: $Q_{\rm ad}$, $Q_{\rm er}$, $v_{\rm max}$, and $v_{\rm min}$.
}
\label{table:coeff}
\centering
\begin{tabular}{ccc}
{\bf Parameter} & {\bf Symbol} & {\bf Value} \\ \hline
Coefficient for adhesion & $Q_{\rm ad}$ & $0.5$ or $0.2$ \\
Coefficient for erosion & $Q_{\rm er}$ & $0$ or $-1$ \\
Maximum velocity for adhesion & $v_{\rm max}$ & $1\ {\rm km}\ {\rm s}^{-1}$ or $0.3\ {\rm km}\ {\rm s}^{-1}$ \\
Minimum velocity for adhesion & $v_{\rm min}$ & $0.1\ {\rm km}\ {\rm s}^{-1}$ or $0.3\ {\rm km}\ {\rm s}^{-1}$
\end{tabular}
\end{table*}
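A simplified encoding of this prescription and of the growth rate reads as follows (defaults follow Table \ref{table:coeff}; velocities are in cgs units, and the function names are ours):
\begin{verbatim}
def q_coefficient(v_imp, delta, q_ad=0.5, q_er=0.0,
                  v_min=0.1e5, v_max=1.0e5):
    """Step-function adhesion/erosion coefficient Q(v_imp)."""
    if v_min <= v_imp <= v_max:
        return q_ad
    if v_imp > v_max and delta > 0.0:
        return q_er
    return 0.0

def dr_dx(x, v, r, delta, L, chi=1.0, **q_kw):
    """Growth rate dr/dx = Q rho_d v_imp / (4 rho v)."""
    vg, rhog, _, _ = gas_state(x, L)
    v_imp = abs(v - vg)
    q = q_coefficient(v_imp, delta, **q_kw)
    return 0.25*q*chi*rhog/RHO_CH*v_imp/v
\end{verbatim}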
We do not consider the erosion of chondrules for simplicity; however, it might play an important role in producing the non-zero constant in the linear relationship between $\Delta$ and $r_{0}$ reported from observations of chondrules in CM chondrites \citep{2019GeCoA.264..118L}.
The erosion of chondrules may also be problematic in the context of the survival of chondrules in shock waves if $Q_{\rm er} \ll -1$ \citep[e.g.,][]{2014ApJ...797...30J}.
However, we can imagine that the value of $Q_{\rm er}$ for the erosion of chondrules should differ from that for the erosion of FGRs, and our knowledge of the erosion of chondrules is still limited.
Thus, future studies on the physics of erosive collisions are necessary.
\subsection{Production of silicate dust from evaporating planetesimals}
We adopt the following simple prescription for the structure of $\rho_{\rm d}$:
\begin{equation}
\rho_{\rm d} =
\begin{cases}
0 & {\left( x < 0 \right)}, \\
\displaystyle \chi \rho_{\rm g} & {\left( x \geq 0 \right)},
\end{cases}
\end{equation}
where $\chi$ is the dust-to-gas mass ratio in the dusty region formed behind the shock front.
In this study, we set $\chi = 1$ based on the order-of-magnitude analysis shown below.
Here, we consider the evaporation of undifferentiated icy planetesimals.
The planetesimal surface is heated by a hot shocked gas, and the surface ice evaporates.
For the case of the supersonic limit, \citet{2013ApJ...764..120T} derived that the evaporation flux of the surface ice of the planetesimal is approximately given by
\begin{equation}
J_{\rm ice} \simeq \pi {R_{\rm p}}^{2} \frac{2 \gamma}{{\left( \gamma + 1 \right)}^{2}} \frac{\alpha \rho_{{\rm g}, 0} {v_{0}}^{3}}{L_{\rm eva}},
\end{equation}
where $L_{\rm eva} = 2.7 \times 10^{10}\ {\rm erg}\ {\rm g}^{-1}$ is the latent heat of evaporation of ice, and $\alpha$ is the non-dimensional parameter called the Stanton number, which expresses the efficiency of heat conduction.
\citet{2013ApJ...764..120T} found that the realistic range of $\alpha$ for planetesimal bow shocks is $10^{-2} \le \alpha \le 10^{-1}$.
When the surface ice evaporates, dust grains are also released from the surface of undifferentiated planetesimals.
The mass flux of the released dust grains, $J_{\rm dust}$, would be simply given as follows:
\begin{equation}
J_{\rm dust} = f_{\rm dust/ice} J_{\rm ice},
\end{equation}
where $f_{\rm dust/ice}$ is the dust-to-ice mass ratio of the evaporating undifferentiated planetesimals.
The value of $f_{\rm dust/ice}$ is uncertain; however, several studies on the internal structure of comet 67P/Churyumov--Gerasimenko suggested that the dust-to-ice mass ratio of the comet is significantly higher than one, $f_{\rm dust/ice} \gg 1$ \citep[e.g., ][]{2019MNRAS.482.3326F,2019MNRAS.483.2337P,2020MNRAS.497.1166A}.
The bulk density of the comet indicates $f_{\rm dust/ice} \simeq 9$ \citep{2020MNRAS.497.1166A} if comets are formed via gravitational collapse of a cloud of dust aggregates in the solar nebula \citep[e.g.,][]{2012Icar..221....1S,2017MNRAS.469S.149W,2021A&A...647A.126V}.
\citet{2019MNRAS.482.3326F} also reviewed the dust-to-ice mass ratio of other comet nuclei visited by space missions and of trans-Neptunian objects (TNOs), and these objects generally have $f_{\rm dust/ice} \gg 3$.
These estimates on the value of $f_{\rm dust/ice}$ are an order of magnitude higher than the classical value for the dust composition in protoplanetary disks \citep[e.g.,][]{1994ApJ...421..615P,2001ApJ...553..321D}.
We note, however, that recent studies on the dust composition of protoplanetary disks \citep[see][and references therein]{2018ApJ...869L..45B} suggest that $f_{\rm dust/ice}$ should be several times higher than that predicted by \citet{1994ApJ...421..615P}.
\citet{2021ApJ...910...26T} also evaluated the dust-to-ice mass ratio using the scattering polarization in the envelope of the low mass protostar L1551 IRS 5, and they found that icy dust grains with the radius of a few \si{\micro}m (or larger) and $f_{\rm dust/ice} \gtrsim 10$ are consistent with the observed polarization excess around a wavelength of 3 \si{\micro}m.
Thus, we can expect that icy planetesimals are formed from dust-rich icy grains with $f_{\rm dust/ice} \gg 1$.
Assuming the mass conservation, the dust density is given by
\begin{equation}
\rho_{\rm d} \simeq \frac{J_{\rm dust}}{\pi {R_{\rm d}}^{2} v_{\rm g}},
\end{equation}
where $R_{\rm d}$ is the radius of the dusty region.
Then, the typical value of the dust-to-gas mass ratio behind the shock front would be obtained as follows:
{\footnotesize
\begin{eqnarray}
\chi & \simeq & f_{\rm dust/ice} {\left( \frac{R_{\rm p}}{R_{\rm d}} \right)}^{2} \frac{2 \gamma}{{\left( \gamma + 1 \right)}^{2}} \frac{\alpha {v_{0}}^{2}}{L_{\rm eva}} \nonumber \\
& \simeq & 0.8 {\left( \frac{f_{\rm dust/ice}}{9} \right)} {\left( \frac{R_{\rm d} / R_{\rm p}}{3} \right)}^{-2} {\left( \frac{\alpha}{0.03} \right)} {\left( \frac{v_{0}}{12\ {\rm km}\ {\rm s}^{-1}} \right)}^{2}.
\label{eq:chi}
\end{eqnarray}
}Therefore, the value of $\chi \simeq 1$ could be achieved in the dusty region caused by the evaporation of undifferentiated icy planetesimals, although there are large uncertainties of the values of $f_{\rm dust/ice}$, $R_{\rm p} / R_{\rm d}$, and $\alpha$.
Thus, future studies on the detailed analysis on the dust-to-gas mass ratio behind the shock front would be essential.
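The numerical coefficient in Equation (\ref{eq:chi}) can be checked directly, reusing the constants defined in the sketches above:
\begin{verbatim}
L_EVA = 2.7e10   # latent heat of ice evaporation [erg/g]

def chi_estimate(f_di=9.0, rd_over_rp=3.0, alpha=0.03, v0=12.0e5):
    """Dust-to-gas mass ratio behind the shock, Equation (chi)."""
    return (f_di/rd_over_rp**2
            *2.0*GAMMA/(GAMMA + 1.0)**2*alpha*v0**2/L_EVA)

# chi_estimate() -> ~0.78, consistent with the quoted value of 0.8.
\end{verbatim}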
The diameter--density relation among TNOs has been investigated in several studies \citep[e.g.,][]{2012AREPS..40..467B,2019Icar..334...30G}.
Large TNOs whose diameter is larger than 1000 km usually have a bulk density of approximately $2$--$3\ {\rm g}\ {\rm cm}^{-3}$, while mid-sized TNOs with a diameter smaller than 1000 km have a bulk density of approximately $1\ {\rm g}\ {\rm cm}^{-3}$.
\citet{2019Icar..334...30G} pointed out that this difference in bulk density may reflect a change in porosity.
Thus, icy planetesimals with a diameter smaller than 1000 km would be porous and undifferentiated bodies, and the dusty region may be formed when shock waves are caused by these mid-sized planetesimals.
In contrast, large icy bodies with a diameter larger than 1000 km would be differentiated and might not be suitable for the formation of rimmed chondrules.
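Before turning to the numerical results, the pieces sketched above can be combined into a crude explicit integrator; the step size and stopping point here are illustrative choices, not those of our actual computations.
\begin{verbatim}
def integrate_rim_growth(r0, L, x_end=1.0e11, n_steps=200000, **q_kw):
    """Explicit-Euler integration of the coupled (v, r) system
    from the shock front to x_end [cm]; returns Delta in microns."""
    dx = x_end/n_steps
    v, r = V0, r0
    for i in range(n_steps):
        x = i*dx
        v += dv_dx(x, v, r, L)*dx
        r  = max(r + dr_dx(x, v, r, r - r0, L, **q_kw)*dx, r0)
    return (r - r0)*1.0e4

# Hypothetical example: a 0.1 mm chondrule with L = 10^4 km.
# print(integrate_rim_growth(0.01, 1.0e9))
\end{verbatim}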
\section{results}
\subsection{Impact velocity}
First, we show the impact velocity of fine grains.
Figure \ref{fig:vimp} shows $v_{\rm imp}$ as a function of the distance from the shock front.
Panels (a), (b), and (c) show the results for the cases of $L = 3 \times 10^{4}\ {\rm km}$, $L = 1 \times 10^{4}\ {\rm km}$, and $L = 3 \times 10^{3}\ {\rm km}$, respectively.
Solid lines indicate $v - v_{\rm g} < 0$ while dashed lines indicate $v - v_{\rm g} > 0$.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{fig1a}
\includegraphics[width=\columnwidth]{fig1b}
\includegraphics[width=\columnwidth]{fig1c}
\caption{
Impact velocity of fine grains, $v_{\rm imp} = {\left| v - v_{\rm g} \right|}$.
(a) For the case of $L = 3 \times 10^{4}\ {\rm km}$.
(b) For the case of $L = 1 \times 10^{4}\ {\rm km}$.
(c) For the case of $L = 3 \times 10^{3}\ {\rm km}$.
Solid lines indicate $v - v_{\rm g} < 0$ while dashed lines indicate $v - v_{\rm g} > 0$.
We set $Q_{\rm ad} = 0.5$, $Q_{\rm er} = 0$, $v_{\rm max} = 1\ {\rm km}\ {\rm s}^{-1}$, and $v_{\rm min} = 0.1\ {\rm km}\ {\rm s}^{-1}$.
}
\label{fig:vimp}
\end{figure}
\citet{2019ApJ...877...84A} found that the dynamical evolution of chondrules in shock waves can be divided into two stages: deceleration region behind the shock front (Stage 1) and recovery region where the velocity of chondrules and gas approach the pre-shock velocity (Stage 2).
As shown in Figure \ref{fig:vimp}, the transition between Stages 1 and 2 occurs at around $x \sim 1000\ {\rm km}$ for the case of $\rho_{{\rm g}, 0} = 5 \times 10^{-10}\ {\rm g}\ {\rm cm}^{-3}$, and small chondrules enter Stage 2 earlier than larger chondrules.
This is because smaller chondrules have shorter stopping lengths (see Equations \ref{eq:lstop} and \ref{eq:lstop2}).
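As a rough check, just behind the shock front ($v \simeq v_{0} = 12\ {\rm km}\ {\rm s}^{-1}$, $v_{\rm g} \simeq v_{\rm post} = 2\ {\rm km}\ {\rm s}^{-1}$, and $\rho_{\rm g} \simeq 6 \rho_{{\rm g}, 0} = 3 \times 10^{-9}\ {\rm g}\ {\rm cm}^{-3}$), Equation (\ref{eq:lstop}) gives $l_{\rm stop} \sim 2 \times 10^{2}\ {\rm km}$ for $r_{0} = 100\ \si{\micro}{\rm m}$ and $l_{\rm stop} \sim 2 \times 10^{3}\ {\rm km}$ for $r_{0} = 1\ {\rm mm}$, consistent with the transition location of $x \sim 10^{3}\ {\rm km}$.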
For the cases of $L \ge 1 \times 10^{4}\ {\rm km}$, $v_{\rm imp}$ in Stage 2 is approximately proportional to the radius of the bare chondrule $r_{0}$.
In the Discussion section, we will derive $v_{\rm imp} = v_{\rm imp} {\left( r_{0} \right)}$ in Stage 2 from an analytical argument.
\subsection{Evolution of rim thickness}
Then, we show the evolution of the thickness of FGRs in the dusty region.
We present the results for two cases: rim formation without erosion ($Q_{\rm er} = 0$) and with erosion ($Q_{\rm er} = -1$).
\subsubsection{Rim formation without erosion}
Figure \ref{fig:Delta-x-no-erosion} shows the thickness of FGRs, $\Delta$, as a function of $x$ and $r_{0}$.
Panels (a), (b), and (c) show the results for the cases of $L = 3 \times 10^{4}\ {\rm km}$, $L = 1 \times 10^{4}\ {\rm km}$, and $L = 3 \times 10^{3}\ {\rm km}$, respectively.
Here we set $Q_{\rm ad} = 0.5$, $Q_{\rm er} = 0$, $v_{\rm max} = 1\ {\rm km}\ {\rm s}^{-1}$, and $v_{\rm min} = 0.1\ {\rm km}\ {\rm s}^{-1}$.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{fig2a}
\includegraphics[width=\columnwidth]{fig2b}
\includegraphics[width=\columnwidth]{fig2c}
\caption{
Thickness of fine-grained rims, $\Delta = r - r_{0}$.
(a) For the case of $L = 3 \times 10^{4}\ {\rm km}$.
(b) For the case of $L = 1 \times 10^{4}\ {\rm km}$.
(c) For the case of $L = 3 \times 10^{3}\ {\rm km}$.
We set $Q_{\rm ad} = 0.5$, $Q_{\rm er} = 0$, $v_{\rm max} = 1\ {\rm km}\ {\rm s}^{-1}$, and $v_{\rm min} = 0.1\ {\rm km}\ {\rm s}^{-1}$, and rim formation without erosion is assumed.
}
\label{fig:Delta-x-no-erosion}
\end{figure}
As shown in Figure \ref{fig:Delta-x-no-erosion}, FGRs with thickness of 10--100 \si{\micro}m are formed via the kinetic dust aggregation process.
We found that the thickness of FGRs formed in Stage 1 is significantly smaller than the final thickness in these simulations; therefore the FGRs are mainly formed in Stage 2.
In addition, for the case of large $L = 3 \times 10^{4}\ {\rm km}$, the thickness is approximately proportional to $r_{0}$.
We derive analytical solutions for the rim thickness formed in Stages 1 and 2 in the Discussion section, and these analytical solutions reproduce the linear relationship between $\Delta$ and $r_{0}$.
\subsubsection{Rim formation with erosion}
However, in reality, FGRs would be eroded when $v_{\rm imp}$ is higher than the critical value for erosion.
Although the exact value of the coefficient for erosion, $Q_{\rm er}$, is highly uncertain, the assumption of $Q_{\rm er} < 0$ seems to be more realistic than $Q_{\rm er} = 0$.
Figure \ref{fig:Delta-x-erosion} shows the thickness of FGRs, $\Delta$, as a function of $x$ and $r_{0}$.
Panels (a), (b), and (c) show the results for the cases of $L = 3 \times 10^{4}\ {\rm km}$, $L = 1 \times 10^{4}\ {\rm km}$, and $L = 3 \times 10^{3}\ {\rm km}$, respectively.
We set $Q_{\rm ad} = 0.5$, $Q_{\rm er} = -1$, $v_{\rm max} = 1\ {\rm km}\ {\rm s}^{-1}$, and $v_{\rm min} = 0.1\ {\rm km}\ {\rm s}^{-1}$.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{fig2a_er}
\includegraphics[width=\columnwidth]{fig2b_er}
\includegraphics[width=\columnwidth]{fig2c_er}
\caption{
Thickness of fine-grained rims, $\Delta = r - r_{0}$.
(a) For the case of $L = 3 \times 10^{4}\ {\rm km}$.
(b) For the case of $L = 1 \times 10^{4}\ {\rm km}$.
(c) For the case of $L = 3 \times 10^{3}\ {\rm km}$.
We set $Q_{\rm ad} = 0.5$, $Q_{\rm er} = -1$, $v_{\rm max} = 1\ {\rm km}\ {\rm s}^{-1}$, and $v_{\rm min} = 0.1\ {\rm km}\ {\rm s}^{-1}$, and rim formation with erosion is assumed.
}
\label{fig:Delta-x-erosion}
\end{figure}
Figure \ref{fig:Delta-x-erosion}(a) shows the evolution of $\Delta$ for the case of $L = 3 \times 10^{4}\ {\rm km}$.
For the case of $r_{0} = 1\ {\rm mm}$ (black line), the erosion of FGRs occurs at around $x \simeq 5 \times 10^{4}\ {\rm km}$ but FGRs partly survive after erosion.
Then fine dust grains accrete onto chondrules again; multi-layered FGRs would be formed by a single shock-heating event.
Interestingly, many chondrules in the Kivesvaara CM2 chondrite are covered by multi-layered FGRs \citep{1992GeCoA..56.2873M} and our scenario might explain the origin of these multi-layered FGRs.
Our scenario also indicates that inner rims formed in a hotter environment than outer rims.
This would be consistent with the observed characteristics of inner rims \citep[e.g., silicate sintering, sulfides growth, and compaction;][]{2021GeCoA.295..135Z}.
Figure \ref{fig:Delta-x-erosion}(b) shows the evolution of $\Delta$ for the case of $L = 1 \times 10^{4}\ {\rm km}$.
For the cases of $r_{0} = 1\ {\rm mm}$ (black line) and $r_{0} = 0.5\ {\rm mm}$ (green line), FGRs formed before erosion are completely eroded once, then re-accretion of FGRs occurs.
Similar evolutionary paths are also found in Figure \ref{fig:Delta-x-erosion}(c), i.e., for the case of $L = 3 \times 10^{3}\ {\rm km}$.
We note that the final thickness of FGRs is in the range of 10--100 \si{\micro}m even if we take into account the effect of erosion.
This is because the final thickness of FGRs is mainly controlled by the accretion of fine grains in Stage 2.
\subsection{Dependence of final rim thickness on chondrule radius}
Finally, we show the dependence of final rim thickness on chondrule radius.
Figure \ref{fig:Delta-r-no-erosion} shows the results for the case of $Q_{\rm er} = 0$ (rim formation without erosion) and Figure \ref{fig:Delta-r-erosion} is for the case of $Q_{\rm er} = -1$ (rim formation with erosion).
As shown in Figures \ref{fig:Delta-x-no-erosion} and \ref{fig:Delta-x-erosion}, FGR formation finishes at $x \sim 10^{5}\ {\rm km}$ because $v_{\rm imp} < v_{\rm min}$ for $x \gg 10^{5}\ {\rm km}$.
We therefore stop the numerical simulations at $x = 10^{6}\ {\rm km}$ in this study.
\begin{figure*}
\centering
\includegraphics[width=\columnwidth]{fig_fin_a}
\includegraphics[width=\columnwidth]{fig_fin_b}
\includegraphics[width=\columnwidth]{fig_fin_c}
\includegraphics[width=\columnwidth]{fig_fin_d}
\caption{
Thickness of FGRs, $\Delta$, as a function of chondrule radius, $r_{0}$.
Fine-grained rim formation without erosion is assumed: $Q_{\rm er} = 0$.
The black dashed line indicates the relationship between $\Delta$ and $r_{\rm 0}$ for chondrules in Murchison CM chondrite: ${\left( \Delta / 1\ \si{\micro}{\rm m} \right)} = 0.11 {\left( r_{0} / 1\ \si{\micro}{\rm m} \right)} + 24.5$ \citep{2018E&PSL.481..201H}.
The gray shaded range indicates the typical thickness of FGRs around chondrules in unequilibrated ordinary chondrites: $5\ \si{\micro}{\rm m} \le \Delta \le 40\ \si{\micro}{\rm m}$ \citep{1984PolRe..35..126M}.
}
\label{fig:Delta-r-no-erosion}
\end{figure*}
Figure \ref{fig:Delta-r-no-erosion}(a) shows the results for the case of $Q_{\rm ad} = 0.5$, $Q_{\rm er} = 0$, $v_{\rm max} = 1\ {\rm km}\ {\rm s}^{-1}$, and $v_{\rm min} = 0.1\ {\rm km}\ {\rm s}^{-1}$.
We found that the final rim thickness is approximately consistent with that for chondrules in the Murchison CM chondrite: ${\left( \Delta / 1\ \si{\micro}{\rm m} \right)} = 0.11 {\left( r_{0} / 1\ \si{\micro}{\rm m} \right)} + 24.5$ \citep{2018E&PSL.481..201H}.
The value of $\Delta$ also depends on the spatial scale of the shock, $L$, and our numerical results show good agreement with observations of CM chondrites when $L = 1 \times 10^{4}\ {\rm km}$ or $3 \times 10^{4}\ {\rm km}$.
Figure \ref{fig:Delta-r-no-erosion}(b) shows the results for the case of $Q_{\rm ad} = 0.2$, $Q_{\rm er} = 0$, $v_{\rm max} = 1\ {\rm km}\ {\rm s}^{-1}$, and $v_{\rm min} = 0.1\ {\rm km}\ {\rm s}^{-1}$.
As the accretion rate of FGRs is proportional to $Q_{\rm ad}$, the final thickness of FGRs formed in this setting is smaller than that shown in Figure \ref{fig:Delta-r-no-erosion}(a).
We found that the final rim thickness is in the range of $5\ \si{\micro}{\rm m} \le \Delta \le 40\ \si{\micro}{\rm m}$ for the cases of $L = 1 \times 10^{4}\ {\rm km}$ and $3 \times 10^{3}\ {\rm km}$.
This is consistent with the thickness of FGRs around chondrules in unequilibrated ordinary chondrites \citep{1984PolRe..35..126M}.
The observations by \citet{1984PolRe..35..126M} indicate that the thickness of FGRs is not dependent on the chondrule radius, and similar results are also reported by \citet{bigolski2017formation}.
We note that our results are based on simple one-dimensional simulations.
However, in reality, shock waves caused by eccentric planetesimals are bow shocks.
The trajectories of chondrules are curved and strongly depend on their size \citep[e.g.,][]{2013ApJ...776..101B,katsuda}.
Moreover, we assumed that the coefficient for adhesion is constant in the range of $v_{\rm min} < v_{\rm imp} < v_{\rm max}$; this assumption is also unlikely to hold exactly in reality.
For these reasons, we do not discuss the detailed features of the dependence of $\Delta$ on $r_{0}$ in this study.
Figure \ref{fig:Delta-r-no-erosion}(c) shows the results for the case of $Q_{\rm ad} = 0.5$, $Q_{\rm er} = 0$, $v_{\rm max} = 1\ {\rm km}\ {\rm s}^{-1}$, and $v_{\rm min} = 0.3\ {\rm km}\ {\rm s}^{-1}$.
Interestingly, the thickness of FGRs is significantly smaller than the observed values when $L = 3 \times 10^{4}\ {\rm km}$ and $r_{0} < 300\ \si{\micro}{\rm m}$.
This is because the maximum value of $v_{\rm imp}$ in Stage 2 is lower than $0.3\ {\rm km}\ {\rm s}^{-1}$ if the radius of chondrules is smaller than $300\ \si{\micro}{\rm m}$, as shown in Figure \ref{fig:vimp}(a).
In this case, FGRs cannot be formed in Stage 2 and the final thickness would be equal to that formed in Stage 1.
Figure \ref{fig:Delta-r-no-erosion}(d) shows the results for the case of $Q_{\rm ad} = 0.5$, $Q_{\rm er} = 0$, $v_{\rm max} = 0.3\ {\rm km}\ {\rm s}^{-1}$, and $v_{\rm min} = 0.1\ {\rm km}\ {\rm s}^{-1}$.
Although the final thickness of FGRs is smaller than that formed in Figure \ref{fig:Delta-r-no-erosion}(a), FGRs with thickness of 10--100 \si{\micro}m are formed even if $v_{\rm max} = 0.3\ {\rm km}\ {\rm s}^{-1}$.
In conclusion, the kinetic dust aggregation in shock waves around evaporating icy planetesimals would be the leading candidate for the origin of FGRs around chondrules in primitive chondrites.
Figure \ref{fig:Delta-r-erosion} shows the results for the case of FGR formation with erosion ($Q_{\rm er} = -1$).
Although the final thickness of FGRs formed in Figure \ref{fig:Delta-r-erosion} is slightly smaller than that in Figure \ref{fig:Delta-r-no-erosion} ($Q_{\rm er} = 0$), the general trends are similar and FGRs with thickness of 10--100 \si{\micro}m are formed even if we consider the effect of erosion.
This is consistent with the fact that the thickness of FGRs formed in Stage 1 is significantly smaller than that formed in Stage 2.
\begin{figure*}
\centering
\includegraphics[width=\columnwidth]{fig_fin_a_er}
\includegraphics[width=\columnwidth]{fig_fin_b_er}
\includegraphics[width=\columnwidth]{fig_fin_c_er}
\includegraphics[width=\columnwidth]{fig_fin_d_er}
\caption{
Thickness of FGRs, $\Delta$, as a function of chondrule radius, $r_{0}$.
Fine-grained rim formation with erosion is assumed: $Q_{\rm er} = -1$.
The black dashed line indicates the relationship between $\Delta$ and $r_{\rm 0}$ for chondrules in Murchison CM chondrite: ${\left( \Delta / 1\ \si{\micro}{\rm m} \right)} = 0.11 {\left( r_{0} / 1\ \si{\micro}{\rm m} \right)} + 24.5$ \citep{2018E&PSL.481..201H}.
The gray shaded range indicates the typical thickness of FGRs around chondrules in unequilibrated ordinary chondrites: $5\ \si{\micro}{\rm m} \le \Delta \le 40\ \si{\micro}{\rm m}$ \citep{1984PolRe..35..126M}.
}
\label{fig:Delta-r-erosion}
\end{figure*}
The relation between the thickness of FGRs and the radius of chondrules has been discussed in the literature.
For chondrules in carbonaceous chondrites, the positive correlation was reported within the range of $100\ \si{\micro}{\rm m} < r_{0} < 1000\ \si{\micro}{\rm m}$ \citep[e.g.,][]{2018E&PSL.481..201H}.
In contrast, no clear correlation between $\Delta$ and $r_{0}$ was found for chondrules in unequilibrated ordinary chondrites \citep{1984PolRe..35..126M}.
Our results show that the positive correlation appears when accretion of FGRs occurs over almost the entire region of Stage 2 (see Figure \ref{fig:Delta-x-no-erosion}(a)).
\section{discussion}
\subsection{Rim thickness formed in Stage 1: deceleration region behind the shock front}
As mentioned above, the thickness of FGRs formed in Stage 1 is significantly smaller than that formed in Stage 2.
Here we derive an analytic solution for the thickness of FGRs formed in Stage 1.
The motion of chondrules in Stage 1 is described as the deceleration behind the shock front.
Here we consider the accretion of fine dust grains onto chondrules in Stage 1, and we assume that $v_{\rm g}$, $\rho_{\rm g}$, and $c_{\rm s}$ are almost constant for simplicity.
Although the relative velocity of chondrules with respect to gas is supersonic at $x \ll l_{\rm stop}$, FGRs are not formed in this region because $v_{\rm imp}$ is higher than the maximum velocity for adhesion, $v_{\rm max}$.
Then $v_{\rm imp}$ will drop to the range for adhesion, and FGR formation in Stage 1 will start.
When the relative velocity of chondrules with respect to gas is subsonic, the time evolution of $v_{\rm imp}$ is given by
\begin{eqnarray}
\frac{{\rm d}v_{\rm imp}}{{\rm d}t} & \simeq & - {\left| \frac{{\rm d}v}{{\rm d}t} \right|} \nonumber \\
& \simeq & - \frac{1}{0.64} \frac{\rho_{\rm g}}{\rho} \frac{c_{\rm s} v_{\rm imp}}{r_{0}}.
\end{eqnarray}
For the case of $v_{\rm min} < v_{\rm imp} < v_{\rm max}$, the time evolution of the radius of rimmed chondrules is given by
\begin{eqnarray}
\frac{{\rm d}r}{{\rm d}t} & = & \frac{Q_{\rm ad}}{4} \frac{\rho_{\rm d}}{\rho} v_{\rm imp} \nonumber \\
& \simeq & - \frac{0.64 Q_{\rm ad}}{4} \chi \frac{r_{0}}{c_{\rm s}} \frac{{\rm d}v_{\rm imp}}{{\rm d}t}.
\end{eqnarray}
Then the thickness of FGRs formed in Stage 1 would be approximately given by the following equation:
{\footnotesize
\begin{eqnarray}
\Delta_{1} & = & \frac{0.64 Q_{\rm ad}}{4} \chi \frac{v_{\rm max} - v_{\rm min}}{c_{\rm s}} r_{0} \nonumber \\
& \simeq & 2\ {\left( \frac{Q_{\rm ad}}{0.5} \right)} {\left( \frac{\chi}{1} \right)} {\left( \frac{v_{\rm max} - v_{\rm min}}{900\ {\rm m}\ {\rm s}^{-1}} \right)} {\left( \frac{r_{0}}{100\ \si{\micro}{\rm m}} \right)}\ \si{\micro}{\rm m}.
\end{eqnarray}
}
Our analytic solution suggests that the thickness of FGRs formed in Stage 1 is $\Delta_{1} \simeq 2\ {\left( {r_{0}}/{100\ \si{\micro}{\rm m}} \right)}\ \si{\micro}{\rm m}$, and this value is one order of magnitude smaller than the observed thickness of FGRs around chondrules in CM chondrites \citep[e.g.,][]{2018E&PSL.481..201H}.
Thus we need to consider the FGR formation in Stage 2.
\subsection{Rim thickness formed in Stage 2: quasi-steady state in recovery region}
Similarly, we can derive the analytic solution for the thickness of FGRs formed in Stage 2.
When the spatial scale of the shock is sufficiently larger than the stopping length ($L \gg l_{\rm stop}$), the motion of chondrules in Stage 2 is described as the dynamically quasi-steady state.
In this region, the velocities of both gas and chondrules recover (see Equation \ref{eq:vg}), and the relative velocity of the chondrule to the gas is negligibly smaller than $v_{\rm g}$ \citep[see also][]{2019ApJ...877...84A}.
When we consider the quasi-steady state for the dynamics of chondrules in Stage 2, the differential of the velocity of chondrules is approximately given by the following equation:
\begin{eqnarray}
{\left| \frac{{\rm d}v}{{\rm d}x} \right|} & = & \frac{v}{l_{\rm stop}} \nonumber \\
& \simeq & \frac{v_{\rm g}}{l_{\rm stop}} \nonumber \\
& \simeq & \frac{1}{0.64} \frac{\rho_{\rm g}}{\rho} \frac{c_{\rm s}}{v_{\rm g}} \frac{v_{\rm imp}}{r_{0}}.
\end{eqnarray}
On the other hand, the differential of the velocity of gas is given as follows (see Equation \ref{eq:vg}):
\begin{equation}
{\left| \frac{{\rm d}v_{\rm g}}{{\rm d}x} \right|} = \frac{\left| v_{\rm g} - v_{0} \right|}{L}.
\end{equation}
Assuming that ${{\rm d}v} / {{\rm d}x}$ and ${{\rm d}v_{\rm g}} / {{\rm d}x}$ are approximately equal, the relative velocity of the chondrule with respect to the gas, which is equal to $v_{\rm imp}$, is derived as follows:
\begin{equation}
v_{\rm imp} \simeq 0.64 \frac{\rho}{\rho_{\rm g}} \frac{v_{\rm g}}{c_{\rm s}} \frac{\left| v_{\rm g} - v_{0} \right|}{L} r_{0}.
\end{equation}
As $v_{\rm imp}$ takes its maximum at around $x \sim L$, we show the value of $v_{\rm imp}$ at $x = L$ as a reference:
{\footnotesize
\begin{eqnarray}
v_{\rm imp}|_{x = L} \simeq & & 120\ {\left( \frac{\rho_{{\rm g}, 0}}{5 \times 10^{-10}\ {\rm g}\ {\rm cm}^{-3}} \right)}^{-1} \nonumber \\
& & \times {\left( \frac{L}{3 \times 10^{4}\ {\rm km}} \right)}^{-1} {\left( \frac{r_{0}}{100\ \si{\micro}{\rm m}} \right)}\ {\rm m}\ {\rm s}^{-1}.
\end{eqnarray}
}
Then we can calculate the time evolution of the radius of rimmed chondrules.
When the impact velocity of fine dust grains satisfies $v_{\rm min} < v_{\rm imp} < v_{\rm max}$, the differential of the radius of rimmed chondrules is given by
\begin{eqnarray}
\frac{{\rm d}r}{{\rm d}x} & = & \frac{Q_{\rm ad}}{4} \frac{\rho_{\rm d}}{\rho} \frac{v_{\rm imp}}{v} \nonumber \\
& \simeq & \frac{0.64 Q_{\rm ad}}{4} \chi \frac{\left| v_{\rm g} - v_{0} \right|}{c_{\rm s}} \frac{r_{0}}{L}.
\end{eqnarray}
The maximum thickness formed in Stage 2, $\Delta_{2, {\rm max}}$, is therefore given by the following equation:
\begin{eqnarray}
\Delta_{2, {\rm max}} & = & \int_{0}^{\infty}\ {\rm d}x\ \frac{{\rm d}r}{{\rm d}x} \nonumber \\
& \simeq & 32\ {\left( \frac{Q_{\rm ad}}{0.5} \right)} {\left( \frac{\chi}{1} \right)} {\left( \frac{r_{0}}{100\ \si{\micro}{\rm m}} \right)}\ \si{\micro}{\rm m}.
\end{eqnarray}
We found that $\Delta_{2, {\rm max}} \gg \Delta_{1}$; thus FGRs would be mainly formed in Stage 2, the quasi-steady state in the recovery region.
The maximum thickness of FGRs formed in Stage 2 is $\Delta_{2, {\rm max}} \simeq 32\ {\left( {r_{0}}/{100\ \si{\micro}{\rm m}} \right)}\ \si{\micro}{\rm m}$, and this value can explain the existence of thick FGRs around chondrules found in CM chondrites \citep[e.g.,][]{2018E&PSL.481..201H}.
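These closed-form estimates are easy to verify numerically; the sketch below reuses \texttt{gas\_state} from the model section, and the post-shock sound speed used in \texttt{delta\_stage1} is our stated assumption. For $r_{0} = 100\ \si{\micro}{\rm m}$ and $L = 3 \times 10^{4}\ {\rm km}$ it returns $\Delta_{1} \simeq 2\ \si{\micro}{\rm m}$ and $\Delta_{2, {\rm max}} \simeq 33\ \si{\micro}{\rm m}$, close to the values quoted above.
\begin{verbatim}
CS_POST = 3.68e5   # c_s at T = 1600 K [cm/s] (assumed here)

def delta_stage1(r0, q_ad=0.5, chi=1.0,
                 v_min=0.1e5, v_max=1.0e5, cs=CS_POST):
    """Delta_1 = (0.64 Q_ad/4) chi (v_max - v_min)/c_s * r0."""
    return 0.16*q_ad*chi*(v_max - v_min)/cs*r0

def delta_stage2_max(r0, L, q_ad=0.5, chi=1.0, n_pts=400001):
    """Integrate dr/dx = 0.16 Q_ad chi |v_g - v_0| r0/(c_s L)."""
    x, dx = np.linspace(0.0, 20.0*L, n_pts, retstep=True)
    vg, _, _, cs = gas_state(x, L)
    return np.sum(0.16*q_ad*chi*np.abs(vg - V0)/(cs*L)*r0)*dx

# delta_stage1(1.0e-2)*1e4            -> ~2 microns
# delta_stage2_max(1.0e-2, 3.0e9)*1e4 -> ~33 microns
\end{verbatim}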
We note that the thickness of FGRs formed in Stage 2 is approximately equal to $\Delta_{2, {\rm max}}$ only when $v_{\rm min} < v_{\rm imp}|_{x = L} < v_{\rm max}$.
When $v_{\rm imp}|_{x = L} \gg v_{\rm max}$, the thickness of FGRs is smaller than $\Delta_{2, {\rm max}}$ because fine dust grains cannot accrete onto chondrules at around $x \sim L$.
This effect appears in the blue line in Figures \ref{fig:Delta-r-no-erosion}(d) and \ref{fig:Delta-r-erosion}(d); FGRs around chondrules with radius larger than $0.25\ {\rm mm}$ are thinner than $\Delta_{2, {\rm max}}$.
In addition, FGRs are not formed in Stage 2 when $v_{\rm imp}|_{x = L} \ll v_{\rm min}$.
We also note that the power-law exponent for the relation between $\Delta$ and $r_{0}$ (for chondrules in carbonaceous chondrites) is still under debate.
Although several studies \citep[e.g.,][]{1992GeCoA..56.2873M,2004Icar..168..484C} reported that $\Delta$ is approximately proportional to $r_{0}$, \citet{2018E&PSL.481..201H} pointed out that $\Delta$ is approximately proportional to the square root of $r_{0}$.
When accretion of FGRs occurs in the entire region of Stage 2, our model predicts that $\Delta$ is proportional to ${r_{0}}^{1 - \beta}$, where $\beta$ is the exponent for the velocity dependence of $Q_{\rm ad}$ (i.e., $Q_{\rm ad}$ is proportional to ${v_{\rm imp}}^{- \beta}$).
Thus the relation between $\Delta$ and $r_{0}$ could be reproduced if $\beta \simeq 0.5$ in the range of $v_{\rm min} < v_{\rm imp} < v_{\rm max}$.
Although we set $\beta = 0$ (i.e., $Q_{\rm ad}$ is constant) in this preliminary study, we need to investigate the velocity dependence of $Q_{\rm ad}$ from laboratory experiments.
\subsection{Co-existence of rimmed and unrimmed chondrules}
Although FGRs are frequently observed around chondrules in primitive chondrites, the occurrence rate is not 100\%.
For unequilibrated ordinary chondrites, the occurrence rate is 79\% for Semarkona, 70\% for Watonga, and 59\% for Bishunpur \citep{2017LPICo1987.6234B}.
In addition, the occurrence rate of FGRs is only 15--20\% for Allende CV chondrite \citep{2018E&PSL.494...69S}.
Therefore, we must give an explanation for the co-existence of rimmed and unrimmed chondrules in the context of FGR formation.
Several mechanisms have been proposed so far: \citet{2010GeCoA..74.4438T} claimed that unrimmed chondrules have lost FGRs during the brecciation process on their parent bodies, whereas \citet{2021A&A...652A..40U} proposed that unrimmed chondrules were formed via collisional fragmentation of chondritic aggregates in the solar nebula.
In our scenario, FGRs are formed via the kinetic dust aggregation process in the dusty region formed behind the evaporating icy planetesimal.
We note that dusty regions would be formed only when shock waves are caused by undifferentiated icy planetesimals; no dusty regions are expected for the case of differentiated planetesimals.
Therefore, if chondrules are formed via shock-wave heating events caused by both undifferentiated and differentiated planetesimals, we can expect the co-existence of rimmed and unrimmed chondrules.
As the critical diameter of icy planetesimals for differentiation would be approximately 1000 km, a fraction of the chondrules might have formed via shock waves caused by huge planetesimals (or protoplanets) whose diameter is far larger than 1000 km.
\subsection{The oxygen isotope ratios and Mg\# systematics of chondrules}
The Mg\# of chondrules, which is defined as Mg\# = [MgO] / [MgO + FeO] in molar percent, reflects the oxidation state of iron during chondrule formation, and we can estimate the environment of chondrule formation (e.g., oxygen fugacity) from the Mg\#.
The mass-independent oxygen isotope fractionation, ${\Delta}^{17}$O, is also useful to estimate the redox conditions and dust-to-ice mass ratio in chondrule formation environment \citep[e.g.,][]{2015GeCoA.148..228T,2018GeCoA.224..116H,2020PNAS..11723426W}.
\citet{2015GeCoA.148..228T} calculated the dust-to-gas and dust-to-ice mass ratios in chondrule formation environment for chondrules in CR chondrites.
Using the mass balance and the equilibrium condensation model, they reported that type I (Mg\# $>$ 90) chondrules would be formed in moderately dust-rich environments (100--200 times the solar metallicity) and from ice-dust mixtures with 0--0.8 times the abundance of ice in CI chondrites.
Similar results are also reported by \citet{2018GeCoA.224..116H} for type I chondrules in CV chondrites.
If chondrules formed via bow shocks around evaporating undifferentiated icy planetesimals, Equation (\ref{eq:chi}) predicts that the degree of dust enrichment would be on the order of 100 (i.e., the dust-to-gas mass ratio is on the order of 1).
This value is approximately consistent with the results from Mg\#--${\Delta}^{17}$O systematics for type I chondrules in carbonaceous chondrites \citep[e.g.,][]{2020PNAS..11723426W}.
The dust-to-ice mass ratio in chondrule formation environment would be approximately equal to the bulk composition of the planetesimals.
Therefore, undifferentiated icy planetesimals that are slightly dust-rich compared to the CI composition might be suitable to reproduce the oxygen isotope ratios and Mg\# systematics.
We will discuss the redox conditions and dust-to-ice mass ratio in chondrule formation environment in future studies.
\section{summary}
FGRs are frequently found around chondrules in primitive chondrites.
A remarkable feature of FGRs is their submicron grain size and non-porous nature \citep[e.g.,][]{2006GeCoA..70.1271T,2008GeCoA..72..602C}.
The typical thickness of FGRs around chondrules is 10--100 \si{\micro}m.
\citet{2019GeCoA.264..118L} proposed an idea for the origin of FGRs: high-speed collisions between chondrules and fine dust grains, which is called the kinetic dust aggregation process \citep[see][and references therein]{hanft2015overview}.
The resulting dust layer formed via the kinetic dust aggregation would have low porosity and be fine grained.
Therefore, it would be possible to reproduce the observed structure of FGRs if they are formed via the kinetic dust aggregation process, which should be related to chondrule-forming supersonic events.
In this study, we examined the possibility of FGR formation via kinetic dust aggregation in chondrule-forming shock waves (see Figure \ref{fig:schematic}).
When shock waves are caused by undifferentiated icy planetesimals, fine dust grains would be released from the planetary surface due to evaporation of icy planetesimals \citep[e.g.,][]{2013ApJ...764..120T}.
Then the dusty region would be formed behind the shock front.
We studied the dynamics of chondrules behind the shock front using simple one-dimensional calculations, and the growth of FGRs via kinetic dust aggregation was investigated.
Our key findings are summarized as follows.
\begin{enumerate}
\item{
As \citet{2019ApJ...877...84A} pointed out, the dynamical evolution of chondrules in shock waves can be divided into two stages: deceleration region behind the shock front (Stage 1) and recovery region where the velocity of chondrules and gas approach the pre-shock velocity (Stage 2).
We showed that $v_{\rm imp}$ is approximately proportional to $r_{0}$ in Stage 2.
}
\item{
We found that non-porous FGRs with the thickness of 10--100 \si{\micro}m are formed in shock waves around evaporating icy planetesimals (Figures \ref{fig:Delta-x-no-erosion} and \ref{fig:Delta-x-erosion}).
This thickness is in good agreement with observations \citep[e.g.,][]{1984PolRe..35..126M,2018E&PSL.481..201H}.
We also found that the thickness of FGRs formed in Stage 1 is significantly smaller than that formed in Stage 2.
}
\item{
We derived analytic solutions for the thickness of FGRs formed in Stages 1 and 2.
The motion of chondrules in Stage 1 is described as the deceleration behind the shock front, while the motion of chondrules in Stage 2 is described as the dynamically quasi-steady state.
Our analytical solutions also predict that the thickness of FGRs is proportional to the chondrule radius when the effect of erosion is negligible.
}
\item{
In some cases, the erosion of FGRs occurs but FGRs partly survive after erosion, and fine dust grains accrete onto chondrules again (see Figure \ref{fig:Delta-x-erosion}).
Thus multi-layered FGRs would be formed by a single shock-heating event; this might be consistent with the fact that chondrules in some CM2 chondrites are covered by multi-layered FGRs \citep{1992GeCoA..56.2873M}.
}
\item{
Although FGRs are frequently observed around chondrules in primitive chondrites, the occurrence rate is not 100\%.
In our scenario, FGR formation would proceed in the dusty region formed behind the evaporating icy planetesimal.
We note that dusty regions would be formed only when shock waves are caused by undifferentiated icy planetesimals; no dusty regions are expected for the case of differentiated planetesimals.
Therefore, if chondrules are formed via shock-wave heating events caused by both undifferentiated and differentiated planetesimals, we can expect the co-existence of rimmed and unrimmed chondrules.
}
\end{enumerate}
\section*{Acknowledgments}
The anonymous reviewer provided a constructive review that improved this paper.
The authors thank Yuji Matsumoto for helpful comments.
S.A.\ was supported by JSPS KAKENHI Grant No.\ JP20J00598.
T.N.\ was supported by JSPS KAKENHI Grant No.\ JP18K03721.
\section{Introduction}
\subsection{Context}
Compartmental models, based on ordinary differential equations, have a long history of use in epidemiology. Indeed, it is now almost a century since William Ogilvy Kermack and Anderson Gray McKendrick introduced the well known SIR model, \cite{MK}, on which most compartmental modeling elaborates. Their relevance stems from their extreme simplicity and ability to capture important qualitative behaviors, rather than from their limited capacity to make quantitative predictions. As a matter of fact, the current outbreak of COVID-19, caused by SARS-CoV-2, is teaching us that most models, not only the compartmental ones, have very limited quantitative predictive capacity. Nevertheless, there are important qualitative features of outbreaks that one may attempt to capture, for instance: under which conditions can one guarantee that there will be only one peak? This stands as a very important question which ought to be answered if one is to effectively prevent a second, or third, wave of infections, especially having in mind the concerns to be addressed once a first wave has passed. The goal of this article is to propose and analyze some compartmental models which have been constructed to capture some important features of diseases such as COVID-19. The two major characteristics which are not captured by the standard compartmental models are:
\begin{itemize}
\item[(i)] The long incubation period, during which the individuals that have been exposed do not yet develop symptoms but are already contagious (at least in the second half of that period). Even though the standard compartmental models do not take into account the contagiousness of the exposed individuals, they are easy to modify in order to account for it.
\item[(ii)] There is a non-negligible fraction of the contagious population which either has mild symptoms or never develops them, thus passing undetected. These are the so-called asymptomatic carriers. After some period of time they are no longer contagious and, like the infected individuals, have developed antibodies. This suggests they are, at least for now, immune. The standard compartmental models which have a carrier compartment are not suitable for modeling such behavior.
\end{itemize}
In this article, namely in Sections \ref{sec:SEIR} and \ref{sec:SEIAR}, we analyze some compartmental models that have been specially designed to handle these two features. The resulting compartmental models are more complicated than the standard ones and, using a mix of rigorous and numerical results, we shall answer some fundamentally important questions which we now pose:
\begin{question}\label{que:1}
Regarding each of the models:
\begin{itemize}
\item[(a)] Are there disease-free equilibrium states?
\item[(b)] Having answered (a) positively, one may ask whether there are necessary or sufficient conditions for these disease free equilibrium states to be stable in some sense.
\end{itemize}
\end{question}
\begin{question}\label{que:2}
Again, for each of the models one can ask:
\begin{itemize}
\item[(a)] Are there criteria guaranteeing that any outbreak of the epidemic will have a unique peak?
\item[(b)] If these criteria are violated, can the outbreak have other separate peaks, i.e. second, third waves and so on?
\item[(c)] Can these possible later peaks be higher than the first?
\end{itemize}
\end{question}
Depending on which model one considers, these two questions will be answered separately in Sections \ref{sec:SEIR} and \ref{sec:SEIAR}. For instance, using the more refined model we give in Corollary \ref{cor:One_Peak} and Remark \ref{rem:After_Cor} a quantitative condition ensuring that only one peak will form. The broad conclusion is that if all transmission rates are kept sufficiently small, then only one peak forms, which answers item (a) of question \ref{que:2}. If this condition is not met, more peaks can form, as we show in example \ref{ex:Second_Wave_Second_Model}, and these can be higher, thus answering items (b) and (c) of question \ref{que:2}.\\
As for the next question, it will not be answered in any precise way but we shall give some evidence on how to answer it based on numerical results. This will be done in the examples \ref{ex:Realistic}, \ref{ex:Realistic2} and \ref{ex:Second_Wave_Second_Model} presented in subsection \ref{ss:Controling_Outbreak_A}.
\begin{question}\label{que:3}
What is the role of asymptomatic carriers? Can they help the spread of the disease and thus make the outbreak worse by anticipating and increasing its peak, and/or can they shield the rest of the population by creating herd immunity?
\end{question}
One other major question which one poses now is related to quantifying by how much specific policies reduce the rate of transmission.
\begin{question}\label{que:4}
Is there a strategy to effectively reduce the rate of transmission in a manner that can be quantified?
\end{question}
Broadly speaking, for a general strategy or policy it is hard, if not impossible, to make such a quantitative analysis. For instance, how can we predict by how much the use of masks by the general population reduces the rate of transmission? Or washing hands? Indeed, it is impossible to say a priori. We only know that these measures do reduce the rate of transmission, but not by how much. Having this in mind, we propose one further measure whose contribution to the reduction of the transmission rate can be quantified using the models. The ``idea'' is to split the population into $n$ different groups which are supposed to never interact; this reduces the contact rate of any individual by a factor of $n$, thus reducing the overall transmission rates by a factor of $n$. This intuitive argument can be made more precise using the models we use. The last section of this article, Section \ref{sec:Groups}, is dedicated to making such an argument and running a simulation. We shall also explain how this relates to our answers to questions \ref{que:1} and \ref{que:2} above.
\subsection{A note on the timing of this article}
Once we are through the worst part of the first wave of the COVID-19 outbreak with relative success, meaning that only a small fraction of the population got infected and so we cannot count on herd immunity, the question we must pose is: how can we proceed and go back to relative normality so that a putative second wave is as mild as possible?\\
I hope that the answers to question \ref{que:2} above, stated as Proposition \ref{lem:Only_One_Maximum} and Corollary \ref{cor:One_Peak} depending on the model, may serve as a good indication that it is possible, but difficult, to have only one peak. They quantify and show what one could already suspect: that the way to do so is to reduce all the transmission rates. It is enough that one transmission rate be high for the outbreak to be out of control, see Conclusion \ref{conclusion:Second_Model}. How to keep the transmission rates small remains a formidable challenge which I do not attempt to answer.\\
As said before, there is no way to quantify how most policies reduce the transmission rates. In that direction, Section \ref{sec:Groups} explains a strategy which may be difficult to implement, but whose effect on the transmission rate can be pinned down.
\begin{disclaimer}
I understand that the strategy of splitting the population into non-interacting groups is of difficult implementation. I do believe it cannot be literally implemented in a society which respects our civil liberties, and so it would remain at the level of a request to the population, which one would hope to be understood. Of course, in that scenario the different groups would interact, even if mildly. Taking such considerations into account leads to modifications of the proposed solutions, which would consequently turn the models one should use into mathematically more intractable ones. We may hope that the increased mathematical sophistication of those models will not lead to substantially different qualitative outcomes but only quantitative ones. In any case, one must understand that the models are simply that: attempts to model a reality which is much more complex. They are, by no means, a truthful reflection of the real world.
\end{disclaimer}
\subsection*{Acknowledgment}
I want to thank \'Alvaro Kruger Ramos for his comments on an earlier version of this manuscript.\\
Gon\c{c}alo Oliveira is supported by Funda\c{c}\~ao Serrapilheira 1812-27395, by CNPq grants 428959/2018-0 and 307475/2018-2, and by FAPERJ through the grant Jovem Cientista do Nosso Estado E-26/202.793/2019.
\section{A discussion of possible compartmental models}
Recall that the main goal of this article is to find good and simple compartmental models which realistically capture the properties of a disease similar to COVID-19, at least in terms of the dynamics of transmission. We intend to capture the following:
\begin{itemize}
\item[(i)] Account for the contagiousness of exposed individuals which may have a long incubation period;
\item[(ii)] Account for asymptomatic carriers;
\item[(iii)] Account for the possibly non-negligible mortality rate associated with the disease.
\end{itemize}
In fact, looking at a time scale of a few months, it is actually conceivable that the background birth and death dynamics are negligible when compared with the death rate associated with the disease itself. Thus, on such a time scale, it is perhaps more realistic to only associate a death rate with the infected population. For an introduction to the use of compartmental models in epidemiology see for example \cite{C} and \cite{AR}.
\subsection{The modified SEIR model}
We shall start by constructing a version of the SEIR model where $S$, $E$, $I$, $R$ represent the fractions of the population which are susceptible, exposed, infected and recovered, respectively. In order to keep the system as simple as possible one may attempt to regard the asymptomatic carriers as being part of the exposed population. For this one must regard each exposed individual as representing an average of someone in the incubation period and an asymptomatic carrier.
We are then led to the following system
\begin{align}\label{eq:ODE1}
\dot{S} & = - \beta_i I S - \beta_e E S \\ \label{eq:ODE2}
\dot{E} & = \beta_i I S + \beta_e E S - a E \\ \label{eq:ODE3}
\dot{I} & = a E - \gamma I - \mu I \\ \label{eq:ODE4}
\dot{R} & = \gamma I ,
\end{align}
for some positive functions of time $\beta_e$, $\beta_i$, $a$, $\gamma$, $\mu$. The parameters $\beta_e$, $\beta_i$ represent the transmission rates for the exposed and infected respectively, $a$ denotes the rate of transition from the exposed to the infected state, $\gamma$ the rate of recovery, and $\mu$ the death rate associated with the disease. For example, in the easiest case, one would take
$$\beta_i = \frac{\overline{\beta_i}}{S(t)+E(t)+I(t)+R(t)}, \ \text{and} \ \beta_e = \frac{\overline{\beta_e}}{S(t)+E(t)+I(t)+R(t)}, $$
for some constants $\overline{\beta_e}$, $\overline{\beta_i}$. This is not the standard SEIR model as we are forcefully making the exposed interact with the susceptible as the first of these are assumed to be contagious. The analysis of this model is the scope of Section \ref{sec:SEIR}.
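For readers who wish to experiment, the following Python snippet is a minimal sketch (an illustration of mine, not part of the analysis below) of how this system can be integrated numerically with constant transmission rates; the parameter values are the illustrative ones of Example \ref{ex:First} below.
\begin{verbatim}
# Minimal sketch: integrate the modified SEIR system with constant
# rates (illustrative values, the ones of Example ex:First).
from scipy.integrate import solve_ivp

beta_i, beta_e, a, gamma, mu = 0.9, 2.5, 1.0, 0.9, 0.1

def rhs(t, y):
    S, E, I, R = y
    new_exposed = (beta_i * I + beta_e * E) * S  # transmission terms
    return [-new_exposed,                        # dS/dt
            new_exposed - a * E,                 # dE/dt
            a * E - (gamma + mu) * I,            # dI/dt
            gamma * I]                           # dR/dt

sol = solve_ivp(rhs, (0.0, 40.0), [0.99, 0.01, 0.0, 0.0], max_step=0.05)
print("final state (S, E, I, R):", sol.y[:, -1])
\end{verbatim}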
\subsection{The SEIAR model}\label{ss:SEIAR}
There are a few easy ways to further refine this model. First, we may split the exposed into two groups: those which are not yet infectious and those which already are. Secondly, we may have some of the exposed passing directly to the recovered state, to account for the asymptomatic carriers which never develop symptoms and therefore pass undetected. Thirdly, we may distinguish the asymptomatic carriers in a more effective way, as they may remain contagious for a longer period of time.
\subsubsection{Splitting the exposed into groups (SEEIR)}
We may attempt to split the exposed into two new compartments: the initially exposed $E_i$ and the final exposed state $E_f$. The idea is that the initially exposed are not yet contagious while those in the final state are. To account for the first two requirements above we propose the following system.
\begin{align*}
\dot{S} & = - \beta_i I S - \beta_e E_f S \\
\dot{E}_i & = \beta_i I S + \beta_e E_f S - a_i E_i \\
\dot{E}_f & = a_i E_i - a_f E_f - \gamma_e E_f \\
\dot{I} & = a_f E_f - \gamma_i I - \mu I \\
\dot{R} & = \gamma_i I + \gamma_e E_f.
\end{align*}
I expect this model to be quite good at capturing many of the qualitative phenomena. However, given the interest in explicitly analyzing the asymptomatic carriers, we shall instead focus on a different model.
\subsubsection{Having the asymptomatic carriers separate (SEIAR)}\label{subsubsec:Asymptomatic}
The goal of this model is to better capture the effects of asymptomatic carriers. In the case of COVID-19 we know these form a non-trivial part of all those which are contagious. However, given the small number of antibody tests that have been made so far, little quantitative knowledge regarding these is available.\\
In order to account for the asymptomatic carriers we consider one more compartment of the population, denoted by $A$, which represents the fraction of the population consisting of asymptomatic carriers. Then, we propose the following model
\begin{align*}
\dot{S} & = - \beta_i I S - \beta_e E S - \beta_a A S \\
\dot{E} & = \beta_i I S + \beta_e E S + \beta_a A S - a_i E - a_a E \\
\dot{A} & = a_a E - \gamma_a A \\
\dot{I} & = a_i E - \gamma_i I - \mu I \\
\dot{R} & = \gamma_i I + \gamma_a A ,
\end{align*}
for some positive functions of time $\beta_i$, $\beta_e$, $\beta_a$, $a_i$, $a_a$, $\gamma_i$, $\gamma_a$, $\mu$ which need not be constants. This system will be carefully analyzed in Section \ref{sec:SEIAR}. For now, we shall simply comment on the biological meaning of all the parameters of the model. Similar remarks hold for the parameters of the simpler SEIR type model. The $\beta_i$, $\beta_e$, $\beta_a$ respectively represent the transmission rates of the infected, exposed (in the incubation period), and asymptomatic individuals. The parameters $a_i$, $a_a$ represent the rates at which the exposed individuals pass to the infected and asymptomatic compartments, respectively. In particular, $\tfrac{a_i}{a_i+a_a}$ and $\tfrac{a_a}{a_i+a_a}$ respectively represent the fractions of the exposed population which will eventually become infected or asymptomatic. The quantity $\gamma_i^{-1}$ represents the average time an infected individual takes to recover, while $\gamma_a^{-1}$ is the average time an asymptomatic individual takes to stop being contagious. Finally, $\mu$ stands for the death rate associated with the disease. For example, in the easiest case one would take
$$\beta_i = \frac{\overline{\beta_i}}{S(t)+E(t)+A(t)+I(t)+R(t)}$$
for a constant $\overline{\beta_i}$ with similar formulas for $\beta_e$ and $\beta_a$. For convenience, we show in figure \ref{fig:Diagram} a diagrammatic presentation of this model.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth,height=0.3\textheight]{Diagram}
\caption{Diagram of the SEIAR model.}
\label{fig:Diagram}
\end{figure}
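In the same spirit as the sketch of the previous subsection, the right-hand side of this system translates directly into code; the following Python sketch assumes constant rates, passed explicitly as arguments (an illustration of mine, not taken from the references).
\begin{verbatim}
# Sketch of the SEIAR right-hand side with constant rates.
def seiar_rhs(t, y, beta_i, beta_e, beta_a, a_i, a_a,
              gamma_i, gamma_a, mu):
    S, E, A, I, R = y
    new_exposed = (beta_i * I + beta_e * E + beta_a * A) * S
    return [-new_exposed,                    # dS/dt
            new_exposed - (a_i + a_a) * E,   # dE/dt
            a_a * E - gamma_a * A,           # dA/dt
            a_i * E - (gamma_i + mu) * I,    # dI/dt
            gamma_i * I + gamma_a * A]       # dR/dt
\end{verbatim}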
A somewhat similar model has also recently been used in \cite{Re} and in \cite{Vo}. The first of these attempted to predict the fraction of undocumented infections for COVID-19 in China. One other similar version using Markov chains has also recently been used to predict the spatiotemporal spread of COVID-19 in Spain, see \cite{Ae} for this work.
\subsection{Refining the models}
There are a few natural ways to further refine the models proposed above.
\subsubsection{With exposed split and asymptomatic carriers separate (SEEAIR)}\label{subsubsec:Combined}
This is a combination of the two models proposed in subsection \ref{ss:SEIAR}. This model both splits the exposed population into two groups and includes the asymptomatic carriers.
\begin{align*}
\dot{S} & = - \beta_i I S - \beta_e E_f S - \beta_a A S \\
\dot{E}_i & = \beta_i I S + \beta_e E_f S + \beta_a A S - a E_i \\
\dot{E}_f & = a E_i - a_i E_f - a_a E_f \\
\dot{A} & = a_a E_f - \gamma_a A \\
\dot{I} & = a_i E_f - \gamma_i I - \mu I \\
\dot{R} & = \gamma_i I + \gamma_a A .
\end{align*}
This differs from the previous model by the splitting $E=E_i+E_f$, with the $E_i$ not yet infectious. The $E_f$ may then become visibly infected ($I$), or exhibit only mild symptoms or even remain completely asymptomatic ($A$).
\subsubsection{With birth and death dynamics}
We can further refine the previous model by including a birth and death dynamics. We assume there is a birth rate $\Lambda$ of healthy individuals and a normal death rate $\mu_n$ by other conditions totally unrelated to the disease which affects all the population. In this manner the model becomes
\begin{align*}
\dot{S} & = \Lambda - \beta_i I S - \beta_e E_f S - \beta_a A S - \mu_n S \\
\dot{E}_i & = \beta_i I S + \beta_e E_f S + \beta_a A S - a E_i - \mu_n E_i \\
\dot{E}_f & = a E_i - a_i E_f - a_a E_f - \mu_n E_f \\
\dot{A} & = a_a E_f - \gamma_a A - \mu_n A \\
\dot{I} & = a_i E_f - \gamma_i I - \mu I -\mu_n I\\
\dot{R} & = \gamma_i I + \gamma_a A -\mu_n R.
\end{align*}
As we are interested in running the model for short periods of time, such as a few months, I expect the birth rate to be relatively small and thus negligible. Similarly, as the normal death rate equally affects the whole population, I believe that including it in the model would simply lead to unimportant complications.
\section{The simplest somewhat realistic model}\label{sec:SEIR}
The goal of this section is to analyze the modified SEIR model proposed in the previous section. For convenience we recall here the system of ordinary differential equations ruling it.
\begin{align*}
\dot{S} & = - \beta_i I S - \beta_e E S \\
\dot{E} & = \beta_i I S + \beta_e E S - a E \\
\dot{I} & = a E - \gamma I - \mu I \\
\dot{R} & = \gamma I .
\end{align*}
It is straightforward to check that all $S,E,I,R$ remain nonnegative and that $S+E+I+R $ is non-increasing and so $S+E+I+R \leq S(0)+E(0)+I(0)+R(0) =1$.
\subsection{The linearized system}
From the last equation above, i.e. \ref{eq:ODE4}, a critical point
$$(S,E,I,R)=(S_c,E_c,I_c,R_c)$$
will certainly have $I_c=0$, which then also gives $E_c=0$ from the penultimate equation \ref{eq:ODE3}. From the remaining equations \ref{eq:ODE1} and \ref{eq:ODE2} we conclude that $R_c$ and $S_c$ can be any two constants such that $R_c+S_c \leq 1$. These are disease free equilibrium points and are obviously not isolated. Thus, the Hartman-Grobman theorem cannot be applied, but I believe the linearization still carries important information. Thus, we shall linearize the system around such an equilibrium and use it to define the following notion.
\begin{definition}
A disease free equilibrium point $(S_c , 0, 0, R_c)$ is called infectiously-stable if the associated linear system is such that any of its solutions $(s,e,i,r)$ satisfies
$$\lim_{t \to + \infty} i(t)=0.$$
\end{definition}
Intuitively, we may think of this condition as saying that perturbing away from the equilibrium always makes the disease become extinct. In what remains of this linear analysis we shall suppose $\beta_i$, $\beta_e$ are constant, which is a reasonable assumption for large $t \gg 1$. In order to study their stability, we shall now linearize the equations around these equilibria. The linearized system is given by
\begin{align*}
\dot{s} & = - \beta_i S_c i - \beta_e S_c e \\
\dot{e} & = \beta_i S_c i + \beta_e S_c e - a e \\
\dot{i} & = a e - (\gamma +\mu ) i \\
\dot{r} & = \gamma i .
\end{align*}
Clearly, the pair $(e,i)$ evolves independently and controls the whole dynamics. The stability is therefore determined by the signs of the eigenvalues. As we want both of these to be negative we need to have $\det(A)>0$ and $\tr(A)<0$, where $A$ is the matrix
$$A= \begin{pmatrix}
\beta_e S_c - a & \beta_i S_c \\
a & - (\gamma +\mu )
\end{pmatrix},$$
which controls the system. Thus, the condition for stability is
\begin{align*}
- (\gamma +\mu )(\beta_e S_c - a) - a \beta_i S_c & > 0 \\
\beta_e S_c - (\gamma +\mu ) & < 0,
\end{align*}
which we can rewrite as
\begin{align*}
a \beta_i S_c & < (\gamma +\mu )(a-\beta_e S_c) \\
\beta_e S_c & < (\gamma +\mu ) .
\end{align*}
The second of these equations says that the rate at which the exposed infect other people must be smaller than the rate of recovery (plus the death rate). The first says that not only must the exposed be infecting people more slowly than they pass to the infected state ($\beta_eS_c < a$), but also the already infected must infect others at a very slow rate when compared to the recovery rate. More precisely, we have obtained the following result.
\begin{lemma}
For generic values of $\beta_e$, $\beta_i$, $a$, $\gamma$, $\mu$, the disease free equilibrium $(S_c , 0, 0, R_c)$ is infectiously-stable if
\begin{equation}\label{eq:Upper_Bound_S_c}
S_c < \min \big\lbrace \frac{\gamma+\mu}{\beta_e} , \frac{a(\gamma+ \mu)}{a \beta_i + (\gamma+\mu) \beta_e} \big\rbrace .
\end{equation}
In particular, if both $\frac{\gamma+\mu}{\beta_e}$ and $\frac{a(\gamma+ \mu)}{a \beta_i + (\gamma+\mu) \beta_e}$ are greater than one, we find that any disease free equilibrium is stable.
\end{lemma}
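As a sanity check, the criterion of the lemma can be compared with the numerically computed eigenvalues of $A$; the following sketch does this for one illustrative choice of constant parameters.
\begin{verbatim}
# Sketch: both eigenvalues of A should have negative real part
# exactly when S_c is below the bound of the lemma (illustrative).
import numpy as np

beta_i, beta_e, a, gamma, mu, S_c = 0.9, 2.5, 1.0, 0.9, 0.1, 0.2
A = np.array([[beta_e * S_c - a, beta_i * S_c],
              [a, -(gamma + mu)]])
bound = min((gamma + mu) / beta_e,
            a * (gamma + mu) / (a * beta_i + (gamma + mu) * beta_e))
print(np.linalg.eigvals(A).real, S_c < bound)
\end{verbatim}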
\begin{remark}\label{conclusion:First_Model}
It is possible to reduce the population of those infected keeping many people $S_c<S(0)$ still susceptible (without ever having been infected). One way to do that is to have the transmission rate of those exposed small when compared to the rate of recovery, i.e.
$$\frac{\beta_e}{\gamma + \mu} < 1 .$$
Furthermore, one must have the rate of transmission of those infected even smaller when compared with rate of recovery, namely that
$$ \beta_i< (\gamma + \mu)\left(1-\frac{\beta_e}{a}\right) , $$
which in particular also assumes that $\beta_e < a$, i.e. that the rate at which the exposed infect the susceptible is small when compared with the rate of transition from the exposed to infected state. We may also rewrite this last condition as
$$\frac{\beta_i}{\gamma+\mu} + \frac{\beta_e}{a} < 1,$$
which suggests regarding the left-hand side as an analogue of the basic reproduction rate for this model.\\
Achieving these conditions may, however, be a difficult task. If they are not met, then the final population of susceptibles may be very low, which is quite bad.
\end{remark}
\subsection{Global stability}\label{subsec:Global_Stability}
We shall now find a sufficient condition for a solution to the system \ref{eq:ODE1}--\ref{eq:ODE4} to asymptotically approach a disease free equilibrium state. With this in mind, it is convenient to abstract such a notion as follows.
\begin{definition}
A solution $(S,E,I,R)$ to \ref{eq:ODE1}--\ref{eq:ODE4} is called asymptotically disease free if
$$\lim_{t \to + \infty} E(t)+I(t) =0 .$$
\end{definition}
\begin{proposition}
Let $(S,E,I,R)$ be a solution to \ref{eq:ODE1}--\ref{eq:ODE4} satisfying
\begin{equation}\label{eq:assumption0}
\sup_{t \geq 0}\left( \frac{\beta_i}{\gamma+\mu} \right) + \sup_{t \geq 0}\left( \frac{\beta_e}{a} \right) < 1.
\end{equation}
Then, $(S,E,I,R)$ is asymptotically disease free.
\end{proposition}
\begin{proof}
Let $\epsilon \in (0,1)$ be a constant to be fixed later and consider the function $L_{\epsilon}(t)= E(t) + \epsilon I(t)$. Then, using equations \ref{eq:ODE2} and \ref{eq:ODE3} we compute
\begin{align*}
\dot{L_\epsilon} & = \dot{E} + \epsilon \dot{I} \\
& = (\beta_eES + \beta_i I S -aE) + \epsilon( a E - (\gamma+\mu)I) \\
& = a E \left( \frac{\beta_e}{a} S - (1-\epsilon) \right) + (\gamma+\mu) I \left( \frac{\beta_i}{\gamma+\mu} S - \epsilon \right) .
\end{align*}
Then, by the assumption \ref{eq:assumption0} there is $\epsilon \in (0,1)$ such that $\sup_{t \geq 0}\left( \frac{\beta_i}{\gamma+\mu}\right) S(0) < \epsilon$ and $\sup_{t \geq 0}\left( \frac{\beta_e}{a} \right) S(0) < 1 - \epsilon$. Picking such an $\epsilon$, the above computation shows that $L_{\epsilon}$ is decreasing and satisfies an inequality of the form
$$\dot{L_\epsilon} \leq - \delta L_{\epsilon},$$
for some $\delta >0$. Then, Gr\"onwall's inequality yields that $L_{\epsilon} \leq L_{\epsilon}(0) e^{-\delta t}$ and thus converges to $0$ as $t \to \infty$. In particular, as both $E$ and $I$ are non-negative we find that both of these must converge to $0$.
\end{proof}
\begin{remark}
The previous proof actually shows that it is enough that
$$\sup_{t \geq 0}\left( \frac{\beta_i}{\gamma+\mu} S(t) \right) + \sup_{t \geq 0}\left( \frac{\beta_e}{a} S(t) \right) < 1,$$
for the same conclusion to hold. Furthermore, it shows that under the above hypothesis, both $E$ and $I$ exponentially converge to $0$.
\end{remark}
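The exponential decay of $L_\epsilon$ can also be observed numerically. The sketch below uses illustrative rates satisfying \ref{eq:assumption0}, together with $\epsilon=0.5$, which is an admissible choice for these values, and checks that $L_\epsilon$ is non-increasing along a numerical solution.
\begin{verbatim}
# Sketch: check that L(t) = E + eps*I decreases when the sufficient
# condition holds; all values below are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

beta_i, beta_e, a, gamma, mu, eps = 0.3, 0.4, 1.0, 0.9, 0.1, 0.5
# Here beta_i/(gamma+mu) + beta_e/a = 0.3 + 0.4 = 0.7 < 1.

def rhs(t, y):
    S, E, I, R = y
    ne = (beta_i * I + beta_e * E) * S
    return [-ne, ne - a * E, a * E - (gamma + mu) * I, gamma * I]

sol = solve_ivp(rhs, (0.0, 30.0), [0.99, 0.01, 0.0, 0.0], max_step=0.01)
L = sol.y[1] + eps * sol.y[2]
print("L non-increasing:", bool(np.all(np.diff(L) <= 1e-10)))
\end{verbatim}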
\subsection{Examples and numeric simulations}
\begin{example}\label{ex:First}
Consider this model with $S(0)=0.99$, $I(0)=0$, $E(0)=0.01$ and $R(0)=0$ (for small nonzero $I(0),E(0)$ the outcome does not seem to depend much on the precise values); the simulation is shown in figure \ref{fig:First}. In this precise case, only a very small fraction of the population got away without being infected.
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth,height=0.25\textheight]{First}
\caption{Example when $\beta_i=0.9$, $\beta_e=2.5$, $a=1$, $\gamma=0.9$, $\mu=0.1$ which has
$$\frac{\beta_e}{\gamma+\mu} \sim 2.78 , \ \ \frac{\beta_i}{\gamma+\mu}+ \frac{\beta_e}{a} \sim 3.5,$$
and initial conditions $S(0)=0.99$, $I(0)=0$, $E(0)=0.01$ and $R(0)=0$. The red, blue, green and purple curves respectively denote the fraction of the population composed of susceptible, exposed, infected and recovered.}
\label{fig:First}
\end{figure}
Indeed, from equation \ref{eq:Upper_Bound_S_c} for the stability of the disease free equilibrium we find that it must satisfy
\begin{equation}
S_c < \min \big\lbrace \frac{\gamma+\mu}{\beta_e} , \frac{a(\gamma+ \mu)}{a \beta_i + (\gamma+\mu) \beta_e} \big\rbrace =\frac{2}{7} ,
\end{equation}
which is in agreement with what we see in figure \ref{fig:First}.\\
On the other hand, we may consider the same model but supposing that social isolation and other measures have been put in place to reduce the transmission rate. In figure \ref{fig:First2} we run one such example.
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth,height=0.25\textheight]{Fourth}
\caption{Example when $\beta_i=0.45$, $\beta_e=1.25$, $a=1$, $\gamma=0.9$, $\mu=0.01$ which has
$$\frac{\beta_e}{\gamma+\mu} \sim 1.389 , \ \ \frac{\beta_i}{\gamma+\mu}+ \frac{\beta_e}{a} \sim 1.75,$$
and initial conditions $S(0)=0.99$, $I(0)=0$, $E(0)=0.01$ and $R(0)=0$. The color code is that of figure \ref{fig:First}.}
\label{fig:First2}
\end{figure}
In this case, equation \ref{eq:Upper_Bound_S_c} gives that the stable disease free equilibrium to which the model converges satisfies $S_c<\tfrac{4}{7}$ which is compatible with what can be inferred from figure \ref{fig:First2}.
\end{example}
\begin{remark}
It would be interesting to actually get a lower bound on $S_c$ rather than an upper bound.
\end{remark}
\subsubsection{Example with two peaks}
We could now try to make $\beta_i$ and $\beta_e$ periodic in such a way that, on average, the inequalities in Conclusion \ref{conclusion:First_Model} hold true. In that case, I expect that, after the peak, the number of infected people will also decrease, on average, with time. Of course, it may be that, with no extra measures, the actual values of $\beta_e$ and $\beta_i$ are so high that opening up (even if only for a short period of time) will make the task of making the inequalities above hold on average very difficult. It is exactly this scenario that we explore in the next example, where after a strict control of the transmission rate, it is once again allowed to become large. This is an alert to the dangers of a non-careful reopening after social distancing measures have been in place for an insufficient time for herd immunity to develop. In this example we can actually see that the second peak can be made larger than the first, which provides an answer to items (b) and (c) of question \ref{que:2}.
\begin{example}\label{ex:Second_Peak_Easy_Model}
Figure \ref{fig:Second_Peak} shows an example where, after seeing positive signs in the decrease of the number of infected, the rate of transmission is allowed to increase. This leads to the formation of a second peak of infections which ends up infecting almost everyone.\footnote{This model is made with $\gamma=0.9$, $\mu=0.01$, $a=1$ and $\beta_i=0.9(\tfrac{H(1-t)}{10}+H(t-5))$, $\beta_e=2.5(\tfrac{H(1-t)}{10}+H(t-5))$, where $H(t)=\tfrac{1}{10}+S_{23}(t)$ for $S_{23}$ the first 23 terms of the Fourier series in $[-20,20]$ of the Heaviside function.}
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth,height=0.25\textheight]{Second_Peak}
\caption{Example of an un-careful reopening which leads to a second peak.}
\label{fig:Second_Peak}
\end{figure}
Indeed, we can see in the plot in figure \ref{fig:Criteria_1} that the conditions in Conclusion \ref{conclusion:First_Model} are violated right before time $t=5$. This leads to a change of tendency which makes the curve of infected start increasing again, eventually leading to the formation of the second peak. This is an example of something we want to avoid.
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth,height=0.25\textheight]{Criteria_1}
\caption{Plots of $\frac{\beta_e}{\gamma+\mu}$ and $\frac{\beta_i}{\gamma+\mu} + \frac{\beta_e}{a}$ which one can see to surpass the value $1$ before $t=5$.}
\label{fig:Criteria_1}
\end{figure}
\end{example}
\begin{remark}
This example leads to a fundamental question. Is there a criterion which guarantees that no second peak will form? This is precisely the problem raised in item (a) of question \ref{que:2}. Preferably, such a criterion should depend on $\beta_e$ and $\beta_i$, which are the quantities that can be somewhat controlled using social distancing and other measures. We shall answer this question in Proposition \ref{lem:Only_One_Maximum}.
\end{remark}
\subsection{Rigorous qualitative results}
First, notice that from the first equation \ref{eq:ODE1} we find that $\dot{S} \leq 0$ so $S(t)$ is decreasing. Consider the asymptotic fraction of susceptible population
$$S_c := \lim_{t \rightarrow + \infty} S(t) , $$
and recall that it is in our interest that $S_c$ be positive and as big as possible (of course $S_c \leq S(0) \leq 1$). Similarly, we shall define $I_c$ and $E_c$ as the asymptotic values of $I$ and $E$, if they exist.
\begin{lemma}\label{lem:E_I_to_zero}
Suppose that $S_c >0$, then $I_c=0=E_c$.
\end{lemma}
\begin{proof}
Again, from the first equation \ref{eq:ODE1} we have
$$S(t) = S(0) \exp \left( - \int_0^t (\beta_iI+\beta_eE) ds \right).$$
which converges to a positive limit only if the integral $\int_0^\infty (\beta_iI+\beta_eE)\, ds$ is finite, and this forces $I$ and $E$ to converge to zero.
\end{proof}
Political measures can more easily affect the transmission rates than the recovery and death rates, which are more dependent on medical conditions. Thus, in the next result we find a potentially useful criterion for having only one peak which does not assume the transmission rates $\beta_i, \beta_e$ to be constant. This provides a rigorous answer to item (a) of question \ref{que:2}.
\begin{proposition}\label{lem:Only_One_Maximum}
Suppose that $a, \gamma, \mu$ are all constant and $(S,E,I,R)$ is a nonconstant solution of \ref{eq:ODE1}--\ref{eq:ODE4}. If there is a time $t_\ast$ such that $\dot{I}(t_\ast)<0$ and
$$S(t_\ast)\leq \inf_{t \geq t_*} \left( \frac{a (\gamma+\mu)}{a \beta_i + (\gamma + \mu) \beta_e} \right),$$
then, for $t \geq t_\ast$, the fraction of infected population $I(t)$ has at most one critical point. Moreover, if any such critical point exists, then it is a maximum. In particular, if
\begin{equation}\label{eq:Criteria_No_Second_Peak}
\sup_{t \geq 0} \left( \frac{\beta_e}{a} + \frac{\beta_i }{\gamma+\mu} \right) \leq 1,
\end{equation}
then $I(t)$ has at most one maximum.
\end{proposition}
\begin{proof}
First notice that if $I$ and $E$ both vanish at some point, then the solution is constant. Then, we may suppose this is not the case and differentiate the third equation \ref{eq:ODE3}, using the second \ref{eq:ODE2} to substitute for $\dot{E}$. This gives
\begin{align*}
\ddot{I} = a \dot{E} - (\gamma+\mu) \dot{I} = a (\beta_i I S + \beta_e E S- a E) - (\gamma+\mu) \dot{I},
\end{align*}
which at a critical point of $I$ yields
$$\ddot{I} = a (\beta_i I S + \beta_e E S ) - a^2 E . $$
As between any two maxima there must be a minimum, in order for $I$ to only have one maximum it is enough that it has no minimum. At a minimum we would have $\ddot{I} \geq 0$ which from the previous computation is ruled out if
$$ a \beta_i I S + ( \beta_e S - a ) aE < 0. $$
However, the condition that we are at a critical point of $I(t)$, i.e. $\dot{I}(t)=0$, yields that $aE=(\gamma+\mu) I$ which upon inserting above gives $ a \beta_i S + ( \beta_e S - a ) (\gamma+\mu) <0 $, which in terms of $S$ reads as
$$S<\frac{a (\gamma+\mu)}{a \beta_i + (\gamma + \mu) \beta_e}.$$
Thus, if $S$ always satisfies this inequality past some time, then past that time there is at most one critical point of $I$, and this must be a maximum, if it exists at all.
\end{proof}
\begin{remark}
The previous result gives a criterion to guarantee that a second peak will not form. For that, one must try to control $\beta_e$ and $\beta_i$ so that equation \ref{eq:Criteria_No_Second_Peak} holds.
\end{remark}
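In practice one can monitor the criterion \ref{eq:Criteria_No_Second_Peak} along prescribed, time-varying transmission rates. A minimal sketch, where the rate functions are placeholders to be supplied by the user:
\begin{verbatim}
# Sketch: evaluate the no-second-peak criterion on a time grid;
# beta_e and beta_i are user-supplied functions of time.
import numpy as np

def one_peak_criterion(beta_e, beta_i, a, gamma, mu, t_grid):
    vals = [beta_e(t) / a + beta_i(t) / (gamma + mu) for t in t_grid]
    return max(vals)  # the criterion asks for this to be <= 1

# Illustrative constant rates satisfying the criterion:
print(one_peak_criterion(lambda t: 0.4, lambda t: 0.3,
                         1.0, 0.9, 0.1, np.linspace(0.0, 30.0, 301)))
\end{verbatim}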
\section{A model capturing asymptomatic carriers}\label{sec:SEIAR}
In this section we analyze the model previously derived in \ref{subsubsec:Asymptomatic}, which was designed to capture asymptomatic carriers. This model treats the exposed (E) in a slightly different manner than the more refined model of \ref{subsubsec:Combined}. Namely, we consider the exposed as only one state and average out their infectiousness. For convenience, we recall here the system of equations governing this model.
\begin{align}\label{eq:A1}
\dot{S} & = - \beta_i I S - \beta_e E S - \beta_a A S \\ \label{eq:A2}
\dot{E} & = \beta_i I S + \beta_e E S + \beta_a A S -a_a E - a_i E \\ \label{eq:A3}
\dot{A} & = a_a E - \gamma_a A \\ \label{eq:A4}
\dot{I} & = a_i E - \gamma_i I - \mu I \\ \label{eq:A5}
\dot{R} & = \gamma_i I + \gamma_a A .
\end{align}
From equation \ref{eq:A1} we find that $S(t)$ is decreasing, and from adding \ref{eq:A1} and \ref{eq:A2} we find that $S(t)+E(t)$ is decreasing as well.
\subsection{Linear analysis}\label{ss:Linear_Second_Model}
Let us now find the critical points of this system. From equations \ref{eq:A3}, \ref{eq:A4} and \ref{eq:A5} we find that for generic values of $a_a$, $a_i$, $\gamma_i$, $\gamma_a$, $\mu$, all of $E,A,I$ must vanish at a critical point. From the remaining equations we find that $S$ and $R$ can be arbitrary constants as long as $S+R \leq 1$. We shall write such a critical point as $c=(S,E,A,I,R)=(S_c,0,0,0,R_c)$. Notice in particular that any such critical point is disease free.\\
Then, the linearized system at such a critical point is given by
\begin{align*}
\dot{s} & = - \beta_i S_c i - \beta_e S_c e - \beta_a S_c a \\
\dot{e} & = \beta_i S_c i + \beta_e S_c e + \beta_a S_c a - (a_i + a_a )e \\
\dot{a} & = a_a e - \gamma_a a \\
\dot{i} & = a_i e - (\gamma_i +\mu) i \\
\dot{r} & = \gamma_i i + \gamma_a a .
\end{align*}
In this case, we have that the equations for $\dot{x}$ where $x:=(e, a, i)$ do control the whole system. This subsystem is then given by $\dot{x}=Ax$ where
$$A = \begin{pmatrix}
\beta_e S_c - (a_i + a_a) & \beta_a S_c & \beta_i S_c \\
a_a & - \gamma_a & 0 \\
a_i & 0 & - (\gamma_i + \mu)
\end{pmatrix}.$$
While it is possible to compute the eigenvalues of this system in general, the formulas are extremely unwieldy and so it is hard to find a general statement deciding the stability of the system. Nevertheless, a necessary condition for stability is that $\det(A)<0$ and $\tr(A)<0$, which respectively give
\begin{align}\label{eq:Necessary_Conditions}
S_c < \frac{\gamma_a (\gamma_i + \mu) (a_i + a_a)}{\gamma_a (\gamma_i + \mu) \beta_e + \gamma_a a_i \beta_i + (\gamma_i + \mu) a_a \beta_a} , \ \text{and} \
S_c < \frac{a_i + a_a + \gamma_a + \gamma_i + \mu}{\beta_e} .
\end{align}
If $\beta_e < a_i +a_a + \gamma_a + \gamma_i + \mu$, then the first equation above suggests that the quantity
\begin{equation}\label{eq:R0_A}
\frac{1}{a_i+a_a} \left( \beta_e + a_i \frac{\beta_i}{\gamma_i + \mu} + a_a \frac{\beta_a}{\gamma_a} \right),
\end{equation}
may behave as the basic reproduction rate for this model. Hence, in order to have large (close to $1$) values of the asymptotic fraction of susceptibles, we would like to keep \ref{eq:R0_A} below $1$. This is then compatible with having $S_c$ close to $1$, at least from the point of view of the first equation in \ref{eq:Necessary_Conditions}.
\begin{remark}\label{conclusion:Second_Model}
The above discussion suggests that for an outbreak to be under control we must have $\beta_e < a_i +a_a + \gamma_a + \gamma_i + \mu$ and the quantity in equation \ref{eq:R0_A} must be kept below $1$. Furthermore, from formula \ref{eq:R0_A} one can immediately see that it is enough for one of the transmission rates $\beta_e$, $\beta_i$, or $\beta_a$ to be large for the outbreak to be out of control!
\end{remark}
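The quantity \ref{eq:R0_A} is straightforward to compute; a short sketch, evaluated at the illustrative rates of example \ref{ex:Realistic} below:
\begin{verbatim}
# Sketch: the reproduction-like quantity (eq. R0_A) for this model.
def r0_like(beta_i, beta_e, beta_a, a_i, a_a, gamma_i, gamma_a, mu):
    return (beta_e + a_i * beta_i / (gamma_i + mu)
            + a_a * beta_a / gamma_a) / (a_i + a_a)

# Rates of Example ex:Realistic with a_a = 0.1: well above 1.
print(r0_like(0.9, 1.5, 2.8, 0.9, 0.1, 0.8, 1.2, 0.01))
\end{verbatim}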
\subsection{Global stability}
As in subsection \ref{subsec:Global_Stability}, we will now characterize the asymptotic stability properties of the disease free equilibrium points of solutions to \ref{eq:A1}--\ref{eq:A5}. As in that subsection, it is convenient to abstract the notion of asymptotic stability for these equilibria.
\begin{definition}
A solution $(S,E,A,I,R)$ to \ref{eq:A1}--\ref{eq:A5} is called asymptotically disease free if
$$\lim_{t \to + \infty} E(t)+A(t)+I(t) =0 .$$
\end{definition}
\begin{proposition}
Let $(S,E,A,I,R)$ be a solution to \ref{eq:A1}--\ref{eq:A5} satisfying
\begin{equation}\label{eq:assumption01}
\sup_{t \geq 0}\frac{\beta_e}{a_i+a_a} + \sup_{t \geq 0}\frac{a_i}{a_i+a_a} \frac{\beta_i}{\gamma_i + \mu} + \sup_{t \geq 0}\frac{a_a}{a_i+a_a} \frac{\beta_a}{\gamma_a} < 1 .
\end{equation}
Then, $(S,E,A,I,R)$ is asymptotically disease free. Furthermore, all $E$, $A$ and $I$ exponentially converge to $0$.
\end{proposition}
\begin{proof}
Following a strategy similar to that of subsection \ref{subsec:Global_Stability}, we shall consider the function $L_{\epsilon}(t)= E(t) + \epsilon_a A(t)+ \epsilon_i I(t)$ for constants $\epsilon_a , \epsilon_i \in (0,1)$ to be fixed at a later stage. Then, from equations \ref{eq:A2}, \ref{eq:A3} and \ref{eq:A4} it follows that
\begin{align*}
\dot{L_\epsilon} & = (\beta_eES + \beta_a A S + \beta_i I S - (a_a+a_i) E) + \epsilon_a ( a_a E - \gamma_a A ) + \epsilon_i ( a_i E - (\gamma_i+\mu)I) \\
& = (a_i + a_a) E \left( \frac{\beta_e}{a_i+a_a} S - \left(1- \frac{\epsilon_a a_a + \epsilon_i a_i}{a_i+a_a} \right) \right) + \gamma_a A \left( \frac{\beta_a}{\gamma_a} S - \epsilon_a \right) + (\gamma_i+\mu) I \left( \frac{\beta_i}{\gamma_i +\mu} S - \epsilon_i \right) ,
\end{align*}
which we can rewrite as
$$(a_i + a_a) E \left( \frac{\beta_e}{a_i+a_a} S - \left(1- \frac{\epsilon_a a_a + \epsilon_i a_i}{a_i+a_a} \right) \right) + \frac{\gamma_a A}{a_a} \left( \frac{a_a\beta_a}{\gamma_a} S - \epsilon_a a_a \right) + \frac{\gamma_i+\mu}{a_i} I \left( \frac{a_i\beta_i}{\gamma_i +\mu} S - \epsilon_ia_i \right) .
$$
By the assumption \ref{eq:assumption01} there are $\epsilon_a, \epsilon_i \in (0,1)$ such that
$$ \sup_{t \geq 0} \frac{\beta_a}{\gamma_a}S < \epsilon_a , \ \ \sup_{t \geq 0} \frac{\beta_i}{\gamma_i + \mu}S < \epsilon_i $$
and $\frac{\beta_e}{a_i+a_a} <1- \frac{\epsilon_a a_a + \epsilon_i a_i}{a_i+a_a}$. Thus, we find that $L_{\epsilon}$ is decreasing and satisfies
$$\dot{L_\epsilon} \leq - \delta L_{\epsilon},$$
for some positive $\delta >0$. As a consequence of Gr\"onwall's inequality we find that $L_{\epsilon} \leq L_{\epsilon}(0) e^{-\delta t}$ and thus converges exponentially to $0$ as $t \to \infty$.
\end{proof}
\begin{remark}
The previous proof actually shows that the same result holds under the weaker hypothesis that
$$\sup_{t \geq 0}\frac{\beta_e(t) S(t)}{a_i+a_a} + \sup_{t \geq 0}\frac{a_i}{a_i+a_a} \frac{\beta_i(t) S(t)}{\gamma_i + \mu} + \sup_{t \geq 0}\frac{a_a}{a_i+a_a} \frac{\beta_a(t) S(t)}{\gamma_a} < 1 .$$
\end{remark}
\subsection{Controlling an outbreak}\label{ss:Controling_Outbreak_A}
The main problem when faced with an outbreak is to keep it under control. But what does that mean in the context of the model we are considering? One possible definition is that the asymptotic value of susceptibles
$$S_c := \lim_{t \to + \infty}S(t),$$
is large. Alternatively, we may attempt to keep the total number of infected people, i.e.
$$\lim_{t \to + \infty} \int_0^tI(s) ds,$$
as small as possible. The proof of Lemma \ref{lem:E_I_to_zero} shows that these two definitions are equivalent for the model \ref{eq:ODE1}--\ref{eq:ODE4}. However, in the model at hand there is the possibility that a susceptible passes to the recovered state via the asymptomatic route, which makes this model fundamentally different. Indeed, from integrating the first equation \ref{eq:A1} we find that
$$S(t) = S(0) \exp \left( -\int_0^t (\beta_iI+\beta_eE+ \beta_a A) ds \right).$$
This shows that maximizing $S_c$ is equivalent to minimizing
$$\int_0^t (\beta_iI+\beta_eE+ \beta_a A) ds.$$
From the point of view of the second definition, one regards the asymptomatic as innocuous, which seems reasonable from a purely medical point of view as they never become sick. Furthermore, it is conceivable that they can create herd immunity which shields the rest of the population. However, intuitively speaking, having a relatively large number of asymptomatics may also be bad, at least initially while $S$ is large, as they may move undetected and ``infect'' even more people. The numerical conclusion at which we arrive is that, depending on the exact parameters of the model, the asymptomatic can play both roles.
\subsubsection{Examples}
Below we shall run some simulations which suggest the previous intuitive reasoning to be correct in different settings. In order to do this we shall fix the parameters $\beta_i,\beta_e,\beta_a,\gamma_i,\gamma_a,\mu$ and vary $a_i$ and $a_a$. The examples in this section have been constructed in order to provide answers to question \ref{que:3} raised in the Introduction.
\begin{remark}
A few words must be said about the choice of the parameters $\beta_i,\beta_e,\beta_a,a_i,a_a,\gamma_i,\gamma_a,\mu$ ruling the model. It is at present medically impossible to determine the values of all of these, and so it is meaningless to assign them specific values and pretend we are modeling the true outbreak. Nevertheless, I must justify the values I will be assigning in my model. I shall suppose that
$$\beta_i < \beta_e < \beta_a,$$
and recall that these encode the number of infections that the infected, exposed and asymptomatic cause per unit time. This choice may seem strange but there is a reason for it. I assume that the infected, while possibly being extremely infectious, are dealt with carefully and so do not infect as many other people per unit time as the exposed and asymptomatic. This justifies $\beta_i < \beta_a$ and also $\beta_i < \beta_e$. The exposed, even though not having been detected yet, are not as infectious as they will eventually become, thus $\beta_e < \beta_a$. Notice that we do not have $\beta_e<\beta_i$, as the infected individuals will be detected and dealt with in a way that avoids further infections; without this extra care that inequality would hold. From all this we then have
$\beta_i < \beta_e < \beta_a$, as claimed.\\
Next, we shall assume in our simulations that
$$\gamma_i< \gamma_a$$
which means that the average time $\gamma_a^{-1}$ an asymptomatic individual takes to reduce its viral load to zero is smaller than the time $\gamma_i^{-1}$ an infected individual needs. This is highly debatable and, even though seemingly reasonable, I have no way to better justify this choice.
\end{remark}
The simulation in the next example \ref{ex:Realistic} illustrates a case where having a larger fraction of asymptomatic carriers did not make the disease develop faster and actually led to smaller peaks of infection. This need not always be the case, as we shall later see in example \ref{ex:Realistic2}.
\begin{example}\label{ex:Realistic}
In all the simulations we shall run in this example we shall have $\beta_i=0.9$, $\beta_e=1.5$ and $\beta_a=2.8$ while $\gamma_i=0.8$, $\gamma_a=1.2$ and $\mu=0.01$.\\
In Figure \ref{fig:Comparison_Same_Sum} we plot the number of infected people (those which actually get sick) as a function of time, comparing different simulations obtained with the same value of $a_i+a_a=1$ but different values of $a_i$ and $a_a$ individually.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth,height=0.35\textheight]{Comparison_Same_Sum}
\caption{In black, green, blue, red and purple the values of $I(t)$ are plotted for the simulations obtained from $a_a$ being $0.1$, $0.3$, $0.5$, $0.7$ and $0.9$ respectively.}
\label{fig:Comparison_Same_Sum}
\end{figure}
Indeed, from this simulation we can infer that the smaller $a_a$ is, or equivalently the larger $a_i$ is, the higher and later the peak is. Intuitively, we can justify this by the fact that a large $a_a$ gives many asymptomatic carriers, which when recovered create a group immunity shielding the rest of the population.
\end{example}
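The comparison in figure \ref{fig:Comparison_Same_Sum} is easy to reproduce qualitatively. The following sketch sweeps $a_a$ with $a_i+a_a=1$ fixed and records the peak of $I(t)$; the time horizon and step size are choices of mine, not taken from the simulations above.
\begin{verbatim}
# Sketch: sweep a_a with a_i + a_a = 1 and record the peak of I(t);
# rates as in Example ex:Realistic, numerical details illustrative.
from scipy.integrate import solve_ivp

beta_i, beta_e, beta_a = 0.9, 1.5, 2.8
gamma_i, gamma_a, mu = 0.8, 1.2, 0.01

def rhs(t, y, a_i, a_a):
    S, E, A, I, R = y
    ne = (beta_i * I + beta_e * E + beta_a * A) * S
    return [-ne, ne - (a_i + a_a) * E, a_a * E - gamma_a * A,
            a_i * E - (gamma_i + mu) * I, gamma_i * I + gamma_a * A]

for a_a in (0.1, 0.3, 0.5, 0.7, 0.9):
    sol = solve_ivp(rhs, (0.0, 60.0), [0.99, 0.01, 0.0, 0.0, 0.0],
                    args=(1.0 - a_a, a_a), max_step=0.05)
    print(f"a_a = {a_a}: peak of I = {sol.y[3].max():.4f}")
\end{verbatim}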
I believe the values of the transmission rates $\beta_e$, $\beta_i$, $\beta_a$ in the previous example \ref{ex:Realistic} are reasonable and somewhat realistic. Nevertheless, in order to better understand the role that asymptomatic carriers can play, it is actually more convenient to slightly exaggerate the values by having $\beta_a \gg \beta_e$. We shall do so in the next example, which illustrates one other role that asymptomatic carriers can have. Namely, in an early stage, i.e. when $S$ is still large, having more asymptomatic carriers can cause a faster increase of the infected population, anticipating and increasing the peak.
\begin{example}\label{ex:Realistic2}
In this example we use the same constants as in the previous example except for the values of $\beta_i$ and $\beta_a$, which we shall set to $\beta_i=0.5$ and $\beta_a=7$. As explained before, the fact that $\beta_a$ is so big when compared to $\beta_i$ means not only that the disease is extremely contagious but also that those individuals which are showing symptoms are isolated and handled extremely carefully while the asymptomatic ones are not. In figure \ref{fig:Comparison_Same_Sum2} we have plotted in black and green the simulations obtained when $a_a=0.1$ (and $a_i=0.9$) and $a_a=0.2$ (and $a_i=0.8$) respectively.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth,height=0.35\textheight]{Comparison_Same_Sum2}
\caption{In black and green are the plots of $I(t)$ for the simulations obtained from $a_a$ being $0.1$ and $0.2$ respectively.}
\label{fig:Comparison_Same_Sum2}
\end{figure}
In this particular example, we see that a higher probability of an exposed person becoming asymptomatic leads to a higher degree of contagion, which itself leads to a higher and earlier peak of the infected population. It is natural to inquire about the actual population of asymptomatic carriers in both of these simulations; these are plotted in figure \ref{fig:Comparison_A}.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth,height=0.35\textheight]{Comparison_A}
\caption{In black and green are the plots of $A(t)$ for $a_a$ being $0.1$, $0.2$ respectively.}
\label{fig:Comparison_A}
\end{figure}
\end{example}
\subsubsection{Second waves}
In the simpler model given by the system \ref{eq:ODE1}--\ref{eq:ODE4} we saw in Example \ref{ex:Second_Peak_Easy_Model} that a second wave can form. In the same manner as in that case, we expect this will happen when the quantity
$$\frac{1}{a_i+a_a} \left( \beta_e + a_i \frac{\beta_i}{\gamma_i + \mu} + a_a \frac{\beta_a}{\gamma_a} \right) ,$$
derived in subsection \ref{ss:Linear_Second_Model}, rises above $1$. We shall now see an example of a solution to the model \ref{eq:A1}--\ref{eq:A5} for which a second wave forms. In fact, we shall see that the second peak can be larger than the first, and thus this example will serve as an answer to items (b) and (c) of question \ref{que:2} using this more refined model. Also, we shall use this opportunity to further elaborate on the role of asymptomatic carriers, giving further input towards question \ref{que:3} in the Introduction.
\begin{example}\label{ex:Second_Wave_Second_Model}
Let $\beta_i(t)=0.9f(t)$, $\beta_e(t)=1.5f(t)$, $\beta_a(t)=2.8f(t)$, $a_a=0.2$ and $a_i=0.8$, $\gamma_i=0.8$, $\gamma_a=1.2$ and $\mu=0.01$, for some function $f(t)$ which was carefully designed\footnote{We are using $f(t)=\tfrac{1}{10}+S_{37}(2-t)+S_{37}(t-7)$ where $S_{37}(t)$ is the sum of the first 37 terms of the Fourier series of the Heaviside function on $[-20,20]$.} but whose specific form is unimportant for now. To show the importance of the quantity \ref{eq:R0_A}, we plot it in figure \ref{fig:R0}, where we can see two large plateaus, well above $1$, connected by a region where it is substantially smaller. The simulation run for these values is presented in figure \ref{fig:Second_Peak_2} and is made with the initial conditions $S(0)=0.99$, $E(0)=0.01$, $I(0)=0=R(0)$.\\
One sees that the first peak is too small for a sufficient fraction of the population to acquire immunity, and the large increase of the transmission rates, here encoded in the quantity \ref{eq:R0_A}, leads to a second peak of the outbreak.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth,height=0.35\textheight]{R0}
\caption{Plot of $\frac{1}{a_i+a_a} \left( \beta_e(t) + a_i \frac{\beta_i(t)}{\gamma_i + \mu} + a_a \frac{\beta_a(t)}{\gamma_a} \right)$.}
\label{fig:R0}
\end{figure}
Indeed, this second wave of the outbreak creates an even larger peak than the first.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth,height=0.35\textheight]{Second_Peak_2}
\caption{Plot of $I(t)$ and $A(t)$ in green and black. These represent the infected and the asymptomatic carriers respectively.}
\label{fig:Second_Peak_2}
\end{figure}
Of course, the actual values of the transmission rates we used in this example may not be well adapted to realistic modeling.\footnote{Even though I have tried to make them as reasonable as I could.} Nevertheless, this serves as an example to stress the possibility of having a second outbreak that may be worse than the first.\\
As a way of comparison, and to further investigate the role of asymptomatic carriers, we present in figure \ref{fig:Second_Peak_2B} a simulation of the same model except that we now have a much higher fraction of asymptomatic carriers, in this case $a_a=0.8$ and $a_i=0.2$. This means that each individual has an $80$ percent probability of becoming asymptomatic when exposed to the disease.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth,height=0.35\textheight]{Second_Peak_2B}
\caption{Plot of $I(t)$ and $A(t)$ in green and black respectively.}
\label{fig:Second_Peak_2B}
\end{figure}
In this case, we see that the large quantity of asymptomatic carriers keeps the peaks of infection much smaller. In fact, we also find that in this case the second peak is smaller. This can be interpreted in terms of the herd immunity acquired by the asymptomatic carriers, which shields the rest of the population while keeping the number of infections smaller.
\end{example}
\begin{remark}
Example \ref{ex:Second_Wave_Second_Model} raises the question of whether it is possible to find conditions under which an outbreak will develop a unique peak. This is precisely the content of item (a) in question \ref{que:2}. As we shall see in Corollary \ref{cor:One_Peak} of the next section, such a criterion can be found and cast in terms of the quantity \ref{eq:R0_A}.
\end{remark}
\subsection{Precise qualitative results}
As in Lemma \ref{lem:E_I_to_zero} we find that in order for
$$S_c := \lim_{t \to + \infty}S(t),$$
to be positive we must have all $A(t)$, $I(t)$ and $E(t)$ converging to zero as $t \to + \infty$. The other natural question to investigate is that of finding criteria under which there will be only one peak of the outbreak, very much as done in Proposition \ref{lem:Only_One_Maximum} for the simpler model \ref{eq:ODE1}--\ref{eq:ODE4}. Here, we shall find an analogue which holds under more restrictive hypotheses.
\begin{proposition}\label{prop:One_Peak}
Let $(S,E,A,I,R)$ be a solution to \ref{eq:A1}--\ref{eq:A5} with $a_i,a_a, \gamma_i, \mu$ constant and $S_c>0$. Then, there is a constant $c>0$ so that $A \leq c \frac{\gamma_i+\mu}{\gamma_a} \frac{a_a}{a_i} I$ for all $t \geq 0$. Furthermore, if
$$\sup_{t \geq 0} \frac{1}{a_i +a_a}\left( \frac{a_i}{\gamma_i + \mu} \beta_i + \beta_e + c \frac{a_a}{\gamma_a} \beta_a \right) \leq 1,$$
the function $I(t)$ has at most one critical point and this is a maximum if it exists.
\end{proposition}
\begin{proof}
First notice that as $S_c>0$, both $A$ and $I$ converge to zero, and so there is a constant $c$ as in the statement.
Thus, in order to prove the result we shall follow the strategy of Proposition \ref{lem:Only_One_Maximum}. This consists in finding conditions which guarantee that $\ddot{I}<0$ at any critical point of $I$. If that is the case, then $I$ can have no minimum and thus will have at most one maximum, as in between any two maxima there must be a minimum. Thus, we start by computing
\begin{align*}
\ddot{I} & = a_i \dot{E} - (\gamma_i + \mu) \dot{I} \\
& = a_i \left( \left( \beta_i I+\beta_e E+ \beta_aA \right)S - (a_i +a_a) E \right) - (\gamma_i + \mu) \dot{I} ,
\end{align*}
which at a critical point of $I$ further satisfies $\dot{I}=0$, i.e. $a_iE=(\gamma_i + \mu) I$, and so we can rewrite
\begin{align*}
\ddot{I} & = a_i \left( \left( \beta_i I+\frac{\beta_e}{a_i} (\gamma_i+\mu) I + \beta_aA \right)S - \frac{a_i +a_a}{a_i} (\gamma_i + \mu) I \right).
\end{align*}
Thus, we want to find conditions under which the following inequality holds
\begin{equation}\label{eq:Inequality_Intermediate}
\left( \beta_i +\frac{\beta_e}{a_i} (\gamma_i+\mu) + \beta_a \frac{A}{I} \right)S - \frac{a_i +a_a}{a_i} (\gamma_i + \mu) <0 .
\end{equation}
If we further have $A \leq c \frac{\gamma_i+\mu}{\gamma_a} \frac{a_a}{a_i} I$, then for inequality \ref{eq:Inequality_Intermediate} to hold, it is enough that
$$
\left( \beta_i +\frac{\beta_e}{a_i} (\gamma_i+\mu) + c \frac{\gamma_i+\mu}{\gamma_a} \frac{a_a}{a_i} \beta_a \right)S - \frac{a_i +a_a}{a_i} (\gamma_i + \mu) <0 ,
$$
which we can rewrite as
$$
\frac{1}{a_i +a_a}\left( \frac{a_i}{\gamma_i + \mu} \beta_i + \beta_e + c \frac{a_a}{\gamma_a} \beta_a \right)S < 1.
$$
As $S \leq 1$ we find that this condition is immediate from that in the statement.
\end{proof}
\begin{remark}\label{rem:Only_One_Peak_A}
From the proof of Proposition \ref{prop:One_Peak} we find that, in fact, for $I(t)$ to have at most one maximum it is enough to show that
$$ \sup_{t \geq 0} \frac{1}{a_i +a_a}\left( a_i\frac{\beta_i}{\gamma_i + \mu} + \beta_e + a_a \frac{\beta_a}{\gamma_i + \mu} \left( \frac{a_iA}{a_aI} \right) \right) < 1 ,$$
or even
$$\sup_{t \geq 0}\frac{1}{a_i +a_a}\left( a_i\frac{\beta_i}{\gamma_i + \mu} + \beta_e + a_a \frac{\beta_a}{\gamma_i + \mu} \left( \frac{a_iA}{a_aI} \right) \right) S < 1.$$
Of course, one would like to keep $S(t)$ as close to one as possible, and so the first of these seems to be the more reasonable test. Furthermore, it does not require knowing the value of $S(t)$, but rather that of the ratio $A(t)/I(t)$, which may be obtained by sampling with antibody tests, for example.
\end{remark}
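A sketch of the test suggested in this remark, taking a sampled ratio $A/I$ as input; all numerical values below are placeholders.
\begin{verbatim}
# Sketch: the test of the remark above, with the ratio A/I estimated
# externally (e.g. from antibody sampling); values are placeholders.
def remark_test(beta_i, beta_e, beta_a, a_i, a_a, gamma_i, mu,
                ratio_AI):
    third = a_a * beta_a / (gamma_i + mu) * (a_i * ratio_AI / a_a)
    return (a_i * beta_i / (gamma_i + mu) + beta_e + third) \
           / (a_i + a_a)

print(remark_test(0.2, 0.3, 0.4, 0.8, 0.2, 0.8, 0.01, 0.5) < 1)
\end{verbatim}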
Of course, the previous proposition would be much more useful if, for any given solution $(S,E,A,I,R)$, one could determine the value of $c$.
\begin{lemma}\label{lem:A_I}
Let $(S,E,A,I,R)$ be a solution to \ref{eq:A1}--\ref{eq:A5}. Then, for all $t \geq 0$ we have
$$\frac{e^{-\gamma_a t}}{a_a}\frac{d}{dt} \left( e^{\gamma_a t} A \right) = \frac{e^{-(\gamma_i+\mu)t}}{a_i}\frac{d}{dt} \left( e^{(\gamma_i + \mu) t} I \right) .$$
Furthermore, let $A(0)$, $I(0)$ respectively denote the initial values of asymptomatic and infected individuals. If $a_iA(0)\leq a_aI(0)$ and $\gamma_a \geq \gamma_i + \mu$, then
$$A(t) \leq \frac{a_a}{a_i} I(t).$$
\end{lemma}
\begin{proof}
Equations \ref{eq:A3}, \ref{eq:A4} can be equivalently rewritten as
\begin{align*}
\frac{d}{dt} \left( e^{\gamma_a t} A \right) & = a_a e^{\gamma_a t} E \\
\frac{d}{dt} \left( e^{(\gamma_i + \mu) t} I \right) & = a_i e^{(\gamma_i+\mu) t} E ,
\end{align*}
from which we infer that
\begin{align*}
\frac{e^{-\gamma_a t}}{a_a}\frac{d}{dt} \left( e^{\gamma_a t} A \right) & = \frac{e^{-(\gamma_i+\mu)t}}{a_i}\frac{d}{dt} \left( e^{(\gamma_i + \mu) t} I \right) ,
\end{align*}
and so, multiplying both sides by $e^{\gamma_a t}$,
\begin{align*}
\frac{1}{a_a}\frac{d}{dt} \left( e^{\gamma_a t} A \right) & = \frac{e^{(\gamma_a-(\gamma_i+\mu))t}}{a_i}\frac{d}{dt} \left( e^{(\gamma_i + \mu) t} I \right) \\
& = \frac{1}{a_i}\frac{d}{dt} \left( e^{\gamma_a t} I \right) - \frac{\gamma_a-(\gamma_i+\mu)}{a_i} e^{\gamma_a t} I \\
& \leq \frac{1}{a_i}\frac{d}{dt} \left( e^{\gamma_a t} I \right) ,
\end{align*}
where the last inequality uses the hypothesis $\gamma_a \geq \gamma_i + \mu$ together with $I \geq 0$.
Integrating this yields
\begin{align*}
\frac{e^{\gamma_a t} A(t)}{a_a} - \frac{A(0)}{a_a} \leq \frac{e^{\gamma_a t} I(t)}{a_i} - \frac{I(0)}{a_i},
\end{align*}
which, upon rearranging and using the hypothesis $a_iA(0)\leq a_aI(0)$, i.e. $A(0)/a_a \leq I(0)/a_i$, gives the result in the statement.
\end{proof}
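For the reader wishing to check Lemma \ref{lem:A_I} numerically, the following Python sketch (ours, using \texttt{scipy}; the parameter values are illustrative and the equation for $R$ is our reconstruction from the text) integrates the system and verifies that $a_iA(t) \leq a_aI(t)$ along the trajectory:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

beta_i, beta_e, beta_a = 0.9, 1.5, 2.8   # transmission rates (constant here)
a_i, a_a = 0.8, 0.2                      # incubation rates
gamma_i, gamma_a, mu = 0.8, 1.2, 0.01    # note gamma_a >= gamma_i + mu

def seiar(t, y):
    S, E, A, I, R = y
    force = (beta_i * I + beta_e * E + beta_a * A) * S
    return [-force,
            force - (a_i + a_a) * E,
            a_a * E - gamma_a * A,
            a_i * E - (gamma_i + mu) * I,
            gamma_a * A + gamma_i * I]   # R-equation reconstructed by us

sol = solve_ivp(seiar, (0, 60), [0.99, 0.01, 0.0, 0.0, 0.0],
                t_eval=np.linspace(0, 60, 2000))
S, E, A, I, R = sol.y
# The lemma predicts a_i*A - a_a*I <= 0 for all t (here a_i*A(0) = a_a*I(0) = 0).
print("max of a_i*A - a_a*I:", np.max(a_i * A - a_a * I))
\end{verbatim}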
Putting Lemma \ref{lem:A_I} together with Proposition \ref{prop:One_Peak} we obtain the following corollary, which serves as a rigorous answer to the inquiry raised in item (a) of question \ref{que:2}.
\begin{corollary}\label{cor:One_Peak}
Let $(S,E,A,I,R)$ be a solution to \ref{eq:A1}--\ref{eq:A5} with $a_i,a_a, \gamma_i, \mu$ constant and $S_c>0$. Suppose that $a_iA(0)\leq a_aI(0)$, $\gamma_a\geq \gamma_i + \mu$ and
\begin{equation}\label{eq:Cor}
\sup_{t \geq 0} \frac{1}{a_i +a_a}\left( \beta_e + \frac{a_i\beta_i + a_a \beta_a}{\gamma_i + \mu} \right) < 1.
\end{equation}
Then, the function $I(t)$ has at most one critical point and this is a maximum if it exists.
\end{corollary}
\begin{remark}\label{rem:After_Cor}
In the previous Corollary \ref{cor:One_Peak}, the hypothesis that $\gamma_a \geq \gamma_i + \mu$ holds immediately if the average time for an asymptomatic carrier to stop being contagious is smaller than that required by the average infected individual. Still under this hypothesis, if the condition in equation \ref{eq:Cor} holds, then we also have
$$\sup_{t \geq 0} \frac{1}{a_i +a_a}\left( \beta_e + \frac{a_i\beta_i}{\gamma_i + \mu} + \frac{a_a \beta_a}{\gamma_a} \right) < 1,$$
which should be compared with Conclusion \ref{conclusion:Second_Model}.
\end{remark}
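When the rates are constant, the condition \ref{eq:Cor} reduces to a single number. A minimal sketch (ours; the function name and numerical values are merely illustrative) reads:
\begin{verbatim}
def corollary_criterion(beta_i, beta_e, beta_a, a_i, a_a, gamma_i, mu):
    # Left-hand side of eq:Cor; a value below 1 certifies at most one peak,
    # while a value >= 1 leaves the test inconclusive.
    return (beta_e + (a_i * beta_i + a_a * beta_a) / (gamma_i + mu)) / (a_i + a_a)

print(corollary_criterion(0.9, 1.5, 2.8, 0.8, 0.2, 0.8, 0.01))
\end{verbatim}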
\section{Grouping the population}\label{sec:Groups}
Recall that the idea to be implemented and tested is that of splitting the population into $n$ groups which, to a first approximation, do not interact. This obviously supposes that each household contains only people of the same group, as otherwise it would be impossible to guarantee that the groups do not mix.\\
We shall describe how this works using the simplest possible model, namely the modified SEIR model described by equations \ref{eq:ODE1}--\ref{eq:ODE4}. For the SEIAR model the situation is similar and the argument follows exactly the same lines.\\
We start by splitting the fraction of susceptible population as $S=\sum_{k=1}^nS_k$ with similar splits for $E$, $I$ and $R$. Suppose that people from each group only interact with those of their own group, i.e. that for all $i , j \in \lbrace 1, \ldots , n \rbrace$ with $i \neq j$, no person from group $i$ meets a person from group $j$. Then, for each $k \in \lbrace 1 , \ldots , n \rbrace$, we have that $(S_k,E_k,I_k,R_k)$ solves
\begin{align*}
\dot{S}_k & = - \beta_i I_k S_k - \beta_e E_k S_k \\
\dot{E}_k & = \beta_i I_k S_k + \beta_e E_k S_k - a E_k \\
\dot{I}_k & = a E_k - \gamma I_k - \mu I_k \\
\dot{R}_k & = \gamma I_k .
\end{align*}
If we suppose that all groups follow the same dynamics with the same initial data, then $S_1 = \ldots = S_n$ and so $S_k=S/n$ for each $k=1, \ldots , n$, and similarly $E_k=E/n$, $I_k=I/n$, $R_k=R/n$. Substituting, the system above turns into
\begin{align*}
\dot{S} & = - \frac{\beta_i}{n} I S - \frac{\beta_e}{n} E S \\
\dot{E} & = \frac{\beta_i}{n} I S + \frac{\beta_e}{n} E S - a E \\
\dot{I} & = a E - \gamma I - \mu I \\
\dot{R} & = \gamma I .
\end{align*}
This is the same system we started with, but with the transmission rates $\beta$ replaced by $\beta/n$. In other words, splitting the population into $n$ non-interacting groups divides the rates of transmission by $n$.
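This equivalence is also easy to confirm numerically. The following Python sketch (ours; the parameter values are those of example \ref{ex:First}) integrates a single group of size $1/n$ and compares $n$ times its solution with the aggregated system in which the rates $\beta$ are replaced by $\beta/n$; the discrepancy should be of the order of the solver tolerance:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

beta_i, beta_e, a, gamma, mu = 0.9, 2.5, 1.0, 0.9, 0.01
n = 3

def seir(t, y, bi, be):
    S, E, I, R = y
    return [-bi * I * S - be * E * S,
            bi * I * S + be * E * S - a * E,
            a * E - (gamma + mu) * I,
            gamma * I]

t_eval = np.linspace(0, 40, 1000)
# One group holding a fraction 1/n of the population...
grp = solve_ivp(seir, (0, 40), [0.99 / n, 0.01 / n, 0.0, 0.0],
                args=(beta_i, beta_e), t_eval=t_eval)
# ...versus the whole population with transmission rates beta/n.
agg = solve_ivp(seir, (0, 40), [0.99, 0.01, 0.0, 0.0],
                args=(beta_i / n, beta_e / n), t_eval=t_eval)
print("max discrepancy:", np.max(np.abs(n * grp.y - agg.y)))
\end{verbatim}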
\begin{remark}
The exact same reasoning leads to a similar conclusion using the model \ref{eq:A1}--\ref{eq:A5}.
\end{remark}
Building on our Conclusion \ref{conclusion:First_Model} we are led to the following:
\begin{conclusion}
It is possible to reduce the rates of transmission by a half, a third, a fourth, and so on. By reducing them by a large enough factor, one can keep a large fraction of the population $S_c<S(0) <1$ still susceptible (without ever having contracted the disease). One way to achieve this is to divide the population into $n$ groups which one supposes do not physically interact with one another, and to take $n$ large enough so that
$$\frac{1}{n} \frac{\beta_e}{\gamma + \mu} < 1$$
and
$$\frac{1}{n} \left( \frac{\beta_i}{\gamma+\mu} + \frac{\beta_e}{a} \right) < 1.$$
In fact, we should actually require that $\frac{\beta_e}{n} \ll \gamma + \mu$ and $\frac{1}{n} \left( \frac{\beta_i}{\gamma+\mu} + \frac{\beta_e}{a} \right) \ll 1$, so that the final fraction of the population which is still susceptible is as high as possible.\\
Using instead the model \ref{eq:A1}--\ref{eq:A5} and having in mind Corollary \ref{cor:One_Peak} and Remark \ref{rem:After_Cor}, if $\gamma_a \geq \gamma_i + \mu$ and $a_iA(0)\leq a_aI(0)$, then requiring that
$$\sup_{t \geq 0}\frac{1}{a_i+a_a} \left( \beta_e + a_i \frac{\beta_i}{\gamma_i + \mu} + a_a \frac{\beta_a}{\gamma_a} \right) <1$$
guarantees that only one peak will form. Alternatively, if it is possible to estimate the ratio $A/I$,\footnote{by using antibody testing, for instance.} then one may instead control the quantities put forward in Remark \ref{rem:Only_One_Peak_A}.
\end{conclusion}
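To make the prescription above concrete, the following sketch (ours) computes the smallest $n$ meeting both requirements for the modified SEIR model; the margin of $1/5$ used to quantify `$\ll$' is an arbitrary illustrative choice:
\begin{verbatim}
import math

def minimal_n(beta_i, beta_e, a, gamma, mu, margin=0.2):
    # Smallest n with beta_e/n <= margin*(gamma+mu) and
    # (beta_i/(gamma+mu) + beta_e/a)/n <= margin.
    n1 = beta_e / ((gamma + mu) * margin)
    n2 = (beta_i / (gamma + mu) + beta_e / a) / margin
    return math.ceil(max(n1, n2))

print(minimal_n(beta_i=0.9, beta_e=2.5, a=1.0, gamma=0.9, mu=0.01))
\end{verbatim}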
\begin{example}\label{ex:Second}
Consider the same numerical values as those previously considered in example \ref{ex:First} and figure \ref{fig:First}. Recall that these are $\beta_i=0.9$, $\beta_e=2.5$, $a=1$, $\gamma=0.9$, $\mu=0.01$ and initial conditions $S(0)=0.99$, $I(0)=0$, $E(0)=0.01$, $R(0)=0$. Also, the curves colored red, blue, green and purple respectively denote the susceptible, exposed, infected and recovered. In figure \ref{fig:First2} the case $n=2$ is plotted. This is a substantial improvement over the case of example \ref{ex:First}. Indeed, we see that about $30$ per cent of the population got away without ever being infected.
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth,height=0.25\textheight]{Third}
\caption{Example with $n=3$ and the color code of figures \ref{fig:First} and \ref{fig:First2}.}
\label{fig:Third}
\end{figure}
Even better is the case $n=3$, which is plotted in figure \ref{fig:Third} as a further example. This represents an extremely good scenario, where the number of infected is kept very low and almost vanishes by time $t=30$.
\end{example}
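For readers wishing to reproduce these figures, the following Python sketch (ours; \texttt{scipy} and the function names are our choices, the parameter values are those of the example) integrates the modified SEIR system with rates $\beta/n$ for $n=1,2,3$ and reports the final susceptible fraction and the height of the peak of $I$:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

beta_i, beta_e, a, gamma, mu = 0.9, 2.5, 1.0, 0.9, 0.01

def seir(t, y, n):
    S, E, I, R = y
    force = (beta_i * I + beta_e * E) * S / n
    return [-force, force - a * E, a * E - (gamma + mu) * I, gamma * I]

for n in (1, 2, 3):
    sol = solve_ivp(seir, (0, 60), [0.99, 0.01, 0.0, 0.0], args=(n,),
                    t_eval=np.linspace(0, 60, 2000))
    print(f"n={n}: final susceptible fraction {sol.y[0, -1]:.2f}, "
          f"peak of I {sol.y[2].max():.3f}")
\end{verbatim}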
We shall now give an example using the more refined model from the system \ref{eq:A1}--\ref{eq:A5}.
\begin{example}
Let $\beta_i=0.9$, $\beta_e=1.5$, $\beta_a=2.8$, while $a_a=0.2$, $a_i=0.8$ and $\gamma_i=0.8$, $\gamma_a=1.2$, $\mu=0.01$. Set the initial conditions to be $S(0)=0.99$, $E(0)=0.01$, $A(0)=0$, $I(0)=0$, $R(0)=0$. Using these values, we run in figure \ref{fig:N2} two simulations: one for the case $n=1$, meaning the population was not split at all, and one for $n=2$, i.e. the population split into two groups.
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth,height=0.25\textheight]{N2}
\caption{Comparing $I(t)$, the fraction of infected in the whole population, for $n=1$ in blue and $n=2$ in cyan.}
\label{fig:N2}
\end{figure}
This suggests that, indeed, splitting the population into two groups leads to a much smaller peak. Nevertheless, it does take more time to eliminate the infection.
\end{example}
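A sketch reproducing this comparison, under the same caveats as before (in particular, the $R$-equation is our reconstruction from the text), could read:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

beta_i, beta_e, beta_a = 0.9, 1.5, 2.8
a_i, a_a, gamma_i, gamma_a, mu = 0.8, 0.2, 0.8, 1.2, 0.01

def seiar(t, y, n):
    S, E, A, I, R = y
    force = (beta_i * I + beta_e * E + beta_a * A) * S / n
    return [-force, force - (a_i + a_a) * E,
            a_a * E - gamma_a * A,
            a_i * E - (gamma_i + mu) * I,
            gamma_a * A + gamma_i * I]

for n in (1, 2):
    sol = solve_ivp(seiar, (0, 80), [0.99, 0.01, 0.0, 0.0, 0.0], args=(n,),
                    t_eval=np.linspace(0, 80, 3000))
    I = sol.y[3]
    print(f"n={n}: peak of I {I.max():.3f} at t ~ {sol.t[I.argmax()]:.1f}")
\end{verbatim}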